Proponents of autonomous weapons highlight these machines' precision in hitting their targets, avoiding unintended casualties and limiting impacts on civilians. But precise tools wielded in imprecise ways can be very harmful. When states consider deploying modern autonomous systems powered by artificial intelligence (AI), they must weigh the legal and ethical concerns alongside the technical specifications of the tool.
In this video, Branka Marijan, senior researcher at Project Ploughshares, discusses the legal concerns surrounding AI-powered autonomous weapons and the role that human judgment plays in determining whether a target is legitimate and whether an attack falls within international rules of engagement.
“At the moment, no one can be held accountable for actions carried out by an autonomous system,” explains Marijan. The international community must set out clear rules before these autonomous systems become so ubiquitous that their use runs rampant, causing unnecessary harm and civilian casualties for lack of guardrails.
Read the essay here: https://www.cigionline.org/articles/a...