Today’s topic was extremely challenging to write about because I had to conduct some basic research into subject matter best left to anyone who isn’t me. It’s important for legal and policy experts to understand how things actually work so they can draft systems that support the intended objective. Okay, overachiever; here we are.
😂😂😂😂😂😂 Sorry — thinking about 👇👇 him engaging in “hostilities” and being “belligerent” makes me 😂😂😂😂😂….. (Love this movie.) Okay, back to the topic.
To understand today’s meme, I had the privilege (ha ha ha) of trying to understand, at a kindergarten level, the marriage between cyberterrorism and artificial intelligence — and also whether our current statutes, as written, are sufficient for assigning criminal liability.
Yes. I tried it…
Cyberterrorism is the use of the Internet to conduct violent acts that result in, or threaten, loss of life or significant bodily harm in order to achieve political gains through intimidation. Artificial intelligence is intelligence demonstrated by machines, in contrast to the natural intelligence (NI) displayed by humans. So, in short, think of it as the Terminator as terrorist. At least that is how I understood the article. Most of these malicious actors would be programs, not necessarily robots, but for my blog and visualization, the Terminator was both a program (AI) and a robot, and the program just committed a terrorist attack on U.S. soil. Whom do you prosecute, since it is not a person under criminal statutes? Do you use the domestic terrorism statutes, or do you use the Military Commissions Act of 2009?
Herein lies the problem. If the Terminator was owned or operated by a:
1) Rogue state (document from U.S. State Department)
2) Known criminals
The response would probably be proportionate to the crime. However, what if the Terminator was owned by:
1) An American corporation; or
2) A foreign corporation in an allied nation?
MILITARY COMMISSIONS ACT OF 2009
Corporations are most likely to use artificial intelligence in their day-to-day functions or in selling their products and services. As such, they are most likely to be susceptible to outside bad actors and to misconduct from the inside. In the case of an international corporation, the current statutes, as written, might not be enough to impose personal criminal liability on a corporate executive, because the statute states that:
The term ‘unprivileged enemy belligerent’ means an individual who….
On the other hand, 18 U.S.C. § 2331(3) defines the term “person” as any individual or entity capable of holding a legal or beneficial interest in property. Then 18 U.S.C. § 2331(5) defines “domestic terrorism” as “activities” that ……..
One could argue that there is a discrepancy in how a terrorist act involving AI would be handled depending on whether the AI is owned by a domestic or an international corporation.
There are a lot of things that would need to be worked out — technical questions, ethical questions, and even international agreements on how AI should be used. Last but not least, Congress should be proactive, unlike with 9/11, in putting together a task force for AI statutes and a legal framework. With AI being “property” and not an individual, the task force may have to model criminal statutes after securities criminal laws, where gross negligence and willful violation may give rise to liability. The task force may also want to consider imposing personal liability on corporate executives to serve as a deterrent against engaging in misconduct.
1) Opposing views are welcome. However, negativity, trolling, name-calling, and/or disruptive remarks will not be published.
2) I have a full-time job and am involved in other activities as well. To that end, I will do my very best to provide citations and research, but it may not be thorough (it is not easy to do legal research on open internet sources) or even cited per Bluebook or academic format. It takes a lot of time just to conduct preliminary research on a topic. Please feel free to correct any errors in the comment section and I will edit the post.