
Faculty Mentor Name

Susan Rauch

Format Preference

Poster Presentation with Video

Abstract

As of 2018, the U.S. began developing “a shared understanding of the risk and benefits of this technology before deciding on a specific policy response. We remain convinced that it is premature to embark on negotiating any particular legal or political instrument in 2019.” The DOD stated that “Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use and outcomes of DOD AI systems.” Based on our investigation, we predict that accountability for predetermined AI will fall on the person who oversees the process, while accountability for deterministic AI, when no one oversees it, will fall on whoever the AI is modeled after. There is currently no U.S. policy on the application of AI: the U.S. has begun to place greater emphasis on the development of artificial intelligence but has yet to create policies governing its application (Chambliss, 2019). Within the Armed Forces, AI has the means to kill people based on the information it receives from human or automated input; if that input is incorrect or incomplete, no one can be held liable, which is unethical and a cause for concern and for lawsuits against the U.S. (Gregg, 2019). Given this lack of U.S. policy on artificial intelligence, our research group examined the policies of U.S. allies to inform the development of a policy that would be beneficial for the U.S. and ethical for its citizens (Greguric, 2016).

  • Original format: POSTER PRESENTATION; AUDIO added when the event went online only.


Ethical Responsibility of Artificial Intelligence in Building Entry Security: The Productivity Lost and Responsible Parties