Selmer Bringsjord, Rensselaer Polytechnic Institute – Ethical A.I.

Should artificial intelligence be used in weapons?

Selmer Bringsjord, professor of cognitive science at Rensselaer Polytechnic Institute, explores this question.

Selmer Bringsjord is Professor of Cognitive Science, Professor of Computer Science, Professor of Logic & Philosophy, Professor of Management & Technology, and Director of the Rensselaer AI and Reasoning Laboratory.  He specializes in the logico-mathematical and philosophical foundations of artificial intelligence (AI) and cognitive science, and in collaboratively building AI systems on the basis of computational logic. Though he spends considerable “engineering” time in pursuit of ever-smarter computing machines, he says that “armchair” reasoning time has enabled him to deduce that the human mind will forever be superior to such machines.

“Soon enough, much of what many humans do for a living will be better done by indefatigable machines who require not a cent in pay,” Bringsjord said. “I figure the ultimate growth industry will be building smarter and smarter such machines on the one hand, and philosophizing about whether they are truly conscious and free on the other.  Job security is nice.  I’ve worked in this two-fold industry for a long time, and plan to continue as long as my health holds out.”

Bringsjord is the author of papers and essays ranging in approach from the mathematical to the informal, covering such areas as AI, logic, gaming, philosophy of mind, philosophy of religion, robotics, and ethics. He has of late begun to move into the area of computational economics, for which he has invented a new paradigm based on formal logic.

He is the author of What Robots Can & Can’t Be, concerned with the future of attempts to create robots that behave as humans, and of Superminds: People Harness Hypercomputation, and More. Before the second of these books he wrote, with IBM’s David Ferrucci, Artificial Intelligence and Literary Creativity: Inside the Mind of Brutus, A Storytelling Machine.

Bringsjord currently holds appointments in the Department of Cognitive Science, the Department of Computer Science, and the Lally School of Management & Technology, and teaches AI, formal logic, human and machine reasoning, philosophy of AI, other topics relating to formal logic, and the intellectual history of New York City and the Hudson Valley. Funding for his research and development has come from the Luce Foundation, the National Science Foundation, the Templeton Foundation, AT&T, IBM, Apple, AFRL, ARDA/DTO/IARPA, ONR, DARPA, AFOSR, and other sponsors. Bringsjord has consulted for and advised many companies in the general realm of intelligent systems, and continues to do so.

Bringsjord has received many honors, including, recently, the 2011 Annual Rensselaer Trustees Celebration of Faculty Achievement honor for research excellence; the 2008 Undergraduate Research Program Mentor Award; the 2007 Best Paper Award for “Provability Based Semantic Interoperability”; and the 2005 Best Paper Award from GameOn2005.

Ethical A.I.


Engineers succeed by making pessimistic assumptions.  Today AI, artificial intelligence, where my research lies, follows suit.  A new Acura automobile has AI designed under the pessimistic assumption that its human driver will sooner or later fail to brake for some obstacle; hence the car is engineered as an artificial agent that can stop itself.  With collaborators, I design and engineer ethically correct artificial agents under the assumption that human agents will behave badly.  When they do, our moral machines can intervene and save the day.
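To make that intervention pattern concrete, here is a minimal sketch of a “guardian” agent that monitors a human operator and acts only when the human fails to act safely. Everything in it, the names, the fixed deceleration, the decision rule, is a hypothetical illustration of the braking example above, not the actual software in any Acura or in Bringsjord’s lab.

```python
# Illustrative sketch only: a guardian agent that overrides a human
# operator when the human fails to act safely. All names and the
# decision rule here are hypothetical.

from dataclasses import dataclass


@dataclass
class WorldState:
    obstacle_distance_m: float   # distance to nearest obstacle (meters)
    speed_mps: float             # current speed (meters/second)
    human_is_braking: bool       # what the human operator is doing


def stopping_distance_m(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Rough physics: distance needed to stop at a fixed deceleration."""
    return speed_mps ** 2 / (2.0 * decel_mps2)


def guardian_decision(state: WorldState) -> str:
    """Pessimistic assumption: the human may fail to brake. If the
    obstacle sits inside stopping distance and the human is not
    braking, the artificial agent intervenes."""
    if (state.obstacle_distance_m <= stopping_distance_m(state.speed_mps)
            and not state.human_is_braking):
        return "AGENT_BRAKES"     # the machine intervenes
    return "DEFER_TO_HUMAN"       # the human remains in control


if __name__ == "__main__":
    # Human fails to brake 20 m from an obstacle at 25 m/s: agent acts.
    print(guardian_decision(WorldState(20.0, 25.0, False)))   # AGENT_BRAKES
    print(guardian_decision(WorldState(200.0, 25.0, False)))  # DEFER_TO_HUMAN
```

The design choice worth noticing is that the agent’s default is deference: it seizes control only when a pessimistic check fails, which is the same posture the essay attributes to moral machines generally.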

Specifically, I assume that, no matter what, sooner or later some humans will once again unjustly harm and even kill, and that these malevolent humans will sometimes be would-be mass shooters, and sometimes law-enforcement bad apples.  Given this, I strive to replace human-controlled weapons and devices with AI-controlled ones: smart and virtuous guns, and intelligent restraining devices that operate in accord with ethics and the law.  Recent simulations in my lab of our ethical-AI technology show that it can, in only 2.3 seconds, both perceive a human gunman’s evil plan and either lock out or permit the gun in question.
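The decision the simulation makes can be pictured as a permissibility check: the weapon fires only if every ethical and legal condition holds. The sketch below is a deliberately simplified, hypothetical stand-in, a handful of boolean predicates in place of the formal computational logic Bringsjord’s lab actually works in; none of the names or rules come from his published system.

```python
# Illustrative sketch only: a rule-based "ethical gate" for a
# hypothetical smart weapon. The predicates and the conjunction below
# are assumptions for illustration, not Bringsjord's formalism.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    PERMIT = "permit"   # weapon may fire
    LOCK = "lock"       # weapon is disabled


@dataclass
class Situation:
    holder_is_authorized: bool   # e.g., a sworn officer
    target_is_threat: bool       # perceived imminent threat to life
    use_is_proportionate: bool   # force proportional to the threat
    lawful_context: bool         # consistent with governing law


def ethical_gate(s: Situation) -> Verdict:
    """Permit only if every condition holds; otherwise lock. The
    conjunction plays the role of 'permissible(fire)'."""
    permissible = (s.holder_is_authorized
                   and s.target_is_threat
                   and s.use_is_proportionate
                   and s.lawful_context)
    return Verdict.PERMIT if permissible else Verdict.LOCK


if __name__ == "__main__":
    # A would-be mass shooter: no authorization, no lawful context.
    print(ethical_gate(Situation(False, False, False, False)))  # Verdict.LOCK
```

A real system of the kind the essay describes would have to establish these predicates by perception and formal inference under time pressure; the 2.3-second figure refers to that whole pipeline, not to a four-way boolean test.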

Ultimately, research along this line should enable humans, in particular some human police, to simply be replaced by machines that, as a matter of ironclad logic, cannot do wrong.  If George Floyd had been detained and held by an ethically correct robot built in accordance with our research, he’d still be alive; a parallel “what could have been” applies to a long line of avoidable deaths.  From an AI point of view, perhaps the best way to save black lives is to have ethically correct machines enforce the law, rather than inevitably imperfect humans.
