This summer I had the exciting opportunity to research and write a paper on a topic in the philosophy of artificial intelligence. I was interested specifically in the ways humans interact with human-like social robots and their perceptions of a robot’s moral agency.
Social robots are an emerging technology meant to share social spaces with humans. The technology is still only emerging, but it is impressive. For example, social robots have several applications in the education and medical sectors, among others. However, none of these robots are currently considered moral agents. This means that the machines are not morally responsible for what they do. For example, instead of blaming a self-driving car for its own safety malfunction, we should hold the company or developers behind the design accountable.
This project examines the probable future scenario where it is unclear whether some robots are moral agents or not. Put most directly, humans are prone to anthropomorphizing the things we interact with. So, if a sufficiently life-like robot walked and talked like a human, reasonable people would naturally give it the benefit of the doubt and treat it as a responsible, decision-making moral agent.
The problem is that mounting philosophical and technological evidence shows that robots and AI are not conscious, and it may not be possible to make conscious machines at all. For this reason, we are not entitled to consider them moral agents. Yet, in the future, it might be very easy to mistake a robot for one.
Moral agents deserve rights. But these “mistaken moral agents” (MMAs) would not. I foresee political and social confusion in a world where MMAs are adopted widely. I explain how this confusion would arise and propose different solutions to prevent it.
As an aspiring Ph.D. student, my objective with this project was not only to put my research, writing, and argumentation skills to the test, but also to develop them under the supervision and mentorship of Dr. Jocelyn Maclure. Dr. Maclure is a professor of political philosophy who is currently interested in the ways democracy, rights, technology, and artificial intelligence intersect. His experience, input, and guidance were invaluable to me throughout the summer.
The project started with a comprehensive literature review. My research took me to multiple fields outside of philosophy, including political science, computer science, and psychology. I met with Dr. Maclure regularly to provide updates on my research and writing progress, as well as to discuss my arguments in detail. His feedback was always insightful. The research process was time-consuming but nevertheless enjoyable and intellectually stimulating.
The writing was, in my view, the greatest challenge I faced throughout the entire project. Even though I had built a wealth of knowledge during my research, this did not translate into a perfectly smooth writing process. I found that it was easy to write a lot, but difficult to organize and then distill my ideas into a rigorous and complete paper. Taking short rest breaks and going for walks to clear my mind were the best ways to refocus and keep writing. Writer’s block is real!
After obtaining my bachelor’s degree, I intend to aim single-mindedly at a career in academia by pursuing a Ph.D. in philosophy. I am certain that the skills I built working on my ARIA project this summer will contribute directly to both my applications and my future success in academia. Moreover, I am excited to say that I intend to further refine the paper I wrote during this project into a fully publishable article for an academic journal.
Finally, I want to thank Mr. Harry Samuel once again for his generous support of my project. Working on my ARIA this summer has been the highlight of my time here at McGill, and the experience I had working with Dr. Maclure has proven invaluable.