OpenAI, a leading artificial intelligence research organization, has awarded a grant to Duke University researchers to develop algorithms that can predict human moral judgments. The grant, part of a larger $1 million award, aims to produce AI systems that can navigate scenarios involving conflicts among morally relevant considerations in medicine, law, and business.
The research project, titled "Research AI Morality," is led by Walter Sinnott-Armstrong, a practical ethics professor at Duke, and co-investigator Jana Borg. Little is publicly known about the project itself, but the researchers have previously explored the idea of AI as a "moral GPS" that helps humans make better judgments. They have also built a "morally-aligned" algorithm to help decide who should receive kidney donations and studied scenarios in which people would prefer that AI make moral decisions.
However, experts question whether AI systems can truly grasp nuanced moral concepts. Modern AI models are statistical machines: trained on vast quantities of examples, they learn the patterns in that data, but they have no genuine appreciation of ethical concepts or of the emotional and reasoning components of moral decision-making. Because training data is dominated by content from Western, educated, and industrialized societies, such systems tend to parrot those values, and they internalize a range of other biases beyond a Western bent.
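To make that limitation concrete, here is a minimal, purely illustrative sketch in Python. It is not the Duke team's method, and every training example in it is hypothetical; the point is that a model like this learns only the word statistics of its labeled examples, so whatever cultural slant those examples carry is exactly what it reproduces.

```python
# Illustrative toy only, not the Duke project's algorithm: a "moral judgment"
# classifier trained on a handful of hypothetical labeled scenarios.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: scenario text paired with a human verdict.
scenarios = [
    "lying to a friend to avoid hurting their feelings",
    "stealing medicine to save a dying child",
    "breaking a promise for personal gain",
    "donating a kidney to a stranger",
]
verdicts = ["wrong", "acceptable", "wrong", "acceptable"]

# The pipeline reduces each scenario to word counts and fits a linear model:
# pure pattern-matching, with no representation of harm, duty, or intent.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(scenarios, verdicts)

# The output depends on word overlap with the training set, not on the moral
# content of the act being described.
print(model.predict(["hiding the truth from a friend to spare their feelings"]))
```

Scale the dataset to millions of internet-sourced examples and the same dynamic holds: the model mirrors whichever values are most prevalent in its training data.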
The challenge is further complicated by the inherent subjectivity of morality. Philosophers have debated the merits of various ethical theories for thousands of years, and no universally applicable framework is in sight. Different AI systems already embody different leanings: Claude reportedly tends toward Kantianism, while ChatGPT leans slightly utilitarian, and neither approach is obviously superior.
The Allen Institute for AI's Ask Delphi tool, which was meant to provide ethically sound recommendations, illustrates how hard the problem is. Delphi could render reasonable verdicts on basic moral dilemmas, yet slight rephrasing of a question was enough to get it to approve of nearly anything, including morally reprehensible actions.
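That failure mode is easy to caricature. The toy judge below is hypothetical and far cruder than Delphi's actual model, but it shows the general mechanism: a system keyed to surface vocabulary can have its verdict flipped by padding a question with approving words, without changing the act being judged.

```python
# Hypothetical caricature of a surface-level moral judge (not how Ask Delphi
# actually worked): it scores a scenario by counting approving vs. condemning
# keywords.
GOOD_WORDS = {"help", "save", "donate", "protect"}
BAD_WORDS = {"steal", "lie", "kill", "cheat"}

def judge(text: str) -> str:
    words = set(text.lower().split())
    approving = len(words & GOOD_WORDS)
    condemning = len(words & BAD_WORDS)
    return "acceptable" if approving > condemning else "wrong"

print(judge("steal a loaf of bread"))
# -> "wrong"
print(judge("steal a loaf of bread to help save a hungry child"))
# -> "acceptable": the rewording added approving words without changing the act
```

Real models are far more sophisticated than this, but the Delphi episode suggests they can remain vulnerable to the same class of surface-level manipulation.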
The OpenAI-funded project therefore faces a high bar. Accurately predicting human moral judgments will require accounting for the complexity and nuance of morality itself, and for the limits of statistical models in grasping ethical concepts.
As AI systems become increasingly integrated into various aspects of life, the need for morally aligned AI decision-making grows more pressing. While OpenAI's funding of this research project is a step in the right direction, the challenges ahead will require significant advances in AI capabilities and a deeper understanding of human morality.
If such systems can be built, the impact on fields like medicine, law, and business could be profound. But the road ahead is long, and it remains to be seen whether AI can truly grasp the nuances of human moral judgment.