Recently, a research team led by Professor Wang Guoyu of the School of Philosophy and the Institute for Technology Ethics and the Future of Humanity at Fudan University published a paper online in the international journal “Ethics and Information Technology,” titled “Possibilities and Challenges in the Moral Growth of Large Language Models: A Philosophical Perspective.”
The assessment results indicate that large language models such as GPT-2, GPT-3, and OPT (developed by Meta AI) performed poorly in ethical terms. Since the release of ChatGPT in November 2022, however, the moral capabilities of such models have improved significantly. The study finds that the moral growth trajectory of large language models resembles the model of moral development proposed by John Dewey. Specifically, the moral judgment capabilities of the GPT series have improved markedly as the number of model parameters grows. Nevertheless, because large language models lack genuine rational capacities, their moral growth remains limited: it is effective only to a certain extent, cannot fully address complex ethical challenges, and may even leave the models susceptible to misdirection or exploitation. External social governance and legal regulation therefore remain necessary to ensure the ethical safety of artificial intelligence technologies in practical applications.
—Social Sciences Weekly, Page 4, January 9, 2025