Researchers from the University of Pennsylvania School of Social Policy and Practice (SP2) and the Annenberg School for Communication have published recommendations to ensure the ethical use of artificial intelligence tools such as ChatGPT by social work scientists. The paper, titled “ChatGPT for Social Work Science: Ethical Challenges and Opportunities,” appears in the Journal of the Social Work and Research Association.
The article was co-authored by Desmond Upton Patton, Aviv Landau, and Siva Mathiyazhagan. Patton, a pioneer in the interdisciplinary fusion of social work, communication, and data science, is the Brian & Randy Schwartz Professor and holds a joint appointment with SP2 and Annenberg.
The article outlines the challenges that ChatGPT and other large language models pose with respect to bias, legality, ethics, data privacy, confidentiality, informed consent, and academic misconduct, and discusses the ethical use of the technology. It offers recommendations in five areas: transparency, fact-checking, authorship, plagiarism prevention, and inclusion and social justice.
Of particular concern to the authors are the limitations of artificial intelligence in the context of human rights and social justice.
“Like any bureaucratic system, ChatGPT enforces thinking without compassion, reason, speculation, or imagination,” they write.
For more information, please see SP2 News.