Kathy Baxter, principal architect of Salesforce’s ethical AI practice, says AI developers need to move quickly to develop and deploy systems that address algorithmic bias. In an interview with ZDNET, Baxter emphasized the need for diverse representation in datasets and user research to ensure fair and unbiased AI systems. She also stressed the importance of making AI systems transparent, understandable, and accountable while protecting individual privacy. And she argued for cross-sector collaboration, like the model used by the National Institute of Standards and Technology (NIST), to develop robust and secure AI systems that benefit everyone.
One of the fundamental issues in AI ethics is ensuring that AI systems are developed and deployed without reinforcing existing social biases or creating new ones. To achieve this, Baxter emphasized the importance of asking who benefits from AI technology and who pays for it. It is important to consider the datasets being used and ensure they reflect everyone’s voice. Inclusivity in the development process and identifying potential harms through user research are also essential.
Also: ChatGPT has zero intelligence but is a revolution in usability, say AI experts
“This is one of the fundamental issues we have to discuss,” Baxter said. “Women of color in particular have been asking and researching this question in this field for years, and I see a lot of people talking about it now, especially with the rise of generative AI. I’m very happy about that. But fundamentally, the question we have to ask is: who is benefiting from this technology, and who is paying for it? Whose voices are included?”
Social biases can be injected into AI systems through the datasets used to train them. Unrepresentative datasets, such as image datasets that feature only one race or lack cultural diversity, can produce biased AI systems. Additionally, applying AI systems unevenly across society can perpetuate existing stereotypes.
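As a toy illustration of that point (synthetic data, not drawn from the article or from Salesforce), the sketch below trains a simple classifier on data where one group vastly outnumbers another. The model fits the majority group’s pattern and performs near chance on the underrepresented group:

```python
# Toy illustration (synthetic data, not from the article): a classifier
# trained on an unrepresentative dataset learns the majority group's
# pattern and fails on the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group_a(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)  # group A's outcome tracks feature 0
    return X, y

def make_group_b(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 1] > 0).astype(int)  # group B's outcome tracks feature 1
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group_a(1000)
Xb, yb = make_group_b(20)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluating accuracy per group exposes the imbalance: A scores
# near-perfect, B scores close to a coin flip.
Xa_t, ya_t = make_group_a(500)
Xb_t, yb_t = make_group_b(500)
print("group A accuracy:", model.score(Xa_t, ya_t))
print("group B accuracy:", model.score(Xb_t, yb_t))
```

Measuring performance per group, as the last lines do, is the kind of concrete check that Baxter’s “whose voices are included?” question points toward.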
To make AI systems transparent and understandable to the public, it is important to prioritize explainability during the development process. Techniques such as chain-of-thought prompting can prompt an AI system to show its reasoning, making the decision-making process easier to follow. User research is also essential for refining explanations and helping users spot uncertainty in AI-generated content.
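A minimal sketch of chain-of-thought prompting, assuming the OpenAI Python SDK and a generic chat model (the article does not name a specific stack or implementation): the system message asks the model to lay out its reasoning before its answer, so a reviewer can inspect how the conclusion was reached.

```python
# Minimal chain-of-thought prompting sketch. The model name and client
# here are illustrative assumptions, not Salesforce's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A customer was charged twice for one order. What should we do?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-completion model works here
    messages=[
        {
            "role": "system",
            "content": (
                "Before answering, think step by step and list the "
                "reasoning that leads to your recommendation."
            ),
        },
        {"role": "user", "content": question},
    ],
)

# The visible reasoning steps let a human reviewer check *why* the
# model reached its answer, not just the answer itself.
print(response.choices[0].message.content)
```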
Also: AI has the potential to automate 25% of all jobs. Here’s which are most (and least) at risk
Transparency and consent are required to protect individual privacy and ensure the responsible use of AI. Salesforce follows guidelines for responsible generative AI, including respecting data provenance and only using customer data with consent. Enabling users to opt in, opt out, or otherwise control how their data is used is essential for privacy.
“We only use customer data with the customer’s consent,” Baxter said. “It’s really important to be transparent when using someone’s data, allow them to opt in, and be able to go back and say when they don’t want their data included.”
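In code, that opt-in principle reduces to a simple rule: data enters a training pool only when its owner has consented, and consent can be withdrawn later. The sketch below is hypothetical; the record and helper names are illustrative, not Salesforce’s API.

```python
# Hypothetical sketch of consent-gated data usage, following the
# opt-in / opt-out principle Baxter describes. Names are illustrative.
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    data: dict
    consented: bool  # set only when the customer explicitly opts in

def training_pool(records: list[CustomerRecord]) -> list[dict]:
    """Return only the records whose owners have opted in."""
    return [r.data for r in records if r.consented]

def revoke_consent(records: list[CustomerRecord], customer_id: str) -> None:
    """Honor a customer's request to withdraw their data."""
    for r in records:
        if r.customer_id == customer_id:
            r.consented = False

records = [
    CustomerRecord("c1", {"text": "support ticket"}, consented=True),
    CustomerRecord("c2", {"text": "chat transcript"}, consented=False),
]
revoke_consent(records, "c1")
print(training_pool(records))  # -> [] once c1 withdraws; c2 never opted in
```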
As competition to innovate in generative AI intensifies, maintaining human control and autonomy over increasingly autonomous AI systems is more important than ever. That control can be maintained by allowing users to make informed decisions about how they use AI-generated content and by keeping humans in the loop.
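One common way to keep a human in the loop is a review gate: generated drafts sit in a queue until a person explicitly approves them. The sketch below is a hedged illustration with hypothetical names, not any specific product’s workflow.

```python
# Hypothetical human-in-the-loop gate: AI-generated drafts are held
# until a person explicitly signs off. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False

class ReviewQueue:
    def __init__(self):
        self.pending: list[Draft] = []

    def submit(self, text: str) -> Draft:
        draft = Draft(text)
        self.pending.append(draft)
        return draft

    def approve(self, draft: Draft) -> str:
        """A human reviewer approves the draft before it is used."""
        draft.approved = True
        self.pending.remove(draft)
        return draft.text

queue = ReviewQueue()
draft = queue.submit("AI-generated reply to a customer inquiry")
# ...a human reads the draft, then:
published = queue.approve(draft)
print(published)
```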
Ensuring that AI systems are safe, reliable, and easy to use is critical. Achieving this will require collaboration across the industry. Baxter praised the AI Risk Management Framework created by NIST with the participation of more than 240 experts from various fields. This collaborative approach provides a common language and framework for identifying risks and sharing solutions.
Failure to address these ethical issues in AI can have serious consequences, as seen in wrongful arrests caused by facial recognition misidentification and in the generation of harmful imagery. Investing in safeguards and focusing on the here and now, rather than only on potential future harms, can help mitigate these problems and ensure that AI systems are developed and used responsibly.
Also: How ChatGPT works
While the future of AI and the potential of artificial general intelligence are intriguing topics, Baxter emphasizes the importance of focusing on the present. By ensuring the responsible use of AI today and addressing current societal biases, society will be better prepared for future AI advances. Investing in ethical AI practices and collaborating across industries can help create a safer, more inclusive future for AI technology.
“I think timing is really important,” Baxter said. “We have to really invest in the here and now, create the muscle memory, create these resources, and create regulations that allow us to keep moving forward safely.”