AI Boom Risks Amplifying Social Problems Without Urgent Action, Says AI Ethicist

Kathy Baxter of Salesforce explains how to prevent AI systems trained on internet data from reinforcing negative bias and causing societal harm.

Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce, has urged AI developers to act swiftly in creating and implementing systems that tackle algorithmic bias. In an interview with ZDNet, Baxter emphasized the importance of diverse representation in data sets and user research to ensure AI systems are fair and unbiased. She also underscored the need for transparency, comprehensibility, and accountability while safeguarding individual privacy. Baxter highlighted the value of cross-sector collaboration, citing the National Institute of Standards and Technology (NIST) as a model for developing robust and safe AI systems that benefit all.

Scrutinizing the Impact

In the realm of AI ethics, a pivotal concern is ensuring that AI systems are developed and deployed without reinforcing or introducing societal biases. To address this, Baxter emphasized the need to scrutinize who benefits from the technology and who bears its costs. It is also crucial to actively include diverse, representative voices in the data sets themselves. Involving diverse perspectives throughout the development process and conducting user research to identify potential harms are equally important.

“This is a fundamental question that necessitates discussion,” Baxter commented. “Women of color, in particular, have been exploring and researching this issue for years. It’s encouraging to see more people engaging in this dialogue, especially in relation to generative AI. However, we must fundamentally inquire: Who benefits from this technology? Who bears its consequences? Whose voices are given an opportunity to be heard?”

Recognizing the Impact of Data Sets

AI systems can inadvertently inherit social biases from the data sets used for their training. Biased AI systems can emerge from unrepresentative data sets that lack diversity or fail to capture cultural variations. Moreover, when systems are unevenly implemented throughout society, they can reinforce and perpetuate existing stereotypes.
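One way to surface the kind of unrepresentative data set described above is a simple audit of how a demographic or cultural attribute is distributed across training examples. The sketch below is illustrative only: the record fields (`text`, `dialect`) and the 70% skew threshold are assumptions, not anything prescribed in the article or by Salesforce.

```python
from collections import Counter

def representation_report(records, attribute):
    """Return each attribute value's share of the data set."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy data set: each record is one labeled training example.
samples = [
    {"text": "...", "dialect": "en-US"},
    {"text": "...", "dialect": "en-US"},
    {"text": "...", "dialect": "en-US"},
    {"text": "...", "dialect": "en-IN"},
]

report = representation_report(samples, "dialect")
# Flag any value that dominates the data set (threshold is arbitrary).
skewed = [value for value, share in report.items() if share > 0.7]
print(report)   # {'en-US': 0.75, 'en-IN': 0.25}
print(skewed)   # ['en-US']
```

A report like this is only a starting point; it shows which groups are underrepresented in the data but says nothing about how the model treats them, which is where the user research Baxter describes comes in.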

Prioritizing explainability during development is crucial for enhancing transparency and public understanding. “Chain of thought” prompts can make AI systems’ reasoning more understandable and improve their decision-making. User research plays a vital role in ensuring that explanations are clear, allowing users to identify uncertainties in AI-generated content.
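A chain-of-thought prompt simply asks the model to lay out its reasoning before its conclusion, so that a reader can inspect each step. The helper below is a minimal sketch of that prompting style; the exact wording and function name are assumptions for illustration, and the call to any particular model or API is deliberately omitted.

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question so the model is asked to show numbered
    reasoning steps before stating its final answer."""
    return (
        "Answer the question below. First think step by step, "
        "numbering each step, then give the final answer on a "
        "line beginning with 'Answer:'.\n\n"
        f"Question: {question}"
    )

prompt = chain_of_thought_prompt(
    "Which loan applicants does this model favor, and why?"
)
print(prompt)
```

Because the reasoning steps appear in the output, a reviewer can check where the model's logic is shaky or biased, rather than receiving only an unexplained verdict.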

Building a Foundation for Ethical AI

While the future prospects of AI, including the potential for artificial general intelligence, spark curiosity, Baxter emphasizes the significance of directing our attention to the present. Prioritizing responsible usage and addressing social biases today will lay a solid foundation for navigating future advancements. By dedicating resources to ethical AI practices and fostering collaboration among different industries, we can collectively contribute to shaping a safer and more inclusive future for AI technology.

Baxter stated, “The timeframe is crucial. We must invest in the present, building our expertise, resources, and regulations to enable safe progress in AI.”