06 June 2023

Ex-Twitter Exec Rumman Chowdhury talks algorithmic bias at Money 20/20 EU

Using technology for good isn’t something that happens magically or by accident. It requires intentional steps to ensure technology is created and used for good. 

Elon Musk made it clear that he is not looking to take those steps, as evidenced by the layoff of ex-Twitter exec Rumman Chowdhury and her team of AI ethics researchers. 

Dr. Rumman Chowdhury, previously the Director of META (ML Ethics, Transparency, and Accountability) at Twitter, is a leader you need to know. She’s made significant contributions to responsible AI. 

In November 2022, she made headlines when she tweeted that she and her team had been locked out of their accounts ahead of mass layoffs at the company under its new CEO, Elon Musk. 

Since then, Rumman has dedicated her time to fostering the growth of technology-forward companies that promote the responsible use of emerging technologies. She envisions a future in which AI is designed to be transparent, accountable, and aligned with human values. 

And she’s a total badass… 

At Money20/20 Europe, Rumman shared her invaluable experience in creating the industry’s first algorithmic tool to identify and mitigate bias in AI systems. 

She also discussed her leadership role at Twitter, where she guided a team of applied researchers and engineers in identifying and mitigating algorithmic harms.

Here’s what she shared… 

As a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University, Rumman continues to delve into the ethical dimensions of AI. 

She acknowledges the impact of science fiction in shaping our perceptions of technology and its role in society. Popular franchises like Star Trek have presented us with fictional universes where issues of race, feminism, and civil rights are explored in abstract and relatable ways. 

These stories provide an opportunity to delve into complex topics and reflect on what it means to be human, our collective morality, and the power of the human spirit.

However, Rumman emphasizes that these science fiction narratives should not be taken as blueprints for the future. 

Instead, they serve as tools for understanding technology and our responsibility in shaping its development.

Rumman’s journey in responsible AI began about seven years ago, when she established the Responsible AI practice at Accenture, where she focused on building responsible AI solutions for clients. 

Notably, her initial clients came from highly regulated industries such as banking and healthcare. She highlights Singapore as one of the pioneers in implementing principles of fairness, ethics, accountability, and transparency (FEAT) in government and business practices.

During her tenure at Twitter, Rumman spearheaded various initiatives to address algorithmic bias and discrimination. 

One notable project was the algorithmic bias bounty, in which Twitter invited the public to find and demonstrate harms in its image-cropping algorithm, promoting transparency and community involvement. 

She has also taken part in the largest public AI hacking exercise conducted to date, focused on generative AI models like ChatGPT, with the goal of involving the public in understanding and mitigating the societal impact of these systems.

When discussing algorithmic bias in the financial and banking sector, Rumman underscores the importance of fairness and the long-standing narratives surrounding bias in banking. 

She highlights how the impacts of redlining and lending bias compound over time. By understanding and addressing these biases, we can work towards building fairer and more accountable AI systems.
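
As a rough, illustrative sketch of the kind of disparity check this implies (not a description of any tool Rumman built), the snippet below compares approval rates across demographic groups in a hypothetical lending dataset and flags groups that fall below the "four-fifths rule" heuristic commonly cited in disparate-impact analysis. The data, group labels, and threshold are all assumptions for illustration.

```python
# Illustrative sketch only: measuring approval-rate disparity across
# demographic groups in a hypothetical lending dataset.
from collections import defaultdict

# Hypothetical records: (group label, loan approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count applications and approvals per group.
totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
reference = max(rates.values())  # best-treated group as the baseline

for group, rate in rates.items():
    ratio = rate / reference
    # The 0.8 threshold mirrors the "four-fifths rule" heuristic used in
    # disparate-impact analysis; it is a screening signal, not proof of bias.
    flag = "POTENTIAL DISPARITY" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.2f}, ratio to baseline {ratio:.2f} -> {flag}")
```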

The question of regulation and frameworks to tackle algorithmic bias and other ethical concerns is crucial for Rumman. 

Her work revolves around exploring the need for political and regulatory oversight. She envisions a global regulatory body, similar to the United Nations, to address the pressing issues related to AI. 

These include mass joblessness due to technological advancements, combating terrorist and extremist content, protecting democracy and democratic processes, and ensuring information integrity in the era of generative AI.

Regulatory frameworks play a pivotal role in shaping the responsible use of AI. Rumman envisions the establishment of international standards and guidelines prioritizing human rights, fairness, and accountability. 

While the specifics of such regulations are complex and require careful consideration, they can serve as guardrails to prevent the misuse of AI technology and protect individuals from harm.

While global governance is essential, Rumman acknowledges the importance of locally handling specific problems. 

For instance, issues of fairness and bias are highly context-specific and require localized solutions to avoid exclusion. Striking a balance between global and local governance is crucial in addressing the complexities of responsible AI.

Reflecting on the evolution of narratives in responsible AI, Rumman emphasizes that the industry is still in its early stages. 

The evolution of responsible AI narratives has been swift and transformative. Initially, discussions centered on the technical aspects of AI, such as algorithmic bias and fairness. As the field matured, however, the conversation expanded to encompass broader ethical considerations, including privacy, accountability, transparency, and the societal impact of AI.

And it will take a collaborative approach… 

Rumman believes responsible AI is a multidisciplinary field that requires collaboration among technologists, policymakers, ethicists, and social scientists. 

By bringing together diverse perspectives, we can ensure that AI systems are developed and deployed in a manner that aligns with our values and respects human rights.

In her work as a Responsible AI Fellow, Rumman actively engages with various stakeholders to foster dialogue and generate practical solutions. 

She emphasizes the importance of involving marginalized communities, whose voices have historically been underrepresented in technology development. By including diverse perspectives, we can avoid the perpetuation of bias and exclusionary practices in AI systems.

To achieve this, she believes in a three-pronged approach: technical innovation, public engagement, and regulatory frameworks.

On the technical front, researchers and engineers must continue to develop algorithms and tools that promote fairness, interpretability, and explainability. 

This involves addressing bias in data, improving model performance across different demographic groups, and creating mechanisms for users to understand and control the decisions made by AI systems.
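
To make that concrete, here is a minimal sketch, assuming a synthetic dataset and a scikit-learn logistic regression (none of this reflects any system Rumman or Twitter actually built), of how a team might report a model's accuracy and false-positive rate separately for each demographic group:

```python
# Minimal, assumed setup: evaluate a classifier's accuracy and
# false-positive rate per demographic group on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary label, and a group attribute.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
group = rng.choice(["group_a", "group_b"], size=1000)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

for g in np.unique(group):
    mask = group == g
    acc = (pred[mask] == y[mask]).mean()
    # False-positive rate: fraction of this group's true negatives
    # that the model predicted as positive.
    negatives = mask & (y == 0)
    fpr = pred[negatives].mean() if negatives.any() else float("nan")
    print(f"{g}: accuracy={acc:.3f}, false_positive_rate={fpr:.3f}")
```

Large gaps between groups on metrics like these are the kind of signal that prompts fixes to the training data, the features, or the decision threshold.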

Public engagement is another crucial aspect of responsible AI.

Rumman advocates for greater transparency and involving the public in decision-making processes related to AI. This can be achieved through initiatives like public audits and involving diverse communities in designing and deploying AI systems.

By including different perspectives, we can uncover blind spots, mitigate biases, and ensure that AI benefits everyone.

Ultimately, leaders like Rumman are advancing responsible AI and are dedicated to shaping the development and deployment of AI systems that align with human values. 

Her work encompasses technical innovation, public engagement, and establishing regulatory frameworks. By fostering collaboration, inclusivity, and ethical considerations, she strives to create a future where AI empowers individuals, promotes fairness, and contributes to the betterment of society. 

“There’s the AI hype, and then the AI fear hype. I think both are very, very tangible,” she said. “There is the fear hype, where much of the attention is being dragged towards these imaginative versus real-world scenarios. I don’t think we should stop building generative AI. 
We need to build better accountability institutions because, ultimately, technology can and should be used as a force for good. That does not happen magically. That happens when we are intentional about how we build it.”