To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Miranda Bogen is the founding director of the Center for Democracy and Technology’s AI Governance Lab, where she works to help create solutions that can effectively regulate and govern AI systems. She helped guide responsible AI strategies at Meta and previously worked as a senior policy analyst at Upturn, an organization that seeks to use tech to advance equity and justice.
Briefly, how did you get your start in AI? What attracted you to the field?
I was drawn to work on machine learning and AI by seeing the way these technologies were colliding with fundamental conversations about society — values, rights, and which communities get left behind. My early work exploring the intersection of AI and civil rights reinforced for me that AI systems are far more than technical artifacts; they are systems that both shape and are shaped by their interaction with people, bureaucracies, and policies. I’ve always been adept at translating between technical and non-technical contexts, and I was energized by the opportunity to help break through the appearance of technical complexity to help communities with different kinds of expertise shape the way AI is built from the ground up.
What work are you most proud of (in the AI field)?
When I first started working in this space, many folks still needed to be convinced AI systems could result in discriminatory impact for marginalized populations, let alone that anything needed to be done about those harms. While there is still too wide a gap between the status quo and a future where biases and other harms are tackled systematically, I’m gratified that the research my collaborators and I conducted on discrimination in personalized online advertising and my work within the industry on algorithmic fairness helped lead to meaningful changes to Meta’s ad delivery system and progress toward reducing disparities in access to important economic opportunities.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I’ve been lucky to work with phenomenal colleagues and teams who have been generous with both opportunities and sincere support, and we tried to bring that energy into any room we found ourselves in. In my most recent career transition, I was delighted that nearly all of my options involved working on teams or within organizations led by phenomenal women, and I hope the field continues to lift up the voices of those who haven’t traditionally been centered in technology-oriented conversations.
What advice would you give to women seeking to enter the AI field?
The same advice I give to anyone who asks: find supportive managers, advisors, and teams who energize and inspire you, who value your opinion and perspective, and who put themselves on the line to stand up for you and your work.
What are some of the most pressing issues facing AI as it evolves?
The impacts and harms AI systems are already having on people are well known at this point, and one of the most pressing challenges is moving beyond describing the problem to developing robust approaches for systematically addressing those harms and incentivizing their adoption. We launched the AI Governance Lab at CDT to drive progress in both directions.
What are some issues AI users should be aware of?
For the most part, AI systems are still missing seat belts, airbags, and traffic signs, so proceed with caution before using them for consequential tasks.
What is the best way to responsibly build AI?
The best way to responsibly build AI is with humility. Consider how the success of the AI system you are working on has been defined, who that definition serves, and what context may be missing. Think about for whom the system might fail and what will happen if it does. And build systems not just with the people who will use them but with the communities who will be subject to them.
How can investors better push for responsible AI?
Investors need to create room for technology builders to move more deliberately before rushing half-baked technologies to market. Intense competitive pressure to release the newest, biggest, and shiniest AI models is leading to concerning underinvestment in responsible practices. While uninhibited innovation sings a tempting siren song, it is a mirage that will leave everyone worse off.
AI is not magic; it’s just a mirror that is being held up to society. If we want it to reflect something different, we’ve got work to do.