
Women in AI: Rachel Coldicutt researches how technology impacts society



To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

In the spotlight today: Rachel Coldicutt, founder of Careful Industries, which researches the social impact of technology. Clients have included Salesforce and the Royal Academy of Engineering. Before Careful Industries, Coldicutt was CEO of the think tank Doteveryone, which also researched how technology affects society.

Before Doteveryone, she spent decades working in digital strategy for organizations like the BBC and the Royal Opera House. She attended the University of Cambridge and was appointed an OBE (Officer of the Order of the British Empire) for her work in digital technology.

Briefly, how did you get your start in AI? What attracted you to the field? 

I started working in tech in the mid-’90s. My first proper tech job was working on Microsoft Encarta in 1997, and before that, I helped build content databases for reference books and dictionaries. Over the last three decades, I’ve worked with all kinds of new and emerging technologies, so it’s hard to pinpoint the precise moment I “got into AI” because I’ve been using automated processes and data to drive decisions, create experiences, and produce artworks since the 2000s. Instead, I think the question is probably, “When did AI become the set of technologies everyone wanted to talk about?” The answer is probably around 2014, when DeepMind was acquired by Google. That was the moment in the U.K. when AI overtook everything else, even though a lot of the underlying technologies we now call “AI” were already in fairly common use.

I got into working in tech almost by accident in the 1990s, and the thing that’s kept me in the field through many changes is that it’s full of fascinating contradictions: I love how empowering it can be to learn new skills and make things, I’m fascinated by what we can discover from structured data, and I could happily spend the rest of my life observing and understanding how people make and shape the technologies we use.

What work are you most proud of in the AI field?

A lot of my AI work has been in policy framing and social impact assessments, working with government departments, charities and all kinds of businesses to help them use AI and related tech in intentional and trustworthy ways.

Back in the 2010s, I ran Doteveryone, a responsible tech think tank that helped change the frame for how U.K. policymakers think about emerging tech. Our work made it clear that AI is not a consequence-free set of technologies but something that has diffuse real-world implications for people and societies. In particular, I’m really proud of the free Consequence Scanning tool we developed, which is now used by teams and businesses all over the world, helping them anticipate the social, environmental, and political impacts of the choices they make when they ship new products and features.

More recently, the 2023 AI and Society Forum was another proud moment. In the run-up to the U.K. government’s industry-dominated AI Safety Summit, my team at Careful Trouble rapidly convened and curated a gathering of 150 people from across civil society to collectively make the case that it’s possible to make AI work for 8 billion people, not just 8 billionaires.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?  

As a comparative old-timer in the tech world, I feel like some of the gains we’ve made in gender representation in tech have been lost over the last five years. Research from the Turing Institute shows that less than 1% of the investment made in the AI sector has been in startups led by women, while women still make up only a quarter of the overall tech workforce. When I go to AI conferences and events, the gender mix — particularly in terms of who gets a platform to share their work — reminds me of the early 2000s, which I find really sad and shocking.

I’m able to navigate the sexist attitudes of the tech industry because I have the huge privilege of being able to found and run my own organization: I spent a lot of my early career experiencing sexism and sexual harassment on a daily basis — dealing with that gets in the way of doing great work and it’s an unnecessary cost of entry for many women. Instead, I’ve prioritized creating a feminist business where, collectively, we strive for equity in everything we do, and my hope is that we can show other ways are possible.

What advice would you give to women seeking to enter the AI field?

Don’t feel like you have to work in a “women’s issue” field, don’t be put off by the hype, and seek out peers and build friendships with other folk so you have an active support network. What’s kept me going all these years is my network of friends, former colleagues and allies — we offer each other mutual support, a never-ending supply of pep talks, and sometimes a shoulder to cry on. Without that, it can feel very lonely; you’re so often going to be the only woman in the room that it’s vital to have somewhere safe to turn to decompress.

The minute you get the chance, hire well. Don’t replicate structures you have seen or entrench the expectations and norms of an elitist, sexist industry. Challenge the status quo every time you hire and support your new hires. That way, you can start to build a new normal, wherever you are.

And seek out the work of some of the great women trailblazing AI research and practice: Start by reading pioneers like Abeba Birhane, Timnit Gebru, and Joy Buolamwini, who have all produced foundational research that has shaped our understanding of how AI changes and interacts with society.

What are some of the most pressing issues facing AI as it evolves?

AI is an intensifier. It can feel like some of its uses are inevitable, but as societies, we need to be empowered to make clear choices about what is worth intensifying. Right now, the main thing increased use of AI is doing is increasing the power and the bank balances of a relatively small number of male CEOs, and it seems unlikely that [it] is shaping a world in which many people want to live. I would love to see more people, particularly in industry and policymaking, engaging with the questions of what more democratic and accountable AI looks like and whether it’s even possible.

The climate impacts of AI — the use of water, energy and critical minerals — and the health and social justice impacts for people and communities affected by exploitation of natural resources need to be top of the list for responsible development. The fact that LLMs, in particular, are so energy intensive speaks to the fact that the current model isn’t fit for purpose; in 2024, we need innovation that protects and restores the natural world, and extractive models and ways of working need to be retired.

We also need to be realistic about the surveillance impacts of a more datafied society and the fact that — in an increasingly volatile world — any general-purpose technologies will likely be used for unimaginable horrors in warfare. Everyone who works in AI needs to be realistic about the historical, long-standing association of tech R&D with military development; we need to champion, support, and demand innovation that starts in and is governed by communities, so that we get outcomes that strengthen society rather than fuel destruction.

What are some issues AI users should be aware of?

As well as the environmental and economic extraction that’s built into many of the current AI business and technology models, it’s really important to think about the day-to-day impacts of increased use of AI and what that means for everyday human interactions.

While some of the issues that hit the headlines have been around more existential risks, it’s worth keeping an eye on how the technologies you use are helping and hindering you on a daily basis: what automations can you turn off and work around, which ones deliver real benefit, and where can you vote with your feet as a consumer to make the case that you really want to keep talking with a real person, not a bot? We don’t need to settle for poor-quality automation and we should band together to ask for better outcomes!

What is the best way to responsibly build AI?

Responsible AI starts with good strategic choices — rather than just throwing an algorithm at a problem and hoping for the best, it’s possible to be intentional about what to automate and how. I’ve been talking about the idea of “Just enough internet” for a few years now, and it feels like a really useful idea to guide how we think about building any new technology. Rather than pushing the boundaries all the time, can we instead build AI in a way that maximizes benefits for people and the planet and minimizes harm?

We’ve developed a robust process for this at Careful Trouble, where we work with boards and senior teams, starting with mapping how AI can, and can’t, support your vision and values; understanding where problems are too complex and variable for automation to help, and where it will create benefit; and lastly, developing an active risk management framework. Responsible development is not a one-and-done application of a set of principles but an ongoing process of monitoring and mitigation. Continuous deployment and social adaptation mean quality assurance can’t end once a product is shipped; as AI developers, we need to build the capacity for iterative, social sensing and treat responsible development and deployment as a living process.

How can investors better push for responsible AI? 

By making more patient investments, backing more diverse founders and teams, and not seeking out exponential returns.


