Women in AI: Rachel Coldicutt researches how technology impacts society | TechCrunch

Giving women academics and others focused on AI their well-deserved, and overdue, time in the spotlight, TechCrunch is publishing a series of interviews focused on notable women who have contributed to the AI revolution. We're publishing these pieces throughout the year as the AI boom continues, highlighting important work that often goes unrecognized. Read more profiles here.

In the spotlight today: Rachel Coldicutt is the founder of Careful Industries, which researches the social impact technology has on society. Clients include Salesforce and the Royal Academy of Engineering. Prior to Careful Industries, Coldicutt was CEO at the think tank Doteveryone, which also researched how technology impacts society.

Prior to Doteveryone, she spent decades working in digital strategy for companies such as the BBC and the Royal Opera House. She studied at Cambridge University and was awarded an OBE (Order of the British Empire) for her work in digital technology.

Briefly, how did you get your start in AI? What drew you to the field?

I started working in tech in the mid-'90s. My first proper tech job was working on Microsoft Encarta in 1997, and before that, I helped build content databases for reference books and dictionaries. Over the past three decades, I've worked with all kinds of new and emerging technologies, so it's hard to pinpoint the exact moment I "got into AI," as I've been using automation and data to drive decisions, create experiences and produce artwork since the 2000s. Instead, I think the question is probably, "When did AI become the set of technologies everyone wanted to talk about?" And I think the answer is probably around 2014, when DeepMind was acquired by Google. That was the moment in the U.K. when AI overtook everything else, even though a lot of the underlying technologies we now call "AI" were already in fairly common use.

I got into tech almost by accident in the 1990s, and what's kept me in the field through so many transitions is that it's full of fascinating paradoxes: I love how empowering it can be to learn new skills and make things, I'm fascinated by what we can discover from structured data, and I could happily spend the rest of my life observing and understanding how people use and shape the technologies they're given.

What work in the AI field are you most proud of?

Much of my AI work has been in policymaking and social impact assessment, working with government departments, charities and businesses of all kinds to help them use AI and related technologies in deliberate and trustworthy ways.

In the 2010s I ran Doteveryone, a responsible tech think tank, which helped change the frame of how U.K. policymakers think about emerging tech. Our work made it clear that AI is not a consequence-free set of technologies but rather something that has real-world implications for people and societies. In particular, I'm really proud of the free Consequence Scanning tool we developed, which is now used by teams and businesses around the world, helping them assess the social, environmental and political impacts of their choices when they ship new products and features.

More recently, the 2023 AI and Society Forum was another proud moment. In the run-up to the U.K. government's industry-dominated AI Safety Summit, my team at Careful Trouble quickly convened 150 people from across civil society to collectively make the case that it's possible to make AI work for 8 billion people, not just 8 billionaires.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

As a relative old-timer in the tech world, I feel like much of what we gained in gender representation in tech has been lost over the last five years. Research from the Turing Institute shows that less than 1% of investment in the AI sector has gone to startups led by women, while women still make up only a quarter of the overall tech workforce. When I go to AI conferences and events, the gender mix, particularly in terms of who gets a platform to share their work, reminds me of the early 2000s, which I find really sad and shocking.

I'm able to navigate the sexist attitudes of the tech industry because I have the great privilege of being able to found and run my own organization: I spent much of my early career facing sexism and sexual harassment on a daily basis. That gets in the way of doing great work and is an unnecessary cost of entry for many women. Instead, I've chosen to build a feminist business where, collectively, we strive for equity in everything we do, and my hope is that we can show that other ways are possible.

What advice would you give to women aspiring to enter the AI field?

Don't feel like you have to work on a "women's issue," don't be put off, and seek out peers and build friendships with other people so you have an active support network. What's kept me going all these years is my network of friends, former colleagues and allies; we offer each other mutual support, an endless supply of pep talks and sometimes a shoulder to cry on. Without that, it can feel very lonely; you're often going to be the only woman in the room, so having a safe place to decompress is essential.

The minute you get the chance, hire well. Don't replicate the structures you've seen or entrench the expectations and norms of an elitist, sexist industry. Challenge the status quo every time you hire, and support your new hires. That way, you can start to build a new normal, wherever you are.

And seek out the work of some of the great women pioneering AI research and practice: Start by reading the work of pioneers like Abeba Birhane, Timnit Gebru and Joy Buolamwini, who have all produced foundational research that has shaped our understanding of how AI changes and interacts with society.

What are the most pressing issues facing AI as it evolves?

AI is an accelerator. It might feel like some uses are inevitable, but as societies, we need to be empowered to make clear choices about what is worth accelerating. At the moment, the main thing the increased use of AI is doing is increasing the power and bank balances of a relatively small number of male CEOs, and it doesn't feel as if that's creating a world many people want to live in. I would love to see more people, particularly in industry and policymaking, engaging with the questions of what more democratic and accountable AI looks like and whether it's even possible.

AI's climate impacts, including the use of water, energy and critical minerals, and the health and social-justice impacts on people and communities affected by the exploitation of natural resources, need to be at the top of the list for responsible development. The fact that LLMs, in particular, are so energy-intensive speaks to the fact that the current model isn't fit for purpose. In 2024, we need innovation that protects and restores the natural world, and extractive models and ways of working need to be retired.

We also need to be realistic about the surveillance implications of a more datafied society and the fact that, in an increasingly volatile world, any general-purpose technologies will likely be used for unimaginable horrors in war. Anyone working in AI needs to be realistic about tech R&D's long-standing association with military development. We need to champion, support and demand innovation that starts with, and is governed by, communities, so that we get outcomes that strengthen societies rather than adding to destruction.

What issues should AI users be aware of?

As well as the environmental and economic extraction built into many of the current AI business and technology models, it's really important to think about the day-to-day impacts of the increased use of AI and what that means for everyday human interactions.

While some of the headline-grabbing debates have been about more existential risks, it's worth keeping an eye on how the technologies you use are helping and hindering you on a daily basis: which automations deliver real value, and where you, as a consumer, might want to vote with your feet and make the case that you'd rather talk to a real person than a bot. We don't need to settle for poor-quality automation, and we should band together to demand better outcomes!

What is the best way to responsibly build AI?

Responsible AI starts with good strategic choices: Rather than throwing an algorithm at a problem and hoping for the best, it's possible to be intentional about what to automate and how. I've been talking about the idea of a "just enough internet" for a few years now, and it feels like a really useful idea for thinking about how we build any new technology. Instead of pushing the boundaries all the time, can we build AI in a way that maximizes benefits and minimizes harm to people and the planet?

We've developed a robust process for this at Careful Trouble, where we work with boards and senior teams. It starts with mapping how AI can and can't support your vision and values; understanding where problems are too complex and variable to be improved by automation, and where automation would create real benefits; and, finally, developing a proactive risk-management framework. Responsible development is not a one-off application of a set of principles, but an ongoing process of monitoring and mitigation. Continuous deployment and social adaptation mean quality assurance can't be something that ends once a product is shipped; as AI developers, we need to build iterative, social sensing capabilities and treat responsible development and deployment as a living process.

How can investors best push for responsible AI?

By investing more patiently, backing more diverse founders and teams, and not looking for rapid returns.
