Women in AI: Anna Korhonen studies the intersection between linguistics and AI | TechCrunch


To give female academics and others focused on AI their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on notable women who have contributed to the AI revolution. We'll be publishing several pieces throughout the year as the AI boom continues, highlighting important work that often goes unrecognized. Read more profiles here.

Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She is also a Senior Research Fellow at Churchill College, a fellow at the Association for Computational Linguistics, and a fellow at the European Laboratory for Learning and Intelligent Systems.

Korhonen previously served as an associate at the Alan Turing Institute, and she holds a PhD in computer science and master's degrees in both computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a special interest in responsible and “human-centered” NLP that is, in her own words, “based on an understanding of human cognitive, social and creative intelligence.”

Q&A

Briefly, how did you get your start in AI? What drew you to the field?

I have always been fascinated by the beauty and complexity of human intelligence, especially in relation to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it is a field that allows me to combine all these interests.

What work in the AI field are you most proud of?

While the science of building intelligent machines is fascinating, and one can easily get lost in the world of language modeling, the ultimate reason we're building AI is its practicality. I am most proud of the work where my fundamental research on natural language processing has produced tools that can support social and global good. For example, tools that can help us better understand how diseases like cancer or dementia develop and can be treated, or apps that can support education.

Much of my current research is based on the mission of developing AI that can improve human lives for the better. AI has enormous positive potential for social and global good. A big part of my job as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing this potential.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I am fortunate to be working in an area of AI where we have a significant female population and established support networks. I have found them immensely helpful in navigating career and personal challenges.

The biggest issue for me is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever-larger AI models at any cost is a perfect example. This has a huge impact on the priorities of both academia and industry, as well as wider socio-economic and environmental implications. Do we need larger models, and what are their global costs and benefits? I think if we had a better gender balance in the field, we would have asked these questions much earlier in the game.

What advice would you give to women aspiring to enter the AI field?

AI desperately needs more women at all levels, but especially at leadership levels. The current leadership culture isn't necessarily attractive to women, but active involvement can change that culture — and ultimately the culture of AI. Women are notoriously not always good at supporting each other. I would really like to see a change in attitude in this regard: we need to actively network and support each other if we want to achieve a better gender balance in this field.

What are the most pressing issues facing AI as it evolves?

AI has developed incredibly fast: it has transformed from an academic field to a global phenomenon in less than a decade. During this time, most of the efforts have gone towards scaling through large-scale data and computation. Little effort has been put into thinking about how to develop this technology so that it can best serve humanity. People have good reason to worry about the safety and reliability of AI and its impact on jobs, democracy, the environment and other areas. We urgently need to put human needs and safety at the center of AI development.

What issues should AI users be aware of?

Current AI, even when seemingly very fluent, ultimately lacks the world knowledge of humans and the ability to understand the complex social contexts and norms within which we operate. Even today's best technology makes mistakes, and our ability to prevent or predict these mistakes is limited. AI can be a very useful tool for many tasks, but I wouldn't trust it to educate my children or make important decisions for me. We humans must remain in charge.

What is the best way to build AI responsibly?

AI developers tend to think about ethics as an afterthought, after the technology has already been built. The best time to think about it is before any development begins. Questions like “Do I have a diverse enough team to develop a fair system?” or “Is my data truly free to use and representative of the entire user population?” or “Are my techniques robust?” should really be asked at the outset.

While we can address some of this problem through education, we can only enforce it through regulation. Recent developments in national and global AI regulation are important and need to continue in order to guarantee that future technologies will be safer and more reliable.

How can investors best push for responsible AI?

AI regulations are emerging and companies will eventually need to comply. We can think of responsible AI as sustainable AI that is truly worth investing in.
