- AI’s Current Surge: Fei-Fei Li, reflecting on the rapid advances and investment since ChatGPT’s release, describes AI as a civilizational technology with profound impacts on lives and work.
- Personal Background: Li emigrated from China to the US at age 15, facing language barriers and financial struggles, and ran her family’s dry-cleaning business from age 18 through graduate school.
- Shift to AI: Initially drawn to physics for its vast scope and audacious questions, Li shifted in college to asking what intelligence is and how to build intelligent machines.
- ImageNet Breakthrough: Li created ImageNet, a visual database of millions of annotated images, drawing on psychology and linguistics to organize visual concepts the way human semantic knowledge is structured.
- Spatial Intelligence Focus: As co-founder and CEO of World Labs, Li is advancing spatial intelligence, enabling AI to perceive and interact with 3D worlds as a complement to language models, with applications in design, robotics and education.
- AI’s Societal Impacts: Li acknowledges AI’s double-edged nature, predicting shifts in jobs while emphasizing upskilling, responsible development and democratization beyond a handful of tech companies.
- Risks and Responsibilities: Disagreeing with extinction fears voiced by experts such as Geoffrey Hinton, Li stresses human governance, regulation and agency to prevent misuse, and highlights AI’s benevolent potential in healthcare and education.
- Advice for the Future: Li urges parents to foster human values such as curiosity, critical thinking and integrity in their children, preparing them to use AI responsibly without over-relying on it.
By Mishal Husain
November 21, 2025 at 1:00 AM EDT
AI is now so present in our lives that the story of how it came to be so is receding — that is, if we ever absorbed it properly. It’s a tale of scientists laboring for years in the hope of one day making machines intelligent, and breaking down the components of human intelligence in order to get there.
Stanford University professor Fei-Fei Li was at the forefront of that quest, which is why she has been called the “godmother of AI.” In 2009 she released her academic work on a visual database containing millions of images, and the idea of training computers to “see” as humans do sparked a wave of AI development.
Behind this breakthrough is a woman with an unusual background, one that plays a role in how she sees the world. Li arrived in the US at age 15 when her parents emigrated from China. She spoke little English and had to adjust academically, socially and financially to a new environment; after her parents set up a small dry-cleaning business to make ends meet, she ran it through her college years.
When Li came into Bloomberg headquarters in London, we talked about her personal and professional history, and I found in her a deep sensitivity. Excited about the potential of technology she’s helped create, she also emphasizes human agency — you’ll find her message to parents towards the end.
This conversation has been edited for length and clarity. You can listen to an extended version in the latest episode of The Mishal Husain Show podcast.
May I start with this remarkable period for your industry? It’s been three years since ChatGPT was released to the public. Since then there have been new vehicles, new apps and huge amounts of investment. How does this moment feel to you?
AI is not new to me. I’ve been in this field for 25 years. I’ve lived and breathed it every day since the beginning of my career. Yet this moment is still daunting and almost surreal to me, in terms of its massive, profound impact.
This is a civilizational technology. I’m part of the group of scientists that made this happen and I did not expect it to be this massive. 1
1 Li spoke to us while in London to receive the 2025 Queen Elizabeth Prize for Engineering, alongside Nvidia CEO Jensen Huang and five others. Li has previously written and spoken about what she’s called “the AI winter” of the early 21st century, when those working in the field were getting no attention.
When was the moment it changed? Is it because of the pace of developments or because the world has woken up and therefore turned the spotlight on people like you?
I think it’s intertwined, right? But for me to define this as a civilizational technology is not about the spotlight. It’s not even about how powerful it is. It is about how many people it impacts.
Everyone’s life, work, wellbeing, future, will somehow be touched by AI.
AI Investment Surge
Capital spending on AI-related activities exceeds half of all investment
Note: Information processing equipment + software + research and development as a share of nonresidential investment. Source: Bureau of Economic Analysis, Bloomberg calculations
In bad ways as well as good?
Well, technology is a double-edged sword, right? Since the dawn of human civilization, we have created tools we call technologies, and these tools are meant, in general, for doing good things. Along the way we might intentionally use them in the wrong way, or they might have unintended consequences.
You said the word powerful. The power of this technology is in the hands of a very small number of companies, most of them American. How does that sit with you?
You are right. The major tech companies — through their products — are impacting our society the most. I would personally like to see this technology being much more democratized.
No matter who builds or holds this profoundly impactful technology: Do it in a responsible way.
I also believe every individual should feel they have the agency to impact this technology.
You are a tech CEO as well as an academic. Your very young company, little more than a year old, is reportedly already worth a billion dollars.
Yes! [Laughs]
I am co-founder and CEO of World Labs. We are building the next frontier of AI — spatial intelligence — which people don’t hear too much about today because we’re all about large language models. I believe spatial intelligence is as critical [as] — and complementary to — language intelligence. 2
2 World Labs raised more than $200 million ahead of its launch in 2024. In a TED Talk she delivered that year, Li said: “If we want to advance AI beyond its current capabilities, we want more than AI that can see and talk. We want AI that can do.”
I know that your first academic love was physics.
Yes.
What was it in the life or work of the physicists you most admired that made you think beyond that particular field?
I grew up in a small, or less well known, city in China. And I come from a small family. So you could say life was small, in a sense. My childhood was fairly simple and isolated. I was the only child. 3
3 Li grew up in Chengdu in China’s Sichuan Province; her mother was a teacher and her father worked in the computer department of a chemicals factory. In her book The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI, she linked her professional path to her early years: “Research triggered the same feeling I got as a child exploring the mountains surrounding Chengdu with my father, when we’d spot a butterfly we’d never seen before, or happen upon a new variety of stick insect.”
Physics is almost the opposite — it’s vast, it’s audacious. The imagination is unbounded. You look up in the sky, you can ponder the beginning of the universe. You look at a snowflake, you can zoom into the molecular structure of matter. You think about time, magnetic fields, nuclear.
It takes my imagination to places that you can never be in this world. What fascinates me to this day about physics is to not be afraid of asking the boldest, [most] audacious questions about our physical world, our universe [and] where we come from.
But your own audacious question, I think, was What is intelligence?
Yes. Each physicist I admire, I look at their audacious question, right from [Isaac] Newton to [James Clerk] Maxwell to [Erwin] Schrödinger to Einstein — my favorite physicist.
I wanted to find my own audacious question. Somewhere in the middle of college, my audacious question shifted from physical matters to intelligence. What is it? How does it come about? And most fascinatingly, How do we build intelligent machines? That became my quest, my north star.

And that’s a quantum leap because, from machines that were doing calculations and computations, you’re talking about machines that learn, that are constantly learning.
I like [that] you use the physics pun, the quantum leap.
Around us right now, there are multiple objects. We know what they are. The ability that humans have to recognize objects is foundational. My PhD dissertation was to build machine algorithms to recognize as many objects as possible.
What I found really interesting about your background is that you were reading very widely. And your ultimate breakthrough was possible when you started thinking about what psychologists and linguists were saying that was related to your field.
That’s the beauty of doing science at the forefront. It’s new, no one knows how to do it.
It’s pretty natural to look at the human brain and human mind and try to understand, or be inspired by, what humans can do. One of the inspirations in my early days of trying to unlock this visual intelligence problem was to look at how our visual semantic space is structured. There are tens of thousands, even millions, of objects in the world. How are they organized? Are they organized by alphabet, or by size or colors? 4
4 While doing her PhD at Caltech, Li became convinced that larger datasets would be crucial to AI progress. Later, she was influenced by neuroscientist and psychologist Irving Biederman’s paper on human image understanding, which estimated that the average person recognizes around 30,000 different kinds of objects.
Are you asking that because you have to understand how our brains organize [information] in order to teach the computers?
That’s one way to think about it.
I came upon a linguistic work called WordNet, a way to organize semantic concepts — not visuals, just semantics or words — in a particular taxonomy. 5
5 After Biederman’s paper, this was the second influential piece of work that came from outside Li’s own academic field.
Give me an example.
In a dictionary, apple and appliance are very close together, but in real life an apple and a pear are much closer to each other. The apple and pear [are] both fruits, whereas an appliance belongs to a whole different family of objects.
I made a connection in my head. This could be the way visual concepts are organized, because apples and pears are much more connected than apples and washing machines.
But also, even more importantly, the scale. If you look at the number of objects described by language, you realize how vast it is, right? And that was a particular epiphany for me. We, as intelligent animals, experience the world with a massive amount of data — and we need to endow machines with that ability.
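For readers who want to see the structure Li is describing, WordNet is still easy to explore. Here is a minimal sketch using Python’s NLTK interface (a convenient modern tool, not necessarily what Li’s group used), showing that apple sits near pear in the taxonomy while appliance lives on a distant branch:

```python
# A minimal look at the WordNet taxonomy via NLTK (pip install nltk).
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # fetch the corpus on first run

apple = wn.synsets("apple")[0]          # first sense: the fruit
pear = wn.synsets("pear")[0]            # first sense: the fruit
appliance = wn.synsets("appliance")[0]  # a household device

print(apple.hypernyms())                 # parent concepts, e.g. edible fruit
print(apple.path_similarity(pear))       # relatively high: nearby in the tree
print(apple.path_similarity(appliance))  # much lower: distant branches
```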
It’s worth noting that at that time — I think this is the early part of the century — the idea of big data didn’t really exist.
No, this phrase did not even exist. The scientific data sets we were playing with were tiny. 6
6 “Big data” was coined in the 1990s, but the term didn’t become commonplace until the 2010s. It may now seem obvious that large datasets are crucial to machine learning, but it wasn’t always so. After all, children can learn complex rules from just a few examples. As it turns out, modern AI’s performance hinges on the amount of data available to it.
How tiny?
In images, most of the graduate students in my era were playing with data sets of four or six — at most 20 — object classes. Fast forward three years, after we created ImageNet: it had 22,000 classes of objects and 15 million annotated images. 7
7 A group led by Li first released the ImageNet database in 2009. It has been credited with significantly advancing the ways computers recognize objects. Crucially, Li also created a competition where teams from around the world trained algorithms to classify images in a subset of the dataset. That, in turn, allowed a team led by Geoffrey Hinton at the University of Toronto to demonstrate the power of a seemingly antiquated technique: the neural network.
ImageNet was a huge breakthrough. It’s the reason that you’ve been called “the godmother of AI.” I’d love to understand what enabled you to make these connections and see things other scientists didn’t. You acquired English as a second language after you moved to the US. Is there something in that which led to what you’re describing?
I don’t know. Human creativity is still such a mystery. People talk about AI doing everything and I disagree. There’s so much about the human mind that we don’t know. So I can only conjecture that a combination of my interest and my experiences led to this.
I’m not afraid of asking crazy questions in science, not afraid of seeking solutions that are out of the box. My appreciation for the linguistic link with vision might be accentuated by my own journey of learning a different language. 8
8 Without any scientific evidence, I do wonder if Li’s background — and especially her acquisition of a new language in the crucial last three years before college — is linked to her pioneering work with ImageNet. She had to work so hard to understand her new world of the US, culturally as well as linguistically, and to fit all of her learnings together. To me, it would follow that she thought and read widely and was also attracted to ways to organize information in order to learn better.

King Charles III (top row, 2nd left) with the recipients of the 2025 Queen Elizabeth Prize for Engineering, including Professor Geoffrey Hinton (top right), Nvidia CEO Jensen Huang (bottom left) and Li (bottom center). Photographer: Yui Mok/Pool/Getty Images
What was it like coming to the United States as a teenager? That’s a particularly difficult stage in life to move and make new friends, even if you weren’t battling a language barrier.
That was hard. [Laughs]
I came to America when I was 15 — [to] Parsippany, New Jersey. None of us spoke much English. I was young enough to learn more quickly [but] it was very hard for my parents.
We were not financially very well [off] at all. My parents were doing cashier jobs and I was doing Chinese restaurant jobs. Eventually, by the time I entered college, my mom’s health was not so good. So my family and I decided to run a little dry cleaner shop to make some money to survive.
You got involved yourself?
I joke that I was the CEO. I ran the dry cleaner shop for seven years, from [age] 18 to the middle of my graduate school.
Even at a distance you ran your parents’ dry cleaning business?
Yes. I was the one who spoke English. So I took all the customer phone calls, I dealt with the billing, the inspections, all the business.
What did it teach you?
Resilience.
As a scientist, you have to be resilient because science is a non-linear journey. Nobody has all the solutions, you have to go through such a challenge to find an answer. And as an immigrant, you learn to be resilient.
Were your parents pushing you? They clearly wanted a better life for you; that’s why they left China for the United States. How much of this was coming from them, or from your own sense of responsibility to them?
To their credit, they did not push me much. They’re not “tiger parents,” in today’s language. They were just trying to survive, to be honest.
My mom is an intellectual at heart, she loves reading. But the combination of a tough, survival-driven immigrant’s life plus her health issues — she was not pushing me at all. As a teenager, I had no choice. I either had to make it or not make it; the stakes are pretty high. So I was pretty self-motivated.
I was always a curious kid and then my curiosity had an outlet, which was science — and that really grounded me. I wasn’t curious about night clubs or other things, I was an avid lover of science.
You also had a teacher who was really important. Tell me about him.
I excelled in math and I befriended Mr. Bob Sabella, the math teacher. We became friends through a mutual love of science fiction. At that time I was reading in Chinese; eventually I started reading English.
He probably saw in me a desire to learn. So he went out of his way to create opportunity for me to continue to excel in math. I remember I placed out of the highest curriculum of math and so there were no more courses. He would use his lunch hour to create a one-on-one class for me, and now that I’m a grownup I know he was not paid extra.
It was really a teacher’s love and sense of responsibility. He became an important person in my life. 9
9 This isn’t the first time a beloved teacher has come up in a Weekend Interview: Playwright James Graham told me that he credits his childhood drama teacher, Mr. Humphrey, with inspiring his career — and even named a character after him.
Is he still alive?
Mr. Sabella passed away while I was an assistant professor at Stanford. But his family — his two sons, his wife — they’re my New Jersey family.
You use the word love. Did he and his family introduce you to American society, the world of America beyond your school?
Absolutely. They introduced me to the quintessential middle-class American family in a suburban house. It was a great window for me: to know the society, to be grounded, to have friends and have a teacher who cared. 10
10 Sabella and his wife lent Li money for college and continued to help her parents with the dry cleaning business while she was away. “In many ways, he provided what felt like a missing piece in my relationships with both of my parents,” Li wrote in her book. “My mother had always inspired me, but she didn’t actually share my interests when it came to math and physics … And while my father’s influence was nearest to my heart — he was the first to encourage my curiosity about the natural world, and my introduction to physics — I had to acknowledge I’d long since outgrown his example.”
Do you think you could have had this career in China?
I don’t think I’m able to answer this question because life is so serendipitous, right? The journey would be very different. But what is timeless or invariant is the sense of curiosity, the pursuit of north stars. So I would still be doing AI somehow, I believe.
Do you still feel connected to China?
It’s part of my heritage. I feel very lucky that my career has been in America. The environment my family is in right now — Stanford, San Francisco, Silicon Valley — is very international. The discipline I’m in, AI, is so horizontal. It touches people everywhere. I do feel much more like a global citizen at this point.
It is a global industry, but there are some really striking advances in China: the number of patents, the number of AI publications coming out, the DeepSeek moment earlier this year. As you look ahead, do you think China will catch up with the US in the way that it has in other fields like manufacturing? 11
11 The US and China are effectively alone at the top of the AI race, with most top labs located in these two countries. While much of the world’s attention is focused on Western tech companies, Bloomberg reported last month that Chinese rivals are courting Africa’s startups and innovation hubs with open-source AI models, in a strategy “with parallels to China’s Belt and Road Initiative for physical infrastructure.”
I do think China is a powerhouse in AI.
At this moment, most people would recognize the two leading countries in AI are China and [the] US. The excitement, the energy, and frankly the ambition of many regions and countries, of wanting to have a role in AI, wanting to catch up or come ahead in AI — that is pretty universal.
And your own next frontier. Tell me what you mean when you say “spatial intelligence.” What are you working on?
Spatial intelligence is the ability for AI to understand, perceive, reason and interact [with the world]. It comes from a continuation of visual intelligence.
The first half of my career — around the ImageNet time — was trying to solve the fundamental problem of understanding what we are seeing. That’s a very passive act. It’s receiving information, being able to understand: This is a cup. This is a beautiful lady. This is a microphone. But if you look at evolution, human intelligence — perception — is profoundly linked to action. We see, because we move. We move, therefore we need to see better.
How you create that connection has a lot to do with space — because you need to see the 3D space. You need to understand how things move. You need to understand, When I touch this cup, how do I mathematically organize my fingers so that it creates a space that would allow me to grab this cup?
All this intricacy is centered around this capability of spatial intelligence. 12
12 This idea that perception is fundamentally organized around action originates in the work of the psychologist James J. Gibson, who argued that organisms do not passively receive visual information but actively detect the possibilities for action. “We perceive in order to move, and we move in order to perceive,” he said. The Stanford Vision and Learning Lab’s 3D simulation platform is named The Gibson Environment to acknowledge his influence.
On your website, I’ve seen the preview you’ve released — essentially a virtual world. Is it, to you, a tool for training AI?
So let’s just be clear with the definition. Marble is a frontier model. What’s really remarkable is that it generates a 3D world from a simple prompt. That prompt could be, Give me a modern-looking kitchen. Or, Here’s a picture of a modern-looking kitchen. Make it a 3D world.
The ability to create a 3D world is fundamental to humans. I hope one day it’s fundamental to AI. If you are a designer or architect, you can use this 3D world to ideate, to design. If you are a game developer, you can use it to obtain these 3D worlds, so that you can design games.
If you want to do robotic simulation, these worlds will become very useful as training data for robots or evaluation data. If you want to create immersive, educational experiences in AR [or] VR, this model would help you to do that. 13
13 There are several companies working on different flavors of this idea, including Google DeepMind. Li outlined her vision for “spatial intelligence” as AI’s next frontier in a recent essay.
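Marble’s programmatic surface isn’t described here, so the following is a purely hypothetical sketch of the prompt-to-world pattern Li outlines; every name in it is invented for illustration, not drawn from World Labs.

```python
# Hypothetical sketch of the text/image -> 3D world workflow Li describes.
# None of these names are real World Labs APIs; all are invented to illustrate.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class World3D:
    """Stand-in for a generated, explorable 3D scene."""
    prompt: str
    assets: List[str] = field(default_factory=list)  # placeholder scene contents

def generate_world(prompt: str, image_path: Optional[str] = None) -> World3D:
    """Imagined frontier-model call: a text prompt (plus an optional reference
    image) goes in; an explorable 3D world comes out."""
    source = f"{prompt} [conditioned on {image_path}]" if image_path else prompt
    return World3D(prompt=source, assets=["floor", "walls", "counters", "lighting"])

kitchen = generate_world("a modern-looking kitchen")
print(kitchen.assets)  # downstream: design ideation, game levels, robot training
```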

A screenshot of a world created by Marble, an AI model that produces 3D environments based on a text or image prompt. Source: Marble
Interesting. I’m imagining girls in Afghanistan, virtual classrooms in a very challenged place.
Yes. Or, how do you explain to [an] 8-year-old, What is a cell? One day we’ll create a world that’s inside a cell. And then the student can walk into that cell and understand the nucleus, the enzymes, the membranes. You can use this for so many possibilities. 14
14 As Li said these words I was remembering school biology exams and drawing and labeling the structure of a cell. I can imagine a virtual world inside a cell creating a strong mental image for a younger child, one that is not easily forgotten.
Yours is a big, complex industry, but there are some immediate issues. Can I put a selection of those to you, for an instinctive response on how you see them? Number one: Is AI going to destroy large numbers of jobs?
Technologies do change the landscape of labor. A technology as impactful and profound as AI will have a profound impact on jobs. 15
15 The picture on AI and jobs is complicated and uncertain: Anecdotally, some companies claim they’re already using it to automate work — especially at startups — but it’s not yet visible in economy-wide jobs data, at least in the US. However, it may already be taking away entry-level jobs in white-collar professions, and some economists worry it could hurt workers longer-term.
Which is happening already. Salesforce CEO Marc Benioff has said 50% of the company’s customer-support roles are going to AI.
So it will, there’s no question. Every time humanity has created a more advanced technology — for example, steam engines, electricity, the PC, cars — we have gone through difficult times. But we have also gone through [a] re-landscaping of jobs.
Only talking about the number of jobs, bigger or smaller, doesn’t do justice. We need to look at this in a much more nuanced way and really think about how to respond to this change.
There is the individual-level responsibility — you’ve got to learn, upskill yourself. There is also company-level responsibility, societal-level responsibility. This is a big question.
Number two is an even bigger question. You’ll know Professor Geoffrey Hinton, a Nobel Laureate whose work has overlapped with yours. He thinks there’s a 10% to 20% chance that AI leads to human extinction. 16
16 So-called AI doomers worry that humans don’t know how to ensure alignment between AI and human goals. As the technology gets smarter, they warn, it will develop the ability to evade instructions and pursue its own self-preservation — possibly at our expense.
First of all Professor Hinton, or as I call him, Geoff — because I have known him for 25 years — he’s someone I admire and we talk a lot.
But this thing about replacing the human race, I respectfully disagree. Not in the sense that it’ll never happen. In the sense that if the human race got into real trouble, that would be the result — in my opinion — of humans doing wrong things, not machines doing wrong things.
But the very practical point he makes: How do we prevent the superintelligent creation from taking over, when it becomes more intelligent than us? We have no model for that, he says.
I think this question has made an assumption. We don’t have such a machine yet. We still have a distance, a journey to take, to that day.
My question is, Why would humanity as a whole allow this to happen? Where is our collective responsibility? Where is our governance or regulation? 17
17 It’s worth watching this interview with Hinton by Bloomberg’s David Westin. Raising the question of how we prevent AI from taking over, Hinton said companies and governments are making bad assumptions. “Their basic model is: ‘I’m the CEO, and this superintelligent AI is the extremely smart executive assistant. I’m the boss and I can fire the executive assistant if she doesn’t do what I want,’” Hinton says. “It’s not going to be like that when it’s smarter than us and more powerful than us.”
Do you think there’s a way to make sure that there is an upper limit to superintelligence?
I think there is a way to make sure there is responsible development and usage of technology internationally.
At a government level, a treaty? Or just companies agreeing to behave in a certain way?
The field is so nascent that we don’t yet have international treaties, we don’t yet have that level of global consent. I think we have global awareness.
I do want to say that we shouldn’t [overthink] the negative consequences of AI. This technology is powerful. It might have negative consequences; it also has a ton of benevolent applications for humanity. We need to look at this holistically.
I know you talk to politicians a lot. You’ve done that in the US, in the UK, in France and elsewhere. What’s the most common question they ask that you find frustrating?
I wouldn’t use the word frustrating, I would use the word concerned, because I think our public discourse on AI needs to move on from the question of, What do we do when [our] machine overlord is here? 18
18 Li is too polite to tell me she found my question on extinction frustrating, but I suspect she did. I’ve generally found such suggestions alarmist, but hearing someone as eminent as Hinton address them made me reconsider, and it’s easy to imagine Li fielding similar worries from government officials. She’s been advising California Governor Gavin Newsom on “workable guardrails” for AI after he vetoed a contentious AI safety bill, and she’s also spoken to multiple heads of state.
Another question I get asked a lot is from parents, worldwide: AI is coming, how do I advise my kids? What’s the future for my kids? Should they study computer science? Are they going to have jobs?
What do you say?
I say that AI is a very powerful technology and I’m a mother. The most important way we should empower our kids is as humans, with agency, with dignity. With the desire to learn and timeless values of humanity: Be an honest person, a hardworking person, creative, critically thinking.
Don’t worry about what they’ll study?
Worry is not the right word. Be totally informed. Understand that your children’s future is living in the world of AI technology. Depending on their interest, their passion, personality, circumstances, prepare them. Worry doesn’t solve the problem.
I’ve got another industry question, about the huge sums of money flowing into companies like yours. Whether this might be a bubble, like the dot-com bubble, and some of these companies are overvalued. 19
19 Concerns around the scale of AI spending have been growing, including the circular nature of some of the bigger deals and the US economy’s exposure to the sector. Earlier this month, Michael Burry (of Big Short fame) revealed bearish wagers against Nvidia — the world’s largest provider of AI chips — and Peter Thiel’s hedge fund sold its entire stake in Nvidia last quarter.
First of all, my company is still a startup.
When we’re talking about huge amounts of money, we really look at Big Tech. AI is still a nascent technology and there’s still a lot to be developed. The science is very hard. It takes a lot to make breakthroughs. This is why resourcing these efforts [is] important.
The other side of this is the market: Are we going to see the payoff? I do believe that the applications of AI are so massive — software engineering, creativity, healthcare, education, financial services — that we’re going to continue to see an expansion of the market. There are so many human needs in wellbeing and productivity that can be helped by AI as an assistant [or] collaborator. And that part, I do believe strongly, is an expanding market.
But what does it cost in terms of power, energy and therefore climate? There’s a prominent AI entrepreneur, Jerry Kaplan, who says that we could be heading for a new ecological disaster because of the amount of energy consumed by vast data centers.
This is an interesting question. In order to train large models, we are seeing more and more need for power, or energy, but nobody says these data centers must be powered by fossil fuels. Innovation on the energy side will be part of this.

An Amazon Web Services data center in Manassas, Virginia. Total power usage in the US is expected to climb 2.15% in 2026, spurred largely by a 5% spike from commercial users because of the expansion of data centers, according to a US Energy Department report. Photographer: Nathan Howard/Bloomberg
The amount of power they need is so enormous; it’s hard to see it coming from renewable energy alone.
Right now this is true. Countries that need to build these big data centers need to also examine energy policy and industry. This is an opportunity for us to invest in and develop more renewable energy.
You’re painting a very positive picture. You’ve been at the forefront of this and you see much more potential, so I understand where you’re coming from. But what worries you about your industry?
I’m not a tech utopian, nor am I a dystopian — I’m actually the boring middle. The boring middle wants to apply a much more pragmatic and scientific lens to this.
Of course, any tool in the hands of the wrong mindset would worry me. Since the dawn of human civilization: Fire was such a critical invention for our species, yet using fire to harm people is massively bad. So any wrong use of AI worries me. The wrong way to communicate with the public worries me, because I do feel there’s a lot of anxiety. 20
20 At Stanford’s Institute for Human-Centered AI, Li is surrounded by people who, like her, want to use AI for the public good. But I wonder if her vision of the future is too reliant on power being held by responsible actors, who combine commercial interests in AI with policy and academic work.
The one worry I have is our teachers. My own personal experience tells me they’re the backbone of our society. They are so critical for educating our future generations. Are we having the right communication with them? Are we bringing them along? Are our teachers using AI tools to superpower their profession, or helping our children to use AI?
This is the Bloomberg Weekend Interview, so we’re always interested in people’s lives as well as their work. The life you’re living today is so different from the way that you grew up, working in your parents’ dry cleaning shop. Are you conscious of the power that you have as a leader in this industry?
[Li laughs.] I still do a lot of laundry at home.
I am conscious of my responsibility. I’m one of the people who brought this technology to our world. I’m so privileged to be working at one of the best universities in the world, educating tomorrow’s leaders, doing cutting-edge research.
I’m conscious of being an entrepreneur and a CEO of one of the most exciting startups in the gen AI world. So everything I do has a consequence, and that’s a responsibility I shoulder.
And I take that very seriously because, I keep telling people: In the age of AI, the agency should be within humans. The agency is not the machines’, it’s ours.
My agency is to create exciting technology and to use it responsibly.
And the humans in your life, your children, what do you not let them do with AI, or with their devices or on the internet?
It’s the timeless advice: Don’t do stupid things with your tools.
You have to think about why you’re using the tool and how you’re using it. It could be as simple as, Don’t be lazy just because there is AI. If you want to understand math, maybe large language models can give you the answer, but that is not the way to learn. Ask the right questions.
The other side is, Don’t use it to do bad things. For example, integrity of information: fake images, fake voices, fake texts. These are issues of AI as well as our social media-driven communication in society.
You’re calling for old-fashioned values, amid this world of new developments we couldn’t have imagined even three years ago.
You can call it old-fashioned, you can call it timeless. As an educator, as a mom, I think there are some human values that are timeless and we need to recognize that.
Mishal Husain is Editor at Large for Bloomberg Weekend.