
Today, AI is the biggest buzzword, inspiring all sorts of emotions in people, from wonder and fascination to awe and worry. One thing is for sure: AI is going to transform our lives and our future. Few people are as well placed as Kai-Fu Lee to narrate the story of AI, thanks to the diverse roles he has held in the field, from AI researcher to technology executive to, now, venture-capital investor. In his book “AI Superpowers,” he discusses the most important and ever-relevant aspects of AI and, drawing on his experience of working in both the United States and China, dedicates a significant chunk to how China could become the superpower in AI. Inspired by his own life lessons and self-reflections, he also envisions the future of AI on the global stage and its impact on humanity.
From Age of Discovery to Age of Implementation
AI has arrived, making a seemingly sudden leap from sci-fi movies to real daily-life applications. That sudden rise is all down to AI based on artificial neural networks, the result of decades of work. But remember, artificial intelligence does not require neural networks or even machine learning: as long as an artificial entity can make rational decisions within its environment, it fulfils the definition of AI. For a long time, AI was built on rule-based approaches. Today, however, AI is mostly based on the neural-network approach, which is inspired by biological neurons. Just as we humans learn from the examples around us, the idea is to make machines learn from examples, finding patterns and correlations that are invisible or even seemingly irrelevant to humans (like using your phone’s battery charge to predict your credit score, a correlation that means nothing to us).
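To make that idea a little more concrete, here is a minimal, illustrative sketch (not from the book) of a single artificial neuron learning a pattern purely from labelled examples; the toy data and the rule it discovers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 examples with 3 features; the hidden rule the neuron must
# discover from examples alone is that the label depends on the first feature.
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(3)   # learnable parameters (weights)
b = 0.0           # learnable bias
lr = 0.1          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly nudge the parameters so predictions match labels.
for step in range(1000):
    p = sigmoid(X @ w + b)            # current predictions
    grad_w = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. the weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"learned weights: {np.round(w, 2)}, accuracy: {accuracy:.2f}")
```

The point is not the algebra but the shape of the process: the “knowledge” ends up in the learned parameters, and more labelled examples generally mean better parameters, which is exactly why data becomes the decisive resource below.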
Seen this way, the sudden rise of AI is really the rise of artificial neural networks, and their resurrection is down to three things: an abundance of data, high computing power, and better algorithms for learning parameters (thanks to Geoffrey Hinton and friends, with steady innovation since). With this, we have also transitioned from the Age of Discovery in AI (research on algorithms, which takes decades) to the Age of Implementation, which improves and innovates every day. To create top-notch algorithms, you need top-notch scientists; to use top-notch algorithms, you don’t. Moreover, beyond a point, the returns from ever-better computing power and algorithms level off, since genuine breakthroughs take decades. Ultimately, the deciding factor in AI is going to be data, which makes this equally a transition from the Age of Expertise to the Age of Data. This is where Kai-Fu Lee believes China could have an edge, allowing it not only to compete with the US, the reigning AI superpower, but even to surpass it.
Why China Could Become the AI Superpower
While AI needs data, high computing power, and top-notch algorithms, the AI revolution needs four things: experts and scientists, an environment for entrepreneurship, data, and government support. The US has had an edge in experts (and still does, by a long shot). But as we enter the Implementation Phase of AI, quantity will matter more than quality, and tinkerer engineers will drive more change than expert scientists, because groundbreaking discoveries take decades and it is implementation that matters now. The author points to one cultural difference between the US and China that makes China a breeding ground for this kind of innovation. In the US, copying is stigmatised, and creating something new is considered the greatest achievement. In China, earning money, not creating something new, is considered the greatest achievement. Hence, China produces more innovation, while the US produces more invention. This innovation-over-invention mentality also means far more competition: proper cutthroat battles between many companies working on the same idea. What ultimately succeeds is whoever innovates the most, making the Chinese ecosystem a survival of the fittest.
But the biggest weapon China has is the oil of the AI world: data. Thanks to its sheer population and the revolution in its online-to-offline ecosystem, be it the use of the internet for everything from food delivery to ride-sharing, or paying for almost everything by cellphone, China is vastly ahead of the US in accumulating data. China’s relatively late economic rise has also unwittingly helped this ecosystem and centralised data collection, as most of the population leapfrogged straight to mobile payments, unlike the US, where card-based payments still prevail.
Lastly, even in terms of the government’s role, the Chinese structure could prove more suitable for advancing AI. Because of its differing governmental structure and culture, the Chinese state can collect data in public via cameras and surveillance, and much of the public sees this as a necessary security measure. In the US, the idea of personal freedom is held high, making open, mass-level data collection difficult. Ironically, the authoritarian nature of its governance also means China can take risks and roll out breakthrough AI technologies like self-driving cars at a national scale, whereas the US government cannot, given the risk of failure and public outrage in an already politically polarised country. The same goes for investment: China can pour money into AI without the public outcry and scrutiny such spending would attract in the US. Thus, China has a unique edge in government support, both in data collection and in AI implementation (which is pivotal, since AI improves through feedback).
But the applications of AI are broad, so it’s important to look at the four waves of AI and compare the two superpowers in each.
Four Waves of AI:
- Internet Wave: companies built entirely on the internet, like YouTube, Netflix, Amazon, Baidu, and Alibaba.
- Business Wave: finance companies and other large enterprises that already sit on vast stores of data.
- Perception Wave: AI that can hear and see much like us. Face recognition, speech recognition, paying with your face, ordering with gestures (from O2O to OMO; online-to-offline gives way to online-merge-offline).
- Autonomous Wave: we have had automation for more than a century (most industrial machines), but autonomous AI, machines that perceive and act in the physical world, will truly change the world.
In the Business Wave, US companies are more technical and better equipped with data, and hence far ahead. In the Perception Wave, China is far ahead, with many implementations already in public spaces. In the Internet Wave, the US is currently ahead of China, but China is catching up fast, again owing to its huge population. So it is the Autonomous Wave that will decide the winner. While the US has better technology and, more importantly, the high-level expertise required to create autonomous devices, it may well come down to policy in the end. This is where the authoritarian (as the West likes to see it) or techno-utilitarian (as China likes to see it) nature of China might swing the balance.
Utopia, Dystopia, and the Real Danger of AI
The AI transition from sci-fi imagination to real-world applications has sparked speculation about a futuristic AI world, creating two opposing camps. First is the Utopia camp, which revels in the positive possibilities AI brings: understanding more about physics, chemistry, and biology, which could help us solve many intractable problems and find cures for diseases; perhaps even understanding and then enhancing our consciousness, with the most extreme visions stretching to immortality and uploaded consciousness. Then there is the Dystopia camp, which sees the danger of AI not in some Terminator-esque takeover but in more realistic scenarios: for example, an AI created to reverse global warming ending up destroying humans in subtle, non-obvious ways, having found this to be the optimal path to its goal. The Dystopia camp sees the danger of AI mostly through this sort of failure, known as the value-alignment problem.
Kai-Fu Lee thinks we are far from both scenarios, at least for now. What we have today is AI that is good at narrow tasks, known as Artificial Narrow Intelligence (ANI). Artificial General Intelligence (AGI), AI that is flexible and excels across diverse tasks, still seems difficult to attain. As for Artificial Super-Intelligence (ASI), where AI creates AI and its intelligence dwarfs ours the way ours dwarfs that of insects, it seems a long way off. But this doesn’t mean we can relax and take a nap, because other dangers, both realistic and urgent, could arrive much sooner. They operate at the economic, political, social, and individual levels.
One of the biggest dangers of AI could be inequality. Since AI needs data to get started, big companies that already hold data have a head start in the race. With AI working its magic, these companies not only accumulate even more data but also wipe out less data-rich competitors. Thus, the very nature of AI gravitates towards monopoly. This could mean ever-rising inequality, both between countries and within them, with deep economic and political implications.
Then comes the often heated topic of jobs. Is AI going to take over our jobs, or is this just the Luddite paranoia that every major technology has faced in history?
AI and the Job Market
When weighing the impact of AI, it’s useful to compare it with three equally revolutionary technologies from history, often known as General Purpose Technologies, or GPTs: the steam engine, electricity, and the internet. It’s easy to see, even without digging deep, how all three transformed our lives for the better, prompting many to assume AI will do the same. This is where serious consideration is required: we should not confuse an increase in productivity with an increase in employment. The first two GPTs increased both, whereas the third mostly increased productivity. Judging by the current state of Artificial Narrow Intelligence, AI could do the same. This prompts the immediate question: which jobs are safe from AI, and which aren’t?
Jobs that can be easily optimised with data are at risk. It’s also important to consider one crucial feature of recent AI progress when analysing its impact on jobs: most breakthroughs have been in algorithms rather than robotics. So jobs requiring motor skills and dexterity seem safe, at least for the time being; it is far easier to build an intelligent stock-prediction algorithm than a robot with even a child’s motor skills. This has consequences across industries, with motor-skill-heavy work such as agriculture less vulnerable than service work, ironically leaving the white-collar service sector the more exposed. In many industries, the result will also be one person doing the work of ten, rather than the industry being wiped out.
But the biggest job-related impact will be at a deeper, individual level: what it means to be human and what it means to live. Our sense of self-worth and purpose in life has been heavily tied to what we do since prehistoric times. The culture we develop, the community we surround ourselves with, and our personality are all bound up with our work. Thus, the loss of identity and purpose is going to be the biggest challenge humanity faces.
Lesson from Cancer
Before discussing the other global implications of AI, Kai-Fu Lee shares his personal story and the realisations and self-reflections that have shaped his views on AI and humanity. At the age of 53, Kai-Fu Lee was diagnosed with cancer. This heartbreaking turn of events brought a deep, unsettling realisation: in his quest to create machines that think like humans, he had become a human who thinks like a machine. He had treated most of his life as an optimisation task, trying to quantify the pros and cons of choosing between an important work meeting and being at the hospital for the birth of his first child. In the end, he felt hollow despite all his achievements; what mattered most were the people around him and the love he received and shared (and regretted not sharing more of). His experiences at meditation retreats and consultations with spiritual leaders also led to the realisation that his sense of achievement and of doing something impactful was tied up with his ego. Seeing that humans differ from AI through love and compassion, not intelligence (which AI is already surpassing in many narrow fields), has inspired him to envision how humans could live happily with AI.
Blueprint for Humans Co-existing with AI
AI is here, and this time we cannot simply wave the fears away as the Luddite fallacy, for three reasons: the sheer efficiency, scale, and pace at which AI is developing. The usual “things will work out like always” argument stands on increasingly shaky ground. But instead of turning bleak and dystopian and merely hunting for ways to survive, we could be creative and proactive and thrive with AI more than ever.
People have already proposed solutions in the form of the three R’s: Retraining, Reducing, and Redistributing. Retraining people is not feasible, purely because of the pace at which AI is developing. Reducing (fewer workdays, shared shifts, and so on, to keep more people engaged) will not work in fields where entire industries could be disrupted. That leaves redistribution (such as Universal Basic Income, or UBI) as the only option. Some see this as an easy cure, with some even picturing an ideal, utopian world where people can finally follow their passions.
In reality, however, it could be little more than a painkiller. Remember, it is not easy for everyone to find, or even explore, a passion, so UBI fails to solve the problems of self-identity and self-purpose for many. Moreover, this utopian world is only possible if people receive significant money from UBI; in reality, it might cover little more than hand-to-mouth survival. We therefore need more innovative ways to create a human-AI symbiosis in which distinctly human roles are valued. Kai-Fu Lee imagines a future where the private sector and VCs also invest in human-centric jobs. He also believes the government has a massive role to play by creating new social contracts and policies that treat social-service work (caring for families, helping people in need, volunteering at social events) with respect, which today is sadly tied to salary. Above all, beyond tweaks to the economic model, he suggests a shift in our modern cultural value system, in which only wage-earning work is valued: a transition from prizing economically productive activities to prizing socially productive ones.
Global AI Story
At the end of the day, the story of AI is a global story, so we should not see its future as another race for superpower status. Instead of a zero-sum military-style race, it should be an opportunity for every country and every world citizen (not just two superpowers and a few elites) to find creative ways of living and thriving with AI. It is important to shift the cultural value system that treats humans as mere cogs who must have economic value: let machines act like machines and humans act like humans. Otherwise, the consequences run much deeper, touching fundamentals like what it means to be human. Remember, we want humans to be the authors of this AI story, not mere spectators.