CEIBS 25th Anniversary

1994 - 2019

Speeches from the CEIBS Insights 2019 US Forum

Empowered, not Overpowered by AI

Prof. Max Tegmark,
Professor of Physics, MIT; President, Future of Life Institute

“It is really a great honour to be here today and talk to you about Artificial Intelligence and my positive vision for how it can help us all.
The basic message I want to convey is partly that this is an incredibly powerful technology that has a host of opportunities throughout all industries, including all the other things we’re going to hear about today.

And the second message is that Artificial Intelligence is not going to stop there, it's going to continue and ultimately become the most powerful technology ever, which means that it's ultimately going to become the best thing to ever happen to humanity or the worst. Either America and China and everybody else are going to win together or we're all going to lose together.

It's a great honour for me to be speaking to this Chinese-American audience today because in the particular moment in which we live now, I think it's more valuable than ever, as we just heard said, to strengthen the collaboration at the lower levels, between companies, between universities, between researchers and so on. This is not only valuable in itself for all involved but will also hopefully help ultimately improve things at the political level.

I've been encouraged by the organizers to think big about Artificial Intelligence.

So let's look more closely at our relationship with technology and how it can benefit us. My friend, Jaan Tallinn, co-founder of Skype, likes to point out that there's a really powerful metaphor between rocketry on one hand and all of technology on the other. Namely, it's not enough to make our technology powerful. If we want to be really ambitious, we also have to figure out how to steer our technology and where we want to go with it.

So let us talk about all three for artificial intelligence, the power of artificial intelligence, steering or controlling artificial intelligence, and the destination — what we are trying to do with it.

I define intelligence simply as the ability to accomplish goals; the more complex the goals, the more intelligent. I give such an inclusive definition because I want to include both biological and artificial intelligence.

Intelligence is all about information processing and it doesn't matter whether the information is processed by carbon atoms in neurons and brains or by silicon atoms in today's computers or something else.

It's this idea that intelligence is all about information processing, not about the implementation details, that has transformed our hardware. For example, memory has become a hundred billion times cheaper in recent decades.

It’s the same with computing, which is ten to the power of fifteen times cheaper.

And it's not just this hardware progress that has driven AI; there has also been a real revolution on the software side. We've gone from old-fashioned AI, where you basically have to programme in the intelligence yourself by knowing how to do things, to machine learning, where the machine can become much better than its creators by learning from big data sets.
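
To make that contrast concrete, here is a minimal sketch in Python of the two approaches described above. The spam-filter framing, the toy messages, and the scikit-learn pipeline are illustrative assumptions, not anything from the talk itself.

```python
# A minimal sketch of the shift described above: hand-coded rules versus learning
# from data. The spam-filter framing and scikit-learn pipeline are assumptions
# chosen for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# "Good old-fashioned AI": the programmer writes the intelligence in by hand.
def rule_based_spam_filter(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# Machine learning: the program gets better by generalizing from labelled examples.
train_messages = [
    "You are a winner, claim your free money now",
    "Free money waiting for you, click here",
    "Lunch meeting moved to 1pm tomorrow",
    "Here are the slides from today's forum",
]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_messages, train_labels)

print(rule_based_spam_filter("Claim your free money"))        # True, via the hand-coded rule
print(model.predict(["Claim your prize, you are a winner"]))  # label learned from the examples
```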

It's really amazing how the power of artificial intelligence has grown recently. Not long ago we didn't have robots that could walk. Now, they can do back flips. We have self-flying rockets that can land themselves with artificial intelligence. Not long ago, AI could not do face recognition. Now, AI can not only do that but it can simulate fake faces and it can actually have your face saying things that you never said.

Not long ago, AI could certainly not save any lives. Now, with autonomous vehicles, we're not far from being able to save a large fraction of the more than a million lives that are lost each year on our planet's roads. This rapid progress, of course, raises the question: how far is it ultimately going to go? Is AI eventually going to go all the way and match human capabilities at all tasks? That is the definition of artificial general intelligence, or AGI. And people who say, "No, there will always be jobs that humans can do better than machines," are, by this definition, simply saying there will never be AGI. Is that true or not? Well, we're going to talk about it.

Building AGI has always been the original goal of the field of artificial intelligence research since its inception. So, people who say, “It’s never going to happen” are basically saying that AI as a field is going to fail in what it originally set out to do.

If you think that AGI, across-the-board human-level or better AI, sounds like crazy science fiction, I want to be clear that there's something else that sounds like even crazier science fiction: superintelligence. The idea here is very simple. If we actually get to the level of AGI, then since by definition AI can now do all jobs as well as or better than humans, that includes the job of an AI developer, which means Google, Baidu, Tencent, and everybody else can actually start replacing human researchers with AI.

And this opens the very controversial possibility that further AI progress beyond that point is no longer limited by the typical timescale of human R&D but can go much faster, with recursively self-improving AI that rapidly leaves human intelligence far, far behind, creating superintelligence.

Is this talk about AGI or even superintelligence, just crazy science fiction that we’re wasting our time on, that people only talk about when they’ve had too much to drink or if they are philosophers who don't know anything about technical research?

Well, on one hand, we do have serious AI researchers, like my former MIT colleague Rodney Brooks, who say this is not going to happen for hundreds of years. On the other hand, we have people like Demis Hassabis, the leader of Google DeepMind, which gave you AlphaFold, AlphaZero and all that, who thinks it absolutely is going to happen. His company is trying to make it happen. And he does know a thing or two about AI.

Recent surveys of AI researchers themselves show that most expect AGI to happen not tomorrow, but also not in a thousand years. [They give it a timeframe] of decades. So, quite likely, we'll have AGI or something close to it in our lifetime. Which raises the question: if this happens, then aside from all the business applications and so on, what do we even want it to mean to be human if machines can do everything better and cheaper than us?

The way I see it, we face a choice here. It's not a choice between the US and China, or between stopping technology and not stopping it. Technology is going to go on; it's a choice between being complacent and being ambitious. We can either be complacent and say, "Yeah, let's just build technology that can do everything better and cheaper than humans and not worry about the consequences. Because after all, if we make technology that makes all humans obsolete, what could possibly go wrong?" Or we can be ambitious. If we want to be ambitious, we should obviously envision a truly inspiring high-tech future that we all want to live in and then figure out how to steer toward that future.

So that brings us to the second part of my remarks, the steering. We’re making AI more powerful but how can we steer towards the future where AI is going to help humanity flourish all across the globe?

I'm optimistic that we can create a truly inspiring high-tech future IF we win the wisdom race, by which I mean the race between the growing power of the technology and the growing wisdom with which we manage it. The challenge is that to win the wisdom race with artificial intelligence, we are going to have to change strategies. Our old strategy for always keeping the wisdom one step ahead has been learning from mistakes.

First, we invented fire, screwed up a bunch of times, then invented the fire extinguisher.

Then we invented the car, screwed up a bunch of times, and then invented the seat belt, the traffic light, the air bag and laws against driving too fast.
That worked more or less okay because those were not very powerful technologies. But as science keeps progressing, we keep discovering ever more powerful technologies, and at some point power crosses the threshold where learning from mistakes goes from being a good strategy to being a really bad one. It's much better to be proactive rather than reactive and get things right the first time, which might be the only time we have, especially if we ever decide, as a species, to build artificial general intelligence.

[This means] on a small scale in our businesses, making sure that when we deploy AI systems, we have thought through all the ways they can get hacked, all the ways they can malfunction and harm our business. But also, on the more global scale, as a society, thinking through the things that could go wrong to make sure it goes right.

And this is really a global challenge. It's a challenge that, by its very nature, is so big that it cannot be handled by America alone or by China alone. And I actually feel honoured to be here today because I feel that China is uniquely positioned to play a leading role in steering AI towards a positive outcome.

Now there are two reasons for this. One reason is technical strength, the other is wisdom, in a sense I'll come back to. Technical strength, obviously: I've been to China three times in the last year and it's just so inspiring to see all the fantastic technological progress happening. Many of my best grad students at MIT are also Chinese. So there's real opportunity there for China.

But [there is] also the wisdom part. As one of the oldest surviving civilizations on earth, China has a much stronger tradition of long-term planning than we have here. If I start talking to people in America about the new long-term plan for high-speed rail, between New York and Boston, or wherever, they're just going to burst out into uncontrollable laughter. And if a politician here in America says that, people will again not take it seriously. In China, if there's a long-term plan to develop the high-speed rail, it happens. This is something that [shows] wisdom and willingness and ability to actually think long term. I feel it’s something very valuable that China can bring to the global conversation because it’s exactly long-term planning that we need if we’re talking about getting things right that are going to happen in decades.

And for that reason, we were very happy that at the last conference we organized with the Future of Life Institute, where we regularly bring together AI leaders and tech leaders to discuss not just how to make AI powerful but how to make sure it's beneficial, we actually had a record number of Chinese participants. So, what do we need to do for this wisdom? Well, in short, we need to envision a really positive future and all the great things we want to do with AI, and then we need to draw a very clear red line between the acceptable uses, which are most of them, and the unacceptable ones.

This is not the first time in the history of technology [that we have had to draw such a line]. We can learn a lot from our friends in biology on this; they really have led the way.
What's happening today is we’re obviously putting artificial intelligence in charge of ever more decisions and physical infrastructure that’s actually affecting people’s lives, affecting the survival of companies. And that means we have to up our game.

If we keep the same pathetically low bar for what we accept in terms of the reliability and robustness of these systems, then all this shiny new AI technology that you invest in and deploy can malfunction and harm you or ruin your company, or it can be hacked and turned against you. So, we have to invest much more not just in making AI powerful, but in actually making it safe and trustworthy. And that becomes ever more important the more powerful AI becomes. The real risk with human-level AI and beyond is not what silly Hollywood movies try to make us worry about, like Terminator scenarios where AI somehow turns evil. Rather, the real risk is that the AI simply becomes very, very competent and goes ahead and accomplishes some goal that was not aligned with your goal.

The Knight Capital algorithm wasn't evil, the Boeing 737 Max software wasn't evil; it just had the wrong goal put into it. So that means that since artificial general intelligence is by definition smarter than us, if we actually build it in the future, then to avoid putting our species [at risk of extinction] we have to make sure we build machines that can understand our goals, adopt our goals and actually retain our goals, just the way we do with our children when we raise them and make sure they have good goals.

We also have to make sure that these systems can't fall into the hands of some terrorist or someone else who would use them to have their goals implemented illegally on a truly massive scale. This technical, nerdy issue of making AI really robust and beneficial, so that it will actually do what we want it to do, is very much the focus of my lab at MIT. I think there's just so much fantastic technical work to be done there.

Now I want to end on a high note. I started by talking about how AI is getting ever more powerful and about all these business opportunities, but to make sure that we create a world where humanity really flourishes, we also have to draw a line between acceptable uses and misuses.

The Sustainable Development Goals are goals that people all across the planet agree we want to accomplish. These are challenging goals. I recently wrote a paper showing that artificial intelligence can help with pretty much all of them and create a planet where we all win together.

And I want to end the way I started, by saying I think it's just so exciting to be talking to an audience of Chinese and American entrepreneurs mixed together. We are no longer living in the Stone Age, when we as human beings had no impact; that has completely changed. Science and technology have given us a deeper understanding of how nature works, and we have figured out how to use that understanding to build the technology thanks to which we live healthier and happier lives all around the world. And now we are on the cusp of the most powerful technology ever: artificial intelligence that can actually amplify our own intelligence and make us even better at solving all these great problems.

This is an amazing upside. If we draw the right lines, we'll all win here.

We can attain all the Sustainable Development Goals and go far beyond them. How do we do it? By acknowledging all the great upside, making sure that people understand all the opportunities and get excited about them, and also drawing a line to make sure we don't do the bad things. That's how we create a future where we feel that AI doesn't overpower us, but empowers us.”


Building an Effective Supply Chain

Prof. Xiande Zhao,
CEIBS Professor of Operations and Supply Chain Management,
JD.COM Chair in Operations and Supply Chain Management,
Director of CEIBS-GLP Centre of Innovations in Supply Chains and Services

“My background is in supply chain management and in most of my academic career, I have been studying manufacturing companies. But, in recent years, there has been a lot of innovation in e-commerce, in new retailing, and I am attracted to this field.

At CEIBS, I'm offering an elective called New Retail Supply Chain and Service Innovation. And if you look at the growth of e-commerce, it is really growing very fast. E-commerce as a percentage of total retail sales in China has increased from 2.1% in 2013 to 23.6% [in 2017]. If you look at the corresponding percentage in the U.S., it's actually much lower. And if you look at the growth of [China's] retail sector, and at both the online and offline players, it is obvious that the online players are growing much faster.

The two big giants, Tmall.com and JD.com, now occupy a very high percentage of market share in the retail sector, while the share of offline retailers such as Dashang, Suning, and Gome is declining. In order to really compete, the offline retailers are trying very hard to work in the online space and to combine the online e-commerce channel with offline retailing. They are also working very, very hard to develop the omnichannel, or all-channel, supply chain.

In China, there are a lot of different forms of e-commerce. One of the fastest growing forms is cross-border e-commerce. The demand for buying foreign goods has increased at about 34% per year over the past five years. And the prediction is that e-commerce will quadruple by 2022, and the percentage of cross-border e-commerce within total e-commerce will double by 2022. So that means there are a lot of opportunities for foreign companies: things that you cannot sell conveniently through the normal channel, you can sell through cross-border e-commerce. There are a lot of good opportunities.

China has become the world's largest cross-border e-commerce consumer market. If you want to really be in the market in China, that's a very good way to do it, and you can use e-commerce and work with your partner in or outside China to capture the market.

But, at the same time, you also have to realize that consumer needs, requirements, and preferences are changing very, very rapidly. Now, consumers really want high-quality products. At the same time, they also want an excellent service experience. And they want personalized services: you need to design products and services according to their individualized, personalized needs, which makes it more difficult.

And then, at the same time, they are buying more and more healthy, high-quality products. They also are willing to pay for convenience. So, the good news is we have more and more consumers who are willing to pay more. The bad news is they are becoming very, very picky. So, you have to really work hard and try to understand the personalized needs of different types of customers and try to effectively meet their needs. Then, you will be able to win in the marketplace. If you are not doing well in this area, you will most likely lose market share. And, more importantly, if you do nothing, you will go bankrupt.

With such a tremendous change in customer demand, people are creating new ways of doing business, and now the [buzz] word is ‘new retail’. [Some refer to it as] unbounded retail, and in terms of the concept of unbounded retail, there are really three levels of unboundedness.

One is, you are going to serve a variety of different types of customers. Even the same type of customers have different personalized needs at different times, when they are in different places. When you try to satisfy multiple types of customers, that makes the change more dramatic and makes the need for innovation greater.

To meet the diversified needs of different types of customers, you also need to combine the online and offline channels. Many companies do not have just one online or one offline channel; they may have multiple channels. But the idea is to try to integrate all those different channels and design them together to improve productivity and quality so that the customer, in the end, will have a better experience. In addition to the unbounded customer and the unbounded channel, there is a more important unbounded supply chain.

Basically, if you really want to satisfy these customers and create value for them, you will have to be able to integrate resources and capabilities from multiple organizations. Each of them is playing a different role. Then you need to be able to use technology, you need to be able to design the new process, the new business model, so that you will be able to break down the boundaries between the organizations. And that's what we call supply chain integration: the capability to make multiple organizations work together in a seamless way so that they will be able to use their capability and their resources to do different things in the supply chain. In the end, you will have the capability to be able to really serve all of these different customers' needs. And you will be able to do that fast, accurately, and with lower cost.

Under such an environment, we need new supply chain capabilities. And I describe this new supply chain as the PDA supply chain. P stands for pull: rather than running the traditional push type of supply chain, you need a customer-centric, pull type of supply chain. [This means] you try to understand what different types of customers want at different times and in different places, and you use their needs and wants to drive supply chain activities.

But in order to be able to link the different parts of the supply chain together, you need to have digital technology, so we have the digital supply chain. Once you link these processes together, you can do that faster, more accurately, and then you can have lower costs. At the same time, you will be able to accumulate a lot of data. And using the data, you can further optimize supply chain decisions and activities.

And then, the third word is what I call agile supply chain. By that, we mean you will be able to configure your supply chain using resources and capabilities from multiple organizations. And when you have the PDA capability, then, most likely, you will be able to satisfy multiple groups of customers’ needs well. But it's not easy to build this supply chain capability.”


5-Step Guide to Optimizing AI

Prof. Kristian J. Hammond,
Bill and Cathy Osborn Professor of Computer Science, Director of the CS Plus X Initiative, and Director of the Master of Science in Artificial Intelligence Program, McCormick School of Engineering and Applied Science, Northwestern University

“I have spent my entire life building intelligent machines, and I absolutely, fundamentally love the notion of machine intelligence. Even as a child, I thought this was the greatest notion in the world.  Even when everybody was convinced that they were going to kill us. In fact, I was always stunned by, [the plotline] in the old movie 2001, where HAL, because he's got a mission, wipes out everybody on his ship, in order to succeed in the mission.

I always thought he got a bad rap for doing that. Because the only problem was that they sent him out into space and they gave him a mission, and it didn't occur to anybody to say, "By the way, not only do we want you to succeed, but don't kill everybody." All you’ve got to do is explain that to the machine, and everything will be fine.

So, for me, the study of intelligence is absolutely fundamental, and it's one of three things that we can study in the world. The first is the nature of physics, the physical world: the nuts and bolts of atoms and molecules and chemistry, and how the physical world behaves.

The second is, what is the nature of the biological world? That is, how does life work sitting on top of a substrate in the physical world? And then the third is, given that we've got the physical world and we've got life, how does intelligence work? What is the nature of intelligence?

There are two ways you can go about looking at the nature of intelligence. One is you can study human beings. And I've spent a fair amount of my time doing that. I'm always stunned by how miraculously good we are at what we do, given how incompetent we are.  I think it's astounding.

The other thing that you can do is think about it from the point of view of machine intelligence. And there, you're building something, you're creating something new, and you have to think about intelligence from an engineering point of view.

I start off with a very straightforward definition of machine intelligence. This definition is from 1956, put together by a fellow by the name of John McCarthy, who looked at machine intelligence, and thought, "Okay, well, we're going to define it as when a system does something, that if a human being did it, we would consider that to be intelligent."

Human beings do all sorts of things. We do math, we do physics, we think about planning, we understand the world, and we fall down. [The question was] which of those [things are we] going to call intelligent? And for the ones we call intelligent, when a machine does them, we will call that machine intelligence. The [next question] is, what makes us so smart? And this is kind of important. The thing that makes us smart is that we can make decisions.

We understand what's happening around us. We can understand language, which is absolutely borderline miraculous. We draw conclusions about the world. We organize and recognize situations. We explain the past, we recognize the present and we predict the future. Those three elements are amazingly crucial to what it means to be intelligent.

When we look at all of this though, we have to realize that everything that we do is built on top of the fact that we can learn.

Everything that we do is built on top of learning. And what's interesting is that when you think about the machine, [there is] a parallel there as well: this link between learning and cognition, between learning and reasoning, holds for the machine too. And the question is, why today? Why do we now have this? Because 10 years ago, we didn't have AI that worked at all. None of it worked.

Why do we have it today? [Because] we finally have the data that we can use for the machine to learn. This is an important point. There's a lot of discussion about data, and there's going to be more discussion about data. But this is an important point.

Most of the technologies that you see today around AI have existed for decades. But they didn't work. And then we had the data, and it worked. So we're in a new world that is associated with this data.

Now, the issue really is why this fascination with data? Why are we so transfixed with the idea of data and AI? We’re transfixed because we've got a lot of it.

Every single day we actually collect the equivalent of 500 books of data for every man, woman, and child on the face of the planet. And we are only just beginning. Some of that is really, really valuable.

But then you take a look at how big data is being used, and the gap between the data we collect and the data we use, and we realize that big data has actually failed. And oddly enough, the data itself became the problem.

Fortunately, the problem, the data, also empowers the solution. And the solution is this: a collection of techniques. If there's one takeaway [from my speech] that I can ask you to guarantee me, it is this: when you think about AI, stop thinking of it as a thing. It's not a thing. It's dozens of things. It's a lot of different things.

These range from taking really well-structured data and learning from that data, which is statistical machine learning, to taking data that looks like pixels and sound streams and a whole bunch of sensor readings and turning it into some characterization of what's really happening in the world, which is deep learning. Then there's evidence-based reasoning. That's like IBM's Watson, which, given a question, will search for little snippets of evidence to pull together and give you an answer. These are all very different technologies. And then there's the world of recommendation. That's all about transactional data being pulled together so that when somebody looks at me and my transactions, they'll see I'm similar to somebody else and their transactions, and I can get recommendations based on that. Again, all data-driven.
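
As a rough illustration of the recommendation idea described here, the sketch below compares one user's purchase history with other users' and suggests what the most similar user bought that the first user has not. The tiny purchase matrix, the item names, and the cosine-similarity measure are assumptions made purely for the example.

```python
# A minimal sketch of similarity-based recommendation: my transactions look like
# someone else's, so their other purchases become my recommendations.
# The purchase matrix and item names are invented for illustration.
import numpy as np

items = ["laptop", "phone", "headphones", "camera", "tripod"]
# Rows = users, columns = items; 1 means the user bought that item.
purchases = np.array([
    [1, 1, 1, 0, 0],   # user 0 (me)
    [1, 1, 1, 1, 1],   # user 1
    [0, 0, 0, 1, 1],   # user 2
])

def cosine_similarity(a, b):
    # How alike two transaction histories are, between 0 and 1.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

me = purchases[0]
# Find the user whose transaction history looks most like mine.
similarities = [cosine_similarity(me, other) for other in purchases[1:]]
most_similar = purchases[1:][int(np.argmax(similarities))]

# Recommend what they bought that I have not.
recommendations = [items[i] for i in range(len(items)) if most_similar[i] and not me[i]]
print(recommendations)  # ['camera', 'tripod']
```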

The world of AI is rife with things we can do. But the question is where to start, and I'm going to give you a quick five-step approach to this. The starting point, for me, is never technology. The starting point is: what's the task? Take a look at the tasks that you have in your organizations. Look at those tasks and ask yourself, "Is this task driven by data? Do I know how we do this? Can I actually break it down into components? And do I care about automating it, because I do it a lot?" You probably don't want to automate [an annual task], because it takes time to automate things. We probably want to automate the tasks we do every day. A task you do every day, 1,000 times, you really want to automate.

And the way you do that is you take five basic steps: tooling, tracking, training, testing, and transformation. What this means is, you look at your task and you say, "I'm just going to build a tool to help somebody do this." But the weird thing is that when you build a tool to help somebody do something, you have to understand it. You have to understand the things they use to make their decisions, what the space of possible decisions is, and whether a given decision is right or wrong. And once you've built the tool, you've made their life better. But you've also done yourself a favour: now that you have a tool, and someone is doing something with that tool, and you can see the data coming in and the decisions going out, that means you can track. You can understand exactly what the conditions are that are associated with every decision they make.

And once you can track them and you gather that data, you can train the system. And the training doesn't have to be learning. The training could be you look at it and you figure it out. But in general, you can use this data to train a system to make decisions that are like a human decision.

And a really wonderful moment is that once you've done that, you can test it. How do you test it? Well, you hook it up to the same system, you let people keep making their decisions, and you have your device try to make the same decisions. You see when it's right, when it's wrong, and you tune it over time. And when you're happy with where it is, then you transform the entire process. That means you take the life of somebody doing a rote task over and over again, a repetitious task over and over again, and you narrow the scope of what they have to look at. It's no longer decision making for them; it's verification. The machine and the human being are in partnership.
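
A minimal sketch of how the track, train, and test steps could look in practice is shown below. The loan-approval framing, the two features, and the scikit-learn model are invented for illustration; the point is only that logged human decisions become training data, and agreement with held-out human decisions becomes the test.

```python
# A minimal sketch of the track -> train -> test loop: log the inputs and decisions
# humans make with a tool, fit a model to those logs, then score the model by
# comparing its calls against the humans' on held-out cases.
# The loan-approval task and features are assumptions for illustration.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Tracking: each row is (inputs the person saw, the decision they made).
tracked_inputs = [
    [25_000, 1], [80_000, 0], [40_000, 1], [120_000, 0],
    [30_000, 1], [95_000, 0], [55_000, 0], [20_000, 1],
]  # [annual income, has prior default]
tracked_decisions = [0, 1, 0, 1, 0, 1, 1, 0]  # 1 = approved, 0 = declined

# Training: learn to make decisions that look like the human ones.
X_train, X_test, y_train, y_test = train_test_split(
    tracked_inputs, tracked_decisions, test_size=0.25, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Testing: run the model alongside the people and measure agreement before
# transforming the process into machine-proposes, human-verifies.
agreement = accuracy_score(y_test, model.predict(X_test))
print(f"Agreement with human decisions: {agreement:.0%}")
```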

This notion, though, is part of an overall approach to AI: linking the decision making the machine does, the things the machine does, to the human being, and never letting that link go, because it's the partnership that's going to matter. When we look at what's going on today in AI, a ton of it is driven by analytics and data. But the reality is, when we open up those analytics and that data, we're going to end up with machine learning at the front of everything.

But think about machine learning, and learning in general: you don't solve problems with machine learning itself. You solve problems with the knowledge and the rules that are generated as a product of machine learning. And what you want to do is think of these rules and this knowledge as the things we incorporate into a larger picture. That larger picture is this notion that we have systems bringing together data; then we have learning techniques that can turn this data into knowledge; and then with that knowledge, we can actually drive decision making and reasoning and planning, problem solving and prediction. But it's not the learning alone; it's the learning associated with other things. And we are now in the midst of what is a fantastic future.

We started off with data and analytics; we are in the midst of, and going beyond, the pure machine learning phase; and we're looking at what it means to use all this information, all this knowledge, in reasoning. And once we actually get our arms around the reasoning, we're going to think about interaction, and by interaction I really just mean partnership, because that's what we want. We don't want AI sitting by itself, because it will either be a thing we trust blindly, in which case we will be subjugated to it, or a thing we don't trust at all, in which case it will be useless to us. Everything in this world is interesting, but a world of partnership is incredibly powerful, because it means the best of the machine and the best of human beings can be brought together, and that will, by its nature, be better than anything either side can do by itself.”