TED Talk: “The Incredible Inventions of Intuitive AI” Speech Summary, Text, & Analysis

March 15, 2023

14 min read


The use of artificial intelligence (AI) has become increasingly integrated into daily life. In “The Incredible Inventions of Intuitive AI,” Maurice Conti discusses how AI is moving beyond a passive tool to become an active, generative partner in our work. If you are looking for a speech summary, text, and analysis of this TED Talk, we have provided one for you.

TED Talk: “The Incredible Inventions of Intuitive AI” Speech Summary

  • The speech discusses how much will change over the next 20 years in the way humans work, signaling a new era: the Augmented Age.
  • The Augmented Age is defined by cognitive augmentation and by robots and AI working together with humans.
  • Computers no longer just do what humans tell them – they are now generative and able to come up with their own solutions.
  • Examples given of how humans, computers and robots are working together to solve highly complex design problems.
  • Technology will help humans not only to imagine and design, but also to make and control things in the physical world.
  • A digital nervous system connecting us to the things that are made is necessary to learn from the real world experiences of these things.
  • Describes a vision of the future in which things are farmed, not fabricated; connected, not isolated; aggregated, not extracted; and autonomous, not obedient.

TED Talk: “The Incredible Inventions of Intuitive AI” Speech Text

Yoodli automatically generates the transcript of any speech you upload or record, regardless of length. We have included the text transcript of “The Incredible Inventions of Intuitive AI” below:

"How many of you are creatives, designers, engineers, entrepreneurs, artists, or maybe you just have a really big imagination. Show of hands. Hey, that’s most of you. I have some news for us creatives over the course of the next 20 years more will change around the way we do our work than has happened in the last 2000. In fact, I think we’re at the dawn of a new age in human history. Now there have been four major historical errors defined by the way we work.

The hunter-gatherer age lasted several million years, and then the agricultural age lasted several thousand years. The industrial age lasted a couple of centuries, and now the information age has lasted just a few decades. And today we’re on the cusp of our next great era as a species. Welcome to the augmented age. In this new era, your natural human capabilities are gonna be augmented by computational systems that help you think, robotic systems that help you make, and a digital nervous system that connects you to the world far beyond your natural senses. Let’s start with cognitive augmentation. How many of you are augmented cyborgs?

I would actually argue that we’re already augmented. Imagine you’re at a party and somebody asks you a question that you don’t know the answer to. If you have one of these, in a few seconds you can know the answer. But this is just a primitive beginning. Even Siri is just a passive tool.

In fact, for the last three and a half million years, the tools that we’ve had have been completely passive. They do exactly what we tell them and nothing more. Our very first tool only cut where we struck it. The chisel only carves where the artist points it. And even our most advanced tools do nothing without our explicit direction. In fact, to date, and this is something that frustrates me, we’ve always been limited by this need to manually push our wills into our tools, like, manually, literally using our hands, even with computers.

But I’m more like Scotty in Star Trek. I wanna have a conversation with a computer. All right, I wanna say, “Computer, let’s design a car.” And the computer shows me a car, and I say, “No, more fast-looking and less German.” And bang, the computer shows me an option. Now, that conversation might be a little ways off.

It’s actually probably less far off than many of us think, but right now, uh, we’re working on it. Tools are making this leap from being passive to being generative. Generative design tools use a computer and algorithms to synthesize geometry to come up with new designs all by themselves. All it needs are your goals and your constraints. I’ll give you an example. In the case of this aerial drone chassis, all you would need to do is tell it something like: it has four propellers, you want it to be as lightweight as possible, and you need it to be aerodynamically efficient.

And then what the computer does is it explores the entire solution space: every single possibility that solves and meets your criteria, millions of them. It takes big computers to do this, but it comes back to us with designs that we, by ourselves, never could have imagined. And the computer is coming up with this stuff all by itself. No one ever drew anything, and it started completely from scratch. And by the way, it’s no accident that the drone body looks just like the pelvis of a flying squirrel.

It’s because the algorithms are designed to work the same way that evolution does. Now, what’s exciting is we’re starting to see this technology out in the real world. We’ve been working with Airbus for a couple of years on this concept plane for the future. It’s a ways out still, but just recently we used a generative design AI to come up with this. This is a 3D-printed cabin partition that’s been designed by a computer. It’s stronger than the original yet half the weight, and it’ll be flying in the Airbus A320 later this year.

So computers can now generate; they can come up with their own solutions to our well-defined problems. But they’re not intuitive. They still have to start from scratch every single time, and that’s because they never learn. Unlike Maggie. Maggie’s actually smarter than our most advanced design tools.

What do I mean by that? If her owner picks up that leash, Maggie knows with a fair degree of certainty that it’s time to go for a walk. And how did she learn? Well, every time the owner picked up the leash, they went for a walk, and Maggie did three things: she had to pay attention, she had to remember what happened, and she had to retain and create a pattern in her mind. Interestingly, that’s exactly what computer scientists have been trying to get AIs to do for the last 60 or so years.

Back in 1952, they built this computer that could play tic-tac-toe. Big deal. Then, 45 years later, in 1997, Deep Blue beats Kasparov at chess. In 2011, Watson beats these two humans at Jeopardy, which is much harder for a computer to play than chess is. In fact, rather than working from predefined recipes, Watson had to use reasoning to overcome his human opponents. And then what?

A couple of weeks ago, DeepMind’s AlphaGo beats the world’s best human at Go, which is the most difficult game that we have. In fact, in Go, there are more possible moves than there are atoms in the universe. So in order to win, what AlphaGo had to do was develop intuition. And in fact, at some points, AlphaGo’s programmers didn’t understand why AlphaGo was doing what it was doing. And things are moving really fast. I mean, consider: in the space of a human lifetime, computers have gone from a child’s game to what’s recognized as a pinnacle of strategic thought. What’s basically happening is computers are going from being like Spock to being a lot more like Kirk, right? From pure logic to intuition.

Would you cross this bridge? Most of you are saying, “Oh, hell no.” And you arrived at that decision in a split second. You just sort of knew that that bridge was unsafe. And that’s exactly the kind of intuition that our deep learning systems are starting to develop right now. Very soon, you’ll literally be able to show something that you’ve made, that you’ve designed, to a computer, and it’ll look at it and say, “Hmm, sorry, homie, that’ll never work. You have to try again.” Or you could ask it if people are gonna like your next song or your next flavor of ice cream.

Or, much more importantly, you could work with a computer to solve a problem that we’ve never faced before. For instance, climate change. We’re not doing a very good job on our own. We could certainly use all the help we can get. That’s what I’m talking about: technology amplifying our cognitive abilities so we can imagine and design things that were simply out of our reach as plain old unaugmented humans.

So what about making all of this crazy new stuff that we’re gonna, uh, invent and, and design? I think the era of human augmentation is just as much about the physical world as it is about the virtual intellectual realm. So how will technology augment us in the physical world? Robotic systems? Okay.

There’s certainly a fear that robots are gonna take jobs away from humans, and that is true in certain sectors. But I’m much more interested in this idea that humans and robots working together are going to augment each other and start to inhabit a new space. This is our applied research lab in San Francisco, where one of our areas of focus is advanced robotics, specifically human-robot collaboration. And this is Bishop, one of our robots.

And as an experiment, we set it up to help a person working in construction doing repetitive tasks, tasks like cutting out holes for outlets or light switches in drywall. So Bishop’s human partner can tell him what to do in plain English and with simple gestures, kind of like talking to a dog, and then Bishop executes on those instructions with perfect precision. We’re using the human for what the human is good at: awareness, perception, and decision-making. And we’re using the robot for what it’s good at: precision and repetitiveness.

Here’s another cool project that Bishop worked on. The goal of this project, which we called the Hive, was to prototype the experience of humans, computers, and robots all working together to solve a highly complex design problem. The humans acted as labor: they cruised around the construction site and manipulated the bamboo, which, by the way, because it’s a non-isomorphic material, is super hard for robots to deal with. But then the robots did this fiber winding, which was almost impossible for a human to do. And then we had an AI that was controlling, um, everything. It was telling the humans what to do, it was telling the robots what to do, and it was keeping track of thousands of individual components. What’s interesting is, building this pavilion was simply not possible without humans, robots, and AI augmenting each other. Okay, I’ll share one more project.

This one’s a little bit crazy. We’re working with Amsterdam-based artist Joris Laarman and his team at MX3D to generatively design and robotically print the world’s first autonomously manufactured bridge. So Joris and an AI are designing this thing right now, as we speak, in Amsterdam. And when they’re done, we’re gonna hit “go,” and robots will start 3D printing in stainless steel, and then they’re gonna keep printing, without human intervention, until the bridge is finished.

So, as computers are gonna augment our ability to imagine and design new stuff, robotic systems are gonna help us build and make things that we’ve never been able to make before. But what about our ability to sense and control these things? What about a nervous system for the things that we make? Our nervous system, the human nervous system, tells us everything that’s going on around us, but the nervous system of the things we make is rudimentary at best. Uh, for instance, a car doesn’t tell the city’s public works department that it just hit a pothole at the corner of Broadway and Morrison, and a building doesn’t tell its designers whether or not the people inside like being there.

And the toy manufacturer doesn’t know if a toy is actually being played with, how and where, and whether or not it’s any fun. Look, I’m sure that the designers imagined this lifestyle for Barbie when they designed her, right? But what if it turns out that Barbie’s actually really lonely? If the designers had known what was really happening in the real world with their designs, the road, the building, and Barbie, they could have used that knowledge to create an experience that was better for the user. Now, what’s missing is a nervous system connecting us to all of the things that we design, make, and use.

What if all of you had that kind of information flowing to you from the things you create in the real world? With all of the stuff we make, we spend a tremendous amount of money and energy, in fact, last year, about, uh, $2 trillion, convincing people to buy the things that we’ve made. But if you had this connection to the things that you design and create after they’re out in the real world, after they’ve been sold or launched or whatever, we could actually change that, and go from making people want our stuff to just making stuff that people want in the first place. The good news is we’re working on digital nervous systems that connect us to the things we design.

Uh, we’re working on one project with a couple of guys down in, um, Los Angeles called the Bandito Brothers and their team. And one of the things these guys do is build insane cars that do absolutely insane things. These guys are crazy in the best way. And, uh, what we’re doing with them is, uh, taking a traditional race car chassis and giving it a nervous system.

So we instrumented it with dozens of sensors, put a world-class driver behind the wheel, took it out to the desert, and drove the hell out of it for a week. And the car’s nervous system captured everything that was happening to the car. We captured 4 billion data points, all of the forces that it was subjected to. And then we did something crazy. We took all of that data and plugged it into a generative design AI we call Dreamcatcher. So what do you get when you give a design tool a nervous system and you ask it to build you the ultimate car chassis? You get this. This is something that a human could never have designed. Except a human did design this, but it was a human that was augmented by a generative design AI, a digital nervous system, and robots that can actually fabricate something like this. So if this is the future, the augmented age, and we’re gonna be augmented cognitively, physically, and perceptually, what will that look like?

What’s this wonderland gonna be like? I think we’re gonna see a world where we’re moving from things that are fabricated to things that are farmed. We’re moving from things that are constructed to that which is grown. We’re gonna move from being isolated to being connected, and we’ll move away from extraction to embrace aggregation.

I also think we’ll shift from craving obedience from our things to valuing autonomy. Thanks to our augmented capabilities, our world is gonna change dramatically. We’re gonna have a world with more variety, more connectedness, more dynamism, more complexity, more adaptability and, of course, more beauty. The shape of things to come will be unlike anything we’ve ever seen before. Why? Because what will be shaping those things is this new partnership between technology, nature, and humanity. That, to me, is a future well worth looking forward to. Thank you all so much."
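
Aside: the generative design workflow Conti describes, supplying goals and constraints and letting an algorithm explore the solution space, can be illustrated with a minimal sketch. This is not the algorithm behind any tool mentioned in the talk; the candidate parameters, the evaluation formulas, and the constraint threshold below are invented purely for illustration.

    import random

    # Toy "generative design" loop: randomly sample candidate drone-chassis
    # parameters, discard candidates that violate a constraint, and keep the
    # lightest feasible design found. Real tools synthesize actual geometry
    # with far more sophisticated solvers; this only shows the shape of the
    # search: propose, evaluate against goals, reject what breaks constraints.

    def generate_candidate():
        return {
            "arm_length_m": random.uniform(0.10, 0.30),
            "wall_thickness_m": random.uniform(0.002, 0.010),
            "strut_count": random.choice([3, 4, 6]),
        }

    def evaluate(c):
        """Return (mass_kg, drag_coefficient) for a candidate (made-up model)."""
        mass = c["strut_count"] * c["arm_length_m"] * c["wall_thickness_m"] * 800
        drag = 0.30 + 0.05 * c["strut_count"] - 10 * c["wall_thickness_m"]
        return mass, drag

    best = None
    for _ in range(100_000):               # explore the solution space
        candidate = generate_candidate()
        mass, drag = evaluate(candidate)
        if drag > 0.45:                    # constraint: stay aerodynamically efficient
            continue
        if best is None or mass < best[0]:
            best = (mass, candidate)       # goal: as lightweight as possible

    print("lightest feasible design:", best)

In practice, generative design tools optimize over the geometry itself rather than a handful of scalar parameters, but the loop has the same shape.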

TED Talk: “The Incredible Inventions of Intuitive AI” Speech Analysis

Yoodli simplifies the process of speech analysis for any user. The AI-powered speech platform provides feedback on the categories of Word Choice and Delivery. Through automatic rankings, you can improve your public speaking skills with ease.

Word Choice

In the Word Choice category, Maurice Conti’s top keywords included “things” and “human”, followed by “design” and “computer”.

Despite the strong use of keywords, Conti used 55 weak words, including “so” 13 times. In addition to detecting filler words, Yoodli also provided suggestions on language usage: the AI speech platform suggested replacing “guys” with “folks” and swapping “crazy” and “insane” for “confusing”.
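
Yoodli’s detection pipeline isn’t documented here, but the core idea of flagging weak and filler words in a transcript can be sketched in a few lines. The word list below is an example chosen for this sketch, not Yoodli’s actual dictionary.

    import re
    from collections import Counter

    # Example weak/filler word list; the platform's real list is not public.
    WEAK_WORDS = {"so", "uh", "um", "like", "actually", "just", "literally"}

    def weak_word_report(transcript: str) -> Counter:
        """Count how often each weak word appears in a transcript."""
        tokens = re.findall(r"[a-z']+", transcript.lower())
        return Counter(t for t in tokens if t in WEAK_WORDS)

    report = weak_word_report("So, uh, tools are making this leap, so to speak.")
    print(report)                           # Counter({'so': 2, 'uh': 1})
    print(sum(report.values()), "weak words in total")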

Delivery

Yoodli found that Conti spoke in a conversational tone, averaging 162 words per minute. In addition to pacing, the AI speech platform also ranked Conti in the subcategories of pauses, smiles, eye contact, and centering.
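
Pacing itself is simple arithmetic: word count divided by speaking time. Assuming a roughly 15-minute talk (a figure not taken from the article), a transcript of about 2,430 words works out to the reported 162 words per minute.

    import re

    def words_per_minute(transcript: str, duration_seconds: float) -> float:
        """Word count divided by duration in minutes."""
        word_count = len(re.findall(r"\S+", transcript))
        return word_count / (duration_seconds / 60)

    # Assumed figures for illustration: ~2,430 words over ~15 minutes.
    print(round(words_per_minute("word " * 2430, 15 * 60)))   # 162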

Wrapping Up


Start practicing with Yoodli.

Getting better at speaking is getting easier. Record or upload a speech and let our AI Speech Coach analyze your speaking and give you feedback.

Get Yoodli for free