Artificial Intelligence and the Case for Human Ingenuity

  • 10.10.2024
  • Matthew Bushery

Before Alloy 2024 attendees heard from entrepreneurs who have tackled innovation challenges head-on at their organizations, High Alpha Innovation CEO Elliott Parker kicked off our venture-building summit with a keynote on a topic that’s especially top of mind for business leaders in every industry: artificial intelligence.

Elliott discussed how AI can help corporations improve day-to-day operations and explore new business models — as long as they leverage the technology wisely. Specifically, he made the case that, in a world proliferated with AI, human ingenuity becomes increasingly valuable.

Key Takeaways

  • The more that “operators” use the technology to improve their work, the more likely they are to lose (or never acquire) the skills needed to apply critical thinking in their roles.
  • Problems are inevitable, but they can actually be drivers of positive change when scaled organizations work to solve them through innovative approaches rather than avoid them.
  • To overtake incumbents, insurgents must use AI as a disruptive technology. By experimenting with the tech constantly, they can improve their core products and services and investigate other, potentially transformative business opportunities.
  • Scaled organizations’ best path for differentiation is through human ingenuity, as it will unlock new and different ways to gather the wealth and knowledge they need to grow.

Check out Elliott's entire Alloy keynote talk below to discover even more of his insights on artificial intelligence.

Sign up for our Wavelength newsletter today to access additional Alloy 2024 content, including recaps of our other sessions and keynotes.

Transcript

Welcome to Alloy 2024. We are so excited that you're here with us this year.

As Lauren mentioned, the theme this year is making transformation tangible. That's something that we do at High Alpha Innovation. We're working to make transformation tangible by partnering with corporations, universities, governments, and others to build compelling, venture-backable, fast-scaling startups that can change the world.

We have a lot of transformation coming our way, in large part because of the role that AI is playing and will play in our lives. Now, it's very dangerous to make predictions. I'm not going to make any predictions today, but what I am going to do is make the case that, because of AI, demand for human ingenuity, counterintuitively, is going to increase.

And the organizations that understand how to harness human ingenuity are going to be best positioned to accumulate knowledge and accumulate the wealth needed to solve the tricky problems coming our way that automation won't be equipped to address. So we're going to talk about three things today.

First, the problem with problems. Second, the Automation Paradox. And third, the case for people. And through this, we're going to talk about how to make sure our organizations are positioned to succeed in a world where AI is driving so much of the progress that we experience. Alright, the problem with problems.

Now, about 25 years ago, the government of Haiti decided they needed a national emergency response plan. Haiti, as many of you may know, receives more than its fair share of natural disasters, and they wisely decided they needed a plan to deal with these disasters as they come. So they went to the top university in Haiti, found one of the smartest professors at the university, and asked him if he would put together a plan.

He recognized immediately that this was too big a task for one person, that he needed help, and he built a team of other smart professors and students to help him create a national emergency response plan for the country of Haiti. One of the people he asked was my friend Paul, who at the time was doing his undergrad work at this university in Haiti.

The team got together and started figuring out how to devise this plan. What would you do if you had to build a national emergency response plan? The first thing you'd do is identify an emergency that might happen, that you want to build a plan around. And so what they did is they sat down and started brainstorming: what kind of emergency should we build this plan around?

We need something that's going to push our thinking, that's within the realm of plausibility, but that's so different, so unique, that it forces us to consider creative ways in which we might address emergency response. So they sat down and thought about it. Volcanoes? Well, there are volcanoes in Haiti, but they haven't erupted since the Pleistocene, so maybe not the best one.

Somebody suggested hurricanes. Now, hurricanes happen all the time in Haiti, and even though the response isn't great, people kind of know what to do. They're used to it. They wanted something that would really push the thinking. They wanted to identify the worst possible natural disaster they could imagine.

You know what they came up with? 12 inches of snow in 24 hours. Now, my friend Paul lives in Indianapolis, so he laughs when he tells this story, because he's very used to snow. But at the time, growing up in Haiti, 12 inches of snow in 24 hours was an incomprehensible problem. And they knew this was not necessarily a plausible scenario.

They wanted to use it as a thought experiment to push their thinking. But as they started planning, the plan became very focused and tactical. They started looking at things like, well, if it snows, we're going to need a lot of shovels. Where do you get enough shovels to remove a foot of snow? We're going to need blankets to keep people warm.

Do we buy those ahead of time? Where would we store all the blankets? And on and on. You see how this goes. Now, plans can only get us so far. No precaution can be taken to avoid problems that we can't yet predict. Plans can help us think through scenarios, and if it ever does snow in Haiti, there's a plan somewhere that someone can dust off and use. But plans will only get us so far.

What's far better is to be prepared for any situation that might arise, to make sure we have the knowledge and the wealth, or the means, we can use to address any problem that might pop up. Now, the sad irony of this story is that, as many of you know, about a decade later, the worst natural disaster in the history of Haiti hit the country.

In January 2010, an earthquake hit Haiti. Over 100,000 people died. A quarter of a million homes were destroyed. Everything shut down. The main road in the country was blocked for 10 days because of debris. No telephone service. No internet. It took a week for radio stations to come back online. For the people impacted by this, it was a terrible tragedy.

Even the recovery from this was hard and fraught with more tragedy. It's a lesson that problems are unpredictable, and it's better to have a stance of problem fixing, which we'll talk about. If we had had more knowledge, and maybe more wealth, perhaps we could have done something different about the problem of the earthquake in Haiti.

In fact, I think if you go forward, there will be a time when people look back at our era and think that it's crazy that we lived at a time when we could not predict earthquakes. You can imagine that with more knowledge and more wealth, we might get to a point where we'll be able to predict earthquakes, maybe one day even prevent them.

Now, there are other types of natural disasters we've gotten pretty good at dealing with. Hurricanes are one example. Over time, we've learned from past mistakes. We've gathered knowledge about what to do. We've gathered wealth and means with which to address problems. And we can now predict hurricanes with a pretty high degree of accuracy.

We've developed technologies for creating structures that can withstand the force of the wind and the waves that come with hurricanes. And as a result, we've gotten pretty good. Not as good as we could be. Our primary solution looks like this. It's still pretty rudimentary. Because we can predict hurricanes, we know to get out of town when they come.

This does save lives, but, again, it's possible to imagine that we might do better. If we had more knowledge and more wealth, we might be able to deal with the problem differently, and in better ways than we can today. Now, this is not to denigrate the progress we've made. Every year, on the planet, about 40,000 to 50,000 people on average die from natural disasters.

That is far too many. But there is good news. The good news is that that number has remained pretty steady over the last several decades, even as the global population has grown, the number of deaths has remained steady. So on a per capita basis, the likelihood that you're going to die in a natural disaster is much lower than it was for your grandparents.

That is a very good thing, but I think we can do even better. And as we learn to solve these problems, of course, new problems will pop up that we need to be prepared for, problems that we can't even predict. Let me give you an example of one that I consider pretty close to being solved. This chart shows the childhood mortality rate in Iceland over the last couple hundred years.

Now, Iceland, because they've kept very good records for a very long time, is a useful example, but you could put just about any country or area of the planet on this chart and the graph would look similar. In 1846, 61 percent of the children born in Iceland died before the age of 5. This year, only 1 in 40,000 children born in Iceland will die before the age of 5.

That is a remarkable achievement, and something that, collectively, as people, we've accomplished because we've gathered knowledge about what to do and we've gathered the means, or the wealth, that we need to address the issue. I think this chart is an excellent case for gathering more knowledge and more wealth as quickly as we can, to solve other kinds of problems we can't even yet imagine.

Problems come in all kinds of shapes and sizes. I would argue that problems are inevitable. They're natural, and they're actually beneficial drivers of progress. The only way society advances is by encountering, confronting, and solving problems. We need problems to achieve progress, even now, in the face of the problems that are coming.

It's important to know, and I think this quote makes the point very effectively, that all problems are solvable if only we have enough knowledge and wealth. You can imagine a scenario in which we had infinite knowledge and infinite wealth: we'd be able to solve any problem that came our way, as long as we were doing so within the laws of physics. Now, if every problem is solvable, the logical conclusion is that infinite progress is possible.

And this is in fact the core idea of the Enlightenment. It's how I define optimism. Optimism is the recognition that problems will continue to come our way, but that our capacity to solve them and address them will continue to increase. If you look at past civilizations in history, all of them failed for the same reason.

And that reason is they failed to innovate fast enough. They lacked the knowledge and the means they needed to address the problems that came their way. Problems accumulated until one day a problem came that was so big, either from nature or from other people, that the civilization didn't know how to address it, or lacked the wealth or the means to address it.

That civilization failed. If they had had enough knowledge and enough wealth, those civilizations would still be around and still be thriving. The same could be said about companies, by the way: no company ever went out of business because it learned too much. The hard thing about problems is that they keep coming, and they're almost impossible to predict.

If you're leading an organization, it's important to recognize that problems are inevitable. Unpredictable problems will continue to come our way. But it's also worth remembering that every problem is solvable, as long as you have enough knowledge and wealth with which to do it. It's also worth noting, I think, that every solution we create creates more problems that then need to be solved in their own turn: an endless cycle of problems and progress as we confront these problems and learn. So the only viable strategy, in a world where problems will continue to come our way, is to adopt a stance of problem solving, not problem avoidance.

We do that in our organizations by adopting a strategy focused on accumulating as much wealth and knowledge as we can so that we can deal with those unforeseeable, unpredictable problems that will arise. Now, some organizations are optimized for problem avoidance. Other organizations are optimized for problem fixing.

They're focused on accumulating wealth and knowledge. You want to be part of the latter, and you want to be focused on building the latter. The problem, as we've talked about at past Alloys, is that as organizations scale, the default stance becomes one of problem avoidance: management by committee to avoid making mistakes.

This is a path to inevitable failure. Even in tightly controlled environments, things happen all the time that we can't predict. In fact, when things break down in tightly controlled environments, it's because the challenges that arise are so unpredictable, so unimaginable, so atypical, 12 inches of snow in 24 hours, maybe, or the equivalent, that we don't know what to do to solve them.

Now, organizations should do all they can to avoid failure. Failure causes loss of life. It causes destruction of value, damaged reputation, and on and on. And organizations rightly spend a lot of time and energy focused on avoiding failure. Some organizations do this so well, we call them high reliability organizations.

They operate in extremely complex environments and produce very little error. Think about nuclear engineering or commercial aviation. But history and research tell us that even in those tightly controlled environments, failure will occur. And when the challenges and the problems do come, they're going to be so unusual and so hard to deal with that these tightly controlled environments fail. So the only viable strategy, again, is to take a stance of problem solving: to focus on the accumulation of knowledge and wealth that enables us to deal with any type of situation or challenge that might arise.

Okay. Number two: the automation paradox. Do any of you have a teenage driver in your home? We've had teenage drivers in our house for the last 10 years. And a funny thing you may have noticed about teenage drivers is that they cannot get anywhere without a smartphone map. Now, we've got a daughter who recently got her license, and not that long ago I decided to run an experiment.

We were coming back from running errands, and about a mile from our house, I turned off the smartphone and challenged her: all right, you've got nothing but your brain and landmarks. Figure out how to get home. And she's got a very good brain. She's very smart. But she couldn't do it. She couldn't figure out how to get home.

Now, many of us who are older remember a time when we didn't have smartphone maps. We had to figure out how to get around in the world with just our brains and landmarks. And this is not a complaint about kids these days, nothing like that. On the contrary, it's a recognition that, as we become accustomed to the machine and reliant on it, it's possible that we become less capable of solving that particular type of problem in alternative ways. This is a version of a cognitive bias that was discovered back in the 1940s. A married couple of psychologists, the Luchins, ran a really interesting experiment, and we're going to run a version of it right now.

What they did was give test participants a math problem. The math problem was kind of complicated: you have three jars of different capacities, and your job is to measure out a specific quantity of liquid using those three jars. Now, we won't do too much math today, just a little bit. It's a really hard problem to solve, but once you figure it out, you learn that you can apply the same formula, the same process, to other problems. Even if the numbers change, you can solve them in the same way.

I'll show you what I mean. This is one of the real problems they used in the test. You have three jars: one with a capacity of 43 units, one with 18, and one with 10. And your job is to measure out 5 units of liquid. Feels like an SAT question, and it's early in the morning, especially for those of you from the West Coast.

But let me show you how you could do this, and then I'll ask you to think through the second one. What you'd do is fill up the 43-unit jar, pour from it into the 10-unit jar, dump that out, and fill the 10-unit jar again. So 43 minus 10 minus 10 leaves you with 23. Pour that into the 18-unit jar, and now you have five.

So you see the pattern? Big one, subtract the little one twice, then subtract the medium one. Okay, second question. This is another real one from their test. Forty two, nine, and six. This time your job is to get to twenty one. You see what to do? Forty two minus six is thirty six. Minus six is thirty. Minus nine is twenty one. 

Alright, so you know the pattern. Last one. Promise. Thirty nine, fifteen, and three. Your job is to get to eighteen. So what do you do? See the pattern now? Thirty nine minus three gets you to thirty six, minus three gets you to thirty three, minus fifteen gets you to eighteen, where you wanted to land. Or, you could just add fifteen and three together.

That's a much easier way to solve it. And so what you see, what this test uncovers, is that when we become accustomed to a particular way of solving a problem, it's really hard to break free from that approach, even if an easier solution exists. The Luchins called this cognitive bias the Einstellung effect, which means, more or less, "state of mind."
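The trap in these jar puzzles is that all three yield to the same mechanical recipe: fill the big jar, subtract the small jar twice, then subtract the medium jar. A quick sketch makes the pattern, and the trap, concrete (the function name and structure here are just illustrative, not from the talk):

```python
# Water-jar puzzles from the talk: (big, medium, small) jar capacities
# and the target quantity to measure out.
puzzles = [
    ((43, 18, 10), 5),
    ((42, 9, 6), 21),
    ((39, 15, 3), 18),
]

def trained_recipe(big, medium, small):
    """The learned pattern: big jar, minus the small jar twice, minus the medium jar."""
    return big - 2 * small - medium

# The same recipe solves every puzzle in the sequence.
for (big, medium, small), target in puzzles:
    assert trained_recipe(big, medium, small) == target

# The Einstellung effect: because the recipe still "works" on the last puzzle,
# the far simpler solution (15 + 3 = 18) goes unnoticed.
assert 15 + 3 == 18
```

The point is not the arithmetic but the habit: once the three-step recipe has worked a few times, it crowds out the two-jar solution sitting in plain sight.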

It's now been documented in all sorts of areas, and I think it's important to ask the question: as we become more reliant on artificial intelligence, are we at risk of the Einstellung effect? Story time. A few years ago, a researcher at Harvard Business School wanted to test what happens when people are reliant on AI.

Do they get better at their jobs, or worse? And he wanted to make the results as realistic as possible. So what he did is recruit a bunch of recruiters, give them a stack of resumes, and say: I want you to scan through these resumes and score them based on the criteria I'm going to provide you.

One group of recruiters he gave access to a commercially available piece of software that's very good at doing this: scanning resumes and providing a score. Another group of recruiters he gave access to an inferior version of the software and said: this software can produce really good results sometimes, but other times it's quite patchy.

You'll need to be aware that sometimes the software can make mistakes. And the third group got no software; they had to do it the old-fashioned way. Now, you won't be surprised to learn that the recruiters who had software performed the task better. What was surprising to the team is that the recruiters who had the good AI, the high-quality software, actually underperformed the recruiters who had the bad AI.

The reason is that the recruiters with the good AI turned off their brains. They relied entirely on the software to make the decisions and trusted it completely, whereas the recruiters with the bad AI knew that it was going to make mistakes. They paid close attention, used their own judgment to refine the results, and got better outcomes.

All right, another story. You've all been involved in brainstorming sessions, I imagine. The way it works is you sit in a conference room around a table, a small group. You've got flip charts, Post-it notes, Sharpies, a shared sense of dread. You know what I'm talking about. Someone provides a prompt, and you come up with a bunch of ideas, write them down, refine those ideas, and so on.

The advent of AI makes this a really interesting potential application. A few years ago, a researcher at Stanford who's an expert in brainstorming, or ideation, saw ChatGPT coming and thought: this is the best technology we've ever seen in the history of humankind for brainstorming.

I want to run a test and see how much better people get when they use ChatGPT to brainstorm. ChatGPT is easy to use, and it provides a ton of output. Some people complain that ChatGPT sometimes hallucinates; in the case of brainstorming, that might actually be a good thing. You want weird and crazy ideas, and lots of them.

So he divided people into groups. Half the groups got access to ChatGPT to do their brainstorming. The other half had to do it the old-fashioned way, with Post-it notes and Sharpies. What he found was surprising: the group with ChatGPT underperformed the group with Sharpies and Post-it notes.

And the reason, when they looked at what was going on, was a version of the Einstellung effect. What people were doing was using ChatGPT like a Google search bar: entering their query and accepting the results at face value. The researcher said that if you went into the room and observed the people brainstorming with ChatGPT, it was pretty quiet.

Some tapping on keyboards, people looking at their screens with what the researchers actually called "resting AI face," which is kind of funny. Now, the other group, doing it the old-fashioned way: loud discussion, back and forth, refining ideas and beating them up together. They had more ideas, and higher quality ideas, produced that way.

I'm not advocating that we should ignore AI and not use it, quite the contrary. I'm not advocating that we should use bad AI and not good AI, quite the contrary. What I'm arguing is that we need to be careful and recognize that, as we become dependent on automation, we may lose our skills and become less capable.

There is value in human judgment and human ingenuity. The automation paradox says that as systems become more automated, the operators actually become less capable of dealing with the challenges that arise. This is a problem, because when automated systems fail, it's usually because something so unexpected and atypical has happened that we need the operators in those systems to be more skilled, not less.

So that's the paradox. In automated systems, we need the operators to be more skilled, not less, yet automation makes it difficult for those operators to acquire the skills they need. Again, what we see in that circumstance is that the only viable strategy is to make sure that you are prepared for the unexpected.

That you as an individual, your organization, and all of us, collectively, as a society, are focused on accumulating the knowledge and the wealth we will need to address problems that arise, and on finding ways to use human ingenuity in the process. All right, number three: the case for people, the case for human ingenuity.

I want to talk about a company that I think is doing a pretty good job of taking a problem-solving stance, that is accumulating wealth and knowledge to apply to new problems as they arise, and that is using AI to do it. That company is Meta, the parent of Facebook. Now, Meta is not without its problems, but they're in the midst of a really fascinating experiment.

What Meta did a few years ago is dramatically cut their expenses, by about a third. This freed up a ton of cash flow, unimaginable amounts of cash, actually, that they're now using to invest in R&D, especially around AI. Meta this year will invest $40 billion in research related to AI. That's a number that's hard to comprehend.

They're focused on building large language models that learn faster and are more transparent, figuring out ways to deliver more effective advertising, generate better content, and on and on. Most of the research is focused on ways to make Meta's core business more relevant and more profitable. Forty billion dollars is a lot of money.

That is hard for just about anybody to compete with directly, at least the way Meta is tackling the problem of AI. And you might argue that, as they get better at this, they'll actually generate more cash flow, which creates a virtuous cycle where they're able to double down and invest more.

They get bigger and more powerful, at least for a while. So in that context, where some companies have means, $40 billion and a core business that continues to spit out that type of capital, what do the rest of us do about AI? We get this question a lot: what should we do about AI? I'm not sure it's the right question to ask.

AI is a tool. An extremely powerful tool for solving problems. An unusual tool, but still a tool for solving problems. I think a better question to ask is: is AI a disruptive technology or a sustaining technology? And the reason I think that's a better question is that the answer tells us what we should do about the opportunities and threats in front of us.

An important part of that question is to ask: to whom is AI a disruptive or a sustaining technology? Now, this is a horrible oversimplification of the definition of a disruptive technology, but for our purposes this morning, I think it will suffice. For Meta, AI is a sustaining technology. It's going to help them win.

It's going to help them get bigger and more powerful, because they can invest $40 billion in the effort for quite some time. But I would also argue that for Meta, AI promises to be an extraordinarily disruptive technology, just in ways we can't yet predict. Surely, there are insurgents who will come along using AI, discover new products and services that grab a foothold, and, over time, erode Meta's strength in the marketplace.

Despite all that Meta is doing to avoid that scenario, it's very likely to happen. So disruption theory tells us what we should do in that situation, and it looks a lot like what Meta is doing: run a very efficient core business, take all the excess cash, deploy it into areas of strategic importance, and run experiments to accumulate knowledge and accumulate more wealth so that you can solve any problem.

Now, if you're not Meta, what do you do? Shockingly, the answer is pretty similar. For your organization, in your own ways, you need to find ways to generate more knowledge and more wealth so that you can deal with any problems that arise. The trick is to do it in a way that's different from the way Meta is doing it.

Your best path to differentiation is through human ingenuity, which is going to unlock new and different ways to accumulate the wealth and the knowledge that you need. All right, so, the case for human ingenuity. Let's see if we can tie all these things together. It's important to recognize that problems are inevitable.

They're natural, beneficial drivers of progress. If you're leading an organization, there is no moment at which you can look to the future and say: our problems are going to go away or get easier. We often make the mistake of thinking: if we land this big customer, then things will get easier. If we land this funding round, then things will get easier.

The opposite, I'm sorry to say, is true. As your organization scales, the problems are only going to get trickier and more complex. They are inevitable, they're not going away, and most of the problems coming your way will be unpredictable. So the only viable long term strategy then is one of a problem solving orientation.

You do that by seeking to gather as much knowledge and as much wealth as an organization as you can, to deal with the unforeseeable things that might happen. Now, in a world where automation continues to increase, this is in many ways going to get harder, because it's going to get more difficult for us, as operators in the system, to find ways to keep our skills sharp.

We're more at risk of being reliant on the automation, and less capable, in the end, of dealing with those unpredictable things, those edge cases that come our way. We need to be focused on accumulating the knowledge and the means necessary to solve any problem that may arise. The lesson from this is: don't be afraid to take on opportunities that may appear to be fraught with challenge.

Challenges and problems are the way that we learn. As organizations, our default is to look at some of these spaces in front of us and say: there are too many problems there. But there are problems everywhere; problems are inevitable. So choose the things that are fraught with problems, knowing that opportunity exists there.

The thing is, you're going to need human ingenuity to be able to do it. Human ingenuity is your best path to differentiation and to an ability to accumulate wealth and knowledge in ways that other organizations can't. Now, in the face of increasing automation, history tells us that demand for human ingenuity should increase.

We've seen this happen before. When automation lowers the cost of a good or service, we see demand for that good or service increase. When automation, for example, in the world of banking, made ATMs a cheaper way to run bank branches, guess what? Banks opened more branches and, by the way, hired more bank tellers over time.

In the world of legal firms, software made it easier to review a vast amount of documents, and so discovery became bigger, and judges started allowing it, because the cost of that discovery was lower. In the Industrial Revolution, what we saw is that automation increased the importance of the operators in those systems, and the skills required also increased over time.

We saw that in the production of fabrics and the production of steel. It's a pattern we see over and over again. We shouldn't expect this time to be any different. Everybody wants to think we're living in a different era; I think the same patterns apply. The only viable strategy is a stance of problem solving, focused on accumulating knowledge and wealth to deal with any challenge that may arise.

All right, so, as a UCLA grad and an adopted Hoosier, I am required by statute to occasionally drop John Wooden quotes into presentations. The good news is that John Wooden was a very wise man, the greatest basketball coach of all time. I love this quote: "The team that makes the most mistakes usually wins."

I urge you to take action. Plunge into those spaces fraught with problems. That is how your organization is going to advance and learn. Don't shy away. Problems are coming your way no matter what. Pick your problems. And rely on human ingenuity to do it. Accumulate wealth. Accumulate knowledge. And let humans explore.

Thank you.

Elliott-Keynote
High Alpha Innovation CEO Elliott Parker gave a keynote on AI and the case for human ingenuity.
David Senra Podcast
Founders Podcast host David Senra gave a keynote talk on what it takes to build world-changing companies.
Governments and Philanthropies
High Alpha Innovation General Manager Lesa Mitchell moderated a panel on building through partnerships with governments and philanthropies.
Networking
Alloy provided great networking opportunities for attendees, allowing them to share insights and ideas on their own transformation initiatives.
Sustainability Panel
Southern Company Managing Director, New Ventures Robin Lanier spoke on a panel about the energy sector's sustainability efforts.
Healthcare Panel
Microsoft for Startups Worldwide Lead, Health & Life Sciences Sally Ann Frank took part in our panel on healthcare transformation.
Agriculture Panel
Make Hay CEO and Co-founder Scott Nelson discussed the ongoing transformation in the food and agriculture value chain.
