Wireless Noodle Episode 10: The what, when and why of Artificial General Intelligence

Matt delves into the murky world of Artificial General Intelligence, the aspirational end goal for billions of dollars of investment, but with little clear idea of what the applied benefit might be. He asks what AGI is, as distinct from AI as a whole, who is doing it, and why. Along the way he explores David Beckham’s right boot, the threat of extra-terrestrial invasion, the reason why hyperscalers like Google and Microsoft are so interested, Nvidia’s likely acquisition of ARM and the arrival of GPT-3.

You can access it here, or via Google or Apple.

The full transcript of the podcast is available below. 


Artificial General Intelligence: What is it? Why is it important? Who is interested in it? And why?

<> 

Today I want to talk about Artificial Intelligence. It’s a topic that bubbles away in the background of what a lot of technology companies are doing, and occasionally bursts into the headlines.

Sometimes it’s in quite a dramatic way, such as Elon Musk expressing deep concern about its dangers, having been an early investor in DeepMind, one of the leaders in the field.

Or there’s the more subtle stuff where the growing importance of AI has a knock-on effect on other commercial things. Recently Nvidia opened up discussions to buy ARM from Softbank. Nvidia makes chips. ARM designs chips. Seems like a reasonable match-up, right? Except not really, because it causes lots of friction between ARM and all the other chip manufacturers for whom it does similar work. This may accelerate the development of open-source equivalents to what ARM does. On that basis it’s perhaps surprising that ARM is valued so highly by a company whose ownership could close off a lot of ARM’s sales opportunities. Nvidia clearly sees the additional market upside from combining its hardware capabilities with ARM’s R&D and design as more than compensating for any drop-off in ARM’s existing business.

The reason: Artificial Intelligence. Nvidia sees a massive opportunity in chips associated with running AI.

Partly that’s quantitative. There will be a lot more demand. For optimal performance you need to put the AI as close as possible to the thing it’s controlling, which means more processing close to the edge device. Which itself means more processors. For instance, autonomous vehicles will need onboard processing. They can’t rely on processing at a centralised data centre. The round-trip time (or latency, as I discussed in episode 6 when I talked a bit about 5G and the move to the edge) is just too long to cope with real-time decision making. Too late. You’ve already driven into a lamp-post. So, you need a lot more processors distributed as close to the edge application as possible. More devices, bigger market.
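
To put some rough numbers on that lamp-post, here’s a minimal back-of-the-envelope sketch in Python. The delay figures are illustrative assumptions rather than measurements: roughly 100 ms for a round trip to a distant cloud data centre versus roughly 10 ms for a decision made on board.

```python
# Illustrative only: how far a vehicle travels while waiting for an AI decision,
# comparing a cloud round trip with on-board (edge) inference. All figures are
# assumptions for the sake of the example, not measurements.
SPEED_MPS = 30.0             # vehicle travelling at roughly 108 km/h
CLOUD_ROUND_TRIP_S = 0.100   # send sensor data to a data centre and wait for the answer
EDGE_INFERENCE_S = 0.010     # decide locally on an on-board AI accelerator

for label, delay in [("cloud", CLOUD_ROUND_TRIP_S), ("edge", EDGE_INFERENCE_S)]:
    distance = SPEED_MPS * delay
    print(f"{label}: decision arrives after {delay * 1000:.0f} ms, "
          f"by which point the car has travelled {distance:.1f} metres")
```

At 30 metres per second the cloud round trip costs you three metres of travel before any decision arrives; the on-board version costs you thirty centimetres. Hence the push to the edge.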

Also it’s qualitatively different. Processors for AI, so-called AI accelerators, are optimised for the job, offering everyone involved a much greater opportunity for differentiation than there has been for years. And typically that means more profitable products.

Based on all this (lots more chips with better margin), Nvidia can seemingly see sufficient profit in there, courtesy of AI, to justify a USD40bn investment.

And probably with good reason. In our review at Transforma Insights of where companies were investing money, we see over USD1 trillion of investment globally in AI over the next decade. It’s a magnet for an enormous amount of spend by technology giants.

But I don’t want to try to cover the whole topic. What I want to look at specifically is the bit that has Elon Musk spooked: Artificial General Intelligence, the attempt to recreate an equivalent of human intelligence.

<> 

Firstly, what is it? For AI broadly there are a million definitions. I rather like the one from Elaine Rich of the University of Texas, which she applied to AI more broadly, but which I think is a good way of thinking about AGI. She said “it is the study of how to make computers do things at which, at the moment, people are better.”

AGI is a subset of that. Broadly speaking most people would define AGI as being ‘strong’ AI that is able to perform a range of tasks in a range of environments based on independent decision making that replicates human behaviour. There are other elements that have been proposed too, such as self-awareness (although I’m not 100% sure we can assume that applies to actual humans in all cases) and an understanding that others have their own beliefs, desires etc. Again, insert your own joke here.

This contrasts with ‘weak’ or ‘narrow’ AI which simulates human behaviour in carrying out a very narrow task, albeit often brilliantly. Lest we forget, in 1997 IBM’s Deep Blue beat Garry Kasparov, marking the last point at which a human was the best chess player in the world. But Deep Blue could ONLY play chess. AGI isn’t about being brilliant at a single task, it’s about being more broadly able to replicate what humans can do.

OK, then let’s think about the process of how to develop these capabilities.

The most basic form of AI, beyond just simple if-this-then-that expert systems, is ‘Machine Learning’, which is about applying statistical techniques and using experience to learn. It breaks down into various types. Supervised ML is trained using large amounts of labelled data, for instance to identify pictures of cats (if you like). Unsupervised ML is more sophisticated: it is given a set of data and asked to create a framework to understand it, for instance for customer segmentation. Tell the AI what the characteristics of the users are and it will identify the trends and patterns. And there are others.
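
For the programmatically minded, here’s a minimal sketch of the difference, using Python’s scikit-learn library and entirely made-up data: a supervised classifier learning from labelled examples, and an unsupervised clustering algorithm finding customer segments with no labels at all. The features, labels and cluster count are invented for illustration.

```python
# A minimal sketch of supervised vs unsupervised machine learning on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# --- Supervised ML: every training example comes with the right answer ---
features = rng.normal(size=(500, 4))                         # e.g. simple image statistics
labels = (features[:, 0] + features[:, 2] > 0).astype(int)   # 1 = "cat", 0 = "not cat"
classifier = LogisticRegression(max_iter=1000).fit(features, labels)
print("predicted label for a new example:", classifier.predict(features[:1]))

# --- Unsupervised ML: no labels, just customer characteristics to segment ---
customers = rng.normal(size=(500, 3))                        # e.g. spend, visits, basket size
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
print("customer segments found:", sorted(set(segments)))
```

The point is the shape of the problem: the first model is handed the right answers during training; the second has to find structure in the data on its own.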

The next step on from ML (and a subset of it) is Deep Learning, which involves self-teaching algorithms aimed at recognising, analysing and interpreting data. This is the area where there have been big breakthroughs in the past few years, leading to the current round of innovation. Essentially deep learning is about using enormous amounts of data and layered neural networks loosely modelled on the human brain.
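
As an illustration of what ‘layered’ means in practice, here’s a minimal deep learning sketch, assuming TensorFlow/Keras is installed and using synthetic data with a simple hidden rule for the network to discover. It’s a toy, not a production model.

```python
# A toy deep learning example: a small stack of layers learns a hidden rule from
# synthetic data. Real deep learning earns its keep on far larger datasets
# (images, speech, text) and far deeper networks.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))           # 1,000 examples, 20 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the hidden rule the network must discover

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability that the rule holds
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
loss, accuracy = model.evaluate(X, y, verbose=0)
print(f"training accuracy: {accuracy:.2f}")
```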

The hope is, of course, that these breakthroughs from machine learning to deep learning end up helping us on the road to AGI. But as yet the most obvious distinction between different types of AI is this: between AI that exists (machine learning, natural language processing, chatbots and so forth) and AI that doesn’t exist, that being something that actually looks like intelligence.

This presupposes a couple of things. First, that it’s achievable. We just don’t know. Ray Kurzweil, whom I mentioned in a previous episode, thinks it’ll arrive between 2015 and 2045. In 2012 a couple of researchers from the Machine Intelligence Research Institute looked at 95 predictions, and the typical ones put it 15-25 years out. But others (particularly more recently) have put it well towards the end of the 21st century, if at all.

The second is that it’s identifiable. It’s not at all clear we’d know it if we saw it. One test is the Turing Test, that an AI should be indistinguishable from a human to a neutral observer, which I guess is as good a mechanism as any. But is that perhaps missing the point that artificial intelligence may be completely different from (and therefore easily distinguishable from) human intelligence?

To further complicate things, I would suggest that all of the AI that has been developed thus far isn’t even intelligence as it should be understood. AI thus far is actually Artificial Wisdom. Intelligence is different.

I think a good definition of ‘intelligence’ is the ability to work out how to perform a task with no experience. Making a pool shot by calculating the angles rather than by having played a million shots. I remember a news article about how David Beckham was a ‘genius’ for calculating the angles of his free kicks. But he didn’t calculate anything. He performed the same task a million times and saw the results.

Translate this into AI. When it comes, Artificial General Intelligence can’t be based on the same massive training sets as Machine/Deep Learning. What Beckham (or a pool player) does is reinforcement learning. Not AGI. AGI should be able to work out how to do a task without having tried it before.
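
To make the distinction concrete, here’s a minimal sketch of the two approaches to a single-cushion pool shot, with made-up table coordinates. The ‘intelligent’ approach derives the aiming angle from first principles by mirroring the target across the rail; the ‘wise’ approach simply takes a hundred thousand practice shots and keeps the best one.

```python
import math
import random

# Table geometry, in metres (made-up positions for illustration).
CUE, TARGET, RAIL_Y = (0.3, 0.8), (1.5, 0.6), 0.0

def miss_distance(angle_deg):
    """How far a shot aimed at this angle finishes from the target after one rail bounce."""
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    if dy >= 0:                              # the ball never reaches the rail at y = 0
        return float("inf")
    t = (RAIL_Y - CUE[1]) / dy               # travel until the ball meets the rail
    bounce_x = CUE[0] + t * dx
    t2 = (TARGET[1] - RAIL_Y) / -dy          # after the bounce the vertical direction flips
    end_x = bounce_x + t2 * dx
    return abs(end_x - TARGET[0])

# "Intelligence": derive the aiming angle from first principles by mirroring the
# target across the rail. No practice shots required; right (in theory) first time.
mirrored = (TARGET[0], 2 * RAIL_Y - TARGET[1])
calculated = math.degrees(math.atan2(mirrored[1] - CUE[1], mirrored[0] - CUE[0]))

# "Wisdom": the Beckham approach. Take 100,000 practice shots and keep the best one.
practised, best_miss = None, float("inf")
for _ in range(100_000):
    angle = random.uniform(-90.0, 0.0)
    miss = miss_distance(angle)
    if miss < best_miss:
        practised, best_miss = angle, miss

print(f"calculated angle: {calculated:.2f} deg, misses by {miss_distance(calculated):.4f} m")
print(f"practised angle:  {practised:.2f} deg, misses by {best_miss:.4f} m")
```

Both end up at roughly the same angle; the difference is that one needed zero attempts and the other needed a career’s worth.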

But it’s likely that no matter how intelligent the AGI, it won’t clear up, or hit ‘top bins’ first time out. Einstein may have been able to work out the angles for a sweet break but he would not have executed as well as Ronnie O’Sullivan.

All this makes me think that AGI and Machine Learning are actually two separate disciplines. Intelligence vs Wisdom if you like. What we have created thus far is Artificial Wisdom. Artificial Intelligence may be completely different.

Mostly at Transforma Insights what we care about is how enterprises might use AI to get a leg up. In that context ‘wisdom’ is just fine. Show me a piece of ML that has practised what I want it to do a million times, refined it to the nth degree and can now do it for me, and I’m happy. I don’t need it to be able to independently work out how to submit an invoice, summarise a legal document or crunch my numbers. It’s perfectly fine for it to have had a million opportunities to hone its ability to perform the specific task I’m asking of it.

So, is true AI, AGI, call it what you will, really of much practical use? As with most tech there are probably diminishing returns. The first uses of AI for automating processes can knock out a whole load of costs and make things run much more efficiently. Gradually those processes are refined and applied to more marginal use cases.

So, we can characterise AGI as almost impossible to define, (probably) hard to achieve and (perhaps) of only marginal use. Nevertheless there are dozens of organisations spending vast amounts on researching it. Who are they, and why?

<> 

So, who wants it? Well, there are enough companies out there sinking billions into it that clearly someone really wants to reach AGI. There are around 50 organisations working on it according to some reckonings, although how many there might be in China is very hard to fathom. The biggest that we know of are the likes of DeepMind (which was acquired by Google in 2014), the Human Brain Project (funded by the EU) and OpenAI (which was initially set up by Elon Musk and others, and is now funded to the tune of USD1 billion by Microsoft).

Have you noticed how, when it comes to intriguing new technology, it’s the same companies coming up again and again? Amazon, Microsoft and Google.

Why? Fear. Fear that someone else will get there first and it will prove a critical competitive differentiator. Because it promises to (as famously said in The Terminator) learn at a geometric rate, which ultimately puts every conceivable technological development, and most inconceivable ones, in play.

Liu Cixin talks in his marvellous book The Three-Body Problem about the process of identifying existential threats to the People’s Republic of China, and specifically ones that no-one has thought about. Unknown unknowns, if you like. The key hypothesis is that the US makes contact with an alien civilisation and thus benefits from superior technology. It’s the ultimate in the horizon scanning that I talked about in Episode 4.

Now, I’m not suggesting we should be reaching for our telescopes. But for major corporations that seemingly have unchallengeable positions in their respective parts of the ICT ecosystem, AI represents a challenge. If it really can learn at a geometric rate, perhaps it will take only a few days to develop a software suite that’s better than Windows (again, insert your own joke here) or a better search engine than Google. Or maybe it develops something that completely changes the technology paradigm. Something that can be inserted directly into the brain, for instance. We don’t know. And that’s the point. These AGI investments are a form of insurance policy against the worst-case scenario that it is a massive disruptor, or indeed a bet on the best case that it is a massive opportunity. And so it may prove. Or it may be nothing at all. The one thing we can be sure of is that the quantities of money involved, albeit in the billions of dollars, are no indication that AGI is achievable.

<> 

There’s been a lot of talk recently about a technology called GPT-3, which was developed by OpenAI. I recommend you check out some of the news articles about it because it looks pretty fantastic. So, what is it and what does it do? And crucially what doesn’t it do?

GPT stands for Generative Pre-trained Transformer (no relation), and the 3 is because it’s the third iteration. It has been trained on 570GB of text from the internet to provide answers to questions, or to write you a poem, or even to write code. Anything involving text. It has, effectively, learnt to take a text prompt and predict what a useful response will look like.

It looks pretty revolutionary, at least for simple requests. The more complex the requests become (e.g. to write a 1,000-word essay rather than a 10-word statement), the more the cracks start to show. Another weakness is the sheer volume of compute power required to run it. This is brute-force AI: trial and error backed up by enormous data sets and oodles of processing, attempting to fool the watcher that it understands, when actually all it’s doing is predicting what will earn it a (figurative) biscuit. If the user types in x and y, the 570GB of data indicates that it should respond with z.
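
To give a flavour of that ‘predict the next words’ mechanic, here’s a minimal sketch in Python. GPT-3 itself is only accessible through OpenAI’s hosted API, so this uses the much smaller, freely downloadable GPT-2 via the Hugging Face transformers library as a stand-in; the prompt is made up for illustration.

```python
# A minimal text-generation sketch using GPT-2 as a stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The biggest obstacle to Artificial General Intelligence is"
result = generator(prompt, max_length=40, num_return_sequences=1)

# The model has no notion of right or wrong here: it simply continues the text
# with whatever its training data suggests are the most plausible next words.
print(result[0]["generated_text"])
```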

But it’s also interesting because the training is unsupervised, or more precisely self-supervised: there’s no human labelling of what’s right or wrong, just a very large volume of text for the model to learn from.

As mentioned, perhaps it’s a step along the way. Or perhaps it’s not. But picture a world in which GPT-4 or 5 is unleashed on the world, able to carry out human-like conversations. Or, more accurately, to imitate human conversations. How long before it floods Twitter, and the internet overall? Pretty soon every document is written by an AI. And eventually perhaps AIs are the only things reading them, constantly attempting to refine how human-like their text can be. Makes you think, doesn’t it?

<> 

This week’s final thought is an invitation. At Transforma Insights we’re running a webinar on the 2nd November on a topic that’s close to our heart, IoT connectivity, and specifically our forecasts of the market opportunity there. We will be talking about new narrowband Low Power Wide Area (LPWA) network technologies, 5G and private networks. All topics that I’ve covered at various points in the podcast.

One thing I keep forgetting to do is ask all you good people to kindly rate the podcast on whichever platform you download it from. That helps tremendously with getting attention.

I’m going to be taking a break for a couple of weeks from the podcast, but will be back on the 27th October to give some views on that IoT growth that we’re covering in the webinar.

I hope you can join me.

Links to some of the research that I’ve referred to in this week’s show, as well as a transcript of the recording, will be available on the podcast website at WirelessNoodle.com.

Thank you for listening to The Wireless Noodle. If you would like to learn more about the research that I do on IoT, AI and more, you can follow me on Twitter at MattyHatton and you can check out TransformaInsights.com

Thanks for joining me. I’ve been Matt Hatton and you’ve been listening to the Wireless Noodle.
