WTF is AI!!?

Let's address the elephant in the room.

The elephant
I would bet £100 that at some point in the last year, you’ve been drawn into the conversation surrounding AI, in one way or another. Even my eighty-nine-year-old Grandpa has opinions (though he does, rather endearingly, call it A1).

Let’s face it, there’s a big elephant in the room. Most people, including those in tech, don’t fully understand AI. So if you do happen to find yourself in the presence of an AI know-it-all, a fun game is to ask them to explain it to you. You’ll be met with false starts, odd analogies and at least three uses of the word ‘algorithm’. It kind of reminds me of that scene in Friends where Ross is fiercely adamant that Unagi is the Japanese philosophy of ‘total awareness’, when actually it’s the Japanese word for freshwater eel.

The truth is - most people struggle to explain AI. Even some of the most incredibly talented technologists I know hate being asked about it, for the simple reason that AI isn’t ‘one thing’ - it’s vast, complex and developing at the speed of light. There are a few key factors that add to the noise and complexity.

Firstly, people use AI as a “one size fits all” term, but it refers to a multitude of different things. Marvin Minsky, a cognitive scientist, coined the term ‘suitcase words’ for terms that carry many different definitions (for example memory, morality or technology). He describes in Forbes the problems that arise when we use one of these terms as if it only has one meaning. AI is a classic suitcase word.

Secondly, the tech industry is unfortunately not renowned for its ability to explain complicated things in simple terms. Tech is inherently complex and advances at a rapid pace, with endless terms to learn and semantics that shift constantly. There’s often a lot of assumed knowledge and a certain elitist esotericism among those ‘in tech’, with the endless buzzwords, technical jargon and an ‘if you know, you know’ energy. You only have to be in Shoreditch House for five minutes before you hear someone talking about NFTs as if they developed blockchain themselves.

Another factor adding to the complexity is the media generating total chaos by presenting hypothetical long-term risks as fact, for example, “AI is taking over the world and all our jobs!” and “AI will make all children stupid!” In reality, most of this rhetoric is nonsense, and, as AI expert Andrew Ng articulates, “the reason I don’t worry about AI turning evil is the same reason I don’t worry about overpopulation on Mars. We've never set foot on the planet so how can we productively worry about this problem now?” That doesn’t mean we shouldn’t be incredibly careful as we develop and leverage AI, but it does mean the hypothetical hysteria is sensationalist and a waste of time.

A huge reason for the surge in public interest is the increase in access and awareness as the tech has been democratised. It’s a well-known saying in the tech world that once AI works, no one calls it AI anymore. Historically, this has been true - but a new generation of tools and toys (like ChatGPT, Bard and Midjourney) has hit the market, moving AI from behind-the-scenes integration to active participation. The public has had its first intimate access to using AI “knowingly”.

The final factor contributing to the confusion is the way AI has been represented in science fiction. Sci-fi often starts with reality-based concepts of AI, then adds unrealistic or terrifying elements for dramatic effect. The AI is usually created by a human, gains a sense of self, questions its own existence, and ends up taking over (think Frankenstein or HAL 9000). If sci-fi is someone’s first exposure to AI, it’s not surprising that they can’t separate those dramatic elements from the technology when it appears in the real world. In sci-fi, AI systems can do two things that real AI definitely can’t: have autonomy and have sentience (sentience meaning the ability to have feelings and an awareness of self - from the Latin root ‘to feel’). To understand this, it’s important to break AI down in more detail.

So what is AI?
If you Google ‘define AI’, you’ll get a thousand answers from a thousand people. When I was tied in knots trying to define it, my mentor Peter Gasston explained:

"that's because you aren’t just trying to comprehend an AI tool, you’re trying to comprehend AI as a being”.

Which I thought was such a perfect way of putting it.

So let’s start with AI as a tool. If we think about it from a purely technological perspective, one way of defining AI is teaching machines to process information in a way that simulates human (and biological non-human) intelligence. In a nutshell, we are trying to get machines to do things that would traditionally require human intelligence.

In order to understand AI better, it’s important to break it down into two distinct categories: ANI and AGI.

ANI stands for artificial narrow intelligence (sometimes also referred to as weak or narrow AI). This is task-specific AI: it is programmed to perform singular tasks within a specific set of parameters. ANI describes all of the AI we’ve experienced so far and is dependent on human input and direction. We comfortably interact with fully fledged ANIs every day (think of your Discover Weekly playlist on Spotify, flight pricing on Skyscanner or the Face ID that unlocks your phone).

AGI stands for artificial general intelligence. AGI only exists in science fiction. If AGI existed (IF!!), it would have autonomy, and could carry out any cognitive task a human could - including abstract thinking and understanding cause and effect. In other words, it would be able to take the initiative and teach itself, instead of simply following a task it has been programmed to carry out.

So AGI would be as intelligent as, if not more intelligent than, humans?
Now if we think about AI as a ‘being’ - to truly understand AI you have to unpack intelligence. This is where the discourse becomes even more complex as it all comes down to definitions - and no one can agree on those. ‘Intelligence’ is another suitcase word as it is not linear or one-dimensional. Trying to define it would be like putting Freud, Jung and Piaget in a room and asking them to define "the unconscious".

I read a nice description in psychoanalyst Matte Blanco’s book, The Unconscious as Infinite Sets. He describes two types of intelligence:

“bi-logic: a fusion of the static and timeless logic of formal reasoning and the contextual and highly dynamic logic of emotion.”

There are some elements of human intelligence - like formal reasoning - that AI can do by itself (once programmed by a human), for example analysing patterns and making predictions based on knowledge. In fact, in specific cases AI can surpass humans: the Stockfish engine, for example, plays chess with the best of them. But it does not and cannot have sentience or lived experience, so it does not have the dynamic logic of emotion.

The best chess player in the world, Magnus Carlsen, and the YouTuber GothamChess can almost always tell when they are playing against an engine, because the AI will sacrifice pieces incredibly easily, for example giving up its Queen as early as four moves in. The machine does this because it is 20 steps ahead of the human and isn’t approaching the decision with any sentimentality. Even the best players in the world feel uncomfortable about losing their Queen (as all men should) because humans are naturally loss-averse.

AI cannot feel - so anything we have seen so far in the realm of AI and emotion has been synthetic/simulated emotion. That doesn’t mean it’s not believable or valuable though. As philosopher Roger Scruton says:

“the consolation of imaginary things is not imaginary consolation”.

If you think about it, it’s not at all surprising that people are anthropomorphising AI. Even a software engineer at Google claimed the company’s AI chatbot, LaMDA, had sentience. When these systems appear to do things we thought were uniquely human, or reply to you in a “human-like” manner, it’s easy to suspend disbelief and think you’re speaking to something more than just a chatbot. For a long time, we’ve used systems that provide us with pragmatic utility in our lives - so a chatbot being able to tell you every time Lionel Messi scored a goal isn’t that new as a concept. In a way, it’s just a more efficient version of Google. But a chatbot being able to comfort you? That’s alien.

One of the most important things to remember is that these machine learning systems literally exist to give you the answer you are looking for. They are trained on massive datasets of human writing and conversation, so they predict an answer based on the most likely patterns (just like your phone predicting which word you might type next). This means that when you ask one questions about consciousness or emotions, it will reply with a prediction of what someone would say. That can, of course, feel very real, but what we have right now is simply the illusion of consciousness.
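For the technically curious, here’s a minimal sketch in Python of that idea: a toy ‘next word’ predictor built from nothing but word counts. The mini corpus and the function name are mine, invented purely for illustration. Real systems like ChatGPT use vast neural networks trained on billions of documents rather than simple counts, but the underlying principle is the same: predict the most likely continuation from patterns in the training text.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus, invented for illustration only.
# Real models train on billions of words, not three sentences.
corpus = (
    "i feel happy today . i feel sad today . "
    "i feel happy when you are here ."
).split()

# Count bigrams: for each word, tally the words seen directly after it.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word, based purely on observed patterns."""
    candidates = followers.get(word)
    return candidates.most_common(1)[0][0] if candidates else "?"

# The model has no inner life: asked what follows "feel", it simply
# returns whichever word most often followed "feel" in its training data.
print(predict_next("feel"))  # -> "happy" (seen twice, vs "sad" once)
```

Ask the toy model how it ‘feels’ and it doesn’t consult any emotions; it just echoes the most common pattern in the text it was shown. Scale that idea up enormously and you get answers that sound conscious without any consciousness behind them.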

I think this is perhaps why people have been having such a visceral and often negative reaction to AI-generated music and images: so much of creativity comes from conscious lived experience and emotion, and we thought creativity was something only a human could do.

There are many things that only humans can do - but we’re only now in the process of finding out what.

Key takeaways:

  1. You’re not “not getting it”. Even the best in the game find it hard to define. AI is a suitcase word. It’s incredibly complex, ever-changing, and there is still a lot we don’t know.
  2. AI is being democratised, which is partly why the hype is so big (e.g. through generative AI tools like ChatGPT and Midjourney).
  3. Everything we have seen so far has been ANI (Artificial Narrow Intelligence). AGI (Artificial General Intelligence) doesn’t exist yet.
  4. There are two ways of thinking about AI: as a tool and as a being.
  5. AI does not have sentience and cannot have emotions; if AI shows emotion, it is synthetic emotion that it has been trained to simulate. Even IF(!!) AGI is developed and we create super-intelligent machines, that still doesn’t mean they will have sentience.
  6. Worrying about AI taking over the world is as helpful as worrying about overpopulation on Mars.