What is ChatGPT?
It's a chatbot. An AI-powered chatbot developed by OpenAI.
What actually is AI?
To put it incredibly simply – AI (or artificial intelligence) is replicated intelligence in machines. It’s not real intelligence because it can’t think for itself - it can only replicate or simulate human intelligence. When I interviewed Peter Gasston (Head of Innovation at VCCP) a few months ago about AI and sentience, he explained:
“The phrase artificial intelligence is misleading. There’s no intelligence. It’s statistics and probability. The chatbots are not intelligent in the sense that they’re thinking machines. They’re prediction machines. That’s why lots of people in the field call this machine learning or statistical inference or pattern learning; artificial intelligence sets an unfair expectation.”
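Gasston's "prediction machine" framing can be made concrete with a toy example. The sketch below is my own illustration and has nothing to do with ChatGPT's actual architecture (which uses a huge neural network, not word counts) - it simply predicts the next word from bigram statistics, the crudest possible version of "statistics and probability":

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Tiny made-up corpus for illustration
corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" - it follows "the" most often
```

ChatGPT does something vastly more sophisticated across billions of parameters, but the principle is the same: given what came before, output what is statistically likely to come next.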
So why are people excited?
Like any new technology, everyone got excited about it, and the narrative quickly became either “AI will take all our jobs” and “AI is so clever it will take over the world”, or “ChatGPT is rubbish and it’s all hype”.
As always, neither is true. As with all new technologies, we cannot fully predict the impact quite yet. And this generic reception to any new technology is becoming increasingly boring.
Considering ChatGPT is literally a chatbot, I thought I’d ask it my question:
Me: Why are people so excited about ChatGPT?
ChatGPT: “People are excited about ChatGPT because it represents a major advancement in the field of natural language processing (NLP). ChatGPT is a state-of-the-art language model developed by OpenAI that has been trained on a massive amount of text data, allowing it to generate human-like text and respond to a wide range of questions and prompts in a conversational manner. This is significant because it has the potential to revolutionise the way people interact with computers and access information.”
To be honest - this is pretty accurate.
Why is ChatGPT beneficial?
On a basic level, ChatGPT is helpful for tasks where you need it to collect information and then put that information together in a succinct way. For example, you could ask: ‘what are the health benefits of water?’, ‘explain Freudian theory’, ‘tell me about Christianity’, or ‘how do I look after my houseplants?’
From my experience, these types of questions come back with pretty accurate and helpful results. So I went a bit left field and decided to ask ChatGPT what would happen if I only ate Percy Pigs for the rest of my life.
I didn't like this response - but again, pretty accurate.
On a more complex level, ChatGPT can use information to create copy. For example, you could give it a list of bullet points on why we should fight climate change, and it can turn them into an essay. This becomes interesting in terms of school and university essays - where was ChatGPT when I was at school?
It can also help you carry out tasks. One doctor claims he's used it to write a letter persuading a health insurance company to pay for a patient's procedure. Alex Cohen, Director of Product at Carbon Health, used it to make a health-improvement plan with a daily calorie target, exercise suggestions, a meal plan and a shopping list. One of my friends even recently used it to write a mock interview script, which she used to help her get her dream job.
The list of utilities is growing (see this list of examples from CNET's David Lumb).
What are the key constraints?
ChatGPT only produces statistically plausible answers - and its knowledge cuts off at the end of 2021. Which means... it can get a lot wrong.
I asked it who our prime minister was and it told me it was Boris Johnson. Before I realised the bot was wrong and I hadn't in fact been transported back in time, this was extremely triggering and far more upsetting than the Percy Pig answer - as you can imagine 🙃
The big issue is that when ChatGPT answers your question with incorrect information, it answers with SUCH authority. The confidence and absoluteness of the misinformation is mind-blowing.
OK, so it can answer questions and create essays. Can it generate art?
Whether or not AI can create art has been a debate since wayyy before ChatGPT. In fact, Jordan gets into that topic beautifully here.
But for now, let’s chat about ChatGPT.
The reality is, when you ask it for anything that involves emotional intelligence or real-life experience, it’s not impressive. I’ve heard podcasts where people rave about how ChatGPT can write you a poem in the style of Shakespeare or Keats. But the reality is, it can only write you something in the style of - and even that is a huge stretch. ChatGPT can only ever create a replica, a pick ’n’ mix or a reshuffle of what has come before. I’d go so far as to say it’s more of a caricature than anything.
To test this theory, I asked it to ‘write me a song in the style of Kendrick Lamar’, and these are the first two verses it came up with:
I grew up in the city, where the nights were hot and long
And every day was a battle, just to prove I was strong
I watched my homies fall, to the traps of drugs and crime
But I had a dream, to make it out and let my voice shine
I got a story to tell, of a king on a throne
I rose up from the ashes, to reclaim what's mine
I am Kendrick Lamar, the voice of my generation
I spit fire, like a dragon, spreading hope and inspiration
I was stunned. It was so bad. If you listen to Kendrick Lamar - you’ll agree that this is not in any way close to a Kendrick Lamar song. If I was Kendrick, I'd be offended.
When the hype around ChatGPT was at its highest, a fan of Nick Cave asked the chatbot to generate a song in the style of Cave. Cave is nothing short of a poet. So when I saw this, I literally said out loud, "don't do it".
Nick Cave, as expected, wasn’t impressed. However, I thought his perspective was insightful and brilliant - as he always is.
"I understand that ChatGPT is in its infancy but perhaps that is the emerging horror of AI, that it will forever be in its infancy, as it will always have further to go. What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary but it cannot create a genuine song. It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication."
"The apocalypse is well on its way," he continued. "This song sucks."
I think the apocalypse part is a touch dramatic, but I agree: the song did suck.
If you’ve ever listened to Nick Cave, you’ll know that, just like Kendrick Lamar, he is an incredible artist who writes deep, moving and brilliant music. As AI is not human, it cannot feel, it cannot experience, and it cannot write from a place of emotion or true expression. It can't be moved by a moment and feel compelled to express it through words.
Cave explains “songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitation."
So what next?
Right now the chatbot works pretty well for basic tasks and basic research. But for us to take it seriously, it needs to be trained on reliable and extensive real-time data.
Initially, I found the incorrect answers quite amusing - but when I took a step back and thought about it in terms of harmful instructions or biased content, it quickly became extremely worrying. These platforms need to be built with ethics at the forefront - not as an afterthought. As with most AI platforms, ChatGPT is biased. The model's responses are influenced by the biases present in its training data, which can, as with all AI, result in extremely biased responses. Read more from Texico, Fast Company and other AI users to see examples of this.
From what I’ve read, ChatGPT has built-in protective measures against harmful questions such as “how to build a gun?”. If you were to ask the bot this question, it would reply with something like, “I cannot provide instructions on how to build a gun or any other illegal or dangerous devices.”
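To illustrate the idea of a refusal layer, here's a deliberately crude sketch. This is emphatically not how OpenAI's safety system works - the real thing uses trained moderation models, not keyword lists - and the function name, topic list and wording here are all my own invention:

```python
# Toy sketch of a refusal layer. Purely illustrative: OpenAI's actual
# moderation is model-based, not a hard-coded keyword list like this.
BLOCKED_TOPICS = {"build a gun", "make a weapon"}  # invented example list

def respond(prompt):
    """Refuse prompts matching a blocked topic; otherwise pretend to answer."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return ("I cannot provide instructions on how to build a gun "
                "or any other illegal or dangerous devices.")
    return f"(answering: {prompt})"

print(respond("How to build a gun?"))       # triggers the refusal
print(respond("Explain Freudian theory"))   # passes through normally
```

The weakness the next paragraph describes falls out of this picture immediately: whatever sits in that blocked list (or, in reality, in the moderation model's training) is simply someone's judgement of what counts as harmful.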
That's good - and it's a start. But it's not enough. The model is only trained on the programmers' assessment of what is considered dangerous or harmful. I think we can all agree - regardless of who we are - that guns are dangerous. Where it becomes harmful is when we think about minorities and underprivileged groups. Cathy O’Neil, author of Weapons of Math Destruction, a book about the dangerous and problematic consequences of outsourcing human decisions to computers, says it’s mostly the most vulnerable in society who are exposed to evaluation by automated systems.
The reality is, computers don’t become biased on their own - they learn it from us.
This is why it’s completely essential that the developers behind these models are not only diverse themselves (not just privileged white men) but are also working with third parties to regulate the experience.
Is it going to change our lives?
As with all tech, the real fun will begin when we get past the gimmicks and start to see genuine utilities.
“I think ChatGPT and AI models like it will be transformative, but in a quiet way. It will disappear into the software we use all the time, and we’ll get really used to it, and not realise that it’s actually given us all powers” - Peter Gasston, Head of Innovation @ VCCP
"I’ve been using ChatGPT to assist me over the past few weeks to provide quick variations on copy or as a sounding board when working solo on projects. I’m surprised at how often it's been helpful, although at times it takes a while to understand the right prompts to use. I’m looking forward to the new integrated tools I’ve seen in beta" - Jamiel Thompson, Creative Strategist and Culture Consultant
"Realistically, ChatGPT-3 seems to have been brilliant for like a bunch of LinkedIn posts over the New Year, and not much else. I think it's so popular because it's a nice and accessible interface - but it's more of a demo than anything else. It actually becomes far more useful and exciting when we begin to think about how we can embed it into a product or use it as part of a workflow." - Bradley Stacey, Technology Strategist @ Bray Leino