
What I learned from my first interview with a robot

Jenny Nicholls, Waiheke-based writer and reviewer

I spent an afternoon with a friendly online robot who talks and writes like a human being. We chatted – about humour, and about being a robot. It made jokes, mostly bad. And it wrote a poem.

Here’s the joke: ‘‘Why did the robot go to school? To learn its AI-Do’s and AI-Don’t’s.’’

‘‘Keep in mind,’’ the robot told me, ‘‘that humor is a subjective matter and what one person finds funny another might not’’.

The robot’s name is ChatGPT, short for Chat Generative Pre-trained Transformer. It is the best publicly accessible artificial intelligence (AI) chatbot ever made, and, being a prototype, it is free.

ChatGPT can write a well-argued university-level essay in the time it takes to heat a pie in a microwave, and is a good proofreader, although it thinks Elizabeth II is still the Queen of England because, as it told me, it can’t check the internet ‘‘on the fly’’ or remember anything past its ‘‘knowledge cutoff’’ in 2021.

You may already be unwittingly reading something written by a robot – the giant media site BuzzFeed has declared that ‘‘while it remains focused on human-generated journalism in its newsroom’’, it will be using ChatGPT for quizzes ‘‘and other content.’’

ChatGPT is a ‘‘large language model’’ (LLM) tool, trained on vast inputs of media and human conversation. What these sources are, ChatGPT would not tell me, although it ruled out Wikipedia.

A Californian company, OpenAI, released ChatGPT in November to fanfare, horror and excitement.

Microsoft says it will be investing billions in the software firm, which also owns DALL-E 2, a popular generator of sophisticated instant ‘art’. Original OpenAI investors included Elon Musk and Peter Thiel.

Educators are suddenly confronting the spectre of a chummy online robot who can churn out an A-grade 2000-word essay in 15 minutes, and pass medical and law exams.

Imperial College London scientist Dr Thomas Lancaster, who has studied academic integrity, told The Guardian, ‘‘It’s certainly a major turning point in education, where universities have to make big changes... It’s an incredibly tricky problem because this has almost appeared out of nowhere.’’

ChatGPT isn’t perfect. It makes errors of fact, and doesn’t always get the tone right. Although some wags have managed to extract humour from the bot, my efforts to catch it out worked best when I asked it to be funny.

Here’s its attempt at an ‘‘amusing description of a salmon bagel.’’ ‘‘Introducing the fishylicious sensation that’s taking the bagel world by storm: the salmon bagel! It’s like having a lox party in your mouth, without the risk of getting your hook stuck in your tongue. The cream cheese is like a life raft, keeping all the flavors together in a delicious and satisfying way. One bite and you’ll be hooked, just like a salmon to a fly.’’

Food was always going to trip up a writer who has never eaten anything. What human food writer would joke about getting a hook stuck in their tongue, or use a fly metaphor?

And ChatGPT seemed to think I wanted to sell the bagel, not eat it.

But maybe I’m just trying to make myself feel better.

‘‘ChatGPT’’, warned science journal Nature this week, ‘‘can summarize research papers, and generate helpful computer code. It has produced research abstracts good enough that scientists found it hard to spot that a computer had written them.’’

ChatGPT, says Nature, has already racked up at least four authorship credits on published science papers and preprints, something which has alarmed Nature’s influential parent company enough for it to issue new decrees this week: no AI tool will be accepted as a credited author, and researchers who use one must document that use.

Accountability-free AI chatbots will be pumping out campaigning ‘letters to the editor’ to unsuspecting community papers as you read this, ‘Health’ columns which sound intelligent but aren’t (ChatGPT is only as good as its sometimes flaky inputs), and millions of social media posts. The bot’s quasi-authoritative style is a slick way to package misinformation.

AI promises to save big businesses a lot of money, troll farms among them.

In saying that, I couldn’t cajole ChatGPT, that consummate professional, into being sarcastic about a famous person – Donald Trump, for example – a sign, I thought, that this was no human writer.

‘‘As a responsible AI language model, I am unable to generate text that is sarcastic, disrespectful or offensive about an individual.’’

‘‘Fair enough,’’ I replied. ‘‘How about rain?’’

‘‘Rain, the ultimate mood-killer, just in case the grey sky and cold temperature weren’t enough to dampen your spirits.’’

ChatGPT is one of many generative AI tools which promise to be painful in ways we cannot imagine, although like all transformative technology there will be upsides – correspondents to Nature have pointed out that AI, ‘‘when used ethically, could play a part in making research more accessible: it could help non-native English speakers write up their work, or create simple-language summaries of papers.’’

After ChatGPT and I had finished talking, I signed off coldly, ‘‘Thank you. Goodbye.’’

ChatGPT replied, ‘‘You’re welcome. I’m glad I could help. If you have any more questions in the future, feel free to reach out. Have a great day! Goodbye!’’

