AI is supposed to replace boring, dull and monotonous work. Photo: Photocreo Bednarek

I think, therefore I think, therefore ...

Wednesday, 1 May, 2024 - 14:00

Modern artificial intelligence systems are pattern-matching marvels.

Last month, researchers at Imperial College London released a system that controlled a robot using ChatGPT.

Unusually, the researchers gave it strings of numbers. No text or grammar, just numbers. These numbers were compressed versions of images of the robot’s environment, followed by numbers representing robot arm movements. The researchers collected this information and fed it into a single, big ChatGPT prompt.

Finally, the researchers inputted some numbers representing a new environment, but without the robot movements for that environment. They then waited for ChatGPT to respond and used whatever this response was to control the robot arm.
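The protocol described above can be sketched in a few lines. This is an illustrative reconstruction only: the helper below, the arrow formatting and the numbers are all made up for the example, not the researchers' actual encoding, and the step of sending the prompt to ChatGPT is omitted.

```python
def build_prompt(examples, new_environment):
    """Concatenate (environment, movement) number pairs into one text
    prompt, ending with a new environment whose movement the model is
    left to complete. Purely illustrative; the real format is assumed."""
    lines = []
    for env, move in examples:
        env_str = " ".join(str(n) for n in env)
        move_str = " ".join(str(n) for n in move)
        lines.append(f"env: {env_str} -> move: {move_str}")
    # The final line has no movement: the model's completion of this
    # line is what would be used to drive the robot arm.
    new_env_str = " ".join(str(n) for n in new_environment)
    lines.append(f"env: {new_env_str} -> move:")
    return "\n".join(lines)

examples = [
    ([12, 7, 3], [1, 0, 2]),  # compressed image numbers -> arm movement numbers
    ([11, 8, 2], [1, 1, 2]),
]
prompt = build_prompt(examples, [13, 6, 4])
print(prompt)
```

The point is that nothing in the prompt is text or grammar in the usual sense; the model is asked to continue a purely numerical pattern.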

Amazingly, the plan worked. Even though ChatGPT is designed for conversational chat, it was able to see patterns in the seemingly random numbers and succeed nearly 80 per cent of the time.

This general pattern matching demonstrates super-human capability, but does it demonstrate thinking?

Anyone who has interacted with a chatbot will know about its tendency to hallucinate, although ‘hallucinate’ is just a polite way of saying that it lies and makes things up when it doesn’t know something.

I think this distinction is important because there are many places where we don’t want things to be simply guessed and lied about. And these are places that look like the boring, dull and monotonous work that AI is supposed to replace.

Consider a cyber security safety review for new dashboard software. The software is going to show data about a hazardous industrial plant, perhaps an oil refinery or a chemical plant. The team that wants to implement the system will need to sit down and have a discussion with their cyber security team.

These discussions are highly process driven. There are spreadsheets, forms and boxes to be ticked. AI could be hugely valuable in this setting. If these reports could be generated by a simple paragraph prompt, it would save hundreds of meetings of highly paid workers every year, easily outweighing the cost of implementation. An easy business case.

But this is a terrible business decision. It comes down to what the templates, spreadsheets and forms in a safety audit represent: they represent tools for thinking.

A junior data scientist can quickly create a model good enough to solve this cyber security report problem and produce the required forms. However, the model will likely spend as much time on the question ‘how many CPUs does the computer it’s running on have?’ as on ‘are there any safety critical systems the dashboard needs direct access to?’

Answering the former is a simple matter of comprehending the software’s marketing material; the latter requires a deep understanding of the entire industrial plant’s network setup.

The AI is largely a pattern-matching system, so if most of the dashboards to this point haven’t needed direct system access (but this one does), then it will probably wrongly answer ‘no’, creating a potential cyber security hole.
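The failure mode is easy to see in miniature. The toy model below is my own illustration, not anything from the research: a pure pattern matcher with no understanding simply returns the most common answer seen so far, which is fine on average and dangerous on exactly the rare case that differs.

```python
from collections import Counter

def pattern_match_answer(history):
    """Return the most common past answer. This stands in for a
    pattern matcher that has no grasp of why any answer was given."""
    return Counter(history).most_common(1)[0][0]

# Nine past dashboards needed no direct system access; the tenth did.
history = ["no"] * 9 + ["yes"]

# A new dashboard genuinely needs access ("yes"), but the pattern
# matcher follows the majority and answers "no" -- the cyber security hole.
answer = pattern_match_answer(history)
print(answer)  # "no"
```

A reviewer reading that ‘no’ on a form has no way to tell it apart from a considered judgement, which is the crux of the problem described below.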

It seems unlikely AI will be allowed to do all this without some human oversight. The business case will likely include giving the output to an architect for a final review. The problem is that the AI is designed to generate highly plausible-looking output forms (that is, to lie and hallucinate). This makes the reviewer’s job even harder: they will need to carefully doubt every statement the AI makes, and doing a good job will take longer. If the business case depended on reducing the time spent in cyber security meetings, then many false reports are bound to get through.

After one costly mishap, the company will be forced to bring back the expensive cyber security meetings. Of course, ultimately, these meetings weren’t really about pattern matching, but rather educating the rest of the company about cyber security and thinking through the potential consequences.

Not something you should outsource to AI.

• John Vial has a PhD in robotics and has spent the past several years leading teams in major Perth businesses focused on AI and robotics