Who can you trust nowadays to be straight and honest with you? And does it really matter? I ask primarily because I’ve been following the antics of Elon Musk, he of Tesla and Twitter and SpaceX and around 10 children, and marvel at his, umm, ability to be convincing.

It turns out that a 2016 video promoting the self-driving technology of Tesla's Model X electric vehicle, showing the car driving itself on streets and highways, stopping at red lights and so on, wasn't all it seemed. The video is still on Tesla's website and says: “The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.”

In fact, the guy in the driver's seat was there to correct the car's many errors. Ashok Elluswamy, the director of Autopilot software at Tesla, has admitted that the video used 3D mapping on a predetermined route, and that the many human interventions, plus the test car crashing into a fence in Tesla's own carpark, were edited out.

The point being, just because someone says some new tech will perform tasks that will make you gasp out loud, it ain't necessarily so. So now meet ChatGPT, the latest artificial intelligence breakthrough, which its proponents say is the beginning of a better-communicating world.

Universities and schools everywhere are worrying, and writers like me wonder if their eventual redundancy letter, oozing regret and sincerity, will be written by a bot.

The ChatGPT program can take information and write around it, with results that I must admit can be quite impressive. The New York Times put it up against seventh graders, giving kids and machine similar instructions for writing an essay. As an editor of other people's articles, I could identify the computer-written efforts only by a certain lack of excitement in their accounts of what they did on vacation.

The journalists at Insurance News have been having a play with the system online, and report that for now it’s handy for non-complex stuff, but that it lacks a very important aspect of a journalist’s skills that only comes with years of experience: an impenetrable bullshit detector.

ChatGPT can “challenge incorrect premises”, according to its developers, as well as admit its mistakes. But just as it was with its ancient forebears, the rule holds true today: garbage in, garbage out.

But a rather more sophisticated form of garbage perhaps? Our first trial was a tough one. We asked the system a simple question: Who are the top 20 most influential people in the general insurance industry in Australia?

It responded: “I’m sorry, but I don’t have a current list of the top 20 most influential people in the general insurance industry in Australia. The information I have been trained on only goes up to 2021, and such a list is subject to change and could vary depending on the source and criteria used to rank the individuals. It’s important to note that the insurance industry is vast and includes many different specialties, so a list of the top 20 most influential people in the industry would likely vary based on the perspective of the person compiling the list.”

I couldn’t agree more. That’s exactly how we approach our Top 20 list each December.

The next question: What are the key issues in the Australian general insurance industry?

It responded: Some key issues in the Australian general insurance industry are:
• Affordability and access to insurance
• Natural disaster management and pricing
• Regulation and conduct in the industry
• Fraud and anti-competitive behaviour
• Consumer protection and education
• Cyber security risks
• Climate change and its impact on the industry.

That's pretty darned good, when you consider we left the thing to find it all out for itself. But could it take that information and write an article featuring a range of different viewpoints, at least one of which may be a bit dodgy? People automatically answer questions weighted with their own perceptions. To balance that you need experience of the subject, intuition and the ability to detect the information hidden among the blather.

Until they can connect interview subjects to a lie detector, we’ll put our faith in humans for collecting your news and information.