Generative AI: Harnessing the speed of change Episode 1


From disinformation and deepfakes to stolen intellectual property and fake student essays, artificial intelligence has moved forward at such dizzying speed over the last year that it’s rarely out of the news. And it’s mostly bad news. But what about the positive side?


Some are calling it the fourth Industrial Revolution.

From the moment the likes of ChatGPT - fast, clever, and seemingly inexhaustible - entered the public consciousness in November 2022, people have been grappling with what artificial intelligence means for their jobs.

It's easy to picture a dystopian future - mass redundancies as robots that never tire become our engineers, our teachers, our waitstaff - even our police officers.

But LinkedIn editor in chief Dan Roth says a recent survey reported 40 percent of people are already using generative AI at work, and the way they use it largely falls into two camps: to generate ideas, or to solve problems.

Generative AI can summarise long reports, come up with new logo ideas, write code, and schedule meetings. Other forms of deep learning AI can take away time-consuming or dangerous tasks, or design smart agricultural solutions and climate prediction tools. For many people, AI is a job enhancer.

Murray Robertson is one of those people.

The chief executive of infrastructure company Downer, Robertson has 33,000 workers operating across hundreds of sites in New Zealand and Australia, including on the single largest transport infrastructure project ever undertaken in New Zealand: Auckland's City Rail Link.

Underground at the massive construction site is kilometre upon kilometre of tunnels. Robertson sees AI playing a crucial role in those tunnels – and on other big Downer projects – powering special cameras that will be able to monitor sites and safely coordinate the comings and goings of workers and machines.

"Across the civil and infrastructure space, we're constantly looking at ways we can use tech to advance the way we deliver our work, so we can deliver it more efficiently, and in a safer manner."

Downer partnered with Rush Digital to find ways to use AI in its projects. Rush founder Danu Abeysuriya says the idea is for the cameras to be able to feed images from the site to a central cloud, which then analyses what it sees in the image in real time. An AI camera might pick up cars speeding by the construction site and alert the site manager, who might then decide to put up more signs, or station a stop-go worker on the roadside.

"These tools increase efficiency of existing processes," Abeysuriya explains. "If you know within three hours of deploying a traffic management plan that it's not slowing down traffic, you can act faster. Your risk window is only three hours, whereas typically it might take a day."

It sounds simple, but of course it's not. A human can look at an image and automatically distinguish between a construction worker and a digger, a signpost and a woman pushing a pram – and, more importantly, make a judgement call about what things are important and what aren't. Artificial intelligence needs to be taught these values, slowly and painstakingly.

But once it's learnt, AI can take huge amounts of data and analyse it instantaneously, and at all times of the day - including, crucially, when workers are off the clock.
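In outline, the pipeline Abeysuriya describes can be sketched in a few lines: classify what is detected in each frame, and raise an alert in the same moment rather than archiving footage for later review. This is a minimal illustration only - the class labels, priority rules, and function names here are all invented, not Rush Digital's actual system.

```python
# Hypothetical priority rules: which detected classes warrant an immediate alert.
URGENT = {"person_down", "vehicle_speeding"}

def analyse_frame(detections):
    """Given the object labels detected in one frame, return urgent events."""
    return [label for label in detections if label in URGENT]

def monitor(frames, send_alert):
    """Process a stream of (timestamp, detections) pairs, alerting within
    the same iteration - unlike CCTV, which only records for later review."""
    for timestamp, detections in frames:
        for event in analyse_frame(detections):
            send_alert(f"{event} detected at {timestamp}")

# Example stream: most frames are routine; one contains an urgent event.
frames = [
    ("08:00:01", ["worker", "digger"]),
    ("08:00:02", ["worker", "person_down"]),
]
alerts = []
monitor(frames, alerts.append)
print(alerts)  # only the urgent event produces an alert
```

The design point is the one Abeysuriya makes: the value is not in the detection itself but in collapsing the gap between detection and action.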

"Someone, even a member of the public, having a heart attack and falling over at a site, those are things you want to know about straight away," says Robertson. As the AI develops further, the cameras might be able to send him a text when a serious incident is detected.

"We joke that CCTV allows you to watch someone steal your stuff from two weeks ago," says Abeysuriya. "That's really the difference between AI-based cameras and CCTV. An AI system allows you to act immediately and trigger action within seconds or minutes."

More startling than AI's ability to learn is its ability to create. Well-known programmes like ChatGPT, which have made headlines for their ability to churn out school essays, theatrical plays, or whole book series plots in a matter of seconds, can be classified as 'generative' AI, says Emma Maconick, partner at EY New Zealand and head of the company's Oceania data, digital and technology law team.

"It's not replicating. It's creating a new image, a new text, new audio, new video," she says. "The algorithm you're using has been trained on all the text in the world - or, at least all that can be accessed through the internet."

Maconick explains how the complex language models behind generative AI complete prompts using probability.

"If I say 'Jack and Jill', your brain is already thinking about how to complete that sentence: 'went up the hill to fetch a pail of water'. So what the computer programme does is identify that 'Jack' often appears next to 'Jill' and 'fetch a pail of water', all of which are quite random words often clustered together, and therefore, it will predict the next sentence and finish it for you.

"And to us that feels like logic, and it feels like intelligence, and it feels like knowledge. But, in fact, what it is is a probabilistic calculation of which word is likely to follow another, based on bajillions of data points that this algorithm has been pre-trained on."
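Maconick's 'Jack and Jill' example can be sketched as a toy bigram model: count which word most often follows which, then predict one word at a time. Real large language models work on the same probabilistic principle at vastly greater scale and sophistication; the tiny corpus and function names here are invented purely for illustration.

```python
from collections import Counter, defaultdict

# A toy training corpus; real models are trained on trillions of words.
corpus = (
    "jack and jill went up the hill to fetch a pail of water "
    "jack fell down and broke his crown and jill came tumbling after"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after this one, if any."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

def complete(prompt, length=8):
    """Greedily extend a prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(complete("jack", 12))
```

Given the prompt "jack", the model reproduces the nursery rhyme - not because it understands it, but because those words cluster together in its training data, which is exactly Maconick's point.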

Beyond writing a passable book report, generative AI has uses in virtually any workplace. Maconick gives the example of AI creating a 3D map for a warehouse layout that maximises stock space, or generating call centre scripts in real time based on a customer's perceived temper.

Generative AI is such a Jack - or Jill - of all trades, it's no wonder it's at the heart of the battle between Hollywood film studios and the Writers Guild of America.

Self-described tech futurist Ben Reid says AI has led to over 200,000 layoffs in the tech industry worldwide.

"You've got to wonder whether it's worth putting four years into a software engineering degree when the job can be done by anyone with ChatGPT," he says.

Then there are the cases of malicious AI use, like deepfake technology and increasingly sophisticated financial scams. Reid says there have been reports from the US of synthetic voices being used to scam people over the phone, making convincing-sounding pleas for ransom money.

But EY's Emma Maconick, herself a "professional catastrophist", isn't fazed.

"Generative AI doesn't scare me any more than any of the dumb stuff humans can do," she says.

"We may have massively underestimated the genie that we've let out of a box... but I do believe in the power of our adaptability and our creativity.

"And if I gave you three hours of your day back because we took repetitive or friction-intensive tasks off your plate, what would you do with that time?"

Presenters

Emma Maconick
EY Oceania Data and Technology Law Leader

Podcast

Episode 01

Duration 27m 02s