‘Pause’ of ChatGPT-type AI development not practical – panel

The Business School and the Auckland Business Chamber joined forces to put on a panel discussion regarding AI tools like ChatGPT.

A suggestion that artificial intelligence developers should pause work on any large language models more powerful than GPT-4 is not practical, but regulators need to familiarise themselves with generative AI in order to combat its potential harms.

That was the consensus view of four expert panellists from commerce and academia who appeared at a University of Auckland Business School and Auckland Business Chamber event on Tuesday night, in front of a sell-out audience of around 150 people.

The National Business Review (NBR) facilitated the panel, which comprised Alex Sims, associate professor in the university’s Department of Commercial Law; Paul Geertsema, senior lecturer in the Department of Accounting and Finance; Duncan Shand, managing director of YoungShand; and Andrew Radcliffe, founder and CEO of 3PM.

The question of a “pause” was recently raised in an open letter by the Future of Life Institute which was signed by thousands of academic and industry figures, including the likes of Elon Musk and Apple co-founder Steve Wozniak.

The signatories fear that hurried and unregulated AI development could exacerbate levels of disinformation and have unforeseen economic and societal consequences, potentially even leading to the demise of the human race.

Shand, a digital marketing specialist, has been using ChatGPT daily to help brainstorm and refine content that he and his human employees create. He said he thought a six-month pause on AI development “is kind of like asking your teenagers to stop using social media”.

Shand also felt that some of the rhetoric around the dangers of large language models like ChatGPT was a little overstated.


Geertsema, a member of the Artificial Intelligence Research Association, said he sympathised with the letter-writers’ concerns but did not believe that Armageddon was imminent due to AI development. However, the ability of AI models to be disruptive in unanticipated ways should be taken seriously, he said.

He wants engagement by society – not just the tech community – on the development of controls and guardrails so that we can move towards a world where AI is “a net benefit to society rather than potentially putting us in an unanticipated place that nobody wants”.

Meanwhile, Radcliffe, whose company provides AI automation systems to assist small and medium-sized enterprises, suggested that an explosion of AI-generated disinformation would soon become apparent to the general public as elections are conducted in the ChatGPT age, and therefore backed the need for regulation of how the models are built.

From a practical perspective, Sims noted a recent comment from Google CEO Sundar Pichai that he would not commit to such a pause, saying it was not clear how such a measure would actually work without government intervention.

“His point was that he basically doesn’t trust the others not to do it, so the question is enforceability. And if the good players pause and the so-called bad players don’t, then we’re in a worse situation.”

What about my job?

Armageddon aside, attendees at the event were keen to hear about the prospects for employment in the ChatGPT era.

Asked what jobs would be created by large language models, Geertsema noted that “prompt engineer” is now a trending job title: someone who is adept at asking large language models the right questions to get the desired answers.

“What ends up happening is not that there will be a class of jobs that are only related to large language models, but really the ability to use large language models is going to become a valuable skill in a range of occupations.”

Shand agreed that people with such skills would be more productive and therefore more valuable to any organisation.

“People are using it to research…they’re using it to create headline alternatives, to write short copy, to summarise documents, to come up with podcast outlines – so there are lots of different use cases for it,” he said earlier in the discussion.

“It’s not going to replace the magic that happens. We’re about working strategically: understanding our clients’ businesses, their sectors, understanding their customers, coming up with interesting creative ideas. It’s not going to do that – yet.”


Radcliffe works with companies in the construction sector, which isn’t always known for being on the leading edge of digital technology, but the analogy he uses strikes a chord.

“They used to use a hammer to hit a nail, and now they use a power tool, and this is exactly the same.

“Very small things that you do every day – getting a bill in, processing the bill and paying the bill – most companies that are embracing AI don’t touch that at all,” because the process is automated end to end.

If you process 100 bills like that “you become a very productive company, and if you don’t do it, you become very unproductive”.

To a further question as to whether ChatGPT would spell the end of professionals in fields such as law and accounting, Sims said they would not be replaced altogether, but there would likely be fewer positions and changes to traditional roles. Computerised spreadsheets did not kill the accounting industry, she noted.

There will always be a role for humans to decide which problems need solving, said Geertsema.

“You can use these tools to solve problems, but a human needs to actually decide on priorities: which are the problems we want to solve? What will create value in society? Ultimately, that is not a question we want to leave to machines; that is something we as a society, hopefully, decide on.”