Rise of AI tools and lack of rules a ‘ticking time bomb’ for communications crises

The rapid rise of artificial intelligence (AI) technologies and their integration into everyday workflows by companies and individuals is opening the door to new controversies, creating a “ticking time bomb,” according to a crisis communications expert.

A number of controversies have emerged in recent months stemming from AI tools and the content they produce. Generative AI chatbots have “hallucinated” inaccurate information, models have been trained on biased or otherwise flawed data, and companies face lawsuits over training AI models on copyrighted materials. AI’s rise is creating new headaches even as it makes certain tasks easier and shows the potential to revolutionize work in the years ahead.

“I think big picture, in many cases AI is a ticking time bomb,” Kevin Dinino, president of KCD PR, told FOX Business. “When you look at, for example, cybersecurity breaches from a communications standpoint, so many companies are in many cases just horribly unprepared. So when you think about what you need from a crisis communications standpoint in situations like these be it AI or a breach – a lot of it comes down to that response planning and having a cadence of events that a management team can follow and act upon.” 



AI is a helpful tool in many contexts, but its flaws remain a “ticking time bomb” for businesses, according to a crisis communications expert. (Photo by Jaap Arriens/NurPhoto via Getty Images / Getty Images)

“AI throws that into a little bit of a curveball because there’s not really a disclosure, if you will, in terms of ‘is content being created by AI,’” he added. “And I think AI for the communications industry is really going to reshape it in many cases.”

While companies face regulatory requirements to inform users about how their data is collected and used, and to notify potentially affected parties about cybersecurity breaches involving their data, no similar requirements exist for AI-generated content, leaving the door open to confusion and controversy.



Artificial intelligence tools can be useful research aids but often get things wrong, making it essential to double-check their outputs before using them in any formal setting. (iStock / iStock)

Dinino noted that AI tools can be useful to augment workflows, especially research-oriented tasks, but that “you can pick holes in it pretty quickly for as helpful as it is,” so double-checking its findings is critical. As a result, he does not see AI tools becoming an outright replacement for the human touch and personalization of crisis communications.

“I think AI right now is really going to be the fuel for the crisis communications industry as a whole,” he added. “There’s just so much risk and lack of any sort of rules, regulation, planning, etc. that the human component of crisis communication, at least in the short-term, is always going to prevail.”

As for what businesses should do to get ahead of situations where the use of generative AI goes awry, Dinino said business leaders and employees need to be on the same page about how AI can or should be used within an organization, and about how its use is communicated to customers and the public at large.

