The next wave of AI is delivering: Three predictions for 2024
At the start of 2024, we looked at where companies were heading when leveraging AI, noting that the question for organizations was no longer “When should I pull the trigger on using AI?” but “How can I use AI strategically?”
Since then, the technology has grown even more potent, largely because GenAI is being combined with other emerging technologies.
For instance, McKinsey estimates that there is more than $4 trillion in value to be unlocked when GenAI and cloud technologies are combined.
With this in mind, AI’s impact on organizations is set to penetrate more deeply, but where exactly will this impact fall?
Here are three predictions for 2024 on the future of AI and the next wave of changes that we should expect to see in the industry.
Speech-writing industry forever changed
Back in 2022, two founders decided to use an AI-generated script for a 40-minute industry presentation. Part social experiment, part sheer curiosity, the finished result was bizarre in places but almost believable. The presenters decided that, although random, the output was solid enough for them to conclude that “the uncanny valley is getting shallower.”
Just two short years later, many of the earlier problems associated with AI’s tendency to make nonsensical decisions about content have been greatly reduced as the technology grows more sophisticated with each iteration.
With nothing more than a few prompts detailing the outline of a speech, along with any quotes or data points that need to be included, GenAI can now produce a structured initial script. This will be useful for C-Suite executives and industry leaders who are frequently invited to deliver keynote speeches, present awards and give public comments.
Further, Gartner predicts that by 2026, 75% of businesses will use generative AI to create synthetic customer data. In the context of speechwriting, this could prove useful for crisis communications, when a public statement must be delivered at very short notice yet its wording has a huge impact on how effectively the crisis is managed.
A private LLM would allow leadership to run mock scenarios that test a number of sample statements against audience sentiment. This would remove a great deal of chance from situations that are often fraught and tense, helping leaders deliver a statement that strikes the right tone and sentiment.
This approach has the added benefit of producing more nuanced results, as the LLM can be trained on broader internal data and past speeches given by the speaker to produce a script that adheres to corporate house style.
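As a rough illustration of what such scenario testing could look like, the sketch below uses an off-the-shelf sentiment classifier from the Hugging Face transformers library as a stand-in for a privately hosted, fine-tuned model; the draft statements and the ranking heuristic are hypothetical, not a description of any vendor’s actual implementation.

```python
# Hypothetical sketch: rank draft crisis statements by predicted audience sentiment.
# A real deployment would replace the generic classifier with a privately hosted
# model fine-tuned on internal data and the speaker's past statements.
from transformers import pipeline

# Off-the-shelf sentiment classifier standing in for a private, fine-tuned model.
classifier = pipeline("sentiment-analysis")

draft_statements = [
    "We are aware of the issue and are investigating.",
    "We take full responsibility and are contacting every affected customer today.",
    "The disruption was caused by a third-party vendor outside our control.",
]

# Score each draft; positive confidence is used here as a crude proxy for how
# the statement is likely to land with the audience.
results = classifier(draft_statements)
ranked = sorted(
    zip(draft_statements, results),
    key=lambda pair: pair[1]["score"] if pair[1]["label"] == "POSITIVE" else -pair[1]["score"],
    reverse=True,
)

for statement, result in ranked:
    print(f"{result['label']:>8} ({result['score']:.2f})  {statement}")
```

In practice, such a model could run behind the company’s firewall and be evaluated against segmented audience profiles rather than a single generic sentiment label.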
New industry use cases for AI
In an earlier article, I explored how AI-driven chatbots would be leveraged to transform customer service interactions.
Initially, companies hoped to capture gains by enhancing the productivity of customer service agents or reducing the number of agents overall; however, some pitfalls associated with this approach are becoming clearer. These include deploying bots as a standalone solution rather than as part of an omnichannel system, failing to address the quality of underlying data, and sending customers into frustrating repeat loops that cause satisfaction rates to nosedive.
In response, companies that have been deploying GenAI chatbots need first and foremost to clean up their knowledge bases and customer data. We should expect to see more focus on this in 2024 as adoption continues to increase.
Often, companies find it easier to start with a fresh knowledge base generated from desirable customer interactions sourced from calls, chats and emails, which avoids contradictory sources. Upwards of 40% of businesses are expected to take this approach.
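As a rough sketch of that approach (not drawn from any vendor’s actual pipeline, and with illustrative field names such as csat and resolved), the snippet below filters support transcripts down to resolved, highly rated interactions and deduplicates them by question so the resulting knowledge base avoids contradictory answers.

```python
# Hypothetical sketch: seed a fresh knowledge base from high-quality support
# interactions (calls, chats and emails) instead of legacy documentation.
from dataclasses import dataclass

@dataclass
class Interaction:
    channel: str      # "call", "chat" or "email"
    question: str
    answer: str
    csat: float       # customer satisfaction score on a 0-5 scale (illustrative)
    resolved: bool

def build_knowledge_base(interactions, min_csat=4.5):
    """Keep only resolved, highly rated interactions and deduplicate by question."""
    kb = {}
    for item in interactions:
        if item.resolved and item.csat >= min_csat:
            # One answer per normalized question, which avoids contradictory
            # entries for the same query.
            kb[item.question.strip().lower()] = item.answer
    return kb

interactions = [
    Interaction("chat", "How do I reset my password?", "Use the 'Forgot password' link on the sign-in page.", 4.8, True),
    Interaction("call", "How do I reset my password?", "Call support and wait on hold.", 2.1, True),
    Interaction("email", "Can I change my billing date?", "Yes, under Settings > Billing.", 4.9, True),
]

print(build_knowledge_base(interactions))
```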
We’re also likely to see an uptick in the number of specialized industries deploying chatbots. By 2027, more than 50% of the GenAI models that enterprises use will be specific to either an industry or a business function. This will have the added benefit of domain models that are smaller, less computationally intensive and less prone to the hallucination risks associated with general-purpose models.
The legal sector is one example of a more regulated industry in which AI is adding significant value. Due to the high-pressure nature of a lawyer’s job, status updates are often infrequent or incomplete, compelling clients to schedule in-person meetings with the law firm. Equally, drafting legal documents and agreements can be a complicated process. For these reasons, using a private AI tool as a copilot can offer significant value to the business relationship.
Trust and transparency will gain even more prominence
Alongside the rise of new use cases and more sophisticated models, we’re also going to see more focus placed on building trust and transparency with customers and clients.
Many companies are choosing to apply labels to AI-generated content or name chatbots in a way that makes the use of AI crystal clear each time customers interact with the tool. For example, in April 2024 Meta released detailed new guidelines on how the use of AI across its community of users will be labeled going forward, announcing that it will begin labeling a wider range of video, audio and image content as “Made with AI” when it detects industry-standard AI indicators. We should expect to see more of this.
Governments and policymakers are also closely monitoring what steps are needed to avoid the erosion of public trust in online content, particularly in the run-up to a year of global elections. To give an example, President Biden issued a wide-ranging executive order in October 2023 that included standards and best practices for clearly labeling AI-generated content and guidance on avoiding deepfake content.
At the moment, such measures are voluntary, but as governments and policymakers respond to the explosive adoption of this technology, they may well become regulatory standards in the near future.
The next frontier for AI
AI has already had an irreversible impact on companies and organizations. Alongside the exciting potential, pitfalls such as hallucinations and deepfakes have also been put into closer focus.
However, solutions to these challenges are being developed alongside new algorithms to help society harness the power of AI in a fair and trustworthy manner.
This article was authored by Rajat Mishra, CEO of Prezent.