In addition to pricing fears, IDC found concerns about bad outcomes (51.3%), including unintended bias, unauthorized use of someone else's intellectual property, and unintentional leakage of confidential information, as well as a lack of confidence in the benefits of generative AI (46.1%), to be top roadblocks to adoption.
Here, an antidote may be using SaaS agents and pursuing basic gen AI use cases, such as automated document summarization, rather than attempting to build and train a foundation model, says Paul Beswick, CIO of Marsh McLennan. Doing so can also be a cost-conscious inroad to AI, he adds.
“There is absolutely a sweet spot of relatively easy-to-access capability at a modest price that many technology organizations are perfectly capable of reaching. I think the bigger risk is that they get distracted by trying to shoot for things that are less likely to be successful or buying into technologies that don’t offer a good price/performance trade-off,” he says.
“Most organizations should avoid trying to build their own bespoke generative AI models unless they work in very high-value and very niche use cases,” Beswick adds. “For most companies, I think there’s far better return in taking advantage of the ecosystem that’s being built and that is relatively easy to buy or rent your way into.”
UST’s Masood agrees that the potential cost of model training isn’t for the faint of heart.
IT leaders “seem most alarmed by the specter of runaway training bills: Once you press ‘go’ on a large-scale generative model, it can be a bottomless pit without operational transparency and robust risk mitigation strategies,” he says. “At the same time, a daily sticker shock from incremental charges wreaks havoc on institutional legitimacy — no one wants to explain last night’s spike in AI usage to the board without a strong governance innovation framework.”
Budget constraints, driven largely by the cost of GPUs, also stand in the way of building out AI infrastructure, Rockwell’s Nardecchia says. A shortage of experienced AI architects and data scientists, technical complexity, and data readiness are also key roadblocks, he adds.
“Foundational models require vast, clean, and structured data — and most organizations are still battling legacy silos and low-quality data. This is largely the No. 1 constraint I hear from peers,” he says, regarding concerns about bad outcomes.
Vendors are working to overcome these obstacles by addressing pricing concerns and trying to improve outcomes. For example, Microsoft this week introduced consumption-based pricing for Copilot Chat. And Amazon recently unveiled features for its Bedrock generative AI platform designed to improve outcomes.
At AWS re:Invent, DoorDash’s Chaitanya Hari said Amazon Bedrock’s new Knowledge Bases feature allowed the company to implement the entire retrieval-augmented generation (RAG) workflow, from ingestion to retrieval, without a lot of custom data integrations or complex back-end data management.
“Even if a model is fast and fairly accurate, how do we ensure that it’s pulling information from the context that we’ve provided and not just making things up? We went through multiple iterations of prompt engineering and fine-tuning to ensure our AI models reliably referenced only the knowledge bases that we provided with Amazon Bedrock,” said Hari, product owner of enterprise AI solutions at DoorDash.
“We were able to mitigate a large portion of our hallucinations, prevent things like prompt-injection attacks, and detect things like abusive language,” Hari said. “This gave us the confidence to scale without compromising on quality or trust.”
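For readers wondering what a managed RAG workflow of this kind looks like in code, the following is a minimal sketch using boto3’s bedrock-agent-runtime client and its retrieve_and_generate call. The knowledge base ID, model ARN, region, and query below are placeholders for illustration, not details from DoorDash’s implementation.

```python
import boto3

# Placeholder identifiers -- substitute your own knowledge base ID and model ARN.
KNOWLEDGE_BASE_ID = "XXXXXXXXXX"
MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-sonnet-20240229-v1:0"
)

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# One call handles retrieval from the knowledge base and grounded generation.
response = client.retrieve_and_generate(
    input={"text": "What does our support policy say about refunds?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KNOWLEDGE_BASE_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

# The generated answer, grounded in the retrieved documents.
print(response["output"]["text"])

# Citations point back to the source passages the model drew on,
# which helps confirm the answer came from the provided context.
for citation in response.get("citations", []):
    for ref in citation.get("retrievedReferences", []):
        print(ref.get("location"))
```

The appeal of this pattern is that ingestion, chunking, embedding, and vector storage are handled by the managed service, so the application code is reduced to a query and a check of the returned citations.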