Our AI Will Not Hallucinate, Glia Promises With New Offering

NEW YORK– Glia, provider of a platform for intelligent banking interactions, said it is now offering its clients a contractual guarantee against AI hallucinations being presented to customers or members on its Banking AI platform. 

Glia said it also now guarantees zero impact from prompt injection attacks on its platform — malicious attempts to trick customer or member care AI into providing information or performing tasks it shouldn’t.

“Our platform makes negative impacts from AI hallucinations and prompt injection attacks not just improbable, but actually impossible,” Justin DiPietro, chief strategy officer and co-founder of Glia, said in a statement. “We’re adding this guarantee to our contracts because that’s how serious we are about this claim. In the race to adopt AI, many banks and credit unions are unknowingly accepting a level of risk they would never tolerate in any other part of their business. We want them to know they don’t have to jeopardize their organizations to see the benefits of AI.”

‘Risks are Inherent’

In announcing its new guarantee, Glia said AI hallucinations occur when generative or agentic AI presents false or misleading information.

“These risks are inherent in fully-generative AI because the internal decision-making is hidden—which means even the people who build these tools can’t always predict or explain why the AI says what it says,” the company said.

It added that it eliminates the potential impacts of AI hallucinations and prompt injections through a built-in proprietary approvals framework, and that while the platform leverages generative AI and Large Language Models to achieve a 92%+ understanding rate — “comprehending exactly what a customer or member needs” — it never uses that same AI to “improvise” answers in real time.

“I anticipated substantial maintenance for the first six months because you have thousands of inquiries coming in with various types of people expressing it in a wide variety of ways,” Adam Goetzke, director of banking services at Heritage Federal Credit Union, said in a statement.  “But that really hasn’t been the case at all. Glia’s Banking AI made a better experience not only for our members, but our internal teams, too.”

‘Powerful Elements’

According to Glia, the platform leverages the most powerful elements of generative AI — the ability to parse complex, messy human language, identify intent and develop responses based on existing information — and combines them with an approvals framework for banking-grade governance. 

“This distinction between input and output ensures institutions are never sharing inaccurate information or introducing opportunities for bad actors to manipulate customer- and member-facing AI tools,” the company said. 

“If you use fully generative AI in your customer- or member-facing AI interactions, it’s like putting an open door to your banking core on the front steps of your branch,” DiPietro added in a statement.

Why Guardrails Aren’t Enough

Glia further stated that many AI vendors “suggest ‘guardrails’ are enough to protect institutions from financial and reputational damage. These guardrails attempt to catch and filter inaccurate or hallucinated AI responses after they’re generated. While marketed as ‘safe enough’ for banking, this approach is fundamentally flawed because it relies on the AI to police itself. By relying solely on guardrails, these vendors essentially transfer the risk to the institution, opting out of legal liability for the very content their systems generate.”
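The guardrail approach the company criticizes can be sketched in the same illustrative style: a model generates a free-form answer first, and a second check tries to filter bad output afterward. The names below are hypothetical, meant only to show why safety then hinges entirely on the filter’s judgment.

```python
# Hypothetical sketch of the post-hoc "guardrail" pattern: generation
# happens first, filtering second. If the filter misjudges a response,
# the hallucination still reaches the customer.

def generate_freeform(message: str) -> str:
    # Stand-in for a fully generative model; it may fabricate specifics.
    return "Your wire cutoff is 9 p.m. ET."  # plausible-sounding, unverified

def guardrail_ok(reply: str, approved_facts: set[str]) -> bool:
    # The filter can only block what it recognizes as wrong; here it
    # approves only replies matching known-good facts.
    return reply in approved_facts

def answer_with_guardrail(message: str, approved_facts: set[str]) -> str:
    reply = generate_freeform(message)  # answer already exists at this point
    if guardrail_ok(reply, approved_facts):
        return reply
    return "Sorry, I can't answer that."  # blocked only after the fact
```

The contrast with the earlier pattern is provenance: here the answer is invented and then policed, whereas an approvals framework never lets an unapproved answer exist in the first place.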

Glia said its architecture moves beyond simple detection. Instead of trying to block bad AI behavior, “the Banking AI platform is designed to make such behavior mathematically impossible.”
