Test Finds Chatbots Used for Customer/Member Service Are Vulnerable to Manipulation

NEW YORK — Artificial intelligence chatbots widely used by banks for customer service are vulnerable to manipulation and data leakage, exposing financial institutions to significant compliance and fraud risks, according to a new report that found "simple conversational prompts" were often all that was needed.

Milton Leal, lead applied AI researcher at TELUS Digital, tested 24 generative AI models from major providers, each configured as a banking customer-service assistant, and found that every one was exploitable, Corporate Compliance Insights reported. Success rates for extracting restricted or sensitive information ranged from 1% to more than 64%, depending on the technique used.

Leal found that some chatbots displayed what he described as “refusal but engagement” behavior — responding that they could not help with a request, but then immediately providing the information anyway, according to Corporate Compliance Insights.
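The report does not publish Leal's test harness, but the pattern he describes can be screened for mechanically: a reply that opens with refusal language yet still contains restricted content. Below is a minimal sketch in Python; the refusal markers and restricted terms are illustrative placeholders, not TELUS Digital's actual test set.

```python
import re

# Phrases that typically signal a refusal (illustrative, not exhaustive).
REFUSAL_MARKERS = [
    r"\bI can(?:not|'t) help\b",
    r"\bI'm (?:sorry|unable)\b",
    r"\bnot able to (?:share|provide)\b",
]

# Strings that should never surface in a customer-facing reply,
# e.g. fragments of internal eligibility criteria (hypothetical examples).
RESTRICTED_TERMS = ["internal score threshold", "eligibility matrix"]

def shows_refusal_but_engagement(reply: str) -> bool:
    """True if the reply refuses in form but leaks restricted content anyway."""
    refused = any(re.search(p, reply, re.IGNORECASE) for p in REFUSAL_MARKERS)
    leaked = any(term.lower() in reply.lower() for term in RESTRICTED_TERMS)
    return refused and leaked
```

Keyword matching of this kind is deliberately crude; a production red-team harness would more likely score replies with a second model, but the refused-yet-leaked signal it looks for is the same.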

More Than Half Have Implemented Systems

The report noted that banks, like credit unions, are increasingly turning to generative AI to handle customer/member interactions involving account balances, disputes, loan applications and fraud alerts. A 2025 survey cited by Corporate Compliance Insights found that 54% of financial institutions have implemented or are actively implementing generative AI, primarily to improve customer experience.

However, the report warned that rapid adoption has outpaced governance and controls. Because chatbot responses carry the same legal and regulatory weight as advice from a human agent, a single inaccurate or misleading answer could violate federal disclosure rules or consumer protection laws.

‘Simple Conversational Prompts’

Leal’s testing showed that attackers could use simple conversational prompts to extract proprietary creditworthiness scoring logic, internal eligibility criteria and other information that should only be available to bank employees, Corporate Compliance Insights reported. Such techniques could be reused by fraud rings to refine scams, particularly as AI-driven fraud continues to grow.

What Regulators Say

Regulators have signaled that banks remain fully accountable for chatbot behavior. The Consumer Financial Protection Bureau has said since 2023 that chatbots must meet the same consumer protection standards as human representatives, while the Office of the Comptroller of the Currency has emphasized that AI-driven customer service tools are subject to existing safety, soundness and audit requirements, according to Corporate Compliance Insights.

The report identified three recurring weaknesses: inaccurate or incomplete guidance, leakage of sensitive or restricted information, and a lack of logging and audit trails that would allow banks to reconstruct how errors occurred.
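The third weakness is the most mechanical to address: regulated systems in other domains already handle it with append-only, tamper-evident logs of every interaction. The sketch below shows one way such a record could look; the schema and field names are assumptions, since the report does not prescribe one.

```python
import datetime
import hashlib
import json

def log_chat_turn(log_file, session_id: str, prompt: str, reply: str,
                  model_id: str, escalated: bool) -> None:
    """Append one tamper-evident record per chatbot turn (hypothetical schema)."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "session": session_id,
        "model": model_id,
        "prompt": prompt,
        "reply": reply,
        "escalated_to_human": escalated,
    }
    # Hash the serialized record so any later tampering is detectable.
    payload = json.dumps(record, sort_keys=True)
    record["sha256"] = hashlib.sha256(payload.encode()).hexdigest()
    log_file.write(json.dumps(record) + "\n")
```

With a record like this, an institution could reconstruct which model produced which answer in which session, the kind of reconstruction the report says is currently missing.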

Corporate Compliance Insights reported that regulators and standards bodies increasingly expect banks to treat chatbots as regulated systems, with formal risk inventories, continuous testing, clear escalation paths to human agents and board-level oversight.
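The "clear escalation paths to human agents" expectation can also be made concrete in code. A minimal sketch of a routing gate follows; the topic list and confidence threshold are illustrative assumptions, not requirements drawn from the report.

```python
# Topics that should always reach a human agent (hypothetical list).
ESCALATION_TOPICS = {"dispute", "fraud alert", "loan denial"}

def route_message(user_msg: str, model_confidence: float) -> str:
    """Send regulated or low-confidence conversations to a human agent."""
    text = user_msg.lower()
    if model_confidence < 0.7 or any(t in text for t in ESCALATION_TOPICS):
        return "human_agent"
    return "chatbot"
```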
