By Mitch Rutledge

The recent Government Accountability Office (GAO) report on AI oversight in financial services delivers a critical message for credit unions: innovation is accelerating, but regulatory frameworks, particularly those guiding the National Credit Union Administration (NCUA), are not keeping pace. The GAO found the NCUA lacks two crucial tools to effectively oversee credit unions’ use of AI: comprehensive model risk management guidance and the authority to examine the technology service providers on which credit unions increasingly rely for AI-driven services.
For many credit unions, this report hits close to home. Institutions are actively exploring how AI can deepen member engagement, increase operational efficiency, and offer smarter, more tailored financial experiences. But that momentum risks slowing under a cloud of regulatory uncertainty.
Can’t Stall Progress
We should not let that uncertainty stall the movement’s progress.
The path forward is not to pause AI adoption; it is to build a modern oversight framework that encourages innovation while protecting members. The solution is not a binary choice between innovation and accountability. It is building both in tandem.
Here’s what that can look like.
Recognize That Not All AI Is the Same
One of the biggest risks in regulating AI is treating it as a single, monolithic concept. In practice, AI in credit unions spans a wide range of use cases, from automating repetitive back-office tasks to powering personalized marketing or guiding loan underwriting decisions.
Each application carries a different level of risk and requires a corresponding level of scrutiny. AI that recommends which product to offer a member next often enhances customer service and efficiency with relatively low risk. In contrast, AI used in lending decisions has higher risk potential, where algorithmic bias or data quality issues can lead to discriminatory outcomes or regulatory exposure. Regulators and industry leaders must avoid one-size-fits-all rules that inadvertently stifle low-risk, high-impact applications that improve member outcomes.
This starts with a shared vocabulary. We need clearer definitions around what constitutes AI in financial services, along with guidance that distinguishes between high-stakes, decision-making AI and lower-risk, decision support tools.
Build Around Human-in-the-Loop Models
Most AI tools used today in community financial institutions operate as decision support – essentially, copilots. They assist credit union teams by identifying opportunities, scoring member behavior, or prioritizing outreach, but humans remain in full control of the final actions.
This human-in-the-loop model should be the cornerstone of any responsible AI framework. It ensures that frontline staff, marketing teams, and risk managers are empowered, not replaced, by intelligent systems. This structure ensures accountability, creates a chain of explainability, and builds trust with both members and regulators.
Interestingly, the GAO noted that most federal financial regulators themselves use AI outputs to inform staff decisions rather than as sole decision-making sources, validating this human-centric approach.
Credit unions should lean into this model, and regulators should view it as a baseline standard for many AI applications in the sector.
Demand Transparency, Auditability, and Fairness by Design
The AI systems being deployed in financial institutions must align with principles of trustworthy AI, as highlighted by frameworks like NIST’s AI Risk Management Framework, which the GAO also referenced. This means they must be explainable, both internally and externally. Teams need to understand what data is being used, how predictions are being generated, and what actions are being recommended.
More importantly, members deserve assurance that the AI tools used on their behalf are transparent, auditable, and inherently fair by design – mitigating bias and ensuring valid, reliable outcomes.
The good news is that this kind of transparency is not theoretical. It is already being implemented in credit unions today. Teams are successfully using AI-driven tools with insight into inputs, logic, and performance without needing in-house data science teams or complex data infrastructure.
The lesson here is simple: ethical AI is not just possible, it is already in practice. Now the challenge is making this the norm, not the exception.
Do Not Wait for Regulation to Catch Up
Credit unions cannot afford to put innovation on hold until new guidance arrives, especially given the GAO’s finding that NCUA’s current model risk management guidance is “limited in scope and detail” and doesn’t sufficiently cover AI models.
Member expectations are quickly evolving. Consumers are increasingly accustomed to hyper-personalized digital experiences in every aspect of life, from streaming services to online shopping, and they now expect the same from their financial institutions.
AI has the potential to help credit unions deliver that experience safely, effectively, and at scale. However, this requires institutions to start now with clear internal policies, robust ethical use frameworks, and cross-functional alignment between IT, risk, compliance, and business teams.
Getting ahead of regulatory change is not just a compliance play. It is a strategic move that also positions credit unions to build and retain member trust.
Shape the Regulatory Conversation
Finally, credit unions have a chance to lead the conversation, not just react to it. Industry associations, research collaboratives, leagues, and forward-thinking executives should work together to define what good AI governance looks like in our unique context.
That means advocating for principles that reflect the cooperative mission of credit unions: inclusion, fairness, and member wellbeing. Sharing successful case studies and demonstrating how responsible AI can enhance, not endanger, trust will be crucial in shaping this conversation.
We cannot let fear slow the momentum of meaningful innovation. Instead, we must shape a framework that allows credit unions to innovate with confidence, protect what matters, and better serve the communities that depend on them.
The future of AI in credit unions is not a question of if; it is a question of how well we rise to this moment.
Mitch Rutledge is CEO of Vertice AI.