AI in banking: Risk identification in the digital age

AI in banking: Risk identification in the digital age

As AI increasingly serves as the ‘brain’ of banking systems, identifying and mitigating associated risks is no longer optional - it is essential for ensuring stability and sustainable growth.


In brief

  • Vietnamese banks accelerate AI adoption for efficiency and customer experience, but face high costs and rising cybersecurity risks. 

  • CIOs prioritize AI and GenAI for 2026, signaling a shift to enterprise-wide deployment with strong ROI and risk governance frameworks. 

  • Sustainable AI growth requires controlled pilots, strict data access, lifecycle monitoring, and alignment with global standards like the EU AI Act.


In the era of digital transformation, artificial intelligence (AI) has become a pivotal technology shaping the future of the Financial Services. In Vietnam, AI unlocks opportunities to reduce costs, boost operational efficiency, and enhance customer experience, while also introducing significant challenges in risk management, data security, and regulatory compliance. As AI increasingly serves as the ‘brain’ of banking systems, identifying and mitigating associated risks is no longer optional - it is essential for ensuring stability and sustainable growth.

AI – A Catalyst for Growth and a New Spectrum of Challenges

In recent years, Vietnamese banks have entered a fast-paced race to adopt artificial intelligence. Technologies such as electronic Know-Your-Customer (eKYC), big data analytics, credit scoring, fraud detection, and other AI-driven solutions have been widely deployed. These advancements have enabled banks to accelerate transaction processing, reduce operating costs, and improve customer service capabilities.

Beyond the banking sector, AI has become a defining “buzzword” across industries and is increasingly central to the technology strategies of Vietnamese enterprises.

The report “2026 CIO Priorities and Technology Trends” - a survey on technology trends and priorities of Chief Information Officers (CIOs) in the Age of AI, conducted by CIO Vietnam with technical support from EY Consulting VN - reveals that AI and GenAI remain among the Top 5 prioritized technologies for 2026. Over 54% of CIOs identify AI as a strategic focus, while 48% prioritize GenAI applications. This signals a strong shift from experimentation to enterprise-wide deployment, as AI is expected to generate real, measurable value and directly contribute to business performance.

In parallel, IT budgets for 2026 are projected to increase significantly, with more than 60% of surveyed senior executives (CxOs) indicating increased spending on AI, cybersecurity, enterprise resource planning/customer relationship management (ERP/CRM) modernization, and enhancements to digital customer experience. These areas represent critical priorities in the broader journey toward comprehensive digital transformation. 

Notably, Vietnamese CIOs are undergoing a major transition – from technical support roles to strategic leadership positions. More than 50% of CIOs now work alongside corporate leadership in shaping business strategy and steering digital transformation. In particular, 48% report prioritizing AI-driven innovation across all business functions to help their organizations adapt effectively amid the rapid expansion of AI adoption and growing cybersecurity risks. This is a promising signal that AI will be embedded into overarching strategy, rather than treated merely as a technical initiative.

Anh minh hoa 1_Illustration 1

However, as AI evolves from a supporting tool to a core decision-making engine, the associated risks expand exponentially. Modern AI models can automate end-to-end processes and coordinate multiple agents. In particular, Agentic AI represents a new generation of advanced AI systems designed to act autonomously - making decisions, planning, and executing tasks without continuous human intervention. This marks a significant leap from traditional AI, which merely responds to predefined rules or user prompts.

Data is the “fuel” that powers AI, but it is also its most vulnerable point of exploitation. One of the most prominent threats today is prompt injection. Prompt injection refers to attacks on AI systems - especially large language models (LLMs) - where malicious or deceptive inputs are inserted to manipulate model behavior, expose internal information, or bypass established security and ethical safeguards. These hidden commands can be embedded in emails, PDF files, or websites, causing AI systems to perform unauthorized actions.

Beyond that, the AI supply chain itself carries substantial risks. Modern AI models depend on data sourced from hundreds of open-source libraries and datasets; a single compromised component can expose the entire system.

To save time and reduce costs, many organizations choose to reuse or fine-tune existing AI models rather than train them from scratch. For example, banks may adopt large language models (LLMs) to build AI-powered financial advisory chatbots, or create “virtual banking staff” based on digital replicas of real employees for use in instructional videos or product consultations.

This trend is driving the rise of the clone-model economy, which introduces new layers of risk. If an original model is compromised or contains vulnerabilities, all cloned versions will inherit those risks and carry them into the banking environment. More critically, when cloned models are fine-tuned using real customer data, the boundary between lawful data use and data exposure becomes blurred - posing the risk of violating Vietnam’s Personal Data Protection Law 2025 and placing banks in a “responsibility gray zone.” For instance, if AI makes an incorrect credit-approval decision, who is liable - the bank or the third-party AI provider?

Anh minh hoa 1_Illustration 2

AI – An Inevitable Path or a Strategic Choice?

Although AI is expected to deliver significant benefits, the reality is that implementation costs remain high - particularly for mid-sized and smaller banks. Investments in data infrastructure, cloud computing, specialized talent, and security can amount to millions of dollars annually. Yet, outcomes are sometimes disproportionate to expectations, especially when AI models are still in pilot phases or not fully optimized for the Vietnamese market.

This raises an important question: Is AI-driven digital transformation an unavoidable trajectory, or merely a strategic option? Many experts argue that AI is not a “magic wand” that resolves all challenges instantly. To avoid falling into a “technology trap,” banks need a clearly defined implementation roadmap and must identify priority areas that can deliver rapid, tangible value - such as back-office automation or fraud analytics. 

The balance between cost and return on investment (ROI), as well as the actual business value generated by technology spend, must be carefully evaluated. According to the “2026 CIO Priorities and Technology Trends” report, 70% of surveyed CIOs expressed concerns about their ability to demonstrate business value and ROI from technology initiatives, including AI and other digital transformation projects. This underscores the need for banks to establish robust performance measurement frameworks - from cost savings and productivity gains to improvements in customer experience - in order to justify AI investments to executive leadership.

In Vietnam, several pioneering banks have adopted a “controlled pilot” approach - deploying AI on a small, manageable scale to assess the impact before committing to full-scale rollout. This strategy reduces financial risk, optimizes resource allocation, and provides real operational data to better calculate ROI.

Anh minh hoa 1_Illustration 3

In the long term, AI will undoubtedly become an inevitable trend, but investment must go hand in hand with intelligent cost management and robust risk governance. To effectively manage risks when deploying AI at scale, banks must not only implement controlled pilots but also restrict AI’s access to sensitive data, strengthen employee training on AI safety, and invest in systems capable of monitoring the entire AI model lifecycle.

Banks should also collaborate with international standards bodies such as the U.S. National Institute of Standards and Technology (NIST) and the British Standards Institution (BSI) to stay updated on emerging standards and best practices.

From a regulatory perspective, Vietnam has yet to introduce a dedicated legal framework for AI; however, leading jurisdictions have already begun enacting related legislation. In March 2024, the European Union adopted the EU Artificial Intelligence Act (EU AI Act) - the world’s first comprehensive regulatory framework governing the design, deployment, and use of AI systems throughout their lifecycle, ensuring transparency, safety, and accountability.

The EU AI Act classifies AI system risks into four tiers: (1) unacceptable, (2) high, (3) limited, and (4) minimal. When an AI system falls into the high-risk category, banks - as AI system providers - must comply with a stringent set of requirements to ensure safety, transparency, and accountability throughout the system’s lifecycle.

To begin with, banks must establish an end-to-end AI risk management system - from design to operation and ongoing updates. Training, testing, and validation data must be representative, complete, and accurate, aligned with the system’s intended use. Banks are also required to develop comprehensive technical documentation to demonstrate compliance and support regulatory assessment.

AI systems must be designed to log critical events, enabling risk identification and tracking of significant changes. In addition, banks must provide clear usage instructions for system operators, ensure effective human oversight, and maintain appropriate levels of accuracy, robustness, and cybersecurity. 

Finally, implementing an internal quality management system is mandatory to maintain continuous compliance. These measures are essential for enabling the responsible adoption of AI within the banking sector amid increasingly stringent regulatory environments. As Vietnam works toward its own AI regulatory framework, banks should proactively adopt these international standards as a strategic step in preparing for the future.

In drafting a national AI law, Vietnam can draw on global best practices and focus on three priority steps: developing national AI standards for the Financial Services sector; adopting an AI Bill of Materials (AI-BOM) mechanism to manage data sources and models; and establishing a National AI Model Testing and Certification Center, similar to the EU’s AI Office.

Over the next five years, AI will extend far beyond chatbots and data analytics, evolving into hyper-automation capabilities and advanced predictive models for customer behavior. This unlocks opportunities for highly personalized services but also intensifies security and ethical requirements.

Vietnamese banks must prepare for scenarios where AI integrates with blockchain, quantum computing, and advanced security technologies. This will be a long-term game where leadership in risk governance will translate into competitive advantage.

AI is the key to digital banking, but without strong risk oversight, its benefits can quickly become challenges. Investment in model safety, data transparency, and a coherent regulatory framework will form the foundation for the sustainable development of Vietnam’s banking industry in the age of artificial intelligence.

This article was first published in Vietnam Investment Review on 8 December 2025



About this article