In Singapore’s boardrooms, artificial intelligence adoption has rapidly outpaced the development of governance safeguards. According to the 2025 EY Responsible AI Pulse survey, every surveyed C-suite executive said their organisation had either integrated AI into most initiatives or was actively doing so. Yet only 53 per cent indicated they had moderate to strong controls in place to manage threats such as unauthorised access or data corruption. This mismatch reflects a wider tension between strategic ambition and operational maturity.

Nearly half of the Singaporean leaders surveyed also admitted that their organisations lacked the governance frameworks needed to keep pace with today’s AI systems. More concerning still, 67 per cent felt their existing risk-management practices would not be sufficient for the next wave of AI advancements. The gap is not one of intent but of implementation, where principles struggle to keep up with the technology they are meant to regulate.

This disconnect becomes more pronounced when contrasted with earlier findings from the EY Reimagining Industry Futures study, which found that just 30 per cent of senior decision-makers had deployed AI in critical business workflows. While C-suites may count AI use in support functions such as finance or HR, middle management often focuses on high-impact applications like customer engagement or product development. This difference in perspective likely contributes to the divergence in reported adoption.

Singapore’s leaders clearly understand the importance of AI and are eager to harness its potential. However, without embedding trust, fairness and accountability into AI systems from the outset, adoption could outstrip control, risking unintended consequences. The challenge for organisations now is to turn aspiration into action, aligning innovation with robust governance to ensure AI is a force for good, commercially and societally.

To learn more about this, we spoke to Manik Bhandari, EY Artificial Intelligence and Data Leader, about the challenges we’re facing, as well as how the market will adapt.

Why do you think there’s such a significant gap between AI adoption and governance among Singapore’s C-suite, especially when 100% have integrated AI into their initiatives?

The gap between artificial intelligence (AI) adoption and governance in Singapore’s boardrooms stems less from a lack of awareness and more from the challenge of operationalising principles into practice. Most leaders understand the importance of responsible AI, but the “how” remains uncertain. AI capabilities are advancing at breakneck speed, while the governance technologies and tools needed to embed safeguards into AI systems are still catching up. It is similar to the early days of automobiles, when cars were on the road long before seatbelts were introduced.

What is EY’s view on how organisations can better prepare for the next wave of AI while keeping trust and accountability at the core?

Organisations can embed trust and accountability by establishing a formal AI governance council with a structured intake process. Each time a new AI capability is introduced, it should undergo an internal review to ensure safeguards and ethical considerations are in place before deployment.

This can be reinforced through periodic audits that extend beyond financial or operational checks to include the performance, fairness and transparency of AI systems.

How do you interpret the mismatch between EY’s Responsible AI Pulse survey and the earlier Reimagining Industry Futures study, where only 30% of Singaporean leaders said they’ve rolled out AI into critical workflows?

The gap between the two surveys could reflect differences in scope and perspective between C-suites and middle management. C-suite leaders might see AI embedded across their organisations and report near complete integration as they take into consideration lower-risk, back-office functions like human resources, finance and legal, beyond higher-stakes applications in sales, customer engagement and new market entry.

However, middle management and other key decision makers tend to focus on mission-critical, customer-facing workflows, where AI adoption is currently more limited; hence the lower figure. Additionally, AI is often woven into routine back-office processes to the point where it may not be recognised as a distinct AI initiative, contributing to the perception gap between leadership levels.

How should businesses in Southeast Asia, particularly SMEs and startups, approach AI governance when they may not have the same resources or internal capabilities as multinationals?

For small- and medium-sized enterprises (SMEs) and startups in Southeast Asia, building AI solutions from scratch is often not feasible due to resource constraints and a lack of financial muscle. They typically rely on third-party vendors and off-the-shelf products. Strong governance guardrails are crucial when selecting and using third-party AI tools and services. Many vendor agreements currently shift liability and compliance responsibility onto the users, even though realistically, vendors should be more accountable for their AI products and solutions.

To help smaller businesses adopt AI responsibly and with confidence, government-backed whitelisting of trusted vendors and products can play a key role. This could be similar to how the Singapore government lists pre-approved solution providers under its grants and initiatives. This whitelisting approach ensures that smaller businesses can choose trusted, vetted AI tools and services that meet regulatory and ethical standards.

Singapore’s government has launched initiatives like the Enterprise Compute Initiative and Model AI Governance Framework. Are these efforts translating into practical outcomes for businesses, or is there still a gap in adoption?

Efforts like the Enterprise Compute Initiative (ECI) set the groundwork, but translating policy into day-to-day business impact is still a work in progress. Many enterprises do not have AI tools that are readily accessible to their workforce. Closing this gap will require greater availability of practical, industry-tailored and cost-effective AI solutions that address real business needs, which is what ECI aims to do. Meaningful AI adoption is a gradual journey that demands ongoing collaboration and continuous learning to fully unlock the technology’s potential.

What practical steps can organisations take to align their business and technology leaders, especially when it comes to evaluating AI risk and readiness?

Organisations can first bring together both their business and technology leaders to form a joint AI governance council. This group should agree on a clear charter that defines the purpose of each AI system, how it will be used, the risks it may pose and how those risks will be addressed.

Next, they should make responsible data use a non-negotiable principle. AI must not use data for unintended purposes, breach laws or erode customer trust. Regardless of specific regulatory requirements, organisations should prioritise responsible data use to maintain customer trust and support long-term business success.

Lastly, business and technology leaders should evaluate AI decisions through both a commercial and a societal lens. Balancing business value with ethical standards and relevant government guidance ensures AI delivers meaningful results without causing harm to customers, society or the organisation itself.

What does “responsible AI” mean in practice to EY, and how do you see this principle shaping AI innovation in Southeast Asia over the next 3–5 years?

At EY, responsible AI means developing and deploying AI technologies in a way that balances business innovation with accountability to society. This involves not only complying with regulatory frameworks and ethical standards set by governments but also embedding transparency, fairness and privacy into AI systems from the outset. While some reporting and governance requirements may seem burdensome or vary in relevance depending on the context, they serve as critical guardrails to prevent harm and build trust.

Responsible AI will shape Southeast Asia’s innovation landscape by driving organisations to prioritise ethical design and continuous risk management, not just for the business and its customers but also for the broader community.