If 2025 taught enterprises anything, it’s that AI moved from possibility to dependency almost overnight. What began as experimentation quickly became embedded in day-to-day operations, from customer service and supply chains to software development and decision-making.

Pilots, however, conceal fragility. They mask the long-term cost of scaling models, the operational strain of constant refresh cycles, and the governance gaps that surface only under regulatory scrutiny. The Year of the Fire Horse therefore presents a defining choice: allow AI’s velocity to amplify enterprise risk, or build the infrastructure required to convert raw speed into controlled, sustained momentum.

Across APJ, AI ambition is not slowing. But the question has changed. It is no longer whether organisations can deploy AI, but whether they can deliver it reliably, repeatably, and at scale, without destabilising the core business.

Taming AI: Building “AI-smart” operations with discipline

The past year rewarded bold experimentation. The next will reward operational discipline. 

APJ organisations are shifting from technical proof-of-concept to “AI-smart” operations: prioritising use cases with clear business outcomes and designing AI services for reliability from the outset. The real test is not whether a model works in isolation, but whether it can endure scale, change, and scrutiny. 

As AI moves from development into production, consistency becomes critical: workloads must behave predictably across development, cloud, and on-premises environments. Without that consistency, performance drifts, costs escalate, and risk accumulates, causing even the most promising AI initiatives to veer off course once they pick up speed.

Technologies such as containerisation act as the ultimate harness, reducing friction and allowing AI services to scale without constant re-engineering. 

Pastures new: Rethinking infrastructure for inference and sovereign AI

As AI becomes more deeply embedded in operations, infrastructure strategies are expanding accordingly, following the data rather than forcing everything into a single stable. Enterprises are now balancing public cloud, private data centres, and the edge to meet competing demands around performance, cost, compliance, and data control.

While training often remains cloud-centric, inference increasingly benefits from environments closer to where data is generated. Predictable costs, lower latency, and tighter governance are pushing more AI workloads toward on-premises and edge deployments. This is particularly true in regulated or real-time use cases.

As AI systems increasingly handle sensitive data and critical processes, sovereign AI is rising up the agenda for C-suites, further shaping how organisations place and govern AI workloads across hybrid and multicloud environments. In this context, the edge can no longer be seen as a far-off pasture. It must be recognised as a sovereign layer of enterprise infrastructure, globally managed yet locally autonomous, where mission-critical AI can run while meeting evolving data localisation requirements.

Building stamina: Operational maturity as the backbone of scalable AI

Of course, this is all far easier said than done. Maintaining AI services over time and across multiple environments requires far more effort than initial deployment. Model refreshes, security updates, compliance controls, and coordination across teams and locations all become part of daily operations as AI estates expand.

This is where operational stamina matters. Enterprises need a unified foundation that delivers coordination and control across environments. Platform architecture is therefore becoming one of the most consequential decisions IT leaders will make. Cloud-native, modular architectures help teams absorb change by allowing services to evolve independently, without unsettling the broader system. Orchestration platforms provide a consistent operating model across hybrid environments, enabling AI to coexist with traditional applications rather than creating parallel silos.

AI at full gallop: Converting infrastructure into lasting market advantage

When AI infrastructure is resilient, well-governed, and dependable, its value becomes tangible across the organisation. Productivity improves, decisions accelerate, and processes become more automated without introducing fragility or unnecessary complexity. At this stage, infrastructure fades into the background not because it is less important, but because it is powering the business forward at a steady, unbroken gallop.

In 2026, speed will be assumed. Winners will be determined not by how fast they can run, but by whether they can go the distance while navigating physical constraints, distributed environments, and rising expectations for reliability. Enterprises that invest in platforms delivering consistency, flexibility, and control will be best positioned to turn AI innovation into enduring business value, and will ride confidently into AI’s next chapter.

The article titled “Taming the Fire Horse: What APJ needs to power AI’s next gallop” was authored by Jay Tuseth, Vice President and General Manager, Asia Pacific & Japan, Nutanix.

About the author

Jay Tuseth is Vice President and General Manager of Asia-Pacific and Japan (APJ), Nutanix.

Jay is a seasoned enterprise executive who has lived in Singapore since 2013. Prior to joining Nutanix, he served as Conviva’s vice president of sales for APAC SaaS applications and general manager of customer experience, based in the company’s Singapore office.

At Conviva, Tuseth led all operations in the APJ region, helping digital businesses and their operations teams shift from a focus on quality of service to one centred on quality of experience. He drove continuous growth, challenging teams to push boundaries, take ownership, and ensure that great contributions were recognised.

Before Conviva, Tuseth served as vice president of cloud applications at Oracle. He also spent 12 years at Dell Technologies and EMC in multiple executive leadership roles, leading diverse teams across APJ and helping customers use data to maximise their competitive advantage.