Generative AI (GenAI) is reshaping industries across the globe, and its impact is particularly evident in Southeast Asia. In a region with a highly fragmented business environment, GenAI offers an affordable way to scale marketing, coding, content production and many other activities critical to business growth.

As organizations strive to integrate GenAI into their operations, the region faces a unique set of challenges and opportunities that demand innovative approaches. With governments and businesses placing a stronger emphasis on AI-driven solutions, Southeast Asia is emerging as a key player in the global AI ecosystem.

Countries like Singapore are leading this transformation, leveraging strategic investments in AI infrastructure, talent development, and regulatory frameworks. Initiatives such as the National AI Strategy and partnerships with organizations like AI Singapore are laying the groundwork for sustainable AI adoption. This focus extends beyond economic benefits, aiming to address pressing challenges such as data gaps, talent shortages, and ethical concerns around AI deployment.


We spoke to an expert to understand the landscape a bit better. In this conversation, Sarah Taraporewalla, CTO for APAC at Thoughtworks, provides insights into the adoption of GenAI in emerging markets, the unique challenges faced by organizations in the region, and the strategies required to bridge critical gaps. From addressing ethical concerns to fostering collaboration between public and private sectors, this interview explores how Southeast Asia is positioning itself as a leader in the global AI landscape.

Which industries in emerging markets are currently leading in the adoption of GenAI?

GenAI is a transformative technology making a significant impact across industries in both emerging and established markets. Its adoption is not confined to a specific sector; rather, it is creating opportunities universally.

At an employee level, organizations are prioritizing AI literacy through education programs and providing tools that enhance personal productivity. For example, Thoughtworks recently partnered with PEXA Group to develop the PEXA AI Assistant, a security-first, permissions-aware GenAI platform designed to boost productivity for all PEXArians.

Within technology departments, GenAI is revolutionizing software delivery. At Thoughtworks, we champion an “AI-first” approach, integrating AI technologies throughout the software development lifecycle. This approach enhances productivity, accelerates innovation, and redefines the possibilities of software delivery. A cornerstone of this strategy is Haiven™, our AI-enabled team assistant that acts as a knowledge amplifier, streamlining tasks across the lifecycle, working alongside AI coding assistants to augment team expertise, and improving overall software quality.

Organizations tend to favour “human-in-the-loop” solutions: those that empower employees while enhancing customer experiences. For instance, industries reliant on customer call centres, such as financial services and airlines, are leveraging GenAI for improved customer sentiment analysis and advanced knowledge management to optimize interactions.

Regionally, the Singapore Government is a leading example of GenAI adoption, using it to enhance public services and deliver greater value to citizens.

It is worth noting that while organizations are navigating the rapid rise of generative AI, many continue to achieve measurable success with traditional AI and ML techniques, for instance, the loyalty rewards platform Thoughtworks created for minden.ai in Singapore.

Is there a difference in adoption between emerging markets and more developed markets like China or the US?

With GenAI being a rapidly evolving technology, all markets, emerging and developed, are moving swiftly to balance innovation with data privacy and regulatory concerns.

In this region, Singapore stands out as a global technology hub. Once focused on becoming a “Digital First Nation,” Singapore has now shifted to building an “AI First Nation.” The government has actively promoted AI research and development through initiatives like the National AI Strategy (NAIS 2.0), which is guided by the vision: “AI for the Public Good, for Singapore and the World.”

Singapore's strategic location, pro-business policies, and strong emphasis on innovation have made it a magnet for AI startups and multinational corporations. The government collaborates with AI-focused research institutions like AI Singapore, which provides funding and support for AI startups and projects.

Thoughtworks is proud to partner with AI Singapore to advance the reliability and adoption of AI. This collaboration, formalized through a Memorandum of Understanding (MOU) at the Public Sector Day Singapore (organized by AWS and GovInsider), focuses on equipping tech innovators with the skills, tools, and best practices needed to transform AI pilots into impactful solutions. Joint research projects and training initiatives under this partnership aim to foster competence in AI and analytics technologies.

Across markets, whether in emerging economies or developed nations like China and the US, organizations face similar challenges in scaling AI from proof of concept (POC) to production. We've identified five key gaps that must be addressed for successful AI adoption:

  1. Data Gaps: Ensuring data is accessible, properly tagged, and equipped with the necessary metadata for specific use cases.
  2. Infrastructure Gaps: Evaluating whether current infrastructure can scale with user demands and manage token-related costs effectively.
  3. Evaluation Gaps: Developing robust evaluation techniques to measure the performance and reliability of AI models, particularly large language models (LLMs).
  4. AI Skill Gaps: Preparing employees to embrace AI, addressing resistance to change, and demonstrating how AI enhances roles.
  5. Adoption Gaps: Implementing strong change management strategies to facilitate AI adoption within organizations.
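To make the token-cost concern in the infrastructure gap concrete, here is a minimal sketch of a monthly LLM spend estimate. The per-token prices and traffic volumes are illustrative assumptions, not real vendor rates; substitute your provider's actual pricing.

```python
# Rough monthly LLM cost estimate. All prices and volumes below are
# illustrative assumptions -- substitute your provider's actual rates.
PRICE_PER_1K_INPUT_TOKENS = 0.0005   # USD per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # USD per 1,000 output tokens (assumed)

def monthly_token_cost(requests_per_day: int,
                       avg_input_tokens: int,
                       avg_output_tokens: int,
                       days: int = 30) -> float:
    """Estimate monthly spend from average request sizes and daily traffic."""
    input_cost = requests_per_day * avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
    output_cost = requests_per_day * avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return (input_cost + output_cost) * days

# e.g. 10,000 requests/day, averaging 800 input and 300 output tokens each
cost = monthly_token_cost(10_000, 800, 300)
print(f"Estimated monthly spend: ${cost:,.2f}")  # -> $255.00 at the assumed rates
```

Running this kind of estimate against projected user demand is a quick first check of whether the infrastructure budget scales with adoption.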

While developed markets like China and the US often have a head start due to larger investments and more mature infrastructure, emerging markets like Singapore are catching up quickly through strategic government policies, international collaborations, and a focus on foundational AI challenges.

How are emerging markets addressing the challenges of limited technological infrastructure and talent shortages to support the growth of GenAI applications?

Emerging markets, such as Singapore, are tackling the challenges of limited technological infrastructure and talent shortages through strategic initiatives and investments designed to foster growth in GenAI applications.

To address the talent gap, Singapore has introduced programmes like the AI Singapore (AISG) AI Internship Programme (AIIP), launched in partnership with the Centre for Strategic Infocomm Technologies (CSIT). This initiative provides young professionals with hands-on experience in the fast-evolving AI field. Interns work alongside AISG's team of AI Engineers and Apprentices on real-world projects spanning domains such as Computer Vision, Natural Language Processing, and Generative AI, including Large Language Models (LLMs). By equipping participants with practical skills and industry-relevant knowledge, the program is nurturing the next generation of AI talent needed to drive innovation.

Singapore has also invested heavily in developing its own family of open-source Large Language Models, South East Asian Languages in One Network (SEA-LION). These models are specifically designed to understand Southeast Asia's diverse contexts, languages, and cultures, ensuring that AI solutions are culturally and linguistically relevant to the region.

On the infrastructure side, global cloud computing providers such as AWS, Azure, and GCP are playing a critical role in overcoming local hardware limitations. These platforms offer scalable and cost-effective solutions, reducing the need for significant upfront investments in physical infrastructure. They also provide access to advanced AI tools and frameworks, empowering developers to create sophisticated GenAI applications without the burden of maintaining costly on-premise systems.

By combining talent development initiatives, investments in regionally tailored AI solutions, and leveraging cloud-based technologies, emerging markets like Singapore are setting a strong foundation for the growth of GenAI applications despite existing challenges.

Ethical challenges like data privacy, accountability and unconscious bias can arise when using AI. What strategies should organizations adopt to address them effectively?

Organizations can address the ethical challenges associated with AI, such as data privacy, accountability, and unconscious bias, by adopting a comprehensive, multi-faceted approach centred on privacy, accountability, fairness, and collaboration.

To ensure data privacy, organizations should establish strong data governance frameworks that comply with regulations and safeguard sensitive information. Techniques such as data anonymization, encryption, and differential privacy can help mitigate risks and protect user data. Accountability is another critical aspect, requiring transparent decision-making mechanisms that make the logic and outcomes of AI systems explainable and auditable. Comprehensive audit trails should be maintained to enable traceability and ensure accountability across all AI processes.
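Of the techniques mentioned, differential privacy is the least familiar to most teams, so a minimal sketch may help. The idea is to add calibrated Laplace noise to an aggregate before releasing it, so no single record dominates the published figure. The query, data, and epsilon value below are illustrative assumptions, not recommendations.

```python
import random

def dp_count(values, threshold, epsilon=1.0):
    """Differentially private count of values above a threshold.

    A counting query has sensitivity 1, so adding Laplace(0, 1/epsilon)
    noise gives epsilon-differential privacy. The noise is sampled as the
    difference of two exponentials, which follows a Laplace distribution.
    The epsilon here is an illustrative privacy budget, not advice.
    """
    true_count = sum(1 for v in values if v > threshold)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Illustrative release: how many of these (fake) records exceed 2?
print(dp_count([1, 2, 3, 4, 5], threshold=2))  # true count 3, plus noise
```

Smaller epsilon means more noise and stronger privacy; the right budget is a policy decision, not a purely technical one.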

Addressing unconscious bias requires training AI models on diverse datasets and implementing fairness-aware algorithms to minimize biased outcomes. Regular audits should be conducted to assess and address potential biases, while diverse, multidisciplinary teams should be involved in AI development to embed varied perspectives and reduce the risk of biased designs or applications.
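A regular bias audit of the kind described can start as simply as comparing positive-outcome rates across groups. The demographic-parity check below is a minimal sketch; the group labels, outcomes, and any threshold for flagging a model are made-up assumptions for illustration.

```python
def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rate between any two groups.

    outcomes: iterable of 0/1 model decisions
    groups:   iterable of group labels, aligned with outcomes
    A gap near 0 suggests parity; a large gap flags the model for review.
    """
    totals, positives = {}, {}
    for y, g in zip(outcomes, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative audit: group "a" approved 3/4, group "b" approved 1/4
gap = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0], list("aaaabbbb"))
print(f"Parity gap: {gap:.2f}")  # -> 0.50, large enough to warrant review
```

Demographic parity is only one of several fairness definitions, and they can conflict; which one to audit against is itself a decision for those diverse, multidisciplinary teams.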

Developing and adhering to ethical guidelines is essential for responsible AI use. These guidelines should define clear principles for AI development and deployment. For instance, Thoughtworks collaborated with PEXA Group to create an AI-assisted platform aligned with ethical AI principles, demonstrating how organizations can embed ethical considerations into their AI initiatives.

Leveraging open-source AI can also play a significant role in addressing ethical concerns by fostering transparency, collaboration, and innovation. Open-source communities encourage shared solutions to common AI challenges, such as bias detection and safety, while cross-company partnerships drive advancements in transparency, robustness, and security. The rise of specialized open-source models, tailored for specific domains like healthcare or natural language processing, is helping industries address unique ethical and technical challenges. Open-source frameworks focusing on explainability, fairness, and ethical use are becoming more prevalent, setting standards for responsible AI development.

Open-source initiatives are also enhancing security and resilience by tackling malicious AI uses, such as deepfake detection and misinformation prevention. Modular and interoperable open-source foundation models enable organizations to build tailored AI systems without starting from scratch, fostering innovation while reducing development barriers. Additionally, open data initiatives and collaborations between industry, academia, and governments are reducing obstacles to training fair and reliable AI models.

By combining these strategies, organizations can effectively address ethical challenges, ensuring their AI systems align with societal values, foster trust, and operate responsibly. This holistic approach not only mitigates risks but also sets the stage for sustainable and ethical AI-driven innovation.

How can emerging markets establish effective regulations that balance innovation with ethical considerations, and what role should governments and private sectors play in this process?

Emerging markets have a unique opportunity to develop regulatory frameworks for AI that encourage innovation while ensuring ethical considerations are met. Achieving this balance requires a collaborative effort between governments and the private sector, with each playing distinct but complementary roles.

Emerging markets face a critical challenge in establishing regulations that balance AI innovation with ethical considerations. Businesses increasingly recognize that this balance is not just a regulatory obligation but a strategic priority. Many are addressing this by implementing formal AI governance frameworks that align AI initiatives with ethical guidelines and compliance standards. These frameworks often involve governance bodies made up of cross-functional teams that oversee AI projects from development to deployment, ensuring they meet safety and ethical benchmarks.

A growing trend is the adoption of explainable and interpretable AI (XAI) models to address the risks of “black box” AI systems. These models make decision-making processes more transparent and accountable, which is particularly crucial in regulated industries like finance and healthcare. Human-in-the-loop systems further enhance safety and accuracy by adding human oversight at key stages of the AI lifecycle. This approach is especially valuable in high-risk applications, such as medical diagnoses or automated lending, where human review can help prevent errors and mitigate bias.

To ensure reliability and safety, businesses are prioritizing robust testing and simulation of AI systems. Stress-testing models in controlled environments under real-world and edge-case conditions allow organizations to identify and resolve potential issues before deployment. Many companies are also adopting tiered deployment strategies, starting with internal or controlled rollouts, then scaling to select customer groups, and finally moving to full production. This phased approach minimizes risks while maintaining public trust in AI systems.

Data privacy and security are central to AI safety. Companies are leveraging secure data practices such as on-device processing and federated learning to protect user privacy while harnessing AI's power. These practices help mitigate the risks of data breaches and misuse of personal information. Additionally, businesses are collaborating through industry consortia like the Partnership on AI or AI4People to establish shared safety standards and best practices. Such partnerships foster innovation while ensuring unified approaches to safety.

Investing in ethics and compliance training is also a priority for many organizations. By training employees in both technical AI skills and ethical considerations, companies ensure that everyone involved in the AI process understands the broader implications of their work.

The collaboration between governments and the private sector is critical to creating a comprehensive regulatory framework that balances innovation and ethics. Governments should take the lead in raising public awareness about AI's potential benefits and risks and encourage cross-border collaboration to establish global standards. The private sector, on the other hand, should focus on developing industry-specific codes of conduct and ethical guidelines. Transparency about AI systems and their decision-making processes, coupled with investment in the research and development of ethical AI, will ensure that businesses contribute meaningfully to this collaborative effort. Together, governments and businesses can create an ecosystem that fosters innovation while maintaining ethical integrity.

What strategic initiatives should organizations and governments in emerging markets prioritize to foster responsible and sustainable growth of GenAI?

To foster responsible and sustainable growth of GenAI, organizations and governments in emerging markets should prioritize strategic initiatives that clearly define objectives, measure outcomes, and address foundational gaps. A critical first step is to establish clear business objectives for AI projects, ensuring alignment with specific outcomes such as cost reduction, increased revenue, or enhanced customer satisfaction. By defining these objectives upfront, organizations can measure progress using relevant key performance indicators (KPIs) and link AI initiatives directly to business value.

Establishing baseline metrics before AI implementation is essential for tracking improvements over time. For example, if the goal is to reduce operational costs, organizations should identify current expenditures and monitor cost savings attributable to AI. Since AI benefits often compound over time, adopting a longitudinal approach allows for incremental measurement of ROI. Tracking short-term gains, such as efficiency improvements, alongside medium-term outcomes like customer retention or revenue growth, offers a more comprehensive view of AI's impact.

Operational efficiency gains are another area to focus on, as many AI applications improve internal processes such as customer service automation or supply chain optimization. Metrics like reduced processing time, error rates, and productivity enhancements can provide tangible evidence of these benefits. For customer-facing AI applications, leveraging A/B testing helps isolate AI's effects on outcomes like conversion rates, user engagement, or customer retention, offering direct insights into ROI. Additionally, organizations should account for intangible benefits, such as improved decision-making or enhanced product quality, using proxy measures like reduced defects or long-term business value estimates.
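For the A/B testing mentioned above, a standard way to judge whether an AI variant's conversion rate genuinely differs from the control is a two-proportion z-test. The sketch below uses made-up traffic and conversion numbers purely for illustration.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing conversion rates of control (a) and AI variant (b).

    Uses the pooled conversion rate to estimate the standard error,
    the standard large-sample test for two proportions.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: 500/10,000 conversions on control,
# 600/10,000 with the AI feature enabled
z = two_proportion_z(500, 10_000, 600, 10_000)
print(f"z = {z:.2f}")
```

A |z| above roughly 1.96 indicates significance at the 5% level; here the assumed uplift clears that bar, which is the kind of direct ROI evidence the paragraph describes.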

A thorough cost-benefit analysis is critical for sustainable growth. Organizations must calculate the total cost of ownership for AI, including infrastructure, talent, and ongoing maintenance, and compare this against both quantifiable and intangible benefits. To ensure continuous improvement, feedback loops and real-time evaluation of AI models should be implemented. AI dashboards can help visualize ROI trends, enabling stakeholders to see the direct impact of AI on performance and identify areas for optimization.
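The cost-benefit analysis above reduces to simple arithmetic once the figures are gathered. This sketch computes first-year ROI against total cost of ownership; every dollar amount is an illustrative assumption.

```python
def ai_roi(benefits: float, infra: float, talent: float, maintenance: float) -> float:
    """First-year ROI: (quantified benefits - total cost of ownership) / TCO."""
    tco = infra + talent + maintenance
    return (benefits - tco) / tco

# Illustrative figures (USD): $1.2M measured benefit against an $800k TCO
roi = ai_roi(1_200_000, infra=300_000, talent=400_000, maintenance=100_000)
print(f"First-year ROI: {roi:.0%}")  # -> 50%
```

Feeding a calculation like this into an AI dashboard, recomputed as real benefit and cost data arrive, gives stakeholders the ROI trend line the paragraph calls for.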

Before unlocking growth from AI investments, it is vital to address several key gaps. Data gaps must be bridged by ensuring that data is accessible, properly tagged, and enriched with necessary metadata for specific use cases. Infrastructure gaps require assessing whether existing systems can scale with user demand and manage associated costs, such as those for processing tokens in large models. Evaluation gaps can be tackled by developing robust techniques to measure AI model performance and reliability, particularly for large language models (LLMs).

AI skill gaps must also be addressed by preparing users to leverage AI tools effectively. This includes overcoming resistance to change and equipping employees with the knowledge to understand how AI can enhance their roles. Lastly, adoption gaps can be closed through well-designed change management strategies that facilitate the integration of AI technologies within organizations. By systematically addressing these foundational gaps and implementing clear strategies for ROI measurement, emerging markets can ensure that GenAI development is both responsible and sustainable.