The UAE as an AI Hub: When Substance Matters More Than Structure

Why regulators are looking beyond legal form and what it means for AI businesses and investors
China’s recent decision to block Meta’s acquisition of AI agent company Manus, and to require the deal to be unwound, has underlined that geography still matters in artificial intelligence. What began as a review has now been framed explicitly as a response to concerns over technology exports, national security and the relocation of AI activity away from China. It is also a signal that regulators are prepared to intervene where offshore structures and re‑domiciliation do not reflect where substantive control and risk continue to sit.
Against that background, the question for jurisdictions that wish to act as AI hubs is whether they are attracting activity on a substantive basis or merely as a legal or regulatory label. The UAE has placed artificial intelligence at the center of its long-term economic agenda. Through the National Strategy for Artificial Intelligence 2031, the creation of a dedicated ministerial portfolio and the establishment of federal and emirate-level AI and advanced technology bodies, it has indicated that it intends to play a meaningful role in this field rather than simply adopting technologies developed elsewhere. That direction is reinforced by continuing investment in digital infrastructure and public sector AI programs, particularly in Abu Dhabi and Dubai.
The UAE is also increasingly visible in international conversations about AI risk and governance, including through participation in multilateral initiatives and bilateral engagement with technology-supplying states. The UAE is not in the position of Singapore in the Meta/Manus narrative, but that story provides a useful lens. It raises the question of when a jurisdiction can credibly present itself as an AI hub and when it risks being seen as a venue for regulatory arbitrage, particularly where activity is relocated in form but not in substance.
From a legal perspective, the UAE’s answer to that question is still developing, but the direction of travel is increasingly clear. There is no single comprehensive AI statute. Instead, AI activity sits within a framework that brings together Federal Decree-Law No. 45 of 2021 regarding the Protection of Personal Data (“PDPL”), the data protection regimes of the Dubai International Financial Centre (“DIFC”) and Abu Dhabi Global Market (“ADGM”), cybersecurity and content rules, and sector-specific regulation in areas such as financial services and healthcare. AI innovation is encouraged, but within a structure that is increasingly shaped by these regimes and by emerging expectations around AI governance.
This article is the first in a three-part series on what that means for AI businesses that choose to base operations in the UAE. The series starts with the UAE’s positioning as an AI hub and the extent to which it offers substance rather than a light touch label. It then considers data and cross border flows, and finally looks at compute, export controls and AI governance, with a particular focus on whether a UAE base can act as a credible organizing center for group level AI risk.
For founders and investors, three questions underpin the analysis across this series.
- Does the UAE provide a platform for substantive AI operations, or primarily a jurisdictional and branding advantage?
- Can UAE-based structures support multi-jurisdictional data use and governance without creating regulatory friction?
- Can the UAE credibly anchor decision-making around infrastructure, export controls and AI risk in a manner that withstands scrutiny from regulators, investors and commercial counterparties?
These questions are particularly salient in the context of cross-border AI investment and technology flows, where the UAE may serve as a neutral platform for structuring AI‑related operations across multiple jurisdictions. In such scenarios, the credibility of a UAE‑based structure will turn on whether it embodies genuine operational substance, including real decision‑making authority and governance functions, or whether it is instead perceived as an intermediary layer for activities carried out elsewhere. The Meta/Manus unwind illustrates that, for higher value AI transactions, regulators are now willing to look past corporate form and ask where technology, talent and effective control are really located when deciding whether a structure is acceptable.
These considerations have moved beyond the abstract and are now directly influencing how AI‑related investments are structured and negotiated in practice.
The UAE as AI Hub: Ambition, Opportunity and Regulatory Reality
The UAE’s commitment to artificial intelligence is clearly articulated in its policy framework and is increasingly borne out in market practice. The National Strategy for Artificial Intelligence 2031 sets out an ambition to strengthen national AI capability and global competitiveness, underpinned by a national implementation program, a dedicated Minister of State for Artificial Intelligence, and formal coordination across government. At regional and international forums, Dubai and Abu Dhabi are consistently positioned as credible platforms for AI innovation, investment, and talent development, rather than mere adopters of technologies developed elsewhere.
For investors and operators, the focus is less on stated policy ambition and more on whether UAE‑based structures can support genuine operational control, coherent data strategy, and effective risk governance without creating friction in other key jurisdictions. From that perspective, the UAE’s positioning as an AI hub will increasingly be tested not by headline initiatives, but by its ability to support real‑world deployment, cross‑border data use, and regulatory scrutiny in transactional and operational settings.
Policy direction and market positioning
Policy intent has been matched by delivery, particularly at the emirate level. Abu Dhabi and Dubai have moved to integrate artificial intelligence and data‑driven technologies into government functions and regulated sector services through initiatives aligned with Digital UAE and related strategies. These programs have created concrete opportunities for pilot projects and public‑sector collaboration across areas including healthcare, transport, education, and the justice system. For international AI businesses, this signals an openness to working with partners that can deploy clearly defined AI use cases within regulated and public‑facing environments.
At the same time, the UAE remains attentive to how its AI positioning is viewed internationally. Recent developments, including the decision to block and unwind the Meta/Manus transaction, have underscored regulatory sensitivity to jurisdictions that appear to host AI activities in form rather than substance. Against that backdrop, the UAE’s public narrative consistently emphasizes responsible and human‑centric AI. The authorities have been careful to frame the Emirates not as a low‑oversight jurisdiction, but as a venue where AI development is supported by a structured, credible and internationally recognizable governance framework. While the legal landscape continues to evolve, it broadly reflects and reinforces that positioning.
Responsible AI themes in UAE policy
Official materials on AI in the UAE consistently emphasize concepts such as responsibility, human-centric design and trustworthiness. National-level strategies and guidance refer to accountability, transparency, fairness, privacy and security. They also reference international work on trustworthy AI, including principles developed by organizations such as the OECD and initiatives around safety and risk management.
For operators, the practical consequence is that AI projects in the UAE are likely to be assessed through lenses already familiar from other jurisdictions, even where no AI specific statute exists. In a transactional and investment context, these issues are increasingly addressed through due diligence and risk allocation rather than at the level of abstract policy. Questions relating to training‑data provenance, the scope of data‑usage rights, model limitations, and ownership of outputs can affect diligence timelines, drive the need for tailored contractual protections, and, in some cases, influence valuation. AI governance is therefore no longer a purely forward‑looking compliance exercise, but an active commercial consideration in how risk is assessed and priced.
Core legal and regulatory framework
The regulatory framework governing AI in the UAE is assembled from existing legal regimes rather than a single, standalone AI law, but it is increasingly capable of constraining real world AI deployments.
At the federal level, the PDPL provides the central framework for the processing of personal data. It requires data to be handled lawfully, fairly and transparently, used only for defined purposes, and protected by appropriate security measures, while also granting individuals specific rights over their personal information. Rules on cross‑border data transfers are particularly significant for AI‑driven activities, as they require either an assessment of adequacy in the receiving jurisdiction or the implementation of contractual and other safeguards before personal data can be transferred outside the UAE. AI systems that depend on personal data, especially those used for profiling or automated decision‑making with individual impact, will therefore require careful assessment under the PDPL, often supported by documented risk analysis, governance processes and tailored contractual arrangements.
Alongside data protection, the UAE’s cybercrime and content‑related legislation is also relevant. These laws address issues such as misuse of IT systems and the creation or dissemination of unlawful content. Depending on how they are designed and deployed, AI‑enabled tools used for content generation, monitoring or security may fall within scope. Businesses using such tools should ensure that their internal policies and technical controls address misuse, escalation, takedown procedures and cooperation with relevant authorities.
Distinct data protection regimes apply within the DIFC and the ADGM. These frameworks are closely aligned with concepts familiar from European data protection law, including the requirement to identify a lawful basis for processing, implement appropriate technical and organizational safeguards, and, in certain circumstances, conduct formal data protection impact assessments. Both regimes also contain specific provisions addressing profiling and automated decision‑making, which may be engaged where AI systems influence decisions with legal or similarly significant effects. As a result, AI initiatives operating from the DIFC or ADGM typically attract early engagement from risk and compliance functions and are more likely to require a higher level of structured documentation.
Sector regulators are also beginning to approach AI as a discrete area of oversight. In financial services, regulatory commentary across the UAE and its financial free zones points to growing expectations that firms actively understand and manage model risk, including where AI is deployed in areas such as credit assessment, transaction monitoring, trading and underwriting. In healthcare, standards and approval pathways are emerging for the validation and ongoing oversight of AI systems used in diagnostics and triage, particularly within the Abu Dhabi and Dubai health systems. While these sector‑specific approaches remain in development, they signal a clear trajectory towards more defined and demanding regulatory expectations over time.
Practical takeaways
For founders, investors and operators based in the UAE, several practical observations emerge. The absence of a standalone AI statute should not be mistaken for a light‑touch or uncertain regulatory environment. AI projects will, in practice, need to engage at an early stage with the PDPL and, where relevant, the DIFC or ADGM data protection regimes, alongside cybersecurity rules, content controls and applicable sector‑specific regulation.
Where AI systems are deployed in ways that may have a material impact on individuals or markets, organizations should expect to justify how existing legal and regulatory obligations have been applied in practice. This goes beyond high‑level policy statements and typically requires clear, documented analysis. Internal papers that map AI use cases, data flows, purposes, legal bases and safeguards can be invaluable when engaging with regulators, major commercial counterparties and investors. Recent interventions in high profile AI transactions, including the unwinding of the Meta/Manus deal, suggest that questions around the location of technology, data and effective control are now capable of influencing not only deal terms but deal viability itself.
For organizations seeking to position the UAE as an AI hub, substance is key. Establishing data governance, AI risk oversight, and meaningful decision‑making capability within the UAE, supported by genuine operational control at the local entity level, significantly strengthens that proposition. By contrast, structures that operate primarily as booking or branding vehicles, with strategic control located elsewhere, are more likely to face closer scrutiny from regulators, investors, and commercial counterparties, particularly in higher‑value or more sensitive AI use cases.
The next article in this series will examine data and cross‑border flows in more detail, and consider how a UAE base can be used to structure multi‑jurisdictional data strategies in a way that reinforces, rather than undermines, claims of operational substance.

Authored by: Hisham Oweiss (Partner) and Kwan Lung Wong (Andrew) (Associate)