Promoting Taiwan as an "AI Island" and strengthening "Sovereign AI" have become critical directions for the government's development strategy. In recent years, the administration has taken multiple actions to lay the groundwork:
- April 2023: The Executive Yuan established the "Digital Policy and Legal Coordination Project Meeting" to analyze digital legal issues.
- August 2023: Released the "Reference Guidelines for the Use of Generative AI by the Executive Yuan and its Subordinate Agencies," which are subject to rolling revisions in line with international trends.
- July 2024: The National Science and Technology Council (NSTC) published the draft "Artificial Intelligence Basic Law," with plans to submit it to the Legislative Yuan for deliberation and thereby assign clear responsibilities for AI development across ministries.
- December 2024: The Executive Yuan convened the "12th National Science and Technology Conference," outlining a technological blueprint centred on "Smart Innovation, Democratic Resilience, and Building a Balanced Taiwan."
However, to truly realise "Sovereign AI," experts suggest that the government must lead by combining forces with the private sector to strengthen AI capabilities comprehensively. Crucially, the government must first perfect its own internal application and governance of AI.
As a global leader in AI, the United States federal government has, since 2019, utilized various regulations and executive orders to define its leadership role and establish a robust system for AI application and risk control. The following analysis of US measures offers a roadmap for Taiwan's future AI development planning.
Development and Execution of US Federal AI Regulations and Governance
The US federal government has taken a leading role in AI governance. Between 2019 and 2022, it enacted several orders and laws, including Executive Order 13859 (2019), Executive Order 13960 (2020), the AI in Government Act (2021), and the Advancing American AI Act (2022).
However, early regulations often lacked precise, unified execution details and monitoring mechanisms, resulting in lacklustre implementation. According to a report by the Stanford Institute for Human-Centered AI (HAI), only 12% of agencies publicly disclosed the AI plans required by EO 13859, indicating limited effectiveness.
On October 30, 2023, the US government released Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As a concrete implementation guide, the Office of Management and Budget (OMB) followed on March 28, 2024 with Memorandum M-24-10 (the M-Memo). Unlike earlier orders, these directives set specific requirements that emphasised dedicated roles, budget allocation, and workforce management.
Three Core Requirements of EO 14110 and the M-Memo:
- Appoint Chief AI Officers (CAIO): Federal agencies were required to appoint a CAIO within 60 days (by May 27, 2024) to oversee governance, risk management, and innovation.
- Establish AI Governance Boards: Agencies covered by the Chief Financial Officers Act (CFO Act) must form governance boards to unify AI rules and build a systematic governance framework.
- Publish Compliance Plans: Agencies were required to publicly release their compliance plans within 180 days (by September 24, 2024) to ensure that their use of AI aligns with M-Memo principles.
Execution Status and Future Challenges in the US
According to Stanford HAI's analysis of 266 federal agencies, EO 14110 and the M-Memo have markedly improved agency implementation:
- CAIO Appointments: Approximately 30% of all federal agencies have disclosed CAIO information. Among CFO Act agencies and large independent agencies, the disclosure rate reached 94%. However, 89% of these roles are held concurrently by existing officials, such as CIOs or CDOs, with only the Department of Justice appointing an external expert. Dual-hatting existing executives raises concerns about whether they have the bandwidth to implement AI governance effectively.
- Budgetary Needs: Approximately one-third of agencies have allocated specific funding for AI in their 2025 fiscal budgets. However, disparities are vast: the Department of Defense requested $435 million, while other agencies averaged requests of only $270,000. Sustained financial and professional support remains a critical variable.
- Compliance Plans: Under the M-Memo, 86% of major agencies submitted compliance plans or usage statements, a stark improvement over the 12% under EO 13859. The challenge now shifts to tracking how these plans are optimized and implemented.
Future Directions for Taiwan's Government AI Governance
Current Approach: Focus on Coordination and Guidelines
While the Executive Yuan and the NSTC have established coordination meetings, released GenAI usage guidelines, and drafted the AI Basic Law, Taiwan's current governance focuses primarily on regulatory coordination and internal guidance. The framework still lacks detailed execution planning: how policies will be implemented, and what specific role the government should play in the broader AI ecosystem.
To achieve the goals of an "AI Island" and "Sovereign AI," the government must actively enhance its own efficacy and assume a leadership role.
Recommendations Borrowed from the US Experience
- Integrate Execution Direction into the AI Basic Law: Clearly define the government's roles and obligations within the law to ensure the long-term continuity and stability of AI policy.
- Appoint or Designate Chief AI Officers (CAIO): Designate AI officials based on ministry size and needs to drive policy and align with industry demands. Where existing heads serve concurrently, ensure they have adequate professional resources and staff so that dual duties do not undermine performance.
- Issue Compliance and Risk Assessment Plans: Following the US M-Memo, agencies should establish compliance plans for GenAI and general AI, encompassing risk management, information security, human rights, and ethics. These should be made public within a set timeframe to enhance transparency and allow for rolling revisions.
- Budgeting and Talent Cultivation: Allocate specific budgets for AI adoption, compliance auditing, and talent recruitment. Organise cross-ministry training to enhance AI literacy and risk awareness among civil servants.
- Establish a Cross-Ministry AI Governance Framework: Similar to the US AI Governance Boards, the Executive Yuan should coordinate across ministries to formulate AI plans. Establishing an "AI Steering Committee" at the Executive Yuan level could facilitate regular review and supervision.
- Tracking and Evaluation Mechanisms: Set evaluation metrics (e.g., plan completion rates, compliance levels, transparency) and regularly review agency performance. Initiate assistance or supervision mechanisms for agencies that fall behind to ensure policies are truly implemented.
While Taiwan has initiated preliminary actions, a more complete regulatory system and cross-ministry framework are needed. By learning from the US experience—establishing dedicated roles, promoting compliance plans, allocating sufficient budgets, and maintaining continuous tracking—Taiwan can enhance the execution of its AI policy while striking a balance between security, privacy, and ethics.
Perfecting these supporting measures will help Taiwan establish an AI ecosystem characterised by both innovation and public trust, enabling the government to lead industrial development and fully realise the goal of "Sovereign AI."