By Chu Chen-Tso, Secretary-General, International Artificial Intelligence and Law Research Foundation
Since OpenAI launched ChatGPT in late 2022, the world has formally entered the era of generative artificial intelligence (AI). Although current AI technology lacks consciousness and personality—serving primarily as a high-level tool that brings efficiency and convenience to human society—it has also introduced a host of potential risks and concerns, including privacy violations, bias, discrimination, and the possibility of AI spiraling out of control. To address these emerging challenges, nations are accelerating the formulation of AI regulations.
The following analysis outlines the regulatory characteristics of the European Union, the United States, and China, and explores how Taiwan is referencing international experience while integrating local needs and industrial realities to promote its own "AI Basic Law."
The EU: Mandatory Regulations Combining Risk Classification and Human Rights Protection
The European Union has adopted mandatory legislative norms that combine risk classification with strict penalties and a focus on human rights. The EU's Artificial Intelligence Act (AIA), the world's first comprehensive AI law, entered into force on August 1, 2024. Comprising 113 articles and 13 annexes, the act classifies AI products and services into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk. Corresponding obligations and review measures apply at each level, with the most serious violations punishable by fines of up to 7% of a company's global annual turnover.
The promulgation of the AIA has sparked widespread discussion. One camp views it as an important paradigm for AI regulation that other countries can emulate. The other camp worries that "over-regulation" will stifle innovation and development within the EU's AI industry, rendering it unable to maintain the flexibility and innovative capacity needed to compete with AI players in the United States and elsewhere. Whether this "high-regulation" model can balance technological development with human rights protection remains to be seen.
The US: Executive Orders and Policy Frameworks Prioritizing Industry Self-Regulation
While the United States has long held a leadership position in AI and technology, it currently lacks federal-level AI regulatory legislation. Management is primarily driven through executive orders and policy documents, emphasizing industry self-regulation, international cooperation, and the encouragement of innovation.
In October 2023, President Biden issued Executive Order 14110, requiring federal agencies to establish clear AI policies and implementation paths, promote AI technology assessment and infrastructure construction, and support the setting of international AI standards. Following this, in September 2024, Secretary of State Antony Blinken released the "Global AI Research Agenda" and the "AI in Global Development Playbook," highlighting U.S. leadership in establishing international norms.
At the state level, California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047), passed by the state legislature in August 2024, attracted significant attention. The bill would have required developers of large AI models with training costs exceeding $100 million to build in a "kill switch" to prevent abuse or catastrophic consequences. Although it cleared the legislature and drew support from parts of the scientific community, it faced strong opposition from Silicon Valley companies such as OpenAI and Meta, which argued it would impose heavy compliance burdens and drive the AI industry offshore, hindering innovation. California Governor Gavin Newsom ultimately vetoed the bill in September 2024, underscoring that the core U.S. approach remains encouraging industry self-regulation and technological innovation, preferring policy guidance over strict regulation.
China: Government-Led Compliance with a Wait-and-See Approach on a Basic Law
China ranks second only to the United States in AI technological strength, and its regulatory approach is characterized by strong government control. Since 2022, it has implemented several targeted regulations, including the Provisions on the Administration of Algorithmic Recommendations in Internet Information Services, the Provisions on the Administration of Deep Synthesis in Internet Information Services, and the Interim Measures for the Management of Generative AI Services. These rules impose compliance requirements on tech platforms and enterprises, strengthening supervision in specific areas.
Regarding the formulation of a comprehensive AI Basic Law, Chinese officials have adopted a wait-and-see attitude despite calls from experts and scholars, such as the "Artificial Intelligence Law (Scholars' Draft Proposal)" released in March 2024. Beyond the government's already strong capacity to control private enterprises, the concern that excessive legislation could suppress innovation remains a major consideration. Consequently, China has not yet enacted comprehensive AI legislation comparable to the EU's AIA.
Taiwan: Integrating Global Experience to Push for an AI Basic Law
In response to the rapid arrival of the AI era and regulatory needs, Taiwan's executive branch has successively issued relevant AI guidelines. These include the "Reference Guidelines for the Use of Generative AI by the Executive Yuan and its Subordinate Agencies" (August 2023) and the "Guidelines for the Use of AI in the Financial Industry" issued by the Financial Supervisory Commission (June 2024). While not legally binding, these provide basic principles and directions for self-regulation for the public sector and specific industries.
Simultaneously, Taiwan is actively pushing to legislate an AI Basic Law. Two draft versions have been proposed: a civil-society version drafted by the International Artificial Intelligence and Law Research Foundation in March 2023 (24 articles, submitted to the Legislative Yuan in April 2024), and an official version drafted by the National Science and Technology Council in July 2024 (18 articles, currently under Executive Yuan review).
Both versions reference the latest international regulatory trends, blending the EU's mandatory approach with the more flexible U.S. management model. Compared to the EU AIA's complex 113-article structure, Taiwan's draft AI Basic Law is relatively streamlined, following the U.S. preference for enacting principle-based, policy-oriented guidelines into law. Both versions cover definitions, ethical principles, risk classification, industry support, and protection for the disadvantaged. Neither currently includes penalty clauses; each functions instead as an "umbrella framework" of legal guidelines. The civil-society version emphasizes international alignment and industry participation, proposing the Ministry of Digital Affairs as the competent authority, while the official version emphasizes the government's responsibility for risk-based management, designating the Executive Yuan as the competent authority.
In summary, Taiwan is establishing its AI Basic Law through an "umbrella framework," first establishing top-level design and ethical principles at the legal level, and then authorizing various ministries to formulate specific detailed regulations. In the future, ministries such as the Ministry of Health and Welfare and the Ministry of Transportation can use this legal basis to regulate specific areas like smart healthcare and smart transportation, striking an appropriate balance between protecting public interest and promoting industrial innovation.
From Risk Control to Industrial Development: The Historical Mission of Taiwan’s AI Basic Law
The rise of AI technology drives global industrial and social transformation while highlighting different national approaches to risk management and market development. The EU leads with mandatory regulations but faces questions about limited innovation; the US relies on executive orders and market self-regulation to maintain technological dominance; China uses rapid legislation and strict control to ensure safety and order in key technologies.
Experience shows that over-regulation may suppress innovation, while a lack of clear rules invites unpredictable risks. Taiwan's "umbrella framework" approach seeks to balance risk control and industrial development by combining international legal experience with local needs, exploring a new direction for global AI legal systems.
For Taiwan, AI technological strength is closely tied to national competitiveness. Taiwan is already a global leader in semiconductors and AI server hardware but has room for improvement in AI software services like large language models (LLMs) and cloud data. By embodying long-term industrial policy directions through the AI Basic Law, Taiwan can coordinate central and local agencies to promote specific AI industries. Integrating legal and policy resources will allow Taiwan to consolidate and expand its value in the global supply chain. Through an AI Basic Law that balances innovation and safety, Taiwan can not only gain an advantage in future competition but also ensure AI development aligns with social public interests, becoming a key demonstrator of global AI legal systems. This is both the historical mission of legislators and the key to Taiwan effectively mastering AI potential and international influence.
Source: Straits Exchange Foundation Magazine, February Issue