On Thursday, February 19, Secretary-General Chu Chen-tso hosted a working group forum titled "U.S.-China AI Competition: AI Regulation in the U.S., EU, and East Asia" at the Harvard Kennedy School.

The forum featured a panel discussion with Florence G’sell (Visiting Professor at Stanford University), Gal Forer (Researcher at UC Berkeley), and Hao Chen (Researcher at Harvard Kennedy School). The experts compared and discussed the current AI legal frameworks across the United States, the European Union, China, South Korea, Japan, and Taiwan.

The Shift to the AI Agent Era
During the presentation, Secretary-General Chu introduced the concept of the "AI Security Triangle." In the era of AI chatbots, discussions focused primarily on the intrinsic risks of large language models (LLMs), such as bias and hallucination, with governance centered on the software and model level.

As development advances into the era of AI agents, however, the value and potential risks of AI will manifest more directly in real deployment and execution scenarios, involving software-hardware integration, cross-system collaboration, and the ability to act on and interact with the physical world. Security governance in the AI agent era must therefore address risk management and overall competitiveness across both software and hardware.

The Three Pillars of the "AI Security Triangle"
The "AI Security Triangle" proposed by Secretary-General Chu comprises three main pillars:

AI Legal Framework (Core Security): Striking a balance between risk management and innovative development while providing clear guidance on the nation's future AI trajectory and governance boundaries.

Friend-shoring Supply Chains (External Security): As the world gradually moves toward a "one world, two systems" paradigm, it is crucial to establish trusted, allied supply chains to maintain the security and stability of critical technologies. The Pax Silicon signed in Washington, D.C., serves as a representative case of this effort.

AI Sovereignty (Internal Security): Building autonomous AI infrastructure, data governance, language models, and business development models to ensure national security, data sovereignty, and industrial competitiveness.

Taiwan's "Soft Law" Approach: A Signal of Trust
Notably, Taiwan's recently passed Artificial Intelligence Basic Law blazes a new trail in the global AI regulatory landscape. Unlike the penalty-centric "hard law" models seen in the EU, Japan, and California, Taiwan has adopted principle-oriented framework legislation. By excluding penalties at this stage, Taiwan is taking a "soft law" approach that emphasizes room for innovation and societal participation.

Because Taiwan's semiconductor and hardware supply chains profoundly shape the global economy, this institutional design sends a powerful signal: trust. Taiwan's influence on the global supply chain stems not only from its industrial prowess but also from a solid and predictable legal foundation.

According to Article 18 of the Basic Law, government ministries must complete their review and refinement of existing regulations within two years of the law's implementation. This will concretely realize the spirit of the Basic Law and gradually shape a more comprehensive AI legal system in Taiwan.

Taiwan is moving forward, and the world is watching!