(Central News Agency, Taipei, August 16, 2025)
Regarding the legislative process for the Artificial Intelligence Basic Act currently underway in the Legislative Yuan, Li-Ching Chang, Chair Professor at Shih Chien University and Executive Director of the International Artificial Intelligence and Law Research Foundation, stated that to balance government regulation and industrial innovation, legislation can be used to guide operators to enhance their self-governance capabilities. This includes regularly updating specifications for labelling and data management, and dynamically adjusting regulations as technology evolves to prevent the law from becoming an obstacle to innovation.
The joint meeting of the Legislative Yuan's Education and Culture Committee and Transportation Committee completed its preliminary review of the draft Artificial Intelligence Basic Act on August 4. However, key articles, including the regulatory mechanism and whether to stipulate penalties, remain points of divergence among the parties and have been reserved for cross-caucus negotiation before the full session.

Chang told the Central News Agency that the various legislative proposals all revolve around balancing "government regulation" and "industrial innovation." She believes the two should proceed in parallel. In particular, because AI is an innovative application technology, the government should reduce regulatory barriers and allow businesses to test new technologies under a certain degree of supervision. For example, by collecting data on safety and risks during a "regulatory sandbox" phase, the government can encourage operators to establish relevant guidelines and norms, thereby enhancing the industry's self-governance capabilities.
Chang explained that overly strict government regulation might suppress the development of AI applications. Therefore, she suggested referencing the U.S. model, which balances risk classification with flexible management: high-risk items require the submission of safety monitoring plans, while low-risk items are subject to lighter supervision, thereby reducing the cost of innovation for businesses.
Chang cited the private-sector draft of the Artificial Intelligence Basic Act that the International Artificial Intelligence and Law Research Foundation prepared in 2023, which was based on this thinking. Taking economic incentives and industry welfare into account, that draft clearly stipulated that the government should provide incentives such as tax and financial relief while also reducing compliance costs, allowing businesses to keep innovating within a legal framework and fostering industrial development.
Min-Huei Hsu, a professor at Taipei Medical University's Graduate Institute of Big Data Technology and Management, acknowledged that balancing regulation and industrial innovation is always a dilemma. Because technology develops rapidly, it is impossible to foresee every future impact today, which makes preventive legislation inherently challenging. Even so, he remains positive about the Legislative Yuan launching the legislative process, since it promotes social dialogue.
In the face of AI's rapid development, Chang noted that generative AI has in recent years become deeply embedded in daily life, giving rise to issues such as deepfake videos, privacy infringement, and fraud. The legal framework should therefore be strengthened to protect human rights: AI applications should adhere to a "human-centric" principle and uphold fundamental values such as privacy, safety, fairness, transparency, and accountability. This should be coupled with dynamic regulatory adjustment, including periodic reviews and the repeal of outdated regulations, to prevent the law from becoming a hindrance to innovation.
Hsu also suggested that, just as the biomedical field initially examined genetic research through the lens of ELSI (Ethical, Legal, and Social Implications), new technologies like AI should be approached with similar caution. For instance, while AI development creates new job opportunities, it also produces "victims" who lose their jobs as a result; the question is how to provide legal protection for them. Conversely, if data on specific demographic groups is withheld for privacy reasons, could that inadvertently restrict related research?
Chang added that for high-risk AI, a safety monitoring plan must be in place in advance, periodic safety monitoring reports should be submitted during operation, and continuous regulatory measures should follow, such as ensuring that labelling technology remains stable and reliable. Watermarks, moreover, should enable users to verify the authenticity of content and avoid being misled. If AI infringes upon people's rights, there must be avenues for remedy, compensation for damages, and an insurance system.
The government must also mandate that the related content be removed to prevent further harm. (Editor: Lin Ke-lun) 1140816

