Large generative AI (AIGC) models such as ChatGPT have ignited a capital rush, while growing global attention to AIGC compliance is accelerating the introduction of corresponding regulatory measures.
On April 11, China’s Cyberspace Administration of China (CAC) released for public comment the Draft Measures for the Management of Generative Artificial Intelligence Services (hereinafter referred to as the “Draft Measures”). Comprising 21 articles, the draft outlines requirements covering everything from market entry for generative AI service providers to algorithm design, training data selection, model development, content generation, user real-name verification, and the protection of personal privacy and trade secrets. This marks the first dedicated regulatory framework for the currently booming generative AI industry in China.
Globally, security and ethical concerns surrounding ChatGPT are drawing intense scrutiny. The Biden administration in the United States has begun examining whether tools like ChatGPT should be subject to formal oversight. As a potential first step toward regulation, on April 11 the U.S. Department of Commerce officially opened a public consultation on accountability measures—including whether new AI models should undergo certification before public release.
Notably, the Draft Measures also express clear support for and encouragement of the generative AI industry, stating that “the state supports independent innovation, promotion, and international cooperation in foundational AI technologies such as algorithms and frameworks, and encourages the prioritized use of secure and trustworthy software, tools, computing resources, and data.”
Setting Rules for AI
“The speed at which these Draft Measures were issued is remarkable—it effectively keeps pace with the rapid evolution and application of the technology itself,” said Wu Shenkuo, Ph.D. supervisor at Beijing Normal University Law School and Deputy Director of the China Internet Association Research Center, in an interview with Yicai Global. “This reflects the increasing maturity, agility, and efficiency of China’s digital and cyberspace regulatory system.”
Generative artificial intelligence refers to technologies that produce text, images, audio, video, code, and other content based on algorithms, models, and rules. The current wave, epitomized by ChatGPT, has triggered a new “AI arms race” among global tech giants, including Microsoft, Google, Meta, Baidu, and Alibaba, as well as startups. Just one day before the Draft Measures were published, three companies announced their entry into the generative AI large-model space: Baichuan Intelligence (founded by former Sogou CEO Wang Xiaochuan), SenseTime, and Kunlun Tech.
Wu highlighted three defining features of China’s regulatory approach as reflected in the Draft Measures: (1) full utilization of existing legal and institutional frameworks; (2) a strong emphasis on risk prevention, response, and management; and (3) an ecosystem-oriented, process-based regulatory model—particularly evident in multi-layered oversight across the entire lifecycle of AI application.
The Draft Measures embody a “regulation-first” philosophy. On market access, the draft stipulates that before offering generative AI services to the public, providers must submit to a security assessment by the CAC and complete algorithm registration procedures, including updating or deregistering those filings when circumstances change.
Providers must also comply with laws and regulations, uphold social ethics and public order, respect intellectual property and business ethics, and refrain from leveraging algorithmic, data, or platform advantages to engage in unfair competition. Users, in turn, are required to provide real identity information. The Draft further mandates that AI-generated content must be “truthful and accurate,” and providers must implement measures to prevent the dissemination of false information.
On liability allocation, the Draft states that any organization or individual offering generative AI services—including chat, text, image, or audio generation, or enabling others to generate such content via programmable interfaces—shall bear the legal responsibility of a “content producer.” If personal information is involved, they must fulfill the statutory obligations of a “personal information processor” under relevant laws.
Multiple provisions address the protection of personal privacy and trade secrets. For instance, providers must safeguard users’ input data and usage records during service delivery. They are prohibited from illegally retaining input data that could identify a user, from profiling users based on their inputs or behavior, and from sharing user input data with third parties. The Draft also bans the illegal acquisition, disclosure, or exploitation of personal information, privacy, or trade secrets.
Additionally, providers must implement appropriate safeguards to prevent excessive user reliance on or addiction to AI-generated content, ensure safe, stable, and continuous service operations, and clearly label AI-generated images and videos. If a provider discovers that a user is violating laws or ethical norms—such as engaging in online hype campaigns, malicious posting, spam email generation, malware creation, or unethical marketing—the provider must suspend or terminate the service.
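To make the labeling obligation concrete, here is a minimal sketch of how a provider might stamp a generated image, assuming the open-source Pillow imaging library; the file names, label text, and metadata key below are hypothetical illustrations, not anything the draft prescribes.

```python
# A minimal sketch of labeling an AI-generated image, assuming the
# Pillow imaging library. Paths, label text, and the metadata key are
# illustrative placeholders, not values mandated by the Draft Measures.
from PIL import Image, ImageDraw, PngImagePlugin

def label_ai_image(path_in: str, path_out: str, label: str = "AI-generated") -> None:
    """Stamp a visible label onto an image and record provenance metadata."""
    img = Image.open(path_in).convert("RGB")

    # Visible watermark in the bottom-left corner, addressing the
    # requirement that generated images be clearly labeled.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), label, fill=(255, 255, 255))

    # Machine-readable provenance tag embedded in the PNG metadata.
    meta = PngImagePlugin.PngInfo()
    meta.add_text("content-origin", "generative-ai")

    img.save(path_out, pnginfo=meta)

# Example usage (requires an existing file):
# label_ai_image("output.png", "output_labeled.png")
```

The visible watermark addresses the human-facing labeling requirement, while the embedded metadata tag gives downstream platforms a machine-readable provenance signal that survives reposting as long as the file is not re-encoded.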
Chen Yiran, a lawyer at JunYue Law Offices in Shanghai, told Yicai Global that the Draft Measures will help clarify liability attribution in judicial practice. “By delineating what generative AI services can and cannot offer—and by specifying penalty amounts—the guidelines will enable companies to assume clear legal responsibilities for their AI offerings.”
For example, violations of the Draft Measures will be penalized by the CAC and relevant authorities under existing laws such as the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law. In cases of refusal to rectify or serious violations, services may be suspended or terminated, and fines ranging from RMB 10,000 to RMB 100,000 may be imposed.
Global Regulatory Momentum Builds
Globally, the first ban on ChatGPT came from Italy. On March 31, Italy’s Data Protection Authority (Garante) announced an immediate temporary ban on ChatGPT, citing privacy violations. EU member states have since begun considering similar regulatory actions.
Because OpenAI—the developer of ChatGPT—has no established headquarters in the EU, any EU data protection authority can initiate investigations or enforcement actions against it. Media reports indicate that European governments are increasingly alarmed by ChatGPT’s risks, which span data protection breaches, disinformation, cybercrime, fraud, and even student cheating on exams.
Ireland’s Data Protection Commission stated it is “following up with Italian regulators to understand the basis for their action” and will “coordinate with all EU data protection authorities on this matter.”
Germany’s data protection commissioner told the press that Germany might follow Italy’s lead and ban ChatGPT over data security concerns.
France’s data privacy watchdog, CNIL, confirmed it has launched an investigation following two complaints about ChatGPT and has contacted Italian regulators to better understand the rationale behind the ban.
Belgium’s data protection authority noted that ChatGPT’s potential infringements “should be discussed at the European level.”
The UK’s Information Commissioner’s Office warned that AI developers must not violate data privacy laws, adding that those who fail to comply will face consequences.
Norway’s data regulator, Datatilsynet, said it has not yet investigated ChatGPT but “does not rule out future action.”
In Brussels, EU lawmakers are debating the landmark Artificial Intelligence Act proposed by the European Commission. However, the Commission’s Executive Vice President, Margrethe Vestager, expressed caution about outright bans. “No matter what technology we use, we must continue to advance our freedoms and protect our rights,” she wrote on Twitter. “That’s why we don’t regulate AI technology itself, but how it’s used. We must not discard in a few years what took decades to build.”
U.S. policymakers share similar concerns. Recently, a bipartisan delegation of ten members of the U.S. House of Representatives visited Silicon Valley to meet with top tech executives and venture capitalists, including Microsoft President Brad Smith, Google’s President of Global Affairs Kent Walker, and leaders from Palantir and Scale AI.
According to sources, discussions focused heavily on recent advances in AI. While many executives expressed openness to government oversight, some warned that existing antitrust laws could hinder U.S. competitiveness. A Microsoft spokesperson clarified that its president does not believe competition law should be altered due to AI. Executives also urged increased federal investment in AI research and deployment.
Balancing Regulation and Innovation
Every technological revolution brings both immense opportunities and significant risks. For generative AI, continuous improvement depends on a virtuous cycle between real user interactions and model iteration. Striking the right balance between regulation and innovation remains a critical challenge for all stakeholders.
Mike Gallagher, Chair of the U.S. House Select Committee on Strategic Competition with China, expressed skepticism about extreme proposals—such as calls for a pause in AI deployment. “We must find a way to put guardrails in place while allowing our tech sector to innovate and maintain its edge,” he said.
Irene Tunkel, Chief Strategist for U.S. Equities at BCA Research, told Yicai Global that competition in AI will push policymakers to prevent major bottlenecks. “To maintain America’s technological advantage globally, the U.S. government will need to deploy both policy tools and fiscal spending to sustain AI growth over the long term. We expect more government resources—both defense and non-defense—to be allocated to AI, and a regulatory environment that encourages broader adoption across public and private sectors.”
On global regulatory trends, Wu Shenkuo emphasized the need for “agile, efficient regulatory mechanisms that can swiftly address emerging risks, alongside clear, practical, and accessible compliance guidelines that give all parties predictable standards.”
“Effective ecosystem governance also requires broad consensus-building—establishing shared values and mutually accepted behavioral norms around new technologies and applications,” Wu added.
Chen Yiran pointed to unresolved practical issues: “AI-provided services must clearly disclose to consumers that they are AI-driven—not disguised as human agents. Enterprises must bear full responsibility for harms caused by AI services. Moreover, large enterprises handling sensitive information should restrict employee use of open-source AI tools, as every query could inadvertently leak confidential data.”
When asked for further regulatory recommendations, Gao Fuping, Professor at East China University of Political Science and Law and Director of its Data Law Research Center, told Yicai Global: “Large language models raise profound questions involving both individual and public interests. Currently, we cannot fully control the content these models output—their responses are highly intelligent yet inherently stochastic. This unpredictability will be both the greatest challenge and focal point of future regulation.”
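Gao’s point about stochastic output can be shown in miniature. The sketch below is a toy example with invented numbers, not any production model’s code: it performs the temperature-scaled token sampling used in most large-language-model decoders, so the same input can return a different answer on every run.

```python
# A toy illustration of why generative model output is stochastic:
# the decoder samples each token from a probability distribution,
# so identical prompts can yield different answers on every run.
# The vocabulary and logits below are invented for illustration.
import math
import random

def sample_token(logits: list[float], temperature: float = 0.8) -> int:
    """Sample one token index via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

vocab = ["approve", "reject", "uncertain"]
logits = [2.0, 1.4, 1.1]  # stand-in for a real model's output layer

# Five runs of the "same prompt" can disagree with one another.
print([vocab[sample_token(logits)] for _ in range(5)])
```

Lowering the temperature toward zero makes decoding nearly deterministic, which is one lever providers have for trading varied, creative responses for more predictable and auditable output.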
(Authors: Liu Jia, Gao Ya, Fan Xuehan)