China has constructed the world's most detailed regulatory architecture governing artificial intelligence, digital avatars, and the intellectual property rights that attach to AI-generated content. No single statute governs this space. Instead, a layered system of civil law provisions, administrative regulations, judicial decisions, technical standards, and enforcement campaigns has coalesced since 2020 into a framework that is at once protective of individual personality rights and responsive to the commercial imperatives of a booming digital-human industry. The framework rests on four pillars: a copyright regime that conditions protection on demonstrable human creativity, a personality-rights doctrine rooted in the Civil Code that extends to AI-generated likenesses and voices, a regulatory apparatus that mandates transparency and consent for synthetic media, and an emerging standards infrastructure that seeks to bring order to digital identity. Taken together, these pillars define what it means to own, license, and enforce rights over avatars and AI-generated works in the People's Republic of China through early 2026.
The foundation for copyright protection of AI-generated works traces to the Third Amendment of the Copyright Law of the People's Republic of China, adopted by the Standing Committee of the National People's Congress on November 11, 2020, and effective June 1, 2021. The amendment did not explicitly address artificial intelligence. It did, however, reshape the statutory definition of a protectable work. Article 3 now defines works as "intellectual creations with originality in the fields of literature, art, and science that can be expressed in a certain form," and introduces a catch-all category covering "other intellectual creations that satisfy the characteristics of works." This open-ended formulation replaced the prior closed-list approach and gave courts the doctrinal space to consider whether outputs of generative AI systems qualify as protectable expression. Critically, the statute itself contains no requirement of "substantial human creativity." The term that appears in the law is "originality" (独创性), which the implementing regulations gloss as "intellectual creation." The more specific standard of meaningful human involvement that Chinese courts now apply to AI-generated works is a product of judicial interpretation, not legislative text.
The landmark case establishing that standard was decided by the Beijing Internet Court on November 27, 2023, in Li Yunkai v. Liu Yuanchun, case number (2023) Jing 0491 Min Chu No. 11279. Li had used Stable Diffusion to generate a photorealistic image titled "Spring Breeze Brings Tenderness," inputting more than 150 positive and negative prompts, selecting specific model packages, adjusting parameters for layout and composition, and refining results through multiple iterations before publishing the image on the social platform Xiaohongshu. When Liu republished it on the Baijiahao platform without authorization, stripping Li's watermark, Li sued for copyright infringement. The court applied a four-element test — belonging to the literary, artistic, or scientific domain; originality; fixed form of expression; and intellectual achievement — and held that Li's selection and arrangement of prompts, parameter settings, and iterative refinement constituted a sufficient exercise of human intellectual input. The AI system itself could not qualify as an author under Article 11 of the Copyright Law, which limits authorship to natural persons and, in certain circumstances, legal entities. Authorship was therefore attributed to Li as the person who made direct creative contributions. The court awarded 500 yuan in damages. On February 27, 2025, the Supreme People's Court designated this ruling as one of the top ten nominated cases of 2024 for advancing the rule of law, giving it significant precedential weight in the Chinese legal system.
A second affirmation of copyright for AI-generated imagery came from the Changshu People's Court in Jiangsu Province, in a decision announced March 7, 2025. The plaintiff, a designer surnamed Lin, had spent approximately fourteen hours creating an image of a half-heart structure floating on the Huangpu River waterfront using a combination of Midjourney for initial image generation and Adobe Photoshop for post-production editing. He obtained copyright registration for the work. When an inflatable model company and a real estate developer reproduced the image for commercial purposes, Lin sued. The court found that Lin's iterative modification of Midjourney prompts and extensive refinement through Photoshop constituted a "unique selection and arrangement" sufficient to establish originality, and it affirmed copyright protection in the two-dimensional image as a work of fine art. It did, however, apply the idea-expression dichotomy to deny infringement with respect to a physical three-dimensional installation that the defendants had built, holding that the sculpture embodied the abstract idea of a half-heart rather than copying Lin's specific expression. Total damages amounted to 10,000 yuan. Just twelve days later, on March 19, 2025, the Zhangjiagang People's Court, also in Jiangsu, reached the opposite conclusion in the case of Feng Runjuan v. Kuashi Plastic Manufacturing Co. The plaintiff had created images of butterfly-shaped children's chairs using Midjourney but admitted she could not reproduce the same outputs using the same prompts, owing to the randomness inherent in the model. The court treated this inability as evidence that the final images lacked the author-driven expression necessary for originality, distinguished AI-assisted works involving meaningful creative direction from predominantly AI-generated works with minimal human input, and denied copyright protection. Feng's appeal was abandoned after she failed to pay the requisite fees. 
The divergence between these two Jiangsu rulings illustrates a fault line in Chinese copyright jurisprudence that remains unresolved: whether the test for originality focuses on the human's process of creative direction or on the degree of deterministic control the human exercises over the output.
The first Chinese court case specifically addressing intellectual property in digital avatars was decided by the Hangzhou Internet Court in April 2023, case number (2022) Zhe 0192 Min Chu No. 9983. The plaintiff was Mofa Technology (魔珐科技, also known as Xmov), the developer of the virtual digital human Ada. When a defendant used Ada's image without authorization for commercial marketing purposes, Mofa sued for copyright infringement and unfair competition. The court's reasoning broke new doctrinal ground. It classified Ada as a form of "weak AI" whose performance was determined entirely by human operators, and it held that a digital avatar does not qualify as a performer under the Copyright Law and therefore cannot independently enjoy performers' rights or neighboring rights. The avatar's visual appearance, however — its lines, colors, and aesthetic design — constituted a copyrightable work of fine art, and the audiovisual content featuring Ada qualified as an audiovisual work. All rights vested in Mofa as the developer and creative originator. This principle was reinforced on August 18, 2025, when the Beijing Internet Court ruled in a case involving two popular virtual digital humans — one with 4.4 million followers — that avatar images constitute works of art reflecting "unique aesthetic choices and judgment regarding line, color, and specific image design." A former employee who had sold the avatars' CG models on a third-party platform without authorization was found liable for copyright infringement, with damages of 15,000 yuan. Both rulings establish that while digital avatars are objects rather than legal subjects, the creative expression embodied in their design receives full copyright protection attributed to the humans or entities that produced them.
China's Civil Code, effective January 1, 2021, provides the doctrinal architecture for protecting personality rights implicated by AI-generated likenesses and voices. Article 109, situated in Book One, declares that "the personal freedom and personal dignity of natural persons are protected by law." This provision functions as a value-level foundation for the personality-rights system, but it does not enumerate specific rights or directly address artificial intelligence. The specific personality rights relevant to digital avatars are articulated elsewhere. Article 110 enumerates the right to life, body, health, name, likeness, reputation, honor, and privacy. Article 990, opening Book Four on personality rights, provides both a catalogue of specific rights and a catch-all clause recognizing "other personality rights and interests arising from personal liberty and human dignity." The provisions most directly bearing on AI avatars and synthetic media appear in Articles 1018 through 1023. Article 1018 defines the right to likeness (肖像权) as the right of a natural person to make, use, publicize, or authorize others to use their external image as reflected in video recordings, sculptures, drawings, or other media "by which the person can be identified." Article 1019 prohibits any organization or individual from infringing on another's likeness rights by vilifying, defacing, or — critically — "falsifying another's image by utilizing information technology." This language, enacted before the widespread deployment of generative AI systems, now serves as the primary statutory basis for claims against deepfakes, unauthorized AI face-swaps, and the creation of digital avatars based on real persons' likenesses without consent. Notably, the 2021 Civil Code removed the prior requirement under the General Principles of Civil Law that likeness infringement must involve a commercial purpose. Under the current regime, any unauthorized use of a person's likeness, commercial or otherwise, can constitute infringement.
Article 1023 extends these protections to voice, stating that "for the protection of a natural person's voice, the relevant provisions on the protection of the right to likeness shall be applied mutatis mutandis." This provision became the basis for China's first AI voice-cloning case, decided by the Beijing Internet Court on April 23, 2024, case number (2023) Jing 0491 Min Chu No. 12142. The plaintiff, a professional dubbing artist surnamed Yin, discovered that recordings of her voice had been shared by a cultural media company with an AI software developer, who used them to train a text-to-speech product without her knowledge or consent. The synthesized voice was subsequently commercialized through a mobile application. The court identified five defendants across the production and distribution chain. It held that voice, distinguished by voiceprint, timbre, frequency, and pronunciation style, constitutes a personality interest concerning personal dignity, and that when an AI-generated voice maintains sufficient identifiability — enabling the general public or persons in the relevant field to associate the synthetic output with a specific natural person — the original person's voice rights extend to the AI reproduction. The key legal distinction is that this was a personality-rights infringement, not a copyright claim. The court found that the cultural media company's copyright in the sound recordings did not include authorization to use Yin's voice for AI training and synthesis, because voice personality rights are independent of copyright in recordings. Defendants were ordered to apologize and pay 250,000 yuan in total compensation. On May 26, 2025, the Supreme People's Court designated the ruling as a "Typical Case" commemorating the fifth anniversary of the Civil Code, conferring heightened precedential authority. The Beijing Internet Court subsequently included it in its compilation of eight typical AI cases released on September 10, 2025, alongside decisions addressing AI-generated image copyright, deepfake personality-rights infringement, avatar artwork protection, and the evidentiary burden borne by creators seeking copyright in AI-assisted works.
The regulatory apparatus governing synthetic media and generative AI rests on two principal instruments. The first is the Provisions on the Administration of Deep Synthesis of Internet-based Information Services (互联网信息服务深度合成管理规定), jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security on November 25, 2022, and effective January 10, 2023. Spanning twenty-five articles in five chapters, the provisions mandate that deep-synthesis service providers label all AI-generated content with visible indicators alerting users to its synthetic nature. Where a system provides editing functions for biometric information such as faces and voices, the provider must prompt the user to notify the affected individual and obtain that person's separate consent. Providers bear primary responsibility for establishing user registration and identity verification systems, algorithm review mechanisms, content monitoring infrastructure, and emergency response protocols. Safety assessments are required for services that generate or edit biometric information. The second instrument is the Interim Measures for the Management of Generative Artificial Intelligence Services (生成式人工智能服务管理暂行办法), promulgated on July 10, 2023, by seven agencies led by the CAC — also including the National Development and Reform Commission, the Ministries of Education, Science and Technology, Industry and Information Technology, and Public Security, and the National Radio and Television Administration — and effective August 15, 2023. These measures, the first binding regulation anywhere in the world specifically targeting generative AI services, apply to any service that uses generative AI technology to provide text, images, audio, video, or other content to the public within mainland China, while exempting research and internal use. 
The Interim Measures require that generated content uphold core socialist values, prohibit outputs that incite subversion or promote terrorism, extremism, ethnic hatred, or discrimination, and mandate that providers take measures to improve the authenticity, accuracy, objectivity, and diversity of training data. Services with public-opinion or social-mobilization capabilities must undergo security assessments and register their algorithms with the CAC. By December 2025, 748 generative AI services had completed national-level filing under these measures.
Building on these foundations, the CAC and partner agencies issued the Measures for Labeling of AI-Generated Synthetic Content (人工智能生成合成内容标识办法) on March 14, 2025, effective September 1, 2025, accompanied by the mandatory national standard GB 45438-2025 specifying labeling methods. These measures require both explicit labels visible to users and implicit labels embedded as machine-readable metadata or watermarks in all AI-generated content across modalities. In a further significant step, the CAC released for public comment on April 3, 2026, the Management Measures for Digital Virtual Human Information Services (数字虚拟人信息服务管理办法), with the comment period closing May 6, 2026. This draft regulation defines digital virtual humans as virtual figures simulating human appearance and behavior using computer graphics, digital image processing, and artificial intelligence. It requires providers to display a prominent "digital human" (数字人) label throughout any display, mandates separate consent from natural persons before using their sensitive personal information for modeling or image generation, and prohibits the provision of virtual intimate relationships — virtual family members or romantic partners — to minors. It also bars the use of digital virtual humans to circumvent facial recognition or voice recognition identity authentication systems. A separate but related draft, the Provisional Measures on the Administration of Anthropomorphic AI Interaction Services, had been released on December 27, 2025, targeting AI companion and emotional interaction services with provisions including mandatory identity disclosure, two-hour interaction limits, and emotion assessment systems for vulnerable populations. Together, these instruments represent a move toward comprehensive lifecycle governance of digital humans.
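The dual-label requirement can be pictured in code. The sketch below is purely illustrative: the field names, label text, and serialization are assumptions for exposition, not the actual schema specified by GB 45438-2025, which governs the real placement and format of explicit and implicit labels.

```python
import hashlib
import json
from datetime import datetime, timezone


def make_implicit_label(provider: str, content_bytes: bytes) -> dict:
    """Build a machine-readable (implicit) label record.

    Field names here are illustrative assumptions, not the
    GB 45438-2025 metadata schema.
    """
    return {
        "ai_generated": True,  # synthetic-content flag
        "provider": provider,  # generating service name
        "content_id": hashlib.sha256(content_bytes).hexdigest()[:16],
        "created_at": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }


def apply_labels(text: str, provider: str) -> tuple[str, str]:
    """Return (explicitly labeled text, implicit label as JSON metadata).

    The explicit label is visible to users; the implicit label would in
    practice be embedded as file metadata or a watermark.
    """
    explicit = f"[AI生成 / AI-generated] {text}"
    implicit = json.dumps(
        make_implicit_label(provider, text.encode("utf-8")),
        ensure_ascii=False,
        sort_keys=True,
    )
    return explicit, implicit


labeled, metadata = apply_labels("示例内容", "ExampleGenService")
```

In a real deployment the implicit label would be written into container-level metadata or an imperceptible watermark rather than returned as a string; the point of the sketch is only the pairing of a user-visible marker with a machine-readable record.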
Enforcement under this regulatory framework has primarily operated through the CAC's Qinglang (清朗) campaign system rather than through widely publicized fines against individually named platforms. The 2025 iteration, formally titled "Qinglang — Combating the Abuse of AI Technology," ran from April through June 2025 and resulted in the removal of more than 3,500 AI products, the deletion of over 960,000 pieces of illegal content, and penalties against more than 3,700 accounts. Guangdong's provincial CAC office interviewed 187 website platforms, warned 19, fined 12, and delisted 42 mobile applications during the same period. Shanghai's office removed over 820,000 pieces of illegal content and disabled approximately 2,700 non-compliant AI agents. Criminal enforcement has also intensified: in December 2024, the Ministry of Public Security announced ten typical criminal cases involving the use of AI tools to fabricate and disseminate rumors. The State Administration for Market Regulation released five typical cases of unfair competition in AI on February 6, 2026, involving fraudulent mini-programs and commercial confusion schemes. Individual platform enforcement actions have generally not been publicized with sufficient specificity to associate fines with named commercial entities in the public record.
The Personal Information Protection Law (个人信息保护法, effective November 1, 2021) intersects with avatar rights through its treatment of biometric data. Article 28 classifies biometric characteristics — including facial recognition patterns, fingerprints, and voiceprints — as sensitive personal information that may be processed only when there is a specific purpose, a demonstrable necessity, and strict protective measures. Article 29 requires that individuals provide separate consent for the processing of sensitive personal information, a standard that applies directly to the collection of facial geometry, voice samples, and motion-capture data used in the creation of digital avatars modeled on real persons. The Deep Synthesis Provisions reinforce this by requiring that providers of face and voice editing functions obtain separate consent from affected individuals, and the draft Digital Virtual Human measures of April 2026 extend the consent requirement explicitly to the use of sensitive personal information in digital-human modeling, image generation, and scene construction.
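To make the "separate consent" requirement concrete, the sketch below models purpose-specific consent records for sensitive biometric data. All names and logic are illustrative assumptions, not a statutory schema; the point is that consent keyed to one processing purpose does not authorize processing for another.

```python
from dataclasses import dataclass, field

# Illustrative categories treated as sensitive biometric data under
# PIPL Article 28 (facial geometry, voiceprints, motion capture).
SENSITIVE_BIOMETRIC = {"face_geometry", "voiceprint", "motion_capture"}


@dataclass
class ConsentLedger:
    """Toy record-keeping for PIPL-style 'separate consent'.

    Separate consent must be purpose-specific, so grants are keyed by
    (subject, data category, purpose). This is an illustration, not a
    compliance implementation.
    """

    _grants: set = field(default_factory=set)

    def grant(self, subject: str, category: str, purpose: str) -> None:
        """Record a separate consent for one subject, category, purpose."""
        self._grants.add((subject, category, purpose))

    def may_process(self, subject: str, category: str, purpose: str) -> bool:
        """Sensitive biometric data needs an exactly matching grant;
        consent given for a different purpose does not carry over."""
        if category in SENSITIVE_BIOMETRIC:
            return (subject, category, purpose) in self._grants
        return True  # non-sensitive data governed by ordinary consent rules


ledger = ConsentLedger()
ledger.grant("user-1", "voiceprint", "tts_model_training")
```

Under this model, the voice-cloning fact pattern above fails the check: a grant covering sound-recording distribution would not match the purpose "tts_model_training", so synthesis could not proceed on consent recorded for a different use.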
China's infrastructure for digital identity verification took a notable step forward with the launch of the China Real-Name Decentralized Identifier System (中国实名DID, commonly referred to as RealDID) on December 12, 2023, at the "BSN Real Name" conference in Beijing. The system was developed by the First Research Institute of the Ministry of Public Security and deployed on Yan'an Chain, a permissioned blockchain of the Blockchain-based Service Network (BSN) launched in June 2023. Administered by Anicert, a wholly owned subsidiary of the Ministry's First Research Institute, RealDID employs a three-factor verification process — government-issued identification, legal name, and facial recognition — managed through the Cyber Trusted Identity (CTID) system already in use by banks and institutional service providers. Users can create unlimited public-private key pairs for use on separate platforms, logging in anonymously with DID addresses while the government retains backend access to verified identities when legally required. The system implements a principle of "anonymity at the front desk, real name at the backend," allowing pseudonymous interaction while preserving law enforcement's ability to identify individuals. By November 2024, trials had extended to Hong Kong for cross-border identity verification, enabling mainland Chinese citizens to verify their identities for purchasing regulated stablecoin and tokenized financial products without presenting physical identification documents.
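The "anonymity at the front desk, real name at the backend" design can be sketched minimally. This is an illustrative toy, not RealDID's actual protocol or cryptography: a random token stands in for an asymmetric key pair, a hash stands in for address derivation, and a dictionary stands in for the authority's backend registry.

```python
import hashlib
import secrets


class IdentityAuthority:
    """Toy model of a real-name DID issuer.

    One verified person can hold many unlinkable DID addresses (one per
    platform); only the authority's backend can map an address back to
    the verified legal identity, and only under legal authorization.
    """

    def __init__(self) -> None:
        self._backend: dict[str, str] = {}  # DID address -> legal identity

    def register(self, verified_name: str) -> tuple[str, str]:
        """Issue one pseudonymous DID for an already-verified person.

        A real system would derive the address from an asymmetric public
        key; here a random secret stands in for the key pair.
        """
        private_key = secrets.token_hex(32)
        did_address = (
            "did:real:" + hashlib.sha256(private_key.encode()).hexdigest()[:24]
        )
        self._backend[did_address] = verified_name  # backend mapping only
        return private_key, did_address

    def lawful_lookup(self, did_address: str) -> str:
        """Backend resolution, available only when legally required."""
        return self._backend[did_address]


authority = IdentityAuthority()
# The same verified person obtains two unlinkable front-desk identities.
_, did_a = authority.register("张三")
_, did_b = authority.register("张三")
```

Platforms see only `did_a` or `did_b` and cannot correlate them; the mapping back to the verified name lives solely in the authority's registry, mirroring the front-desk/backend split described above.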
Trademark law is developing more slowly in its application to digital avatars, but a 2024 ruling by the Hangzhou Intermediate People's Court in the G.PATTON case broke significant ground. The court held that a trademark registered for real vehicles in Class 12 could be infringed by the use of the same brand on virtual vehicle skins in a mobile game, normally classified under Class 9. Overturning the first-instance court's finding that virtual and real goods were dissimilar, the appellate court reasoned that virtual experiences serve as powerful marketing tools directly affecting real-world brand value, and that the critical factor is the relevance of goods and the likelihood of consumer confusion rather than rigid classification boundaries. This decision bridges the gap between physical trademark protection and the digital realm, with clear implications for virtual-human merchandising and branded avatar content. On the institutional side, the China Digital Human Intellectual Property Certification and Protection Platform (数字人知识产权存证保护平台) was launched on June 17, 2023, at the Thirteenth China International Trademark Brand Festival in Dongguan, Guangdong. Initiated by Zhongzhi Zhihui Technology, a subsidiary of the National Intellectual Property Administration's Intellectual Property Press, the platform provides compliance review, rights certification, copyright protection, traceability, open licensing, and transaction matching services for digital-human creators and operators. The China Patent Protection Association has reportedly been preparing to establish the first national-level professional committee dedicated to digital-human intellectual property.
The standards infrastructure has matured considerably. GB/T 46483-2025, China's first national standard for virtual digital humans, was published in late 2025 and covers general technical requirements for customer-service virtual digital humans. Three additional national standards released on April 25, 2025, and effective November 1, 2025 — GB/T 45654-2025, GB/T 45674-2025, and GB/T 45652-2025 — address generative AI service security, data annotation security, and pre-training and fine-tuning data security respectively. The National Cybersecurity Standardization Technical Committee (TC260) published its AI Safety Governance Framework 1.0 in September 2024 as a comprehensive non-binding guideline covering AI risk management across the lifecycle, and released a second version in September 2025. Industry standards have also proliferated: the Beijing Information Association issued T/BIA 17-2024 on digital-human platform basic capabilities including image copyright protection and content traceability, and the Telecommunications Association of China published T/TAF 203-2024 on technical requirements for three-dimensional identity-type virtual digital humans. The China Academy of Information and Communications Technology (CAICT) has led the development of two international digital-human standards reported to be nearing publication, defining the concept internationally and establishing evaluation frameworks. Patent law has also been clarified: the CNIPA Patent Examination Guidelines, amended in November 2025 and effective January 1, 2026, reaffirm that inventors must be natural persons and prohibit the designation of AI systems as inventors, while strengthening disclosure requirements for AI-related inventions.
China's Fifteenth Five-Year Plan (2026–2030), formally approved during the National People's Congress sessions in March 2026, references artificial intelligence more than fifty times and designates it alongside quantum computing, biotechnology, and advanced energy as a strategic science priority. The plan incorporates a sweeping "AI+ Action Plan" aimed at integrating artificial intelligence across manufacturing, healthcare, logistics, education, elder care, and governance, with the National Development and Reform Commission projecting that AI-related industries will exceed ten trillion yuan by the end of the plan period. While the plan does not contain provisions specific to digital-human intellectual property, it calls for advancing "Digital China" construction, improving intelligent digital development, strengthening legal and regulatory frameworks including algorithm registration and transparency requirements, and deepening China's participation in global AI governance and rulemaking. A comprehensive AI law (人工智能法) was included in the State Council's 2024 legislative work plan and appeared on the NPC Standing Committee's preliminary review agenda, but it was quietly removed from the 2025 legislative schedule. The amended Cybersecurity Law, passed on October 28, 2025, and effective January 1, 2026, represents the first time AI governance provisions have been incorporated into national-level legislation, but a unified AI statute remains at the proposal stage. China's regulatory philosophy continues to favor an incremental, sectoral approach — deploying targeted rules, pilot programs, and technical standards to manage specific risks while preserving flexibility for a rapidly evolving industry.
The trajectory of this legal framework reveals a jurisdiction that is building avatar and AI intellectual-property protections not through a single legislative act but through the accretion of judicial precedent, administrative regulation, and technical standardization. Courts have established that copyright in AI-generated works depends on the demonstrable exercise of human creative judgment in the generative process, with the degree of control and iterability potentially determining whether protection is available. Personality rights under the Civil Code extend to AI-reproduced likenesses and voices whenever the synthetic output remains identifiable with a specific natural person, independent of any copyright interest in the underlying recordings or images. Regulatory requirements for labeling, consent, and platform accountability apply across the chain from deep-synthesis tools to generative AI services to the emerging category of digital virtual human information services. And an institutional infrastructure — from blockchain-based identity verification to IP registration platforms to national technical standards — is being assembled to support the commercial deployment and rights management of digital humans at scale. What remains absent is a unified statutory framework, and the deliberate pace at which China is approaching comprehensive AI legislation suggests that the current mosaic of rules will continue to define the landscape for the foreseeable future. For creators, platforms, and rights holders operating in this space, the practical imperative is to document human creative contributions meticulously, secure consent for any use of real persons' biometric characteristics, comply with labeling and filing requirements, and monitor the rapidly evolving body of judicial decisions that is giving concrete meaning to statutory provisions drafted before the generative AI revolution.
[Apr 2026]