Supporters of Marcus Endicott’s Patreon can access weekly or monthly video consultations on this topic.
The legal status of virtual digital humans in China occupies a contested and rapidly evolving space where technological ambition, academic theorization, and regulatory pragmatism converge without yet reaching consensus. At the core of the debate lies a fundamental question of classification. The China Academy of Information and Communications Technology (CAICT) established, in its 2020 Virtual Digital Human Development White Paper, a binary framework distinguishing between human-driven and AI-driven virtual digital humans; this remains the most widely accepted industry classification. Subsequent academic work, notably by Xiong Jinguang and Jia Jun of Jiangxi University of Finance and Economics School of Law, published in the Shanghai Legal Research Collection in 2024, proposed a refined three-part taxonomy separating virtual digital humans mapped to real persons, those operated by behind-the-scenes individuals, and those driven autonomously by artificial intelligence. This tripartite framework was designed not as an industry standard but as an analytical instrument for parsing questions of liability and legal personality, since the mapped category resolves most legal issues through the pre-existing legal subject status of the real person behind the digital figure. Additional classification schemes organize virtual digital humans by application (service, performance, and identity types, per the Communication University of China's Virtual Digital Human Influence Index Report), by visual presentation (cartoon versus hyperrealistic, with some sources drawing a three-way distinction among two-dimensional anime, three-dimensional cartoon, and three-dimensional hyperrealistic forms), and by modeling method (real person-based versus fictional); crossing modeling method with driving method yields a two-by-two matrix of sub-types.
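The overlapping classification schemes can be summarized in a short illustrative sketch. The enum names below are paraphrases of the source taxonomies, not official identifiers from any standard:

```python
from enum import Enum
from dataclasses import dataclass


class DrivingMethod(Enum):
    """CAICT 2020 white paper: the binary industry classification."""
    HUMAN_DRIVEN = "human-driven"
    AI_DRIVEN = "AI-driven"


class LegalTaxonomy(Enum):
    """Xiong and Jia (2024): the tripartite analytical framework."""
    MAPPED_TO_REAL_PERSON = "mapped"       # resolved via the real person's legal subject status
    OPERATED_BY_INDIVIDUAL = "operated"    # a behind-the-scenes "person behind the curtain"
    AI_AUTONOMOUS = "autonomous"           # driven by artificial intelligence


class ModelingMethod(Enum):
    REAL_PERSON_BASED = "real person-based"
    FICTIONAL = "fictional"


@dataclass(frozen=True)
class VirtualDigitalHuman:
    """Crossing modeling method with driving method yields the 2x2 matrix of sub-types."""
    modeling: ModelingMethod
    driving: DrivingMethod


# The four sub-types produced by the two-by-two matrix:
matrix = [VirtualDigitalHuman(m, d) for m in ModelingMethod for d in DrivingMethod]
assert len(matrix) == 4
```

The mapped category in `LegalTaxonomy` carries no independent liability questions, which is why the tripartite scheme is analytically useful even though the CAICT binary remains the industry default.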
The question of whether virtual digital humans can or should be recognized as legal subjects under Chinese law remains firmly in the realm of academic speculation. The Civil Code of the People's Republic of China, effective January 1, 2021, recognizes three categories of civil subjects: natural persons, legal persons, and unincorporated organizations. This tripartite structure, codified in Article 2, replaced the earlier binary framework of the 1986 General Principles of Civil Law, which recognized only citizens and legal persons. The addition of unincorporated organizations as a third category was introduced in the 2017 General Provisions of Civil Law and subsequently incorporated into the Civil Code. Virtual digital humans fall outside all three recognized categories. The prevailing legal position, articulated by major law firms including Zhong Lun, AllBright, and Han Kun, and confirmed by courts including the Hangzhou Internet Court in the 2023 Ada case, is that virtual digital humans are legal objects, not legal subjects. Civil rights, obligations, and liabilities attach to the owner, operator, or natural person behind the virtual entity. No formal legislative proposals from the National People's Congress, the Chinese People's Political Consultative Conference, the State Council, or any ministry have been identified that would expand civil subject categories to encompass virtual digital humans. The 2017 State Council New Generation AI Development Plan acknowledged the need to clarify the legal status of artificial intelligence in general terms, and NPC sessions in 2025 and 2026 featured proposals for comprehensive AI legislation, but none specifically addressed granting civil subject status to virtual digital humans. 
Xiong and Jia have proposed, in their 2024 paper, a graduated pathway from legal object to quasi-legal personality to full legal personality, treating virtual digital humans as "special civil subjects" through amendment of Civil Code Article 2, but this remains a de lege ferenda position with no legislative traction.
The personality rights framework of the Civil Code provides the primary legal basis through which virtual digital human disputes are adjudicated. Article 110, Paragraph 1, enumerates the personality rights enjoyed by natural persons, including rights to name, portrait or likeness, reputation, and privacy, while Book Four of the Code provides detailed provisions for each. Voice is treated as a legally recognized personality interest under Article 1023, Paragraph 2, which extends the protections applicable to portrait rights by analogy, though the Code deliberately uses "voice" rather than "voice right," stopping short of designating it as an independent specific personality right. Protection requires that the voice in question be identifiable. These rights have direct bearing on virtual digital human disputes because the creation and deployment of digital figures frequently involves the appropriation of real persons' likenesses, voices, and public identities.
Liability in virtual digital human cases is currently resolved through existing Civil Code provisions applied by analogy rather than through any dedicated statutory framework. The relevant provisions include Article 1165 on general fault-based tort liability, Articles 1194 through 1197 on network tort liability including notice-and-takedown obligations for internet service providers, and Articles 1202 through 1207 on product liability where an AI system or virtual digital human is characterized as a product. No law currently designates AI or virtual digital humans for strict liability, a point confirmed by the Shenzhen Futian District Court in 2024. The academic analysis linking liability regimes to the three-part taxonomy of mapped, operated, and AI-driven virtual digital humans, while intellectually productive, does not reflect established law. Similarly, no legal basis exists under Chinese law for virtual digital humans to serve as contracting parties. Article 119 of the Civil Code limits contractual obligations to recognized civil subjects, and Article 133 restricts the performance of civil juristic acts to those same categories. No Supreme People's Court judicial interpretation and no court ruling has granted digital entities contractual capacity. All contractual rights and obligations are attributed to the human or corporate entity behind the virtual digital human.
Chinese courts have, however, built a significant and growing body of case law addressing disputes at the intersection of artificial intelligence, digital identity, and personality rights. The Hangzhou Internet Court's 2023 ruling in the Ada case, brought by Xmov against a Hangzhou technology company that had reposted Xmov's virtual digital human videos on Douyin to market its own products, established that virtual digital human images are protectable under copyright but that the virtual entity itself is not a legal subject and cannot qualify as a performer or author. Copyright and related intellectual property rights belong to the creating company, while the human operator behind the figure is recognized as the performer. The Beijing Internet Court contributed several landmark rulings. In He v. an AI technology company, the court found that a bookkeeping application had infringed a famous television host's name rights, image rights, and general personality rights by allowing users to create AI companions using his identity. The court rejected the defendant's technology neutrality defense, holding that subjective values were embedded in the algorithm's design and that the defendant had actively organized user behavior. This case was designated by the Supreme People's Court as a typical civil case on judicial protection of personality rights following the promulgation of the Civil Code. In Yin v. Beijing Smart Technology Co., decided in April 2024, the Beijing Internet Court issued what became China's first ruling expressly protecting against AI infringement of voice rights, finding that AI-synthesized voice is identifiable when the general public can associate it with a natural person based on timbre, intonation, and pronunciation style, and that copyright in sound recordings does not authorize AI voice processing. The court ordered joint compensation of RMB 250,000.
The Li Yunkai v. Liu Yuanchun case of November 2023 established that an AI-generated image created using Stable Diffusion could constitute a copyrightable work where the human operator's prompt selection and parameter adjustments reflected personalized expression. The Liao v. XX Technology face-swapping case of June 2024 drew a significant doctrinal distinction between portrait rights, which require identifiability, and personal information rights under the Personal Information Protection Law, which protect against unauthorized data processing regardless of whether the subject remains identifiable in the output. The Guangzhou Internet Court's February 2024 ruling in the Ultraman AI-generated image case became what is widely described as the first case worldwide in which an AI-generated content platform was held liable for copyright infringement of existing characters, with the platform treated as a content provider rather than a neutral intermediary. In September 2025, the Beijing Internet Court published a collection of eight typical AI cases, two of which were designated as SPC typical cases, encompassing AI-generated image copyright, AI voice infringement, AI-synthesized celebrity voice misuse, AI face-swapping, virtual digital human copyright ownership, and AI virtual celebrity personality rights.
The regulatory architecture governing virtual digital humans in China has developed as a layered and cumulative system rather than through a single comprehensive statute. The Algorithm Recommendation Provisions, effective March 1, 2022, require algorithm filing for services with public opinion attributes or social mobilization capability, a requirement applicable to virtual digital human operators using generative or synthetic algorithms. The Deep Synthesis Provisions, jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security, took effect on January 10, 2023, and require prominent labeling of AI-generated content including face generation, face replacement, face and posture manipulation, and synthesized or cloned voices, as well as separate consent for the editing of biometric information. The Generative AI Interim Measures, issued by seven agencies with the CAC as lead, took effect on August 15, 2023, adopting an "inclusive and prudent" supervisory approach combined with classified and graded oversight, and extending labeling requirements to generative AI services providing text, image, audio, or video content to the public in China. The AI-Generated Content Labeling Measures and the companion mandatory national standard GB 45438-2025, effective September 1, 2025, require both explicit labels such as visible watermarks and text notices and implicit labels through metadata embedding for all AI-generated content, with direct applicability to virtual digital humans. 
The National Radio and Television Administration's recommended industry standard GY/T 411-2024, effective November 28, 2024, constitutes China's first industry-level technical standard for digital virtual humans, covering classification, application scenarios, technical requirements, and personal information security provisions including the requirement for separate consent when editing real face or voice biometric data.
The most significant regulatory development is the draft Measures for the Management of Digital Virtual Human Information Services, released by the Cyberspace Administration of China's Network Management Technology Bureau on April 3, 2026, with a public comment period closing on May 6, 2026. If enacted, this would represent China's first dedicated regulation for digital virtual human information services. The draft defines a digital virtual human as a virtual digital image that exists in the non-physical world, uses graphics, digital image processing, or AI technology, is driven by real persons or by computation, simulates human appearance, and possesses voice, behavior, interactivity, or personality characteristics. Among its notable provisions, Article 7 requires separate consent for the use of sensitive personal information in modeling or image generation. Article 8 prohibits creating identifiable digital virtual humans of specific persons without consent. Article 10 prohibits providing minors with virtual relatives, virtual companions, or services inducing excessive consumption or religious indoctrination. Article 13 mandates a "digital human" label displayed throughout the entire service display area. Article 19 requires human oversight in government services, public management, and judicial activities, and grants users the right to refuse digital virtual human services in those settings. Article 21 requires providers with public opinion influence to complete algorithm filing under the Algorithm Recommendation Provisions. Penalties under Article 24 range from warnings and ordered rectification to fines of RMB 10,000 to 100,000 for serious cases and RMB 100,000 to 200,000 for cases endangering life or health safety. The draft does not propose expanding civil subject categories to include virtual digital humans and operates entirely within the existing framework of administrative regulation.
Policy support for virtual digital humans as elements of China's broader digital economy strategy is well established across multiple levels of government. The MIIT-led Three-Year Action Plan for Metaverse Industry Innovation and Development, jointly issued by five departments in August 2023, identifies digital humans alongside virtual space development tools as components to be innovated, and describes applications across both industrial and consumer metaverse contexts. The Shanghai Municipal Government's 2022 action plan for cultivating the metaverse included a dedicated Digital Human Comprehensive Improvement Project as one of eight key initiatives, with a target of RMB 300 billion in metaverse-related industry scale by 2025, supplemented by a 2023 follow-up plan detailing technical specifications for digital human generation. Beijing's 2022 Digital Human Industry Innovation and Development Action Plan, described as China's first digital human industry-specific support policy, targeted digital human industry scale exceeding RMB 50 billion by 2025, with the goal of cultivating one to two enterprises with revenue exceeding RMB 5 billion. The MIIT Metaverse Action Plan also references distributed identity authentication as a key metaverse technology and calls for the construction of digital identity management platforms and trust infrastructure, including piloting decentralized scenario applications. Beijing's 2025 Blockchain Innovation Plan explicitly calls for building trusted digital identity systems and developing distributed digital identity issuance and verification. China's BSN RealDID system, launched in December 2023 by the National Information Center, the Ministry of Public Security First Research Institute, and the BSN Development Alliance, provides infrastructure for front-end anonymous, back-end real-name digital identity built on the BSN blockchain service network, though it was not designed specifically for virtual digital humans. 
National standards for distributed identity systems are under development through TC590, drafted by institutions including the People's Bank of China Digital Currency Research Institute, CESI, and Huawei, among others. While academic literature connects decentralized identity authentication to virtual digital human identity management, no specific DID standard for virtual digital humans has been promulgated, and the W3C DID standard is not explicitly referenced in Chinese virtual digital human policy documents.
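For context on the W3C DID standard mentioned above, a decentralized identifier resolves to a DID document describing its verification keys. The sketch below uses the `did:example` placeholder method from the W3C DID Core examples; no Chinese DID method for virtual digital humans is implied:

```python
# Minimal W3C DID document shape, expressed as a Python dict.
# "did:example:..." is the placeholder method used in the W3C DID Core
# specification; the key material below is likewise a placeholder.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [
        {
            "id": "did:example:123456789abcdefghi#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": "did:example:123456789abcdefghi",
            "publicKeyMultibase": "zPLACEHOLDER",
        }
    ],
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}


def did_method(did: str) -> str:
    """A DID has the form did:<method>:<method-specific-id>; return the method."""
    scheme, method, _ = did.split(":", 2)
    assert scheme == "did"
    return method


assert did_method(did_document["id"]) == "example"
```

A system like BSN RealDID could in principle register its own DID method for resolution against its blockchain network, but as the text notes, no such standard has been promulgated for virtual digital humans.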
The academic literature on the legal status of virtual digital humans in China spans multiple institutional and disciplinary perspectives. Beyond the work of Xiong and Jia, Zheng Fei and Xia Chenbin analyzed the dual legal dimensions of virtual digital humans in the Changbai Journal in 2023, tracing the trajectory from legal object toward legal subject and from "thing" characteristics toward "person" characteristics. Sun Kai and Bao Haiyue of the Shanghai Baoshan District People's Court, writing in China Trial in 2024, concluded that virtual digital humans remain legal objects under conditions of weak artificial intelligence, proposing that they be understood as "things of special significance." Mei Xiaying developed a theoretical framework for AI legal subject status in the Peking University Law Journal using personality differentiation theory, mapping a spectrum from ethical personality to property personality. Lu Chunya proposed industry access requirements and producer strict liability for virtual digital humans driven by large language models. Li Jing of Shanghai Jiao Tong University's KoGuan Law School distinguished three modes of virtual idol creation: idol passive virtualization, idol active virtualization, and virtual idol creation. Sun Yue, writing in Dispute Settlement in 2025, analyzed copyright attribution and fair use for human-driven versus AI-driven types using the Ada case. The Xiao Sa Legal Team published a comprehensive legal compliance report for the virtual digital human industry in 2022, while Han Kun produced a series on personality rights and intellectual property protection for virtual digital humans, and AllBright classified virtual digital humans as online virtual property under Civil Code Article 127, reinforcing their status as legal objects rather than subjects. 
The academic literature documents a notable gap in case law concerning the rights of the "person behind the curtain" in virtual idol and VTuber operations, with no formal court cases identified despite documented industry disputes.
[Apr 2026]