Tencent Holdings Limited (腾讯控股有限公司) treats digital humans as a full-stack capability: enterprise deployment through Tencent Cloud, foundation-model and multimodal generation through the Tencent Hunyuan model family, consumer-facing creation through ZenVideo, real-time high-fidelity digital characters through NExT Studios, and virtual-idol and voice-driven digital human production through Tencent Music Entertainment Group and its Lyra Lab. The work is distributed across multiple business groups and research labs rather than centralized in a single unit. Shenzhen is identified as the company’s founding base and remains the primary corporate locus for Tencent; Shanghai is explicitly identified as the base of Tencent YouTu Lab; and Tencent AI Lab is described as having a Seattle office led by Dong Yu, indicating a material presence in the United States alongside the mainland China hubs. Tencent Music Entertainment Group (腾讯音乐娱乐集团) is the named subsidiary entity operating a virtual-idol and virtual-music ecosystem and housing Lyra Lab (天琴实验室) as a dedicated AI research division for audio and virtual-human generation. The other major contributors are internal divisions or labs rather than separately incorporated subsidiaries: Tencent Cloud and Tencent Cloud Intelligent within CSIG for enterprise digital humans, the cross-TEG/CSIG Tencent Hunyuan team for foundation models, Tencent AI Lab, Tencent YouTu Lab, WeChat AI Lab, ARC Lab under PCG, Tencent Research Institute (TISI), and the IEG studio system including NExT Studios plus related IEG teams that created named virtual characters.
Subsidiaries and Platforms Relevant to Digital Humans:
Tencent Holdings Limited (腾讯控股有限公司): Tencent is positioned as an ecosystem operator whose digital human capability emerges from the interaction of enterprise cloud distribution, foundation models, content tooling, entertainment production, and policy/strategy work, with sustained investment beginning around 2018 and a portfolio that includes both open-source model releases and commercial platforms; its digital human posture is defined less by a single flagship avatar and more by the breadth of production pipelines, deployment channels, and research organizations that can convert multimodal model advances into enterprise presenters, interactive service agents, virtual idols, and game-grade digital characters across Tencent’s broader platform footprint.
Tencent Cloud (腾讯云): Tencent Cloud acts as the primary commercialization channel for Tencent’s digital humans through enterprise services delivered under CSIG, operating the Tencent Cloud AI Digital Human platform and adjacent enterprise tooling such as Tencent QiDian for customer service deployments, with named leadership and product management accountability embedded in the cloud organization and a stated focus on serving vertical scenarios including finance, media, government, education, and retail rather than treating digital humans as purely entertainment artifacts.
Tencent Cloud Intelligent / Tencent Cloud AI (腾讯云智能): Tencent Cloud Intelligent is the AI-branded arm within Tencent Cloud that develops and markets the “digital intelligent human” (数智人) product line, presenting the positioning shift from visually animated “digital humans” toward LLM-integrated agents that can broadcast, converse, and integrate into enterprise workflows, including small-sample personalization that reduces the production barrier by allowing rapid creation of a realistic customized digital human from limited video and speech input and then deploying it through APIs and SDKs across screens, web, mobile, and embedded endpoints.
Tencent Hunyuan Team (腾讯混元团队): The Hunyuan team is described as operating across TEG and CSIG and providing the foundation-model backbone for most of Tencent’s digital human capabilities, with joint leadership by Jiang Jie, Liu Yuhong, and Wang Di and outputs spanning audio-driven portrait animation, multimodal video avatar generation, 3D asset creation and motion generation, and related model families, enabling downstream products to treat digital humans as a model-powered pipeline rather than a handcrafted media-production workflow.
Tencent AI Lab (腾讯AI实验室): Tencent AI Lab is presented as a foundational research lab established in 2016 with remit across NLP, computer vision, speech, and machine learning that underpins digital human intelligence and interaction, and it is described as being directed by Jiang Jie while including senior research leadership such as Dong Yu, linking it directly to the speech and language components that enable conversational and voice-synthesized digital humans across Tencent’s ecosystem.
Tencent YouTu Lab (腾讯优图实验室): Tencent YouTu Lab is framed as the computer-vision center for face and portrait technologies relevant to digital humans, including face detection and neural avatar synthesis, and it is explicitly described as Shanghai-based under Wu Yunsheng’s leadership, with a named contribution as the origin lab for Sonic and as a provider of the computer-vision foundations for facial rendering and realism in Tencent’s digital human pipeline.
WeChat AI Lab (微信AI实验室): WeChat AI Lab is identified as one of Tencent’s major AI research labs relevant to digital humans, indicating a contribution to the intelligence, interaction, and multimodal capabilities needed for virtual characters deployed inside Tencent’s messaging and mini-program ecosystem, even where specific product outputs are not enumerated alongside those of Hunyuan, YouTu, or ARC Lab.
ARC Lab / Tencent ARC Lab (PCG): ARC Lab is described as an applied research group under PCG directed by Ying Shan, focused on video generation, image synthesis, and human-centric content generation, with its SEED multimodal model work feeding into digital human capabilities, making it a bridge between Tencent’s content platforms and the model-level innovations needed for scalable presenter-style and character-style video output.
ZenVideo / Tencent Intelligent Video (腾讯智影): ZenVideo is described as a consumer and prosumer AI video creation platform under PCG that treats digital human broadcasting as a core feature, supporting AI-driven virtual presenters, personal likeness cloning into digital avatars, and automated 24/7 digital human live streaming, effectively productizing a “digital presenter” workflow for creators and businesses rather than limiting digital humans to bespoke enterprise deployments.
Tencent Yuanqi (腾讯元器): Tencent Yuanqi is described as a zero-code and low-code AI agent creation and distribution platform built on the Hunyuan model, enabling the creation of digital personas and deployable virtual characters across Tencent channels such as WeChat and QQ, and it functions as the agentic intelligence layer that can complement visual digital human layers when used to operationalize persona behavior, knowledge bases, plugins, and workflow logic.
NExT Studios (NExT工作室): NExT Studios is described as an internal IEG game development studio pioneering high-fidelity real-time digital humans since 2017, with proprietary pipelines for face building and motion capture (xFaceBuilder and xMoCap) and a flagship demonstration in Project Siren plus subsequent work such as the Matt AI facial animation system and a film-grade digital double of Tencent SVP Steven Ma, positioning it as Tencent’s primary locus for game-engine and cinematic digital human realism rather than cloud-hosted presenter avatars.
Tencent Music Entertainment Group / TME (腾讯音乐娱乐集团): TME is described as operating a virtual idol ecosystem and a virtual music social platform (TMELAND), with an internal AI lab (Lyra Lab) that develops voice and virtual-human video generation technology, making TME the entertainment-focused branch of Tencent’s digital human stack where vocals, performance, and music-social experiences drive productization and deployment.
Lyra Lab / TME Lyra Lab (天琴实验室): Lyra Lab is described as TME’s dedicated AI research division that built the LyraSinger Engine for AI vocal generation and released open-source “Muse series” tools for virtual human video generation, lip synchronization, and pose-driven animation, with an explicit role as co-developer (with the Hunyuan team) of HunyuanVideo-Avatar, making it a key production-grade contributor to audio-driven and performance-centric digital humans.
Tencent Research Institute / TISI (腾讯研究院): Tencent Research Institute is described as a policy and strategy think tank led by Si Xiao that contributes industry analysis and guidance, including a named digital human industry trends report produced with the Tencent Cloud Xiaowei team, shaping governance and market framing around digital humans rather than building core generation or deployment products.
Other IEG Virtual Character Producers (光子工作室群, QQ Dance Project Team, XNOX工作室): Additional IEG-linked groups are described as creating specific virtual characters, including Photon Workshop’s Gilly from Peace Elite, the IEG Content Ecosystem Department and QQ Dance project team’s Star Pupil, and XNOX Studio’s co-creation (with NExT Studios) of the virtual singer Lumos, indicating that Tencent’s “virtual character” layer includes both enterprise-style digital humans and entertainment characters originating from game and music production teams.
Digital human related products:
Tencent Cloud AI Digital Human / TCADH (腾讯云智能数智人): Tencent Cloud AI Digital Human is described as the enterprise-grade platform for creating and deploying AI digital humans, structured into a Broadcasting Digital Human Platform for one-way presenter video generation and an Interactive Digital Human Platform for real-time two-way voice interaction, supporting multiple avatar styles and exposing server APIs and client SDKs for avatar creation, customization, voice cloning, video generation, and interactive sessions across varied deployment surfaces including web, mobile, large displays, and embedded endpoints.
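To make the platform's API surface concrete, the following is a minimal sketch of assembling a one-way "broadcasting" video-generation request. The field names, driver types, and endpoint are illustrative assumptions, not the documented TCADH API; the real server API reference should be consulted for actual signatures.

```python
import json

# Placeholder host; the real service endpoint differs.
API_BASE = "https://example.invalid/digital-human"

def build_broadcast_request(avatar_id: str, script_text: str,
                            voice_id: str, resolution: str = "1080p") -> dict:
    """Assemble a hypothetical presenter-video generation request body."""
    return {
        "AvatarId": avatar_id,    # pre-created avatar (official template or clone)
        "DriverType": "Text",     # text-driven broadcast; audio-driven is the other mode
        "Text": script_text,      # script the presenter will speak
        "VoiceId": voice_id,      # stock TTS voice or cloned voice
        "Resolution": resolution,
    }

payload = build_broadcast_request("avatar-demo-001", "Hello, world.", "voice-zh-f1")
print(json.dumps(payload, ensure_ascii=False))
```

An interactive-platform request would differ mainly in carrying session state for real-time two-way exchange rather than a one-shot script.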
Xiaowei Digital Humans (小微数智人) And Yu Ling (余灵): Xiaowei Digital Humans are described as the branded digital human character and assistant system powered by the Tencent Cloud AI Digital Human platform, serving as an ultra-realistic spokesperson capable of broadcasting, Q&A interaction, and sign-language interpretation, and the derivative virtual sign language interpreter Yu Ling is described as supporting very large-scale sign vocabulary coverage with high reported accuracy and use in sportscasting contexts, positioning it as a specialized accessibility-facing digital human deployment.
HunyuanVideo-Avatar (混元视频数字人): HunyuanVideo-Avatar is described as an open-source multimodal diffusion transformer model for audio-driven human animation developed jointly by the Hunyuan team and Lyra Lab, generating emotion-controllable video from a single portrait image and an audio clip and supporting multiple visual styles and multi-character scenes through named architecture modules designed to balance identity consistency with motion and to transfer emotion cues via audio and reference imagery.
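A core preprocessing step shared by audio-driven animation models of this kind is aligning the audio signal to the video frame rate so each generated frame is conditioned on the audio it spans. The sketch below shows that alignment generically; the 16 kHz / 25 fps rates are common defaults assumed for illustration, not values taken from the HunyuanVideo-Avatar release.

```python
import numpy as np

SAMPLE_RATE = 16_000                     # Hz, assumed mono waveform rate
FPS = 25                                 # assumed output video frame rate
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS   # 640 samples per video frame

def chunk_audio_per_frame(audio: np.ndarray) -> np.ndarray:
    """Split a mono waveform into per-frame windows, zero-padding the tail."""
    n_frames = int(np.ceil(len(audio) / SAMPLES_PER_FRAME))
    padded = np.zeros(n_frames * SAMPLES_PER_FRAME, dtype=audio.dtype)
    padded[: len(audio)] = audio
    return padded.reshape(n_frames, SAMPLES_PER_FRAME)

# Two seconds of audio -> 50 frame-aligned windows of 640 samples each.
windows = chunk_audio_per_frame(np.zeros(2 * SAMPLE_RATE, dtype=np.float32))
print(windows.shape)  # (50, 640)
```

In a full pipeline these windows would be encoded into audio features that condition the diffusion transformer alongside the reference portrait image.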
Sonic: Sonic is described as an open-source audio-driven portrait animation framework developed with Zhejiang University that generates lip-synced and expression-matched video from a static image and audio, emphasizing global audio perception and including mechanisms for independent head and expression control as well as temporal stabilization for longer sequences, with commercial usage routed through a Tencent Cloud API licensing pathway.
MuseV, MuseTalk, MusePose (Muse Series): The Muse series is described as three complementary open-source tools from Lyra Lab covering diffusion-based virtual human video generation with long-duration support (MuseV), real-time high-quality lip synchronization (MuseTalk), and pose-driven image-to-video performance generation (MusePose), collectively forming a modular pipeline for generating and animating human-like virtual performers and presenters.
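Because the three tools are positioned as complementary modules, their value comes from composition: a base-generation stage, a pose-driving stage, and a lip-sync stage chained over the same frame stream. The sketch below models that composition pattern with stub stages; the interfaces and stage bodies are illustrative, not the repositories' actual APIs.

```python
from typing import Protocol

class Stage(Protocol):
    """A pipeline stage: frames in, frames out."""
    def __call__(self, frames: list[str]) -> list[str]: ...

def video_gen(frames: list[str]) -> list[str]:
    # MuseV-style role: produce base virtual-human frames (stubbed).
    return frames + ["base_frame"]

def pose_drive(frames: list[str]) -> list[str]:
    # MusePose-style role: re-animate frames from a driving pose sequence (stubbed).
    return [f + "+pose" for f in frames]

def lip_sync(frames: list[str]) -> list[str]:
    # MuseTalk-style role: rewrite the mouth region to match audio (stubbed).
    return [f + "+lips" for f in frames]

def run_pipeline(stages: list[Stage], frames: list[str]) -> list[str]:
    for stage in stages:
        frames = stage(frames)
    return frames

result = run_pipeline([video_gen, pose_drive, lip_sync], [])
print(result)  # ['base_frame+pose+lips']
```

The uniform frames-in/frames-out contract is what lets individual stages be swapped or omitted, which matches the series' framing as a modular rather than monolithic toolchain.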
HY-Motion 1.0 And HY-Motion-1.0-Lite: HY-Motion 1.0 is described as a text-to-3D human motion generation model released by the Tencent Hunyuan 3D Digital Human Team that produces skeleton-based animation in SMPL-H format using diffusion/flow-based architectures and reinforcement learning from human feedback, with a smaller Lite variant also available, extending Tencent’s digital human pipeline into controllable 3D body motion generation.
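The SMPL-H output format mentioned above has a standard shape: per-frame axis-angle rotations for 52 joints (global orientation plus 21 body and 30 hand joints) and a root translation. The sketch below shows that representation and the Rodrigues conversion from an axis-angle joint rotation to a rotation matrix for skinning or retargeting; the frame count and example rotation are arbitrary illustration values.

```python
import numpy as np

NUM_JOINTS = 52  # SMPL-H: global orient + 21 body + 30 hand joints

def rodrigues(aa: np.ndarray) -> np.ndarray:
    """Convert one axis-angle vector (3,) to a 3x3 rotation matrix."""
    theta = np.linalg.norm(aa)
    if theta < 1e-8:
        return np.eye(3)
    axis = aa / theta
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

T = 120                               # frames, e.g. 4 s at 30 fps (assumed)
pose = np.zeros((T, NUM_JOINTS, 3))   # axis-angle rotation per joint per frame
trans = np.zeros((T, 3))              # global root translation per frame

# Example: rotate the root joint 90 degrees about +Y in every frame.
pose[:, 0] = [0.0, np.pi / 2, 0.0]
R = rodrigues(pose[0, 0])
print(np.round(R, 3))
```

A text-to-motion model in this format predicts the `pose` and `trans` arrays; any SMPL-H-compatible body model or game rig can then consume them directly.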
ZenVideo Digital Human Broadcasting And Live Streaming: ZenVideo’s digital human feature set is described as including presenter video generation from text or audio using official avatar templates, personalization via form cloning and voice cloning, and 24/7 automated digital human live streaming with automated comment replies and human takeover, representing Tencent’s mass-market packaging of “digital presenter” workflows for content and commerce use.
Tencent Yuanqi Persona And Clone Distribution: Tencent Yuanqi is described as enabling creation of digital personas and “digital clones” that can be distributed across Tencent properties (including WeChat Official Accounts, Mini Programs, and customer service channels), supporting knowledge bases, plugins, and workflow editing that can serve as the behavior and knowledge layer behind a digital human front end.
Hunyuan3D Series: The Hunyuan3D series is described as generating 3D models from text, images, or sketches with texture generation and topology optimization, and while framed as 3D asset generation, it is explicitly linked to digital humans through character model creation and virtual environment production, making it an enabling tool for 3D digital human pipelines.
U82: U82 is described as a consumer mobile app for personal avatar creation with adjustable facial and styling parameters and WeChat login, representing a consumer-facing pathway for everyday digital human/avatar creation distinct from enterprise and open-source model offerings.
Tencent Cloud TI Platform, Virtual Interactive Space (VIS), And Tencent Cloud Text-to-Speech: Tencent Cloud TI Platform is described as the enterprise machine learning platform underpinning the “digital human factory,” Virtual Interactive Space (VIS) is described as the cloud rendering product for virtual environments, and Tencent Cloud Text-to-Speech is described as the voice synthesis backbone supporting Tencent’s digital human products, together forming core infrastructure that supports model training/serving, real-time rendering contexts, and speech output for digital humans.
AvaMo (Japan Partner Launch): AvaMo is described as a Japan-market AI avatar video generation service developed by Vector Group’s Offshore Company using Tencent Cloud digital human creation technology, indicating Tencent’s internationalization pattern for digital human products via partner branding rather than only direct-to-market Tencent naming in every region.
Digital human related people:
Pony Ma / Ma Huateng (马化腾): Pony Ma is described as Tencent’s co-founder, chairman, and CEO who sets overall corporate direction including AI and digital human investment priorities, with specific emphasis on AI talent and organizational change around the Hunyuan team that contextualizes digital humans as one outcome of a broader AI-first transformation across Tencent’s portfolio.
Martin Lau / Liu Chiping (刘炽平): Martin Lau is described as Tencent’s president and as the direct reporting line for the Chief AI Scientist role, positioning him as an executive gatekeeper for senior AI appointments and the governance layer that determines where and how foundational AI capabilities such as those enabling digital humans are prioritized across business groups.
Dowson Tong / Tang Daosheng (汤道生): Dowson Tong is described as Senior Executive Vice President and CEO of CSIG, directly overseeing the organization that houses Tencent Cloud’s commercial digital human products and framing AI digital humans as practical enterprise tools for industry scenarios such as customer service, aligning enterprise deployment strategy with the cloud platform that productizes digital humans.
Lu Shan (卢山): Lu Shan is described as Senior Executive Vice President and President of TEG, the organizational base for major foundational AI functions including AI Lab and AI infrastructure efforts that supply the core model and platform capabilities feeding Tencent’s digital human systems.
Steven Ma / Ma Xiaoyi (马晓轶): Steven Ma is described as Senior Vice President of Interactive Entertainment and as a high-visibility demonstration subject for Tencent’s film-grade digital double work produced by NExT Studios, linking executive sponsorship and showcase validation to IEG’s high-fidelity digital human pipeline.
Yao Shunyu / Vinces Yao (姚顺雨): Yao Shunyu is described as Chief AI Scientist appointed in December 2025, leading Tencent’s overall AI model development including the Hunyuan foundation models that underpin avatar and digital human generation, with a background spanning elite academic training and prior OpenAI research contributions that connect agentic and model-level innovation to Tencent’s ability to scale digital human intelligence and generation quality.
Jiang Jie (蒋杰): Jiang Jie is described as a Tencent vice president who heads Tencent AI Lab and co-leads the Hunyuan model effort, directly overseeing model development that powers HunyuanVideo, HunyuanVideo-Avatar, and related components of the digital human pipeline, and representing the link between central AI research management and production model output.
Liu Yuhong (刘煜宏): Liu Yuhong is described as a Tencent Cloud vice president and co-lead of the Hunyuan large model, responsible for model technology underlying Tencent’s digital human solutions and associated deployment across internal business clients, placing him at the intersection of model capability and scalable internal rollout.
Wang Di (王迪): Wang Di is described as a Tencent Cloud vice president and technical lead of the Hunyuan model, with responsibility for presenting and shaping the multimodal model matrix that supports digital human generation and related multimodal applications.
Wu Yunsheng (吴运声): Wu Yunsheng is described as a Tencent Cloud vice president who simultaneously heads Tencent Cloud AI and Tencent YouTu Lab, directly overseeing the enterprise digital human platform while also controlling a key computer vision lab that supplies facial and portrait technology, thereby reducing friction between research and commercialization in Tencent’s digital human stack.
Ying Shan (单缨): Ying Shan is described as a Distinguished Scientist directing ARC Lab under PCG and the Center of Visual Computing at Tencent AI Lab, leading work on video generation and multimodal model development that feeds digital human capabilities and positioning Tencent’s content-generation research as a direct supplier of techniques used in digital human video synthesis.
Dong Yu (俞栋): Dong Yu is described as a Distinguished Scientist and Vice General Manager at Tencent AI Lab who heads the Seattle office and contributes speech recognition, NLP, and voice synthesis capabilities, aligning with the voice and conversational layers required for interactive digital humans.
Si Xiao (司晓): Si Xiao is described as a Tencent vice president and dean of Tencent Research Institute (TISI), shaping governance and industry framing for digital humans through policy and market analysis work that supports compliant scaling and strategic positioning of digital human deployments.
Li Xuechao (李学朝): Li Xuechao is described as a Tencent vice president overseeing Tencent Cloud Intelligent Products and publicly articulating the shift from “digital humans” toward “digital intelligent humans” as enterprise-serving digital employees, linking product philosophy, public messaging, and platform direction for Tencent’s cloud digital human line.
Chen Lei (陈磊): Chen Lei is described as general manager for Intelligent Digital Human Products at Tencent Cloud, directly managing the digital human product line and associated small-sample production capability and “AI plus Digital Human Factory” framing, tying day-to-day product execution to Tencent’s goal of lowering cost and scaling enterprise adoption.
All English Name Variations:
Tencent Holdings Limited, Tencent, Tencent Cloud, Tencent Cloud Intelligent, Tencent Cloud AI, Tencent Cloud AI Digital Human, TCADH, ZenVideo, Tencent Intelligent Video, Tencent Smart Video, Tencent Yuanqi, Tencent Xiaowei, Tencent Cloud Xiaowei, Tencent Hunyuan, Tencent HY, HunyuanVideo-Avatar, HunyuanVideo, HunyuanVideo-I2V, HunyuanCustom, Hunyuan3D, HunyuanWorld, HY-Motion 1.0, Sonic, Tencent AI Lab, Tencent YouTu Lab, Tencent Youtu Laboratory, WeChat AI Lab, ARC Lab, Tencent ARC Lab, Tencent Research Institute, TISI, Tencent Institute for Social Innovation, NExT Studios, NEXT Studio, Tencent QiDian, Tencent Enterprise Point, Tencent Music Entertainment Group, TME, TME Lyra Lab, Lyra Lab, TMELAND, MuseV, MuseTalk, MusePose, Muse series, Tencent Cloud TI Platform, Virtual Interactive Space, VIS, U82, Xiaowei Digital Humans, AvaMo.
All Chinese Name Variations:
腾讯控股有限公司, 腾讯, 腾讯云, 腾讯云智能, 腾讯云智能数智人, 腾讯智影, 腾讯元器, 腾讯小微, 腾讯云小微, 腾讯混元, 混元视频数字人, 腾讯AI实验室, 腾讯优图实验室, 微信AI实验室, 腾讯研究院, 云与智慧产业事业群, 互动娱乐事业群, 平台与内容事业群, 腾讯企点, 天琴实验室, 腾讯音乐娱乐集团, 腾讯音乐娱乐世界, 小微数智人, 余灵, 虚拟互动空间, 播报数智人平台, 交互数智人平台.
[Mar 2026]