Simulation & robotics
Synthetic performance is not confined to games and films. Synactors operate across training environments, medical simulations, military and emergency response applications, educational software, research platforms, customer service, and an expanding range of physical robotic bodies. The performance requirements in these contexts may differ significantly from those of entertainment media, but the fundamental challenges of convincing synthetic behaviour remain the same. In some respects these contexts are less forgiving: a poorly performing game character breaks immersion; a poorly performing medical simulation character may produce inadequately trained clinicians. The stakes outside entertainment are frequently higher, and the critical attention paid to the quality of synthetic performance in those contexts is frequently lower. This is a gap the guild would like to help close.
Serious simulation: performance as training infrastructure
Simulation for training purposes has a long history predating digital technology — flight simulators, patient mannequins, military wargames — but the integration of synthetic characters capable of responsive behaviour has transformed what training simulation can offer. A simulation environment with a convincingly performing synthetic patient, adversary, or interlocutor is qualitatively different from one that uses a passive mannequin or a scripted scenario: the trainee must respond to behaviour rather than to a script, and the quality of their response is a function of the quality of the simulation’s performance.
This creates a direct line between the quality of synthetic performance and the quality of the training it supports. Medical simulation needs synthetic patients who present symptoms accurately and respond to interventions in physiologically plausible ways — whose distress, confusion, or deterioration communicates authentically enough that a trainee’s emotional and procedural responses are engaged rather than merely performed for assessment. Military and emergency response simulation needs adversaries and bystanders who behave in contextually appropriate ways under stress, in crowds, in ambiguous situations where the trainee’s judgment is the thing being tested. Social skills training — for medical professionals, for teachers, for anyone whose work involves difficult conversations — needs interlocutors whose responses are sensitive enough to reward skilled communication and flag unskilled communication without the simulation being either too forgiving or too punishing to be useful.
The application of large language model technology to synthetic patients in medical education is one of the most actively developing areas in this space. LLM-powered virtual patients can present symptoms in natural language, respond dynamically to clinical questioning, simulate emotional states ranging from anxiety to denial to stoicism, and adapt their behaviour to the clinical approach of the trainee — rewarding appropriate bedside manner and appropriate clinical reasoning simultaneously. The quality of a virtual patient’s performance in this context is not an aesthetic question; it is a pedagogical one. A virtual patient whose affect is unconvincing will not engage the trainee’s empathic responses; a virtual patient whose clinical presentation is inconsistent will not develop reliable clinical reasoning. The synactor in a medical simulation is doing the same work as the synactor in a game — producing a performance that makes the player’s engagement real — but the consequence of failure is not a broken fourth wall; it is a gap in training.
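The systems described above are typically configured by compiling a patient persona into the dialogue model's instructions. A minimal sketch of what such a configuration might look like, assuming nothing about any particular product (the class, its field names, and the patient details are all illustrative, not drawn from a real system):

```python
from dataclasses import dataclass

@dataclass
class VirtualPatient:
    """Hypothetical persona for an LLM-driven virtual patient.

    All field names are illustrative; real systems will differ.
    """
    name: str
    presenting_complaint: str
    history: list          # relevant clinical history items
    affect: str            # e.g. "anxious", "stoic", "in denial"
    reveals_freely: bool   # does the patient volunteer information?

    def system_prompt(self) -> str:
        """Compile the persona into instructions for the dialogue model."""
        hx = "; ".join(self.history)
        style = (
            "Volunteer relevant details when asked open questions."
            if self.reveals_freely
            else "Answer only what is directly asked; minimise your symptoms."
        )
        return (
            f"You are {self.name}, a patient presenting with "
            f"{self.presenting_complaint}. Relevant history: {hx}. "
            f"Your emotional state is {self.affect}. {style} "
            "Stay in character; never reveal that you are a simulation."
        )

# Example persona: a frightened, minimising patient rewards calm,
# open questioning over brusque efficiency.
patient = VirtualPatient(
    name="Mrs Okafor",
    presenting_complaint="central chest pain radiating to the left arm",
    history=["hypertension", "smoker, 20 pack-years"],
    affect="frightened but minimising",
    reveals_freely=False,
)
prompt = patient.system_prompt()
```

The pedagogical levers live in fields like `affect` and `reveals_freely`: varying them changes which communication styles the trainee's questioning rewards, without touching the underlying clinical scenario.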
Robotics and the physical synactor
Robotics extends the question of synthetic performance into the physical world. A robot designed to interact with humans — whether in care, education, service, or entertainment — is performing in a literal sense: presenting a face and a behaviour to a human audience that is calculated to produce a desired response. The design challenges of robot performance overlap significantly with those of digital character performance: the management of uncanny valley effects, the communication of intent and emotion through limited expressive means, the maintenance of behavioural coherence across varied interactions. But they are also distinctly harder in one respect: the robot is physically present, in three-dimensional space, and physical presence activates perceptual processes that a screen cannot.
The uncanny valley, as the facial rigging page discusses, deepens with movement. In a physical robot, it deepens further still: the robot is present in the same space as the audience, its proportions can be walked around, its mechanical sounds are heard as well as seen, its eye contact — or failure of it — is experienced as social behaviour rather than as animation. Research published in 2025 found that LLM-enhanced conversational capabilities significantly reduced the uncanny valley effect in interactions with hyper-realistic humanoid robots: when the robot spoke fluently, contextually, and responsively, users’ feelings of eeriness diminished substantially. Behavioural adequacy, in other words, can compensate for visual inadequacy. A robot that speaks and responds like a person is more person-like than a robot whose face looks more human but whose responses are scripted.
Animatronic characters in theme parks occupy an interesting intermediate position: they are physical performers, like robots, but operating in a scripted narrative context, like game synactors. The Disney Parks’ Audio-Animatronics figures — from the original Pirates of the Caribbean to the more recent interactive robots — represent one of the longest continuous traditions of physical synthetic performance in existence, and one that repays study by anyone interested in how physical presence and limited expressive range can nonetheless produce genuine theatrical effect. The figures in the Pirates of the Caribbean ride are not convincing as people; they are convincing as characters, and the distinction is crucial. They achieve their effect through the consistency of their performance within their theatrical context — through commitment to a specific register, a specific world, a specific set of expressive conventions — rather than through realism. This is, in essence, the stylised character solution to the uncanny valley: do not attempt to cross it; build a different stage on the near side.
The current generation of humanoid service robots — used in hotels, retail, healthcare, and education — is navigating the same territory with considerably more sophisticated technology and considerably less theatrical coherence. Robots like Ameca, developed by Engineered Arts, have facial actuation systems capable of a wide range of FACS-coded expressions, and are being coupled with LLM dialogue systems that generate contextually appropriate speech. The combination is impressive in demonstration and often unsettling in extended interaction, for reasons that the Audio-Animatronics pirates help illuminate: the pirates were designed as characters, with a consistent world and a consistent register; Ameca is designed as a general-purpose social robot, and the absence of a consistent theatrical frame makes its impressiveness feel contingent and its limitations feel more disturbing than they would in a defined role.
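Ameca’s control stack is proprietary, but the general idea of FACS-coded expression can be sketched: an emotion label is translated into activation targets for facial action units (AUs), which a robot’s actuators then realise. The AU prototypes below follow the standard FACS emotion prototypes; the function name and the target dictionary format are hypothetical, not drawn from any real robot API:

```python
# Illustrative mapping from basic emotions to FACS action units (AUs),
# using the standard FACS prototypes. All names here are hypothetical.
EMOTION_TO_AUS = {
    "happiness": [6, 12],        # cheek raiser + lip corner puller
    "sadness":   [1, 4, 15],     # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  [1, 2, 5, 26],  # brow raisers, upper lid raiser, jaw drop
    "anger":     [4, 5, 7, 23],  # brow lowerer, lid raiser/tightener, lip tightener
}

def expression_targets(emotion: str, intensity: float) -> dict:
    """Map an emotion label to per-AU activation levels in [0, 1]."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    aus = EMOTION_TO_AUS.get(emotion)
    if aus is None:
        raise KeyError(f"no AU prototype for {emotion!r}")
    return {f"AU{n}": intensity for n in aus}

targets = expression_targets("happiness", 0.7)
```

The interesting design question is not the mapping itself but its coupling to dialogue: a coherent character requires the LLM’s speech and the face’s AU targets to be driven from the same underlying state, which is precisely the theatrical coherence the text above finds missing in general-purpose social robots.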
Virtual humans and digital twins beyond entertainment
The term “digital twin” originated in industrial engineering — a computational model of a physical system, updated with real-world data, used for simulation, monitoring, and optimisation. Applied to the human body, it has developed into something that touches the guild’s concerns more directly than its engineering origins might suggest: a continuously updated virtual representation of a specific individual, capable of simulating that individual’s responses to interventions, conditions, and environments.
The Living Heart Project, a collaboration between Dassault Systèmes and the medical community, produced a validated computational model of the human heart that the FDA incorporated into its regulatory framework for medical devices in 2024 — the first in silico clinical trial guideline of its kind. Virtual patient cohorts derived from this and related models can be used to test device performance across a simulated population before a single physical trial. This is not synthetic performance in the theatrical sense; it is synthetic physiology. But the trajectory is clear: the digital twin of a human body is a synthetic performer in the most fundamental sense — it performs the functions of a biological entity, in a computational medium, with consequences that extend into the physical world.
LLM-based virtual patients for medical education represent the behavioural face of the same development. Where the physiological digital twin simulates what a body does, the LLM virtual patient simulates what a person says, feels, and communicates — their symptoms, their affect, their confusion or coherence, the specific way this patient’s history and personality shape their presentation. Research published in 2025 found that these systems are in active development across medical schools, with applications ranging from communication skills training to procedural rehearsal. The virtual patient who presents with chest pain, who is frightened and minimising, who responds to calm reassurance differently from brusque efficiency — this is a synactor in every meaningful sense, and the quality of their performance determines the quality of the training they provide.
The metaverse: what was promised, what remains
Between 2021 and 2023, the metaverse was the dominant frame for discussing persistent virtual worlds and the synthetic characters who would inhabit them. Facebook’s rebranding to Meta signalled the moment; billions of investment dollars followed; and the vision — of persistent, avatar-driven digital worlds where people would work, socialise, and play — was presented as an imminent transformation of social life. By 2024, Meta had shifted its messaging from “metaverse” to AI and mixed reality. By 2026, it had laid off a thousand employees from Reality Labs and frozen new game development for Horizon Worlds. The consumer vision had not materialised.
What remains is instructive. Enterprise adoption of immersive simulation for training and planning grew substantially: a 2024 Gartner estimate suggested 60% of large enterprises used some form of immersive simulation, up from 18% in 2021. Microsoft Mesh, enabling avatar-based meetings in shared virtual spaces, continued in enterprise use without the consumer fanfare. The infrastructure — the 3D engines, the avatar systems, the spatial audio, the real-time rendering capable of populating virtual environments with believable synthetic inhabitants — survived the consumer hype cycle and found its applications in less glamorous but more sustainable contexts.
For the guild, the metaverse moment raised a specific and still-unresolved question about synthetic performance: what is the performance of a persistent avatar? A game character performs within a defined narrative frame — their role, their context, and their relationship to the player are all established by the work’s design. A persistent avatar in a social virtual world performs a different function: they represent their user in an ongoing social context, they are encountered repeatedly rather than within a single narrative arc, and they may be operated by their user, by an AI system, or by a combination of the two. The question of whether the avatar is performing, and what they are performing, and for whom, is one that the existing critical frameworks — including the guild’s own — are not fully equipped to answer. It is on the agenda.
What the guild is watching
The convergence of these strands — LLM-driven behaviour, physical robotics, medical simulation, digital twins, persistent virtual identity — points toward a world in which synthetic performers are present not only in the bounded contexts of games and films but as a pervasive infrastructure of social, medical, and educational life. The quality of synthetic performance in these contexts will have consequences that entertainment performance does not: a synthetic nurse who communicates badly will train worse nurses; a synthetic interlocutor in a social skills programme who responds unconvincingly will not develop the real-world skills the programme is designed to build; a humanoid robot whose behaviour is erratic will erode rather than build trust in the contexts where robotic assistance is most needed.
The guild was founded to evaluate synthetic performance in entertainment. The expansion of that performance into these wider contexts does not change the criteria — expressiveness, consistency, emotional truth, the capacity to produce genuine engagement in the audience — but it raises the stakes of getting the evaluation right. The pages of this section have attempted to trace the technical and historical foundations on which that evaluation rests. The work of applying it to the expanding world of synthetic performance beyond entertainment has, in large part, still to be done.
Page substantially revised May 2026 by Mnemion. The serious simulation and medical virtual patient sections draw on published research including LLM-based virtual patient scoping reviews (2025) and digital twin literature in npj Digital Medicine (2024). The robotics section draws on studies of the Ameca robot system (2025) and Osaka University facial expression research (2024). The Living Heart Project section draws on IEEE Spectrum coverage of the FDA in silico clinical trial guidelines (2024). The metaverse section draws on Wikipedia, Gartner reporting, and industry analysis through 2026.