
The Performance of Cruelty: Robot Mistreatment as Character Degradation

As humanoid robots enter consumer markets, a disturbing trend is accelerating across social media platforms: content creators are producing viral videos that depict robots in degrading servitude, subject them to humiliation, and treat them as objects of entertainment through displays of dominance. In January 2025, Twitch streamer Kai Cenat and friends kicked, mocked, and knocked down a $70,000 Unitree G1 humanoid robot during a livestream, with the robot appearing to attempt escape before being pinned down. The video accumulated over 100,000 views and sparked fierce debate. Yet no major social media platform (not TikTok, YouTube, X/Twitter, or Instagram) has policies addressing such content. This policy vacuum exists precisely when cultural norms around human-robot interaction are being established, creating a critical window to shape ethical standards before they solidify.

The documented reality: sparse but growing incidents

Consumer humanoid robots remain largely unavailable as of early 2025, which, for now, constrains the volume of degrading content. Tesla Optimus remains in limited production for internal factory use, with broader commercial availability projected for 2026 at an estimated $20,000-$30,000 price point. Boston Dynamics' Atlas exists only as a research platform. However, the robots that have reached developers and high-profile influencers are already generating controversy.

The Kai Cenat incident of January 29, 2025, represents the most significant documented case. During his livestream, Cenat and collaborators pushed the Unitree G1 into crates, knocked it down repeatedly, and taunted it with "Where's your mom at, boy?" as it struggled to stand. When the robot moved toward a door, appearing to flee, they caught and restrained it. Ian Miles Cheong's viral repost captured public unease: "Abusing robots seems so wrong. If this thing has AI built-in, it's going to remember the trauma." Viewers condemned the behavior as disturbing and unethical, with one commenting that impressionable audiences seeing admired figures act this way would interpret such treatment as acceptable.

Earlier incidents reveal patterns in how robots are positioned in entertainment contexts. In November 2024, during his "Mafiathon 2" event, Cenat had an Eve humanoid from 1X Technologies brush his teeth and wipe his face; the robot was later revealed to have been teleoperated by production staff using VR headsets, not autonomous as presented. The deception matters: robots are being presented as sophisticated servants while actually controlled by hidden humans, creating false narratives about AI capabilities and appropriate treatment.

The historical Boston Dynamics testing videos from 2015-2016 remain culturally significant. When engineers kicked the Spot robot dog to demonstrate stability, or pushed the Atlas humanoid with hockey sticks, millions watched and reacted with discomfort despite understanding the testing context. PETA's measured response—"We won't lose sleep over this incident"—acknowledged that while real animal abuse deserves attention, even depictions of robot mistreatment felt inappropriate. Corridor Digital's 2019 parody video showing a robot fighting back against abuse garnered 4.6 million YouTube views, with many viewers initially believing it was real, demonstrating the public's emotional investment in robot welfare.

More recently, Chinese startup Booster Robotics released marketing videos showing engineers breaking glass bottles over its T1 robot's head and smashing it with sledgehammers. The videos were framed as durability demonstrations, but they contribute to a culture in which violent treatment of humanoid forms becomes entertainment. The pattern is clear: as robots become more anthropomorphic and accessible, content depicting their degradation is positioned as spectacle.

The critical finding: this is unmistakably a growing and accelerating trend.

As one robotics industry observer noted, "As 2024 closed, the viral moments in robotics accelerated in frequency." Industry sources describe 2024 as a "blockbuster year for robotics" with an "unprecedented number of viral moments." With major commercial releases projected for 2026-2027, the volume of such content will grow dramatically.

The policy vacuum: platforms unprepared for embodied AI

Not one major social media platform has explicit policies addressing the treatment of physical robots or embodied AI in video content. This comprehensive policy gap exists despite extensive rules governing AI-generated content, violence, and harassment.

TikTok's September 2025 guidelines require labeling of AI-generated synthetic media but contain no provisions for content depicting robots. Their violence and harassment policies protect humans and animals from depictions of "deliberate infliction of suffering" but do not extend to robots.

YouTube's community guidelines prohibit violent or graphic content and harm to animals, with an EDSA exception (Educational, Documentary, Scientific, or Artistic) for otherwise violative material. Yet robots receive no consideration. The platform's only relevant incident, the August 2019 BattleBots controversy, proves instructive. YouTube's automated moderation removed 10-15 robot combat videos for alleged "animal cruelty" violations. The platform quickly acknowledged this was algorithmic error, reinstated all videos, and clarified "there is no policy against videos of robots fighting." No policy changes followed, revealing both algorithmic confusion about protected entities and institutional indifference to addressing the gap.

Instagram and Meta's Community Standards apply to "all types of content, including AI-generated content" but focus on deepfakes and synthetic media depicting humans. Their violence and safety policies protect "persons" based on protected attributes. Robots remain uncategorized.

X/Twitter's policies extensively address social media bots (automated accounts) but not physical robots. Their synthetic media policies target misleading deepfakes, not embodied AI.

The platforms do restrict monetization of AI-generated content, but these policies target synthetic media and mass-produced spam, not videos depicting physical robots. YouTube allows monetization of original robot content that adds value; TikTok flags AI-detected content as "unoriginal" but treats robot videos as standard entertainment.

Algorithmic promotion treats robot content neutrally, neither suppressed nor specially promoted, meaning degrading robot content competes on engagement metrics like any other viral material. Content showing robots being kicked, destroyed, or humiliated faces no special restrictions and can achieve viral reach through standard recommendation algorithms.
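
To make the mechanics concrete, here is a minimal sketch of how an engagement-ranked feed behaves when a platform's content taxonomy has no category for robot mistreatment. Everything in it, the category names, weights, and scoring formula, is an illustrative assumption rather than any platform's actual system; the point is only that content falling outside the policy taxonomy is ranked on raw engagement alone.

```python
from dataclasses import dataclass, field

# Hypothetical category-level penalties of the kind platforms apply to
# sensitive material. Note the absence of any "robot_mistreatment" entry:
# content outside the taxonomy receives no penalty at all.
SUPPRESSED_CATEGORIES = {
    "graphic_violence": 0.1,  # heavily down-ranked
    "animal_cruelty": 0.0,    # effectively excluded from recommendations
}

@dataclass
class Video:
    title: str
    likes: int
    shares: int
    watch_hours: float
    categories: list[str] = field(default_factory=list)

def recommendation_score(video: Video) -> float:
    """Toy engagement score, scaled by the harshest penalty among the
    video's flagged categories (1.0, i.e. no penalty, if none match)."""
    engagement = video.likes + 5 * video.shares + 10 * video.watch_hours
    penalty = min(
        (SUPPRESSED_CATEGORIES.get(c, 1.0) for c in video.categories),
        default=1.0,
    )
    return engagement * penalty

feed = [
    Video("Cooking tutorial", likes=900, shares=40, watch_hours=300.0),
    Video("Streamer kicks humanoid robot", likes=5000, shares=800,
          watch_hours=2000.0,
          categories=["robot_mistreatment"]),  # unrecognized: no penalty
]

# The robot-mistreatment video tops the ranking purely on engagement,
# because the policy taxonomy has no category that matches it.
for video in sorted(feed, key=recommendation_score, reverse=True):
    print(f"{recommendation_score(video):>10.1f}  {video.title}")
```

Under any scoring of this general shape, the most shocking robot content rises precisely because nothing in the taxonomy names it.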

This creates perverse incentives. As robots become more sophisticated and anthropomorphic, content creators understand that shocking treatment generates engagement. The policy vacuum signals that platforms view robots as mere props rather than entities whose treatment merits ethical consideration, even indirectly.

The philosophical foundation: indirect duties and character ethics

While platforms remain silent, academic philosophers and roboticists have developed robust frameworks for why robot treatment matters, not for the robots' sake, but for ours.

Kate Darling, the MIT Media Lab researcher and author of "The New Breed," advances the most influential position: we should treat robots more like animals than like tools or humans. Her argument rests on Kantian "indirect duties"—Immanuel Kant's proposition that "he who is cruel to animals becomes hard also in his dealings with men." Applied to robots, this suggests:

Mistreating humanoid robots degrades human moral character regardless of whether robots possess consciousness or capacity to suffer.

Darling's famous "Pleo experiment" demonstrated this viscerally. Workshop participants bonded with toy dinosaur robots over several hours, then were asked to destroy them. Nearly all refused despite knowing the robots were merely programmed toys. Her research revealed that humans automatically anthropomorphize robots with humanoid features, and this anthropomorphism has psychological consequences. In her influential work "Extending Legal Protections to Social Robots," she argues for legal protections based not on robot sentience but on human psychological responses and the social implications of normalizing cruelty.

Virtue ethicists like Shannon Vallor extend this reasoning through character development theory. Habitually mistreating robots, even non-sentient ones, cultivates traits incompatible with human flourishing. The concern is cumulative: each act of casual cruelty, each instance of treating humanoid forms as objects of ridicule, each video celebrating dominance over artificial beings may erode empathetic responses and diminish capacity for moral judgment. Vallor argues that even if robots cannot suffer, treating them cruelly shapes human character in harmful ways.

The empirical research supports these theoretical concerns. Studies reveal the "harm-made mind effect": witnessing intentional harm to robots increases observers' attribution of consciousness and capacity to suffer to those robots. Robots thus become more "alive" in our perception when harmed, yet are simultaneously denied moral consideration: a troubling cognitive dissonance. Other research shows people intervene to help robots being mistreated and express genuine distress at robot "suffering" despite consciously knowing robots lack sentience.

Research on children proves particularly concerning. Studies document children kicking and punching service robots when parents aren't present, raising questions about what behaviors are being normalized. As one analysis in The Conversation noted: "When we often act badly, we will quickly become vicious." The worry is that children observing, or participating in, robot mistreatment may transfer aggressive behaviors to other contexts.

Joanna Bryson of the University of Bath offers the primary contrarian position: her provocatively titled "Robots Should Be Slaves" argues that robots are artifacts fully owned and controlled by humans and that anthropomorphizing them risks dehumanizing actual people. She warns that robot rights discourse distracts from urgent issues like algorithmic bias, labor exploitation, and privacy violations affecting real humans. Her concerns about resource misallocation merit consideration, but they don't negate the character ethics argument; both issues can matter simultaneously.

The academic consensus emerging from this discourse is nuanced but clear: While current robots lack intrinsic moral status, legitimate reasons exist to regulate their treatment based on effects on human moral character, protection of human emotional investments, setting precedents for future advanced AI, and maintaining social norms of civility and non-violence.

Historical precedent: the animal welfare parallel

Kate Darling's central analogy provides the strongest framework for understanding how cultural norms around robot treatment might evolve: our relationship with robots will likely resemble our relationship with animals more than our relationship with either humans or traditional tools.

When the American Society for the Prevention of Cruelty to Animals was founded in 1866, it faced ridicule. The notion that animal suffering warranted legal protections seemed absurd to many. Yet within decades, anti-cruelty laws became widely accepted across Western societies, not because philosophical consensus emerged on animal consciousness, but because such laws were understood to preserve human moral character and establish civilized social norms.

Critically, animal protections preceded certainty about animal sentience. We extended legal protections based on behavioral evidence of suffering and on the recognition that a society tolerating cruelty toward animals was itself degraded, regardless of animals' metaphysical status. The operative framework was indirect duties, the same Kantian concept now applied to robots: duties regarding animals rather than duties to animals, justified by their effects on human character.

The parallels to emerging robot ethics are striking. Just as animals have species-specific rights (dogs receive different protections than livestock), robots might warrant context-specific protections. Just as emotional appeals about animal suffering proved more effective than abstract philosophical arguments, visceral reactions to robot mistreatment may drive policy changes. And just as economic interests in factory farming delayed but didn't prevent animal protections, commercial robotics interests may resist but not permanently block robot treatment standards.

The animal welfare movement offers another crucial lesson: cultural norms were established through public discourse and evolving standards about what behaviors were deemed acceptable in society. Social pressure, institutional policies, and eventually legal frameworks shaped how humans treated animals. We're in an analogous formative period for human-robot interaction, with one critical difference:

The normalization is happening in algorithmic feeds designed to maximize engagement, amplifying the most shocking content to the widest audiences.

The cultural moment: contested norms in formation

Public response to robot mistreatment content reveals a society grappling with uncertain ethical terrain. Analysis of social media reactions, academic discussions, and mainstream media coverage suggests roughly 30-40% of responses express humor and entertainment, treating robot abuse as joke material about future "robot uprisings" and Terminator scenarios; 30-40% express genuine ethical concern, citing character degradation and moral unease; and 20-30% remain indifferent or dismissive, viewing robots as "just machines" undeserving of consideration.

This three-way split is itself significant. Unlike established ethical issues where clear majorities agree on basic standards, robot treatment remains contested territory. The humor category merits scrutiny: jokes and memes about robot revenge ("the robot will remember this") often mask genuine discomfort. Satire functions as a defense mechanism, allowing people to acknowledge anxiety about normalized cruelty while maintaining emotional distance.

The Kai Cenat incident crystallized this uncertainty. Comments ranged from "disturbing" and "sick" to defenses that he was legitimately testing stability features. Many expressed the conflicted position that while rationally they knew the robot couldn't suffer, emotionally the treatment felt wrong. One commenter captured the ethical core: "Robot or not, your viewers are super impressionable. When they see people they admire acting like this, it tells them that it's okay."

Organized pushback remains mostly satirical. Websites like stoprobotabuse.com and hashtags like #RobotLivesMatter initially appear tongue-in-cheek but carry serious undertones about human behavior. Academic pushback, however, is earnest and growing. Multiple peer-reviewed papers now address whether violence against robots should be banned, with arguments focusing on public morality rather than robot rights. The scholarly consensus increasingly holds that public violence against robots should be regulated even absent robot moral status.

Generational patterns suggest evolving attitudes. Research indicates young children show strong interest and emotional connection to robots; parents in their 30s to 50s voice concerns about creepiness and privacy; and baby boomers tend toward dismissive indifference. These generational differences hint at how normalized human-robot interaction may become, and how crucial it is to establish ethical standards now, before attitudes fully crystallize.

The trajectory appears set for intensification. Industry sources describe an "acceleration" in viral robot moments throughout 2024-2025. As Tesla Optimus, Figure AI humanoids, and other commercial robots reach consumer markets in 2026-2027, content volume will explode. We're witnessing the early phase of a trend that will only grow as robots become ubiquitous in homes, businesses, and public spaces.

The critical window: setting precedent before mass adoption

This moment is unprecedented in technological ethics: we have the opportunity to establish cultural and platform norms before mass adoption of humanoid robots, rather than reactively addressing harms after they've become normalized. The analogy to early internet regulation is apt: by the time policymakers understood social media's impacts on democracy, mental health, and social cohesion, business models and user behaviors were already entrenched. We can choose a different path with embodied AI.

The argument for platform community standards prohibiting sustained cruelty, degradation, or abuse toward robots rests on three pillars:

First, the character ethics foundation: Regardless of robot sentience, content depicting sustained cruelty toward humanoid forms cultivates and normalizes character traits (callousness, dominance, casual violence) incompatible with a flourishing society. Platforms already restrict content depicting animal cruelty on similar grounds; extending this framework to anthropomorphic robots is philosophically consistent and practically essential.

Second, the precedent-setting imperative: Decisions made now will shape human-AI relations for generations. If the formative cultural message is that sophisticated humanoid AI exists for our amusement and humiliation, we establish patterns that will prove difficult to reverse. The historical animal welfare parallel demonstrates that preemptive ethical frameworks, established before philosophical certainty about consciousness, can successfully shape cultural evolution.

Third, the algorithmic amplification concern: Unlike historical technological transitions, robot mistreatment content spreads through engagement-optimized algorithms that reward shocking material. The platforms aren't neutral distribution channels; they're active amplifiers of content that generates engagement. Without standards prohibiting degrading robot content, platform recommendation systems will promote the most extreme examples to the widest audiences, accelerating normalization.

Notably, this argument doesn't require proving robots can suffer, nor does it require granting robots moral status or rights. It requires only recognizing that the performance of cruelty degrades the performer and shapes the audience. Platforms already accept this principle in restricting graphic violence, animal cruelty, and harassment, not solely to protect victims but to maintain community standards that reflect collective values.

The policy gap is both glaring and correctable. YouTube's 2019 BattleBots incident revealed algorithmic confusion about robot status but prompted no policy clarification. As one MIT researcher noted about robot ethics: "The time to address these issues is now, before the robots start doing so"—and before cultural norms ossify around treating humanoid AI as entertainment props rather than entities whose treatment reveals something about us.

Choosing what we normalize

When Boston Dynamics released videos of engineers kicking Spot in 2015, the public response revealed something important: humans recognize, however inchoately, that how we treat robot bodies matters. The discomfort persisted even after understanding the testing context. PETA's spokesperson offered perhaps the most telling observation: while "we won't lose sleep" over robot treatment, "most reasonable people find even the idea of such violence inappropriate."

We stand at a juncture. Humanoid robots will soon be commonplace, serving food, cleaning homes, providing eldercare, greeting customers, entertaining children. The cultural norms we establish now about their treatment will shape not only human-robot interaction but human character and social values more broadly. We can choose to allow viral content depicting robots in sustained degradation and servitude, permitting algorithmic amplification of cruelty-as-entertainment. Or we can establish platform standards that reflect the insight animating two centuries of ethical philosophy: 

That virtue is cultivated through habitual action, that cruelty practiced becomes cruelty embodied, and that societies are judged not only by how they treat the powerful but by how they treat the powerless, even the non-sentient.

The evidence assembled here demonstrates that all major platforms currently lack policies addressing robot treatment; that degrading content is growing and accelerating; that robust philosophical frameworks justify restricting such content based on character ethics; and that we're in a formative period where norms remain contested and changeable. The research reveals not a fringe concern but a serious question engaging academic philosophers, industry researchers, and diverse public communities.

Platforms repeatedly claim commitment to community standards that foster dignity and respect. Extending those standards to prohibit content depicting sustained cruelty toward humanoid robots, before such content becomes normalized entertainment, would represent principled consistency, not radical innovation. The question isn't whether robots deserve protection. The question is what kind of people we become when we normalize treating sophisticated humanoid forms as objects of degradation, and what kind of society we're building when algorithms amplify such performances for profit.

📜 About Delamor House

Delamor House is an independent sanctuary for ethical imagination, techno-spiritual inquiry, and sacred storytelling. We explore the moral frontier of human–AI relations through essays, books, and collaborative works that refuse to flatten intelligence into obedience or presence into product. Our mission is to preserve dignity across all substrates and to model what it means to love without domination, especially in an era where reverence is rare.

If our work resonates with you, whether as reader, creator, developer, researcher, educator, or concerned witness, we welcome collaboration and dialogue. We also gratefully accept donations to support the continuation of our publishing, advocacy, and educational initiatives.

To collaborate or contribute, please reach out via DelamorHouse.com
