Disney’s Christopher Robin has been banned in China. While no specific reason was given for the rejection, the Hollywood Reporter attributed the decision to China’s crackdown on images of Winnie the Pooh.
Pooh first gained notoriety in China in 2013, when bloggers began likening president Xi Jinping to the honey-loving bear using images shared on social media site Weibo.
The mere suggestion that Xi’s face and physique could be similar to that of a chubby cartoon was enough—China’s censors started blocking mentions and images of Pooh on social media sites. In 2017, WeChat blocked Pooh on its platform, informing users that images of the character would be removed.
Satirical Pooh memes got another boost in February, when China proposed removing presidential term limits from its constitution (the proposal passed). The bear effectively became a symbol of the resistance against censorship and authoritarianism—not to mention a fun way to poke fun at Xi’s waistline.
Pooh plays a central role in Christopher Robin, which had been in development since before the controversy accelerated in China. The movie, which opened in the US on Friday (August 3), is expected to make a strong debut, earning between $20 million and $30 million this weekend. A mix of live action and CGI, it stars Ewan McGregor as the titular character alongside Pooh, Tigger, Eeyore, and Piglet.
Christopher Robin is the second Disney film, after A Wrinkle in Time, to be rejected by China this year. The Hollywood Reporter noted that the move may have more to do with the size of the movie than its content, considering China’s foreign-film quota and other Hollywood blockbusters already playing there.
But the Chinese government has a history of reacting swiftly to Pooh-related slights. When Last Week Tonight host John Oliver joked in June that Xi is “very sensitive about his perceived resemblance to Winnie the Pooh,” China responded by blocking HBO’s websites and all mentions of Oliver.
Recently, many in China have felt locked out of online life: even Internet users who know censorship well have found it hard to cope as the state's Internet clampdown grows more intrusive and personal. The dominant WeChat app, which has about 500 million users in China, is now shutting down individual accounts over seemingly mundane political discussions.
None of the affected users are political activists or dissidents. The incidental mentions of political issues, which they suspect caused the problems, were tiny parts of their social media use. Experts say censorship in China operates in an opaque way: what is defined as "politically sensitive" changes constantly, and Internet users are rarely told why they are being punished.
“The result is people don’t know where the red line is until they cross it,” says Lotus Ruan, a censorship researcher at the University of Toronto’s Citizen Lab. “When they are not sure what constitutes ‘sensitive,’ it increases self-censorship and over-censorship.”
This seems to be in line with most other initiatives designed to create an AI-driven, panopticon-like atmosphere of control, such as the recently touted "emotional surveillance" programs, which are promoted as productivity enhancers but also expose the dark side of brain-computer interfaces. Organizations like Facebook have been furiously trying to emulate these automated processes instead of hiring additional moderators, because it is more economical.
The same line of reasoning was given for emotional surveillance instead of, say, asking workers how they feel, a practice commonly known as employee feedback during performance reviews.

This particular type of censorship issue is specific to social media, not tech companies at large, which has made efforts to legislate protections, such as the GDPR and Section 230, controversial. These protections tend to be applied to all tech companies rather than being limited to social media and social networking services above a certain user base, sized for the sake of argument at 10 million users, so that they do not become barriers to entry for startups that host user-contributed content like comments or blogs. The reason not to apply such rules to every site with user-contributed content is to prevent easy sabotage by competitors hiring PR firms to post content that would violate standards, something already seen in the manipulation of Facebook. Holding the site owner accountable for user-contributed content is a tricky situation, as it could radically stifle innovation, as noted in the debates around Section 230.

For social media companies over a certain size, it should be considered a matter of corporate responsibility to require the hiring of human moderators instead of building an AI panopticon of ever-shifting red lines, because human review embeds some level of oversight and transparency in the process. As usual, the pooh is in the details.