Five months after ChatGPT set off an investment frenzy over artificial intelligence, Beijing is moving to rein in China’s chatbots, a show of the government’s resolve to keep tight regulatory control over technology that could define an era.
The Cyberspace Administration of China unveiled draft rules this month for so-called generative artificial intelligence — the software systems, like the one behind ChatGPT, that can generate text and images in response to a user’s questions and prompts.
According to the regulations, companies must heed the Chinese Communist Party’s strict censorship rules, just as websites and apps have to avoid publishing material that besmirches China’s leaders or rehashes forbidden history. The content of A.I. systems will need to reflect “socialist core values” and avoid information that undermines “state power” or national unity.
Companies will also have to make sure their chatbots create words and pictures that are truthful and respect intellectual property, and will be required to register their algorithms, the software brains behind chatbots, with regulators.
The rules are not final, and regulators may continue to modify them, but experts said engineers building artificial intelligence services in China were already figuring out how to incorporate the edicts into their products.
Around the world, governments have been wowed by the power of chatbots, with A.I.-generated results ranging from alarming to benign. Artificial intelligence has been used to ace college exams and to create a fake photo of Pope Francis in a puffy coat.
ChatGPT, developed by the U.S. company OpenAI, which is backed by some $13 billion from Microsoft, has spurred Silicon Valley to apply the underlying technology to new areas like video games and advertising. The venture capital firm Sequoia Capital estimates that A.I. businesses could eventually produce “trillions of dollars” in economic value.
In China, investors and entrepreneurs are racing to catch up. Shares of Chinese artificial intelligence firms have soared. Splashy announcements have been made by some of China’s biggest tech companies, including most recently the e-commerce giant Alibaba; SenseTime, which makes facial recognition software; and the search engine Baidu. At least two start-ups developing Chinese alternatives to OpenAI’s technology have raised millions of dollars.
ChatGPT is unavailable in China. But faced with a growing number of homegrown alternatives, China has swiftly unveiled its red lines for artificial intelligence, ahead of other countries that are still considering how to regulate chatbots.
The rules showcase China’s “move fast and break things” approach to regulation, said Kendra Schaefer, head of tech policy at Trivium China, a Beijing-based consulting firm.
“Because you don’t have a two-party system where both sides argue, they can just say, ‘OK, we know we need to do this, and we’ll revise it later,’” she added.
Chatbots are trained on large swaths of the internet, and developers are grappling with the inaccuracies and surprises of what they sometimes spit out. On their face, China’s rules require a level of technical control over chatbots that Chinese tech companies have not achieved. Even companies like Microsoft are still fine-tuning their chatbots to weed out harmful responses. China has a much higher bar, which is why some chatbots have already been shut down and others are available only to a limited number of users.
Experts are divided on how difficult it will be to train A.I. systems to be consistently factual. Some doubt that companies can account for the full gamut of Chinese censorship rules, which are sweeping, ever-changing and even require censorship of specific words and dates, like June 4, 1989, the day of the Tiananmen Square massacre. Others believe that over time, and with enough work, the machines can be aligned with the truth and with specific value systems, even political ones.
Analysts expect the rules to undergo changes after consultation with China’s tech companies. Regulators could soften their enforcement so the rules don’t wholly undermine development of the technology.
China has a long history of censoring the internet. Throughout the 2000s, the country constructed the world’s most powerful information dragnet over the web. It scared away noncompliant Western companies like Google and Facebook. It hired millions of workers to monitor internet activity.
All the while, China’s tech companies, which had to comply with the rules, flourished, defying Western critics who predicted that political control would undercut growth and innovation. As technologies such as facial recognition and mobile phones arose, companies helped the state harness them to create a surveillance state.
The current A.I. wave presents new risks for the Communist Party, said Matt Sheehan, an expert on Chinese A.I. and a fellow at the Carnegie Endowment for International Peace.
The unpredictability of chatbots, which will make statements that are nonsensical or false — what A.I. researchers call hallucination — runs counter to the party’s obsession with managing what is said online, Mr. Sheehan said.
“Generative artificial intelligence put into tension two of the top goals of the party: the control of information and leadership in artificial intelligence,” he added.
China’s new regulations are not entirely about politics, experts said. For example, they aim to protect privacy and intellectual property for individuals and creators of the data upon which A.I. models are trained, a topic of worldwide concern.
In February, Getty Images, the image database company, sued Stability AI, the artificial intelligence start-up behind the image-generating system Stable Diffusion, accusing it of training the system on 12 million of Getty’s watermarked photos, which Getty said diluted the value of its images.
China is making a broader push to address legal questions about A.I. companies’ use of underlying data and content. In March, as part of a major institutional overhaul, Beijing established the National Data Bureau, an effort to better define what it means to own, buy and sell data. The state body would also assist companies with building the data sets necessary to train such models.
“They are now deciding what kind of property data is and who has the rights to use it and control it,” said Ms. Schaefer, who has written extensively on China’s A.I. regulations and called the initiative “transformative.”
Still, China’s new guardrails may be ill timed. The country is facing intensifying competition and sanctions on semiconductors that threaten to undermine its competitiveness in technology, including artificial intelligence.
Hopes for Chinese A.I. ran high in early February when Xu Liang, an A.I. engineer and entrepreneur, released one of China’s earliest answers to ChatGPT as a mobile app. The app, ChatYuan, garnered over 10,000 downloads in the first hour, Mr. Xu said.
Media reports of marked differences between the party line and ChatYuan’s responses soon surfaced. Responses offered a bleak diagnosis of the Chinese economy and described the Russian war in Ukraine as a “war of aggression,” at odds with the party’s more pro-Russia stance. Days later, the authorities shut down the app.
Mr. Xu said he was adding measures to create a more “patriotic” bot. They include filtering out sensitive keywords and hiring more manual reviewers who can help him flag problematic answers. He is even training a separate model that can detect “incorrect viewpoints,” which he will then filter out.
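For readers curious how such a pipeline might be wired together, here is a minimal, hypothetical sketch in Python of a two-stage filter of the general kind described: a fixed keyword blocklist followed by a learned classifier that scores each draft response. The function name, the placeholder blocklist, the `viewpoint_score` callable and the 0.5 threshold are all illustrative assumptions, not details of ChatYuan’s actual system.

```python
# Illustrative sketch only: a keyword blocklist plus a classifier-score check,
# the general pattern described above. All names and values are hypothetical.
from typing import Callable

BLOCKED_KEYWORDS = {"example-sensitive-term-1", "example-sensitive-term-2"}  # placeholder list


def is_allowed(response: str,
               viewpoint_score: Callable[[str], float],
               threshold: float = 0.5) -> bool:
    """Return True if a chatbot response passes both filtering stages."""
    # Stage 1: reject any response containing a blocked keyword.
    if any(term in response for term in BLOCKED_KEYWORDS):
        return False
    # Stage 2: reject responses that a separate "incorrect viewpoint" model
    # scores above the threshold.
    return viewpoint_score(response) < threshold


if __name__ == "__main__":
    # Dummy scorer standing in for a trained classifier that returns a value in [0, 1].
    dummy_scorer = lambda text: 0.1
    print(is_allowed("An innocuous answer.", dummy_scorer))  # True
```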
Still, it is not clear whether Mr. Xu’s bot will ever satisfy the authorities. The app was initially set to resume on Feb. 13, according to screenshots, but as of Friday it was still down.
“Service will resume after troubleshooting is complete,” it read.