TikTok will prompt users when videos are flagged as misleading

Sara Fischer
WASHINGTON DC: TikTok will begin asking users to seek credible information about a topic if they try to share a video that carries a misleading information label, the company announced Wednesday.
Why it matters: It’s one of the most dramatic steps TikTok has taken to reduce the spread of misinformation on its platform.
How it works: TikTok partners with fact-checkers to help assess the accuracy of content. If fact checks confirm content to be false, the content gets removed. If the fact checks are inconclusive, TikTok may prompt users to reconsider the video before sharing it.
The viewer will be notified via a banner on the video if the reviewed content can’t be conclusively validated.
If the user tries to share the flagged video, they’ll see a prompt reminding them that the video has been flagged as unverified content. The feature, which was designed and tested with Irrational Labs — a behavioral science lab — will roll out globally over the coming weeks, starting Thursday in the U.S. and Canada.
The company will also disincentivize creators from uploading this kind of content by making it harder for false content to go viral.
If a video's content can't be verified by fact-checkers, its creator will be notified that the video was flagged as unsubstantiated content, potentially making it ineligible to appear in users' main "For You" feed.
TikTok is hoping that this extra friction will get users to consider their next move before choosing to “cancel” or “share anyway.”
Twitter rolled out a similar feature ahead of the election, prompting users to add a thought, or a "quote tweet," when they attempted to share someone else's tweet. The platform said the prompt reduced engagement but was effective in slowing the spread of misinformation.
TikTok said the feature decreased the share rate of flagged videos by 24% during testing. Likes on unsubstantiated content also decreased by 7%.
The big picture: TikTok has taken a series of incremental steps to help reduce the spread of misinformation, hate speech and other bad content on its platform over the past year.
Most notably, the company significantly tightened its misinformation rules before last year's election by updating its policies on misleading content and partnering with fact-checkers to help validate content. (Those fact-checking partnerships will help drive the new update.)
Yes, but: The platform, which was sidetracked by the threat of a potential Trump ban last year, hasn't implemented as many changes to address misinformation as some of its higher-profile rivals, like Facebook and Twitter.
The policy could open TikTok up to the same criticisms of censorship that some of its peers have faced in the past few months as they have tried to tighten their misinformation rules.