
The intersection of technology and free expression has never been more pronounced than it is today. In a landmark announcement, Meta, the parent company of Facebook, Instagram, and Threads, unveiled sweeping changes to its content moderation policies.[1] These reforms may represent a significant shift in how global platforms manage free expression, and their implications extend across legal, regulatory, political, and social landscapes.
This article examines these changes, including the dismantling of Meta’s third-party fact-checking program, weighs the merits and demerits of the decision, and considers its impact on users worldwide who will encounter posts originating from the affected region.
BACKGROUND
In his 2019 Georgetown University address, Meta CEO Mark Zuckerberg articulated a vision of free expression as the cornerstone of societal progress.[2] He warned against the dangers of prioritizing political outcomes over individual voices, arguing that empowering individuals through free speech often disrupts entrenched power structures. This principle appears to underpin Meta’s latest policy revisions, which aim to recalibrate the balance between open dialogue and responsible content governance.[3]
Meta’s platforms, home to billions of users, have faced criticism for their complex content moderation systems, which often suppress legitimate political discourse and stifle harmless speech. The company’s acknowledgement of these shortcomings—notably, that up to 20% of enforcement actions may be erroneous—signals a willingness to embrace transparency and recalibrate.[4]
[1] Meta, ‘More Speech and Fewer Mistakes’ (7 January 2025) https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
[2] Mark Zuckerberg, ‘Mark Zuckerberg Stands for Voice and Free Expression’ (October 2019) https://about.fb.com/news/2019/10/mark-zuckerberg-stands-for-voice-and-free-expression/
[3] In a video announcement, Mark Zuckerberg stated that fact-checking would be scaled back owing to fact-checkers’ biased judgment. See the video at https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
[4] Ibid (n 1)