In an age where digital platforms shape public discourse, influence elections, and impact mental health, governments around the world are tightening their grip on how tech giants operate within their borders. One such country taking a firm stance is Indonesia.
Recently, Indonesian authorities called on social media giants TikTok and Meta to take stronger action against the spread of harmful online content, highlighting rising concerns over misinformation, online abuse, and extremist material.
As Southeast Asia’s largest digital economy, Indonesia is home to millions of active users on platforms like Facebook, Instagram, and TikTok.
While these platforms have enabled creativity, commerce, and communication, they have also become breeding grounds for disinformation and toxic behavior.
In this TazaJunction.com article, we explore why Indonesia is pushing back, what it expects from tech companies, and the broader implications of this move for digital regulation across the globe.
Growing Concern Over Harmful Online Content
The Indonesian government’s concerns are not unfounded. Over the past few years, the country has faced several waves of harmful online content that have had real-world consequences. These include:
- Misinformation related to elections
- Religious and ethnic hate speech
- Terrorist recruitment and extremist ideologies
- Bullying and harassment, particularly of women and minorities
- Scams and financial fraud on social platforms
Indonesia’s Ministry of Communication and Information Technology (Kominfo) has stated that while tech companies have taken steps to moderate their platforms, those efforts are still insufficient.
The government believes that a stronger and more transparent content moderation policy is required to curb the damage caused by unregulated content.
TikTok and Meta Under Government Scrutiny
TikTok and Meta (parent company of Facebook and Instagram) are the two most influential social media platforms in Indonesia, both in terms of user base and content reach. Together, they host millions of daily interactions among Indonesians, ranging from harmless entertainment to politically sensitive debates.
However, the same platforms have also been criticized for allowing harmful online content to spread unchecked. Viral videos and posts promoting fake news, divisive rhetoric, or illegal activities can quickly gain traction, especially among younger audiences. Indonesian authorities have repeatedly flagged such content and are now calling for more decisive action.
In a recent meeting with both companies, Kominfo emphasized that allowing such content to flourish is a threat not only to social harmony but also to national security.
The ministry urged the platforms to invest in stronger AI moderation tools, faster content removal mechanisms, and closer collaboration with local law enforcement.
Why Indonesia’s Voice Matters
Indonesia is not just another market for social media giants—it is one of their largest and most influential. With a population exceeding 270 million and over 200 million internet users, Indonesia represents a massive and growing audience.
For TikTok, Indonesia ranks among its top five global markets. For Meta, the country is a key driver of engagement and revenue in Asia.
This gives Indonesia considerable leverage. The government has already enacted laws that allow it to block platforms that fail to comply with local content regulations. In the past, Kominfo has temporarily banned or threatened to block platforms such as Telegram and Netflix over content issues.
By putting pressure on TikTok and Meta regarding harmful online content, Indonesia is signaling that platform compliance is not optional. If companies want to continue operating in this digital-first nation, they must respect local laws and societal norms.
Regulatory Landscape: Indonesia’s Legal Framework
Indonesia has already laid the foundation for regulating online content through several legal instruments. The most prominent among them is the Electronic Information and Transactions Law (ITE Law), which criminalizes the distribution of false information, defamation, and hate speech online.
Additionally, the country introduced Ministerial Regulation No. 5 in 2020, which requires all digital platforms to take down prohibited content within 24 hours (or four hours in urgent cases) of notification by the government. This regulation was introduced precisely to combat the rise of harmful online content, and Indonesia now expects tech platforms to comply fully.
Non-compliance can lead to heavy fines, legal action, or even complete blocking of access to the platform in the country.
Industry Response: What Are the Platforms Doing?
Both TikTok and Meta have responded cautiously but positively to the government’s concerns. Representatives from both companies have indicated their willingness to work more closely with Indonesian regulators to ensure compliance and promote safer online environments.
Meta has been investing in local language AI moderation tools and hiring more Indonesian-speaking content moderators. TikTok, similarly, has taken steps to remove extremist content and promote media literacy through creator partnerships and in-app campaigns.
However, critics argue that these measures are often reactive rather than proactive. The speed at which harmful online content spreads requires not just quick takedown tools but also smarter content filtering algorithms and a better understanding of local cultural sensitivities.
Challenges in Moderating Harmful Online Content
Tackling harmful online content is a complex task. The volume of content uploaded every minute on platforms like TikTok and Facebook is staggering. Relying solely on automated moderation tools often leads to errors—both false positives and false negatives. Human moderators, while more nuanced, cannot scale at the same rate as content creation.
Moreover, defining what constitutes “harmful” is itself a challenge. Cultural, religious, and political contexts can greatly influence what is deemed offensive or dangerous. What is acceptable in one society may be considered inflammatory in another.
For a diverse and pluralistic country like Indonesia, moderation becomes even more delicate. Tech companies must walk a fine line between protecting freedom of expression and ensuring that their platforms are not weaponized for harm.
The Role of Digital Literacy
While the onus is on platforms to regulate harmful online content, the Indonesian government is also investing in digital literacy programs to educate users about online safety, critical thinking, and content verification. The aim is to empower citizens, especially youth, to navigate digital spaces responsibly.
Education and awareness campaigns are essential long-term strategies to combat misinformation, online radicalization, and cyberbullying. Without digitally literate users, even the most robust moderation systems will fall short.
The Global Implications
Indonesia’s call for action resonates beyond its borders. As one of Asia’s leading digital economies, the country’s stance on harmful online content could influence regulatory thinking across the region.
Neighboring nations like Malaysia, the Philippines, and Thailand are also grappling with similar issues and may follow suit with stricter demands on tech companies.
Global platforms are increasingly being held accountable not just by Western regulators but also by emerging economies that are asserting their digital sovereignty. Indonesia’s demands set a precedent: tech giants must adapt to local realities or risk losing access to major markets.
Conclusion
Indonesia’s strong message to TikTok and Meta about curbing harmful online content is part of a larger global movement demanding responsibility from tech companies. As digital platforms become more integral to everyday life, their influence—positive or negative—cannot be left unchecked.
Indonesia’s call to action reflects a growing awareness that platform neutrality is a myth. Tech companies are no longer just communication tools; they are powerful actors in shaping society, culture, and politics.
For Indonesia, protecting its citizens from misinformation, extremism, and digital abuse is a matter of national interest. If companies like Meta and TikTok are to continue thriving in one of the world’s most vibrant online markets, they must do more to ensure their platforms are safe, inclusive, and accountable.