Indonesia's latest attempt to regulate disinformation and foreign propaganda risks further weakening a democracy already in decline.
Indonesia is once again attempting to regulate disinformation and foreign propaganda at a moment when its democratic institutions are already under strain. Although the proposed regulation is not listed in the current National Legislation Program or the 2026–2029 mid-term legislative agenda, an academic background paper for the bill circulated in early January, and the Coordinating Minister for Law, Human Rights, Immigration, and Corrections, Yusril Ihza Mahendra, has publicly confirmed that the initiative will proceed.
Efforts to address disinformation and foreign propaganda can take multiple forms, including platform interventions, algorithmic accountability, strengthening media independence and business models, public literacy, international cooperation, and legal regulation. However, research across media studies, journalism, law, and international studies indicates that regulatory approaches face substantial challenges in implementation and enforcement, particularly in countries like Indonesia, where information controls have been selectively applied. In such settings, regulation is unlikely to curb disinformation effectively unless it is accompanied by stronger platform transparency, algorithmic accountability, and public education that promotes critical engagement rather than compliance.
The first challenge to implementing such a law lies in the structural realities of the digital ecosystem. Efforts to counter disinformation and propaganda remain fundamentally constrained when platform and corporate responses prioritise surface-level interventions rather than addressing engagement-driven algorithms that systematically amplify polarising and misleading content. These challenges are further compounded by technical constraints such as encrypted platforms, anonymity tools, and platform architectures that limit traceability, as well as by rapid technological developments that expand the creative repertoire of propaganda itself.
Regulatory responses have focused on the production, distribution, and consumption of problematic content through automated moderation, content flagging, and virality controls. Such measures remain limited, however, as long as platform business models reward engagement over information integrity and broader reforms are resisted in the name of innovation, connectivity, and competitiveness.
The second challenge concerns the documented tendency of such laws to be used to pressure journalists, critics, and civil society under vague “disinformation” provisions. Between 2010 and 2022, at least 80 countries introduced new legislation or amended existing laws to address the spread of misinformation (Bradshaw, Lim, & Haue, 2025). By 2023, that count had risen to 151 countries (Chesterman, 2025), with the pace accelerating after 2016. Across regime types, criminal sanctions, particularly imprisonment and fines, have been the dominant enforcement tools, reflecting a largely security-oriented approach.
In liberal democracies, including Canada and Australia, these penalties have generally been applied in narrowly defined cases, such as interference in democratic processes by foreign actors, and, in some instances, to platform executives. By contrast, in more authoritarian contexts, criminal sanctions are disproportionately used against civil society actors and journalists for online expression, often with limited transparency or due process, as illustrated by cases in Iran, China, Vietnam, and Myanmar, where prosecutions occur under broad defamation provisions or within opaque judicial settings.
Even in the absence of a dedicated disinformation or propaganda law, Indonesia’s recent experience illustrates these risks. During nationwide protests in August 2025, the government responded not only with reported instances of excessive force and mass arrests but also by tightening control over public information and targeting dissenting voices. Civil society activists were detained without a clear legal status. At the same time, authorities restricted digital platforms, including TikTok live-streaming, and were criticised for tolerating coordinated online harassment and disinformation campaigns against civil society groups. Overall, the emphasis on punitive sanctions against individuals, rather than structural measures such as platform accountability or user protection, underscores the dominance of securitised responses to misinformation over rights-based or transparency-focused approaches.
Since the first term of Joko Widodo’s administration in 2014, the Indonesian government has been fighting fake news and hoaxes, although these efforts have often served to control information flows (Jalli & Idris, 2023). Most responses to disinformation have been selective, with enforcement disproportionately directed at websites and social media accounts critical of official narratives. At the same time, pro-government sources have largely been allowed to operate, even when they have been documented as disseminating misleading or false information (Idris & Ariyanti, 2025).
In the Jokowi era, this selective, “cherry-picking” stance was accompanied by Indonesia’s digital literacy programme, which prioritises obedience to state ideology and moral conformity over critical engagement with the political interests behind disinformation. By repeatedly invoking the Information and Electronic Transactions Law (UU ITE), a statute widely criticised for suppressing dissent, the programme normalises legal intimidation as responsible digital behaviour. Delivered through one-way lectures and influencer-driven narratives, it ultimately reinforces a disciplinary model of digital governance that weakens, rather than strengthens, democratic resilience.
Beyond these technical challenges, the third and most worrying issue concerns how key terms are defined and who will apply the law. When the state assumes authority to determine what is false, misleading, or harmful without independent adjudication, the line between counter-disinformation efforts and politically motivated control becomes dangerously thin. The academic background paper cites the new Criminal Code, which sanctions individuals who disseminate false information or deceptive statements that result in public disorder. These provisions reinstate the false-information offences of the old Criminal Code, which the Indonesian Constitutional Court had struck down.
Furthermore, the definition of “false information” in criminal law has come under scrutiny for its loose criteria, creating uncertainty about what counts as “false”. Because the provision speaks of “false information” rather than “disinformation”, the Bill appears to paper over the absence of legislation on disinformation itself, which remains narrowly defined. Disinformation that triggers social unrest is treated as a non-military threat to national stability: national security, in this framing, means not only the absence of armed conflict but also the maintenance of public order free from horizontal conflict arising from information manipulation. If the bill fails to distinguish clearly between public order and national security, as required under the international human rights instruments Indonesia has ratified, the conflation will create wide discretionary space for authorities, legitimising excessively coercive responses to critical but non-threatening speech.
Amid declining digital rights, Indonesia has been elected President of the United Nations Human Rights Council, a body whose role is to promote and protect human rights and to facilitate equal dialogue among member states. Indonesia should therefore demonstrate its commitment to these principles by ensuring that new laws do not become tools of oppression and by enabling genuine, meaningful participation from diverse stakeholders in the legislative process.
Ika Idris is Co-Director of the Monash Data & Democracy Research Hub and a specialist in social media analytics. Since 2019, Ika has trained officials across major Indonesian government agencies on strategic public communication and social media analysis.
Dr Eka Nugraha Putra is a research fellow at the Centre for Trusted Internet and Community at the National University of Singapore. His research interests include criminal law, human rights and cyber law.
This article is published under a Creative Commons License and may be republished with attribution.