I live in a world where my digital presence is constantly active. My phone tracks where I go, apps record what I search and consume, and sometimes I see ads about things I casually mentioned in conversation. Over time, I’ve started to feel like I’m under continuous observation—not by individuals, but by invisible systems I barely understand. What once felt like a strange glitch in the digital age now feels ordinary. But this normalization raises a critical question: have we unconsciously accepted surveillance as a permanent part of life, without realizing what we are surrendering in return?
To understand this shift, we can turn to the ideas of the philosopher Michel Foucault. Drawing on Jeremy Bentham's prison design, the Panopticon, he described panopticism: a structure of control in which people, never certain whether they are being observed, internalize surveillance and regulate their own behavior. The concept maps easily onto our digital age. We pause before posting, reword messages, delete comments, all because of an invisible gaze we believe is always watching.
Another scholar, Giorgio Agamben, discussed the “state of exception,” where governments use emergencies to suspend citizens’ rights. The problem is that these suspensions, initially temporary, often become permanent. Examples include mass internet shutdowns, enforced biometric identification, and mandatory surveillance apps like Aarogya Setu during the pandemic. Introduced in crisis, they continue even when the crisis has passed, becoming routine tools of governance.
The Pegasus Project made this painfully clear. In 2021, an international investigation coordinated by Forbidden Stories and Amnesty International, with Indian reporting by The Wire, revealed that Pegasus spyware had been used to target the phones of journalists, activists, lawyers, and opposition leaders in India, with forensic analysis confirming infections in several cases. Around the same time, a report by the technology research firm Comparitech ranked Delhi among the world's most surveilled cities, with over 500,000 CCTV cameras. Yet despite the scale of these revelations, no sustained public debate followed. Such reports tend to vanish from the news cycle quickly or get buried under other stories. The Wire's Pegasus coverage, for instance, received limited visibility online, and stories questioning Aadhaar or surveillance practices are rarely given airtime on major TV networks. It is as if society has learned to look away.
And perhaps one of the most intimate surveillance tools is something we carry willingly every day—our Aadhaar card. Introduced to streamline welfare delivery, Aadhaar has now become essential for accessing services as basic as phone connections, banking, and university exams. In 2018, The Tribune exposed how Aadhaar data was being accessed and sold on WhatsApp by unauthorized agents. While UIDAI denied any breach, the report raised alarming concerns about data security and the absence of strong protections. These concerns were amplified by the fact that India lacked a comprehensive data protection law until 2023, leaving citizens vulnerable for over a decade after Aadhaar’s rollout.
Legal protections have not kept pace with the growth of digital surveillance. India's Digital Personal Data Protection Act, drafted by the Ministry of Electronics and Information Technology and passed by Parliament in 2023, was a long-awaited step. However, digital rights groups such as the Internet Freedom Foundation (IFF) and Access Now have raised concerns that the law grants wide exemptions to the government, particularly on grounds such as "public order," without sufficient independent oversight or enforcement mechanisms. In contrast, the European Union's General Data Protection Regulation (GDPR) requires clear consent, transparency, and accountability in the handling of personal data.
Surveillance is not limited to the government. Many private apps, especially in sectors like finance, shopping, and social media, collect large amounts of user data. Companies use this data to personalize ads, recommend products, or influence user behavior. Harvard scholar Shoshana Zuboff calls this model “surveillance capitalism,” where personal data is treated as a commercial asset. In India, where smartphone use is rising and digital literacy remains uneven, many people may not fully understand what permissions they are giving or how their data is being used.
This environment slowly changes how people behave. Some students may avoid posting political opinions. Others may stay silent in group chats, fearing their messages could be misinterpreted. These are not always the result of direct censorship—they often come from an internal sense of caution. Over time, this can limit free expression and weaken democratic participation.
Surveillance doesn’t always arrive loudly; more often, it enters silently—slipping into everyday infrastructure until it becomes almost invisible. Its consequences, however, are far from silent. During the 2019 CAA-NRC protests, CCTV surveillance was deployed heavily in cities like Lucknow. Law enforcement agencies used this footage to identify and detain individuals allegedly involved in acts of violence—some of whom were later proven innocent.
In 2023, a protester in Tamil Nadu was tracked down days after taking part in a peaceful demonstration, identified through interlinked footage shared between surveillance units across different states. The expansion of such tools has been swift. A 2022 Surfshark report noted that India ranked second globally for internet shutdowns. Facial recognition technology has been deployed at more than 124 railway stations and 30 airports across the country, often with little or no public awareness or consent. These systems not only monitor movement but also create digital trails that can be archived, cross-referenced, and interpreted in ways that shape perceptions of innocence or guilt.
The implications are serious. Surveillance is no longer about safety alone—it is about control. When people begin to anticipate being watched, they also begin to change how they speak, act, and express themselves. Political conversations are avoided. Social media activity is filtered. Group chats grow quieter. Even jokes become risky. The presence of surveillance—even without direct coercion—produces a chilling effect. It fences in curiosity, mutes dissent, and creates a culture where silence feels safer than expression. Over time, this invisible pressure alters the fabric of democracy itself. We are not just being watched; we are learning to watch ourselves, shrink ourselves, and erase parts of who we are to stay within the boundaries of what feels “acceptable.”
This is not to say that all surveillance is inherently bad. It can support law enforcement, help in emergencies, and improve service delivery. But problems arise when surveillance becomes widespread, unchecked, and normalized—when accountability is absent and legal frameworks are vague. Whether it is government-controlled biometric databases or tech companies mining personal data for profit, these systems influence what we do, how we speak, and what we believe is safe to express. And while we certainly need better laws, transparency, and awareness, the bigger truth remains: we are already living under surveillance. The question now is not whether surveillance will grow—it already has—but whether we will recognize its reach before it quietly reshapes who we are and how free we feel.
Author is a postgraduate student in the Department of Political Science, University of Kashmir