Civic spaces as the Panopticon, AI as the invisible surveillant

The origin of the panopticon can be traced back to the English philosopher Jeremy Bentham, who, in the late 18th century, proposed a centralised arrangement as a principle in the design of prisons, factories, schools and hospitals. It allowed a single watchman to observe all the inmates without the inmates being able to tell whether they were being watched at any given moment. In Discipline and Punish (1975), Michel Foucault – the French philosopher, historian and social theorist – developed the concept of the panopticon further, using it as a metaphor for the practice of mass surveillance through which disciplinary societies control and exercise asymmetrical power over their citizens. He commented, ‘…the major effect of the Panopticon is to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power’.

Elevation, section and plan of Jeremy Bentham’s Panopticon penitentiary, drawn by Willey Reveley, 1791. Source: J. Bentham, Panopticon, Works, Vol. IV, No. 17

‘He is seen, but he does not see; he is an object of information, never a subject in communication.’

AI surveillance technology is spreading at a faster rate to a wider range of countries than experts have commonly understood.

Cut to 2020. Digital and data surveillance is being enforced en masse by governments the world over, seemingly in the interest of protecting national security and thwarting terrorism, crime and social unrest. The issue came to sharp prominence in 2013, when Edward Snowden leaked thousands of classified intelligence files, including Australian, British and Canadian documents, that revealed the scale of the global surveillance carried out by the members of the UKUSA alliance. The very fact that Snowden was charged with espionage and theft of government property is indicative of governments’ belief in their total ownership of surveillance data. Since then, surveillance has only intensified and become all-pervasive. A report by the Carnegie Endowment for International Peace states that ‘at least seventy-five out of 176 countries globally are actively using AI technologies for surveillance purposes including smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries)’.

Explore: an interactive map of how AI surveillance technology is spreading rapidly around the globe.


Our civic spaces are threatened like never before.

Article 19’s website defines civic space as ‘the place where individuals realise their rights. It is the freedom to speak and to access the means to do so: to receive information, participate in public decision-making, organise, associate, and assemble.’ It adds that ‘a robust and protected civic space forms the cornerstone of accountable, responsive, democratic governance and stable open societies.’ These public spaces can be physical – parks, streets and squares – as well as digital, including social media platforms, messaging apps and the internet. Let us return to Foucault’s expansion of the concept of the Panopticon to understand how surveillance is carried out in civic spaces.

“They are like so many cages, so many small theatres, in which each actor is alone, perfectly individualised and constantly visible. The panoptic mechanism arranges spatial unities that make it possible to see constantly and to recognize immediately. In short, it reverses the principle of the dungeon; or rather of its three functions — to enclose, to deprive of light and to hide — it preserves only the first and eliminates the other two. Full lighting and the eye of a supervisor capture better than darkness, which ultimately protected. Visibility is a trap” – Foucault, 1975

While access to civic spaces creates an illusion of freedom of speech and association, our activities are constantly monitored. The technologies deployed for these purposes include mass surveillance, IMSI catchers, remote hacking, mobile phone extraction, social media monitoring, facial recognition cameras and predictive policing. The increasing sophistication of surveillance technologies and AI algorithms has enabled police and intelligence agencies to become simultaneously invisible and omnipresent as they track social media posts, web search histories and physical movements without the knowledge or consent of the people involved.

Read more: How London became a test case for using facial recognition in democracies

Violation of the rights to privacy, expression and association as a form of biopower.

These surveillance mechanisms not only violate the fundamental right to privacy by thwarting people’s freedom of expression, dialogue and redressal; they have other far-reaching implications as well. Increased awareness of these technologies breeds fear and apprehension about their potential ramifications, and people begin to consciously self-censor their words, actions and associations. This is a manifestation of biopower, where governments employ ‘an explosion of numerous and diverse techniques for achieving the subjugations of bodies and the control of populations’ (Foucault 1976). Freedom House’s research highlights a ‘sharp global increase in the abuse of civil liberties and shrinking online space for civic activism’, with 47 of the 65 countries assessed in its report having arrested ‘users for political, social, or religious speech’. Minorities are at even greater risk of persecution or harassment. An article in The New York Times revealed how China is investing billions of dollars every year in Xinjiang, home to many Muslim ethnic groups, to test the deployment and efficacy of increasingly intrusive policing systems.

Software developed by China’s SenseTime scans a city crosswalk and tabulates pedestrian and car information. PHOTO: THE WALL STREET JOURNAL

Read more: Resistance to Surveillance – Why protests are becoming increasingly faceless

Advanced democracies are at risk too. 

Mass surveillance defeats the purpose of civic spaces as avenues to raise and rally around important public issues. We must not forget that democracies pride themselves on the right to equal participation in political and public affairs. Carnegie Endowment research highlights that, contrary to popular perception, ‘51 percent of advanced democracies and 41 percent of electoral democracies/illiberal democracies have adopted AI surveillance systems’. The Guardian reported that the Indian government has been using automated facial recognition systems to identify and exclude protesters rallying against the redefinition of Indian identity. The Trump administration directed technology companies to help employ artificial intelligence for the ‘extreme vetting’ of prospective immigrants as potential terrorist threats, but later dropped the plan in the face of widespread criticism.

Read: How AI systems could threaten democracy

Greater transparency and adequate legal safeguards to protect access to and participation in Civic Spaces.

Policy advocacy organisations such as Article 19, Privacy International and the Centre for the Internet and Society have been vociferous in demanding transparency, accountability, legal scrutiny, effective remedy measures and regular audits of national security and surveillance apparatuses. With the development of new technologies, including advanced biometrics and 5G mobile networks, it is imperative to put the necessary filters, checks and scrutiny on the central tower of the Panopticon. On the impact on civic spaces specifically, ARTICLE 19 and its partners are emphasising that ‘the right to participate publicly in decision-making, engage in open debate, criticise, protest, and dissent, in physical and online space, are widely recognised in legislation, policy, and practice’. There is an urgent need to address the enhanced risk and violation of rights that result from governments having ‘direct and unrestricted access’ to the data of citizens and organisations.

Watch: John Oliver on Government Surveillance


DeepFakes: From Porn to Politics

An insidious fake story. An anti-migrant narrative. A protest followed by foreign political interference in Germany. Yes, these are all consequences of fake news.

In the cold month of January 2016, a fake news story about a 13-year-old Russian-German girl went viral on digital media. The girl had falsely claimed to have been raped by two migrant refugees. Before the claim could be investigated, a news channel rushed the story onto its social media channel, where it went viral and garnered over a million views. The virality sparked protests outside Angela Merkel’s residence, where more than 500 people, including far-right and anti-Islamic groups, assembled. Russia was additionally accused of politicising the issue and trying to “erode public trust in Ms Merkel” by capitalising on existing anti-immigrant sentiment in Germany (Rinke and Carrel 2016). While this is one example of disinformation and fake news, a simple Google search generates millions of pages on the topic. According to Farkas and Schou (2018, p. 300), “fake news becomes part of a much larger hegemonic struggle to define the shape, purpose and modalities of contemporary politics.” Politicians around the world have been notorious for weaponising fake news in two main ways: to invalidate news reports that challenge their views (Wardle and Derakhshan 2017) and to use it as propaganda to manipulate public opinion (Goldhill 2019).

Fake news can take many different forms, from misleading headlines to fraudulent and doctored images, audio and video. The latter, known as the deepfake, has taken the Internet by storm. ‘Deepfake’, a blend of ‘deep learning’ and ‘fake’, is a technology that uses artificial intelligence to seamlessly morph real faces and voices into video and audio (Mirsky and Lee 2020). Although digitally altered images and videos are not new (remember the scene from Forrest Gump when Tom Hanks met President Kennedy?), deepfake technology is only improving with time.

So how does this technology work? Deepfakes are built with deep neural networks, typically arranged as a generative adversarial network (GAN). One network, the generator, is trained on footage of the target and produces candidate images; a second network, the discriminator, compares those candidates against real images of the targeted person and rejects the ones it can tell apart. The two are trained against each other until the generator produces a seamless recreation of the person. Basically, robots doing their thing.
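The adversarial loop can be sketched in miniature. The snippet below is a toy illustration only, not a face-swapping system: the ‘real data’ is a 1-D Gaussian rather than images, the generator is a single affine map of noise, the discriminator is a logistic regression, and every name, size and learning rate is an assumption made for the sake of the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    # Stand-in for "real footage": samples from a fixed distribution.
    return rng.normal(loc=4.0, scale=1.25, size=n)

# Generator: fake = g_w * noise + g_b   (starts far from the real data)
g_w, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_a * x + d_c), i.e. P(x is real)
d_a, d_c = 0.0, 0.0

lr, batch = 0.02, 64
for step in range(500):
    # --- discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    real = real_samples(batch)
    p_real = sigmoid(d_a * real + d_c)
    p_fake = sigmoid(d_a * fake + d_c)
    d_a += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_c += lr * np.mean((1 - p_real) - p_fake)

    # --- generator update: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    p_fake = sigmoid(d_a * fake + d_c)
    grad_out = (1 - p_fake) * d_a      # gradient of log D(fake) w.r.t. fake
    g_w += lr * np.mean(grad_out * z)
    g_b += lr * np.mean(grad_out)

samples = g_w * rng.normal(size=1000) + g_b
print(f"generated mean after training: {samples.mean():.2f} (real mean is 4.0)")
```

Even at this scale the division of labour is the one described above: the discriminator learns to score real samples higher than fakes, and the generator’s output drifts towards the real distribution in order to fool it.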

How did it all start?

Let’s go back to when deepfakes (somewhat) got democratised. Towards the end of 2017, a Reddit user named ‘deepfakes’ released a collection of fake pornographic videos featuring the faces of female celebrities, made with a machine-learning algorithm built on open-source libraries. He then created a subreddit, r/deepfakes, which accumulated over 15,000 members before Reddit deleted it.

Another Reddit user built an app called FakeApp that allowed anyone without a computer science background to alter digital videos in minutes. Dozens of Reddit users adopted the technology. While some stuck to fake porn videos, others experimented with politics: one Redditor, for instance, swapped former Argentine President Mauricio Macri’s face with Adolf Hitler’s. And to demonstrate the far-reaching and alarming consequences of the technology, BuzzFeed released a deepfake of President Obama cautioning people about deepfakes (yup, you read that right).

As predicted by several academic scholars, including Chesney and Citron (2019, p. 1777), deepfake technology can be used by hegemonic actors such as politicians and corporate elites to “skew information and manipulate public belief” or to sabotage the reputation of competing political candidates.

One of the first uses of a deepfake video by a political party occurred in 2018, when sp.a, the social democratic party in Belgium, created and deployed a fake video of President Donald Trump to promote a climate change petition. The makers used English audio with Dutch subtitles, and although the video reveals itself to be fake towards the end, the creators failed to subtitle that part in Dutch.

A more recent example of deepfake technology used by a political party comes from India. Right before the 2020 Delhi state elections, a video featuring Manoj Tiwari of the Bharatiya Janata Party (BJP) went viral on WhatsApp. As part of a smear campaign, Tiwari was seen attacking the opposition leader and persuading citizens to vote for the BJP. Although the original video was of Tiwari himself, the fake version used different audio, and to increase its reach the content was produced in two different languages. The heavily altered video was disseminated on WhatsApp and reached 15 million people.

It is no surprise that, as humans, we tend to fall for sensationalism, lies and misinformation. As the saying often attributed to Winston Churchill goes, “A lie gets halfway around the world before the truth has a chance to get its pants on.” Fake news can infiltrate dense social networks and spread like wildfire (Meyer 2018). According to a Pew Research Center survey in late 2016, more than 60% of people could not tell the difference between a factual and a fake story.

One of the main concerns about deepfake technology is that visual imagery can be far more persuasive than text (Vaccari and Chadwick 2020). From manipulating public opinion and discourse to eroding trust in news, deepfakes can have plenty of disastrous impacts. However, there may be some hope: organisations such as Facebook have pledged to remove media that superimpose objects onto a video, and researchers are using AI to detect deepfake videos.

Well for now, I decided to see what the fuss was all about. I’ve always wanted to know what it feels like to be the Head of State. Thanks to Reface App, I met Putin and realised, I’m better off being an ordinary citizen in a complicated democracy.

Warning, the result is a bit disturbing: *viewer discretion is advised*

Swipe to see before (original) and after(fake)

Fun Facts:

  • The term ‘deepfake’, now officially part of the Oxford English Dictionary, was coined by the Redditor ‘deepfakes’ in 2017.
  • DARPA, the US military’s research division, spent $68 million on a project to spot deepfake videos.
  • Before deciding to ban deepfake videos, Facebook had refused to do so in an attempt to let people make their own informed decisions.

More on deepfake:


Chesney, B. and Citron, D., 2019. Deep fakes: a looming challenge for privacy, democracy, and national security. California Law Review, 107, p.1753.

Farkas, J. and Schou, J., 2018. Fake news as a floating signifier: Hegemony, antagonism and the politics of falsehood. Javnost – The Public, 25(3), pp.298-314.

Goldhill, O., 2019. Politicians Are Embracing Disinformation In The UK Election. [online] Quartz. Available at: <https://qz.com/1766968/uk-election-politicians-embrace-fake-news-disinformation/>

Meyer, R., 2018. Huge MIT Study of ‘Fake News’: Falsehoods Win on Twitter. [online] The Atlantic. Available at: <https://www.theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104/>

Mirsky, Y. and Lee, W., 2020. The Creation and Detection of Deepfakes: A Survey. arXiv preprint arXiv:2004.11138.

Rinke, A. and Carrel, P., 2016. How Russia Is Using ‘Rape’ Of 13-Year-Old ‘Lisa F.’ To Weaken German Leadership. [online] The Sydney Morning Herald. Available at: <https://www.smh.com.au/world/how-russia-is-using-rape-of-13yearold-lisa-f-to-weaken-german-leadership-20160202-gmj4xx.html>

Wardle, C. and Derakhshan, H., 2017. Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe report, 27.

On Anonymous and Anonymity


Hacktivist collective Anonymous gained notoriety in the late 2000s and early 2010s through a number of high-profile and highly effective campaigns against targets as varied as the Church of Scientology, Monsanto and the Bay Area Rapid Transit service, or in support of movements and organisations including the Arab Spring uprisings, Occupy and Wikileaks. Whilst the collective was too diverse to be tied to a political ideology, their actions were characterised by a radical emphasis on collectivism over individualism, and they came to be seen as broadly allied with progressive causes and social justice. 

Although Anonymous themselves are no longer active, their legacy can be seen in the campaigns of other anonymous hackers such as Phineas Fisher, who often try to force greater transparency on powerful corporate and state actors and to hold them accountable for their abuses of power. Recent targets have included oil companies, offshore banks and the Chilean military.

Perhaps the greatest legacy of Anonymous, however, is in their role as champions of anonymity itself. They defended the right to anonymity not only as an obvious necessity for their activities, but also, as Biella Coleman states, so that citizens can “be the guardians of their own individuality, or determine for themselves how and when it is reduced into data packets”. It is a form of defiance against the mass surveillance and lack of transparency of those in power.

“While Anonymous has not put forward any programmatic plan to topple institutions or change unjust laws, it has made evading them seem easy and desirable. To those donning the Guy Fawkes mask associated with Anonymous, this—and not the commercialized, “transparent” social networking of Facebook—is the promise of the Internet, and it entails trading individualism for collectivism.”

– Biella Coleman, ‘Our Weirdness Is Free’, Triple Canopy

Threats to Anonymity

The right to anonymity, however, is under attack. Tools for protecting anonymity, such as VPNs or the Tor browser, are often portrayed as synonymous with criminal behaviours like hacking, drug-dealing or fraud. Meanwhile, recent examples in the USA, Brazil and Hong Kong show that governments around the world are employing ever more sophisticated and aggressive techniques to expose and punish whistleblowers and activists. Seeking anonymity is an increasingly radical act.

The recent Hong Kong protests have seen a fierce battle between protesters and government forces over the right to anonymity. Protesters have been targeted with facial recognition technology and, if recognised, face 10 years in prison for ‘illegal assembly’ as well as a series of extrajudicial attacks including doxxing, online harassment and physical violence. In their defence, protesters used masks (eventually banned), balaclavas, umbrellas and other items to obscure their faces and remain anonymous.

Protesters in Hong Kong protecting themselves and their anonymity, August 2019. Source: Anthony Kwan/Getty Images

One high profile case centred around a woman who was wearing a balaclava, gas mask, goggles and a helmet labelled “don’t aim at protesters’ heads” in order to protect herself both physically and from the threat of identification. She was hospitalised and blinded by a ‘bean-bag round’ that police fired through her goggles. The police then attempted to de-anonymise her by searching through health records before they were stopped by an injunction. 

These types of tactics are often held up in the West as examples of tyrannical rule, but Western democracies are also engaged in this assault on anonymity for those seeking justice.

In 2019, the USA indicted WikiLeaks’ Julian Assange on espionage charges carrying a maximum sentence of 175 years in prison. This signals a new tactic in the battle against whistleblowers and transparency activists: targeting not only the whistleblowers themselves but also the publishers of leaked material with draconian sentences.

Glenn Greenwald was charged with orchestrating hacking operations. Source: Getty Images

A similar approach was taken by the Brazilian government in an indictment against The Intercept’s Glenn Greenwald for publishing material that exposed political corruption within the government’s anti-corruption ‘Lava Jato’ operation. While the charges against Greenwald have been suspended by a judge, both of these actions are designed to have a chilling effect on anonymous whistleblowing, a vital tool in subjecting governments to public scrutiny. Moreover, as Christian Christensen points out, these actions are not limited to obvious would-be authoritarian presidents like Trump and Bolsonaro, but are indicative of a larger trend.

“We need to disabuse ourselves of the notion that—at least in terms of WikiLeaks and Assange—what we are seeing in 2019 from the Trump administration is significantly different from what we saw under Obama. It is well worth repeating the fact that, from 2009 to 2013, Obama prosecuted more people as whistleblowers under the 1917 Espionage Act than all former presidents combined. Let’s also remember that Chelsea Manning was tried and convicted under Obama, and spent huge periods of time in solitary confinement (even though her sentence was later commuted).”

– Christian Christensen, ‘Assange, Espionage, and the Cult of Personality’

The Watchful Eye

These examples are linked to the increasingly pervasive forms of corporate and governmental surveillance that we are all subjected to. Greenwald was also the journalist who helped publish the Snowden leaks that exposed the systematic mass surveillance operations of the US government. Facial recognition technology and CCTV are not only used to track people in Hong Kong, but in countries across the world. We are all being monitored by our smartphones and other devices and both our online and offline data is being tracked by an array of private and state actors, a process that Couldry and Mejias refer to as ‘data colonialism’, and that Shoshana Zuboff calls ‘surveillance capitalism’. These mechanisms undermine choice, democracy and freedom, and anonymity is one of the last defences we have against them.  

Identity and Representation in Social Media Emojis

Since their inception, virtual social networks have gradually evolved, changing the social fabric in terms of how we interact with our surroundings and how we perceive and present our own identities. According to José van Dijck, the role of social networks has shifted over time from tools for self-expression and connectivity with family and friends to tools for representation and self-promotion, meaning that our social media identities are formed through complex layers of life experiences and self-projection. The role that social media plays today thus rests on two main pillars: first, reflecting our identities in a way that represents us and shapes what the public knows about us; and second, influencing those identities to evolve or be reconstructed. In this article, I will focus on one modern mode of communication that forms a big part of the language of social media. Emojis!

Are you that person who takes time to consciously choose the perfect emoji to represent their feelings or identity? Have you ever felt that you can’t find a proper emoji for your current state? Well, you are not alone. We all know that emojis are just small icons; however, these little icons are used by 92% of the online population. In her TEDx talk, Tracey Pickett, the founder and CEO of Eboticon – a media design company with a mission to create dynamic and culturally relevant emojis for niche social groups – argues that emojis are creating new brain patterns within us, similar to the patterns we already have for tone of voice, body gestures and words.

Source: https://imgur.com/gallery/cnplv

The History of Emojis

Emojis were initially created in Japan as a form of communication that adds emotional nuance to plain text. Their history goes back more than 20 years: according to Emojipedia, SoftBank, a Japanese carrier, brought the first set of 90 emoji characters to life in 1997. Over the years, emojis gained enormous popularity in Japan, and corporations like Apple and Google saw an opportunity in taking emoji culture outside the country. From 2007 until 2010, the Unicode Consortium was approached by different teams and initiatives seeking to include emojis in its system. The Unicode Consortium is a non-profit group, founded in the early 1990s, that maintains text-encoding standards so that text sent from one country or computer to another displays consistently. In 2010, Unicode accepted a proposal by two Apple engineers to adopt 625 new emoji characters, making emojis officially available everywhere. In 2015, the word ‘emoji’ was chosen as Oxford Dictionaries’ Word of the Year, and as of March 2020 there are 3,304 emojis in the Unicode Standard.

SoftBank’s 1997 set of emojis
Source: emojipedia.org

Skin tone emojis: a smart move towards identity representation or another mode of social discrimination?

“It’d be nice to see some emoji that look like me. But, at the end of the day, none of these really do.”

– Paige Tutt, a writer in the Washington Post

After years of complaints about the lack of black and brown representation, the Unicode Consortium introduced its range of skin tone emojis in April 2015. The range added five skin tones to the standard yellow one. Five years on from their introduction, they remain the same five tones, unchanged. Even though emojis of colour were created to represent people from different communities, they have also produced a discourse of ‘racialised communication’: not that this is necessarily a bad thing, but choosing an emoji to complement a sentence now also means making a choice about how you identify and present yourself. It would, of course, be naïve to assume that the internet can be a raceless space. In my opinion, though, what went wrong with these emojis is that instead of designing emojis that genuinely represent each race, emoji design companies took the easy way out by recolouring the white/yellow emojis in different shades, ignoring the physical features of each race and offending the very consumers they set out to please.
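Technically, these tones are not separate emojis at all. Unicode defines five standalone modifier codepoints (U+1F3FB to U+1F3FF, based on the dermatological Fitzpatrick scale) that follow a base emoji, and it is the renderer that composes the two-codepoint sequence into a single tinted glyph. A minimal Python sketch of the composition (variable names are my own):

```python
# Five Fitzpatrick skin-tone modifiers defined by Unicode (U+1F3FB..U+1F3FF).
BASE = "\U0001F44D"  # THUMBS UP SIGN

FITZPATRICK = {
    "light": "\U0001F3FB",         # EMOJI MODIFIER FITZPATRICK TYPE-1-2
    "medium-light": "\U0001F3FC",  # TYPE-3
    "medium": "\U0001F3FD",        # TYPE-4
    "medium-dark": "\U0001F3FE",   # TYPE-5
    "dark": "\U0001F3FF",          # TYPE-6
}

def with_tone(base: str, tone: str) -> str:
    """Append the named skin-tone modifier to a base emoji."""
    return base + FITZPATRICK[tone]

for name in FITZPATRICK:
    toned = with_tone(BASE, name)
    # Two codepoints, one rendered glyph (on platforms that support it).
    print(name, toned, [hex(ord(ch)) for ch in toned])
```

This is also why the tones look different across platforms: Unicode only standardises the codepoint sequence, while each vendor draws its own glyph for the combined pair.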

Differences in tones for five platforms
Source: Robertson et al. 2020

In a 2016 article in The Atlantic, Andrew McGill explored why people with lighter skin tend to use emoji shades that are not particularly close to their own skin colour. To quantify his research, he used data from Twitter’s streaming API to build a dataset of 18,000 tweets from the United States. He found that 52 per cent of the users in his dataset used the three darkest skin tones, while 32 per cent used the second-lightest and only 19 per cent the lightest. McGill suggests that lighter-skinned people tend to use darker tones because they feel that “it is awkward to use an affirmatively white emoji.” This might be true; however, in my opinion, the complexity of the issue lies in the limits of the representation: you are given a few options to choose from, none of which represents you. Moreover, according to Robertson and his team, the same emoji is rendered differently on different platforms (Robertson et al., 2020), which, in my opinion, can lead to varying perceptions of it. In other words, each individual will use an emoji depending on the context they are in, yet we cannot deny that these emojis fail to accommodate cultural differences and appropriateness.

Emojis are for everyone, but are they representing everyone?

Extracted from: 2015 Emoji Report
Designed by: https://createdbyjoe.me/infographics

“I just wanted an emoji of me”

Rayouf Alhumedhi, the 16-year-old Saudi girl behind the hijab emoji

According to Adweek’s 2015 piece, emojis are used by 92% of the online population, and Statista estimated that in 2019 more than 700 million emojis were used every day in Facebook posts alone. This growing popularity demands sensitivity towards the representation of different ethnic and cultural identities. In 2016, Rayouf Alhumedhi, a 16-year-old Saudi girl living in Germany, made headlines in major newspapers for her correspondence with Unicode about an emoji that represents her identity. Alhumedhi recounts that she and her classmates wanted to create a WhatsApp group with emojis representing each of them; when she looked for an emoji that represented her, she found none. At that moment, she was inspired to send the Unicode Consortium a proposal to add a woman in a headscarf to the keyboard. On World Emoji Day (yes, you read that right: there is a day for emojis) in 2017, Apple released the woman-in-headscarf emoji alongside a collection of new emojis such as the breastfeeding woman.

In conclusion, it feels good to see your identity represented, especially when it is represented in the way you would like to see it. Representation does not happen forcefully or by checking boxes. Representation happens when you take the opinion of the intended beneficiaries on how they would like to see themselves represented.

Resources and further readings:

How Social Media Shapes Identity | Ulrike Schultze | TEDxSMU: https://www.youtube.com/watch?v=CSpyZor-Byk

Emoji: The Language of the Future | Tracey Pickett | TEDxGreenville: https://www.youtube.com/watch?v=Dzlek8nMrc8

How emoji replaced QWERTY as the world’s most popular keyboard | Jeremy Burge | TEDxEastEnd: https://www.youtube.com/watch?v=bsZBziJVzNA

Van Dijck, J., 2013. ‘You have one identity’: Performing the self on Facebook and LinkedIn. Media, Culture & Society, 35(2), pp.199-215.

Robertson, A., Magdy, W. and Goldwater, S., 2020. Emoji Skin Tone Modifiers: Analyzing Variation in Usage on Social Media. ACM Transactions on Social Computing, 3(2), pp.1-25.

Emogi Research Team, 2015 Emoji Report







Simpsons reaction, I don’t belong here: https://gph.is/g/Z2ng5NA

There is no emoticon for what I’m feeling GIF: https://imgur.com/gallery/cnplv

1997’s SoftBanks set of Emojis: https://emojipedia.org/softbank/1997/

Emojis are for everyone infograph: https://createdbyjoe.me/infographics

Deleuze & Data on the Loose

In the ‘Postscript on the Societies of Control’, the philosopher Gilles Deleuze identifies an apparatus of control that penetrates society, moving beyond the disciplinary societies described by Michel Foucault. For Foucault, the individual passes from one enclosed environment to another, each with its own laws: first the family; then the school; then the barracks; then the factory; from time to time the hospital; and possibly the prison. For Deleuze, however, these disciplinary models are undergoing a critical transformation. The control of individuals is no longer limited to confined spaces such as prisons, factories, schools or hospitals; it permeates common social spaces, functioning as an invasive form of surveillance achieved through sophisticated technologies.

It is more than evident that we are currently living in such a society of control. The signs are all around us, from traditional CCTV cameras and biometric scanning to the smart cards used for public transportation. These methods are so prevalent that we often accept them as a necessity for the way our collective society functions. Such surveillance mechanisms, Deleuze notes, are never perceived as controlling but rather as exercises of our own freedom.

On the state front, no other country has matched the magnitude of surveillance that China has implemented together with its home-grown corporations. Social credit score systems working in tandem with facial recognition cameras have become integral to the way citizens shop, travel, borrow and even make friends.

Similarly, the scholar Shoshana Zuboff, in her book ‘The Age of Surveillance Capitalism’, describes Silicon Valley giants such as Facebook and Google offering their services to billions of people without charging a fee. Instead, users pay by handing over intimate personal data. Tech corporations use this data to analyse, segment and make predictions about a user’s interests, personality and, ultimately, behaviour. They then bombard us with tailored advertisements and, more sinisterly, even nudge our behaviour towards outcomes profitable for their companies. The insights in the book provide an explicit account of what former Google CEO Eric Schmidt meant when he famously said, “We know where you are, we know where you’ve been. We can more or less know what you’re thinking about”. While the initial beneficiaries of this data were advertisers, governments soon noticed the massive data surpluses these companies possessed and have on multiple occasions attempted to obtain this information.

Little by little, we are seeing the development of a far-reaching mode of surveillance and control that creates a world in which personal privacy is almost non-existent, with information about our whereabouts, purchases, interactions and behaviour endlessly collected by corporations and/or governments. It should be noted that Silicon Valley has attained a degree of data collection from its billions of users that would not have been acceptable had democratic governments attempted it directly.

Or would it? 

Since the start of the year, the ongoing Covid-19 pandemic has turned the world upside down. Across the globe, public health departments are striving to curtail the spread of the virus. From drones to contact-tracing apps and big-data analysis, government agencies are employing several technologies to monitor and control the situation as it progresses. As the examples below show, some of these measures lie in the grey area between what is considered ethical and what constitutes an intrusion into personal privacy, while others outright violate individual freedom and privacy. Yet they are still being undertaken in the name of a public health crisis.

In China, health apps are playing an important role in the government’s race to prevent a second wave of coronavirus infections. In addition to conventional precautions, citizens are required to scan QR codes before boarding a bus, train or flight, and often even to enter their own apartment complexes. Authorities can assess a person’s level of risk based on the different colours shown in the app; to track movements, the government has in some cases even installed CCTV cameras inside houses. The Government of India has launched a smartphone app called Aarogya Setu and made its installation mandatory for public- and private-sector workers. In Hong Kong, new arrivals are required to wear a tracking bracelet. Israel has mobilised its intelligence service, Shin Bet, and teamed up with the controversial cyber-surveillance firm NSO Group to track people who may be infected. In South Korea, officials rummage through everything from credit card records to taxi receipts in order to trace those infected. Country after country is implementing invasive surveillance methods to track and curtail the spread of the virus. Even countries in the European Union, known for their moral high ground on personal data protection, have been pondering the possibility of employing such measures. On the private sector front, Google and Apple have jointly created an application programming interface (API) that most experts say protects privacy, but which faces ongoing criticism for falling short of its proposed effectiveness in curtailing the spread of the virus.

Like the pandemic itself, the societal implications and political challenges posed by these global transformations are unprecedented, to say the least. The data-driven surveillance operations mentioned above have unfolded in the midst of a heated debate on the trade-offs between individual autonomy and collective well-being. During the crisis, rapid innovation appears to have outpaced regulation, helping to explain the expansion of surveillance measures. A joint letter from close to 300 academics has stressed the importance of protecting user privacy while developing contact-tracing apps.

What we see unfolding here is a normalisation of the state of exception. Governments in authoritarian, semi-authoritarian and so-called liberal democracies are functioning in an increasingly extra-judicial manner, and the state of exception is turning into the presiding pattern of politics today. In addition to deploying surveillance measures without an expiration date, it will be tempting for states to procure the immense reserve of data that big tech has accumulated and to argue that ultimate state control is the solution. The historian Yuval Noah Harari elucidates the implications of such a data dictatorship in a timely article. Ultimate state control is especially frightening given the dominance of far-right ideologies and increasing authoritarianism in today’s political mainstream. The ongoing pandemic has accelerated pre-existing trends such as border closures and nationalism, while the more serious long-term effects are yet to be seen.

Tech companies face a set of trade-offs in such a situation: tools and data offered to one country must be offered to all countries, unless they are willing to discriminate and risk increased regulation or perhaps even a total ban. This post by Google’s former Head of International Relations provides an account of some of the questionable acts committed by the tech giant. The only admissible form of freedom in modern neo-liberal societies seems to be the narrow consumerist freedom of the market, which is itself constantly being tweaked to favour a small percentage of the population. It is therefore imperative that our understanding of the society of control not be limited to traditional forms of surveillance, but instead encompass the whole spectrum of modern tools and control measures we see today.

Deleuze tells us that “there is no need to fear or hope, but only to look for new weapons”. New weapons could come in the form of knowledge and the activism built around it: understanding the logic of control, creating new models of resistance, and building a collective mechanism that puts people first. While this requires a pluralistic long-term strategy, the most promising short-term response to the ongoing crisis is for the public and civil society to continuously hold governments accountable for how our data is collected, how it is used and for what purposes. This is especially important in states without existing data protection laws. The Electronic Frontier Foundation has provided some key points to help with this endeavour.

Postmodernist philosophers such as Foucault and Deleuze tracked the evolution of societies from sovereign to disciplinary to control; what does the next evolution hold? While a precise answer is not possible at this time, the Covid-19 pandemic presents a turning point in the functioning of global surveillance mechanisms, and a much-needed moment to rethink the direction in which our society moves.

Can Wikipedia Save the World?

The unnerving rise of ‘fake news’ and digital manipulation has called into question how we evaluate truth. While political deception is far from new, the Internet permits ‘false’ or corrupting information to circulate with ease, threatening the ideals once promised by the World Wide Web at its launch: free-flowing knowledge, connectedness, and global understanding.

However, things are a little more complex. As Evgeny Morozov explains, the Internet “celebrates post-truth and hyper-truth simultaneously.” With truth, it could be argued, the world will be set free from corruption and contention, as long as that truth is freely shared among all and remains unbiased. This sentiment is core to the objective of Wikipedia and the Wikimedia Foundation: “We are guided by a vision of a better world.” In the face of Trumpism and growing data surveillance, can Wikipedia – or at least the mindset it advocates – really change our world for the better?

Of course, ‘better’ is an uncertainty in itself. For Wikipedia, inclusion is key to bettering our reality: “Our vision is about more than providing universal access to all forms of knowledge. It’s about creating an inclusive culture.” This approach seeks to foster constructive, inter-cultural discourse on a digital platform that all can access and contribute to. Morozov, however, posits that “there are two ways to be wrong about the Internet. One is to embrace cyber-utopianism and treat the Internet as inherently democratising. Another, more insidious way is to succumb to Internet-centrism. Internet-centrists happily concede that digital tools do not always work as intended and are often used by enemies of democracy.” There is another glaring limitation of Wikipedia’s digital presence: only around half of the global population has access to the Internet, with studies predicting universal access might only be reached by 2050 due to a lack of funding, infrastructure and education in more remote, impoverished areas.

While Wikipedia aspires to establish itself as a democratising online space, there are many ways it currently fails to live up to this utopian standard. The shortcomings of the site’s editorial procedure have been widely documented, from societal bias inevitably skewing the ‘neutral voice’ of its content to a noticeably inequitable record of world history. Likewise, Wikipedia’s shrouded bureaucracy has come to light, particularly in the recent, arguably unjust, banning of a prolific editor.

Wikipedia’s mission to better the world by amassing the sum of knowledge faces more than just practical issues. The assumption that worldwide inequalities are simply the result of misunderstandings or ignorance, rather than of a complex, evolving socio-political power struggle, paints a far too simple picture of the relationship between knowledge and ‘equality’. With Wikipedia often held up as a shining example of positive digital collaboration, Morozov contests the superficial conclusion that it serves as “just another reminder that Internet logic is the correct way to run the world”, which assumes a “coherent logic to the Internet and its many components”, and states that “some problems… can only be mitigated – never solved – through bargaining, because those problems emerge from competing interests, not knowledge gaps.”

In contrast, a 2020 Wired article declares Wikipedia the “last best place on earth”, insisting it “shines by comparison” to the data-looting, monopolising approach of other technological ventures. The article claims that Wikipedia “does not plaster itself with advertising, intrude on privacy, or provide a breeding ground for neo-Nazi trolling.” While this certainly overlooks the more subtle ways ideological bias can make its way into Wikipedia, the writer rightfully acknowledges that the Wikimedia Foundation is actively discussing these problems, “often in dedicated forums for self-critique.” Interestingly, the article considers Wikipedia mostly a reflection of “the personal interests and idiosyncrasies of its contributors”, which appears to negate the objective, neutral voice encouraged of its editors.

This points to a very timely question for our modern world: what if humans weren’t in charge of writing Wikipedia?

MIT Technology Review, 2014

In fact, bots have already been writing Wikipedia pages, including Swedish physicist Sverker Johansson’s “Lsjbot”, which is responsible for creating a substantial portion of the Cebuano Wikipedia edition. In his discussion with Seeker, Johansson views the use of bots as a way to make the platform more inclusive: “too many Wikipedia entries are written by white male “nerds”…on Swedish Wikipedia, there are more than 150 characters from The Lord of the Rings, but fewer than 10 profiling people from the Vietnam War.”

Although artificial intelligence holds the potential to remedy some of our human problems, it is not as pure and objective as we had hoped. A Harvard Business Review article on the development of AI states that it “can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases”. Thus, the application of AI, algorithms and data to assessing ‘fairness’ in the world is also encumbered by the subjectivities of human beings. The bleak conclusion offered by a Vice article on the use of AI for Wikipedia – “it can lead to credible concerns over its quality but is also hugely better than nothing” – encapsulates what Morozov warned against: succumbing to Internet-centrism despite its failures.

On the other hand, it is worth noting that the ‘nerd’ culture prevalent within Wikipedia is perhaps what has ensured its longevity. According to Wired, Wikipedia’s “innovations have always been cultural rather than computational… this remains the single most underestimated and misunderstood aspect of the project: its emotional architecture.” Maybe it is the “idiosyncrasies” of Wikipedia editors that drive its cause, rather than a quest for objectivity.

In Chris Bateman’s exploration of what Wikipedia means for knowledge, he argues that “truth is not some foreign land of objectivity that we have to struggle to reach… it is something for which we all possess a uniquely individual familiarity” (63). This revision of seeking truth is based upon our reliance on a collective pool of expertly gathered knowledge, as we are merely “relying on someone else’s experience of it… we trust a reliable witness of the truth, or rely upon a spokesperson for objective knowledge” (63).

As we examine Wikipedia through this lens, it certainly reinvigorates the idealism once inspired by the Internet. Yet it is important not to surrender to the wishful thinking that this alone can bring forth global change or a better world. Bateman’s suggestion that “perhaps you can simply accept that we are all equally stupid” (60) serves as a better gauge of what we can achieve in the pursuit of knowledge.

Even if Wikipedia amounts to something close to the haven of collective truths it envisions, we must remain skeptical that this alone will solve all our problems or ‘better’ the world. As Morozov clarifies: “Ideas on their own do not change the world; ideas that are coupled with smart institutions might.” Societal change is a slow, ongoing process, in which collective and collaborative knowledge plays a significant – but not sole – part.

An ‘Infodemic’ as Dangerous as a Pandemic

Fake news and misinformation are not a novel phenomenon linked exclusively to the coronavirus. Drawing on Laclau (2005), Farkas and Schou define fake news as a floating signifier, “used by different and opposing, antagonistic, hegemonic political projects as part of a battle to impose the ‘right’ viewpoint onto the world” (Farkas and Schou 2018:302). In that sense, fake news is a deeply politicized concept used to delegitimize political opponents and construct hegemony (Farkas and Schou 2018). Research by the Massachusetts Institute of Technology found that false news travels faster, deeper and more broadly than true news in every category. So, how does this apply in the context of the coronavirus?

Disinformation cases about COVID-19 within one month
source: eeas.europa.eu

Types of misinformation about coronavirus

Fake news about the virus can be divided into two categories. The first is misinformation, “information that is false but not created with the intention of causing harm” (Udupa et al 2020:4). This includes the circulation of rumors by citizens, which reflects the public’s desperate need for information about something that causes great anxiety. Misinformation travels seamlessly on WhatsApp, given its personal proximity: messages seem to come from reliable sources such as family and friends. Such rumors have included reports of the disease spreading in regions with no registered cases, and unverified home remedies ranging from simply drinking water to drinking bleach and methanol. Some rumors claimed the virus would be eradicated by summer, or spread false information about how the virus was contracted. The second category is disinformation, “information that is false and deliberately created to harm a person, social group, organization or country” (Udupa et al 2020:4). Examples include accusations that some countries are concealing the real number of identified cases, or rumors that China has found a “wonder drug” and is hiding it from the rest of the world. Baseless conspiracy theories suggesting the virus was invented in Chinese labs, or is an American biological weapon, are also popular. According to a Center for Public Integrity/Ipsos poll, 44% of Americans believe the pandemic is being spread by certain minorities and organizations; 66% of those respondents blame China or Chinese citizens, while 13% believe the virus was invented in China.

Findings from research conducted by the Network Contagion Research Institute show a spike in conspiracy theories containing anti-Chinese sentiment and terms like “bioweapon”. The study also revealed “acute increases in both the vitriol and magnitude of ethnic hate” and warns that online misinformation might translate into real-life violence. Rising online hate speech reflects a surge in Sinophobia and xenophobia against people of Asian descent and appearance around the world. Extreme speech related to the coronavirus was also found to be on the rise: according to the research, an Instagram post calling for every Asian to be shot in order to eradicate the pandemic was removed shortly after it was posted. While slurs rose around the peak of the outbreak in Wuhan, researchers also reported a second rise when President Trump tweeted “Chinese Virus”.

Rep. Judy Chu (D-Calif.), chair of the Congressional Asian Pacific American Caucus, responds to President Trump’s use of the “Chinese virus” expression and voices concerns about the implications of such a term.
source: washingtonpost.com

Read more: “Go eat a bat, Chang!”: An Early Look on the Emergence of Sinophobic Behavior on Web Communities in the Face of COVID-19

Who are the agents of disinformation, and why do they spread it?

States and political parties spread disinformation campaigns and propaganda as a means to manipulate the public and manufacture false consensus in service of their agendas. On May 1st, Donald Trump claimed to possess evidence proving that China manufactured the virus at the Wuhan Institute of Virology. He refrained from revealing this evidence, which leaves his declaration unsubstantiated, yet such a claim can seriously exacerbate the stigma against Chinese people. In a move typical of past US propaganda, Trump is using fake news to create a common enemy in an attempt to divert the public’s attention from his administration’s failings. This is in line with recently leaked Republican memos that encourage candidates to relentlessly blame China for the spread of the virus in their public statements. Candidates are also encouraged to criticize Democratic opponents for being “too soft” on China, in an attempt to delegitimize the rival party. Trump has repeatedly employed fake news as a strategy in various contexts, one of which is labeling far-left media as “fake news”.

Anti-establishment groups also play a role in the production and dissemination of disinformation. In Germany, for example, right-wing conspiracy theorists are exploiting the uncertainty caused by the pandemic to create a discourse of hate and mistrust against politicians and the government. Through their alternative media, they use fake news to stir anger against democratic political parties. Some media agencies are also to blame for spreading panic by publishing unverified information, taking an active role in both disinformation and misinformation.

Efforts to combat the spread of fake news

With increasing pressure on social media companies to act against the spread of fake news and hate speech related to the coronavirus, tech giants such as Google, Facebook and Instagram have collaborated with the WHO, the Centers for Disease Control and Prevention (CDC) and the EU in an attempt to eliminate the spread of fake news. Users searching for information on COVID-19 are directed to official organizations’ websites, ensuring that search results are predetermined. The companies are also cooperating with fact-checkers in order to identify false information and remove it.

Although social media platforms have stepped up their fight against misinformation, some argue their efforts are minimal. Critics point out that these platforms are built on an ethos of profit maximization: their business model feeds on clicks and shares, with little consideration for accuracy. Combating misinformation might therefore not be in the platforms’ best interest. Moreover, the algorithms that determine who sees what online exploit people’s biases when deciding which content is relevant for each user. Social media advertising tools enable misinformation campaigns to draw on individuals’ confirmation biases by targeting messages at people who are more prone to believe them. One might wonder how genuine the platforms’ content-regulation efforts are, given their previous reluctance to address hate speech and anti-vaxx propaganda, and how effective those efforts can be given the biases embedded in their systems and the traps of filter bubbles.

Watch: How can you spot coronavirus fake news stories?

Read more:

Weaponized Information Outbreak

Biases Make People Vulnerable to Misinformation Spread by Social Media

We’re fighting fake news AI bots by using more AI. That’s a mistake.

Coronavirus: Call for apps to get fake Covid-19 news button


Udupa, S., Gagliardone, I., Deema, A. and Csuka, L., 2020. Hate Speech, Information Disorder, and Conflict.

Farkas, J. and Schou, J., 2018. Fake news as a floating signifier: Hegemony, antagonism and the politics of falsehood. Javnost-The Public, 25(3), pp.298-314.

ZoomBombing: the 2020 Hacking Trend that Disrupted our Coronavirus Confinement

All of a sudden, a new app turned into one of the lockdown essentials. With large sectors of the population working from home, Zoom became one of the web’s favourites, outperforming Google’s Hangout Meet and Microsoft Teams and topping the app store’s charts worldwide in February and March 2020.

Launched in California in 2013, Zoom Cloud Meetings found an unexpected ally in the Coronavirus pandemic, which helped its US downloads rise 14x in March compared to 2019. With 20x more downloads in the United Kingdom, 22x more in Spain and 55x more in Italy, the rise in Zoom’s popularity is unprecedented. While December 2019 saw 10 million daily meeting participants, usage shot up to 200 million in March 2020, and 300 million by the following month. The UK cabinet and schools all around the world were among the brand-new users.

The number of downloads of business apps such as Zoom, Microsoft Teams and Hangouts skyrocketed as Coronavirus became a pandemic. Graph source: App Annie.

But just as great power comes with great responsibility, the Spider-Man principle can be paraphrased today: with many downloads come great headaches. Zoom CEO Eric Yuan not only became a billionaire in a matter of weeks; his trendy app also turned into an easy target for professional hackers, computer addicts and amateur intruders.

ZoomBombing – the form of cyber harassment in which calls are hijacked by unidentified individuals who spew hateful language or post graphic content – quickly became a household term. US Government meetings were attacked, the FBI was forced to issue a news release warning people, and NYC banned the service from the city’s classrooms.

ZoomBombers not only interrupt conversations. While they do it they record themselves or even stream their actions live (This video contains offensive language).

ZoomBombing origins and motivations

Since the term was coined in mid-March, ZoomBombing has boomed. The niche prank started on an abandoned channel on Discord, a VoIP (Voice over Internet Protocol) platform designed for video gaming communities, where groups of bored youngsters organized the first attacks.

Soon after that, as an article on PCMag revealed, Zoom conference codes began to be shared on Reddit and Twitter. The bombers also recorded the attacks and uploaded them to YouTube or TikTok and even streamed them live on Twitch.

Among the many calls affected around the world, some of the most prominent include Alcoholics Anonymous meetings, Muslim health forums and even a carers’ online dance session.

But what is behind these attacks? What drives these teenagers to cause such distress in these already difficult times? Many ZoomBombing actions coincide with anthropologist and hacker expert Gabriella Coleman’s description of hackers, which reveals certain communitarian social habits and discourses of anti-authoritarianism. In short, what these groups did was organise themselves within their already existing online communities and challenge other, more traditional structures.

It is true that preliminary descriptions of the ZoomBombers match those of digital activists as described by Jordana George and Dorothy Leidner in their paper From clicktivism to hacktivism: Understanding digital activism. They are young, computer-literate, few in number but highly effective, and connected via social media.

However, they differ in that digital activists and hacktivists normally pursue social or political objectives, targeting governments, organisations and individuals to that end (according to Tim Jordan and Paul Taylor in their book Hacktivism and Cyberwars: Rebels with a Cause?). In this recent Zoom phenomenon, the teenagers do not appear to have such motivations, but are simply trying to relieve the boredom of Coronavirus confinement.

The British reporter who spied on his rivals

Bored teenagers are not the only ones to have used Zoom to target its users. The case of a British journalist became a world story. Picture credit: Bloomberg.

But adolescents are not the only ones to have breached Zoom’s security. What happened with the Financial Times’ media and technology correspondent Mark Di Stefano could easily belong in one of those journalism thrillers in which adrenaline flows as reporters write to tight deadlines. On March 23, Di Stefano listened in on confidential Zoom meetings in which journalists at The Independent and The Evening Standard were informed of salary cuts and furloughs.

According to log files, the reporter joined one of the private calls for a few seconds using his ft.com email address. After quitting, he re-joined, this time anonymously and with his camera off for the whole conversation. Unluckily for him, some of his rival colleagues had already seen his name pop up on their screens.

Di Stefano used the information to tweet all the details he had overheard and later to write an article for the Financial Times, quoting “people on the call” as sources of the story. In reality, however, he was getting all his inside information from spying on their Zoom meeting.

A few days later he tweeted again, this time offering his resignation and announcing he would “take some time away and log off”. He had already been suspended, and his story spread around the world, while the editor of The Independent described Di Stefano’s snooping as “entirely inappropriate and an unwarranted intrusion into our employees’ privacy”.

What now?

With world-class hackers targeting the World Health Organisation in the middle of a pandemic and others doing the same against hospitals, ZoomBombing and journalistic espionage could be perceived as a mere frivolity. Except they are not.

Working and studying from home has increased the exposure to cyberthreats to unprecedented levels, opening doors which had always remained closed or that didn’t even exist. Hackers of all levels – from bored teenagers to the very elite – have targeted people’s increased dependence on digital tools.

Different organisations have started warning the population about the most common types of cyberattacks during work-from-home times. Source: McKinsey & Company.

What is certain is that the world won’t be the same. Security in companies and schools has been forced to shift from security cameras and metal detectors to step-by-step guides to prevent ZoomBombing. While the US had its first March in 18 years without a school shooting, the concern of teachers and parents has shifted towards online security. Just as we have increased our hygiene levels by washing our hands after every physical contact, it is time to review and update our digital hygiene habits.

ZoomBombing has affected people all around the world. This CBS Boston short video summarizes some of the main concerns and gives some tips on how to avoid further attacks.

Hate Speech, Fake News, and Whatsapp

Introduction: “Hate speech” and “extreme speech” are terms heard daily around the world, and particularly on American news channels. Hate speech is defined by the European Commission against Racism and Intolerance as “…covering many forms of expressions which spread, incite, promote or justify hatred, violence, and discrimination against a person or group of persons for a variety of reasons” (ECRI, 2020). While few people justify the use of hateful or extremist speech online, how to deal with it is polarizing. Hate speech has always been a complicated issue, but it is particularly divisive in the United States of America, given the country’s past and present racially motivated transgressions. Because freedom of speech is protected by the First Amendment to the United States Constitution, some argue that any limitation upon speech would violate the civil rights guaranteed by the Bill of Rights. Over the years the issue has waxed and waned in prominence, but with the growth of online far-right activity in the United States, some Americans feel it is now particularly important to regulate hate speech. Online far-right activity tends to perpetuate extremist views on gender, race and religion. There remains a grey area, however, in defining what counts as hate speech, how it should be regulated and who should regulate it. The regulation of “hate speech” raises the question: is it better to let everyone express their opinions no matter the consequences, or to allow censorship to stop the spread of hate speech and extremist ideology?

Speech Regulation: Currently, one reason online speech cannot easily be regulated in the United States is Section 230 of the Communications Decency Act. While the Communications Decency Act was originally intended to limit speech online, Section 230 has become one of the most difficult pieces of legislation to build limitations around. Section 230 grants providers and users of an interactive computer service immunity from liability for content published by other people. Thus, websites and internet providers are not held responsible for what is posted online, users are not held responsible for the reactions their posts provoke, and those who repost something are not held responsible for the original post. Regulating speech with such large gaps in liability is difficult: limiting someone’s ability to post, a website’s ability to host content or an internet provider’s ability to provide access to a website would all run up against Section 230. So how does one create regulations, or decide whether regulations are needed? Many consider speech that incites violence to be the boundary at which speech should be regulated. However, online systems have failed to fully assess the threat of online speech. Recent studies from the Center for the Study of Hate and Extremism have shown links between mass shootings in the US, political discourse and hate crimes (Colagrossi, 2019). While a causal link cannot be established without further research, several mass shootings were preceded by warning signs on social media. With shooters taking photos with guns and posting threatening messages online prior to their rampages, it is difficult to understand how these posts would not be flagged as inciting violence or as dangerous enough to warrant concern or censorship. In addition, social media is not only a venue for political discourse for some Americans but also a large source of news.
In a recent article, Peter Suciu stated: “According to a newly published Pew Research Center report 55% of U.S. adults now get their news from social media either ‘often’ or ‘sometimes’ – an 8% increase from last year. About three-in-ten (28%) said they get their news ‘often,’ up from 20% in 2018” (Suciu, 2019). With so many Americans turning to social media spaces for news, political discourse, and social interaction, it is concerning that so little is done to prevent the spread of false information or extremist ideology. However, with recent scandals and tragic events, more people are calling for a crackdown on online content.

Fake News: The online spread of false information branded as news has become the first target in the crackdown on online speech. As social media was a major source of misinformation during the 2016 presidential election, some social networks have faced a backlash. Most notably, Facebook’s Cambridge Analytica scandal led Mark Zuckerberg to testify before Congress. The scandal revolved around the harvesting of Facebook user data for political advertisements, and it is possibly the largest social media scandal to emerge from the election. Facebook was also criticized for allowing false ads to be purchased and promoted during this period. The tailored political ads at the heart of the scandal have been blamed for undermining voters’ informed decision making by priming them for certain candidates. As a result, users have become more skeptical about what information they can trust, and this feeling of mistrust has led more citizens to support online speech restrictions aimed at preventing the spread of fake news.

Innovation of Usage: Since their inception, Facebook and other social media have provided platforms for the public expression of political views. While social media is becoming less trusted in the U.S. for political ideas and news, other countries have begun to use it for political campaigning. WhatsApp, which is owned by Facebook, is emerging as a powerful tool in political campaigning (Tactical Tech, 2019). In fact, a recent study by Tactical Tech found WhatsApp to be the most common delivery method for political messages in countries of the Global South (Tactical Tech, 2019). Due to the less public nature of the messaging system, it creates a more personal relationship with users (Tactical Tech, 2019). WhatsApp also encrypts its messages end to end, meaning no parties other than the intended recipients can read them. However, Facebook still has access to who sends messages, from where, and to whom (Tactical Tech, 2019). So, while the app makes political messages feel more personal, the messages are still targeted, and phone number data is accessed in order to send them (Tactical Tech, 2019). Thus, while WhatsApp may be an emerging political platform in some countries, its use for political messaging still has questionable aspects.

Ending thoughts: Hate speech, fake news, and data collection are major concerns on social media and the internet as a whole. However, how to regulate these issues remains largely unresolved. While alternative media and independent news are options, everything comes with a bias, so they are not a perfect solution. The regulation of speech also comes with complications: is more government interference a good thing when it stops extremist views, or does more interference let the government decide what counts as an extremist view online? This divisive issue will likely not have a clear-cut solution in the near future. However, alternative media and public education can help prevent the spread of false information and extremist views. Recently, online platforms and tools dedicated to reporting extremist content and fact-checking have become more popular; these features allow users to feel more confident in the news they receive. Data acceptance policies are another method that gives users more power over their online presence. In conclusion, the complex issues surrounding online freedom will likely not be solved soon. But as users become more aware of fake news, data collection, and dangerous speech online, they will likely continue to demand more innovation to protect themselves.

Works cited:

Suciu, P. (2019). More Americans Are Getting Their News From Social Media. Forbes. Retrieved April 21, 2020, from https://www.forbes.com/sites/petersuciu/2019/10/11/more-americans-are-getting-their-news-from-social-media/

WhatsApp: The Widespread Use of WhatsApp in Political Campaigning in the Global South. (n.d.). Retrieved April 21, 2020, from https://ourdataourselves.tacticaltech.org/posts/whatsapp/

Colagrossi, M. (2019). Bigotry and hate are more linked to mass shootings than mental illness, experts say. Big Think. Retrieved April 21, 2020, from https://bigthink.com/politics-current-affairs/bigotry-hate-mass-shootings

Hate speech and violence. (n.d.). Retrieved May 3, 2020, from https://www.coe.int/en/web/european-commission-against-racism-and-intolerance/hate-speech-and-violence

The Darknet! A Journey to the Digital Underworld

I recently watched Deep Web: The Hunt for Dread Pirate Roberts, a documentary based on the real-life events surrounding the creator of the underground black-market website Silk Road. Prior to this documentary I was unfamiliar with the deep web and the dark web, and little did I know that there is a whole field of research dedicated to the illegal activities happening in what is known as the darknet. In general, the media tend to portray the darknet as a space where criminal activities take place, even to the extent of catering to criminogenic desires. These criminal activities range from the anonymous trading of illegal goods such as drugs via cryptocurrencies to the trading of weapons and exotic animals. Illicit activities are made easier for users by the features of the darknet, including anonymous browsing, cryptocurrencies and virtual markets, and as such, over the last few years the darknet has become one of the most discussed topics in cyber security circles. According to RAND, data collected from January 2016 put total drug revenues on the darknet at an estimated $12 million to $21.1 million.

In the following section I will discuss the common questions I faced researching the dark web.


source: https://steemitimages.com/DQmYm7tAwr3Ux7M21ZXhF7KVA6NZqjm3rVFpZaBkqaF1NDg/dark-web-infographic.jpg


The surface web is what the majority of people access on a daily basis: it consists of all the websites that can be found through a search on Google. Other examples include Amazon for online shopping and Spotify for listening to music. The surface web can be described as the tip of the iceberg, visible to everyone.

Websites on the surface web are stored as HTML files with fixed content that is available to everyone, whenever anyone wants to access it. The dark web, by contrast, grants access only among trusted peers who are required to be part of the hidden network.


The submerged part of the iceberg is known as the deep web, which is hidden from conventional search engines such as Google; accessing its content, such as personal records, requires specific software or credentials.


The dark web is a subset of the deep web, but it completely and intentionally hides your identity and location. The dark web also requires specific software to access, which is explained further below.

It is important to note that, while law enforcement and the media portray the dark web and Tor as a place for criminal activities, they are also widely used for good by government agencies, journalists and dissidents around the world.


The dark web functions differently from the surface web: websites on the dark web frequently change servers, meaning links can lead to different things at different times. The darknet also operates over networks formed between trusted peers who are required to be part of the hidden network.

To access these hidden networks, users have to download specific software such as I2P, Freenet or the Tor Project.


Tor stands for The Onion Routing project. It was developed in the 1990s by the US Naval Research Laboratory with the purpose of protecting US intelligence communications online, so that adversaries could not, for example, detect a ship’s position. Nowadays it is open source and publicly funded, and it is the software the vast majority of people use to access the dark web. Tor was originally designed to stop people, government agencies and corporations from learning an individual’s location or browsing habits. It ensures privacy and anonymity by directing users’ and websites’ traffic through relays, passing messages through at least three separate servers to hide the identity and location of its users.
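The relay mechanism described above can be sketched in miniature. This is a toy illustration of layered (“onion”) encryption, not Tor’s actual cryptography: the XOR cipher, the relay names and the keys are all invented for the example. The point it shows is that the sender wraps the message once per relay, and each relay can peel exactly one layer, so no single relay sees both the sender and the full route.

```python
# Toy model of onion routing: one encryption layer per relay (illustrative only).
import hashlib

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a hash-derived keystream (not real crypto)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def build_onion(message: bytes, route):
    """Wrap the message in layers, innermost layer for the last relay."""
    payload = message
    for name, key in reversed(route):          # wrap from the exit relay outward
        payload = xor_cipher(key, name.encode() + b"|" + payload)
    return payload

def peel_layer(key: bytes, onion: bytes):
    """A relay decrypts only its own layer, revealing the inner payload."""
    plain = xor_cipher(key, onion)
    name, _, inner = plain.partition(b"|")
    return name.decode(), inner

route = [("relay1", b"k1"), ("relay2", b"k2"), ("relay3", b"k3")]
onion = build_onion(b"hello hidden service", route)
for name, key in route:                        # each relay peels exactly one layer
    peeled_name, onion = peel_layer(key, onion)
print(onion)  # -> b'hello hidden service'
```

Real Tor additionally negotiates fresh session keys with each relay and pads traffic, but the peel-one-layer-per-hop structure is the same idea.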

source: https://cdn.techpp.com/wp-content/uploads/2019/01/Tor_working.png


Tor is also used by military professionals – the US Navy is still a prime user – as well as by activists from countries with strict censorship of the internet and media; for example, Reporters Without Borders advises journalists to use Tor to safeguard their identity and evade government control. The creators of Tor point to this wider group of legitimate users, including activists, journalists and law enforcement professionals, but the network has also been widely accused of facilitating a dangerous dark web of paedophiles. According to the Guardian, Tor has grown from 500,000 to more than 4 million daily users worldwide.

While many illegal activities take place through Tor, the same software is used by police and investigative agencies to go undercover and investigate websites and services.

Read: The Tor Social Contract | Tor Blog


The hidden identity of Tor users makes for an attractive and powerful weapon in the hands of criminals. Moreover, individuals grow careless in their actions, as the dark web’s anonymity means there are no consequences traced back to them. The buying and selling of illegal drugs, weapons, forged documents and stolen identities, and even contract killing, are all made available on the dark web. One of the most famous darknet websites was the Silk Road, which was said to promote the decentralisation of government and socio-political movements against law enforcement agencies.


The Silk Road was an online platform used for selling illegal drugs, fake passports and other documents, as well as for providing illegal services such as computer hackers, hit men and forgers. The Silk Road operated as a Tor hidden service.

The FBI described it as ‘the most sophisticated and extensive criminal marketplace on the Internet today’.

This is the story of the rapid rise and hard fall of Silk Road.

The Silk Road website, also known as the Amazon of the darknet, linked nearly 4,000 drug dealers to more than 100,000 buyers around the world. According to Forbes, estimates of the Silk Road’s annual revenue hit $30 million to $45 million.

Users mainly sought to buy and sell drugs, forged documents and weapons. Commissions were taken in bitcoin, which has appreciated against the dollar since Silk Road launched in 2011, so Ross Ulbricht and other stakeholders were accumulating millions in profit.

The use of bitcoin was vital in the face of the state’s war on drugs. Unlike other currencies, the integrity of the nearly one billion dollars’ worth of bitcoins floating around the internet is maintained not by any bank but by the distributed computing power of the thousands of users who run the cryptocurrency’s software. Users never have to tie their accounts to their real identity, which made bitcoin essential within the dark web. It was very difficult for the FBI to trace the money due to bitcoin’s complexity and lack of a central authority.
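The claim that integrity is maintained by software rather than a bank can be sketched with a hash chain. This is a deliberately minimal model, not Bitcoin itself: real blocks also carry Merkle roots, timestamps and proof-of-work, and the transaction strings below are invented. What it shows is the core idea that each block commits to the hash of the previous one, so rewriting any past transaction breaks every later link.

```python
# Minimal hash-chain sketch of why tampering with past blocks is detectable.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(transactions):
    chain, prev = [], "0" * 64                  # genesis block points at all zeros
    for tx in transactions:
        block = {"tx": tx, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:          # broken link => history was altered
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob:1BTC", "bob->carol:0.5BTC", "carol->dave:0.2BTC"])
print(is_valid(chain))                          # True
chain[0]["tx"] = "alice->mallory:1BTC"          # rewrite history...
print(is_valid(chain))                          # False: every later link now fails
```

Because thousands of independent nodes each run this kind of check, an attacker cannot quietly alter the ledger without being rejected by the rest of the network.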

The creator of the website, Ross Ulbricht, a libertarian, envisaged a change in the power structure between individuals and the state. Forbes quoted Ulbricht as saying, “the silk road is not just about scoring drugs, it is about standing up for our rights as human beings and returning the power to us rather than the state”.

Read: The beginner’s guide to buying goods on the darknet


In October 2013 the FBI shut down the Silk Road website, and in November 2014 Silk Road 2.0 was also shut down. Ross Ulbricht was convicted of multiple crimes, including money laundering, conspiracy to traffic narcotics, trafficking in fraudulent identity documents and computer hacking by means of the internet.

It is not entirely clear how the FBI located the Silk Road server and managed to penetrate its systems. Jerry Brito, a researcher at George Mason University, says the FBI bypassed the website’s security through a weakness in Ulbricht’s computer code, hacking into the site and issuing commands that allowed them to act as the site administrator and talk to the server.


A dead dream. What was once a trusted trading ground on the internet, where government laws and drug wars could never reach, is now a dream that has died.



Lost in translation

Language serves as a measure of culture and inclusion in the world of Wikipedia. Yet this is trickier than we think.

On its website, the Wikimedia Foundation states that it ‘provides the essential infrastructure for free knowledge’. A ticker runs on the screen, espousing this sentiment in four languages, including English and Hindi. At the bottom of the PC page-view, as well as at the top right-hand corner, are five language options. In her presentation at SOAS in mid-February, Miriam Redi, a researcher at the foundation, spoke about how they were looking to drive greater diversity. Understanding the variegated ways in which we perceive beauty in images, and incorporating them into Wikipedia’s algorithms, was one project. Providing content in different languages was another. For Wikipedia, then, language appears to be an epistemological axis of access and, by extension, inclusion.

Do different Wikipedias translate to different cultures?

In a 2015 study, Gloor et al. considered whether there were cultural differences between the English, Chinese, Japanese, and German Wikipedias. In their own words, the research analyzed the ‘historical networks of the World’s leaders since the beginning of written history, comparing them in the four different Wikipedias’. They did this by considering all people who made it into Wikipedia by fulfilling its notability criteria, and then establishing their social networks by considering links between people who were alive at the same time. Thus for a given time period, they arrived at those ‘notable’ people who were most ‘networked’. As an example, an English Wikihistory of 0 BC lists Augustus, Paul the Apostle, Tiberius, and Mary (Mother of Jesus) as having the largest network. Gloor et al. then ranked all such leaders across all times, for each of the four Wikipedias. Two key findings emerged.
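The network construction Gloor et al. describe can be sketched roughly as follows. The birth and death years and the overlap rule below are my illustrative reading of the method, not the study’s actual data or code: notable people become nodes, and an edge joins any two whose lifetimes intersect, so a person’s degree approximates the size of their ‘network’.

```python
# Sketch of a co-lifetime network: nodes are notable people, edges link
# any two people whose (birth, death) intervals overlap. Dates are approximate.
from itertools import combinations

people = {
    "Augustus": (-63, 14),
    "Tiberius": (-42, 37),
    "Paul the Apostle": (5, 64),
    "Mary": (-18, 41),
}

def overlap(a, b):
    (birth_a, death_a), (birth_b, death_b) = a, b
    return birth_a <= death_b and birth_b <= death_a   # intervals intersect

edges = [(p, q) for p, q in combinations(people, 2)
         if overlap(people[p], people[q])]

# Degree per person: the study ranks "notable" people by this connectedness.
degree = {p: sum(p in e for e in edges) for p in people}
print(len(edges))  # -> 6: all four lifetimes overlap, so every pair is linked
```

On the study’s full dataset the same idea runs over everyone meeting Wikipedia’s notability criteria in each language edition, which is what lets the rankings be compared across the four Wikipedias.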

First, the Chinese and Japanese Wikipedias mostly had famous warriors and politicians in the top ten, while the English and German ones were more balanced, with around half of the top ten as well as half of the top fifty being religious leaders, artists or scientists. Second, and perhaps more strikingly, 80% of the top 50 leaders in the English Wikipedia were not English, while only two non-Chinese leaders made it into the top 50 in the Chinese Wikipedia. The German and Japanese Wikipedias were slightly more balanced, with about 40% of the top 50 not being German or Japanese, respectively. The bottom line, then, was that language could indeed be taken as representative of culture, at least in the Wikiworld. Read the detailed study here.

But if language is a metaphor for culture, then the natural endeavour for bringing about greater cultural inclusivity would be to have many Wikipedias, in many languages.

The fatality of languages and imagined communities.

Writing about how print capitalism helped shape the modern nation-state, Benedict Anderson highlighted that ‘almost all modern self-conceived nations … have national print-languages … many of them have these languages in common, and in others only a tiny fraction of the population uses the national language in conversation or on paper’ (1991: 46). In other words, the ability to connect with hundreds, thousands, and millions of others through ‘unified fields of exchange and communication’ or simply, a common, interstitial language situated above the variegated spoken vernaculars and below the high tongue, allowed a unified ‘consciousness’ to be imagined (1991: 44). This was the nation-state, the imagined community.

Anderson argued that there was always an element of ‘fatality’ to language, which he explained as the ‘general condition of irremediable linguistic diversity’ (1991: 43). He posited that this also led to Foucauldian languages-of-power. For there were dialects closer to the print-language which influenced its final form, and there were those which lost out. And once thus established, languages as means of domination were and still are, consciously exploited.

Source: Graham et. al 2014, cited by vox.com

Anderson’s surmising is not too difficult to see or (allowing for a play on the word) imagine. The map on the left represents how inclusive Wikipedia is (or is not). Articles about most of the European countries, for example, are written in their (primary) languages. Articles about Mongolia though, are written in English. But if pluralism is to be driven by language, the question is simply: how many?

Mind your language. And mine. And hers. And his. And everybody else’s.

Perhaps before trying to address the question of how many languages, there is merit in a quick infographic and map-based look at our linguistic (and if we can therefore extrapolate, cultural) diversity.

A twitter map of New York City in 2012-13
Source: vox.com

The map above depicts tweets in different languages in New York City in 2012-13. The most common language for tweets originating from New York City was English, represented by grey dots. What is interesting however, are the other languages, and what this has to say about linguistic diversity and pockets.

Front (obverse: top) and Back (reverse: bottom) views of an Indian currency note, equivalent to INR 50
Source: shutterstock.com.

On the Indian currency note here, you can see that two languages feature prominently: English and Hindi. Yet if you look closely enough, in a boxed column in the middle of the reverse of the note (the bottom image on the left), is printed the denomination of the legal tender in 15 additional languages. vox.com estimates that India lost over 220 languages in the last 50 years, and that today it speaks about 780.

Into a different Wikiverse: an alternative ontology of cultural inclusion

In 2014-15, Siobhan Senier got her students to engage in the process of curating, debating, and adding content on Wikipedia about Indigenous Native authors from New England, US. Writing about the experience, she says:

Crowdsourced knowledge presents itself as contingent, as always subject to further input and revision. Wikipedia changes to reflect not only changing facts, like shifting national borders; it has the potential, at least, to reflect shifting intellectual paradigms. In this respect, wikis are not unlike oral traditions. 

Senier 2015: 42

Thus, if they are like oral traditions, then what do they say about how the same Wiki content is perceived in different cultures? This is where Eduardo Viveiros De Castro’s Amerindian thought comes into play. Castro (1998) contrasts Western cosmologies of ‘multiculturalism’, which predicate one natural world and a multitude of cultures, with the Amerindian (as well as Indian) conception, which conceives a plurality of bodily existences but a unifying culture. He writes:

…an ethnographically-based reshuffling of our conceptual schemes leads me to suggest the expression, ‘multi-naturalism’, to designate one of the contrastive features of Amerindian thought in relation to Western ‘multiculturalist’ cosmologies. Where the latter are founded on the usual implication of the unity of nature and the plurality of cultures – the first guaranteed by the objective universality of the body and substance, the second generated by the subjective particularity of spirit and meaning – the Amerindian conception would suppose a spiritual unity and a corporeal diversity. Here, culture or the subject would be the form of the universal, whilst nature or the object would be the form of the particular.

Castro 1998: 470

Castro explains this further by positing a humanistic condition or cosmology, where animals are humans in their own perception, but specifically non-human in form. They have their own societies, food, and culture, as ‘jaguars see blood and manioc beer … fur, feathers, claws, beaks etc. as body decorations or cultural instruments’ (1998: 470). Multi-Naturalism may at once appear difficult to grasp, yet examples thereof abound in contemporary cultural constructs. Think of The Jungle Book, Winnie the Pooh, and the Panchatantra (a series of ancient Indian fables).

What this suggests for our question of how many languages Wikipedia needs in order to be more culturally inclusive is in fact a restating of the problematic itself: the issue is not language as the nodal representation of culture, but what the content implies or means for different cultures, even if culture is represented by language. Thus a Wiki article on a particular topic in the English Wikipedia could have, for example, stubs for what it could mean for users of the different dialects of English in England, the United States, and even India or Australia. In this sense, Wikipedias in different principal print-languages could remain the first method of being more inclusive, while the contextually and culturally different meanings which content carries, as appropriations of Bourdieuesque habitus, could be further elaborated along the lines of the close and not-so-close ‘less powerful’ languages and dialects which contribute to the principal print-language.

Of course, it may not be as straightforward as I imply above. As an example, and to begin with, the number of Wikipedias could be expanded to include Mongolian, and Wiki contributions encouraged therein. In this way, articles about Mongolia could in fact be made available in Mongolian. A second order of elaboration in Oirat, Buryat, and Khamnigan Mongol could then provide contextually relevant meaning to these articles. Elements of language-as-power would still remain, of course, in the form of the Mongolian Wikipedia as the dominant, print-language equivalent. Additionally, there may be a need to hybridise the second order of elaboration with factors such as geography, religion, or tribe. Yet I argue that the question of inclusion on a platform such as Wikipedia is beset with the same element of fatality that Anderson (1991) argued for language. Its irremediable diversity is but its nature, and this stands culturally irreconcilable with the objective of including one and all.


This post is inspired significantly by a conversation during the Q&A session following Miriam Redi’s presentation at SOAS in mid-February. Specifically, I would like to thank Gitika Saksena (@gitikasoas) from the seminar cohort for suggesting, during the session, an exploration of Wikipedia inclusivity through Amerindian Perspectivism as opposed to language alone.

Additional resources

Ever wondered as to how many languages exist out there? And where? Langscape is probably your best bet. Try out their interactive map(s) to zoom in on different parts of the world.

Most Indian children have grown up knowing one or more of the animal-fables espoused in the Panchatantra. Watch an animated version here. The animation may not be the most contemporarily executed, but the essence of the stories is very much there.

Language, of course, is not the only construction of bias as far as Wikipedia goes. Read ‘Is Wikipedia Biased?’ by Shane Greenstein and Feng Zhu in The American Economic Review, Vol. 102, No. 3, 2012 for a political take on bias in Wikipedia.

Bibliographic references

ANDERSON, B. 1991. Imagined Communities: Reflections on the Origins and Spread of Nationalism. London: Verso.

CASTRO, E. 1998. Cosmological Deixis and Amerindian Perspectivism. The Journal of the Royal Anthropological Institute, 4:3, 469-488.

GLOOR et al. 2015. Cultural Anthropology Through the Lens of Wikipedia – A Comparison of Historical Leadership Networks in the English, Chinese, Japanese and German Wikipedia. 10.5167/uzh-102327.

MANSFIELD, N. 2000. Subjectivity: Theories of the Self from Freud to Haraway. New York: New York University Press.

SENIER, S. 2015. Indigenizing Wikipedia: Student Accountability to Native American Authors on the World’s Largest Encyclopedia. In Web Writing: Why and How for Liberal Arts Teaching and Learning (eds) J. Dougherty and T. O’Donnell, Ann Arbor: University of Michigan Press.  

WACQUANT, L. 2016. A Concise Genealogy and Anatomy of Habitus, The Sociological Review, 64:1, 64-72.