This article contributes to SYG 1: Prevention of Threats from Cyber, Space & Emerging Disruptive Technologies
In October 2022, CYIS published the second part of its Journal Article “Contemplating politico-military trends in the future of warfare”. The Cybersecurity, Space, and EDT Program dedicated its section to disinformation. It highlighted the enabling role that new technologies play in the propagation of disinformation. The present piece attempts to explore a different dimension of disinformation by shedding light on connections between it and cybersecurity.
Disinformation has historically blurred the line between reality and falsehood. New technologies, such as deepfakes, and the central place social media now occupies in the information space have taken disinformation tactics to a new level. The vulnerabilities to be exploited have multiplied, and the face of the threat is more deceptive than ever. In this context, it is important to identify and understand where disinformation and cybersecurity meet and interact.
Many parallels can be drawn between cybersecurity and disinformation. Two will be examined in this section: the “unfair advantage of the bad guys” and the manipulation of human emotions.
On the one hand, the “unfair advantage of the bad guys” refers to the high-ground position occupied by perpetrators of both cyberattacks and disinformation campaigns. Disinformation instigators and propagators have the luxury of spreading endless streams of false content, none of which has to be consistent with the others. On the opposite side, those fighting disinformation have to address false content carefully, maintaining accuracy and consistency at all times. The pattern in cybersecurity is similar. While attackers can probe every vulnerability and see which vector leads to success, cybersecurity professionals have to defend successfully one hundred percent of the time.
On the other hand, human emotions are manipulated by perpetrators in order to use citizens as vehicles for malevolent goals, whether to successfully execute a cyberattack or to ensure the wider dissemination of disinformation. While both disinformation and cyberattacks are carried out through technology, they also have humans at their core, oiling the machine willingly or unwillingly. When the seeds of disinformation and cyber threats are planted in people’s minds and social media feeds, specific emotions sprout: emotions with the potential to make a person change their original course of action and take decisions they would perhaps not have taken without intervention. Indeed, perpetrators can count on the fact that people tend to act rashly above a certain threshold of emotionality.
For instance, social engineering is a common cyberattack method that seeks to manipulate and deceive people into revealing sensitive information (e.g. phishing). When someone receives an email informing them that their bank account has been breached and that they must immediately click on a link to sort the matter out, the fear of losing money and the urgency created by the message can lead to actions based on emotion instead of reason. The attacker can then redirect the victim to a fake website and have them fill in sensitive information.
The same goes for disinformation. The strategy is for perpetrators to spark fear or anger in their subjects. It has been shown, for instance, that the more anger is sparked around a topic, the more viral it can become. Again, once people are driven by strong emotions, perpetrators can shift the manufactured emotion towards something or someone else (e.g. anger against migrants). This can prove an effective tool for manipulating public opinion at strategic moments, such as elections.
There are more than parallels between cybersecurity and disinformation; strong and bi-directional connections between the two can also arise.
Cybersecurity and disinformation affect one another. First, disinformation impacts cybersecurity by creating a new and often neglected level of threat. This is especially true of the security of physical infrastructure. Intangibles such as networks and software are hosted on hardware, and the integrity of that hardware can be threatened by natural disasters or by being targeted during an armed conflict between states. But the threat actor is not always so visible. With disinformation, the threat is far more insidious, in that the true nature of the attacker and the full scope of the attack are not necessarily what they seem.
The example of 5G cell towers is a telling one. Various disinformation campaigns, aided by the conspiracy theories they give birth to, have leveraged the public’s general lack of knowledge about new technologies. These technologies can create a sense of fear or anger when they are not fully understood. Victims of so-called technophobia, some people fear the repercussions of technology on their health or on their ability to keep their jobs. Disinformation actors exploit these emotions and fill the knowledge gap with false content, thereby turning a general, vague sense of fear into a targeted cause. To a group that was already uninformed and skeptical, this comes as confirmation of why they were right all along, even providing a sense of closure. These tendencies can be anticipated and explained by psychological concepts, including confirmation bias, described as the “subconscious tendency to seek and interpret information and other evidence in ways that affirm our existing beliefs, ideas, expectations, and/or hypotheses”, and other cognitive biases.
Continuing with the 5G example, some people fear that cell towers could transmit Covid-19 or are generally harmful to health. This fear is weaponized to ensure disinformation does not remain confined to people’s minds. In recent years, hundreds of incidents of vandalized 5G cell towers have been reported. The hybrid nature of such events is clear: disinformation spawns in the digital realm and uses the mind as a bridge to find its way into the physical realm. Meanwhile, the true instigators will probably never have to carry out any kinetic action to reach their intended objective, remaining in the shadows. To add to the difficulty, the “middle-persons” enabling the mission can number in the hundreds; they may be proxies across countries launching the campaigns, or ordinary users on an app unknowingly disseminating the message.
Second, cybersecurity, or rather a breach thereof, enables disinformation. Cyberattacks and cyber ploys can be powerful tools for silencing or corrupting information sources and channels. In a scenario where a state regime conditions its domestic information landscape to showcase only polished, prepared messages, media outlets fighting for freedom of speech become targets. Different attacks can be imagined. To call an outlet’s reputation into question, or to propagate a message through its channels in order to reach its usual audience, the regime could hack into the outlet’s systems and post articles in its name. In a similar scenario, the regime could attempt to paralyze the outlet and its ability to reach its audience by preventing it from accessing its own systems (e.g. through a denial-of-service attack). Yet another possibility would be to hack the outlet’s database to retrieve information on journalists openly opposed to the regime, or on protected sources left unnamed in a dissident article: any information that would help identify and locate “problematic” people.
Moreover, cybersecurity breaches in the broad sense do not merely involve hacking. Recent years have seen a surge of “social media bots”, programmed to, among other things, spread disinformation. In an era where most people turn to social media as their primary source of information, malicious bot accounts are harmful. Their tactics include pretending to be human and faking user-to-user interactions, masquerading as famous people, and boosting false content and its perceived accuracy by liking, commenting, and sharing. The main cybersecurity risk here is failing to recognize the inauthenticity of users’ identities, which compromises the accuracy of the information landscape.
Efforts to identify the connections between cybersecurity and disinformation have not yet reached a satisfactory level. Identification then has to be followed by the integration of that knowledge into decision-making processes, policies, and laws. Beyond that, it requires implementation through concrete action.
One way to mitigate the risks born of the collision between cybersecurity and disinformation could be to recognize disinformation campaigns as cyberattacks. This proposition comes from EU DisinfoLab, an NGO focusing on disinformation in the EU. The rationale is that such recognition would allow for the “frameworks needed for effective attribution and dissuasion, by improving information sharing and situational awareness among stakeholders from both the disinformation and cybersecurity fields, as well as governments, the private sector, and the technical community.” However, this argument has a major shortcoming: disinformation is more than a cyber issue, and a disinformation campaign does not always involve a clear cyberattack.
In late 2021, the INGE Committee of the European Parliament issued a draft report containing a short state of play of the EU’s current inability to face cyber threats. Later in the text, the Committee affirmed that a coordinated EU strategy against foreign interference should include “cybersecurity and resilience against cyberattacks”. Various solutions were proposed, such as urging EU institutions to invest in digital capacities or welcoming the Commission’s proposals for a cybersecurity strategy. These would be solid plans if the threat were not already established and growing, and if each did not require years of development, and many more to be implemented by Member States.
Not only are concrete and immediate actions rarely, if ever, initiated, but the full scope of the issue also fails to be accurately recognized. The human factor, cited earlier, is a prime example: not everything is rooted in technology. More research is needed on the ways in which cyber threats and disinformation tap into basic behavioral and societal traits. The laws, policies, and tools created to fight disinformation and mitigate cyber threats must expand beyond their almost exclusive focus on technological and digital aspects. Governments seem quasi-obsessed with regulating social media companies and their algorithms, seeking an easy cause of, and remedy for, disinformation: a sempiternal attempt to make machines and software impenetrable. What is missing is an understanding of how human emotions are weaponized, and of the unexpected or indirect roles humans play in cyberattacks. It is the recognition that the human factor will always be a potential liability to security if not properly apprehended. Finally, it is educating citizens to think critically and to adopt basic digital hygiene.
*The main points raised in this blog post were originally presented by the author in a panel on Cybersecurity at the 2022 MEU Sofia*