Misinformation and disinformation differ from typical cyber threats such as malware in two ways: who is behind the threat, and how it is made and disseminated. While some who develop disinformation are profiteers looking for financial gain, many are nation states, extremists, provocateurs, disgruntled former employees or even business competitors. Criminals can now produce and propagate disinformation with relative ease because there is no need to physically, or even virtually, infiltrate a country or business network: inauthentic content can simply be crafted in blogs, emails and social media posts.
Social media and memes
Memes are a popular format for disinformation: they’re fast and easy to make, appeal to a wide range of age groups and have high viral potential.1 False stories and memes proliferate on social media, where a particularly controversial post can generate maximum engagement (usually in the form of heated debate), build a larger following and then spread virally to other media channels. Notably, memes posted to social media let threat actors operate in a gray area, damaging a business's reputation without exposing themselves to the fallout of open conflict.
Disinformation can also take the form of forgeries, which typically feature fake letterheads, copied-and-pasted signatures and maliciously edited emails. To make forgeries seem more credible, threat actors claim the documents are “leaked” materials obtained through a hack, theft or other interception. They may also mix in legitimate content to lend authenticity to their messaging.
Because people are naturally more likely to believe what they see, synthetic media such as manipulated photos and audio and video deepfakes are especially convincing and dangerously effective. Without sophisticated software, it can be difficult to determine the authenticity of these forms of disinformation.
Another way cyber criminals develop disinformation is through proxy or fake websites. Proxy websites are fronts designed to disguise the source of content or use that content to drive pageviews. These sites often crop up after newsworthy events, playing on the public desire for more information. The most reliable way to distinguish a legitimate website from a proxy is to scrutinize the URL for misspellings or cross-check the site’s information against verifiable sources.
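The URL-scrutiny step above can be partially automated. As a minimal sketch (the trusted-domain list and the 0.8 threshold are illustrative assumptions, not from the source), the following compares a domain against known-good domains and flags close-but-not-exact matches, a common typosquatting pattern:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of domains the reader actually trusts.
KNOWN_GOOD = ["example.com", "news-site.org"]

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between two domain names (1.0 = identical)."""
    return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def flag_suspicious(domain: str, threshold: float = 0.8) -> list:
    """Return trusted domains that `domain` closely imitates without
    matching exactly -- e.g. 'examp1e.com' imitating 'example.com'."""
    return [
        t for t in KNOWN_GOOD
        if domain.lower() != t and lookalike_score(domain, t) >= threshold
    ]

print(flag_suspicious("examp1e.com"))  # lookalike: ['example.com']
print(flag_suspicious("example.com"))  # exact match, not flagged: []
```

A heuristic like this only narrows the field; cross-checking a site’s claims against verifiable sources remains the decisive test.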
To distribute their deceptive content, cyber criminals use a number of platforms and technology services. Content farms, also known as content mills, generate large quantities of low-quality web content engineered to rank highly in search results — a practice known as search engine optimization (SEO). While SEO is a legitimate marketing practice, churning out false content to target popular searches and drive advertising revenue is not. Disinformation and misinformation spread quickly through content farms, whose goal is to attract a high volume of web traffic at all costs.
Botnets are often used to amplify disinformation engagement on proxy websites and social media. A botnet, short for robot network, is a network of computers (bots) infected by malware and controlled by a single operator, who can command every bot to carry out a coordinated action simultaneously. Bots can generate fake social media and commenter profiles, making it difficult for the average user to tell the difference between bot and human. The sheer size of a botnet — some contain millions of bots — enables cyber criminals to manipulate public sentiment on a massive scale.
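One observable trace of this kind of amplification is many distinct accounts posting identical text within seconds of each other. As a crude, illustrative sketch (the sample posts, 60-second window and three-account minimum are assumptions, not from the source), the following flags such clusters:

```python
from collections import defaultdict

# Hypothetical post records: (account, text, unix_timestamp).
posts = [
    ("bot_01", "Company X is a scam!", 1000),
    ("bot_02", "Company X is a scam!", 1003),
    ("bot_03", "Company X is a scam!", 1007),
    ("alice",  "Had a great lunch today", 2000),
]

def coordinated_clusters(posts, window=60, min_accounts=3):
    """Group posts by identical text and flag texts posted by many
    distinct accounts inside a short time window -- a simple signal
    of botnet-style amplification."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))
    flagged = []
    for text, entries in by_text.items():
        accounts = {a for a, _ in entries}
        times = [t for _, t in entries]
        if len(accounts) >= min_accounts and max(times) - min(times) <= window:
            flagged.append(text)
    return flagged

print(coordinated_clusters(posts))  # ['Company X is a scam!']
```

Real botnets vary wording and timing to evade exactly this kind of check, so production detection relies on richer behavioral signals; the sketch only shows the basic coordination pattern.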
Finally, disinformation-as-a-service (DaaS) models are now popping up to assist in creating faux social media identities and using them to either boost a reputation through fake reviews, testimonials and news stories, or tarnish one using the same methods. DaaS can target both individuals and organizations, and it’s often fairly inexpensive, with campaigns ranging from under $100 to $100,000-plus. DaaS agents have emerged in many countries, and they routinely advertise to the private sector.