If you enjoy the information in the Hacked Off podcast, but would prefer it in a different format, you’ve come to the right place! In this week’s episode, we’re taking a look at the history of malware, so click the link below or read through for our insights.

Holly Grace Williams: On this week’s episode of Hacked Off, I talked about my love-hate relationship with malicious software. Whilst it’s technically fascinating, its impact on businesses can be significant. In this post I wanted to offer the same information in a different format, and also add some links to the resources I referenced in the podcast itself. Firstly, for the most part I’m going to talk about malicious software using the shorthand “malware”. However, there are more specific terms: virus, trojan, worm, ransomware, hacking tool. I’m going to avoid these terms as they’re so often misused or corrupted – but for the sake of completeness, here’s a quick overview:

Ransomware is any malicious software intended to withhold access to a system or its data from the user – most often through file encryption – and to demand a monetary ransom to re-establish access.

Viruses are generally considered to be malicious software that spreads by attaching itself to other files and programs (or “infecting” them, if you prefer) and is therefore spread manually: if you share the infected files between devices, the virus spreads organically.

Worms are similar to viruses in that they propagate, but the method is different. Worms self-propagate, often over network connections, jumping from device to device through something like an open share or a software exploit.

Trojans pose as other software to trick the user into downloading or sharing them – for example, they could pose as a game or another legitimate piece of software – but when they are executed, they do their damage. Many trojans simply give attackers a backdoor into the system, hence the term RAT, for Remote Access Trojan.

You’ll occasionally see RAT expanded in a different way, to mean Remote Administration Tool, and this is a good time to talk about dual-use software. The term “dual-use technology” originated with the military, for technology which has a legitimate benefit to civilians but also a direct military application. The go-to example is the Global Positioning System (GPS), which civilians use through their car SatNav but which also has military applications, such as guiding missiles.

In this context, when I say “dual-use” I instead mean tools which have legitimate purposes for things like system administration but may also be used by criminal attackers aiming to compromise devices. A remote administration tool is useful regardless of whether you own the device you’re administering or you’ve compromised it through a successful attack. There are also tools to consider that are useful to, and marketed towards, professional penetration testers, which are also useful to criminal attackers. A good example here might be something like Metasploit, used by professional penetration testers, bug bounty hunters, and no doubt the odd cybercriminal too.

There is an interesting point of law around these dual-use technologies: the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies, which covers not only conventional arms but, since December 2013, newer technologies such as “intrusion software”.

That said, looking into the history of malware, we find that it’s been around for a surprisingly long time. The oldest reference I could find to a malicious software attack (as opposed to something like an attack against a cryptographic system) is an alleged incident from 1969, in which somebody supposedly loaded a program into the University of Washington’s Computer Center that replicated itself endlessly until all system resources had been consumed and the system became unavailable. Interestingly, the UW’s own “A History of IT at the UW” makes no mention of such an attack – although it is documented on catb.org, the site famous for hosting the Hacker Jargon File.

Another “attack” of sorts was the Creeper, created in 1971. The original version was a simple program which moved through the ARPANET (the precursor to the modern internet) displaying the message “I’m the creeper, catch me if you can”. A later version copied itself to each device, instead of moving from device to device. Back in those days it infected PDP-10 machines, quite different to the modern home PC. Creeper is often referred to as “the first computer virus”, which I’m sure will cause the pedants some difficulty – was it really a virus, or more of a worm, since it self-propagated?

Jump forward to 1989 and we see, as far as my research can tell, the first ransomware: the “AIDS Trojan”, as it is commonly called (another one for the pedants – was it really a trojan?). It was a simple ransomware, spread on physical floppy disks, and it demanded a payment of $189 by banker’s draft to a PO box in Panama. Retro. That was, of course, before the convenience of cryptocurrencies like Bitcoin – Bitcoin being introduced in 2009.

I mentioned dual-use technology earlier, which could include certain tools used legitimately by network administrators or penetration testers, but it can also include tools such as email flooding tools. These tools pose the threat of denial-of-service: fill a user’s email inbox with junk mail and they won’t be able to use it for legitimate emails. Even this kind of attack has been around for a seriously long time.

Interestingly, I’ve heard several people say words to the effect that denial-of-service attacks weren’t explicitly illegal until 2006, when the Police and Justice Act amended the Computer Misuse Act to include: “Unauthorised Acts with intent to impair, or with recklessness as to impairing the operation of a computer”. People read “impairing” a computer system as enshrining in law that denial-of-service attacks are a crime – however, such attacks were prosecuted prior to this change. Now, I’m not a lawyer, so consider this just an interesting note for further reading – a relevant case in this context is DPP v Lennon [2006], which was heard before the amendments of the Police and Justice Act. The short story is that in January 2004, Lennon downloaded a program which allowed him to send an estimated 5,000,000 emails to his former employer, causing a “bit of a mess up”. Ultimately, Lennon was charged and sentenced to a two-month curfew enforced by electronic tag.

Malware has been around for a long time – since the 1960s, or the very early 1970s – so what’s changed? Something I predict over the next few years is an increase in levels of propagation: that is, malware that gains a foothold in a corporate network and then spreads through the network, infecting large numbers of machines as it goes. In the ransomware game, prior to the last couple of years, we hadn’t seen huge amounts of this type of propagation. I think the point at which I personally decided this was when NotPetya happened to me.

I say happened to me because, while it’s very common to hear about malware attacks on the news, NotPetya was one I was personally involved in responding to – and as anyone who has read up on the details of the attack knows, it was very effective.

I was asked to help a customer who reported that they had been hit by ransomware. Now, my experience of responding to incidents at the time was that the initial report is often missing information, and often incorrect. Hearing that a customer thought they’d been hit by ransomware didn’t necessarily mean that they had – for example, I was previously called to a “ransomware attack” which turned out to be a crashed machine displaying the “blue screen of death” common to Windows machines, and not ransomware at all.

This time around, though, from the reception area of the customer’s office I could see many, many machines displaying the ransomware’s “Ooops, your files are encrypted” message. I knew this was ransomware, and that it was going to be bad. At the time it was common to get called to an infection affecting a single-digit number of machines; NotPetya hit several companies far harder – Maersk, for example, reported having to rebuild 4,000 servers and 45,000 PCs.

For those unfamiliar with the details of the NotPetya attack, I’m going to gloss over the whole nation-state involvement and the fact that it only presented as ransomware, being widely reported as a system wiper instead. My point here is simply that NotPetya propagated, which makes it distinct from an awful lot of other ransomware – which we often see infecting a machine from an email attachment and going no further.

NotPetya propagated using two main methods. The first was the same exploit used by WannaCry, often called MS17-010 (which is actually the name of the patch that fixes it) or sometimes EternalBlue (the name it was given when it was originally released from the NSA toolkit). NotPetya also spread by extracting credentials from a system and reusing them on other systems – effective if domain administrative credentials are stolen, or if passwords are reused. That sounds awfully similar to the hacking tool Mimikatz, which dates from what, 2012? So we have WannaCry and NotPetya using self-propagation and showing that this can be incredibly effective, plus we have ransomware such as SamSam taking a more manual approach, more like how an attacker might operate on a pentest – its operators exploiting several issues, from simple password bruteforcing to weaknesses in JBoss. Symantec report that the SamSam attackers used the actual Mimikatz tool rather than just a similar technique. They also used PsExec, a Microsoft Sysinternals remote administration tool commonly used on penetration tests as well as for network administration.

With automated propagation being demonstrated to be incredibly effective, and manual exploitation allowing attacks to be well executed – it may be the case that we see increasing numbers of attacks of this nature in the future.

Having talked this long about malicious software, though, I’m sure someone out there is screaming to themselves “but we have anti-virus software!” – or “anti-malware”, as it’s more recently called, due to the aforementioned pedantry. However, hopefully you’re at least somewhat aware that these solutions are never perfect.

One thing I don’t often see discussed, though, is how these solutions are imperfect. Many organisations just consider them black boxes of security: install and forget. So here are a couple of things worth mentioning, and a simplified overview to consider.

The first is how these systems work – I often hear confusion around this. It’s often said that anti-malware systems work on signatures of malware, but there isn’t always a great deal of detail about that, and people often talk about signatures as if they’re the only detection method. I’d broadly categorise malware detection into signature detection engines and behavioural analysis engines. There’s also machine learning to consider, but I’ll save that for its own post.

Firstly, signatures work broadly as people expect: patterns in the binary of the malicious file are recorded once the file is known to be malicious. For example, a malware analyst may take a look at a file, come to the decision that it’s bad, and then record details about that file, such that if it’s seen in the future it’s detected as bad.
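To make that concrete, here’s a toy sketch of a signature engine: it checks a file against known-bad hashes and known-bad byte patterns. The hash and the patterns here are invented for the example (real engines use far richer signature formats, such as YARA rules), and this is an illustration of the idea rather than anything resembling a production scanner.

```python
# Toy signature engine: flag a file if its hash or any byte pattern
# matches a "known bad" database. Signatures below are hypothetical.
import hashlib
from pathlib import Path

BAD_HASHES = {
    # hypothetical SHA-256 of a previously-analysed malicious sample
    "0000000000000000000000000000000000000000000000000000000000000000",
}
BAD_PATTERNS = [
    b"MALICIOUS_MARKER",  # hypothetical byte pattern an analyst recorded
]

def scan(path: str) -> bool:
    """Return True if the file matches a known-bad hash or byte pattern."""
    data = Path(path).read_bytes()
    if hashlib.sha256(data).hexdigest() in BAD_HASHES:
        return True
    return any(pattern in data for pattern in BAD_PATTERNS)
```

The hash lookup catches exact copies of a known sample; the byte patterns catch files that embed a recorded fragment even if the rest of the file differs.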

This method of detection can often be defeated quite simply, through obfuscation, packing, or crypting. These are methods of hiding the structure of the file such that signatures can’t detect it. For example, crypting is the art of encrypting a binary file and packing it with a decryption stub: by encrypting the code you hide its contents from the signature engine, and therefore remain undetected.
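A minimal illustration of why this works: even a single-byte XOR “crypter” changes every byte of the payload, so a byte-pattern signature recorded against the original no longer appears in the packed file. (A real crypter would also bundle a decryption stub to restore the payload at run time; the marker below is invented for the example.)

```python
# Single-byte XOR "packing": every byte changes, so a signature recorded
# against the original payload no longer matches the packed bytes.
def xor_pack(payload: bytes, key: int = 0x5A) -> bytes:
    return bytes(b ^ key for b in payload)

signature = b"MALICIOUS_MARKER"            # pattern a signature engine knows
payload = b"...code..." + signature + b"...more code..."

packed = xor_pack(payload)
assert signature in payload                 # detectable in the plain payload
assert signature not in packed              # invisible once packed
assert xor_pack(packed) == payload          # XOR is its own inverse (the stub)
```

The same property that hides the payload (every byte transformed) is what the bundled stub relies on to recover it before execution.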

If signature engines are so easily fallible, then why do we use them? Certainly, if we have a better alternative in behavioural analysis, why don’t we just use that? In short, signature engines are fast – or, more accurately, they consume few resources – and they’ll get a positive hit on bad things they’ve seen before. Because they’re so much lighter than behavioural engines, they’re worth running first: a positive hit avoids the need to run the more resource-intensive behavioural engine at all.
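That tiering can be sketched as a simple pipeline: try the cheap check first, and only fall back to the expensive one when it misses. Both engines below are stand-ins invented for the example – the “behavioural” step just simulates the cost and outcome of sandboxed execution.

```python
# Tiered scanning: cheap signature check first, expensive behavioural
# analysis only for samples the signatures have never seen.
import time

KNOWN_BAD = {b"MALICIOUS_MARKER"}  # hypothetical signature database

def signature_scan(sample: bytes) -> bool:
    return any(sig in sample for sig in KNOWN_BAD)

def behavioural_scan(sample: bytes) -> bool:
    time.sleep(0.01)  # stands in for costly sandboxed execution
    return b"deletes_files" in sample  # pretend observed-behaviour check

def scan(sample: bytes) -> bool:
    if signature_scan(sample):       # fast path: previously-seen malware
        return True
    return behavioural_scan(sample)  # slow path: never-seen-before samples
```

In a real product the slow path is orders of magnitude more expensive than a sleep, which is exactly why the signature hit is worth taking first.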

Behavioural engines work through sandboxing – virtualising or emulating an actual system and executing the suspicious software on it to see how it behaves. If it does bad things, mark it as bad. It doesn’t matter whether a malware analyst has pulled it apart, whether you’ve seen it before, or whether it’s known bad. A much better approach than signatures.

The problem here, though, is: what if we can lie to the scanner? What if we can program our malware in such a way that it knows when it is being scanned and does good things, and only when it is sure that it’s not within the scanner does it do the bad things? That is very often possible. It’s certainly a lot more effort on the part of the attacker, but it may well be worth it.

This can work both ways. The first is that you could detect something unique about the scanner – if you see it, you’re being scanned. Alternatively, you could discover something unique about the target environment which must be present for your malicious code to run but wouldn’t be present within the scanner. I’ll leave this latter one aside for now, as it’s less useful for things like ransomware – but it’s worth mentioning because it’s quite useful for penetration testing, where you can do a little intelligence gathering and then tune your malware for the target.

However, say you’re a ransomware author and that approach doesn’t work for you: how could you learn something unique about a scanning engine to allow you to avoid detection? Well, I’ve talked previously about my fun with online scanning engines like VirusTotal, but I’m not the only person to research this area over the last few years – for example, the AVLeak team spoke back in 2016 about extracting scanner artefacts from common desktop AV engines.

Examples of the unique things you can detect that are specific to the scanning engine can be as simple as the name of the running user, the presence of unusual registry keys or files on the system, or implementation weaknesses in the emulated system – such as a system API not being present. These methods of anti-malware evasion have been shown to be effective.
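The checks described above can be sketched in a few lines. This is purely illustrative: the usernames and thresholds are invented for the example (published research such as AVLeak catalogues real artefacts of this kind), and it shows the shape of the probing, not any particular scanner’s weaknesses.

```python
# Illustrative sandbox/artefact probing: look for telltale signs that the
# code is running inside an analysis environment rather than a real host.
# All names and thresholds here are hypothetical.
import getpass
import os
import shutil

SANDBOX_USERNAMES = {"sandbox", "malware", "virus", "sample"}  # hypothetical

def looks_like_sandbox() -> bool:
    try:
        user = getpass.getuser().lower()
    except Exception:
        user = ""                       # no resolvable user is itself unusual
    if user in SANDBOX_USERNAMES:
        return True                     # scanner runs under a telltale account
    if (os.cpu_count() or 0) < 2:
        return True                     # emulated systems often get one CPU
    if shutil.disk_usage("/").total < 40 * 1024**3:
        return True                     # tiny disks are common in sandboxes
    return False
```

Malware built this way behaves benignly whenever such a check fires, which is why behavioural engines go to some lengths to make their emulated systems look like ordinary desktops.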

In summary, malware has been around for a long time, recent years have shown that propagation can significantly increase the business impact from malware, and anti-malware alone is not enough.

