Artificial Intelligence

Malware can embed artificial intelligence in its core business logic, using algorithms to detect patterns in system activity. Unusual patterns can change how the malware behaves: tightening or relaxing its evasion and stealth settings, or adjusting how often it communicates with its operators. Malware has relied on situational awareness for a long time, but Artificial Intelligence offers far more accurate and adaptable ways to achieve it.
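
As an illustration only, the sketch below shows how a simple statistical anomaly score over observed activity could switch behaviour between an "active" and a "dormant" mode. The feature set and the baseline numbers are made up for the example.

```python
# Minimal sketch (hypothetical features and numbers): adapt behaviour to
# an anomaly score computed over system-activity samples, e.g. process
# counts, network-connection rates and disk I/O observed over time.
import numpy as np

def anomaly_score(sample, baseline_mean, baseline_std):
    """Mean absolute z-score of the current sample against a baseline
    learned from earlier, 'quiet' observations."""
    z = (sample - baseline_mean) / (baseline_std + 1e-9)
    return float(np.mean(np.abs(z)))

# Baseline observed during quiet periods (illustrative values only).
baseline = np.array([[120, 3.0, 0.5], [130, 2.8, 0.4], [125, 3.1, 0.6]])
mean, std = baseline.mean(axis=0), baseline.std(axis=0)

current = np.array([240, 9.5, 4.0])   # e.g. a sandbox or analyst session
if anomaly_score(current, mean, std) > 3.0:
    mode = "dormant"   # raise stealth, stretch out communication intervals
else:
    mode = "active"
print(mode)
```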

DeepLocker was demonstrated at the Black Hat USA 2018 conference by researchers from IBM Security. It is proof-of-concept ransomware that stays encrypted and decides on its own when to strike, based on a face-recognition algorithm: the attack is not triggered until the camera recognises the intended target's face. Learn more in DeepLocker: How AI can be used to make a new kind of malware that is hard to detect.
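
The core trick is that the decryption key for the payload is never stored; it is derived from the model's output, so it only comes into existence when the right face is in front of the camera. Below is a conceptual sketch of such target-keyed concealment, where face_embedding() is a hypothetical stand-in for a real face-recognition model and the payload is a harmless placeholder.

```python
# Conceptual sketch of DeepLocker-style target-keyed concealment:
# the key exists only when a neural network produces the expected
# output for the intended target. face_embedding() is hypothetical.
import hashlib

def key_from_embedding(embedding, precision=1):
    """Derive a stable key by quantising the model output and hashing it.
    Quantisation absorbs small run-to-run variations in the embedding."""
    quantised = tuple(round(x, precision) for x in embedding)
    return hashlib.sha256(repr(quantised).encode()).digest()

def face_embedding(image):           # hypothetical model inference
    return [0.3, -1.2, 0.8]          # fixed vector for illustration

key = key_from_embedding(face_embedding("frame_from_camera.jpg"))
# In the PoC, 'key' would unlock the payload; any other face hashes to a
# different value, so static analysis never sees the key or the payload.
```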

Cyberattacks made easier by Artificial Intelligence

Artificial Intelligence algorithms do not have to run inside the malicious code on the victim's computer. Attackers also use AI in other parts of their environment and infrastructure, for example on the server side, where malware is generated or where stolen data is processed.

Information-stealing malware sends large amounts of personal information to the command and control (C&C) server, which then runs a natural language processing (NLP) algorithm to group and classify parts of the information as "interesting" (e.g. credit card numbers, passwords, confidential documents). Because no human has to review the data, the attacker can extract exactly what is needed, quickly and at scale.
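
A minimal sketch of such server-side triage, assuming scikit-learn and using toy training samples in place of real stolen data:

```python
# Sketch: a text classifier labels exfiltrated documents as 'interesting'
# or 'boring', so no human has to sift through the haul. The training
# samples below are toy placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "card number 4111 1111 1111 1111 exp 12/26",
    "username: alice password: hunter2",
    "quarterly financials - confidential draft",
    "weekly lunch menu for the cafeteria",
    "newsletter: ten tips for better sleep",
]
train_labels = ["interesting", "interesting", "interesting",
                "boring", "boring"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

# Documents labelled 'interesting' would be queued for closer review.
print(clf.predict(["login: bob password: s3cret",
                   "next week's lunch menu"]))
```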

The #TheFappening attack, in which celebrity photos stored on iCloud were leaked, is another example. Had this attack been assisted by AI, it could have happened on a much larger scale: computer-vision algorithms could scan millions of pictures and single out the ones containing celebrities.
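
For illustration, here is a sketch of such a filter. The embed() function is a hypothetical stand-in for a real face-embedding model (here it just returns deterministic random vectors, so no comparison will actually match; with a real model, photos of the same person produce highly similar vectors).

```python
# Sketch of scaling the photo hunt with computer vision: compare face
# embeddings of leaked images against embeddings of known targets.
import numpy as np

def embed(image_path):
    # Stand-in for a CNN that maps a face crop to a unit vector.
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def is_match(photo, reference, threshold=0.6):
    """Cosine similarity of unit vectors; above threshold = same person."""
    return float(embed(photo) @ embed(reference)) > threshold

targets = ["celebrity_a.jpg", "celebrity_b.jpg"]
for photo in ["dump/img_0001.jpg", "dump/img_0002.jpg"]:
    if any(is_match(photo, t) for t in targets):
        print("keep", photo)
```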

Adversarial attacks

"Malicious" Artificial Intelligence algorithms are used to stop "good" Artificial Intelligence algorithms from working. Attackers apply the same algorithms and techniques used in traditional machine learning, but this time to "break" or "reverse-engineer" the models inside security products.
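
One way such reverse-engineering can work is model extraction: query the black-box detector, record its verdicts, and train a local surrogate that mimics it. The sketch below uses toy data and a toy model standing in for a commercial product.

```python
# Sketch of model extraction: the attacker only sees inputs and the
# black box's verdicts, yet trains a surrogate that behaves like it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))            # hypothetical file features

# Toy stand-in for the vendor's detector (attacker cannot see inside).
black_box = LogisticRegression().fit(X[:250], (X[:250, 0] > 0).astype(int))

queries = rng.normal(size=(1000, 5))     # attacker-chosen probe inputs
verdicts = black_box.predict(queries)    # only the verdicts are visible

surrogate = DecisionTreeClassifier(max_depth=4).fit(queries, verdicts)
agreement = (surrogate.predict(X[250:]) == black_box.predict(X[250:])).mean()
print(f"surrogate agrees with black box on {agreement:.0%} of samples")
# A faithful surrogate can then be probed offline for evasive inputs.
```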

Breaking neural networks with adversarial attacks is another technique. A model trained to recognise a picture of a person can be fooled by adding carefully crafted artificial noise: the perturbed picture looks unchanged to a human, yet the model identifies it as someone else. There are also hundreds of cybersecurity products on the market that advertise AI and machine learning as protection for your business. What would happen if they were retrained to let attackers into your network? How ready would you be?
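
As a toy illustration of the principle, the sketch below attacks a hand-rolled linear "recogniser" rather than a real neural network; the weights and features are made up. Stepping the input against the sign of the gradient, the core idea behind the Fast Gradient Sign Method, flips the model's verdict.

```python
# Minimal gradient-sign attack on a toy logistic-regression 'recogniser':
# a signed perturbation of the input flips the predicted identity.
import numpy as np

w = np.array([1.5, -2.0, 0.7, 0.3])        # toy model weights
b = -0.1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # P(class = "Alice")

x = np.array([0.9, -0.4, 0.2, 0.5])        # 'image' features of Alice
print(f"before: P(Alice) = {predict(x):.2f}")   # ~0.91

# For a linear model, the gradient of the logit w.r.t. the input is
# simply w; stepping against its sign pushes towards the other class.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print(f"after:  P(Alice) = {predict(x_adv):.2f}")  # ~0.41, verdict flips
```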
