18 August 2021

AI as both a line of defense and a new threat


Artificial intelligence // Artificial intelligence is becoming an increasingly popular tool for detecting risks early and combating threats. Systems such as anomaly detection and behavioral biometrics proactively identify potential attacks or risks and can also be used to fight back against social engineering. But it’s not just the defenders who turn to AI; attackers aren’t shy about using the technology either.

Transaction environments, particularly in finance, are heavily monitored for fraud today through a series of policies and rules. We know, for example, that a credit card can’t be used in two places at once, or in quick succession in two completely different parts of the world. These kinds of transactions are classified as potential fraud and automatically rejected. Many of these rules are fairly obvious and can therefore be programmed into rule frameworks in a meaningful way. However, these frameworks only ever improve reactively: a rule is programmed into the system only after a new type of fraud has been uncovered, to stop it from happening again.
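To make this rule-based approach concrete, here is a minimal sketch of an “impossible travel” check. All names and the speed threshold are illustrative assumptions, not part of any actual fraud engine:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Transaction:
    card_id: str
    timestamp: float  # Unix epoch, seconds
    lat: float
    lon: float

def distance_km(a: Transaction, b: Transaction) -> float:
    """Great-circle distance between two transactions (haversine formula)."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

MAX_SPEED_KMH = 900  # roughly the speed of a commercial airliner (illustrative)

def is_impossible_travel(prev: Transaction, curr: Transaction) -> bool:
    """Flag a card used in two places faster than anyone could travel."""
    hours = (curr.timestamp - prev.timestamp) / 3600
    if hours <= 0:
        return distance_km(prev, curr) > 0  # simultaneous use in two different places
    return distance_km(prev, curr) / hours > MAX_SPEED_KMH

zurich = Transaction("c1", 0.0, 47.37, 8.54)
tokyo = Transaction("c1", 3600.0, 35.68, 139.69)  # one hour later
print(is_impossible_travel(zurich, tokyo))  # True: roughly 9,600 km in one hour
```

The catch is exactly the one described above: someone had to think of this rule and write it down first.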
In this instance, the relatively new approach of anomaly detection, which is based on artificial intelligence, can offer a remedy. In anomaly detection, a system learns the “normal” behavior of a card holder or an entire user group by analyzing large amounts of data. If there is any deviation from this standard behavior, the system raises an alarm. There is, however, no fixed definition of what exactly constitutes a deviation; it is simply anything that falls outside the learned norm.
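An anomaly detector needs no hand-written rule for this. A minimal sketch using scikit-learn’s IsolationForest, with purely illustrative features and a synthetic “normal” history standing in for real card-holder data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Illustrative features per transaction: amount, hour of day, distance from home (km)
amounts = rng.normal(40, 15, size=(500, 1)).clip(1)
hours = rng.normal(14, 3, size=(500, 1)).clip(0, 23)
dists = rng.exponential(2.0, size=(500, 1))
history = np.hstack([amounts, hours, dists])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history)  # learns the card holder's "normal" behavior

new_tx = np.array([[3200.0, 3, 8500.0]])  # large amount, 3 a.m., far from home
if model.predict(new_tx)[0] == -1:        # -1 marks an anomaly
    print("Deviation from normal behavior – flag for review")
```

Nothing in this sketch encodes what fraud looks like; the model only knows what normal looks like, and flags everything else.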

Anomaly detection: A system for various applications
Network and infrastructure security
In network analysis and security, we often work with exclusion lists (so-called blacklists). Because of the sheer amount of data, identifying suspicious activity in a network is so difficult that log files are often only examined more closely after an attack has been detected. This is where anomaly detection comes in. A machine can process a high volume of data within a very short time to pinpoint suspicious, i.e. unusual, activity or connection attempts. Attempts to conceal activity with proxies, VPNs or Tor networks can be detected more easily and, in many cases, averted through preventative measures. The first reaction is to block access immediately. A person then checks the activity and, if necessary, reverses the machine’s decision. Through this feedback loop, the person remains the one who decides, and the machine learns from each decision and continues to improve.
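A minimal sketch of this feedback loop, again with IsolationForest over hypothetical connection features; the blocking and analyst-review steps are placeholders for whatever a real security operations setup would do:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical per-connection features: bytes sent, bytes received, duration (s)
baseline = rng.normal([2_000, 8_000, 30], [500, 2_000, 10], size=(1_000, 3))
detector = IsolationForest(contamination=0.005, random_state=1).fit(baseline)

feedback = []  # (features, analyst verdict) pairs used to retrain later

def analyst_review(conn) -> str:
    """Placeholder: in practice a person inspects the blocked connection."""
    return "benign"

def handle_connection(conn: np.ndarray) -> None:
    if detector.predict(conn.reshape(1, -1))[0] == -1:
        print("blocked:", conn)              # first reaction: block immediately
        verdict = analyst_review(conn)       # a person stays in the loop
        if verdict == "benign":
            print("unblocked after review")  # the human reverses the decision
        feedback.append((conn, verdict))     # feeds the next training round

handle_connection(np.array([500_000.0, 150_000.0, 2.0]))  # unusually large transfer
```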

Mechanical security and IoT
Sensors have long been used to monitor mechanical systems. As with the rule frameworks in the finance world, fixed thresholds are programmed in; these enable the system to sound a warning once something has overheated or failed. In complex and expensive systems such as turbines or power stations, a variety of sensors serve as early warning systems before damage occurs or a risk arises, rather than only once a threshold has been crossed and the damage is already done. These predictive maintenance systems can also be based on anomaly detection. There is no need to program in fixed thresholds; instead, the system independently recognizes when certain functions of the monitored mechanical system deviate from the norm.
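A deliberately simple illustration of the principle: instead of a hard-coded threshold, the norm is estimated from the machine’s own operating history. A real predictive maintenance system would use multivariate models over many sensors, but the idea is the same:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical turbine bearing temperatures (°C), sampled once per minute
normal = rng.normal(65.0, 1.2, size=10_000)   # learned operating history
mean, std = normal.mean(), normal.std()       # the "norm" is estimated, not programmed

def is_anomalous(reading: float, z_limit: float = 4.0) -> bool:
    """Flag readings that deviate strongly from the learned norm."""
    return abs(reading - mean) / std > z_limit

for minute, reading in enumerate([65.3, 66.1, 71.8]):  # slow drift before a failure
    if is_anomalous(reading):
        print(f"minute {minute}: {reading} °C deviates from the norm – schedule maintenance")
```

Note that 71.8 °C triggers the alarm not because anyone decided it is “too hot”, but because it is far outside everything the system has ever observed.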

Using behavioral biometrics to combat social engineering
The most effective way to infiltrate a system or access information is not through technology, but through people. According to German Wikipedia, social engineering is “people exerting their influence over others with the aim of prompting them to behave in a certain way, for example to get them to disclose confidential information, purchase a product or approve funding.” This method deliberately exploits people’s good faith, credulity and willingness to help. To date, however, there haven’t been many ways to defend against this, other than vigilance and a healthy sense of mistrust.
But all that is changing with the application of artificial intelligence. If an attacker manages to access someone’s log-in details, for example, and thus adopt their identity in a system context, a machine can ascertain relatively quickly that this person is behaving differently than usual. By analyzing behavior patterns such as click and typing behavior, scrolling speed and reading flow, it is possible to build up a picture of a user’s normal behavior. If there is a deviation from this behavior, additional security mechanisms can then be applied to verify the person’s identity: caller ID, voice recognition, iris recognition, vein pattern detection, fingerprint or facial scans. Thanks to behavioral biometrics, these otherwise tiresome security measures don’t have to be applied as often, as the system only requests these factors in case of doubt.
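A minimal sketch of such a step-up decision, with entirely hypothetical behavioral features and thresholds:

```python
import numpy as np

# Hypothetical behavioral profile: mean inter-keystroke interval (ms),
# scroll speed (px/s), average dwell time per page (s) – learned from past sessions
profile_mean = np.array([180.0, 420.0, 35.0])
profile_std = np.array([25.0, 90.0, 10.0])

def session_deviates(session: np.ndarray, z_limit: float = 3.0) -> bool:
    """True if the session's behavior pattern departs from the user's norm."""
    z = np.abs(session - profile_mean) / profile_std
    return bool(z.max() > z_limit)

session = np.array([95.0, 1200.0, 4.0])  # fast, erratic behavior after a stolen login
if session_deviates(session):
    print("Deviation detected – request an additional factor (e.g. face or fingerprint)")
else:
    print("Behavior matches profile – no extra friction for the user")
```

The design point is the last line: as long as behavior matches the profile, the user never sees the additional security factors at all.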

Another early warning system based on artificial intelligence makes use of Natural Language Processing (NLP). Here, a system analyzes and interprets the language used in communication and is then able to identify potential abuse such as cyberbullying or sexting.
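As a toy illustration of the idea, a tiny text classifier in scikit-learn; a production system would be trained on a large labeled corpus rather than four example sentences:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data – purely illustrative
texts = [
    "have a great weekend", "thanks for your help",
    "nobody likes you, just quit", "you are worthless and everyone knows it",
]
labels = [0, 0, 1, 1]  # 0 = harmless, 1 = potentially abusive

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

message = "you are worthless"
if clf.predict([message])[0] == 1:
    print("Potential abuse detected – escalate for review")
```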

AI in the future: The better the defense, the better the attacker
When algorithms come up against each other and are able to learn from one another, we find ourselves in something of a dystopian scenario. But this is exactly how deepfakes are created. They are based on generative adversarial networks (GANs): over many iterations, one algorithm (the generator) tries to create a fake and then receives feedback from a second algorithm (the discriminator) on how good it was. Over time, the generator learns how to elicit a better response from the discriminator, and its fakes become increasingly convincing. Detecting these deepfakes therefore requires a different approach. This is where research currently stands, and ti&m has been able to make valuable contributions in this field. Working alongside our AI specialists, we harness the potential of artificial intelligence, boosting IT security for our clients, among other things.
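The generator-discriminator loop can be sketched in a few lines of PyTorch. This toy example learns to mimic a one-dimensional Gaussian rather than faces or voices, but the adversarial feedback mechanism is exactly the one described above:

```python
import torch
from torch import nn

# "Real" data the generator must learn to imitate: samples from N(4, 1.25)
def real_data(n: int) -> torch.Tensor:
    return torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()) # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2_000):
    # 1) Discriminator: learn to tell real samples from the generator's fakes
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # 2) Generator: use the discriminator's feedback to make better fakes
    fake = G(torch.randn(64, 8))
    loss_G = bce(D(fake), torch.ones(64, 1))  # "make the fake look real"
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

print(G(torch.randn(1000, 8)).mean().item())  # should approach 4.0 as training succeeds
```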


Pascal Wyss

Pascal Wyss began his career as a software developer in the finance industry. After eight years, he became a consultant for digitalization, helping companies align their services and products with the needs of digital customers. Since 2019, he has led the AI Competence Center at ti&m.

Björn Sörensen

Over the last ten years, Björn Sörensen has held several leadership roles in the finance industry, specializing in major software development programs and digitalization. At ti&m, he leads the Innovation Department, focusing on AI, cloud, blockchain, and design. He also leads the IT department.