For years, digital security followed a simple strategy: Learn from past events where fraudsters successfully compromised systems and build tactical defences.

But as threats multiply online, old defences such as siloed monitoring systems are no longer enough—especially in financial services. Customers love the ease of mobile wallets, banking apps and the ability to wire money anywhere to anyone at any time. Yet the more customers rely on these services, the greater the risk of fraud. Security breaches, fraud attacks and poor controls erode customer trust and damage a bank’s revenue and brand reputation.


To respond, financial companies have turned to data and analytics—using predictive algorithms and machine learning to make sense of their information and sift it for signs of fraud.

“The credit card fraud prevention industry has been at the forefront in terms of adopting machine learning,” says David Stewart, banking industry director for SAS Fraud and Security Intelligence. In the last few years, financial institutions, working with technology firms such as SAS, have expanded machine learning to gain a cross-channel view of behaviour, often leveraging additional digital information about devices, geolocation and biometric authentication.

Decisions about whether to approve or deny a digital payment are no longer based only on its amount, time, date and merchant. Now, banks’ systems are looking at what’s normal for a customer. “One of our clients is saving an additional $20 million a year in fraud losses,” says Mr Stewart, “because we’re scoring the risk of the customer at the same time that we’re scoring the risk of the transaction.”
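
A minimal sketch of that dual scoring, with hypothetical features, weights and thresholds rather than SAS’s actual model: the customer and the transaction are scored at the same time, then the two scores are blended before the approve/decline decision.

```python
# Sketch: blend a customer-level risk score with a transaction-level score.
# Feature names, weights and thresholds are illustrative, not a real model.

def customer_risk(profile: dict) -> float:
    """How unusual is recent behaviour for this customer? 0 = normal, 1 = very risky."""
    score = 0.0
    if profile.get("new_device"):
        score += 0.3
    if profile.get("unfamiliar_login_location"):
        score += 0.3
    if profile.get("recent_credential_reset"):
        score += 0.2
    return min(score, 1.0)

def transaction_risk(txn: dict) -> float:
    """How risky is the payment itself: amount, payee history, destination?"""
    score = 0.0
    if txn["amount"] > 10_000:
        score += 0.4
    if txn.get("new_payee"):
        score += 0.3
    if txn.get("high_risk_country"):
        score += 0.3
    return min(score, 1.0)

def decide(profile: dict, txn: dict, threshold: float = 0.6) -> str:
    # Score the customer and the transaction at the same time, then blend.
    combined = 0.5 * customer_risk(profile) + 0.5 * transaction_risk(txn)
    return "decline_and_review" if combined >= threshold else "approve"

print(decide({"new_device": True, "unfamiliar_login_location": True},
             {"amount": 15_000, "new_payee": True}))   # -> decline_and_review
```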

Banks now use algorithms and machine learning to profile how their customers are paying and moving money, asking a wide variety of questions. If customers use a banking app, are they doing so through the same device and operating system as they usually do? Are customers sending a lot of ACH or SWIFT transactions to higher-risk countries? Are small-value transactions popping up in places the customer hasn’t been before? And, finally, how many of these risk factors are present? “One of them individually might only affect the score,” says Mr Stewart. “But if we start to see this pattern of behaviour—large payments to new payees, high-risk geographies—then it starts to look like we probably need to take action to suspend activity and forward this for further investigation to protect our client.” Thanks to algorithms and machine learning, computers often make those decisions in 30 to 40 milliseconds, in time to approve or decline a payment.
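
A minimal sketch of how such factors might combine, with illustrative flags and thresholds: a single hit only nudges the score, while several together trigger a suspension. Because the check amounts to a handful of lookups and comparisons, it can run comfortably within the millisecond budget described above.

```python
# Sketch: individual risk factors nudge the score; several together trigger action.
# Flags and thresholds are illustrative only.

RISK_FLAGS = [
    "unfamiliar_device",        # different device/OS than usual
    "high_risk_destination",    # ACH/SWIFT to a higher-risk country
    "unusual_location",         # small transactions in places never seen before
    "new_payee_large_payment",  # large payment to a payee never paid before
]

def evaluate(activity: dict) -> str:
    hits = [flag for flag in RISK_FLAGS if activity.get(flag)]
    if len(hits) >= 3:
        return "suspend_and_investigate"   # a pattern of behaviour, not a one-off
    if len(hits) >= 1:
        return "raise_score"               # a single factor only affects the score
    return "normal"

print(evaluate({"unfamiliar_device": True,
                "high_risk_destination": True,
                "new_payee_large_payment": True}))   # -> suspend_and_investigate
```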

Money laundering

Data and analytics are also helping financial firms more effectively fight money laundering, which is a growing threat as more connectivity is introduced to banking. “Firms are concerned with all these new and different technologies for holding and storing funds. You can store money everywhere now—in a cloud or on your phone,” says John Gill, vice president of education for the Association of Certified Fraud Examiners. “These technologies make it easier for fraud or money laundering to happen. There’s a whole underground connected network of fund movement, transactions and accounts. Bitcoin and a lot of those digital currencies keep people up at night because it really is not regulated.”

And generally speaking, complying with governments’ anti-money laundering regulations is a huge expense: US financial firms spend $25 billion a year on it, while European firms spend $83 billion a year, according to surveys by LexisNexis Risk Solutions. But firms that take layered approaches to the hunt for suspicious transactions—approaches that often include machine learning or artificial intelligence analysis—lower the cost of compliance compared with firms that handle all the required reporting manually, the survey found.

“We have some advanced clients that recognize that, if they layer analytics on top of all these individual rules, they can start to build a matrix of customers, parties or counterparties that hit across multiple behaviours,” says Mr Stewart. “Instead of looking at a transaction or a series of transactions, they can look at their customers’ behaviour over a month, or six months or 13 months, and determine: Is that suspicious enough that we should investigate further?”
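
A minimal sketch of that layering idea, using hypothetical rule names and sample data: instead of judging each transaction alone, count how many distinct suspicious behaviours each customer has hit inside a look-back window, and escalate the customers who trip several.

```python
# Sketch: layer analytics over individual rule hits to see which customers
# trip multiple behaviours across a longer window (illustrative only).

from collections import defaultdict
from datetime import date, timedelta

# Each rule hit: (customer_id, rule_name, date_of_hit) -- hypothetical sample data.
rule_hits = [
    ("cust_001", "structuring_under_threshold", date(2024, 1, 5)),
    ("cust_001", "rapid_in_out_movement",       date(2024, 2, 17)),
    ("cust_001", "high_risk_geography",         date(2024, 4, 2)),
    ("cust_002", "high_risk_geography",         date(2024, 3, 9)),
]

def behaviour_matrix(hits, window_days=180, as_of=date(2024, 6, 30)):
    """Count the distinct rule types each customer hit inside the look-back window."""
    cutoff = as_of - timedelta(days=window_days)
    matrix = defaultdict(set)
    for customer, rule, hit_date in hits:
        if hit_date >= cutoff:
            matrix[customer].add(rule)
    return matrix

for customer, rules in behaviour_matrix(rule_hits).items():
    if len(rules) >= 2:   # hits across multiple behaviours -> escalate for review
        print(customer, sorted(rules))
```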

Analytics can help firms identify people and entities that are subject to international economic sanctions. “Sanctions screening is an imprecise science,” explains Ian Holmes, the global lead for enterprise fraud solutions with SAS Fraud and Security Intelligence. “A lot of textual data needs to be addressed.” Data and analytics can help compare names, addresses and dates of birth with those of people on sanctions lists. Because many new electronic payment methods complete transactions faster than ever, “that is becoming more critical to do in real time”, says Mr Holmes.
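
A minimal sketch of fuzzy sanctions screening, using Python’s standard-library SequenceMatcher and made-up list entries; a production engine would also normalise transliterations, aliases, addresses and dates of birth, and would need to run within a real-time payment window.

```python
# Sketch: fuzzy screening of a payment party against a sanctions list using the
# standard library's SequenceMatcher. List entries here are hypothetical.

from difflib import SequenceMatcher

SANCTIONS_LIST = [
    {"name": "IVAN PETROVICH SIDOROV", "dob": "1968-04-12"},
    {"name": "ACME TRADING FZE",       "dob": None},
]

def screen(party_name, party_dob=None, threshold=0.85):
    party_name = party_name.upper().strip()
    matches = []
    for entry in SANCTIONS_LIST:
        similarity = SequenceMatcher(None, party_name, entry["name"]).ratio()
        dob_match = party_dob is not None and party_dob == entry["dob"]
        # A strong name match alone, or a decent name match plus a matching DOB, is a hit.
        if similarity >= threshold or (similarity >= 0.7 and dob_match):
            matches.append((entry["name"], round(similarity, 2)))
    return matches   # non-empty -> hold the payment for review

print(screen("Ivan P. Sidorov", "1968-04-12"))
```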

False positives are high in the hunt for money laundering: Typically, the vast majority of transactions that don’t pass initial sniff tests turn out to be benign. Analytics can help improve those daunting numbers. “Our clients are using analytics to improve their false positive rates, from 99 of every 100 is a false positive to maybe 50 of every 100 is a false positive in the money laundering space,” says Mr Stewart.
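
A rough illustration of what that shift means for analyst workload, assuming a hypothetical ten genuinely suspicious cases a day:

```python
# Rough illustration: how many alerts analysts must clear to find the same
# number of truly suspicious cases at different false-positive rates.
true_cases_per_day = 10          # hypothetical volume

for fp_rate in (0.99, 0.50):     # 99 in 100 vs 50 in 100 false positives
    alerts_needed = true_cases_per_day / (1 - fp_rate)
    print(f"FP rate {fp_rate:.0%}: ~{alerts_needed:.0f} alerts to review per day")
```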

Outwitting hackers

Two more fields are emerging where analytics can help firms combat fraud: business email security and insider threats.

Business email breaches occur when hackers take over commercial accounts, often through spear phishing or keystroke logging. “Fraudsters take over the credentials of a privileged senior official,” Mr Stewart says, “usually in small- to medium-sized businesses that might have lesser awareness of information security controls.”

Spoofing vs. Phishing

Spoofing delivers malware via email. Phishing tricks the user into revealing sensitive information like passwords or account numbers. Both arrive from accounts disguised as trusted entities.

Data and analytics also play an emerging role in predicting insider threats of breaches or fraud by catching early warning signs before fraud occurs. Analytics can detect variations in human behaviour patterns and flag them as possible security risks. A rogue employee trying to exfiltrate data or insert malware into a network, for instance, could be caught in the act. “You can see changes in their behaviour, of the amount of data they’re downloading or uploading,” says Rachel Tobac, CEO of SocialProof Security, “and that can help you catch insider threats.”

Companies can even use analytics to see when an employee is turning against the company. “SAS has a lot of interest around the context and the language which you’re utilizing within your activity,” says Mr Holmes. Algorithms can assess employees’ emails to detect language of anger and frustration, Mr Holmes says, which could “indicate that you’re annoyed with your employer or not happy with your role and are potentially more likely to do something which is illegal”.
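
One simple form of that behavioural monitoring can be sketched as follows, assuming a hypothetical per-employee baseline of daily upload volumes (not SAS’s or SocialProof Security’s actual method): flag a day whose upload volume breaks sharply from the employee’s own history.

```python
# Sketch: flag an employee whose daily upload volume breaks sharply from their
# own baseline -- one simple signal of possible data exfiltration (illustrative).

from statistics import mean, stdev

def upload_anomaly(history_mb, today_mb, z_threshold=3.0):
    """Return True if today's upload volume is an outlier versus this user's history."""
    if len(history_mb) < 5:
        return False                      # not enough baseline data yet
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        return today_mb > mu
    return (today_mb - mu) / sigma > z_threshold

baseline = [120, 95, 110, 130, 105, 115, 98]     # MB uploaded per day (hypothetical)
print(upload_anomaly(baseline, today_mb=2_400))  # True -> raise an insider-threat alert
```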

As business email compromise—or CFO fraud—has ramped up in the last six or seven years, security experts are working to catch it. SAS has spent two years working with one of the largest US banks to build a rare-event fraud-detection system. With payment fraud, analytics tools look for high-velocity transactions, new payees, whether payments are sent within a batch, and payments going to countries with high corruption, weak financial controls or international sanctions. But analytics doesn’t yet contribute as much to the fight against business email compromise as it does against consumer payment fraud, because the security questions are so much more complex. Firms are looking for very rare black-swan events that can involve phishing, spoofing and fraudsters fooling unwitting employees. “The number of actually fraudulent transactions is low,” says Mr Stewart, “but when it happens, it can be a $10 million, $50 million or $100 million potential loss.”
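
A minimal sketch of the rare-event problem, using synthetic data and scikit-learn’s built-in class weighting rather than the bank’s actual system: when fraud appears in only a tiny fraction of payments, the rare class has to be up-weighted so the model does not simply learn to ignore it.

```python
# Sketch: training on extremely imbalanced data, where fraudulent
# business-email-compromise payments are rare "black swan" events.
# Features and data below are synthetic, only to show the mechanics.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000                                   # hypothetical payment history
# Columns stand in for: payment velocity, new payee, sent in batch, high-risk country.
X = rng.random((n, 4))
y = (rng.random(n) < 0.0005).astype(int)     # ~0.05% of payments labelled fraudulent

# class_weight="balanced" up-weights the rare fraud class during training.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X, y)

scores = model.predict_proba(X[:5])[:, 1]    # fraud probability per payment
print(scores.round(3))
```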

However, algorithms are only as good as their training. They can generate so many false positives that the system becomes unworkable. “You have multiple different technologies that are all generating a series of alerts that head into a queue that a security analyst needs to filter through,” says Stu Bradley, vice president of SAS Fraud & Security Intelligence Division. “The false positives have become a major issue, because you’ve got limited resources, limited time and [the security analysts] all have more work than they can get through on a daily basis.”

Yet over time, data and analytics can help solve the same false-positive challenge they create. In its most basic form, as human analysts sort alerts into actual risk and no-risk categories, machine learning can use that feedback to prioritise future risks in the alert queue. Machine learning can also track the additional data that human analysts look at to investigate alerts and bring in that data to improve efficiency before the alert is created, thus reducing operational workload. Such feedback often separates successful analytics programs from failed ones, says Mr Bradley. “It’s an important life cycle,” he says, “to be sure you’re continually improving.”
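
A minimal sketch of that feedback loop, with hypothetical features and analyst labels standing in for a real alert queue: yesterday’s dispositions retrain a simple model that re-ranks today’s alerts so the likeliest real risks are worked first.

```python
# Sketch of the feedback loop: analysts label alerts as real risk or not, and
# those labels retrain a simple model that re-ranks the next queue.
# Feature names and data are illustrative.

from sklearn.linear_model import LogisticRegression

# Yesterday's reviewed alerts: [new_payee, high_risk_country, amount_over_threshold]
reviewed_features = [
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
]
analyst_labels = [1, 0, 0, 0, 1, 0]   # 1 = analyst confirmed real risk

model = LogisticRegression().fit(reviewed_features, analyst_labels)

# Today's queue, re-ranked so the likeliest real risks are worked first.
todays_alerts = [[1, 1, 1], [0, 0, 1], [1, 0, 1]]
ranked = sorted(todays_alerts,
                key=lambda alert: model.predict_proba([alert])[0][1],
                reverse=True)
print(ranked)
```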


Source: https://expectexceptional.economist.com/data-analytics-machine-learning.html