Ages ago, if you went to barter at the market you may have been deceived by an untrustworthy trader; perhaps they swindled you out of a few pennies, sold you fake goods, or otherwise failed to deliver. Your only option was to avoid buying from them the next time around. To quote a commonplace saying, “Fool me once, shame on you. Fool me twice, shame on me.”

We ran our financial systems like this for years, even after digitalising our recordkeeping, moving banking information onto the cloud, and pioneering FinTech apps and technologies. And it worked. If fraudsters compromised our systems, we learned from the failure, redesigned, rebuilt, and moved on with new digital defences.

Digitalisation demands intelligent systems

The sheer number of transactions on our financial systems is no longer manageable by hand; our global datasphere will comprise 175 zettabytes by 2025. To put that in familiar terms, that’s roughly 43,750 billion DVDs’ worth of data!

Consumers are leaving this headache of fraud prevention and data security up to companies that they assume “have got it covered.” After all, Millennials and younger generations love the ease of FinTech solutions that supply them with mobile wallets and simple banking apps. They have simply sacrificed their security and privacy in exchange for factors that they value more, such as efficiency and ease-of-use. Why wouldn’t they entrust their savings and personal data to the Cloud?

If you answered “fraud,” you’re quite right. Here’s why:

The cost of cybercrime

Fraud is costing the global economy $600 billion a year, which is equivalent to a “tax on growth” of 14%. It’s also affecting a majority of the digitally active people on this planet – two-thirds have had their personal data stolen or compromised. That’s more than 2 billion individuals. And as you probably know, fraud erodes customer trust, reduces bank revenue, and severely tarnishes a financial institution’s reputation.

That’s why the dynamic duo of AI and machine learning presents us with a welcome alternative.

Inside the black box: how AI and machine learning work

First, to clarify: AI is a broad term for the concept of machines making decisions in a way that humans consider “smart.” Machine learning is a subset of that concept, and although it’s a buzzword that sounds complicated, all it means is that we give machines a large amount of data and let them work out what that data means.

In this case, financial companies are feeding the machines huge quantities of our personal data in order to stop hackers and fraudsters from laundering billions, even trillions, of dollars’ worth of funds. Because AI can profile what a “normal” transaction looks like for you based on your data history, it allows banks to make better decisions about which transactions might be fraudulent. And the decision to approve or decline a payment takes a mere 30 to 40 milliseconds.
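The profiling step can be sketched as a toy anomaly check: score how far a new transaction sits from a customer’s own history, and approve or decline on that basis. Real systems weigh many more features than a single amount; the function names and the threshold below are purely illustrative.

```python
from statistics import mean, stdev

def fraud_score(history, amount):
    """Toy stand-in for profiling: how many standard deviations the
    new amount sits from this customer's historical transactions."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(amount - mu) / sigma

def decide(history, amount, threshold=3.0):
    """Approve unless the transaction is a statistical outlier for
    this customer. The threshold here is an arbitrary illustration."""
    return "decline" if fraud_score(history, amount) > threshold else "approve"
```

For a customer who usually spends around $20–30, an ordinary purchase passes while a sudden $5,000 transfer is flagged, because only the latter deviates sharply from the learned profile.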

Predictive algorithms

Here’s how machine learning essentially works: human analysts supply the machine with initial data that they have gone through and manually flagged for fraud. They teach the machine the way we would a kindergartener, showing it that when certain risk factors are present – say, a transaction from a Samsung device when you own an Apple iPhone, or frequent SWIFT transfers to high-risk, corruption-prone countries – something is most likely not quite right.
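That supervision step can be sketched in a few lines, assuming each transaction has been reduced to a set of named risk factors and a human verdict. All factor names here are hypothetical; production systems use statistical models rather than simple co-occurrence rates.

```python
from collections import Counter

def train(labeled):
    """Learn, per risk factor, how often it co-occurred with fraud in
    the examples that human analysts manually flagged.
    `labeled` is a list of (factors, is_fraud) pairs."""
    seen, fraud = Counter(), Counter()
    for factors, is_fraud in labeled:
        for f in factors:
            seen[f] += 1
            fraud[f] += is_fraud
    return {f: fraud[f] / seen[f] for f in seen}

def score(model, factors):
    """Average learned fraud rate of the factors present in a transaction."""
    rates = [model.get(f, 0.0) for f in factors]
    return sum(rates) / len(rates) if rates else 0.0
```

After training on a handful of flagged examples, a transaction showing a risky factor (an unfamiliar device, say) scores higher than one showing only familiar factors.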

Then they set the machine-learning intelligence to work: it gathers data on customers and their transactions, uses its expansive supply of examples to predict which activities are fraudulent, and passes its verdicts on to a final human-analyst checkpoint. The truly innovative part, however, is that unlike past systems, machine learning takes the feedback it receives and incorporates it into its flagging systems without human intervention.
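That feedback loop can be illustrated with a toy online learner: each verdict from the human checkpoint nudges the weights of the factors involved, so the flagging behaviour adapts without a separate manual retraining step. The class and factor names are illustrative, not a real bank’s design.

```python
class FeedbackModel:
    """Toy online learner: analyst verdicts continuously adjust the
    weight of each risk factor, mimicking how a machine-learning
    system folds feedback back into its flagging rules."""

    def __init__(self, lr=0.1):
        self.weights = {}  # risk factor -> learned weight
        self.lr = lr       # how strongly one verdict shifts a weight

    def score(self, factors):
        """Higher score = more suspicious, given current weights."""
        return sum(self.weights.get(f, 0.0) for f in factors)

    def feedback(self, factors, analyst_says_fraud):
        """Incorporate the human checkpoint's verdict: push the
        weights of the involved factors up (fraud) or down (clean)."""
        direction = 1.0 if analyst_says_fraud else -1.0
        for f in factors:
            self.weights[f] = self.weights.get(f, 0.0) + self.lr * direction
```

A factor repeatedly confirmed as fraudulent drifts toward a positive weight, while one repeatedly cleared drifts negative, with no human rewriting of the rules.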

In short, as hackers and fraudsters react to new security measures, constantly improving AI fraud-tracking systems can stay alert to new forms of cybercrime – phishing, spoofing, and recent schemes in which nefarious characters take over the email accounts of CEOs and CFOs. With data analytics, banks now possess the ability to stay one step ahead.

Big Brother is watching you

We cannot escape the reality of trade-offs, however. Our banking software makes our financial habits incredibly easy to manage – and to track. Data and analytics give banks the tools to cut fraud drastically, yet we rarely stop to consider the trade-offs involved. Perhaps we have accepted them consciously, perhaps unconsciously.

Consider the cross-channel predictive algorithms of machine learning. To keep up with the rapidly shifting tactical field of cybercrime, banks are now providing their AI systems with incredible amounts of consumer information. This includes using your geolocation, tracking your social media, and even leveraging biometric authentication.
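To make “cross-channel” concrete, here is a minimal sketch of how signals from several channels might be combined into a single record for a model to score. Every field name is an assumption invented for illustration; real banks’ schemas are proprietary and far richer.

```python
def feature_vector(txn, profile):
    """Combine signals from multiple channels (amount, geolocation,
    device, biometrics) into one record a fraud model could score.
    All field names are illustrative, not a real schema."""
    return {
        "amount_ratio": txn["amount"] / profile["avg_amount"],
        "new_location": txn["country"] != profile["home_country"],
        "new_device": txn["device_id"] not in profile["known_devices"],
        "biometric_match": txn["face_score"] >= 0.9,
    }
```

The privacy concern is visible in the code itself: producing even this toy record requires the bank to hold your spending history, home country, device fingerprints, and a biometric template.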

Is the “greater good” reason enough?

Keep in mind that, thus far, this information is being used with good intent. Machine learning lets banks drastically reduce the cost of complying with government anti-money-laundering regulations – $25 billion per year in the U.S. and $83 billion in the UK. It safeguards the accounts of millions of customers, and, as mentioned, it stops our economic growth from slowly being siphoned away.

But let us not be deceived.

Winning comes at a price

AI is forging into new frontiers, and so far we haven’t stopped to draw a line. It’s one thing to track where we purchase goods, but fraud-protection algorithms have evolved to assess far more personal information – for example, scanning employee emails for “language of anger and frustration” that signals unhappiness with a role and a potential to “do something which is illegal.”

Yes, AI and machine learning can outcompete the fraudsters. But we must start to ask ourselves a critical question: how much power will we give them in order to fight fraud? Even if our machine intelligence has the best of intentions, we are still stepping into Orwellian territory.

And Big Brother is watching.