Blog entry by Les Bell


Welcome to today's daily briefing on security news relevant to our CISSP (and other) courses. Most links within this post will lead to further details in the course notes of some of our courses, and will only be accessible if you are enrolled in the corresponding course - this is a shallow ploy to encourage ongoing study by our students. However, there are some references at the end.

Commentary

Rather than the usual short news items, today I intend to do a deep dive on the recent spate of ransomware breaches that has been reported in the media, with the intention of extracting lessons to be learned.


Optus, Medibank and Other Recent Australian Ransomware Breaches

[Figure: The Incident Response cycle]

Australian media has become super-sensitized to the various ransomware attacks that have occurred here recently, generating extensive reporting. We have reported on a number of breaches:

  • Optus
  • Medibank
  • Harcourts Melbourne (a real estate agency)
  • PNORS Technology Group (a provider of services to various Victorian Government departments)
  • Medlab Pathology
  • A political site run on behalf of Senator James Paterson

Politicians have also leapt on the bandwagon, announcing increased fines for breaches, and every media outlet has featured commentary from 'experts' - not all of it particularly helpful. In line with the objectives for this blog, it is time to extract a few 'Lessons Learned' (which my course attendees will recognise as the last phase of the incident response cycle).

Lessons for Enterprises

Governance

The first and most obvious lesson is for the C-suite and Boards: by and large, you need to invest more in cybersecurity. There's an old saying in the aviation safety field: "If you think air safety is expensive, just wait until you have your first accident!".

Incident response is expensive - you usually need to bring in outside specialists to perform forensics and understand the breach, and you may need to engage with government agencies, which can be difficult to manage and require a lot of time from senior managers. Further down the track, there may well be mandatory disclosure, communication with affected customers, the provision of credit protection services, payments for replacement of identity documents and - of course - fines and judgements, including damages awarded in shareholder actions or class actions.

Cyber insurance will lessen the pain of some of this, but it will not cover all the costs and losses. In its accounts, Optus has set aside $140 million to cover the expected costs of its breach - far in excess of the likely ransom demand and probably in excess of proposed fines.

We have also seen a lot of cost-cutting over the last decade or more, as management and boards have sought increased efficiency and therefore profits (not to mention bonuses). The result, inevitably, is brittleness and especially a lack of resilience, as companies have found themselves severely stretched in responding to cybersecurity incidents, due to a lack of skilled staff and other resources. And of course, cost-cutting leads to inadequate controls, inadequate security education, training and awareness, and inadequate risk management more generally.

Data Retention

It is also important for executive management and boards to realize that personal information that has been collected but is not being used is not an asset - it is a liability. This applies especially to personal identity information which was used to verify the identity of an individual as part of account enrollment and identity proofing; once these have been used to resolve a claimed identity, they are no longer required and should be disposed of. Of course, there is a tension here with national security and counter-terrorism legislation which has required telcos, in particular, to retain this kind of information; its value is something that government needs to consider in the light of recent events.

Technical Measures

It seems that the initial compromise, in many of these cases, was a spear-phishing attack. We run education and awareness campaigns, as well as simulated attacks to test their effectiveness, but this is clearly not enough. We need to improve our authentication techniques, specifically by requiring multi-factor authentication using time-based one-time passwords or, better still, security keys (FIDO U2F authentication).
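
For readers who have not looked under the hood of TOTP, here is a minimal sketch of how an RFC 6238 time-based one-time password is derived from a shared secret, using only the Python standard library. The secret value and step size are illustrative; a production system should use a vetted library and add rate limiting and replay protection.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        """Derive an RFC 6238 TOTP code from a base32-encoded shared secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // step            # time steps since the Unix epoch
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter (RFC 4226)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # Example: authenticator app and verifier compute the same code from the same secret
    print(totp("JBSWY3DPEHPK3PXP"))

Because the verifier and the authenticator share only this secret, a phished password alone is no longer enough; the attacker must also obtain a valid, short-lived code.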

Customer Authentication

Medibank, in particular, has been issuing bad advice to its customers on its advice page at https://www.medibank.com.au/health-insurance/info/cyber-security/staying-safe-online/. They suggest, for example, "changing your passwords regularly with 'strong' passwords". However, current good practice on passwords - technically known as "memorized secrets" - is quite well defined in NIST SP 800-63B section 10.2.1, which states:

"Do not require that memorized secrets be changed arbitrarily (e.g. periodically) unless there is a user request or evidence of authenticator compromise."

Simply put: we have long known that requiring or even suggesting frequent password changes just leads to users choosing weak passwords, such as using the same root word with an incremented number on the end.

Although it is awkward, I like the phrase "memorized secrets" for a couple of reasons: first, many authentication protocols do not pass the secret itself over the wire (CHAP is a good example); second, it takes the focus away from the word 'password'. We should be thinking in terms of passphrases, which can be much longer - increasing the work factor for attackers - but are also easier to memorize. A much better approach is to encourage users to come up with a good passphrase and then stick with it.

SP 800-63B also says:

"Allow at least 64 characters in length to support the use of passphrases. Encourage users to make memorized secrets as lengthy as they want, using any characters they like (including spaces), thus aiding memorization"

A passphrase can be any personally-meaningful, -memorable or -valued phrase, such as a line from a poem or song, a book or movie title, or just a peculiar phrase such as "I wish I didn't have to go through this rigmarole just to change my Medibank password" (although the phrase "correct horse battery staple" should be blocked, for obvious reasons). Finally, Section 10.2.1 also says:

"Do not impose other composition rules (e.g. mixtures of different character types) on memorized secrets."

Composition rules of this kind are what is usually meant by the term 'strong password'. Unfortunately, such rules make it hard or even impossible for users to select an appropriately memorable passphrase, encourage the selection of weak passwords and also reduce the brute-force attack space. Some off-the-shelf software might have these rules hard-coded, but most well-written software should allow customization, and internally-developed enterprise software should definitely comply with the NIST recommendations. This needs to be fixed.
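
To make the contrast concrete, here is a minimal sketch of a memorized-secret check along the lines SP 800-63B suggests: enforce a minimum length, allow long passphrases and spaces, and compare against a blocklist of commonly-used or compromised values, rather than imposing composition rules. The blocklist shown is a tiny placeholder; a real deployment would check against a much larger corpus of breached passwords.

    # Illustrative only: a NIST SP 800-63B-style memorized-secret check.
    BLOCKLIST = {
        "password", "p@ssw0rd1", "qwerty123",
        "correct horse battery staple",   # famous examples belong on the blocklist too
    }

    def acceptable_memorized_secret(candidate: str) -> tuple[bool, str]:
        if len(candidate) < 8:
            return False, "Too short: use at least 8 characters (a longer passphrase is better)."
        if len(candidate) > 64:
            # SP 800-63B asks verifiers to *allow* at least 64 characters;
            # a higher limit, or none at all, is also reasonable.
            return False, "Longer than this system currently supports."
        if candidate.lower() in BLOCKLIST:
            return False, "That value appears on a list of commonly-used or compromised secrets."
        # Note: no composition rules (mixtures of character classes) are imposed.
        return True, "OK"

    print(acceptable_memorized_secret("I wish I didn't have to go through this rigmarole"))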

There is a lot more detailed and useful guidance in NIST's four-volume Digital Identity Guidelines (see https://pages.nist.gov/800-63-3/); I am always amazed how many security professionals are not aware of them - or worse, are aware of them but have never read them.

Another way of reducing or eliminating the cognitive load of generating and remembering passphrases is to support federated identity management where appropriate. Allowing users to rely on Google, Microsoft and other identity providers eliminates the need for yet another passphrase, and also eliminates the need to store a password hash which, if compromised, could be subjected to dictionary or rainbow table attacks. We can also encourage customers to set up multi-factor authentication on their identity provider account, further increasing their security.

In the longer term, of course, we need to ensure that enterprise applications support multi-factor authentication - and by this I mean the use of a time-based one-time password (TOTP) token or FIDO U2F security keys, or, in some cases, biometric techniques, although these are more commonly used for physical access control. Note: the use of mTANs - mobile transaction authentication numbers, typically a six-digit code sent by SMS to the customer's phone - is now deprecated, because mobile phone numbers can be ported and the SMS messages can also be diverted by attacks on the SS7 protocol used in telco networks.

By using a second factor which has a much larger keyspace, we reduce the dependence on long, strong passphrases. Ultimately, of course, we should aim to eliminate the passphrase altogether through the use of cryptographic techniques such as FIDO authentication (commonly known as passkeys).
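
At its core, FIDO/passkey authentication replaces the shared secret with a key pair: the authenticator holds the private key and signs a server-supplied challenge, and the server verifies the signature against the registered public key. The following is a conceptual sketch of that exchange using the Python 'cryptography' package; it omits the attestation, origin-binding and signature-counter checks that a real WebAuthn implementation performs.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    # Registration: the authenticator generates a key pair and gives the
    # server only the public key (here, both halves live in one process).
    authenticator_key = ec.generate_private_key(ec.SECP256R1())
    registered_public_key = authenticator_key.public_key()

    # Authentication: the server issues a fresh random challenge ...
    challenge = os.urandom(32)

    # ... the authenticator signs it with the private key ...
    assertion = authenticator_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # ... and the server verifies the signature; nothing phishable or
    # replayable ever leaves the authenticator.
    try:
        registered_public_key.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
        print("Assertion verified - user authenticated")
    except InvalidSignature:
        print("Assertion rejected")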

And yes - customers could use password safes to take care of all this, but the evidence is that most do not and we should not make it their problem.

Employee Authentication

While some of the above may not be feasible for customers, it is certainly feasible for employees - remember, it is the capture of employee credentials that gives rise to most breaches - particularly when the enterprise can easily carry the cost of providing hardware tokens or security keys, as well as providing appropriate training.

For privileged levels of access to browser-based cloud-hosted applications, consideration should be given to the use of thin clients such as Chromebooks in order to reduce the risk of compromise by infostealers.

Of course, by enabling multi-factor authentication and transitioning to cryptographic techniques, we will simply encourage attackers to switch to cookie-stealing and adversary-in-the-middle attacks, so we will also need to lift our game there as well.

Access Control and Architecture

The other side of this particular coin is authorization, or access control. We need to pay closer attention to the principle of least privilege, so that we can limit the impact of information exfiltration to just a subset of customers, or a subset of their records. Security architects need to re-familiarize themselves with security models and access control models, in order to restrict access on a need-to-know basis. Healthcare information, for example, is often best dealt with by a role-based access control model, although I am not suggesting that that was, or was not, the case in the breaches considered here.
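
As a trivial illustration of the role-based approach - purely a sketch with hypothetical role and permission names, not a description of any particular insurer's systems - access decisions are made against a role-to-permission mapping rather than against individual users:

    # A deliberately simple role-based access control (RBAC) sketch.
    ROLE_PERMISSIONS = {
        "claims_assessor":    {"read:claims"},
        "treating_clinician": {"read:claims", "read:clinical_notes"},
        "billing_officer":    {"read:invoices"},
    }

    def is_authorized(role: str, permission: str) -> bool:
        """Grant access only if the user's role carries the requested permission."""
        return permission in ROLE_PERMISSIONS.get(role, set())

    # A billing officer has no need to know clinical details:
    print(is_authorized("billing_officer", "read:clinical_notes"))    # False
    print(is_authorized("treating_clinician", "read:clinical_notes"))  # True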

We also need to do better in the transition to cloud-hosted systems. In at least one case here, it seems that data was obtained from inadequately-secured Amazon S3 buckets - a far too common occurrence. Architects should be making use of network segmentation and microsegmentation as well as functionality like cloud access security brokers to further secure cloud-hosted applications.
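
On the S3 point, one straightforward mitigation is to enable the service's public access block at the bucket (or account) level. A minimal sketch using boto3, with a hypothetical bucket name, might look like this; AWS also exposes the same setting via the console and infrastructure-as-code tools.

    import boto3

    s3 = boto3.client("s3")

    # Block all forms of public access on a (hypothetical) bucket,
    # regardless of what ACLs or bucket policies are later attached.
    s3.put_public_access_block(
        Bucket="example-customer-data-bucket",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )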

It is also obvious that in at least some cases, victims were unaware of information exfiltration, suggesting a need to improve network egress filtering and intrusion detection systems.
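
Exfiltration of gigabytes of data should not go unnoticed. Even a crude volumetric check on egress - sketched below against hypothetical per-host byte counts taken from flow logs - would flag a host suddenly sending far more data than its historical baseline; real deployments would, of course, use proper flow collection and an IDS rather than this toy heuristic.

    # Toy egress anomaly check: flag hosts whose outbound volume greatly
    # exceeds their historical daily baseline (all figures are illustrative).
    baseline_bytes = {"db-server-01": 2_000_000, "web-01": 50_000_000}
    todays_egress = {"db-server-01": 9_500_000_000, "web-01": 48_000_000}

    THRESHOLD_MULTIPLIER = 10  # tune to the environment

    for host, sent in todays_egress.items():
        if sent > THRESHOLD_MULTIPLIER * baseline_bytes.get(host, 0):
            print(f"ALERT: {host} sent {sent:,} bytes today - investigate possible exfiltration")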

In short, we need to stop playing whack-a-mole with adversaries on a vulnerability-by-vulnerability and exploit-by-exploit basis and start engineering security into our systems via good security architecture and a culture which values secure programming and administration practices.

Crisis Management and Communications

The first phase of the incident response cycle is planning and preparation. This means having in place policies and procedures, including customized playbooks for incident response. These should provide guidance as to when an incident should be escalated to executive management, as well as what information should be disclosed, and when, in public communications.

It is clear that Medibank was caught wrong-footed here, initially believing that the breach was confined to simple encryption of records. Only when informed by the Australian Cyber Security Centre did they discover that information had been exfiltrated, and even then, they believed that the damage was limited to a subset of their customers. The lesson is that the detection and analysis phase, executed simultaneously with containment, should be the top priority in incident response, and should be completed before any measured public statements are made.

Having to make a series of statements, disclosing increasing levels of severity in the breach, can look like incompetence or, even worse, a cover-up, and will damage consumer confidence leading to lost customers. Medibank may be counting on the high costs of switching health insurance providers to limit the damage here, but Optus has certainly seen the effect, despite having done a better job with their initial disclosure.

It is also important to be direct and clear in communication with customers. A good example is the letter which Medlab Pathology sent to their affected customers, which started out explaining in general terms the nature of the breach, the wonderful actions the company had taken, etc. However, the specific customer information which had been stolen was not revealed until a "Questions and Answers" page near the end of the letter. We know that reader attention flags as they read, and many readers are likely to completely skip an FAQ-style page which they expect to contain the usual platitudes about changing passwords, being aware of scams, etc.

It is understandable that managers may want to downplay the impact of a breach, but starting out by praising yourselves for your response before revealing the damage to the customer is not a good look. More planning and preparation for crisis communications would avoid this kind of error.

Lessons for Government

Immediate Response

The response by Australian Government agencies - specifically, the Australian Federal Police and Australian Signals Directorate - seems to have generally been effective, at least in terms of attributing the Medibank breach. As we learned from the US Treasury Department's Financial Crimes Enforcement Network (FinCEN) via the recent International Counter Ransomware Initiative Summit, roughly 75 percent of the ransomware-related incidents reported during the second half of 2021 pertained to Russia-related ransomware variants, so the odds were always on it being a Russian ransomware operator here.

Attribution is one thing; prosecution is quite another. In the culture of Putin's Russia, it is seen as patriotic to commit cybercrime against Western countries; the payment of ransoms via cryptocurrencies bypasses banking sanctions and brings in cash to the Russian economy. Besides, this revenue can be used to further improve the country's offensive cyber capabilities, as we saw with the deployment of the NotPetya wiper against Ukraine, and if it uncovers information that might be strategically useful, so much the better. And so the Russian government turns a blind eye to the activities of ransomware and other cybercrime groups, and is unlikely to pay more than lip service to Australian requests for assistance in extradition and prosecution.

With this in mind, the Australian cybersecurity minister, Clare O'Neil, and the attorney general, Mark Dreyfus, have announced a joint standing operation against cybercriminal syndicates, which will involve around 100 officers from the AFP and ASD in an offensive security, 'hack back' operation against the ransomware operators. This approach is, of course, not available to other government agencies and private sector enterprises, who must focus on defensive techniques - but ASD has had considerable success in the past, particularly in counter-terrorism operations.

This approach is not without its risks, however. Russian strategic thinking, particularly its 'information confrontation' concept, means that "the Kremlin views control over its domestic information space as essential to their security - a threat to the information space might be perceived as a threat to state sovereignty" (Hakala and Melnychuk, 2021). An attack on Russian citizens could be met with an escalated response, with Australia becoming a favoured target for both directly state-sponsored and freelance groups within Russia. In other words, it's definitely time to batten down the hatches.

Another question is whether other countries will join in this response; I really cannot see Australia going it alone here.

Ransom Payment

There is no doubt that the increasing prevalence of ransomware attacks, and the increasing demands in each case (usually around $US 1.00 per record), have made the ransomware business an attractive one for criminals. Not only that, the exponentially increasing revenues have made it possible for ransomware groups to buy or develop 0day exploits and also to polish up their post-exploitation tools, in part to evade detection and also to make them easier to use. This has led to the development of the Ransomware-as-a-Service model, as well as the emergence of initial access brokers, who will perform the initial exploitation of a victim, drop a loader or backdoor, and then on-sell the victim to a ransomware operator (or someone else, if they can).

All of this makes it hard to argue against the proposition that one should never acquiesce to a ransom demand. This argument says that, for the common good, we should cut off the funding to the ransomware groups. If nobody pays, their business model is broken, their revenue stream will dry up, and they will move on (though where to remains an open question). Furthermore, the argument goes, we should unburden business of the ethical dilemma inherent in the payment decision by legislating to make ransomware payments illegal.

This is all true, but it breaks down in reality. Firstly, individual businesses may make a calculated - and quite rational - decision that the ransom payment is much less than the downstream damage that would be inflicted by non-payment. Furthermore, cyber insurance has a distorting effect in this market; not only will it cover the cost of the ransom, it will also take care of the logistics of payment. And historically, in many cases, the ransomware operator, wishing to preserve their business model, will in return provide a key to unlock encrypted data and will delete exfiltrated data, rather than publishing it.

But more to the point, the common good will in many cases be far outweighed by the individual damage caused by disclosure of highly-sensitive personal information. Politicians, realising the potential public relations disaster here, have so far refrained from the blunt instrument of legislative prohibition of ransomware payments. I'm just glad it's not my decision.

Increased Fines for Privacy Breaches

Which brings us to the other legislative response to these breaches: an increase in fines for breaches of the Privacy Act. In a media statement, Attorney-General Mark Dreyfus stated that

"existing safeguards are inadequate. It's not enough for a penalty for a major data breach to be seen as the cost of doing business. We need better laws to regulate how companies manage the huge amount of data they collect, and bigger penalties to incentivise better behaviour".

"The Privacy Legislation Amendment (Enforcement and Other Measures) Bill 2022 will increase maximum penalties that can be applied under the Privacy Act 1988 for serious or repeated privacy breaches from the current $2.22 million penalty to whichever is the greater of:

    • $50 million
    • three times the value of any benefit obtained through the misuse of information; or
    • 30 percent of a company's adjusted turnover in the relevant period."
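
The 'greater of' formulation is easy to operationalise. As a back-of-the-envelope sketch - with entirely hypothetical figures for the benefit obtained and the adjusted turnover - the maximum penalty works out as follows:

    # Hypothetical figures, in Australian dollars, purely to illustrate the formula.
    benefit_obtained = 4_000_000          # value of any benefit from the misuse of information
    adjusted_turnover = 900_000_000       # adjusted turnover in the relevant period

    maximum_penalty = max(
        50_000_000,                       # flat $50 million floor
        3 * benefit_obtained,             # three times the benefit obtained
        0.30 * adjusted_turnover,         # 30 percent of adjusted turnover
    )

    print(f"Maximum penalty: ${maximum_penalty:,.0f}")   # $270,000,000 for these figures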

That might focus the attention of boards and C-suites, and will doubtless meet with the approval of the public. It might even result in CISOs getting a more sympathetic hearing when budget review time comes around, with increased spending in . . . well, some areas of security efforts.

However, while fines are an effective deterrent to bad behaviour, they do not encourage good behaviour. And it is doubtful whether they would have much effect at the lower levels of an enterprise. For example, one theory - and this is really speculation - about the Optus breach is that information was exfiltrated via an accidentally-exposed API endpoint which did not require authentication. Now, I doubt that a deep investigation into this would reveal a developer, security architect or network administrator who acted with criminal intent. Far more likely, it is the result of oversight, or one of the many varieties of human error.

There is a massive opportunity here to take the work of Prof. James Reason (1990, 1997) in aviation and industrial safety, and apply it to cybersecurity culture among IT professionals. Reason also has a lot to say which is applicable to security culture more generally.

Of course, my earlier comments about cost-cutting and brittleness apply here, too. Enterprises need to start investing in education in security for their architects and developers, in order to build resilient systems with far fewer vulnerabilities.

Lessons for Consumers

It's not your fault

It's as simple as that. In all the cases discussed here, the privacy breach was not the result of anything a customer did, or did not, do. And while the companies involved have contacted their customers with the usual advice - mostly platitudes such as watching for scams, changing your passwords, and changing them regularly - that horse has well and truly bolted, leaving the stable door banging in the breeze. It's also a bit rich for such advice to be offered by companies who could not themselves prevent egregious breaches.

I do not intend to preach to consumers here. Those who have read this far already know what to do, better than the companies they trusted.

Summary

My concern here is with how companies can raise their game, as politicians, journalists and the public clearly expect them to do, and I have set out to provide some clear advice. But to summarize:

  • Executive management and boards need to realize that an ounce of prevention is far better than a pound of cure
  • Companies need to improve authentication practices for both their internal staff as well as for customers before lecturing the latter
  • Companies need to invest more in good security architecture, specifically access control and resilience
  • Companies need to respond faster to incidents, expect data to have been exfiltrated, and perform thorough analysis before making any public statements
  • Politicians need to wield a big stick - but only as a last resort

References and Further Reading

Hakala, Janne, and Jazlyn Melnychuk, Russia's Strategy in Cyberspace, NATO Strategic Communications Centre of Excellence, June 2021. Available online at https://stratcomcoe.org/cuploads/pfiles/Nato-Cyber-Report_15-06-2021.pdf.

Reason, J., Human Error, Cambridge University Press, 1990.

Reason, J., Managing the Risks of Organizational Accidents, Ashgate Publishing Limited, 1997.


These news brief blog articles are collected at https://www.lesbell.com.au/blog/index.php?courseid=1. If you would prefer an RSS feed for your reader, the feed can be found at https://www.lesbell.com.au/rss/file.php/1/dd977d83ae51998b0b79799c822ac0a1/blog/user/3/rss.xml.

Copyright to linked articles is held by their individual authors or publishers. Our commentary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is labeled TLP:CLEAR.
