Blog entry by Les Bell

by Les Bell - Tuesday, 25 October 2022, 9:12 AM

Welcome to today's daily briefing on security news relevant to our CISSP (and other) courses. Links within stories may lead to further details in the course notes of some of our courses, and will only be accessible if you are enrolled in the corresponding course - this is a shallow ploy to encourage ongoing study. However, each item ends with a link to the original source.

News Stories


Australia Increases Penalties for Privacy Breaches

Following much public anger, hand-wringing and outrage on the part of politicians and pundits, Australia's Commonwealth Government has concluded that the answer is tougher penalties. In a media statement, Attorney-General Mark Dreyfus stated that

"existing safeguards are inadequate. It's not enough for a penalty for a major data breach to be seen as the cost of doing business. We need better laws to regulate how companies manage the huge amount of data they collect, and bigger penalties to incentivise better behaviour".

"The Privacy Legislation Amendment (Enforcement and Other Measures) Bill 2022 will increase maximum penalties that can be applied under the Privacy Act 1988 for serious or repeated privacy breaches from the current $2.22 million penalty to whichever is the greater of:

  • $50 million
  • three times the value of any benefit obtained through the misuse of information; or
  • 30 percent of a company's adjusted turnover in the relevant period."
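
For illustration only, here is a minimal Python sketch of that "greatest of the three" calculation; the benefit and turnover figures are entirely hypothetical:

```python
# Minimal sketch of the proposed maximum-penalty formula (hypothetical figures).
def max_penalty(benefit_from_misuse: float, adjusted_turnover: float) -> float:
    """Return the greatest of the three caps described in the Bill."""
    return max(
        50_000_000,                # flat $50 million cap
        3 * benefit_from_misuse,   # three times the benefit obtained through misuse
        0.30 * adjusted_turnover,  # 30% of adjusted turnover in the relevant period
    )

# A hypothetical company: $5m benefit from misused data, $10bn adjusted turnover
print(f"${max_penalty(5_000_000, 10_000_000_000):,.0f}")  # -> $3,000,000,000
```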

That might focus the attention of boards and C-suites, and will doubtless meet with the approval of the public. It might even result in CISOs getting a more sympathetic hearing when budget review time comes around, with increased spending in . . . well, some areas of security efforts.

But it's doubtful whether it will do anything to improve things at the coalface. If the popular theory about the Optus breach which triggered all this brow-beating is correct, and the data was exposed via a misconfigured API endpoint, then nothing would be different - no amount of punitive incentivization will cure the Mark I human's proclivity for errors in the form of slips and lapses. And for developers who may well find themselves walking the plank if caught anywhere near a breach, or CISOs who suddenly find their title has changed to DFG (Designated Fall Guy), the size of the fine is irrelevant.

On the other hand, I predict a boom in half-day and one-day courses for directors and senior managers on cybergovernance, with a good lunch as a bonus.

Dreyfus, Mark, Tougher penalties for serious data breaches, media release 22 October 2022. Available online at https://ministers.ag.gov.au/media-centre/tougher-penalties-serious-data-breaches-22-10-2022.

Cybergovernance Principles Launch Hacked

Speaking of which . . . The Australian Institute of Company Directors (AICD) has produced a new set of cybersecurity governance principles, and was set to launch them with an online event yesterday. The event had gained the support of the relevant Federal Minister, Clare O'Neil, as well as CEO of the Cyber Security Cooperative Research Centre, Rachael Falk.

Everything was set for thousands of online attendees to learn how to secure their companies and their systems. But when they tried to log on, the conference did not start on time. As they waited, a fake Eventbrite link, which requested credit card details, was posted to the related LinkedIn chat. When AICD officials asked participants not to follow links in the chat, the request was followed by an official-looking AICD link - which also didn't work.

Eventually, the AICD was forced to give up and cancel the event, with MD & CEO Mark Rigotti warning anyone who had submitted credit card details to contact their bank, and apologising for the issues. "We recognise this experience has fallen well below the high standards our members rightly expect of the AICD", he stated.

Apparently, the Magic Wand of Cybergovernance isn't quite as effective as claimed; regular readers are reminded of Putt's Law:

Technology is dominated by two types of people:

  • Those who understand what they do not manage
  • Those who manage what they do not understand

Towell, Noel and Kishor Napier-Raman, Hackers hit cybersecurity conference, The Sydney Morning Herald, 24 October 2022. Available online at https://www.smh.com.au/national/hackers-hit-cybersecurity-conference-20221024-p5bsiq.html.

Meanwhile, Another Take on Incentives

In the latest issue of Communications of the ACM, the former Editor-in-Chief of that august journal, Moshe Y. Vardi, also ponders these problems. In 2017, he wrote, "So here we are, 70 years into the computer age and after three ACM Turing Awards in the area of cryptography (but none in cybersecurity), and we still do not seem to know how to build secure information systems." Five years on, the only change he would make is substituting 75 for 70.

Vardi points the finger at the externalities in the system: whatever we do in the digital world involves disclaimers; whether installing new software or signing in to an online service, we accept terms and conditions which allow the vendors to escape liability:

'As the philosopher Helen Nissenbaum pointed out in a 1996 article, while computing vendors are responsible for the reliability and safety of their product, the lack of liability results in lack of accountability. She warned us more than 25 years ago about eroding accountability in computerized societies. The development of the "move-fast-and-break-things" culture in this century shows that her warning was on the mark.'

Vardi suggests that the way to address the cyber-insecurity issue may well be regulation, which would overcome the power imbalance between vendors and their customers and prevent vendors from escaping accountability. The question that comes to mind is: what would we - or governments - regulate? Perhaps it is time to shift the pendulum away from playing catch-them-if-you-can in the incident response phase, and back towards engineering security into systems.

Vardi, Moshe Y., Accountability and Liability in Computing, Communications of the ACM, November 2022, Vol. 65 No. 11, Page 5. Available online at https://cacm.acm.org/magazines/2022/11/265836-accountability-and-liability-in-computing/fulltext.

Pentesters Pwned By Malware-Laced PoCs

The dire state of penetration testing is highlighted by a new report from researchers at the Leiden Institute of Advanced Computer Science, who analysed proof-of-concept exploit code posted to GitHub. Using three fairly simple techniques:

  1. Comparing the committer's IP address to public blacklists, VirusTotal and AbuseIPDB
  2. Submitting binaries and their hashes to VirusTotal for analysis, and
  3. Deobfuscation of base64 and hex values before performing the above two checks

the researchers found that 4,893 examples, out of the 47,313 that they downloaded in total, made calls to malicious IP addresses, carried obfuscated malicious code, or included trojanized binaries. In other words: download a PoC from GitHub, and you have a 10.3% chance of catching something nasty. More to the point, if you use the code in an engagement without checking it, your client could catch it too.
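
As a rough illustration of the third check - and emphatically not the researchers' own tooling - a minimal Python sketch might hunt for long base64-looking runs in a PoC source file, decode them, and pull out any IP addresses hiding inside, ready to be checked against AbuseIPDB or VirusTotal:

```python
# Minimal sketch: flag base64-hidden and plain-text IP addresses in a PoC source file.
import base64
import re
import sys

B64_RE = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")      # long base64-looking runs
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")    # dotted-quad IP addresses

def suspicious_addresses(path: str) -> list[str]:
    text = open(path, encoding="utf-8", errors="ignore").read()
    findings = IP_RE.findall(text)                     # IPs in the clear
    for blob in B64_RE.findall(text):
        try:
            decoded = base64.b64decode(blob).decode("utf-8", errors="ignore")
        except Exception:
            continue                                   # not valid base64 - skip it
        findings.extend(IP_RE.findall(decoded))        # IPs hidden inside the blob
    return sorted(set(findings))

if __name__ == "__main__":
    for ip in suspicious_addresses(sys.argv[1]):
        print(f"Check this address against AbuseIPDB/VirusTotal: {ip}")
```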

The current emphasis on pen-testing as a way of improving security posture is fine - if the testing is performed by highly-skilled testers. Unfortunately, there aren't enough really skilled testers out there. At the bottom of the market, many rely upon the basic testing performed by automated scanners, while some go a little further, with the aid of Kali Linux and a library of YouTube videos. The better ones will dig a bit deeper, using the capabilities of tools like Metasploit and Cobalt Strike, especially for red-teaming.

But even those tools run out of steam, and so the temptation to just download any relevant PoCs and see if they work is huge. The results could be devastating. It is incumbent on professional pen-testers to:

  • Read and understand the code they are about to run on or against their own or their customers' networks
  • Use easily-available, free tools like VirusTotal to analyze binaries (a minimal sketch follows below)
  • Analyze the code manually, taking the time to deobfuscate where necessary. If this will take too long, then detonate it in a sandbox while monitoring it for malicious behaviour and suspicious network traffic
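
On the second point, a minimal sketch might hash the binary and look the digest up via the VirusTotal v3 files endpoint. This assumes you hold a VirusTotal API key in a VT_API_KEY environment variable; the response handling is illustrative, not a complete workflow:

```python
# Minimal sketch: SHA-256 a downloaded PoC binary and query VirusTotal for its verdict.
import hashlib
import os
import sys

import requests

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def virustotal_stats(path: str) -> dict:
    digest = sha256_of(path)
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{digest}",
        headers={"x-apikey": os.environ["VT_API_KEY"]},  # assumed API key
        timeout=30,
    )
    resp.raise_for_status()
    # last_analysis_stats holds counts such as 'malicious' and 'suspicious'
    return resp.json()["data"]["attributes"]["last_analysis_stats"]

if __name__ == "__main__":
    print(virustotal_stats(sys.argv[1]))
```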

One has to ask: why would code in a proof-of-concept be obfuscated anyway? There are a few sort-of-good reasons, but if it's to stop casual reading and understanding, that's a huge red flag.

El Yadmani, Soufian, Robin The and Olga Gadyatskaya, How security professionals are being attacked: A study of malicious CVE proof of concept exploits in GitHub, arXiv pre-print, 15 October 2022. Available online at https://arxiv.org/abs/2210.08374.


These news brief blog articles are collected at https://www.lesbell.com.au/blog/index.php?courseid=1. If you would prefer an RSS feed for your reader, the feed can be found at https://www.lesbell.com.au/rss/file.php/1/dd977d83ae51998b0b79799c822ac0a1/blog/user/3/rss.xml.

Copyright to linked articles is held by their individual authors or publishers. Our commentary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
