I only managed to get half of the solution to last week's challenge.
Finding the first half was pretty straightforward.
In the File system view (or via your image-mounting and directory-traversal method of choice), navigate to /var/log/apt.
Here we find the history.log file, which keeps track of packages installed via apt.
If you scroll to the end of the log you can see that a lot of packages were installed or upgraded on 2017-11-08. From there on we have no (logged) activity until 2019-10-07, when php [Flag] is installed.
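If you'd rather not scroll, a quick grep against the exported log surfaces the same entries. This is a minimal sketch that assumes history.log has been exported to your working directory:

grep -E "^(Start-Date|Install|Upgrade):" history.log

Each apt transaction in history.log is bracketed by Start-Date and End-Date lines, so pulling those alongside the Install/Upgrade lines gives a quick installation timeline.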
Part 2, I’m afraid, eluded me. I’m looking forward to reading the other write-ups to see how the “Why?” was solved.
Unlike the Fighting Irish, I don’t have a perfect record this year – but I’m still loving the game. I never did get to finish the week 6 challenge, but with week 7, I’m back in it.
Challenge 7 (Nov. 16-23) Part 1, Domains and Such: What is the IP address of the HDFS primary node?
As I was gathering information about Linux forensics, I came across LinuxForensicsForNon-LinuxFolks from Hal Pomeranz (Google it). It’s chock full of pointers on where to find particular artifacts as they correspond to their Windows counterparts, and, as the title indicates, it’s meant for novices to Linux.
To identify the IP address of a Linux host there are a few places to check. If the address is assigned statically, it will be in /etc/hosts. If the address is assigned by DHCP, there should be a reference in /var/lib/dhclient and/or /var/log/*.
Bringing up our evidence in the File System view in Magnet Axiom, we navigate to /etc/hosts.
We can see that the primary node has the IP assignment of 192.168.2.100. [Flag 1]
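As a reminder of the format, an entry in /etc/hosts is just an address followed by one or more hostnames on the same line; the hostname below is illustrative rather than quoted from the image:

192.168.2.100   hdfs-primary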
After not being able to finish the challenge the week before, I was so excited to get the flag I nearly (or did) miss the fact that this was a 3-part question. It was only later in the afternoon, watching the video introducing the challenge, that I realized it had multiple sections.
Part 2: Is the IP address on HDFS-Primary dynamically or statically assigned?
The fact that the address was in the hosts file indicates that it was assigned statically. [Flag 2]
Part 3: What is the interface name for the primary HDFS node?
For this answer we navigate to /etc/network/interfaces.
Previewing the content of the file, we see that ens33 [Flag 3] is the name of the interface. Had we identified this file first, we could have surmised all three flags from the same source. As with all things forensics, there are many ways to get to the same information. Understanding how those paths compare, and what the outliers are, is all part of the challenge.
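For context, a static assignment in /etc/network/interfaces looks something like the stanza below; the interface name and address match the flags, while the netmask and gateway values are illustrative:

auto ens33
iface ens33 inet static
    address 192.168.2.100
    netmask 255.255.255.0
    gateway 192.168.2.1

A DHCP-assigned interface would instead read "iface ens33 inet dhcp", which is why this one file can answer all three parts at once.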
That’s all for this week. Now I’m off to watch the next video so I can see what I missed in last week’s challenge.
So for week 5 we started on Ali Hadi’s Linux image (farewell for now, Android). I’ve worked WITH Linux for years as my underlying operating system for forensics, but as the forensics target, not so much.
As the Magnet Training team is fond of saying, “You don’t know what you don’t know.” That was certainly the feeling for me when I opened up the week 5 challenge.
Challenge 5 (Nov. 2-9) Had-A-Loop Around the Block: What is the original filename for block 1073741825?
I knew this had something to do with data architecture, but not much else. I scoured the filesystem and found some references to block_1073741825, but nothing related to file associations.
Midway through the week, a hint was dropped. It cost 20 points but I knew that without a pointer in the right direction this was going to elude me.
The hint wound up being a link to a presentation from the DFIR Summit in 2016.
I watched this several (countless?) times. I think by the end after infinite repetitions my wife and cats were starting to grasp Hadoop. I’ll be looking out for other talks by Kevvie in the future.
There were 2 main takeaways for me that wound up getting me to the correct solution before the final bell tolled.
hdfs-site.xml – this file will tell you where within the system the data resides.
So I parsed the .E01s in Axiom (as Windows images) and found the hdfs-site.xml file in the File system view.
I exported the file out and opened it with VS Code. Lately I’ve been finding this just as useful as Notepad++, if not more so, when dealing with text or XML files.
We’re looking for the namenode path, seen here as /usr/local/hadoop/hadoop2_data/hdfs/namenode…
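In hdfs-site.xml that location is carried by the dfs.namenode.name.dir property (standard in Hadoop 2.x); here is a trimmed sketch of the relevant block, with the value mirroring the path above:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop/hadoop2_data/hdfs/namenode</value>
</property>

The NameNode keeps its fsimage and edit log files under that directory, which is what we go looking for next.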
This brings me to take-away #2 from the video.
The transaction logs, which capture the to and fro of files on the system, can be converted from their native format to human-readable XML using a utility on the Hadoop server.
Using the identified path information I exported the transaction logs via Axiom.
The video calls out the usage of OEV (Offline Edits Viewer).
My next step was to get a Hadoop VM so that I could utilize the OEV tool.
After a pretty basic setup, I transferred the exported edit logs to the VM via SCP. Once I had the transaction log files on the VM, I used the OEV utility to convert them to XML.
hdfs oev -i [edit_log_name] -o [export_name].xml -p XML
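Since the NameNode keeps a series of edits_* segment files rather than one single log, a small loop saves some typing; this is a sketch that assumes the exported segments sit in the current directory:

for f in edits_*; do
    hdfs oev -i "$f" -o "$f.xml" -p XML
done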
I then SCP’d the XML files back to my analysis machine and ran a search across the directory for the block number in VS Code.
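A plain grep across the converted files gets to the same place if you prefer to stay in the terminal:

grep -l "1073741825" *.xml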
The block ID we’re looking for is shown as part of a file copy operation. If we drop back about 10 lines to <PATH>, we see that the original filename was AptSource.
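For a sense of what that record looks like, a heavily trimmed OP_ADD entry from the converted edits XML ties the block ID back to its path roughly like this (field values other than the block ID and filename are illustrative):

<RECORD>
  <OPCODE>OP_ADD</OPCODE>
  <DATA>
    <PATH>/.../AptSource</PATH>
    <BLOCK>
      <BLOCK_ID>1073741825</BLOCK_ID>
    </BLOCK>
  </DATA>
</RECORD>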
Another week down.
Another challenge completed.
Another (multiple) things learned that I didn’t know when I started.