Magnet Weekly CTF: Question 9 Solution Walk Through

This week’s challenge was the first to deal with memory forensics. The ‘question’ provided by Aaron Sparling (@OSINTlabworks) was a seven-parter!

For most of the challenges so far I’ve been using Magnet Axiom and supplementing with other tools as needed. For this week’s solution you’ll see that everything is done via Volatility on the REMnux platform.

Part 1: The user had a conversation with themselves about changing their password. What was the password they were contemplating changing to? Provide the answer as a text string.

Originally the clue of “conversation” had me looking at Slack, as it’s a communications tool. With no luck there, I began to investigate WINWORD.exe (MS Word). First off, we dump the process memory with Volatility’s memdump plugin:
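
The invocation was along these lines (the image name and profile here are placeholders – plug in whatever imageinfo suggests for your copy of the evidence):

```shell
# Dump the addressable memory of the Word process (PID 3180).
# 'memdump.mem' and the Win7SP1x64 profile are assumptions -- use your own.
vol.py -f memdump.mem --profile=Win7SP1x64 memdump -p 3180 -D dump/
# memdump names its output after the PID: dump/3180.dmp
```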

Next we can use strings against the process dump and use grep to filter for ‘password.’
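
The pipeline was along these lines – 3180.dmp is memdump’s default output name for PID 3180, and the stand-in file at the end (with a made-up password) is only there to show what a hit looks like:

```shell
# Against the real process dump -- ASCII strings first, then UTF-16LE, since
# Word keeps document text in little-endian Unicode:
#   strings -a 3180.dmp     | grep -i password
#   strings -a -el 3180.dmp | grep -i password
# The same pipeline against a stand-in file, to show the shape of a hit:
printf 'noise\0maybe I should change my password to: hunter2\0noise' > demo.dmp
strings -a demo.dmp | grep -i password
```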

Apparently the ‘conversation’ the user was having with themselves was in a Word document. Within the document the user says: “Hmmm mmaybe I should change my password to: wow_this_is_an_uncrackable_password. ” [Flag 1]

Part 2: What is the md5 hash of the file which you recovered the password from?

The dumpfiles Volatility plugin can be used to dump out any files that were opened by the process in question. There are two tmp (temp) files associated with Word. We can md5sum the exported files to get their hashes. Bonus points if you spot where it shows I spend more time in PowerShell.
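
That step looked roughly like this (again, image name and profile are placeholders):

```shell
# Dump the files backed by WINWORD.EXE (PID 3180); -n appends the original
# file name to each extracted file so the tmp files are easy to spot.
vol.py -f memdump.mem --profile=Win7SP1x64 dumpfiles -p 3180 -n -D files/
# Hash everything that came out:
md5sum files/*
```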

The WRD0000.tmp.dat file has the MD5 hash af1c3038dca8c7387e47226b88ea6e23. [Flag 2]

Part 3: What is the birth object ID for the file which contained the password?

We use the mftparser Volatility plugin to dump the MFT (Master File Table) to a text file.
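
Something like the following gets the MFT into grep-able form (the output and search file names are my own):

```shell
# Parse the MFT records resident in memory and write them to a text file.
vol.py -f memdump.mem --profile=Win7SP1x64 mftparser --output-file=mft.txt
# Pull the record for the Word temp file; the OBJECT_ID attribute in that
# record carries the Birth Object ID.
grep -A 20 "WRD0000.tmp" mft.txt
```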

Here we can see that the temp file holding the AutoRecover information for the open Word document has the Birth Object ID: 31013058-7f31-01c8-6b08-210191061101. [Flag 3]

Part 4: What is the name of the user and their unique identifier which you can attribute the creation of the file document to? Format: #### (Name)

Using the getsids plugin against the PID for Word (3180), we see that the WINWORD.EXE process was executed under the context of user 1000 (Warren). [Flag 4]
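
For reference, the check is a one-liner (the SID shown in the comment is just the general pattern, not the actual value from the image):

```shell
# List the SIDs under which WINWORD.EXE (PID 3180) was running.
vol.py -f memdump.mem --profile=Win7SP1x64 getsids -p 3180
# The RID -- the final component of the user SID, of the form
# S-1-5-21-<domain>-1000 -- is the 1000 that maps to Warren.
```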

Part 5: What is the version of software used to create the file containing the password? Format ## (Whole version number, don’t worry about decimals)

Using the filescan Volatility plugin to locate “Word”, we see that WINWORD.EXE is being called from \Program Files\Microsoft Office\Office15. [Flag 5]
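
The scan plus a filter was enough (image and profile placeholders as before):

```shell
# Scan physical memory for FILE_OBJECTs and filter for the Word binary.
vol.py -f memdump.mem --profile=Win7SP1x64 filescan | grep -i "winword"
```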

Some basic Google searching informed me that this directory corresponds to an installation of Office 2013 (internal version 15), which includes Word 2013.

Part 6: What is the virtual memory address offset where the password string is located in the memory image? Format: 0x########

This one had me struggling for a bit. During the week I heard that Aaron was presenting a webcast on Friday, “When your forensic tool only tells part of the story: finding code injection using memory analysis,” and that if you watched closely there might be a helpful workflow.

Sure enough, there was a segment where Aaron uses YARA rules to find the virtual offset for a string in memory, which I applied to our identified password string.
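
Applied to our string, the command was roughly:

```shell
# yarascan builds an inline rule from the string and reports each hit with the
# owning process and the virtual address of the match.
vol.py -f memdump.mem --profile=Win7SP1x64 yarascan -p 3180 \
    --yara-rules="wow_this_is_an_uncrackable_password"
```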

In this case the offset we are looking for is 0x2180a2d. [Flag 6]

Part 7: What is the physical memory address offset where the password string is located in the memory image? Format: 0x#######

Unfortunately, I wasn’t able to get the flag for the last part of the challenge before time ran out. I’ll certainly be reviewing the other walk-throughs published this week. I’m looking forward to the upcoming memory challenges as well.

Magnet Weekly CTF: Question 8 Solution Walk Through

I only managed to get half the solution to last week’s challenge.

Finding the first half of the solution was pretty straightforward.

In the File system view (or via your image mounting directory traversal of choice) navigate to \var\log\apt.

Here we find the history.log file, which keeps track of packages installed via apt.

If you scroll to the end of the log you can see that a lot of packages were installed or upgraded on 2017-11-08. From there we have no (logged) activity until 2019-10-07, when php [Flag] is installed.
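
If you prefer the command line to scrolling, something like this surfaces the dated activity (/mnt/evidence is an assumed mount point for the image):

```shell
# Dated install/upgrade activity from apt's history log.
grep -E "^(Start-Date|Commandline|Install|Upgrade)" /mnt/evidence/var/log/apt/history.log
# Rotated history sits alongside as gzipped files; zgrep covers those too:
zgrep "Install" /mnt/evidence/var/log/apt/history.log.*.gz
```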

Part 2, I’m afraid, eluded me. I’m looking forward to seeing the other write-ups for the solve to the “Why?”

 

Magnet CTF: Question 7 Solution Walk-Through

Unlike the Fighting Irish, I don’t have a perfect record this year – but I’m still loving the game. I never did get to finish the week 6 challenge, but with week 7, I’m back in it.

Challenge 7 (Nov 16-23) Part 1, Domains and Such. What is the IP address of the HDFS primary node?

As I was gathering information about Linux forensics, I came across LinuxForensicsForNon-LinuxFolks from Hal Pomeranz (Google it). It’s chock full of pointers on where to find particular artifacts as they correspond to their Windows counterparts, and, as the title indicates, it’s meant for novices to Linux.

To identify the IP address of a Linux host there are a few places to check. A statically assigned address will typically be configured in /etc/network/interfaces (on Debian-based systems), often with a matching entry in /etc/hosts. If the address is assigned by DHCP there should be a reference in /var/lib/dhclient and/or /var/log/*.
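
On a mounted copy of the image those checks come down to a few commands (/mnt/evidence is an assumed mount point):

```shell
# Quick network-identity checks against a mounted Linux image.
cat /mnt/evidence/etc/hosts               # name-to-IP mappings
cat /mnt/evidence/etc/network/interfaces  # static vs. dhcp stanzas (Debian/Ubuntu)
ls  /mnt/evidence/var/lib/dhclient/       # dhclient leases, if DHCP was in play
```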

Bringing up our evidence in the File System view in Magnet Axiom, we navigate to /etc/hosts.

We can see that the primary node has the IP assignment of 192.168.2.100. [Flag 1]

After not being able to finish the challenge the week before, I was so excited to get the flag that I nearly missed (or did miss) the fact that this was a 3-part question. It was only later in the afternoon, watching the video introducing the challenge, that I realized it had multiple sections.

Part 2: Is the IP address on HDFS-Primary dynamically or statically assigned?

The fact that the address sits as a fixed entry in the hosts file indicates that it was assigned statically – and the interfaces file we turn to next confirms it. [Flag 2]

Part 3: What is the interface name for the primary HDFS node?

For this answer we navigate to /etc/network/interfaces.

Previewing the content of the file, we see that ens33 [Flag 3] is the name of the interface. Had we identified this file first, we could have surmised all 3 flags from the same source. As with all things forensics, there are many ways to get to the same information; understanding how those ways compare, and what the outliers are, is all part of the challenge.
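
For reference, a static stanza in /etc/network/interfaces looks something like this – the address matches what we found, while the netmask line is an illustrative guess:

```
auto ens33
iface ens33 inet static
    address 192.168.2.100
    netmask 255.255.255.0
```

A DHCP-assigned interface would instead read “iface ens33 inet dhcp”, which is why this one file could have answered all three parts at once.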

That’s all for this week. Now I’m off to watch the next video so I can see what I missed in last week’s challenge.

Magnet CTF: Question 5 Solution Walk-Through

So for week 5 we started on Ali Hadi’s Linux image (farewell for now, Android). I’ve worked WITH Linux for years as my underlying operating system for forensics, but as the forensic target? Not so much.

As the Magnet Training team is fond of saying, “You don’t know what you don’t know.” That was certainly the feeling for me when I opened up the week 5 challenge.

Challenge 5 (Nov. 2-9) Had-A-Loop Around the Block: What is the original filename for block 1073741825?

I knew this had something to do with data architecture, but not much else. I scoured the filesystem and found some references to block_1073741825, but nothing related to file associations.

Midway through the week, a hint was dropped. It cost 20 points but I knew that without a pointer in the right direction this was going to elude me.

The hint wound up being a link to a presentation from the DFIR Summit in 2016.

I watched this several (countless?) times. I think by the end my wife and cats were starting to grasp Hadoop. I’ll be looking out for other talks by Kevvie in the future.

There were 2 main takeaways for me that wound up getting me to the correct solution before the final bell tolled.

  1. hdfs-site.xml – this file will tell you where within the system the data resides.

So I parsed the .E01s in Axiom (as Windows images) and found the hdfs-site.xml in the File system view.

I exported the file and opened it with VS Code. Lately I’ve been finding it just as useful as Notepad++, if not more so, when dealing with text or XML files.

We’re looking for the namenode path, seen here as /usr/local/hadoop/hadoop2_data/hdfs/namenode…

This brings me to take-away #2 from the video.

The transaction logs, which capture the to and fro of files on the system, can be exported from their native format to human-readable XML – when done with a utility on the Hadoop server.

Using the identified path information I exported the transaction logs via Axiom.

The video calls out the usage of OEV (Offline Edits Viewer).

OEV documentation

My next step was to get a Hadoop VM so that I could utilize the OEV tool.

After a pretty basic setup I transferred the exported edit logs via SCP to the VM. Once I had the transaction log files on the VM I used the OEV utility to convert to xml.

hdfs oev -i [edit_log_name] -o [export_name].xml -p XML

I then SCP’d the XML files back to my analysis machine and ran a search for the block number across the file directory in VS Code.
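
The search itself is simple enough from the command line too (the directory name is my own; -B backtracks far enough to catch the <PATH> element of the matching record):

```shell
# Recursive search of the converted edit logs for the block ID; -B 10 prints
# the preceding lines, which include the record's <PATH> element.
grep -rn -B 10 "1073741825" exported_edits/
```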

The block ID we’re looking for is shown as part of a file copy operation. If we drop back about 10 lines to <PATH> we see that the filename of the file was AptSource.

Another week down.

Another challenge completed.

Another (multiple) things learned that I didn’t know when I started.