Huntress CTF: Week 1 – Forensics: Backdoored Splunk, Traffic, Dumpster Fire

Backdoored Splunk

Hit Start.

So we’ve got a URL and a specific port. Pointing the Firefox web browser at it yields…

So we need an Authorization header. 🤔

Time to look at the provided files. It looks to be the export of a Splunk application.

Time to download an eval copy of Splunk and… pause. There’s probably a simpler way to attack this.

The Silver Searcher is a command-line tool I picked up during the CTF, and I love it. It’s like grep on PCP.

Once installed, the base command is ag, followed by what you’re searching for, and where. So let’s do a quick search for Authorization on all the contents of this directory.
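For example, to hunt through the current directory regardless of case:

ag -i authorization .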

That looks interesting. A clue? One of the PowerShell scripts has Authorization and what looks to be Base64 code.

We also see a comment about the $PORT being dynamic based on the Start button. Decoding the string in CyberChef…
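If you’d rather stay at the command line, the stock base64 utility can do the same decode (the bracketed blob below is a stand-in for the actual string from the PowerShell script):

echo '[longStringFromThePowershell]' | base64 -d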

At this point we have all the pieces; we just need to put them together. I started looking at different ways to pass an Authorization header to a web server. There are proxy tools galore, and then there are the basics, like curl. After a bit of brushing up on my syntax I had:

curl -H "Authorization: Basic [longStringFromThePowershell]" http://site:$PORT

Yay, what looks like more Base64. Once more with our Chef’s hat on and…
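For the command-line purists: assuming the response body is just the Base64 blob, the whole chain can be collapsed into a single pipeline:

curl -s -H "Authorization: Basic [longStringFromThePowershell]" http://site:$PORT | base64 -d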


Traffic

rita was a tool I hadn’t used before, but it was very easy to pick up. I installed it on my REMnux box and then ran it against the dataset.

I then ran the command to generate an HTML report.
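For reference, the whole workflow was just two commands – RITA ingests Zeek logs, and the dataset name (“traffic” here) is whatever you choose:

rita import /path/to/logs traffic
rita html-report traffic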

Looking through the DNS requests, there’s something sketchy indeed.

Let’s go take a look at that.


Dumpster Fire

Let’s start with the_silver_searcher again and see if we have any luck with “Password”.
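Same syntax as before:

ag -i password .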

There are a number of hits, including references to an encryptedUsername and encryptedPassword in the logins.json file. So we’ve got some encrypted Firefox user passwords. If only there were a utility that could decrypt those. Enter firepwd.py, an open-source tool to decrypt Mozilla-protected passwords.

Run the script in Python and point it to the directory for the user profile (where the logins.json file is).
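If memory serves, the profile directory is passed with -d (substitute the path to wherever logins.json and key4.db live):

python3 firepwd.py -d /path/to/profile/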

That’s a pretty LEET password 😉


Use the tag #HuntressCTF on BakerStreetForensics.com to see all related posts and solutions for the 2023 Huntress CTF.

Creating YARA files with Python

When I’m researching a piece of malware, I’ll have a notepad open (usually VS Code), where I’m capturing strings that might be useful for a detection rule. When I have a good set of indicators, the next step is to turn them into a YARA rule.

It’s easy enough to create a YARA file by hand. My objective was to streamline the boring stuff, like formatting and generating a string identifier ($s1 = "stringOne") for each string. Normally PowerShell is my go-to, but this week I’m branching out and wanted to work on my Python coding.

The script relies on you having a file called strings.txt, with one string per line.
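Something like this (two made-up indicator strings):

stringOne
stringTwo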

When you run the script, it prompts for the following metadata:

  • rule name
  • author
  • description
  • hash

It then takes the contents of strings.txt and combines those with the metadata to produce a cleanly formatted YARA rule.
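As an example, feeding it the two made-up strings above might produce a session like this (metadata values are invented):

$ python3 CreateYARA.py
Enter the rule name: DemoRule
Enter the author: analyst
Enter the description: Demo rule built from strings.txt
Enter the hash value: d41d8cd98f00b204e9800998ecf8427e
Generated YARA rule:
rule DemoRule {
	meta:
		author = "analyst"
		description = "Demo rule built from strings.txt"
		hash = "d41d8cd98f00b204e9800998ecf8427e"

	strings:
		$s1 = "stringOne"
		$s2 = "stringTwo"

	condition:
		any of them
}

YARA rule saved to DemoRule.yar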

Caveats:

If the strings have special characters that need to be escaped, you may need to tweak the strings in the rule after it’s created.

The script will define the condition “any of them”. If you prefer to have all strings required, you can change line 22 from

yara_rule += '\t\tany of them\n}\n'

to

yara_rule += '\t\tall of them\n}\n'

CreateYARA.py

def get_user_input():
    rule_name = input("Enter the rule name: ")
    author = input("Enter the author: ")
    description = input("Enter the description: ")
    hash_value = input("Enter the hash value: ")
    return rule_name, author, description, hash_value

def create_yara_rule(rule_name, author, description, hash_value, strings_file):
    # Build the rule header and meta section
    yara_rule = f'rule {rule_name} {{\n'
    yara_rule += '\tmeta:\n'
    yara_rule += f'\t\tauthor = "{author}"\n'
    yara_rule += f'\t\tdescription = "{description}"\n'
    yara_rule += f'\t\thash = "{hash_value}"\n\n'
    yara_rule += '\tstrings:\n'
    # One $sN identifier per line of strings.txt
    with open(strings_file, 'r') as file:
        for idx, line in enumerate(file, start=1):
            yara_rule += f'\t\t$s{idx} = "{line.strip()}"\n'
    yara_rule += '\n'
    yara_rule += '\tcondition:\n'
    yara_rule += '\t\tany of them\n}\n'

    return yara_rule

def main():
    rule_name, author, description, hash_value = get_user_input()
    strings_file = 'strings.txt'  

    yara_rule = create_yara_rule(rule_name, author, description, hash_value, strings_file)
    print("Generated YARA rule:")
    print(yara_rule)
    
    yar_filename = f'{rule_name}.yar'
    with open(yar_filename, 'w') as yar_file:
        yar_file.write(yara_rule)

    print(f"YARA rule saved to {yar_filename}")

if __name__ == "__main__":
    main()
Sample strings.txt file used as input for the YARA rule

Running CreateYARA.py

YARA rule created from the Python script, viewed in VS Code.

Raspberry Pi Internet Speed Monitor

I was looking wistfully at the Lack Rack from my armchair, admiring the (faux) copper conduit that covers the primary inbound internet link to the switch. I thought it would look cool to have an antique steam gauge attached to the piping. Two things quickly changed that idea: 1. the going prices for antique steam gauges right now, and 2. once I was thinking in terms of gauges, I realized an ‘internet speed gauge’ would be perfect. Alas, even if said gauge could be acquired without breaking the bank, converting Mbps to PSI and making it functional is above my level of engineering. So, on to the next best thing – a Raspberry Pi hack.

Materials:

  • Raspberry Pi (3 or 4) with Raspbian 32-bit OS
  • Case with 3.5 in LCD Display
  • Copper spray paint 😉
  • Attention to detail at the command line

Speedtest CLI

Once you’ve got your Raspberry Pi up and running, start with the Installing the Speedtest CLI instructions at https://pimylifeup.com/raspberry-pi-internet-speed-monitor/ and complete steps 1-6. When the article gets to Writing our Speed Test Python Script, you can skip that section – it’s worth doing from a learning perspective, but the code from that step won’t be used in the final project.

Assuming this is a new installation, you will need to install InfluxDB and Grafana. Complete the respective instructions for each.

Continue with the primary article’s instructions for Using Grafana to Display your Speedtest Data.

If you’ve made it this far, you should have a working Grafana dashboard displaying Upload Speed, Download Speed, and Ping (latency). If you’re hitting a glitch, go back through what you’ve coded and double-check that any references to the user (default: pi) are accurate for the user on your device. You should see the data update at whatever frequency you specified in crontab -e.
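For reference, a crontab entry that refreshes the data every 30 minutes might look like this (the script name and path are whatever you set up earlier in the tutorial):

*/30 * * * * python3 /home/pi/speedtest.py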

Install Grafana Kiosk

Next, we want to set up our device as a kiosk, and have it boot and display the Network Speed dashboard automatically.

Install Grafana Kiosk from https://github.com/grafana/grafana-kiosk. For my installation I used the ARM v6 grafana-kiosk.linux.armv6 release.
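The install boils down to downloading the binary, marking it executable, and dropping it in the path – something like the following (the “latest” URL is a GitHub convenience; grab the asset that matches your board). The autostart line below assumes it lives at /usr/bin/grafana-kiosk:

wget https://github.com/grafana/grafana-kiosk/releases/latest/download/grafana-kiosk.linux.armv6
chmod +x grafana-kiosk.linux.armv6
sudo mv grafana-kiosk.linux.armv6 /usr/bin/grafana-kiosk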

Running the Dashboard on startup:

We’re going to use a YAML file to store our dashboard configuration. Create a new file, config.yaml, and populate it as follows:

general:
  kiosk-mode: full
  autofit: true
  lxde: true
  lxde-home: /home/(user)
target:
  login-method: local
  username: admin
  password: (password)
  playlist: false
  URL: http://localhost:3000/d/bdf20d32-c4ff-4578-a3f4-7a38e1f722b9/network-speed?orgId=1
  ignore-certificate-errors: false

Be sure to substitute the proper username wherever you see (user), and your Grafana admin password for (password). The URL for the dashboard can be copied from the dashboard’s web interface.

Edit /home/(user)/.config/lxsession/LXDE-pi/autostart

Add the following line (it’s a single line and may display as wrapped):

@/usr/bin/grafana-kiosk -lxde-home /home/(user) -c /home/(user)/config.yaml

Save & Exit.

Now when you reboot the Pi, the dashboard should come up full screen after login.

NSRL Query from the Command Line

In digital forensics, we’re frequently trying to separate the signal from the noise. When examining operating systems – including mobile – it can be helpful to know which files came with the operating system. By filtering those out, we can concentrate on what’s new on the device as we start looking for activity.

The National Software Reference Library (NSRL) is designed to collect software from various sources and incorporate file profiles computed from this software into a Reference Data Set (RDS) of information. The RDS can be used by law enforcement, government, and industry organizations to review files on a computer by matching file profiles in the RDS. This will help alleviate much of the effort involved in determining which files are important as evidence on computers or file systems that have been seized as part of criminal investigations.

https://www.nist.gov/itl/ssd/software-quality-group/national-software-reference-library-nsrl

Recently I came across a site that, among other capabilities, has the option of doing an NSRL lookup using curl from the command line.

Me being Mr. PowerShell, I wanted to see what the syntax would be to do the same lookup with PowerShell. So where did I turn? No, not to Jeffrey Snover – I went to ChatGPT. I’d heard quite a bit about how services like these, while not trustworthy for historical accuracy, are pretty good at translating code.

The original syntax:

curl -X 'GET' \
  'https://hashlookup.circl.lu/lookup/sha1/09510d698f3202cac1bb101e1f3472d3fa399128' \
  -H 'accept: application/json'

Sure enough, it returned functional code to do the same operation in PowerShell. What I really appreciated, though, was the detailed explanation beneath the code laying out the parallel functions between the two and what the different values represent. I could see myself using ‘explain this code to me’ in the future.

PowerShell NSRL Query Syntax:

Invoke-RestMethod -Uri 'https://hashlookup.circl.lu/lookup/sha1/3f64c98f22da277a07cab248c44c56eedb796a81' -Headers @{accept='application/json'} -Method GET

I also asked it to convert the curl command to Python, which it handled equally well – once again with the same level of explanation of what’s going on beneath the code.

Python NSRL Query Syntax:

import requests
response = requests.get('https://hashlookup.circl.lu/lookup/sha1/09510d698f3202cac1bb101e1f3472d3fa399128', headers={'accept': 'application/json'})
print(response.json())

Curl script & output

Python script & output

PowerShell script & output


Of the three, I prefer the PowerShell command, as its output is the most readable. In the screenshot above, four queries were run. For the first two, there wasn’t a matching hash detected, so we can’t confirm whether those files were included with the operating system. For the last two queries – which happen to be for executable names that are frequently misused by bad actors – we see that the hashes queried do match the published NSRL.