BLOG: My thoughts on improving analysis and reporting

Unrelated image of an AI beach.

TL;DR

I share my thoughts on how I can improve my analysis, how I scope the reporting, and related topics.

Tactical Pause

THE CONTENT, VIEWS, AND OPINIONS EXPRESSED IN THIS DOCUMENT ARE MY OWN AND DO NOT REFLECT THOSE OF MY EMPLOYER OR ANY AFFILIATED ORGANIZATIONS. ALL RESEARCH, ANALYSIS, AND WRITING ARE CONDUCTED ON MY PERSONAL TIME USING MY OWN PERSONALLY ACQUIRED RESOURCES. ANY REFERENCES, TOOLS, OR SOFTWARE MENTIONED HERE ARE LIKEWISE USED INDEPENDENTLY AND ARE NOT ASSOCIATED WITH, ENDORSED BY, OR FUNDED BY MY EMPLOYER.

Summary up Front

I share my thoughts and goals for becoming more efficient. I formally declare that my intended audience is other analysts. I document the parts that I think are the most important to share with other analysts. I discuss breaking the analysis into three parts – delivery, execution, and C2. I highlight the importance of scoping the analysis. I discuss validators, monitoring over time, and sourcing malware. Finally, I lightly discuss tailoring the reporting to the appropriate medium.

Pre

This will be more bloggy than normal. This is me articulating my thoughts. My opinions might not always apply, but for a thruntellisearch hobby analyst like me, they mostly do. I am not bound by organizational requirements to meet any standard. My standards are arbitrary and up to me to figure out.

Beginning Thoughts

I’m in a phase of my thruntellisearch hobby where I want to streamline everything. Thruntellisearch analysts would benefit from a set checklist. I noticed that when I worked on “Oyster Malware Delivery via Teams Fake App”, I took too long [1]. I often let myself get lost in each analysis. I sometimes use analysis as a mental-escape activity. Analyzing stuff is relaxing, and that’s cool. It’s cool to enjoy the ride and all, but I want to create something that will help improve my efficiency.

Problem

Vendor reports often lack explicit tactical-level intelligence. Most vendor incident reports limit the discussion to the specific incident. The analyst audience needs to extract the patterns and action them. Specifically, they need to figure out what can be used to search for similar activity. A unique problem is that indicators may be inactive by the time a vendor report is released, and the thractors have moved to new infrastructure. Vendor reports will also often mention vendor-specific detections. Those are good for the vendor’s customers, but might not help non-customers. This is especially so if the report only states the rule names, not the underlying mechanics.

Solution

Create reports that provide tactical-level intelligence. Thrintel for analysts by analysts. You don’t need to identify every little detail about the delivery or the malware execution. You only need to identify enough to uncover additional infrastructure and create detections.

Limit the Scope

Scope the analysis to the components that set your product apart from others. For example, vendor reports usually include historical info about malicious activities. They may include how the malware developers sell it on whatever market. They may discuss historical changes. Leave it to the reader to learn that from the vendors, or from an AI-generated summary. Focus on the parts that set your product apart.

Breakdown

Each incident should be broken into simple parts: delivery, execution, and C2. There will be some overlap between delivery and execution. This is especially so for staged loaders. The delivery part should be limited to everything that does not happen on the user’s machine. Once there is activity on the user’s machine, that is the start of execution. This can be when the user downloads a fake update, or opens an email attachment. C2 covers any network activity after execution has begun.

The following sections are succinct explanations of the parts, and what should occur.

Delivery Analysis

  • Analyze the delivery
    • The goal is to understand it
  • Check for gates
    • That is – mechanisms that defend against analysts, or mechanisms that filter for properties like geolocation or user agent
  • Check for patterns
    • How can you find additional infrastructure?
  • Use the DTF checklist [2]
  • Good: identify the general chain of events
    • Identify the domains for the reported incident
  • Better: identify the code parts that influence the delivery path
    • Identify the specific code parts that might be used for a gate
    • Identify the specific code that leads to the next part
  • Best: identify the unique patterns that can be used for detections
    • Identify domain registration patterns
    • Identify layer 7 patterns (e.g., unique page titles, or unique resource hashes)
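As a sketch of the “best” tier above, layer-7 pivots can be reduced to a small amount of code. The example below is minimal and entirely hypothetical: it extracts a page title and hashes the whitespace-normalized document, so two delivery pages from the same kit fingerprint identically even when trivial formatting differs. The sample pages and the `page_fingerprint` helper are illustrative, not from a real campaign.

```python
import hashlib
import re

def page_fingerprint(html: str) -> dict:
    """Extract simple layer-7 pivots from raw HTML: the page title
    and a SHA-256 of the whitespace-normalized document."""
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    title = re.sub(r"\s+", " ", match.group(1)).strip() if match else ""
    normalized = re.sub(r"\s+", " ", html).strip().encode("utf-8")
    return {"title": title, "body_sha256": hashlib.sha256(normalized).hexdigest()}

# Two hypothetical delivery pages from the same kit; only whitespace differs.
page_a = "<html><head><title>Teams Update</title></head><body>Install now</body></html>"
page_b = "<html><head><title>Teams  Update</title></head><body>Install now</body></html>"

print(page_fingerprint(page_a)["title"])                # Teams Update
print(page_fingerprint(page_a)["body_sha256"] ==
      page_fingerprint(page_b)["body_sha256"])          # True
```

The same idea extends to any stable resource: hash the favicon, a script, or a stylesheet, then search scan data for hosts serving the same hash.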

Execution Analysis

  • Analyze the execution
    • The overall goal is to identify behaviors that can be used for detection
  • Identify command-line arguments
  • Identify common persistence methods
    • Like scheduled tasks, registry modifications, or startup folder entries
  • Identify C2 indicators (domains/IPs)
  • Identify unique C2 profiles
    • Like specific user agents, or specific C2 routes (like /api/kcehc)
  • Good: identify the general chain of events
    • Identify the files dropped, persistence, and C2 indicators
  • Better: identify the unique patterns
    • Identify any unique patterns such as dropped file names/paths, or C2 profiles (e.g., user agents, C2 routes)
  • Best: identify enduring patterns over time
    • Monitor behavior over time
    • Search OSINT sources to identify enduring patterns
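The execution patterns above can be expressed as simple behavioral matchers over command lines. The sketch below is illustrative only: the pattern names and regexes are examples of generic persistence and loader behaviors, not a detection set tied to any specific family, and the sample command line is hypothetical.

```python
import re

# Illustrative behavioral patterns; names and regexes are examples only.
SUSPICIOUS_PATTERNS = {
    "schtasks_persistence": re.compile(r"schtasks(\.exe)?\s+/create", re.IGNORECASE),
    "run_key_persistence": re.compile(r"reg(\.exe)?\s+add\s+.*\\CurrentVersion\\Run", re.IGNORECASE),
    "rundll32_export": re.compile(r"rundll32(\.exe)?\s+\S+\.dll,\s*\w+", re.IGNORECASE),
}

def match_behaviors(cmdline: str) -> list:
    """Return the names of every pattern that a command line matches."""
    return [name for name, rx in SUSPICIOUS_PATTERNS.items() if rx.search(cmdline)]

sample = r"rundll32.exe C:\Users\Public\twain_96.dll,DllRegisterServer"
print(match_behaviors(sample))  # ['rundll32_export']
```

In practice these regexes would run over sandbox process logs or EDR telemetry; the point is to capture the behavior, not the exact file name.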

C2 Analysis

  • Find patterns
    • Registration patterns (e.g., registrar, name server, ASN)
    • Application patterns (e.g., page titles, banner hashes, resource hashes)
  • Use the DTF checklist [2]
  • Action patterns / test hypotheses
    • Check if it’s possible to find additional infrastructure
  • Good: identify the general indicators
    • Identify the indicators (IPs/domains) for the C2
  • Better: identify the unique patterns
    • Identify any unique patterns such as C2 profiles (e.g., user agents, C2 routes)
    • Identify each indicator’s unique registration/application patterns
    • Monitor behavior over time
    • Compare with OSINT sources

Validators

Along each step, determine a validation technique. For example, does the delivery domain use a unique masquerade theme? Does the C2 respond in a unique way to certain requests? As you test your patterns, use the validators to confirm whether a pattern is a useful pivot.
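A validator can often be reduced to a small yes/no function over the observables you chose during analysis. The sketch below is minimal and hypothetical: it validates a candidate host against a known page title and body hash captured during the original analysis. The fingerprint values are made up for illustration.

```python
import hashlib

# Fingerprint captured during the original analysis (hypothetical values).
KNOWN_TITLE = "Teams Update"
KNOWN_HASH = hashlib.sha256(b"fake installer page").hexdigest()

def validate_candidate(status: int, title: str, body: bytes) -> bool:
    """Yes/no validator: does this candidate host serve the same
    status code, page title, and body hash we observed during analysis?"""
    return (
        status == 200
        and title == KNOWN_TITLE
        and hashlib.sha256(body).hexdigest() == KNOWN_HASH
    )

print(validate_candidate(200, "Teams Update", b"fake installer page"))  # True
print(validate_candidate(200, "Login", b"something else"))              # False
```

In a real workflow the status, title, and body would come from an HTTP request to the candidate host; the function stays the same.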

Monitor

It’s important to call out the need for monitoring over time. Monitor over time for new infrastructure. Monitor over time for slight variations in behavior. When it’s your first time observing something, you can use OSINT sources to identify patterns over time.
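Monitoring can be as simple as diffing dated snapshots of whatever matches your validated pivot. A minimal sketch with hypothetical data, e.g., saved output from a scheduled internet-scan query:

```python
# Hypothetical dated snapshots of hosts matching a validated pivot.
yesterday = {"45.0.0.1", "45.0.0.2"}
today = {"45.0.0.2", "45.0.0.3", "45.0.0.4"}

new_hosts = sorted(today - yesterday)       # infrastructure to triage
retired_hosts = sorted(yesterday - today)   # infrastructure that went dark

print("new:", new_hosts)          # new: ['45.0.0.3', '45.0.0.4']
print("retired:", retired_hosts)  # retired: ['45.0.0.1']
```

Run it on a schedule and the diff becomes your daily triage queue.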

Sourcing Malware

It can be a challenge for a hobbyist to acquire malware for analysis. The best option is to get it directly from the source – that is, the thractor serving it. It’s unfortunate when vendors push a product after the thractor’s infrastructure is down. Enterprise users will likely have enterprise sources, but hobbyists are limited to the free stuff. Some of the quality free sources include ANY.RUN, Tria.ge, MalwareBazaar, and the Threat Insights Portal by Neiki.dev.

Reporting Medium

Tailor reporting to the medium. Website visitors might want the long-form recipe. OTX users will most likely want to ingest indicators and links. X users will likely want indicators, maybe some snips, and then a link to the long-form. Limit short-form communication to indicators and links (and other collaborative discussions).

Summary

I share my thoughts and goals for becoming more efficient. I formally declare that my intended audience is other analysts. I document the parts that I think are the most important to share with other analysts. I discuss breaking the analysis into three parts – delivery, execution, and C2. I highlight the importance of scoping the analysis. I discuss validators, monitoring over time, and sourcing malware. Finally, I lightly discuss tailoring the reporting to the appropriate medium.

References

1 – https://malasada.tech/oyster-malware-delivery-via-teams-fake-app/

2 – https://github.com/MalasadaTech/defenders-threatmesh-framework/blob/main/checklist/README.md

With planny aloha – mahalo for your time