Defense Against the Dark Arts: Week 2

This week’s lectures continued in a similar vein to last week’s, though with more focus on the practice of forensics and compiling evidence. Just like last week, the lecturer was Christiaan Beek, a computer security expert at McAfee.

Just like last week, my main takeaway from this week’s lectures was a set of definitions of terms and descriptions of practices. I’ve broken them up into two parts: forensics and evidence.


Forensics

Forensic computing is defined as: “Getting data of probative value out of systems.” That process involves the following three steps:

  • Acquire data while minimizing data loss
  • Analyze all collected data
  • Report findings


When it comes to acquisition, Beek discussed three main types:

  • Live forensics: The malware or exploit is still occurring and you’re observing its behavior and effects in real time.
  • Post-mortem forensics: The malware is no longer running, but its effects may be observable in changes made to the hard disk or memory.
  • Network-based forensics: The malware is investigated by observing network traffic, network logs, and other network-based evidence.

Check the time! And keep other records besides.

Over and over, Beek stressed the importance of keeping detailed records of your own investigation. A key component of this is checking your own watch and noting the time at which each observation is made. He also discussed squaring the time on your watch with the machine’s system time so as to make log reviews easier.
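Squaring your watch against the system clock amounts to recording a fixed offset at one moment and applying it whenever you read the machine’s logs. A minimal sketch, with made-up times:

```python
from datetime import datetime

# Made-up example: the investigator's watch reads 14:02:10 at the same
# moment the suspect machine's clock shows 13:58:40, so the machine
# runs 3 minutes 30 seconds slow.
watch_time = datetime(2024, 3, 1, 14, 2, 10)
system_time = datetime(2024, 3, 1, 13, 58, 40)
skew = watch_time - system_time  # offset to add to system timestamps

def normalize(log_timestamp):
    """Convert a timestamp from the suspect machine's logs to watch time."""
    return log_timestamp + skew

# A log entry stamped 13:00:00 on the machine actually happened at 13:03:30.
print(normalize(datetime(2024, 3, 1, 13, 0, 0)))
```

Recording the skew once in your notebook lets anyone reviewing the case line the machine’s logs up against your handwritten observations.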

Beek said that oftentimes a non-electronic (paper) notebook is the best choice for recording observations. He said that on some teams a specific person is deemed the note taker and is responsible for all record keeping.


Triage

Beek defined triage this way: “If I find something, can I prove the same conclusion in multiple ways?”

Bounds of the investigation

A “fishing expedition,” according to Beek, happens when investigators go outside the bounds of an investigation. For the same reason that search warrants are required for police to enter someone’s home, a judge’s approval may be necessary before exploring another part of a suspect’s system. In other words, just because you find someone’s Gmail password doesn’t mean you’re legally allowed to use it.


Timeline

The points above regarding time- and record-keeping are critical components of a timeline of events. That’s because a key part of forensics is being able to understand what you see and then to replicate it.


Evidence

Evidence is anything you can use to prove (or disprove) a fact. An example is the memory dump, in which the contents of volatile memory (RAM) are saved to a more permanent medium for later review.

An important point here is Locard’s Exchange Principle, which states that you can’t interact with a live system without having some sort of effect on it.
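Locard’s principle is one reason acquired evidence is commonly hashed right away: a digest recorded in your notes lets you show later that the dump itself was never altered during analysis. A minimal sketch (the dump filename is hypothetical):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Hash a large evidence file in chunks to avoid loading it all at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest in your notebook at acquisition time; re-hashing the
# dump later demonstrates it has not changed since it was collected.
# sha256_file("memory.dmp")  # hypothetical dump file
```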

Where does evidence disappear from fastest?

In order of most to least volatile:

  1. Applications
  2. Operating system
  3. Server
  4. Computerized systems
  5. Infrastructure systems
  6. Local area network and DMZ (the perimeter network between the LAN and the internet)
  7. External environment

Beek also referenced the following ordered list of most to least volatile evidence, which is from RFC 3227:

  1. System memory
  2. Temporary files
  3. Process table and network connections
  4. Network routing information and ARP cache
  5. Forensic acquisition of disks
  6. Remote logging and monitoring data
  7. Physical configuration and network topology
  8. Backups

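The RFC 3227 ordering can be used directly as an acquisition checklist: collect the most volatile sources first. A small sketch (the checklist structure is mine, not from the lecture):

```python
# RFC 3227 order of volatility, most volatile first, as listed above.
ACQUISITION_ORDER = [
    "system memory",
    "temporary files",
    "process table and network connections",
    "network routing information and ARP cache",
    "forensic acquisition of disks",
    "remote logging and monitoring data",
    "physical configuration and network topology",
    "backups",
]

def plan(sources):
    """Sort the evidence sources to collect so the most volatile come first."""
    return sorted(sources, key=ACQUISITION_ORDER.index)

print(plan(["backups", "system memory", "temporary files"]))
```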

Tools

We discussed several tools last week. This week Beek added an important directive: Don’t install forensic tools on a suspect’s machine!

Volatility is a command-line framework for analyzing memory dumps. It works with memory images from Windows, OS X, Linux, and Android. A cheat sheet is available online.