1 Digital Forensics Dr. Bhavani Thuraisingham The University of Texas at Dallas Lecture #30 Network Forensics (Revisited) November 7, 2007

2 Outline
- Review of Lecture 29
- Discussion of the papers on network forensics

3 Review of Lecture 29
- Data Hiding in Journaling File Systems - http://dfrws.org/2005/proceedings/eckstein_journal.pdf
- Evaluating Commercial Counter-Forensic Tools - http://dfrws.org/2005/proceedings/geiger_couterforensics.pdf
- Automatically Creating Realistic Targets for Digital Forensics Investigation - http://dfrws.org/2005/proceedings/adelstein_falcon.pdf

4 Papers to discuss
- File Hound: A Forensics Tool for First Responders - http://dfrws.org/2005/proceedings/gillam_filehound.pdf
- Monitoring Access to Shared Memory-Mapped Files - http://dfrws.org/2005/proceedings/sarmoria_memorymap.pdf
- Network Forensics Analysis with Evidence Graphs - http://dfrws.org/2005/proceedings/wang_evidencegraphs.pdf

5 Abstract of Paper 1
- Since the National Institute of Justice (NIJ) released its Electronic Crime Needs Assessment for State and Local Law Enforcement study results in 2001, several critical strides have been made in improving the tools and training available to state and local law enforcement organizations. One area that has not received much attention is the computer crime first responder. This paper focuses on the development and current results of File Hound, a "field analysis" software program for law enforcement first responders that is currently used by over 14 law enforcement agencies around the State of Indiana. It has been successfully used in several cases ranging from child pornography to fraud.

6 Outline
- Introduction
- File Hound
- Example Investigation
- Directions

7 Introduction
- Current tools are excellent for case management and investigations in a laboratory.
- Time-sensitive investigations, however, can occur out in the field.
- This has led to a new classification of investigators: the first responders.
- These officers are the first on the scene and have basic training in searching for and handling digital evidence.
- File Hound was developed at Purdue to assist first responders in conducting a quick field analysis that satisfies 4th Amendment (protection against unreasonable search and seizure) and issued-warrant requirements.

8 File Hound
- Search for images. The software had to be able to search a hard drive for image files. Since a filename search may not be thorough enough for a forensics investigation, the search must focus on file headers to determine a file's true identity (see the sketch after this slide).
- Identify relevant images. Since several hundred or thousand files may be found during a search, the software had to present an interface for an examiner to browse the images found and select those relevant to an investigation. This interface should be simple but yield the results in an intuitive form.
- Generate a report of the results. At a bare minimum, the report must include the full logical path of each file.
- Require minimal user training. A user should be able to fully utilize the software with minimal training. A powerful but intuitive user interface was determined to be the best means to accomplish this goal.
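The header-based search requirement can be illustrated with a short Python sketch. This is a reconstruction of the idea only, not File Hound's code: the signature table covers a few common formats, and walk_images() classifies files by magic bytes so that renamed images are still found.

    import os

    # Common image file signatures (magic bytes at offset 0).
    SIGNATURES = {
        b"\xff\xd8\xff": "jpeg",
        b"\x89PNG\r\n\x1a\n": "png",
        b"GIF87a": "gif",
        b"GIF89a": "gif",
        b"BM": "bmp",
    }

    def identify(path):
        """Return the image type implied by the file header, or None."""
        with open(path, "rb") as f:
            header = f.read(8)
        for magic, kind in SIGNATURES.items():
            if header.startswith(magic):
                return kind
        return None

    def walk_images(root):
        """Yield (full_logical_path, type) for every file whose header
        matches a known image signature, regardless of its filename."""
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                kind = identify(path)
                if kind:
                    yield path, kind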

9 Example Investigation
- The investigation begins when the suspect's hard drive is connected and mounted to the investigator's laptop using a hardware write blocker.
- File Hound is started and the suspect's hard drive is selected from a drop-down list.
- Clicking search without changing any options initiates an image search. Once the search has completed, the results are displayed in a tabular format.
- The total time needed for the initial search depends on the size of the files being searched. File Hound typically searches through a gigabyte of data in 15 minutes.
- Next, image identification can occur using the identify tab. The investigator can select any of the images for inclusion in the final report (see the sketch after this slide).
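Two of the steps above reduce to simple code: estimating the initial search time from the quoted throughput of roughly one gigabyte per 15 minutes, and writing the final report with the full logical path of each selected file. The CSV layout below is an assumption for illustration, not File Hound's actual report format.

    import csv
    import time

    def estimate_search_minutes(drive_bytes, gb_per_15min=1.0):
        """Rough search-time estimate from the quoted throughput of
        about one gigabyte of data per 15 minutes."""
        gigabytes = drive_bytes / 1e9
        return 15.0 * gigabytes / gb_per_15min

    def write_report(selected, out_path="filehound_report.csv"):
        """selected: iterable of (full_logical_path, image_type) pairs
        the investigator marked as relevant on the identify tab."""
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["full_logical_path", "type", "reported_at"])
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            for path, kind in selected:
                writer.writerow([path, kind, stamp])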

10 Directions
- What is not clear is what makes File Hound unique. The authors claim that it is suitable for use in the field; how do they accomplish this?

11 Abstract of Paper 2
- The post-mortem state of a compromised system may not contain enough evidence about what transpired during an attack to explain the attacker's modus operandi. Current systems that reconstruct sequences of events gather potential evidence at runtime by monitoring events and objects at the system call level. The reconstruction process starts with a detection point, such as a file with suspicious contents, and establishes a dependency chain with all the processes and files that could be related to the compromise, building a path back to the origin of the attack. However, system call support is lost after a file is memory-mapped, because all subsequent read and write operations on the file in memory go through memory pointers. The authors present a runtime monitor to log read and write operations in memory-mapped files. The basic concept of the approach is to insert a page fault monitor in the kernel's memory management subsystem. This monitor guarantees the correct ordering of the logs that represent memory access events when two or more processes operate on a file in memory. The monitor improves the accuracy of current reconstruction systems by reducing search time, search space, and false dependencies.
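A minimal Python sketch of the dependency-chain idea described in the abstract, under an assumed event format; it is not the authors' system. Starting from a detection point, it walks recorded influence events backwards to collect every process or file that could lie on a path to the origin of the attack.

    from collections import defaultdict

    # Each event: (timestamp, source_object, target_object), meaning
    # "source wrote to / influenced target at this time". The objects
    # and timestamps here are invented for the example.
    events = [
        (1, "proc:sshd",   "proc:shell"),
        (2, "proc:shell",  "proc:wget"),
        (3, "proc:wget",   "file:/tmp/rootkit"),
        (4, "proc:editor", "file:/home/notes.txt"),
    ]

    def dependency_chain(detection_point, events):
        """Return every object with a path of influence into the
        detection point, i.e. the candidate origin set of the attack.
        (A real reconstruction system would also respect the event
        timestamps when ordering the chain.)"""
        influenced_by = defaultdict(set)
        for _t, src, dst in events:
            influenced_by[dst].add(src)
        chain, frontier = set(), {detection_point}
        while frontier:
            obj = frontier.pop()
            for src in influenced_by[obj] - chain:
                chain.add(src)
                frontier.add(src)
        return chain

    print(dependency_chain("file:/tmp/rootkit", events))
    # {'proc:wget', 'proc:shell', 'proc:sshd'} -- the editor is excluded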

12 Outline
- Introduction
- Fine-grained monitoring
- Monitoring techniques
- Directions

13 Introduction
- The authors claim that logging to detect intrusions is carried out at the application level.
- They argue that it is better to log at the operating system level, where an intruder will find it more difficult to hide his or her tracks.
- They describe a runtime monitor to log read and write operations in memory-mapped files (the sketch after this slide shows why such operations are invisible to system call monitors).
- Key to their approach is the VMA (virtual memory area), a memory region (contiguous memory frames) allocated to a process.
- Finer-grained monitoring is carried out.
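The gap the paper targets can be seen in a few lines of Python (standard library only; the file name is arbitrary). This is an illustration, not the authors' monitor: after mmap(), the file is modified through pointer (slice) assignment, so a monitor that watches write() system calls sees nothing.

    import mmap
    import os

    path = "demo.bin"
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)          # one page of zeroes

    fd = os.open(path, os.O_RDWR)
    mem = mmap.mmap(fd, 4096)
    mem[0:5] = b"HELLO"                  # modifies the file, but no
                                         # write() system call is issued
    mem.flush()                          # changes reach disk via the pager
    mem.close()
    os.close(fd)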

14 Fine-Grained Monitoring
- The authors consider memory-mapped files as objects with constituent parts, such as pages.
- The goal is not only to trace read and write operations to memory, but also to determine what location of the file is accessed. The levels of granularity are as follows (a worked example follows this list):
  - Object level: the whole object, such as the memory-mapped file, without considering any subset of it.
  - VMA level: the memory regions allocated to the mapped file. There can be one or many, depending both on the availability of contiguous memory frames and on the amount of memory requested by the process.
  - Page level: most operating systems define a page as the memory allocation unit, to take advantage of the paging hardware support present in the underlying architecture.
  - Minipage level: the minipage introduces the idea of hybrid memory units. Although hardware support is lost for memory allocation units smaller than a page, the authors have studied techniques to monitor at this level.
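To make the granularity levels concrete, here is the locating arithmetic at each level, assuming a 4 KiB page and a hypothetical split of four minipages per page (the paper does not fix that number). The VMA level is omitted because it depends on how the kernel happened to split the mapping, not on pure arithmetic.

    PAGE_SIZE = 4096
    MINIPAGES_PER_PAGE = 4               # assumed split, for illustration
    MINIPAGE_SIZE = PAGE_SIZE // MINIPAGES_PER_PAGE

    def locate(offset):
        """Which unit of the mapped file does a byte offset touch,
        at each level of granularity?"""
        page = offset // PAGE_SIZE
        minipage = (offset % PAGE_SIZE) // MINIPAGE_SIZE
        return {
            "object": "whole file",            # object level: no subdivision
            "page": page,                      # page level
            "minipage": (page, minipage),      # minipage level
        }

    print(locate(9300))
    # {'object': 'whole file', 'page': 2, 'minipage': (2, 1)}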

15 Monitoring
- Object level: the object-level monitor can determine whether, once in memory, the object was read or modified by any process at any time. The technique used is the protection of memory pages to generate page faults when a process operates on the memory space allocated to the memory-mapped file. The monitor interprets a page fault as a process's intention to either read from or write to the memory-mapped file. The object-level monitor does not consider constituent parts. It has the advantage of generating a reduced number of log records, at the cost of false dependencies, because it does not consider which part of the file is accessed by a process.
- VMA level: the VMA-level monitor can log read and write accesses to the individual memory regions allocated to the mapped object. This eliminates false dependencies, such as when a process writes to one VMA and another process reads from a different VMA allocated to the same object. The technique to monitor at this level is also based on page protection and page-fault monitoring to detect read and write operations on the memory address space where the file is mapped.

16 Monitoring
- Page level: the page-level monitor starts tracing a memory-mapped file when a process makes an mmap system call. The monitor tags the VMA(s) allocated to the process. At this point there is no physical memory page allocation, because Linux implements demand paging. The strategy is to generate page faults when the process tries to perform a read or write memory operation on the pages of a tagged VMA. Besides page faults resulting from the regular operating system protection mechanisms (such as allocating a page upon first use, address space violation control, and so on), a page fault can also be due to the PTE status-bit manipulation implemented in the monitor. When a page fault occurs, the page fault handler checks whether the requested page is part of a tagged VMA. If it is, and if the memory address is a valid one, a ticket is issued. A ticket is a combination of process ID, type of memory access operation (read or write), and the page frame number (see the sketch after this slide).
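A small Python simulation of the ticket mechanism just described; the same page-protection idea underlies the object- and VMA-level monitors on the previous slide. The Ticket fields come from the slide, but the tagged-VMA table, addresses, and handler are hypothetical: the real logic lives in the kernel's page fault handler, and the page number derived here from the faulting virtual address merely stands in for the physical frame number.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Ticket:
        pid: int
        op: str          # "read" or "write"
        page_frame: int

    log = []
    tagged_vmas = {(0x7F0000000, 0x7F0004000)}   # (start, end), hypothetical

    def on_page_fault(pid, addr, op, page_size=4096):
        """Called by the (simulated) fault handler. Issues a ticket only
        for valid addresses that fall inside a tagged VMA."""
        for start, end in tagged_vmas:
            if start <= addr < end:
                log.append(Ticket(pid, op, addr // page_size))
                return
        # Otherwise: an ordinary fault (demand paging, protection, ...).

    on_page_fault(pid=1234, addr=0x7F0000010, op="write")
    print(log)   # [Ticket(pid=1234, op='write', page_frame=...)]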

17 Directions
- More investigation is needed to determine whether this method is effective.

18 Abstract of Paper 3 (repeat)
- The authors develop a prototype network forensics analysis tool that integrates presentation, manipulation, and automated reasoning over intrusion evidence. They propose the evidence graph as a novel graph model to facilitate the presentation and manipulation of intrusion evidence. For automated evidence analysis, they develop a hierarchical reasoning framework that includes local reasoning and global reasoning. In local reasoning, they apply Rule-based Fuzzy Cognitive Maps (RBFCM) to model the state evolution of suspicious hosts. In global reasoning, they aim to identify groups of strongly correlated hosts in the attack and derive their relationships in the attack scenario. Their analysis mechanism effectively integrates analyst feedback into the automated reasoning process. Experimental results demonstrate the potential of their proposed techniques.

19 Outline
- Introduction
- Network forensics analysis prototype
- Directions

20 Example Prototype System: Overview
- Network forensics analysis mechanisms should meet the following requirements: short response times and user-friendly interfaces.
- Questions addressed: How likely is it that a specific host is relevant to the attack? What role did the host play in the attack? How strongly are two hosts connected to the attack? (A minimal evidence-graph sketch follows this slide.)
- Features of the prototype:
  - A preprocessing mechanism to reduce redundancy in intrusion alerts
  - A graph model for presenting and interacting with the evidence
  - A hierarchical reasoning framework for automated inference of attack group identification
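As a rough illustration of the graph model (not the authors' implementation), the sketch below folds preprocessed alerts into weighted host-to-host edges, so that "how strongly are two hosts connected to the attack" becomes a lookup over edge weights. Host addresses, severities, and field names are invented for the example.

    from collections import defaultdict

    class EvidenceGraph:
        def __init__(self):
            self.edges = defaultdict(float)   # (src, dst) -> weight
            self.hosts = set()

        def add_alert(self, src, dst, severity):
            """Fold one (preprocessed, de-duplicated) alert into the graph."""
            self.hosts.update((src, dst))
            self.edges[(src, dst)] += severity

        def connection_strength(self, a, b):
            """Aggregate evidence linking two hosts, in either direction."""
            return self.edges[(a, b)] + self.edges[(b, a)]

    g = EvidenceGraph()
    g.add_alert("10.0.0.5", "10.0.0.9", severity=0.8)   # e.g. exploit alert
    g.add_alert("10.0.0.9", "10.0.0.7", severity=0.3)   # e.g. scan alert
    print(g.connection_strength("10.0.0.5", "10.0.0.9"))  # 0.8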

21 Example Prototype System: Modules
- Evidence collection module
- Evidence preprocessing module
- Attack knowledge base
- Assets knowledge base
- Evidence graph generation module
- Attack reasoning module
- Analyst interface module

22 Directions
- Data mining and artificial intelligence/reasoning techniques may be effective for forensics analysis.

