Flash memory
In late 2010, Graeme Bell and Richard Boddington published “Solid State Drives: The Beginning of the End for Current Practice in Digital Forensic Recovery?”, a paper that drew a mixed reaction from the Internet community.
Although the peculiarities of solid state drives discussed in that paper had been publicly announced as early as the DEFCON 16 conference in 2008, studied by Microsoft specialists and presented at a 2009 storage developer conference [1, slide 6], and even mentioned at Russian seminars in 2010 [2, slides 10 and 11], acknowledgment of the problem and serious discussion of these characteristics in the forensic community did not come until March 2011.
In this note, I will try to examine the peculiarities of working with information stored in flash memory through the prism of computer forensics.
Briefly about computer forensics
The main field of computer forensics is the examination of machine-readable storage media for the purpose of obtaining judicial evidence (conducting computer forensic examinations) and gathering operational intelligence that is not used as evidence in court.
Traditionally, computer forensics also includes related areas in which the examination of computer information plays an important role: investigating information security incidents in organizations, examining storage media for military (combat) purposes, and so on.
Since the forensic examination of storage media is in many cases carried out to obtain judicial evidence, the methodology of such examinations must ensure that the results can be verified by a repeat examination. This requirement is codified in Article 57 of the Code of Criminal Procedure of the Russian Federation: “The expert has no right … to conduct, without the permission of the inquiry officer, investigator, or court, examinations that may entail the complete or partial destruction of objects or a change in their appearance or essential properties.”
As applied to the examination of computer information, the repeatability requirement is expressed in guaranteeing the immutability of the examined data (some specialists prefer the term “data integrity”), which is achieved in various ways:
- software write blockers, which prevent the operating system under which the examination is conducted from sending any write commands to the storage medium (on Linux: loop devices created in read-only mode);
- hardware write blockers, which perform the same function but require no software installation (put simply: intermediary devices between the computer and the storage medium that filter the transmitted commands);
- specialized operating systems that send no write commands to connected media during boot and operation.
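For illustration, here is a minimal sketch of the first, software approach on Linux (assuming the util-linux losetup utility and a hypothetical image file named evidence.dd): once an image is attached as a read-only loop device, the kernel itself rejects any write commands sent to it.

```python
# Minimal sketch of software write blocking on Linux: attach a disk
# image as a read-only loop device. Assumes util-linux is installed
# and the script runs with sufficient privileges; "evidence.dd" is a
# hypothetical image name used only for illustration.
import subprocess

def attach_read_only(image_path: str) -> str:
    """Attach a disk image as a read-only loop device, return its path."""
    device = subprocess.check_output(
        ["losetup", "--find", "--show", "--read-only", image_path],
        text=True,
    ).strip()
    return device

if __name__ == "__main__":
    dev = attach_read_only("evidence.dd")
    print(f"attached read-only at {dev}")  # e.g. /dev/loop0
```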
Early stages in the development of computer forensics
The development of forensic methods for examining computer information can be characterized by the following directions:
1. Confirming the immutability (integrity) of the examined data using cryptographic hash functions (a minimal sketch of this follows the list).
2. Ensuring the immutability of the examined data through the mandatory use of write blockers, or by examining forensic copies of storage media created by copying the contents of one medium to another (such copies are sometimes called dumps or images).
3. Generalizing these approaches and codifying them in the methodologies used to conduct forensic examinations.
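As a minimal sketch of item 1 (the file name evidence.dd is again hypothetical): a digest computed before the examination must match the digest computed after it, otherwise the data changed along the way.

```python
# Computing a cryptographic hash of a disk image in chunks: if the
# digest computed before and after the examination matches, the data
# has not changed. SHA-256 is used here as one common choice.
import hashlib

def hash_image(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

before = hash_image("evidence.dd")
# ... the examination is performed via a write blocker or on a copy ...
after = hash_image("evidence.dd")
assert before == after, "contents changed during examination!"
```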
It is easy to see that the development of computer forensics has been shaped by the static nature of the contents of machine-readable storage media, which can be expressed as a simple principle: “data changes only on command.”
This principle holds for floppy disks, hard disk drives (HDDs), and other “traditional” storage media; any change in the contents of such media that occurs without a command should be regarded as a malfunction (e.g., the appearance of bad sectors).
Naturally, changes to service data (S.M.A.R.T. parameters) are not a serious problem, because such service data is rarely used in forensics to solve any task (and changing it is not considered critical, unlike changing file system data).
Flash memory development from a forensic perspective
The mass adoption of flash memory in removable and non-removable storage devices became possible thanks to the solution of the cell wear problem: flash memory cells withstand only a limited number of erase (rewrite) cycles, and the uneven writes typical of common file systems (some data areas are written far more often than others) significantly reduce the lifetime of the memory.
To solve this problem, flash memory manufacturers began using wear leveling, which consists of moving data from the most worn areas of memory to the least worn ones.
This operation can be performed at the file system level or at the controller level: in the first case, the data is redistributed by a special file system driver; in the second, by the drive's controller, transparently to the operating system (the controller performs the necessary address translation, i.e., it returns data in its original logical structure by maintaining a table that maps logical addresses to the physical locations of memory cells).
Because file-system-level wear leveling requires installing an additional driver and using a special file system, it never caught on among ordinary users, and controller-level wear leveling algorithms became the most widespread. Thus, even when flash memory is used in “read-only” mode, the controller constantly redistributes data.
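To make the address-translation idea concrete, below is a toy model of my own (the names FlashController and l2p are invented; real firmware is far more complex): the controller keeps a logical-to-physical mapping table and redirects each write to the least worn free cell, while reads always go through the table.

```python
# Toy model of controller-level wear leveling (an illustrative sketch,
# not real firmware). The operating system addresses logical blocks;
# the controller maps them to physical cells via a lookup table and is
# free to move data between cells to spread out wear.

class FlashController:
    def __init__(self, physical_blocks: int):
        self.cells = [None] * physical_blocks      # physical memory cells
        self.erase_count = [0] * physical_blocks   # wear counter per cell
        self.l2p = {}                              # logical -> physical map

    def _least_worn_free(self) -> int:
        # Assumes at least one free cell exists (the reserve area).
        used = set(self.l2p.values())
        free = [p for p in range(len(self.cells)) if p not in used]
        return min(free, key=lambda p: self.erase_count[p])

    def write(self, lba: int, value: bytes) -> None:
        # Out-of-place write: data goes to the least worn free cell,
        # and the logical address is simply remapped.
        new_p = self._least_worn_free()
        self.cells[new_p] = value
        old_p = self.l2p.get(lba)
        if old_p is not None:
            self.cells[old_p] = None       # old cell is erased
            self.erase_count[old_p] += 1
        self.l2p[lba] = new_p

    def read(self, lba: int):
        # Reads go through the map, so physical moves stay invisible:
        # at the logical level the data appears static.
        p = self.l2p.get(lba)
        return self.cells[p] if p is not None else None
```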
However, this process is completely transparent to the operating system and to reading programs, so forensic analysis of flash memory would pose no particular difficulty (data is redistributed at the physical level but remains static at the logical level), were it not for two big “buts”:
1. The controllers of USB flash drives sometimes do not bind blocks of memory cells to logical addresses until the first write command to those blocks [4];
2. Effective wear leveling requires additional (reserve) memory areas, and it is advantageous to use the free (unoccupied) areas of file systems for this purpose.
The first “but” means that reading unwritten areas of memory on some USB flash drives returns random data (noise), which, as one would expect, can change from one read to the next.
The second “but” has led SSD manufacturers to solve the problem of detecting and using the free (unoccupied) areas of file systems for wear leveling.
These features seriously violate the “data changes only on command” principle on which modern computer forensics rests. Moreover, using free (unoccupied) file system areas that still contain fragments of deleted files and of previous file systems for wear leveling creates serious problems for data recovery: before a free area is used, the controller clears it, i.e., the deleted data is destroyed, with an effect similar to overwriting files with special wiping software.
How does the use of free file system space by flash memory work?
A simplified algorithm can be described in a few lines:
flash memory cells corresponding (at the logical level) to data areas not explicitly used by the file system are cleared, and the mappings to them are removed (i.e., the operating system can no longer read the former contents of these cells by reading the corresponding logical areas); the cells are then used for redistributing data at the physical level. If, at the moment of clearing, a cell contains fragments of deleted files or of previous file systems, that data is irretrievably destroyed (overwritten).
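The destructive effect of the clearing step can be shown with a tiny self-contained sketch (the names are again invented; freed_lbas stands for the logical areas reported as unused, e.g., via TRIM):

```python
# Toy model of free-space collection by a flash controller. Once a
# logical area is reported as unused, its cells are erased and
# unmapped: the former contents can no longer be read back.
# Illustrative sketch only.

def collect_free_space(l2p: dict, cells: list, freed_lbas: set) -> None:
    for lba in freed_lbas:
        p = l2p.pop(lba, None)
        if p is not None:
            cells[p] = None   # deleted data is physically destroyed here

# Example: a "deleted file" occupied LBA 5; after collection, even a
# raw read of the medium can no longer return its contents.
cells = [b"old filesystem", b"deleted file", None, None]
l2p = {4: 0, 5: 1}
collect_free_space(l2p, cells, freed_lbas={5})
assert 5 not in l2p and cells[1] is None
```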
The methods by which flash memory controllers detect the free (unoccupied) areas of file systems are of the greatest interest:
- detection via the ATA TRIM command [3]: an operating system that supports this command automatically notifies the drive, when files are deleted or a file system is formatted, about data areas that are no longer in use and can be used for wear leveling (a small check for whether TRIM is enabled is sketched below);
- detection by the controller itself parsing file system structures: without any participation of the operating system, the controller analyzes the stored data, processes the structures of the most common file systems (e.g., FAT), and identifies free data areas that can be cleared and used for wear leveling.
It should be noted that these wear leveling methods are not used in modern USB flash drives, only in solid state drives (SSDs).
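Whether an operating system actually notifies the drive about deleted files can be checked with standard tools; a hedged sketch that simply invokes the relevant command on each platform (output parsing is left out, and formats vary between versions):

```python
# Checking whether deleted-file notifications (TRIM) reach the drive.
# On Linux, `lsblk --discard` shows non-zero DISC-GRAN/DISC-MAX values
# for devices that accept discards; on Windows, `fsutil behavior query
# DisableDeleteNotify` reports 0 when TRIM is enabled.
import platform
import subprocess

if platform.system() == "Windows":
    subprocess.run(["fsutil", "behavior", "query", "DisableDeleteNotify"])
else:
    subprocess.run(["lsblk", "--discard"])
```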
What does all this mean for computer forensics?
From the wear leveling principles described above, two conclusions can be drawn:
- the contents of flash memory can change at the logical level even when the medium is connected through a write blocker or when file systems are mounted in “read-only” mode;
- solid state drives that independently detect and use the free areas of file systems destroy deleted data quickly and irreversibly.
These conclusions, in turn, yield the following general principles for forensically sound work with flash memory:
- hash functions cannot be used to confirm the immutability (integrity) of the contents of flash drives;
- using software and hardware write blockers does not guarantee the immutability (integrity) of the examined data when working with flash drives;
- recovering deleted data from solid state drives may prove impossible.
The only way to guarantee the immutability of the contents of flash memory during an examination is to unsolder the memory chips and read them directly with specialized hardware and software systems, followed by software reconstruction of the data.