Nonvolatile memory is memory that can keep its information even when it is powered off. In other words, nonvolatile memory requires power only while the data is being written; once the data is stored, nonvolatile memory technologies do not require power to maintain it.

Volatile versus nonvolatile memory

As we can see, nonvolatile and volatile memory are fundamentally different by definition. At first it may seem that nobody would prefer volatile memory over nonvolatile memory, because data is important and power is uncertain. However, there are a few reasons why both types of memory are in use and will continue to be in use:

First and foremost, volatile memory is typically faster than nonvolatile memory, so it is usually faster to operate on data held in volatile memory. And since power is available anyway while the data is being processed, volatility is not a concern at that point.

Since volatile memory inherently loses its data, the mechanism to retain data in volatile memory is to keep refreshing the data content; that is, to read the data and write it back in a repeating cycle. Since memory refresh consumes significant power, volatile memory cannot replace nonvolatile memory for practical purposes.

There is a memory hierarchy so that the systems can get the best of both worlds with limited compromises. A typical memory hierarchy in a computer system would look like Figure 3.11.

■ Figure 3.11. Typical memory hierarchy of a computer system.

So, as depicted in Figure 3.11, the CPU continues to process data from volatile memory, which is fast. However, the data in volatile memory is continuously backed by nonvolatile memory. It must be noted that if the memory the CPU is talking to is slow, it slows down the whole system irrespective of how fast the CPU is, because the CPU is blocked waiting for data from the memory device. However, fast memory devices are quite costly. In practice, therefore, computer systems today have multiple layers in the memory hierarchy to alleviate the problem.

We can see that volatile memory has multiple layers in the hierarchy, while nonvolatile memory typically has a single layer. The layers in the memory hierarchy from bottom to top typically get faster, costlier, and smaller. The fundamental principle behind this multilayer hierarchy is called locality of reference. Locality of reference means that during a given small period of time, data accesses generally fall in a predictable manner within an address region, and the region of interest switches only at intervals. Therefore the data in a locality can be transferred to the fastest memory so that the CPU can process it quickly. This works not only in theory but in practice as well. Details of memory evolution and the various interfaces that these memory devices use are discussed in Chapter 7.
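Locality of reference can be illustrated with a toy cache model. The sketch below is purely illustrative (the class, sizes, and access patterns are our own choices, not from the text): it replays a sequential sweep, which has strong spatial locality, and uniformly random accesses, which have almost none, against a small LRU cache and compares the hit rates.

```python
from collections import OrderedDict
import random

class SimpleCache:
    """Tiny LRU cache model tracking which memory blocks are resident."""

    def __init__(self, num_lines, block_size):
        self.num_lines = num_lines
        self.block_size = block_size
        self.lines = OrderedDict()          # block number -> resident
        self.hits = self.accesses = 0

    def access(self, address):
        self.accesses += 1
        block = address // self.block_size  # spatial locality: one block per line
        if block in self.lines:
            self.hits += 1
            self.lines.move_to_end(block)   # mark as most recently used
        else:
            self.lines[block] = True
            if len(self.lines) > self.num_lines:
                self.lines.popitem(last=False)  # evict least recently used

    def hit_rate(self):
        return self.hits / self.accesses

# Sequential sweep over a small region: strong locality, high hit rate.
seq = SimpleCache(num_lines=64, block_size=16)
for addr in range(4096):
    seq.access(addr)

# Uniformly random accesses over a large region: poor locality.
random.seed(0)
rnd = SimpleCache(num_lines=64, block_size=16)
for _ in range(4096):
    rnd.access(random.randrange(1 << 20))

print(f"sequential hit rate: {seq.hit_rate():.2%}")  # ~94%: only first touch of each block misses
print(f"random hit rate:     {rnd.hit_rate():.2%}")  # near zero
```

The gap between the two hit rates is exactly why a small, fast layer near the CPU pays off: most real workloads behave far more like the sequential sweep than the random pattern.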

URL: https://www.sciencedirect.com/science/article/pii/B9780128016305000037

Domain 6

Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Second Edition), 2012

RAM and ROM

RAM is volatile memory used to hold instructions and data of currently running programs. It loses integrity after loss of power. RAM memory modules are installed into slots on the computer motherboard. Read-only memory (ROM) is nonvolatile: Data stored in ROM maintains integrity after loss of power. The basic input/output system (BIOS) firmware is stored in ROM. While ROM is “read only,” some types of ROM may be written to via flashing, as we will see shortly in the Flash Memory section.

Note

The volatility of RAM is a subject of ongoing research. Historically, it was believed that DRAM lost integrity after loss of power. The “cold boot” attack has shown that RAM has remanence; that is, it may maintain integrity seconds or even minutes after power loss. This has security ramifications, as encryption keys usually exist in plaintext in RAM; they may be recovered by “cold booting” a computer off a small OS installed on DVD or USB key and then quickly dumping the contents of memory. A video on the implications of cold boot, Lest We Remember: Cold Boot Attacks on Encryption Keys, is available at http://citp.princeton.edu/memory/. Remember that the exam sometimes simplifies complex matters. For the exam, simply remember that RAM is volatile (though not as volatile as we once believed).

URL: https://www.sciencedirect.com/science/article/pii/B9781597499613000078

Data Hiding Forensics

Nihad Ahmad Hassan, Rami Hijazi, in Data Hiding Techniques in Windows OS, 2017

Windows Forensics

Capture Volatile Memory

DumpIt

Belkasoft

FTK® Imager

Capture Disk Drive

Using FTK® Imager to Acquire Disk Drive

Deleted Files Recovery

Acquiring Disk Drive Images Using ProDiscover Basic

Analyzing the Digital Evidence for Deleted Files and Other Artifacts

Windows Registry Analysis

Windows Registry Startup Location

Checking Installed Programs

Connected USB Devices

Most Recently Used List

UserAssist Forensics

Internet Programs Investigation

Forensic Analysis of Windows Prefetch Files

Windows Minidump Files Forensics

Windows Thumbnail Forensics

File Signature Analysis

File Attributes Analysis

Discover Hidden Partitions

Detect Alternative Data Streams

Investigating Windows Volume Shadow Copy

Virtual Memory Analysis

Windows Password Cracking

Password Hashes Extraction

Ophcrack

Offline Windows Password and Registry Editor: Bootdisk/CD

Trinity Rescue Kit

Host Protected Area and Device Configuration Overlay Forensics

Examining Encrypted Files

TCHunt

Cracking TrueCrypt Encrypted Volume Passwords

Password Cracking Techniques for Encrypted Files

URL: https://www.sciencedirect.com/science/article/pii/B9780128044490000063

Case Processing

David Watson, Andrew Jones, in Digital Forensics Processing and Procedures, 2013

Appendix 25 Some Evidence Found in Volatile Memory

The evidence recovered from volatile memory acquisition will vary depending on the device being acquired, but may include, among other things:

available physical memory;

BIOS information;

clipboard information;

command history;

cron jobs;

current system uptime;

driver information;

hot fixes installed;

installed applications;

interface configurations;

listening ports;

local users;

logged on users;

malicious code that is run from memory rather than disk;

network cards;

network information;

network passwords;

network status;

open DLL files;

open files and registry handles;

open files;

open network connections;

operating system and version;

pagefile location;

passwords and crypto keys;

plaintext versions of encrypted material;

process memory;

process to port mapping;

processes running;

registered organization;

registered owner;

remote users;

routing information;

service information;

shares;

system installation date;

system time;

the memory map;

the VAD tree;

time zone;

total amount of physical memory;

unsaved files;

user IDs and passwords.
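A few of the items in the list above can be sampled even from the Python standard library. The sketch below is illustrative only (the function name and selection of artifacts are our own); a real acquisition tool pulls process lists, open handles, network connections, and the rest of the list through OS-specific APIs.

```python
import datetime
import getpass
import platform
import socket

def snapshot_volatile_basics():
    """Collect a handful of the volatile artifacts listed above.

    Only items reachable via the standard library are shown; this is a
    sketch, not an acquisition tool.
    """
    try:
        user = getpass.getuser()
    except Exception:
        user = "unknown"                     # no login name in this environment
    return {
        "operating system and version": platform.platform(),
        "system time": datetime.datetime.now().isoformat(timespec="seconds"),
        "time zone": str(datetime.datetime.now().astimezone().tzinfo),
        "logged on user": user,
        "hostname": socket.gethostname(),
    }

for item, value in snapshot_volatile_basics().items():
    print(f"{item}: {value}")
```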

URL: https://www.sciencedirect.com/science/article/pii/B9781597497428000091

Collecting evidence

John Sammons, in The Basics of Digital Forensics (Second Edition), 2015

Alert!

Evidence in RAM

A computer’s volatile memory (RAM) can contain some very valuable evidence, including running processes, executed console commands, passwords in clear text, unencrypted data, instant messages, Internet protocol addresses, and Trojan horse(s) (Shipley and Reeve, 2006).

Conducting and documenting a live collection

Now comes the tricky part. It’s time to get focused. Once you start, you should work uninterruptedly until the process is complete. To do otherwise only invites mistakes. Before getting underway, gather everything you will need: report forms, pens, memory capture tools, and so on. Every interaction with the computer will need to be noted. You could use an action/response approach (“I did this … The computer did that.”).
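The action/response note-taking described above can be sketched as a small helper that timestamps each pair of entries. Everything here (class name, examiner name, sample entries) is hypothetical, meant only to show the shape of contemporaneous notes.

```python
import datetime

class ExaminationLog:
    """Contemporaneous action/response notes, one timestamped pair per entry."""

    def __init__(self, examiner):
        self.examiner = examiner
        self.entries = []

    def record(self, action, response):
        self.entries.append({
            "time": datetime.datetime.now().isoformat(timespec="seconds"),
            "action": action,
            "response": response,
        })

    def report(self):
        lines = [f"Examiner: {self.examiner}"]
        for e in self.entries:
            lines.append(f"[{e['time']}] I did: {e['action']}")
            lines.append(f"[{e['time']}] The computer did: {e['response']}")
        return "\n".join(lines)

log = ExaminationLog("J. Smith")
log.record("Moved the mouse slightly", "Desktop became visible")
log.record("Opened Task Manager", "Process list displayed")
print(log.report())
```

In practice these notes are usually handwritten or typed on a separate machine; the point is simply that every interaction produces a dated action/response pair.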

If the desktop isn’t visible, you can move the mouse slightly to wake it up. If that fails to bring up the desktop, pressing a single key should solve the problem. You should, of course, document which key was depressed in your notes.

Now that you can see the desktop, the first thing to note is the date and time as it appears on the computer. Next, record the visible icons and running applications. You don’t want to stop there. Documenting the running processes could help identify any malware that is in residence on the computer. The running processes can be documented by accessing the task manager. Why would that matter? One of the more popular defenses, especially in child pornography cases, is to claim that the contraband images were deposited by an unknown third party by way of a Trojan horse.

Now it’s time to use a validated memory capture tool to collect that volatile evidence in the RAM. After this step is complete, the process ends with proper shutdown. The proper shutdown allows any running application a chance to write any artifacts to the disk, allowing us to recover them later.

URL: https://www.sciencedirect.com/science/article/pii/B9780128016350000048

Performance issues and design choices in delay-tolerant network (DTN) algorithms and protocols☆

J. Morgenroth, ... L. Wolf, in Advances in Delay-Tolerant Networks (DTNs) (Second Edition), 2021

13.4 The curse of copying—I/O performance matters

Traditional Internet Protocol (IP) stacks adopt the notion of “streaming,” in which a limited amount of data may be buffered but most of the data is sent out right away. If the outgoing link is currently unavailable or overloaded, packets are discarded. End-to-end data loss is usually prevented by end-to-end retransmissions of higher-level protocols. In a DTN the outgoing link may be unavailable over an extended period of time and data has to be stored on nodes. The DTN architecture (Cerf et al., 2007) further requires such storage to be persistent so that stored data survives system restarts.

Commercial Ethernet switches employ the "store-and-forward" paradigm in which frames are received, buffered (usually in RAM), and subsequently forwarded. While this allows switches to do error checking, it also requires enough temporary storage for at least a single frame. While Ethernet frames are limited in size, ADUs (which are transformed into PDUs by the DTN Engine) in a DTN are "possibly long" (Cerf et al., 2007) and normally not limited in size. When talking about the BP, PDUs have, in fact, a limited size of 1.8 × 10^19 bytes (because a self-delimiting numeric value (SDNV) can only hold 2^64 − 1 values).
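The size limit quoted above follows directly from the SDNV encoding, which caps the length field at 64 bits; a quick check of the arithmetic:

```python
# An SDNV length field holds an unsigned integer of at most 64 bits, so the
# largest PDU length it can express is 2**64 - 1 bytes.
max_pdu_bytes = 2**64 - 1
print(max_pdu_bytes)           # 18446744073709551615
print(f"{max_pdu_bytes:.1e}")  # 1.8e+19, i.e. roughly 1.8 x 10^19 bytes
```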

So, a DTN Engine has to persistently store PDUs of significant size. While the performance of IP stacks is usually limited by processing capability, DTN Engines will likely be limited by storage bandwidth. Since PDUs have to be stored and then retrieved during forwarding, the attainable throughput cannot exceed 50% of the storage bandwidth. Since persistent storage usually involves a hard disk drive (HDD) or flash memory, its bandwidth is significantly lower than that of the RAM used in switches. This makes it clear that DTNs are not a good match for streaming applications (many small ADUs), because the overhead per ADU is comparably high. Furthermore, supporting ADUs of arbitrary size causes certain handling problems, which will be discussed in this section.

13.4.1 Problem statement

On conventional DTN nodes, volatile memory is usually in the form of RAM and persistent memory in the form of flash memory or a hard disk. While RAM cannot be used as persistent storage and is also more expensive than flash or hard disk, it offers significant performance benefits. In Fig. 13.3, we show a network throughput measurement of the DTN2 reference implementation with PDUs stored in RAM or on HDD. The attained throughput when using RAM is between 2.7 and 19.1 times faster compared with storing bundles on HDD. This clearly shows that storing or buffering ADUs in RAM can offer significant performance benefits. However, since RAM is volatile and will be lost on node restarts, not all PDUs can be stored in it. Those requiring special reliability (custody) have to be stored persistently before custody is accepted.

Fig. 13.3. DTN2 network throughput (Pöttner et al., 2011a) (log y-axis).

Even when the performance of the storage back end is sufficient to support high throughput, copying data can also drastically impact performance. In Fig. 13.4 we show a traditional DTN Engine in which data arrives at a CL and is immediately handed over to a storage module. This storage is likely not in RAM, because the PDU has to be stored persistently and can be of arbitrary size, easily exceeding the available RAM. When the routing module then takes care of the PDU, the data is copied to the next storage module. When the PDU is forwarded, the data is again copied from the storage module to the CL (and the respective storage module) to allow sending the PDU. Even when keeping PDUs in RAM, copying is expensive and impacts performance.

Fig. 13.4. DTN Engine PDU handling with (slow) copying of blocks.

When keeping PDUs in persistent memory, copying has to be avoided as much as possible because the performance impact is even more significant.

13.4.2 Design advice: Central block storage mechanism

To allow the DTN Engine to achieve high performance, copying of block data has to be avoided as much as possible. The ideal case is shown in Fig. 13.5, in which a central storage component takes care of the PDU. The PDU enters the DTN Engine on the left side and is directly stored in the central component. Subsequently, references to the PDU are passed along until the PDU is forwarded to the next hop. In Fig. 13.6, we show a measurement with IBR-DTN, with and without a central block storage module. The performance increase of central storage that avoids copying is between 5.6% and 80.4%, depending on the size of the ADU.
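The reference-passing idea can be sketched as follows. This is a toy model under our own assumptions (the class, the handle scheme, and the routing/forwarding stubs are hypothetical, not IBR-DTN's API): the payload is stored once in a central component, and only a cheap reference travels between modules.

```python
import hashlib

class CentralBlockStorage:
    """Single owner of PDU payloads; every other module holds only a reference."""

    def __init__(self):
        self._blocks = {}

    def store(self, payload: bytes) -> str:
        ref = hashlib.sha256(payload).hexdigest()[:16]  # opaque handle
        self._blocks[ref] = payload
        return ref

    def read(self, ref: str) -> bytes:
        return self._blocks[ref]

storage = CentralBlockStorage()

# The convergence layer receives a PDU and stores its payload exactly once.
ref = storage.store(b"bundle payload")

def route(ref):                 # hypothetical routing step
    return ref                  # picks a next hop; payload is never touched

def forward(ref):               # hypothetical forwarding step
    return storage.read(ref)    # payload is read once, at send time

print(forward(route(ref)) == b"bundle payload")  # True
```

The payload crosses a module boundary only twice (store on receive, read on send), instead of being copied at every hand-off.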

Fig. 13.5. DTN Engine PDU handling with central storage.

Fig. 13.6. IBR-DTN throughput with and without block copying (Pöttner et al., 2011a).

Another issue with performance is the application programming interface (API). Sending and receiving applications have to be able to create and retrieve ADUs as fast as possible. In most implementations, copying the data at this point cannot be avoided. However, in an implementation that is ideal from the performance perspective, this copying would also be avoided by letting the application directly access the central storage component.

13.4.3 Design advice: Hybrid storage

As argued earlier, fast storage such as RAM is usually expensive and volatile. Persistent storage such as an HDD is slow and cheap, while solid-state drives (SSDs) sit in between: faster than HDDs but also more expensive. It is characteristic of DTNs that traffic patterns are bursty. During a contact, data has to be transferred as fast as possible because, especially for short contacts, time is precious. When no other node is in range, I/O performance is of minor importance. A hybrid storage approach that combines the benefits of fast-but-expensive and slow-but-cheap storage is a good match for this kind of traffic pattern.

Fig. 13.7 shows the concept, in which two layers of storage are combined (Patterson and Hennessy, 2005). On write accesses, data is first written to volatile memory. Custody PDUs need to be written directly to persistent storage before custody is accepted (write-through). Conventional PDUs, however, may be forwarded while residing in volatile memory; these can be written to persistent storage whenever there is time (write-back). For write accesses, hybrid storage allows a certain amount of data to be stored at the native speed of the volatile storage. When the volatile buffer is exceeded, storage performance drops to that of the persistent storage. This pattern is a good match for the bursty traffic of typical DTNs.
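The write-through versus write-back distinction can be sketched in a few lines. This is an illustrative model under our own naming (not a real DTN Engine storage module): a restart before the write-back flush loses ordinary PDUs, while custody PDUs survive because they were written through.

```python
class HybridStorage:
    """Volatile front buffer plus persistent back store (illustrative model)."""

    def __init__(self):
        self.volatile = {}      # fast buffer, lost on restart
        self.persistent = {}    # slow but durable

    def write(self, pdu_id, data, custody=False):
        self.volatile[pdu_id] = data
        if custody:
            # Write-through: custody PDUs reach persistent storage
            # before custody is accepted.
            self.persistent[pdu_id] = data

    def flush(self):
        # Write-back: during intercontact time, drain the volatile buffer.
        self.persistent.update(self.volatile)

    def restart(self):
        # A node restart loses everything volatile.
        self.volatile = {}

store = HybridStorage()
store.write("pdu-1", b"custody bundle", custody=True)
store.write("pdu-2", b"ordinary bundle")
store.restart()                       # crash before the write-back flush
print("pdu-1" in store.persistent)    # True  (survived: write-through)
print("pdu-2" in store.persistent)    # False (lost: write-back still pending)
```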

Fig. 13.7. Hybrid storage architecture.

For read accesses, it is desirable to use the performance of the volatile memory. However, the DTN Engine or the storage component would have to preload PDUs into volatile memory. In a network with predicted or scheduled contacts (see Section 13.6), this is quite feasible: since the DTN Engine knows which neighbor is going to show up next, ADUs for that neighbor can be preloaded and transferred at the bandwidth of the volatile storage. The prediction of opportunistic contacts, however, is outside the scope of this chapter. In any case, ADUs that have not been preloaded into the volatile buffer have to be read out of persistent memory. Fortunately, flash as well as HDDs have the property that read access is faster (in terms of data rate) than write access. Therefore, preloading ADUs produces a smaller performance advantage than buffering write accesses.

The volatile buffer of the hybrid storage should be able to handle all data that is transferred during one contact. This ensures that the data transfer can happen at maximum speed. For networks with a maximum contact duration t_contact,max and a networking link with a data rate r, the amount of volatile buffer that is necessary can be calculated as t_contact,max × r. Furthermore, the intercontact time should be long enough to flush the volatile buffer into persistent storage.
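As a worked example of this buffer-sizing rule, with hypothetical round numbers (a 120 s maximum contact over a 54 Mbit/s link; neither figure comes from the text):

```python
def volatile_buffer_bytes(contact_seconds, rate_bits_per_s):
    """Volatile buffer needed to absorb one maximum-length contact:
    t_contact,max * r, converted from bits to bytes."""
    return contact_seconds * rate_bits_per_s // 8

# Hypothetical link: 120 s maximum contact at 54 Mbit/s.
need = volatile_buffer_bytes(120, 54_000_000)
print(need)   # 810000000 bytes, i.e. ~810 MB of volatile buffer
```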

URL: https://www.sciencedirect.com/science/article/pii/B9780081027936000138

Digital Building Blocks

Sarah L. Harris, David Harris, in Digital Design and Computer Architecture, 2022

5.5.4 Area and Delay

Flip-flops, SRAMs, and DRAMs are all volatile memories, but each has different area and delay characteristics. Table 5.6 shows a comparison of these three types of volatile memory. The data bit stored in a flip-flop is available immediately at its output. But flip-flops take at least 20 transistors to build. Generally, the more transistors a device has, the more area, power, and cost it requires. DRAM latency is longer than that of SRAM because its bitline is not actively driven by a transistor. DRAM must wait for charge to move (relatively) slowly from the capacitor to the bitline. DRAM also fundamentally has lower throughput than SRAM, because it must refresh data periodically and after a read. DRAM technologies such as synchronous DRAM (SDRAM) and double data rate (DDR) SDRAM have been developed to overcome this problem. SDRAM uses a clock to pipeline memory accesses. DDR SDRAM, sometimes called simply DDR, uses both the rising and falling edges of the clock to access data, thus doubling the throughput for a given clock speed. DDR was first standardized in 2000 and ran at 100 to 200 MHz. Later standards, DDR2, DDR3, and DDR4, increased the clock speeds, with speeds in 2021 being over 3 GHz.
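The throughput doubling that DDR provides can be checked with a quick calculation. The clock speed and bus width below are our own illustrative choices (a first-generation DDR module with the common 64-bit data bus):

```python
def ddr_transfer_rate(clock_hz, bus_bytes=8):
    """Peak DDR transfer rate: two transfers per clock cycle,
    bus_bytes transferred per access (64-bit bus = 8 bytes)."""
    return 2 * clock_hz * bus_bytes

# First-generation DDR at a 100 MHz clock:
print(ddr_transfer_rate(100_000_000))  # 1600000000 bytes/s, i.e. 1.6 GB/s
```

Doubling the same calculation's clock term is exactly what the later DDR2/DDR3/DDR4 generations did, alongside internal prefetch improvements.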

Table 5.6. Memory comparison

Memory Type    Transistors per Bit Cell    Latency
Flip-flop      ~20                         Fast
SRAM           6                           Medium
DRAM           1                           Slow

Memory latency and throughput also depend on memory size; larger memories tend to be slower than smaller ones if all else is the same. The best memory type for a particular design depends on the speed, cost, and power constraints.

Is flash memory volatile or nonvolatile?

Flash memory is nonvolatile, chip-based storage, often used in mobile phones, cameras, and MP3 players. Sometimes called flash RAM, it is slower than conventional RAM but holds its charge even when the power goes out.

What are examples of non-volatile storage?

Three common examples of NVS devices that persistently store data are tape drives, HDDs, and SSDs. The term non-volatile storage also applies to the semiconductor chips that store data or controller program code within devices such as SSDs, HDDs, tape drives, and memory modules.

What is non-volatile memory?

Non-volatile memory (NVM) or non-volatile storage is a type of computer memory that can retain stored information even after power is removed. In contrast, volatile memory needs constant power in order to retain data.

What is the most common type of volatile memory?

RAM, specifically dynamic RAM (DRAM), is the most common type of volatile memory; it loses its contents when you turn off the power to the computer.