Amu_Ke_Fundye
Memory Hierarchy
- Additional storage beyond the main memory capacity enhances the performance of general-purpose computers and makes them more efficient.
- Only those programs and data which are currently needed by the processor reside in main memory. Information is transferred from auxiliary memory to main memory when needed.
Cache Memory
- A small, fast storage memory used to improve average access time. In other words, the cache is a very high-speed memory that increases the speed of processing by making current programs and data available to the CPU at a rapid rate.
- The cache is used for storing segments of programs currently being executed in the CPU and temporary data frequently needed in the present calculations.
Cache Performance
When the processor needs to read or write to a location in main memory, it first checks whether a copy of that data is in the cache. If so, the processor immediately reads from or writes to the cache.
Cache Hit: The processor finds the required word in the cache and immediately reads or writes the data in the cache line.
Cache Miss: The processor does not find the required word in the cache, so a cache miss occurs.
Hit Ratio: Percentage of memory accesses satisfied by the cache.
Miss ratio = 1 − Hit ratio
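Under some assumed example numbers (950 hits out of 1000 accesses, a 2 ns hit time, and a 60 ns miss penalty; none of these figures come from the source), the two ratios above can be sketched as:

```python
def hit_ratio(hits: int, total_accesses: int) -> float:
    """Fraction of memory accesses satisfied by the cache."""
    return hits / total_accesses

# Assumed example: 950 of 1000 accesses hit in the cache.
h = hit_ratio(950, 1000)     # 0.95
miss = 1 - h                 # Miss ratio = 1 - Hit ratio

# Average access time, assuming a 2 ns cache hit and a 60 ns miss penalty:
# hit_time + miss_ratio * miss_penalty ≈ 2 + 0.05 * 60 = 5 ns.
avg_time = 2 + miss * 60
print(h, miss, avg_time)
```

The last line shows why even a small miss ratio matters: the miss penalty is large compared to the hit time, so it dominates the average.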
- Temporal Locality: The word referenced now is likely to be referenced again soon. Hence it is wise to keep the currently accessed word handy for a while.
- Spatial Locality: Words near the currently referenced word are likely to be referenced soon. Hence it is wise to prefetch words near the currently referenced word and keep them handy for a while.
- Write-through: Writes the data to main memory as well as to the cache.
- Write-back: Does not write to main memory immediately; the write is performed later, when the cache block is evicted.
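The two write policies above can be contrasted with a minimal sketch. The single-address "cache", the counters, and the five-write example are assumptions for illustration, not a real cache design:

```python
class WriteThroughCache:
    def __init__(self):
        self.cache = {}
        self.memory_writes = 0

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory_writes += 1      # every write also goes to main memory


class WriteBackCache:
    def __init__(self):
        self.cache = {}
        self.dirty = set()
        self.memory_writes = 0

    def write(self, addr, value):
        self.cache[addr] = value
        self.dirty.add(addr)         # defer the memory write

    def evict(self, addr):
        if addr in self.dirty:       # write to memory only on eviction
            self.memory_writes += 1
            self.dirty.discard(addr)
        self.cache.pop(addr, None)


wt, wb = WriteThroughCache(), WriteBackCache()
for v in range(5):                   # five writes to the same address
    wt.write(0x10, v)
    wb.write(0x10, v)
wb.evict(0x10)
print(wt.memory_writes, wb.memory_writes)  # write-through: 5, write-back: 1
```

Repeated writes to the same block cost write-through one memory write each, while write-back pays only once, at eviction time.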
Main Memory
- The main memory refers to the physical memory; it is the central storage unit in a computer system.
- The main memory is relatively large and fast memory used to store programs and data during the computer operation.
- The main memory in a general-purpose computer is made up of RAM integrated circuits.
Latency: The latency is the time taken to transfer a block of data either from main memory or caches.
- As the CPU executes instructions, both the instructions themselves and the data they operate on must be brought into the registers. Until the instruction or data is available, the CPU cannot proceed and must wait. The latency is thus the time the CPU waits to obtain the data.
- The latency of the main memory directly influences the efficiency of the CPU.
Auxiliary-Memory: The common auxiliary memory devices used in computer systems are magnetic disks and tapes.
Magnetic Disks
- A magnetic disk is a circular plate constructed of metal or plastic coated with magnetised material.
- Often, both sides of the disk are used and several disks may be stacked on one spindle with read/write heads available on each surface.
- All disks rotate together at high speed. Bits are stored in the magnetised surface in spots along concentric circles called tracks. The tracks are commonly divided into sections called sectors.
Magnetic Tapes
- A magnetic tape is a medium of magnetic recording, made of a thin magnetisable coating on a long, narrow strip of plastic film.
- Bits are recorded as magnetic spots on the tape along several tracks. The tape can be stopped, started, moved forward, or moved in reverse. Read/write heads are mounted one per track, so that data can be recorded and read as a sequence of characters.
Mapping of Cache Memory
The transformation of data from main memory to cache memory is referred to as a mapping process. Three types of mapping procedure are considered:
- Associative mapping
- Direct mapping
- Set associative mapping
Associative Mapping
- A memory block can be placed in any cache block.
- This organization permits any word from main memory to be stored in any cache location.
- It can be implemented using a comparator for each tag and a multiplexer across all the blocks to select the one that matches.
- The requested address is compared in a directory against all entries in the directory. If the requested address is found (a directory hit), the corresponding location in the cache is fetched and returned to the processor; otherwise, a miss occurs.
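The directory lookup described above can be sketched as follows. In hardware all tag comparisons happen in parallel; here a simple scan models them, and the directory contents are made-up example values:

```python
def associative_lookup(directory, address):
    """directory: list of (tag, data) pairs; any block can hold any address."""
    for tag, data in directory:
        if tag == address:           # one comparator per directory entry
            return data              # directory hit
    return None                      # miss: address not in any entry

directory = [(0x1A2, 'P'), (0x7FF, 'Q'), (0x003, 'R')]
print(associative_lookup(directory, 0x7FF))  # → Q (hit)
print(associative_lookup(directory, 0x123))  # → None (miss)
```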
Direct Mapping
- The location of the memory block in the cache (i.e. the block number in the cache) is the memory block number modulo the number of blocks in the cache.
- The CPU address is divided into two fields (Tag and Index)
- Lower order line address bits are used to access the directory. Since multiple line addresses map into the same location in the cache directory, the upper line address bits (tag bits) must be compared with the directory address to ensure a hit. If a comparison is not valid, the result is a cache miss
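A minimal sketch of the modulo placement and the tag comparison described above, assuming a 32-block cache (the size and the example block numbers are assumptions):

```python
NUM_BLOCKS = 32                        # assumed cache size in blocks

def direct_map(memory_block):
    index = memory_block % NUM_BLOCKS  # lower-order bits select the cache line
    tag = memory_block // NUM_BLOCKS   # upper bits are stored for comparison
    return index, tag

cache_tags = {}                        # index -> tag currently resident

def access(memory_block):
    index, tag = direct_map(memory_block)
    hit = cache_tags.get(index) == tag  # tag must match for a hit
    cache_tags[index] = tag             # load the block on a miss
    return hit

print(access(100))   # miss: cache is empty
print(access(100))   # hit: same block, tag matches
print(access(132))   # miss: 132 and 100 share index 4, but tags differ
```

The last access shows the direct-mapping conflict: blocks 100 and 132 map to the same line (both are 4 mod 32), so they evict each other even if the rest of the cache is empty.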
Set-Associative Mapping
A set-associative cache scheme is a combination of fully associative and direct mapped schemes.
Set-associative mapping is an improvement over direct mapping in that each index address in the cache can store two or more words of memory. Each data word is stored together with its tag, and the number of tag-data items in one word of cache is said to form a set.
- Set# = (memory block#) mod (#sets)
- Tag = (memory block#) / (#sets)
The cache maps each requested address directly to a set (akin to how a direct-mapped cache maps an address to a line), and then it treats the set as a fully associative cache.
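The two formulas above can be sketched under assumed parameters, here a 2-way set-associative cache with 8 sets:

```python
NUM_SETS = 8                              # assumed number of sets
WAYS = 2                                  # assumed associativity (2-way)

def set_and_tag(memory_block):
    set_index = memory_block % NUM_SETS   # Set# = block# mod #sets
    tag = memory_block // NUM_SETS        # Tag  = block# / #sets
    return set_index, tag

# Blocks 3, 11, and 19 all map to set 3 and compete for its 2 ways.
for block in (3, 11, 19):
    print(block, set_and_tag(block))
```

All three blocks land in the same set but carry different tags (0, 1, and 2), so any two of them can be resident at once in a 2-way cache; a direct-mapped cache would hold only one.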
Key Points
- The number of bits in the index field is equal to the number of address bits required to access the cache memory.
- In general, if there are 2^k words in cache memory and 2^n words in main memory, then the n-bit memory address is divided into two fields: k bits for the index field and n−k bits for the tag field.
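A worked example of this split, assuming a 512-word cache (2^9) and a 64K-word main memory (2^16); the concrete address is an arbitrary illustration:

```python
k = 9                    # 2**9  = 512 cache words   -> index field is 9 bits
n = 16                   # 2**16 = 65536 memory words -> address is 16 bits
tag_bits = n - k         # remaining upper bits form the tag

address = 0b1010101_110011001          # an arbitrary 16-bit address
index = address & ((1 << k) - 1)       # low k bits select the cache word
tag = address >> k                     # high n-k bits are stored as the tag

print(tag_bits)                        # → 7
print(bin(tag), bin(index))            # → 0b1010101 0b110011001
```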
Regards
Amrut Jagdish Gupta