All About Computer Caches

A computer is a machine in which we measure time in very small increments. When the microprocessor accesses main memory (RAM), it does so in about 60 nanoseconds (60 billionths of a second). That's pretty fast, but it is much slower than the typical microprocessor. Microprocessors can have cycle times as short as 2 nanoseconds, so to a microprocessor 60 nanoseconds seems like an eternity: one memory access costs roughly 60 / 2 = 30 cycles of waiting.

What if we build a special memory bank on the motherboard -- small, but very fast (around 30 nanoseconds)? That's already twice as fast as a main memory access. It's called a level 2 cache, or L2 cache. What if we build an even smaller but faster memory system directly into the microprocessor's chip? That way, the memory is accessed at the speed of the microprocessor rather than the speed of the memory bus. That's an L1 cache, which on a 233-megahertz (MHz) Pentium is 3.5 times faster than the L2 cache, which in turn is twice as fast as access to main memory.
Some microprocessors have two levels of cache built right into the chip. In this case, the motherboard cache -- the cache that exists between the microprocessor and main system memory -- becomes level 3, or L3 cache.
There are a lot of subsystems in a computer, and you can put cache between many of them to improve performance. Here's an example. We have the microprocessor (the fastest thing in the computer). Then there's the L1 cache that caches the L2 cache that caches the main memory, which can be used (and is often used) as a cache for even slower peripherals like hard disks and CD-ROMs. The hard disks, in turn, are used to cache an even slower medium -- your Internet connection.
  • L1 cache - accessed at full microprocessor speed (about 10 nanoseconds; 4 to 16 kilobytes in size)
  • L2 cache - SRAM (around 20 to 30 nanoseconds; 128 to 512 kilobytes in size)
  • Main memory - DRAM (around 60 nanoseconds; 32 to 128 megabytes in size)
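The practical consequence of this hierarchy is that the same amount of work can run several times faster or slower depending on whether the data is already in a cache. Here is a minimal C sketch of that effect (the array size and timing approach are illustrative assumptions, not tied to any particular machine): it sums the same array twice, once row by row and once column by column. The row-major pass reuses every byte of each cache line it fetches; the column-major pass lands in a different cache line on almost every access.

#include <stdio.h>
#include <time.h>

#define N 4096   /* 4096 x 4096 ints = 64 MB, far larger than any cache level */

static int grid[N][N];

int main(void) {
    long sum = 0;
    clock_t t0;

    /* Row-major pass: consecutive addresses, so every byte of each
       cache line fetched from main memory gets used -- few misses. */
    t0 = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += grid[i][j];
    printf("row-major:    %.2f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    /* Column-major pass: each read is N*4 bytes past the previous one,
       so nearly every access misses and goes out to main memory. */
    t0 = clock();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += grid[i][j];
    printf("column-major: %.2f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    printf("checksum: %ld\n", sum);   /* keeps the loops from being optimized away */
    return 0;
}

Compiled without heavy optimization, the column-major pass is typically several times slower, even though both loops perform exactly the same additions.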
Can someone explain, in an easy-to-understand way, the concept of a cache miss and its opposite, a cache hit?
Generally, a cache miss is when something is looked up in the cache and not found -- the cache did not contain the item. A cache hit is when the cache does contain the item being looked up and can therefore satisfy the query itself.
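To make that concrete, here is a toy C model of a direct-mapped cache -- an illustrative sketch, not the design of any real processor. Each address maps to exactly one line; a lookup is a hit only if that line already holds a matching tag, and otherwise it is a miss and the line gets (re)filled. The line count, line size, and address trace are all made-up values for the example.

#include <stdio.h>
#include <stdbool.h>

#define LINES      8    /* illustrative: 8 cache lines */
#define LINE_BYTES 16   /* illustrative: 16-byte lines */

struct line { bool valid; unsigned tag; };
static struct line cache[LINES];

/* Returns true on a cache hit, false on a miss (which fills the line). */
static bool lookup(unsigned addr) {
    unsigned index = (addr / LINE_BYTES) % LINES;  /* which line the address maps to */
    unsigned tag   = (addr / LINE_BYTES) / LINES;  /* identifies what is stored there */
    if (cache[index].valid && cache[index].tag == tag)
        return true;               /* hit: the item is already cached */
    cache[index].valid = true;     /* miss: fetch it, evicting whatever was there */
    cache[index].tag   = tag;
    return false;
}

int main(void) {
    unsigned trace[] = { 0, 4, 64, 0, 128, 0, 4 };  /* hypothetical address trace */
    int n = sizeof trace / sizeof trace[0];
    int hits = 0, misses = 0;
    for (int i = 0; i < n; i++) {
        if (lookup(trace[i]))
            hits++;
        else
            misses++;
    }
    printf("hits=%d misses=%d\n", hits, misses);    /* prints: hits=3 misses=4 */
    return 0;
}

Note how address 128 maps to the same line as address 0, so it evicts 0's data; the next lookup of 0 then misses again even though 0 had been cached earlier.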

Why would context switching cause a lot of cache misses?
In terms of memory, which is what the quoted text in your post refers to, each processor has a memory cache -- a high-speed copy of part of main memory. When a new thread is context-switched onto a processor, the local cache is either empty or holds data for the previous thread, not the data the new thread needs. This means that all (or most) memory lookups made by the new thread result in cache misses, because its data is not in the local cache. The hardware then has to make a series of requests to main memory to refill the cache, which makes the thread run slower at first.
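You can mimic that effect (though not the scheduler itself) with a short C sketch: warm the cache with one buffer, then sweep a much larger buffer -- standing in for another thread's working set -- and time how much slower the next pass over the first buffer has become. The buffer sizes below are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WORKING_SET (256 * 1024)        /* illustrative: fits in a 512 KB L2 */
#define EVICTOR     (8 * 1024 * 1024)   /* illustrative: far larger than the caches */

/* Reads every byte of buf; volatile forces a real memory access per byte. */
static long pass(volatile unsigned char *buf, long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += buf[i];
    return sum;
}

int main(void) {
    volatile unsigned char *a = calloc(WORKING_SET, 1);
    volatile unsigned char *b = calloc(EVICTOR, 1);
    clock_t t0;
    long sink = 0;

    sink += pass(a, WORKING_SET);       /* warm the caches with A's data */

    t0 = clock();
    sink += pass(a, WORKING_SET);       /* warm pass: mostly cache hits */
    printf("warm pass: %ld ticks\n", (long)(clock() - t0));

    sink += pass(b, EVICTOR);           /* the "context switch": B's data evicts A's */

    t0 = clock();
    sink += pass(a, WORKING_SET);       /* cold pass: A's data must be refetched */
    printf("cold pass: %ld ticks\n", (long)(clock() - t0));

    printf("checksum: %ld\n", sink);
    free((void *)a);
    free((void *)b);
    return 0;
}

On a modern machine clock() is coarse, so you may need to scale both buffers up (keeping the same ratio) before the warm/cold gap shows clearly.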

Reference: http://computer.howstuffworks.com/cache.htm