Cache Memory

There's a big problem that we have been carefully ignoring for quite a while now:

In our examples the clock cycle time is 200 ps, and both tick 1 and tick 4 access memory, which has a latency on the order of 15 ns.

That's right: we have 200 ps to perform an operation that takes about 75 times that long (15 ns is 15,000 ps, and 15,000 / 200 = 75). That's obviously a problem.

Our answer to this problem is cache memory, which is a small amount of really fast memory that keeps copies of the memory locations that are actually in use at any given time.

The simplest organization for a cache is called direct mapping: each memory address maps to exactly one location in the cache.
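
To make the address breakdown concrete, here is a minimal sketch of a direct-mapped lookup in C. The sizes (64 lines of 16 bytes, a 1 KiB cache) and the names cache_lookup and cache_line are made up for illustration, not taken from any particular design; the point is that the low bits of an address select the byte within a line, the middle bits select the line, and the remaining high bits form the tag that is checked on every access.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical parameters: a 1 KiB direct-mapped cache
       with 64 lines of 16 bytes each. */
    #define LINE_SIZE    16     /* bytes per cache line          */
    #define NUM_LINES    64     /* number of lines in the cache  */
    #define OFFSET_BITS  4      /* log2(LINE_SIZE)               */
    #define INDEX_BITS   6      /* log2(NUM_LINES)               */

    struct cache_line {
        bool     valid;             /* does this line hold real data?    */
        uint32_t tag;               /* which memory block is stored here */
        uint8_t  data[LINE_SIZE];   /* copy of that block                */
    };

    static struct cache_line cache[NUM_LINES];

    /* Look up one byte. Returns true on a hit, false on a miss
       (a real cache would then fetch the block from main memory). */
    bool cache_lookup(uint32_t addr, uint8_t *out)
    {
        uint32_t offset = addr & (LINE_SIZE - 1);                   /* byte within the line */
        uint32_t index  = (addr >> OFFSET_BITS) & (NUM_LINES - 1);  /* which line           */
        uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);       /* identifies the block */

        struct cache_line *line = &cache[index];
        if (line->valid && line->tag == tag) {
            *out = line->data[offset];   /* hit: the data is already in the cache */
            return true;
        }
        return false;                    /* miss */
    }

Because each address can live in only one line, the lookup is a single index plus a single tag comparison, which is what makes direct mapping so simple and fast.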

Some More Terminology

How It Works