Re: Re: sometimes the algorithms demand it...
Originally posted by Rincewind42
Caches are designed to take advantage of linear access patterns, not effectively random ones that you get with linked lists and trees (especially when you are allowed to delete and insert at any location within the data structure).
More to the point, caches are designed for access patterns where repeated accesses are made to the same data.
They also optimize for short stretches of linear access - for example, 256 bits (32 bytes) is a common "cache line" size. When you get a cache "miss", the whole 32-byte line containing the missed item is moved from memory into the cache.
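To put a number on that, here's a minimal C++ sketch (my illustration, not from the original post - the array size and strides are assumptions, and the 32-byte line above is on the small side; many newer CPUs use 64 bytes). It sums a big array at stride 1 and at stride 8: the stride-8 loop does an eighth of the arithmetic, but with 4-byte ints it lands on a fresh 32-byte line every read, so on a memory-bound pass it typically runs nearly as slowly as the full sweep.

#include <chrono>
#include <cstdio>
#include <vector>

volatile long long sink;  // keeps the compiler from discarding the sums

int main() {
    const size_t N = 16 * 1024 * 1024;   // 64 MiB of ints: much bigger than any cache
    std::vector<int> data(N, 1);

    for (size_t stride : {size_t(1), size_t(8)}) {
        auto t0 = std::chrono::steady_clock::now();
        long long sum = 0;
        for (size_t i = 0; i < N; i += stride) sum += data[i];
        sink = sum;
        auto t1 = std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        // stride 1: eight consecutive ints share each 32-byte line fill
        // stride 8: every read misses to a fresh line, so 1/8 the work
        //           costs nearly the same time on a memory-bound pass
        std::printf("stride %zu: %lld us\n", stride, us);
    }
    return 0;
}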
If one is looking at other data nearby (spatial locality), then the large read is good. If one is truly bouncing around, then memory bandwidth is wasted (and extra latency is introduced) by these large transfers.
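Here's the same kind of sketch for the bouncing-around case (again mine, with assumed sizes): read the identical N elements once in sequential order and once in shuffled order. The shuffled pass still drags in a whole line per read but only uses 4 of its bytes, so it's typically several times slower - exact ratios vary by machine.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

volatile long long sink;  // defeats dead-code elimination

// Sum data[] in the order given by order[], returning elapsed milliseconds.
static long long sum_in_order_ms(const std::vector<int>& data,
                                 const std::vector<size_t>& order) {
    auto t0 = std::chrono::steady_clock::now();
    long long sum = 0;
    for (size_t idx : order) sum += data[idx];
    sink = sum;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
}

int main() {
    const size_t N = 16 * 1024 * 1024;
    std::vector<int> data(N, 1);
    std::vector<size_t> order(N);
    std::iota(order.begin(), order.end(), 0);     // 0, 1, 2, ... : sequential

    std::printf("sequential: %lld ms\n", sum_in_order_ms(data, order));

    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    // Same N reads, but each one pulls in a whole line for 4 useful bytes.
    std::printf("shuffled:   %lld ms\n", sum_in_order_ms(data, order));
    return 0;
}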
A bad case for a cache is a purely linear, single-pass access pattern - when you sweep once through memory that's much larger than the cache. In this case each level of cache just adds latency without helping (kind of like a layer of middle management that just has to approve and sign paperwork).
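A rough way to see that (another sketch of mine - the buffer sizes are assumptions and timings are noisy, so treat the output as suggestive): sum a small, cache-resident buffer twice and a much-larger-than-cache buffer twice. The second pass over the small buffer should be served from cache; the big sweep evicts its own data as it goes, so its second pass is usually no faster than its first.

#include <chrono>
#include <cstdio>
#include <vector>

volatile long long sink;  // keeps the summing loops alive under optimization

// Time one summing pass over a buffer, in microseconds.
static long long pass_us(const std::vector<int>& v) {
    auto t0 = std::chrono::steady_clock::now();
    long long sum = 0;
    for (int x : v) sum += x;
    sink = sum;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}

int main() {
    std::vector<int> small(32 * 1024, 1);        // 128 KiB: fits in cache
    std::vector<int> big(16 * 1024 * 1024, 1);   // 64 MiB: far larger than cache

    std::printf("small pass 1: %lld us\n", pass_us(small));
    std::printf("small pass 2: %lld us\n", pass_us(small));  // served from cache
    std::printf("big   pass 1: %lld us\n", pass_us(big));
    std::printf("big   pass 2: %lld us\n", pass_us(big));    // evicted itself: no faster
    return 0;
}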