Interestingly enough, the locality analysis that we used to predict the
caching behavior of affine references (see
Section ) cannot work for indirect references,
since there is no way to predict at compile time which data are being
accessed. At one extreme, all of the index values may be identical, and the
reference would behave as though it had temporal locality. At the other
extreme, each reference may point to a unique cache line, and the reference
would behave as though it had no locality. Since we are unable to
accurately predict data locality in this case, the two choices are to
prefetch all the time or not at all (i.e., there is no such thing as ``loop
splitting'' here). For these experiments, we decided to prefetch
indirect references all the time. To improve this decision-making process
further, profiling feedback or hardware miss counters may prove useful,
as we will discuss later in Section
.
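
For concreteness, the sketch below illustrates this policy on a simple
indirect loop. It is only an illustration, not the compiler's actual
output: the function name, the prefetch distance \verb|PF_DIST|, and the
use of the GCC/Clang \verb|__builtin_prefetch| intrinsic in place of a
compiler-inserted prefetch instruction are all assumptions. Because the
address of \verb|a[index[i]]| is unknown until \verb|index[i]| is loaded,
the prefetch is issued unconditionally on every iteration.
\begin{verbatim}
/* Illustrative sketch: prefetching an indirect reference a[index[i]]
 * on every iteration, since its locality cannot be predicted at
 * compile time.  PF_DIST is a hypothetical prefetch distance. */
#define PF_DIST 16

double sum_indirect(const double *a, const int *index, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        /* Unconditional prefetch of the data needed PF_DIST
         * iterations from now (read access, low temporal locality). */
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[index[i + PF_DIST]], 0, 1);
        sum += a[index[i]];
    }
    return sum;
}
\end{verbatim}
Note that the index array itself (\verb|index[i + PF_DIST]|) is an affine
reference, so it can still be handled by the locality analysis; only the
data array \verb|a| must be prefetched unconditionally.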