
Add decoupled frontend (+ L1 Instruction Cache)

danbone opened this issue 5 months ago • 1 comment

As the branch prediction API is nearing completion, I'd like to propose adding a decoupled frontend with an L1 instruction cache.

I did cover some of this with @arupc on Monday.

[Diagram: OlympiaDecoupledFrontend]

What I'd like to have is a separate BranchPredictionUnit (BPU) with its own pipeline that streams requests into the fetch unit. Each request essentially contains an address, the number of bytes to fetch, and the termination type (taken branch, return, none, etc.). Requests could be one per cacheline (i.e. if the predictor predicts across a cacheline, it'll separate the prediction into multiple requests). The fetch unit queues these requests, and flow control on this interface is managed with credits.
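For concreteness, here is a minimal sketch of what one such request could carry. All names and fields are illustrative, not an existing Olympia interface:

```cpp
// Hypothetical per-cacheline request streamed from the BPU to Fetch
#include <cstdint>

enum class TerminationType : uint8_t
{
    NONE,          // prediction ran to the end of the cacheline
    TAKEN_BRANCH,  // ends at a predicted-taken branch
    RETURN         // ends at a predicted return
};

struct FetchRequest
{
    uint64_t        addr        = 0;  // start address of the fetch
    uint32_t        size        = 0;  // bytes to fetch, never crossing a cacheline
    TerminationType termination = TerminationType::NONE;  // why the request ends
    uint64_t        pred_target = 0;  // predicted next fetch address
};
```

Keeping requests cacheline-bounded means a single ICache lookup can service each one.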

Once the fetch unit has a request queued, and enough credits downstream, it'll forward the address to the instruction cache.
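A minimal sketch of that credit gating, reusing the FetchRequest above (the queue and send mechanics here are placeholders, not the real Sparta ports):

```cpp
#include <cstdint>
#include <deque>

class Fetch
{
public:
    // Downstream (the ICache) hands credits back as its queue drains
    void receiveCredits(uint32_t credits) { icache_credits_ += credits; }

    // Queue a request arriving from the BPU
    void receiveRequest(const FetchRequest & req) { request_queue_.push_back(req); }

    // Called each cycle: forward queued addresses while credits remain
    void sendRequests()
    {
        while (!request_queue_.empty() && (icache_credits_ > 0)) {
            sendToICache(request_queue_.front().addr);
            request_queue_.pop_front();
            --icache_credits_;
        }
    }

private:
    void sendToICache(uint64_t addr);  // stub: drive the ICache request port
    std::deque<FetchRequest> request_queue_;
    uint32_t                 icache_credits_ = 0;
};
```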

Hit responses from the cache go back to the fetch unit, where they're broken into instructions and paired with the prediction metadata given in the original BPU request. Some BTB miss detection happens here; misses or misfetches redirect the BPU.
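In-order requests and responses would make the pairing straightforward; a rough sketch, with hypothetical helper names:

```cpp
#include <cstdint>
#include <deque>
#include <vector>

struct DecodedInst { uint64_t pc; bool is_branch; uint64_t target; };

// Stub: break raw cacheline bytes into instructions starting at addr
std::vector<DecodedInst> decodeLine(const std::vector<uint8_t> & line,
                                    uint64_t addr, uint32_t size);

void onICacheHit(std::deque<FetchRequest> & pending_predictions,
                 const std::vector<uint8_t> & line_data)
{
    // Responses arrive in request order, so the oldest pending
    // prediction belongs to this response
    const FetchRequest pred = pending_predictions.front();
    pending_predictions.pop_front();

    const auto insts = decodeLine(line_data, pred.addr, pred.size);

    // BTB miss / misfetch detection would go here: e.g. decode finds a
    // branch the predictor never saw, so Fetch sends a redirect (with
    // the corrected target) back up to the BPU
}
```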

I'm not completely sure how to handle mapping the instructions from the trace onto the data read, handling alignment, unexpected changes of flow, malformed traces, etc. I've tried to illustrate an idea of how it could be done, but I'm open to other suggestions.

[Diagram: OlympiaDecoupledFrontendTraceDriven]

Some further details on the ICache:

[Diagram: OlympiaDecoupledFrontendFetchCacheBlockDiagram]

What I propose is that we change the existing MemoryAccessInfo class to be generic enough so that it can be used as a memory transaction.
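As a rough sketch of the direction (the real class already carries other state, and these names are only illustrative):

```cpp
#include <cstdint>

// Illustrative subset of a generic memory transaction
enum class CacheState : uint8_t { NO_ACCESS, MISS, HIT };

struct MemoryAccessInfo
{
    uint64_t   addr        = 0;                      // address of the access
    uint32_t   size        = 0;                      // bytes requested
    CacheState cache_state = CacheState::NO_ACCESS;  // filled in by the cache lookup
    // ... existing fields (InstPtr, MMU state, etc.) remain for other users
};
```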

  • Fetch creates the MemoryAccessInfo request and sets the address and size. It's sent to the ICache, which has a simple pipeline.
  • The cache lookup is performed, the original request is updated with hit/miss info, and it's returned to Fetch.
  • Misses allocate an MSHR entry, which then propagates the request to the L2.
  • Once the L2 responds, the MemoryAccessInfo is sent back to Fetch through the same ICacheResp port, triggering Fetch to continue.

There are a few technical details that still need to be hashed out:

  • Handling misaligned requests (page/cacheline/block crossing, i.e. how to split a MemoryAccessInfo into multiple downstream requests; see the sketch after this list)
  • Handling a miss when the MSHR is full
  • Hits on an already-pending MSHR entry
  • Only filling the cache on demand requests
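For the first point, one option is to carve a request into line-aligned pieces before it enters the cache pipeline. A sketch, assuming a 64-byte line and the MemoryAccessInfo fields sketched earlier:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

static constexpr uint64_t LINE_BYTES = 64;  // assumed cacheline size

// Split one request into pieces that each stay within a single cacheline
std::vector<MemoryAccessInfo> splitByCacheline(const MemoryAccessInfo & req)
{
    std::vector<MemoryAccessInfo> pieces;
    uint64_t addr      = req.addr;
    uint64_t remaining = req.size;
    while (remaining > 0) {
        // Distance from addr to the end of its cacheline
        const uint64_t line_end = (addr & ~(LINE_BYTES - 1)) + LINE_BYTES;
        const uint64_t chunk    = std::min(remaining, line_end - addr);
        MemoryAccessInfo piece = req;  // keep the bookkeeping fields
        piece.addr = addr;
        piece.size = static_cast<uint32_t>(chunk);
        pieces.push_back(piece);
        addr      += chunk;
        remaining -= chunk;
    }
    return pieces;
}
```

Page-crossing splits could follow the same pattern using the page size, after translation.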

I've identified a few changes needed in the current codebase before the ICache work can start.

  • [ ] Add address/size to MemoryAccessInfo
  • [ ] Update L2Cache to use MemoryAccessInfo class instead of InstPtr

This might also help the prefetch work.

danbone • Jan 24 '24 17:01