L1 Cache and TLB Enhancements to the RAMpage Memory Hierarchy
Abstract. The RAMpage hierarchy moves main memory up a level to replace the lowest-level cache by an equivalent-sized SRAM main memory, with a TLB caching page translations for that main memory. This paper illustrates how more aggressive components higher
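To make the hierarchy the abstract describes concrete, the sketch below models the lookup path for a paged SRAM main memory fronted by a TLB, with DRAM acting as the paging device. It is a minimal illustration, not the simulator behind the paper's results; the direct-mapped TLB organisation, the sizes, and the helper functions are all assumptions.

```c
/* Minimal sketch of the RAMpage translation path: the lowest-level cache is
 * replaced by an SRAM main memory managed in pages, with a TLB caching its
 * translations and DRAM as the paging device.  All sizes, the direct-mapped
 * TLB, and the helper functions are illustrative assumptions. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define TLB_ENTRIES 64
#define PAGE_BITS   12                      /* assumed 4KB SRAM pages */
#define PAGE_MASK   ((1ull << PAGE_BITS) - 1)

typedef struct {
    bool     valid;
    uint64_t vpn;                           /* virtual page number       */
    uint64_t frame;                         /* frame in SRAM main memory */
} TlbEntry;

static TlbEntry tlb[TLB_ENTRIES];

/* Placeholder for the lookup in the page table held in SRAM (invented helper). */
static bool sram_page_table_lookup(uint64_t vpn, uint64_t *frame)
{
    (void)vpn; (void)frame;
    return false;                           /* pretend the page is not resident */
}

/* Placeholder for bringing a page in from DRAM, evicting an SRAM page if
 * needed; in RAMpage this slow path is where a context switch on miss can
 * hide the DRAM latency (invented helper, dummy frame number). */
static uint64_t fetch_page_from_dram(uint64_t vpn)
{
    return vpn % 1024;
}

uint64_t translate(uint64_t vaddr)
{
    uint64_t vpn = vaddr >> PAGE_BITS;
    TlbEntry *e  = &tlb[vpn % TLB_ENTRIES];

    if (!(e->valid && e->vpn == vpn)) {     /* TLB miss: consult SRAM page table */
        uint64_t frame;
        if (!sram_page_table_lookup(vpn, &frame))
            frame = fetch_page_from_dram(vpn);
        e->valid = true;  e->vpn = vpn;  e->frame = frame;
    }
    return (e->frame << PAGE_BITS) | (vaddr & PAGE_MASK);
}

int main(void)
{
    printf("virtual 0x%llx -> SRAM 0x%llx\n",
           0x1234567ull, (unsigned long long)translate(0x1234567ull));
    return 0;
}
```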
While cache size increases boost performance significantly, as CPU speed increases a large L1 cannot save a conventional hierarchy from the high penalty of waiting for DRAM. In fig. 3(d), it can be seen that RAMpage without context switches on misses only improves the situation marginally.

For RAMpage with context switches on misses, time waiting for DRAM remains negligible even as the CPU-DRAM speed gap increases by a factor of 8 (fig. 3(f)). The largest L1 (combined L1i and L1d size of 512KB) results in only about 10% of execution time being spent waiting for SRAM main memory, while DRAM wait time remains negligible. By contrast, the other hierarchies, while seeing a significant reduction in time waiting for L2 (or SRAM main memory), do not see a similar reduction in time waiting for DRAM as L1 size increases.
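The breakdown just described follows from a simple cost model: time spent at each level is roughly references × miss rate × latency of the level below. The numbers in the sketch below are invented for illustration and are not the measurements behind fig. 3; the point is only to show why a widening CPU-DRAM gap inflates DRAM wait time unless misses to DRAM are rare or overlapped with other work.

```c
/* Back-of-the-envelope model of where execution time goes as the CPU-DRAM
 * speed gap widens.  Miss rates and latencies are assumed for illustration,
 * not taken from the paper. */
#include <stdio.h>

int main(void)
{
    const double refs           = 1e9;     /* memory references issued     */
    const double cpu_cycles     = 1.5e9;   /* cycles of useful CPU work    */
    const double l1_miss_rate   = 0.02;    /* assumed L1 miss rate         */
    const double sram_miss_rate = 0.001;   /* assumed miss rate to DRAM    */
    const double sram_latency   = 20.0;    /* cycles to SRAM main memory   */

    /* Scale the DRAM penalty up by a factor of 8, as in the paper's
     * CPU-DRAM speed-gap experiment, and report the time breakdown. */
    for (double dram_latency = 200.0; dram_latency <= 1600.0; dram_latency *= 2) {
        double sram_wait = refs * l1_miss_rate * sram_latency;
        double dram_wait = refs * sram_miss_rate * dram_latency;
        double total     = cpu_cycles + sram_wait + dram_wait;
        printf("DRAM latency %6.0f cycles: %4.1f%% of time on SRAM wait, %4.1f%% on DRAM wait\n",
               dram_latency, 100.0 * sram_wait / total, 100.0 * dram_wait / total);
    }
    return 0;
}
```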
4.2 TLB Variations
All TLB variations are measured with the L1 parameters fixed at the original RAMpage measurements – 16KB each of instruction and data cache.
The TLB miss rate (fig. 4), even with increased TLB sizes, is significantly higher in all RAMpage cases than for the standard hierarchy, except for a 4KB RAMpage page size. As SRAM main memory page size increases, TLB miss rates drop, as expected. Further, as TLB size increases, smaller pages' miss rates decrease. In the case of context switches on misses, the number of context
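One way to see the page-size effect is through TLB reach (entries × page size): a fixed number of entries maps more of SRAM main memory when pages are larger, so TLB capacity misses fall. The entry counts and page sizes below are illustrative, not the exact configurations swept in fig. 4.

```c
/* TLB reach = entries x page size.  Larger SRAM main-memory pages let a
 * fixed-size TLB cover more of SRAM main memory; the configurations below
 * are assumed for illustration. */
#include <stdio.h>

int main(void)
{
    const int entries[] = {64, 128, 256};
    const int page_kb[] = {4, 16, 64};      /* assumed SRAM page sizes in KB */

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            printf("%3d entries x %2d KB pages -> reach %5d KB\n",
                   entries[i], page_kb[j], entries[i] * page_kb[j]);
    return 0;
}
```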