L1 Cache and TLB Enhancements to the RAMpage Memory Hierarchy
Abstract. The RAMpage hierarchy moves main memory up a level to replace the lowest-level cache by an equivalent-sized SRAM main memory, with a TLB caching page translations for that main memory. This paper illustrates how more aggressive components higher in the hierarchy (the L1 cache and TLB) affect the performance of the RAMpage approach.
Prefetch requires loading a cache block before it is requested, either by hardware [5] or with compiler support [25]; predictive prefetch attempts to improve accuracy of prefetch for relatively varied memory access patterns [1]. In critical word first, the word containing the reference which caused the miss is fetched first, followed by the rest of the block [11]. Memory compression in effect reduces latency because a smaller amount of information must be moved on a miss. The overhead must be less than the time saved [18]. There are many variations on write miss strategy, but the most effective generally include write buffering [17]. A non-blocking (lockup-free) cache can allow an aggressive pipeline to continue with other instructions while waiting for a miss [4].
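Compiler-supported prefetch can be made concrete with a small sketch (illustrative only; the array, prefetch distance and locality hint are assumptions, not taken from the cited work): the loop requests a block a fixed distance ahead of the element it is currently using.

    #include <stddef.h>

    /* Sum an array while prefetching a block PREFETCH_DIST elements ahead.
       __builtin_prefetch (GCC/Clang) is only a hint: the hardware may drop it,
       and a poorly chosen distance can displace useful blocks, as discussed
       later in this section. */
    #define PREFETCH_DIST 64

    double sum_with_prefetch(const double *a, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + PREFETCH_DIST < n)
                __builtin_prefetch(&a[i + PREFETCH_DIST], 0, 1); /* read, low temporal locality */
            sum += a[i];
        }
        return sum;
    }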
SMT is aimed at masking DRAM latency as well as other causes of pipeline stalls, by hardware support for more than one active thread [19]. SMT aims to solve a wider range of CPU performance problems than RAMpage.
These ideas have costs (e.g., prefetching can displace needed content, causing unnecessary misses). The biggest problem is that most of these approaches do not scale with the growing CPU-DRAM speed gap. Critical word first is less helpful as latency for one reference grows in relation to total time for a big DRAM transaction. Prefetch, memory compression and non-blocking caches have limits as to how much they can reduce effective latency. Write buffering can scale provided buffer size can be scaled, and references to buffered writes can be handled before they are written back. SMT could mask much of the time spent waiting for DRAM, but at the cost of a more complex CPU.
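A rough worked example (with assumed, illustrative timings, not figures from the paper) shows why critical word first scales poorly. If the initial DRAM access latency is $L$ and each word of a $B$-word block transfers in time $t$, the critical word arrives after $L + t$ while the full block takes $L + Bt$, so the relative saving is

    \frac{(B-1)t}{L + Bt}

With $L = 50$ ns, $t = 5$ ns and $B = 16$ the saving is $75/130 \approx 58\%$; if $L$ grows to $200$ ns at the same transfer rate, it shrinks to $75/280 \approx 27\%$. The benefit therefore diminishes as the latency of a single reference comes to dominate the whole transaction.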
Reducing misses has been addressed by increasing cache size, associativity, or both. There are limits on how large a cache can be at a given speed, so the number of levels has increased. Full associativity can be achieved in hardware with less overhead for hits than a conventional fully-associative cache, in an indirect index cache (IIC), by what amounts to a hardware implementation of RAMpage's page table lookup [10]. A drawback of the IIC is that all references incur the overhead of an extra level of indirection. Earlier work on software-based cache management has not focused on replacement policy [7,14].
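The extra indirection can be modelled in software (a minimal sketch under assumed names and sizes, not the hardware organisation of [10]): the tag store is replaced by a hash table mapping a block address to a frame in the data array, so placement is fully associative but every lookup, hit or miss, goes through the table.

    #include <stdbool.h>
    #include <stdint.h>

    #define N_FRAMES   1024   /* frames in the (not modelled) data array */
    #define TABLE_SIZE 2048   /* buckets in the indirect index */

    struct index_entry {
        bool     valid;
        uint64_t block_addr;  /* block currently mapped by this bucket */
        int      frame;       /* data-array frame holding that block */
    };

    static struct index_entry index_table[TABLE_SIZE];

    /* Every reference pays for a hash plus probe (the extra level of
       indirection), but in exchange any block may occupy any frame,
       giving full associativity without a wide tag comparison. */
    static int iic_lookup(uint64_t block_addr)
    {
        unsigned h = (unsigned)((block_addr * 0x9E3779B97F4A7C15ULL) >> 32) % TABLE_SIZE;
        for (unsigned probe = 0; probe < TABLE_SIZE; probe++) {
            struct index_entry *e = &index_table[(h + probe) % TABLE_SIZE];
            if (!e->valid)
                return -1;                 /* miss: block not resident */
            if (e->block_addr == block_addr)
                return e->frame;           /* hit: after one indirection */
        }
        return -1;
    }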
The advantages of RAMpage over SMT and other hardware-based multithreading approaches are that the CPU can be kept simple, and software implementation of support for multiple processes is more flexible (the balance between multitasking and multithreading can be dynamically adjusted, according to workload). An advantage of the IIC is that the OS need not be invoked to handle the equivalent of a TLB miss in RAMpage. As compared with RAMpage, an IIC has more overhead for a hit, and less for a miss.
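The software handling that gives RAMpage this flexibility can be sketched schematically (all routine names here are hypothetical placeholders, not the authors' implementation): a miss in the SRAM main memory traps to the OS, which starts the DRAM transfer and, under the context-switches-on-misses policy, dispatches another ready process rather than stalling.

    #include <stdint.h>

    struct process;  /* opaque process descriptor */

    /* Hypothetical OS services, declared so the sketch is self-contained. */
    void start_dram_fetch(uint64_t addr);
    void mark_blocked(struct process *p, uint64_t addr);
    struct process *pick_ready_process(void);
    void context_switch_to(struct process *p);

    /* Miss in the SRAM main memory, handled by the OS like a page fault:
       start the long DRAM transfer, then run other work instead of stalling. */
    void sram_miss_handler(struct process *current, uint64_t faulting_addr)
    {
        start_dram_fetch(faulting_addr);          /* begin fetching the missing page */
        mark_blocked(current, faulting_addr);     /* faulting process waits for it */
        context_switch_to(pick_ready_process());  /* mask the DRAM latency */
    }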
2.4 Summary
RAMpage masks time which would otherwise be spent waiting for DRAM by taking context switches on misses. Other approaches either aim to reduce, rather than mask, time spent waiting for DRAM, or require more complex hardware. RAMpage can potentially be combined with some of the other approaches (such as SMT), so it is not necessarily in conflict with other ideas.