DRAM WRITE to READ Delay L [15]
DRAM WRITE to READ Delay S [3]
Hi,
I completely missed this message.
My bad.
So, this is bad:
Writes in a window of 3 clocks (write to pre) may or may not finish in time.
The write is inserted mid-read (a read is 8 clocks, not 4), so the middle is of course 4 clocks, not 2 or 3.
The 2nd consecutive write:
Illustration
Code:
.... .... .... ....
RD.PR.... RD.AP....
.... WR.. .... ....
.... .... .... WR..
Read, 8 clocks, read.
A write can happen past the first read, or 4 ticks before and after a read.
The next write can not happen 4 ticks after the read, but will happen with +8 ticks of delay (if there is no read in between).
If that specific scenario is met, you get:
+4 pause, RD + 8 pause + read + 8 pause
write + 4 pause + 8 pause + write
So in technical terms:
Read, NOP, read is 8 ticks, but can only happen if it jumps bank groups.
The full roundtrip is 16 ticks (full burst length).
Write, NOP, write:
The first write happens at 4 ticks, then it pauses. If you write more data where no time needs to be spent to (p)recharge rows or old targets,
then another write will happen after 8 more ticks are done. The actual write happens then, because things need to align back to ticks of 8 after doing a write at half BurstChop8, aka 4 ticks.
(So together that's a distance of 12 ~ hence tWRWR_SG can be 12, but shouldn't be.)
_SG = same group = roundtrip.
This can happen because:
8+8 + pause, 8+8
vs
4 + pause (we are at tick 8), 4 more clocks (we are at tick 12), +4 (we are at tick 16) ~ back in sync: +16, +16.
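The tick arithmetic above can be sketched as a toy calculation. This is an illustration of the alignment idea only, not a DRAM simulator; the 8-tick grid and the 4-tick half-BurstChop8 write are the values used in this post:

```python
def next_aligned(tick: int, grid: int = 8) -> int:
    """Round a tick up to the next multiple of the command grid."""
    return -(-tick // grid) * grid  # ceiling division

# A write at half BurstChop8 occupies ticks 0..3, then pauses until
# the 8-tick rail comes around again.
end_of_first_write = 4
back_on_rail = next_aligned(end_of_first_write)  # tick 8 (the "pause")
second_write = back_on_rail + 4                  # tick 12 ~ the 12-tick tWRWR_SG distance
resync = next_aligned(second_write)              # tick 16, then +16, +16 in sync
```

The 0-to-12 command distance is exactly why a tWRWR_SG of 12 shows up at all, even though 16 keeps everything on the rail.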
So, if the above made any sense whatsoever:
You need 4 ticks to pass // (all ticks pass at the same time; many commands happen at the same time)
so reads do their thing, and you can write to other free space further away.
WTR_S, or so-called _DG (different group: a short jump, no roundtrip waiting requirement)
^ will be WTR_S = 4, half the clocks of RRD_S (8).
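The short-jump relation stated above, written out (values are the ones from this post, not from a spec):

```python
# Half-slot relation for the different-bank-group (short jump) case.
RRD_S = 8            # RAS-to-RAS delay, different bank group, as quoted below
WTR_S = RRD_S // 2   # 4 ticks: the half-slot write fits while reads run elsewhere
```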
Now, WTR_L itself is difficult to explain.
You know that after 4 ticks, plus a 4-tick catch-up to get back on the 8+8+8+8 rail,
another read can happen on the 2nd 8th tick.
Reads are heavy operations.
A read needs to back up the data from the row it accesses.
So it only reads out what it really needs, while discharging the whole row where the (other) data was.
It needs time to first fish out the other important fish, then open the drain valve, collect what it needs, close the drain valve, and fill the pool with water again until the sensors say "water level OK".
While the pool fills, it starts doing other work, waits for the old row to report "OK, I'm usable again", puts the fish back, and moves on.
So while the first part is in its processing state, other work begins in the 8-tick timetable, where it can operate.
It can not operate at the 7th or 6th clock. A UDIMM runs at a 16-tick roundtrip, or 8 ticks per whatever it has to do ~ BurstChop 8.
Remember I said a write-back needs to happen.
Writes are not destructive actions.
They can decide to happen at any time, as long as there is space left for them and they don't conflict with ongoing reads.
If data needs to be stored back to the same place, aka a roundtrip (WTR_L):
The moment it can do that is when the first pool is OK, so 8++ clocks.
Then the 2nd pool finishes its job (another 8 clocks).
And if you have some more special vendor design ideas, or overlapping ideas from, say, Micron or Samsung ~ another 8 clocks.
So WTR_L at best is double RRD_L, or sometimes 3× RRD_L.
It can be any value between 1.5× and 3× RRD_L.
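The "pool" stages above translate into simple bounds. A minimal sketch, assuming the RRD_L = 8 value quoted later in this post:

```python
# Bounds for WTR_L derived from the pool stages described above.
RRD_L = 8                        # same-bank-group RAS-to-RAS delay
WTR_L_best  = 2 * RRD_L          # 16: first pool OK (8) + second pool (8)
WTR_L_worst = 3 * RRD_L          # 24: plus a vendor-specific overlap stage
WTR_L_floor = int(1.5 * RRD_L)   # 12: the lowest of the 1.5x..3x range
```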
But alignment is what matters.
Not because RAS to RAS (= the meaning of RRD) needs it; reads and writes are separated and a thing of their own,
but because in the bigger picture they will fight over processing priority, and what happens then is the write just being delayed indefinitely.
So alignment is important,
but you may need to stop thinking in terms of the "textbook lecture of a single operation".
// Likely why many people in the tech space confuse this behavior.
// Too much focus on a single operation, without considering the trouble your action causes ~ timing before and after the target.
DRAM RAS# to RAS# Delay L [8]
DRAM RAS# to RAS# Delay S [8]
See this.
So the correct behavior should be:
DRAM WRITE to READ Delay L [16]
DRAM WRITE to READ Delay S [4]
With
tRDRD_sg_Runtime [16]
tRDRD_dg_Runtime [8]
tWRWR_sg [16]
tWRWR_dg [8]
RD 2 RD ~ roundtrip, 16 clocks
RD 2 RD ~ fast jump, 8 clocks
WR 2 WR ~ roundtrip, 12-16 clocks, optimally 16, to have some breathing room
WR 2 WR ~ fast jump, also 8 clocks
Why not 4?
Because that's how memory operates. Go RDIMM + server boards for that, if you want two operations per single strobe (of 8).
// While one can be inserted in the middle, two can not. Two can happen between reads, but only if the read is delayed,
// because the 2nd write needs to be delayed to align back to the 8+8+8+8 system.
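The whole set of recommended values can be sanity-checked against the 8-tick strobe grid. A toy check of the alignment rule in this post, not a vendor tool; WTR_S is the one half-slot (4-tick) exception:

```python
# Recommended timings from this post, in ticks.
timings = {
    "WTR_L": 16, "WTR_S": 4,
    "tRDRD_sg": 16, "tRDRD_dg": 8,
    "tWRWR_sg": 16, "tWRWR_dg": 8,
}
for name, ticks in timings.items():
    # WTR_S is a write at half BurstChop8, so it may sit on the 4-tick
    # half-slot; everything else must land on the 8-tick rail.
    grid = 4 if name == "WTR_S" else 8
    assert ticks % grid == 0, f"{name}={ticks} is off the {grid}-tick grid"
```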
Actions are done on strobe signal high (mostly).
And the strobe runs in ticks of 8.
You can try what you want, but it's 8. You can't call an apple a pear.
An apple grows into an apple, even if you call it a pear.
A UDIMM single row can never do two things at the same time in one strobe.
Hence it can never be dual rank ~ just dual sided, with a focus on access efficiency.
All DDR5 can jump bank groups internally, and subchannels semi-internally.
But per strobe it's still one big operation with many round-robin based orders.