1.45V on VDD/VDDQ apparently works too. I briefly fired up Karhu; it runs fine at the start, at least. Since I'm missing proper cooling, the error orgy will probably begin above 60°C.
Once I can use my Jonsbo NF1 again, I can test for longer.
Stability is what matters to me; I'm not out to break bench records. I still have to buy another SSD, then the SSD bracket can go and my Jonsbo can go back in.
Mine is not as good as yours. On the ASRock mainboard it showed me that the E-cores are better than the P-cores.
On this board I definitely need more SA. Below 1.215V I get error 55 on reboot.
I haven't done much with the 7600 G.Skill kit so far. It behaves somewhat differently with the voltages than the TeamGroup. Another (custom) PCB, like the Viper Extreme, that wanted more SA?
The Xtreem are also TeamGroup. I bought the G.Skill because they happened to be available and fit better under my cooler at the time. With the Brocken 4 Max I have no such restrictions.
On the ASRock board I ran an SA of 0.995V at 7600; on the Apex I need more.
Your CPU is way better than mine.
Just ran the SOTTR benchmark. It doesn't look bad at all with the 8000 kit. As far as I can tell, it scales quite well together.
We see the limit.
We see you will not get anywhere with telemetry spoofing.
It's time to stop using the IA_AC exploit of tuning the board to be reported as a high-leakage, low-quality type.
And start to actually tune VIDs and let the CPU internally assign voltage.
You will not get any further with this cutting-supply exploit.
Spend the day(s) learning how to work with the CPU's algo.
Translate some pages from German into English. I guess most of the rest is already in English, soo it shouldn't be too much work.
AC_IA 0.65 as the foundation, and build/undervolt your curve.
Until you no longer hit that ICCMAX limiter.
The CPU will thank you.
No going around the limiter, else you trade in sample health. It already limits your memOC and raw compute ability until you manage to stay under it.
There is no way around the problem other than undervolting the cores.
But VIDs need to stay high where needed, or low under harsh workloads.
Best of luck.
It's not that hard, and generally it was time to stop with fixed clocks. We are not in the Skylake days anymore.
This powerdraw/loadline exploit also very likely will not work on newer CPUs. It's lucky to even work on 13th/14th gen.
Better to learn the CPU instead of working against it.
EDIT:
At the current moment, this is the way I can give you help.
Sorry that it's a lot to read, but it is what it is.
When someday there are sponsors, I can illustrate better & not rely on old scattered data.
Currently that's the way~
Sorry.
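To make the request-side vs. supply-side distinction concrete, here is a minimal sketch of the commonly cited SVID loadline arithmetic. The formulas are the widely used approximations, and every number in it is an illustrative assumption, not data from this thread:

```python
# Commonly cited SVID loadline arithmetic (approximation, illustrative).
# Resistances in milliohm, current in ampere, voltages in volt.

def svid_request(v_curve: float, i_out: float, ac_ll_mohm: float) -> float:
    """The CPU raises its VID request above the fused V/F point to
    pre-compensate the droop it expects from the board (AC_LL)."""
    return v_curve + i_out * ac_ll_mohm / 1000.0

def reported_vcore(vid: float, i_out: float, dc_ll_mohm: float) -> float:
    """DC_LL only skews telemetry: the CPU's own Vcore/power estimate."""
    return vid - i_out * dc_ll_mohm / 1000.0

v_curve = 1.250   # hypothetical fused V/F point at some clock strap
i_out   = 200.0   # hypothetical package current under load

for ac_ll in (0.20, 0.65):  # "cut supply" exploit vs. the 0.65 foundation
    vid = svid_request(v_curve, i_out, ac_ll)
    print(f"AC_LL {ac_ll:.2f} mOhm -> VID request {vid:.3f} V")

# Dropping AC_LL from 0.65 to 0.20 cuts the request by ~90 mV without
# touching the curve; editing the V/F points instead lowers the VID the
# CPU works with internally, which is the approach argued for above.
```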
How can I tune VIDs without touching the VF offset, AC_LL or SVID offset? I don't know of any other method, tbh. Also, my VIDs are very low right now, even under load. They match Vcore.
Hi @Veii, how are you? When I created this account I got my nickname wrong; it's one I use in games! I am @lmfodor. I'll see if I can change it. You have helped me a lot in the other forum, and in fact with my 13700K I used your BIOS config file, including your AC/DC_LL and your VF offsets. I must say they worked very well, both for the processor and for the 7200 kit. In fact, I always disabled VDDQ Training, but when changing processors I ran into some limitations. First, because of how ASUS handles the SVID: if I leave it on "Let BIOS Optimize", I see that it sets PL1 to 253W and PL2 uncapped. If I select "Disabled - Enforce All Limits", it uses Intel's limits at 253W, so the processor reaches 55/56x on the P-cores, not bad for YC tests. But if I activate "Enabled - Enforce All Limits", then I should define the AC/DC so that it works properly. I mean, I would like to achieve optimal CPU performance without a strong OC. I don't need 6.x, just for it to work with the best performance/consumption/temps ratio, leaving headroom for a good memory OC.
Oh hi,
This ASUS bug is strange.
It should work on a preset base.
Enforce all off = longest duration + PL free.
Enforce 90°C is the same, but with a temperature cap.
On "Enabled", it actually depends on the SKU ~ soo there must be a bug.
The 13900KS & 14th gen have a new "high" extreme power profile.
On 13th gen it was 320A, if I remember correctly.
And around 253-255W.
14th gen targets 320-360W and 400A.
But within those 400A there is far more than just the cores.
MemOC is factored in, and its voltages.
Well, basically it's a portion of the IA supply.
GT had its own one, & ring is a special kind with no access for the customer.
To make sure:
are you now on 14th gen, or still 13th?
Regarding the RONs: yes, I have read a lot of your posts and achieved a good delta at low speeds (7200). However now, for example, for 8200 with a kit binned 8000 at 1.45, I see that 1.48v VDDQ is the lowest value that passes YC. I then left TX on Auto, but when I set it to, for example, 1.3v or 1.31/1.32v, I couldn't get it stable. On the other hand, I see that my SA values are high; well, 1.28 is not that high, but I did notice that many Apex setups manage to stay below 1.2, which surprises me.
Why did you decide to leave VDDQ_CPU on Auto?
It's the most crucial value.
Memory is memory. It has its own voltages as its own ecosystem.
It needs how much it needs. It doesn't depend on the CPU at all.
Can you refresh my memory a bit: is that a 16 or a 24gb kit?
I forget things fast. Especially with months of distance.
For 24gb, a delta of 170mV is a bit borderline. A bit too much, even.
For 16gb, you want to be above 180mV, up to 220mV.
Madness777's 16gb example was at a 225mV delta, down to a 165mV cap.
:) Nope, it was 40/48 on both. Tried 40-34 and 34-40 on the first slot.
Soo a playroom of 60mV ~ split it into the 15mV steps it can correct itself ~ that's 210 & 180mV as the possible options.
Then go +/- 5mV until you hit a good voltage that stays coldboot stable (sketched below).
Both will pass, but PSU-off can then cause a "not even a single cycle passes" situation.
30mV jumps will already cause no-boot scenarios. Soo it shouldn't be too hard to find the voltage.
What may be a problem, @tibcsi0407, is if you do huge steps and then adjust other things like SA to meet those huge jumps.
VDDQ needs to be set before ever bothering with RON. One shouldn't even need to.
That's for when you reach the IMC limit and! feel instability between subchannels.
Otherwise, the board and ASUS HQ know how to tune their boards.
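A tiny sketch of the delta bookkeeping described above, purely illustrative: delta here means VDDQ_MEM minus VDDQ_CPU (TX), the window is the one quoted for 2x16gb kits, and the step sizes are the 15mV/5mV figures from the post:

```python
# Enumerate VDDQ_CPU (TX) candidates inside a target delta window.
# delta = VDDQ_MEM - VDDQ_CPU; all values in millivolt.

def tx_candidates(vddq_mem_mv, lo_mv, hi_mv, step_mv=15):
    """15 mV spacing ~ roughly what training can still correct itself."""
    return [(vddq_mem_mv - d, d) for d in range(lo_mv, hi_mv + 1, step_mv)]

# 1.48 V VDDQ_MEM with the 180-220 mV window for a 16gb kit:
for tx_mv, delta_mv in tx_candidates(1480, 180, 220):
    print(f"TX {tx_mv / 1000:.3f} V  (delta {delta_mv} mV)")

# Fine-tune each candidate in +/-5 mV steps until it stays coldboot
# stable; 30 mV jumps already risk a no-boot.
```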
Regarding my V/F curve, I understand that it is not a "leaked?" curve. I'd like to first configure the processor well, so that it works well and leaves room to continue with the memory OC, which I think could greatly improve signal integrity. Regarding the CTLs, I managed to read all the trained parameters with RU, but I continue using the values you indicated as a reference for DQVrefUp/Down as input. In fact, I wanted to see if I could read the RTTs: if, for example, I put in similar values and then look in the BIOS to find out how it trains.
Do you mean leaky, or leaked in terms of my information?
Every CPU has a fused curve. It differs slightly from sample to sample.
What Shamino's tool can read out is the fused curve, which is then trained based on the user's gear.
It may then have shifted based on SVID presets, and is displayed within the min/max allowed boost.
The curve remains stationary even if you extend FMAX.
I can not tell you in great detail how it functions.
About the RU tool:
do you track raw hex changes, or how do you decode it?
Can you be sure it's the actual settings, vs. it being the lookup table?
NVRAM won't tell all.
About RTTs:
they are in mem.
I'm not sure you will ever find them.
The SPD hub controls it.
You may find the default as a BIOS setting, but it may also be encoded gibberish,
or simple placeholder values that are sent to the SPD hub.
They will most likely not be readable values.
In theory yes, in practical theory less,
and in practice also barely.
DC_LL is there to match Vout, but it doesn't need skewing unless you mess with AC_LL.
In our case we exceed the current default of 0.55ohm and run 0.65ohm instead.
I would not touch it. I lack experience in tuning it, but it's not a big deal if it slightly underreports powerdraw on our higher AC_LL.
The CPU still recognizes its powerdraw; just HWInfo may misreport slightly.
I don't know if it makes sense to correct the curve at the bottom, but for the slope, perhaps from V/F point 7, trying (-)0.005 would take me to 1.269 for 56x and 1.304 for 5.8GHz. Would this be a good starting point to try?
It's a pleasure to be in contact with you again! Thanks, as always @Veii
You're soo polite, thank you~~
Don't focus on mV offsets.
They are logarithmic offsets.
Let the tool calculate it, and maybe disable the TVB optimized curve.
That is the team's preset. If you want to do manual work, trust only your own data. Every SKU is unique.
I started to work on a universal curve, but still miss a bit of data.
Learning by time and trial & error, I guess.
In any case, yes, please fix the bottom.
At the very least make it flat, because if a previous point has a higher voltage than the next point ~ the next point is skipped.
Your issue is the P3 starting point.
And the P4 shape.
First pull P2 and P1 down, soo at least it ends flat (see the sketch below).
Apply and check the changes, then only push P3 up.
If done via the Tool.exe, it will push the rest up and smooth the curve.
It will give you data to apply to the BIOS. A "proposed curve".
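As a sketch of the "pull P1/P2 down so the bottom ends flat" step: the helper below walks the curve from the right and pulls any point down that sits above its successor, so no point gets skipped. Point names and millivolt values are made-up placeholders, not anyone's real curve:

```python
def flatten_bottom(points):
    """Pull earlier points down to their successor's level, so the curve
    is non-decreasing and no V/F point gets skipped by the boost logic."""
    fixed = list(points)
    for i in range(len(fixed) - 2, -1, -1):
        name, mv = fixed[i]
        fixed[i] = (name, min(mv, fixed[i + 1][1]))
    return fixed

curve = [("P1", 720), ("P2", 705), ("P3", 700), ("P4", 810)]  # broken bottom
print(flatten_bottom(curve))
# -> P1/P2 come down to 700 mV and the bottom ends flat; only then push
#    P3 up and let the tool smooth the rest into a "proposed curve".
```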
Hi @Veii! How are you? I'm on 14th gen now, a 14900K. I made the change a month ago; truth is, I was very satisfied with the 13700K. I thought I would get a better IMC and I still don't know, since instead of increasing my memory OC frequency I downclocked it to 8200. The new CPU seems like a good one, but I still can't configure it correctly.
My goal, unlike many who chase a big OC, is good efficiency in performance and temperatures, because I am limited by a 360 AIO and I don't want to go to a custom loop for now. That's why I initially thought that defining AC/DC_LL could help, but for now I'll take your advice to set the V/F points well. I made a mistake: I had set Ring min at 48 and max at 50. That generated excess voltage and I had a crash when running YC. I must confess I always had an AVX offset of 4, so the YC tests were not correct. So I set it to 0 so as not to have any limitations. Here I show you the difference with Ring at 48/50 (without offsets) and everything on Auto.
The increase in SVID is notable. I didn't configure anything in TVB, everything is on Auto. I had only set Ring min 48 / max 50.
I had read a post of yours on OCN where you advised on how to configure everything in manual mode or everything on Auto; for Auto you said you only had to set VDD=VDDQ_MEM, SA and MC. The rest all on Auto. And that's what I did now; that's why I left TX on Auto, with RTTs, training and everything on Auto. Is that right? Since on each retraining TX adapted. Post #19,731, I think it was one of the best posts! I have them all bookmarked.
I did my part here, let's see if the curve looks good to you. There is a small deviation at 5600, but I don't think I can correct it. The base is flat, I started a little lower, and then gave P3 a small bump. What do you think?
Regarding the voltage configuration, I just set CPU L2 to 1.2V. I always wondered why I need so much SA, when I see that most achieve less than 1.2V with an Apex. I changed processor (13700K -> 14900K), mobo (Hero -> Encore) and memory! (G.Skill 7200 2x16 to 8000 2x16 A-die). What could be the reason for instability below that SA? Immediate crash in YC. With the new memory I managed to lower VDD to 1.52, VDDQ to 1.48 (vs. 1.45v XMP), MC to 1.46, and SA to 1.27 for 8200. I have PLL Term at 1.1 and the SA and MC PLLs at 0.96. Supposedly 0.96 on the SA PLL should allow me a lower SA voltage, right?
Coming back to the mem configuration: as I mentioned, I have 2x16 8000 A-die G.Skill, similar to my previous 7200C34, where I used Shamino's flat ODTs, RTTs 48-34-34-34-34-240-0-0-60-40-40 / RON 40-40, and DQVrefUp/Down 172. Now everything is on Auto. This VDDQ value with VDDQ Training disabled seemed fine, at 1.48v. I could redefine those ODTs/RONs and try disabling VDDQ Training to see if I can lower VDDQ to 1.45 and TX to 1.3. The issue is that my SA is very high (too close to TX; I know it shouldn't be higher than TX). That's why I decided to first optimize the CPU and then go on to the memory.
I just set the values in the BIOS, and it seems the curve is fine; in fact, I notice an improvement in the Pi 2.5 values and a lower Core VID. Should I improve it a little more?
Anything else about TVB or Ring? I'd like to leave room to cleanly define the memory OC values. I know TM5 and Karhu pass fine, but I should improve the delta; perhaps start by setting the RTT/RONs and the DQVrefUp/Down values again. Thanks!
Hello @Veii, can you explain how Vcore impacts RAM OC?
1) If the core is undervolted through a VF offset, how does the IMC get influenced by this?
2) Does the core need more or less voltage per VF point to sustain a RAM OC?
3) Does the max performance power plan degrade the CPU, because the clock is constant with constant voltage instead of 0.5V idle?
My IA "electrical design point" limit reason seems to trigger when I use a 400A limit, but if I set it to the default 511A it seems fine, even though power consumption is the same.
Also, what is the difference between a VF offset and an IA AC undervolt? Both seem to do the same thing: lower VIDs and thus Vcore.
Hey hey,
Can you refresh my mind a bit. It's been many pages.
We were working with a 225mV delta before, or?
Your foundation looks alright.
If it works, that's good.
But I'm a bit worried you'll need to settle on 40-34-34-40 in the end, with more delta.
Or 48-40-40-48 with less delta.
On this "trains with errors" topic:
are we on manual DQ Vref, or Auto?
There remains a chance that the floor (down) has shifted a bit with recent BIOS changes. But I tend to think it should be fine. If the ASUS team followed my changes, or built on top of them ~ it should remain fine.
So do we work with or without CTL0?
You know a VDDQ change messes everything up.
It will do so on RON too.
A 60mV change on VDDQ is huge.
There is no more playroom on it like there was with training.
Steps of 5-10mV, not steps of 60mV.
5mV will already show the difference between stable and hardfail.
15mV is when it will or won't train anymore.
Weakening ODT and then running the RONs weak won't bring you far.
Although the report already was an overcurrent crash.
As long as you have it working, it's ok.
Whether it can be called progress, unsure.
It may just need strengthened RONs and it could be ok.
It wasn't me.
People love to mess with the sheet and also mess it up.
Part of people.
Keep your timings. The moment you notice 7 & 8s, we have write alignment issues.
Yes, WR >/= WTRL, until the approach to writes on the CPU side changes.
They did already, hence the board defaults to WR 8(?) or 6.
With WTR_ 4-18.
But I like to stick to the old known-good.
No change needs to happen.
You might be overthinking or overworrying.
We know IA supply needs to increase at higher clocks.
Let's not forget.
Let me remind you that you have only changed about 30% of the available options.
I don't want you to be accidentally flooded by side issues and new rabbit holes.
Keep working hard; there is plenty of stuff to do for more clock.
It's not the time to give up just because it has gotten difficult.
The new beta BIOS seems to have gotten some changes in the background; I totally lost the stability of my 8533 profile.
I have to find out what happened and will report back later.
Hi @tibcsi0407, I was about to ask if anyone had tried either of the two new versions. The beta says it updates the microcode to 0x123, and 0904 updates the ME to version 16.1.30.2307. I always wondered: when changing from BIOS 1 to BIOS 2, is the microcode also updated on the other, or are they completely separate?
Ok, I see ~ a 14900K now.
In that case you can just copy it from either tibcsi or zebra_hun.
And slightly reshape it, because the mid-low range differs too much between CPU bins.
Yea, borderline.
The end parts don't need a smooth ramp, also because it's common that the higher boost states are the ones utilized.
Intel's power plans aren't really focused on staying between 2-3GHz. At least from my blind perspective; I don't think they are tuned very granularly.
Having AMD as a comparison. Tho AMD has its own problems with slingshot overboost beyond FMAX, but that's a topic of its own.
The bottom section at 700mV is "ok".
But you want a bit of buffer.
Optimally Vout matches the curve, but it won't. It requires too much work.
Soo while a 700mV target is good, please leave slightly more margin there. 10-15mV at least, soo you have playroom on AC_LL up and down.
AC_LL down pushes the whole curve downwards. I can't see it having any thermal buffer in it, soo it's a linear Y-axis drop (sketched below).
Partially why only running that may be insufficient without manual curve labor. The substrate does not behave linearly across voltage/clock strap.
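The "linear Y-axis drop" and the 10-15mV buffer can be written out directly. A throwaway sketch with invented numbers; the only facts taken from the post are the ~700mV bottom and the 10-15mV margin:

```python
# Shift a whole V/F curve down by a fixed amount (what lowering AC_LL
# effectively does, per the post) and verify the bottom keeps its buffer.
FLOOR_MV, BUFFER_MV = 700, 15

def shift_curve(curve_mv, drop_mv):
    shifted = [v - drop_mv for v in curve_mv]
    for v in shifted:
        if v < FLOOR_MV + BUFFER_MV:
            raise ValueError(f"{v} mV leaves no playroom above {FLOOR_MV} mV")
    return shifted

print(shift_curve([730, 760, 820, 980, 1150, 1320], drop_mv=10))
```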
1.42v on a 14900K is barely leaky. Close to none.
Below 1.4v is non-leaky. 1.42 is a good value for the fused V/F.
1.46 and above is messy to work with (too leaky, too hungry for voltage).
On L2$ I'm not fully certain about the layout.
Haven't read Skatterbencher's thread/video on the voltage layout.
I tend to first do my own work and compare later ~ preventing placebo knowledge or influenced misunderstandings.
I believe his data is right, but I also don't think "IMC voltage" or the IVRs are well explained. You can't mess with IMC voltage. Only side-influence it.
Data links from VDDIO, or however we want to call it, VDD2 ~ are data links, not substrate voltage.
What I'm certain of is: don't use the PLLs anymore.
They are for subzero conditions. They are there to counteract board thermal changes. Even if Cu + reinforced fiberglass (RFG) material is quite temperature-stable.
Excuse me, but I do think you also don't notice the granularity in SA.
10mV steps are huge.
1.2 vs 1.22 is a big difference. 1.22 vs 1.27v is a world's difference.
SA directly influences internal ODT. Well, passively, but given most of it happens automatically, it can be called a direct change.
Another thing:
the SA base messes with the VDD2_CPU & VDDQ_CPU targets.
SA itself is on a VID basis, but VDD2 & VDDQ (CPU) are on an IVR basis.
They will be a factor of throttle and sit in ICCMAX.
The VDDQs have their own ICCMAX, but are still a factor in throttle and dynamic voltages.
This means that high input voltage and high SA will require higher minimum voltages on both.
Rule of thumb (written out as data below) ~ high SA: lower VDDQ delta, higher VDD2 target ~ weaker ODT.
Low SA: bigger VDDQ delta, lower VDD2 ceiling ~ stronger ODT.
SA, I believe, has a lookup table for ODT too. If voltage is X, change ODT.
I don't think it scales completely linearly, but it does self-adapt. That's for sure.
The ODT target changes with the CPU leakage factor ~ soo for any "kind of accurate" recommendations
I need to know the CPU's fused V/F. Which you showed.
As there can easily be a 40-50mV difference in target voltage @ the same clock, @ the same SKU ~ between samples.
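The rule of thumb above, written out as data so the two directions are easy to compare. Directional tendencies only, as stated; nothing here is an absolute value:

```python
SA_RULE_OF_THUMB = {
    "high SA": {"vddq_delta": "lower",  "vdd2": "higher target", "odt": "weaker"},
    "low SA":  {"vddq_delta": "bigger", "vdd2": "lower ceiling", "odt": "stronger"},
}
```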
Found it in HWInfo ~ 2x16gb. mm mm.
They will not work. He changed things around, and I was focusing on an old foundation + building on it.
ODT & RTT come later. If the baseline foundation is different, and especially if ODT changes ~ all those target ODTs, and then the RTTs in mem, go out the window.
It worked then, and then only.
They are far too aggressive for today's standards.
VDDQ Training: always off.
IVR VDDQ: always set. The (MAX) delta checked at semi-low clock, and adapted to higher clock.
CPU <-> MEM can adapt a bit to voltage and VREF changes. It is not stupid. But it is a variable of error.
If you inspect the proof-of-concept I shared before more closely,
vs. the BIOS input:
even if the Renesas was leaky, the OEM greens adapted to the VDDQ target.
They ignored bad user input, but the OCer also had to factor in the leaky delta.
In any case, it snaps back into place, to where it belongs.
No jumping on VDDQ_MEM & the typical +/- 15mV jumps on VDD.
You may start by simply copying those voltages, and putting in a tiny bit more VDD2.
At 8200 for now.
CTL0 stays. You can "not use" it for now if you want. But I believe it remains fine.
0080 vs 0081 is definitely a different BIOS.
I believe 2000 is newer than 190X, but 200X is in beta state. Soo give it 2-3 weeks without bad reports and it becomes final.
Make a list~
Not one load, not two loads.
Track voltages via Shamino's tool. There are several V/F menus. One displays the VID-to-Vout-to-curve factor.
HWInfo and the Worktool bother each other.
HWInfo and ATC bother each other.
If you don't need all the reports, HWMonitor or OpenHWMonitor is a "softer" tool to track memory voltage jumps, without EC interference.
On the question of tuning VIDs ~ this must be a communication issue.
VRM Core/Ring/Cache offsets vs. enforced current ~ both mess with the supply.
Loadline telemetry faking messes with the supply too, but on the request side.
SVID presets are presets on top of the sample-fused curve.
The offset presets are by Intel, and maybe get influenced by board partners. I think tho it's fully Intel's property.
"Trained" is ASUS-exclusive(?); it is a thermally trained offset, and I believe it would be intelligent enough to factor other variables with it.
In any case, it's also a trained preset.
V/F points are how one should adapt and work with it.
With a potentially higher AC_LL vs. the by-default "much lower" value ASUS defines.
The ASUS team knows their work, but it is nevertheless far away from the "normal" target.
No judgement required, because it just works ~ but in any case it's not factory-default behavior.
Could be ROG SKU exclusive, unsure ~ but it is what it is.
Loading Intel SVID presets has to change it to their targets, yet still with ASUS's own loadline & telemetry targets.
Same as ICCMAX for ROG SKUs defaulting to 511A, maxed.
I can't say anything about this decision, but it is what it is.
Which part is a bug and which part is intentional is open to interpretation.
I recommend never trusting boards; enforce your own values. At your own responsibility.
And absolutely do not run fixed clock or fixed voltage.
Hello~
Let me try, but no guarantee all your questions can be answered.
1) IMC supply is dynamic. It is based on the voltage margins left within the IA supply to the CPU. The MC-link voltages are not IMC supply.
UncoreVID, SA VID and CoreVID are all load-aware supplies. Load- and supply-aware.
Access to a Ring V/F is not given, but the ring strap is influenced by the core V/F. The remaining internal clocks are influenced based on the ring clock.
Clock doesn't have to be high or low; it is intelligent enough to scale itself to how much it needs.
This is done to prevent transient spikes or jitter, and to hold the target TDP smoothly without spikes.
Load-awareness.
2) Cores and their cache are isolated. Ring and E-cores are isolated.
Intel's design allows individual voltage supply to the P-cores. But the internal voltage spread is done in a VID manner.
The P-cores' state does not matter for QCLK or RingCLK.
Yet VID is priority-focused. It will happen that parts get more or less supply, due to the duration and levels they request.
Lowering the requests of the parts you can influence leaves more margin and higher priority for the remaining dynamic clocks you can not influence.
VDDCR_IA supply is dynamic. It can be overwritten by a constant voltage, but that is very bad practice.
3) What you read out is ~1/160 of the samples. I can't say exactly what the polling rate of Intel's design is. I am only decently aware of the SVI2 & SVI3 designs by now.
The IVR is still complex. I need more time with Intel's arch;
in any case, it is much faster than the 30-50ms polling HWInfo can do. Even if the reports you read are an average over 1000ms, sample polling is unlikely to be faster than 50ms.
For an SVI2 design that's 7/8 samples missed.
For an SVI3 design that's 9/10 samples missed, but SVI3 is complex. This part is not 100% accurate; many "it depends".
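The missed-sample ratios follow from simple period arithmetic. The internal sample periods below are assumptions chosen only to reproduce the quoted 7/8 and 9/10 figures:

```python
# How many internal telemetry samples a slow software poll skips over.
def missed(poll_ms: float, sample_ms: float) -> str:
    n = round(poll_ms / sample_ms)   # internal samples per software poll
    return f"{n - 1}/{n} samples missed"

print("SVI2:", missed(poll_ms=40, sample_ms=5))  # -> 7/8 samples missed
print("SVI3:", missed(poll_ms=40, sample_ms=4))  # -> 9/10 samples missed
```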
For the rest:
power plans can not damage a sample that follows its power and electrical limitations.
The OS does not take over control of the CPU.
The CPU internally does its own scheduling, voltage supply and clock ramping.
The OS may influence to which clocks work is offloaded, for faster processing.
But this still goes through the backend and is processed by the branch predictors before being supplied to the actual workers and acceleration units. The OS has little to say there.
Given that clock-to-voltage is internally managed,
there is no worry about "usable" clock.
Power plans may cause trouble with clock jitter, and waste useless cycles.
But they will not damage the CPU.
There can not be a 100% "no" answer, because you very much can shorten lifespan if the CPU stays at constant high requests and gets delivered constant high voltage.
The "constant or not" factor you can not see. It is far too fast for any consumer tool to track. Yet nevertheless, at best the CPU is load-aware and you get throttled perf at whatever clock number,
or the CPU has parts of its protections off ~ which happens in OC_MODE, aka constant supply and constant frequency. Where then, of course, the chance is higher.
No clear answer. Degradation has many, many variables.
Time is one; thermals relative to the substrate design is another.
But the main ones are the electrical limitations.
Thermals are substrate-focused, and already resolved at the design stage.
Early on called the "substrate thermal foldover point".
In any case, no, the OS doesn't have such access.
It can waste compute cycles, and so increase powerdraw.
But lifetime is not exactly defined by compute cycles. Neither by powerdraw alone, and also not by thermals.
Internal electrical design point limitations are computed values.
You can change the target points, but breaking Intel's specifications leaves it under your own responsibility when it degrades (it always does) and by how much.
Stick to the given electrical design point targets. My suggestion.
Don't try to work around the problem by lifting the range ~ work on the problem by lowering supply and increasing efficiency.
The clock that comes out should not be your worry. CoreCLK values mean nothing.
Internally there are many clocks that align and get loaded dynamically, to deliver you XYZ performance.
It is not the core clock that defines performance. CoreClock is kind of the frontend. But that's all.
Like on a GPU, where the core clock is not the main factor. Neither on NVIDIA's nor AMD's designs. Internally there are 2-5 more clocks that happen and are load-aware.
Load-aware means exactly that:
aware of load difficulty and load type; with their health/throttle offsets, supply will differ and clock will differ.
Clock and VID visually go hand in hand, but often several voltages are combined as VIDs ~ and visualized under one or two tables,
to make categorizing things easier. Less complexity to boost, and fewer wasted resources for managing the frontend.
Hence the user sees only one V/F, that of the main cores.
They don't see the E-cores, ring and remaining parts of the CPU.
I can understand why it is soo difficult to illustrate or understand.
Loadline telemetry changes (semi-knowledgeable here)
are an exploit.
The SVID curve is built upon the fused curve.
That curve factors in, if possible (unsure about many Intel-ME abilities), the power supply, and I think it also has to factor in thermals
(which I can not see either; maybe thermals just control a strap switch, not the curve, on Intel).
In any case, the curve is built from preconfigured offsets based on board capability ~ on top of the fused-in curve.
That fused-in curve for 14th gen is rated at target boost ~ 6GHz in this case @ 105°C for X duration. I hope I got that part right.
Loadline faking tells the CPU that this board has XYZ amount of droop before the voltage arrives at the CPU,
which optimally results in the CPU increasing VID to offset the target loss,
or ignoring that and adapting the receive-vs-request delta.
The CPU is aware of your changes; the exploit works only semi-intentionally.
Usually telemetry faking is done by skewing what the CPU thinks it gets vs. what the VRMs send out.
The problem with loadline telemetry faking is,
outside of telling the CPU you have a very bad board that can't supply the current it requests,
that the voltage drops linearly.
The CPU will keep requesting its high VIDs and internally hit a VID ceiling + calculate its native supply vs. the allowed supply range.
It will keep calculating an unrealistic number and lower performance far earlier than needed.
The benefit that many notice is maybe the first-stage power limiters and generally lower powerdraw.
But if CEP actually was enabled, you would notice proactive throttle, because the CPU keeps thinking it is overvolting itself (the VIDs stay).
Given there is a mismatch between receive and request, it will try to load a higher strap, which again results in a higher VID.
Reaching the voltage ceiling & the calculated strain ceiling much faster ~ than it actually has to.
And aside from all of that,
the cores will raise their requests higher and higher (VRMAX ceiling) and take away voltage priority from other parts that need their chunk of supply.
Hence sooner or later the ICCMAX limit will be hit (faster, actually, due to higher VID requests on higher "allowed" straps) and it throttles even faster + limits memOC.
ICCMAX factors in all VIDs and all IVRs ~ although there are several ICCMAXs.
Basically, you think you get a higher clock strap, because the CPU tries to load it, because the CPU partially! thinks that strain is lower.
But in reality it's an exploit, and the CPU is still aware of how much voltage it requests, hence hitting the throttle limiters faster.
Hitting ICCMAX basically forces voltage and package throttle on everything associated with it ~ which then results in clock loss again, after the margins for every strap vanish.
Very big topic.
On the other hand,
with VID changes you leave the CPU in full control of its scheduling.
You run real limitations, and of course stability remains, because you do no telemetry faking.
In exchange, as a reward for your work, you gain more raw compute at the same clock, because internal parts have more margin to boost up.
This includes the automatic IMC voltage supply, as there will be less that takes priority over it.
EDIT:
I need to spend more time with Intel;
there is too much to do and research.
There may be some misunderstandings and things I don't work optimally on.
I've only had it for 3 months, which is long, but that V/F topic was a canceled post.
It is not finished research & I very surely miss some points.
@2k5lexi is there a way to permanently disable post merging for my account?
Between multi-quotes or larger posts, I often have to wait until someone else replies,
so that they can be linked as separate chunks.
Merged posts (in my case) add nothing against "spam" and make it harder for the people addressed to split up the questions/answers.
// Especially when someone asks 6 questions in one post and wants them answered.
Instead, you see one big blob that is hard to quote for both sides.
Not everyone highlights and quotes manually, and for my account there would be no benefit in lowering "post pings" this way.
Pings on new messages, for example.
I can understand why it is the way it is,
but would it be possible to disable this function for my account?
I do make an effort to split up my posts and not post every answer individually.
But such nearly full-page posts look far too big.
I'd rather not wait 5-6 hours between answers just so the system recognizes them as a "new message".
Nor cut in on all the other users ~ if you always have to wait for someone to post a reply before uploading something big again.
If possible~
EDIT:
It might already be helpful enough if the system split your messages into separate posts when different users are pinged/quoted.
The post above really belongs as 2 messages, since 2 different users were quoted separately.
Or the waiting time for new posts could be shortened to 60 minutes. Currently it is close to 3-4 hours.
Unfortunately, it still keeps happening that mine get merged here and there.
Small update:
VDD 1.55
IMC and IVR both at 1.35
SA 1.21
TM5 error at 15min, see image. What do these errors mean?
Timings the same as in my last post.
Edit: with 1.56 VDD, rest unchanged, a freeze at 34min.
IVR VDDQ is much too high.
#1 & #11: heat/voltage issue ~ CPU side.
#7: SNR voltage issue ~ CPU to MEM. Mostly VDDQ_CPU.
#0: MC-link dropout, CPU to MEM.
Try a jump of +/- 30mV on VDDQ and it will tell you itself.
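For quick reference, the decode above as a lookup. These interpretations are from this reply, not official TM5 documentation:

```python
TM5_ERRORS = {
    0:  "MC-link dropout, CPU to MEM",
    1:  "heat/voltage issue ~ CPU side",
    7:  "SNR voltage issue, CPU to MEM ~ mostly VDDQ_CPU",
    11: "heat/voltage issue ~ CPU side",
}
```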
Maybe the team messed with the slopes to cause such a scenario.
If it's VDDQ, the ME update is to blame.
Double-check the curve visually against your old data, in case something changed there.
Otherwise, it can be the slopes causing such behavior.
EDIT:
An ME change can cause SVID behavior changes and ODT changes,
but will never mess with slopes, unless the team reworked something (tho there is no mention of mem compatibility improvements).
That blue BIOS looks soo strange.
Unusual.
A 30mV step on P3 is huge.
Thanks for the lengthy explanation; safe to say I understand 70-80% of it. But you keep using the word "strap" and I don't get what that means. Also, regarding VID requests: even with AC_LL modification, my VIDs show much lower values in HWInfo, even under load. For example, now with 0.15 AC_LL I have 1.18V Vcore and around 1.18V VID when I run VT3. But you said AC_LL modification causes the core to try to request even more voltage and give up at the ceiling + perf loss, yet I never saw high VIDs in HWInfo except when I used a fully manual VRM voltage + LLC7.
HWInfo VIDs are not the curve.
It works because it spoofs it, but internally that's not the curve.
A strap is a clock setting +/- the delta it can scale at a given voltage set.
Like 8400 is a memory strap, or a 4.4GHz baseclock is a 4.4GHz clock strap.
It is a name for a loaded frequency point + all the other required data it needs.
Freq point + extras = strap.
A frequency value that has other extras bound to it, in a lookup table or, in our case, a logarithmic curve.
P8 is a frequency point. C-states are frequency straps, and so on.
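Purely as an illustration of that definition (the fields are invented placeholders; this is not how the hardware stores it):

```python
from dataclasses import dataclass

@dataclass
class Strap:
    """A 'strap': a frequency point plus everything bound with it."""
    freq_mhz: int         # the frequency point itself
    voltage_mv: int       # the voltage bound to it on the curve
    scale_delta_mhz: int  # +/- range it may scale at that voltage set

mem_strap  = Strap(freq_mhz=8400, voltage_mv=1435, scale_delta_mhz=0)
base_strap = Strap(freq_mhz=4400, voltage_mv=950, scale_delta_mhz=100)
```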
Does UncoreVID scale a bit with an AC_LL change, or not at all?
What about SA VID?
SA VID I expect not to wiggle, because we force the SA voltage.
You should check with the ASUS worktool how it shows the current curve and what the VIDs are.
Because you can see: Vout = die-sense stays the same.
The curve point internally shifts ~ the VID shifts.
But 1468 curve point vs 1273 VID ~ 195 delta.
1450 curve vs 1237 VID ~ 213 delta.
1387 curve vs 1154 VID ~ 233 delta.
You can see that using AC_LL, the distance between curve point and VID gets skewed.
Yet you can also see that Vout stays the same with a drastically different curve. Powerdraw changes, even tho supposedly Vout stays the same.
And raw compute changes, even if Vout stays the same.
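The skew is just the difference of each pair; the three data points quoted above reproduce it:

```python
# Curve-point vs. VID pairs from above (millivolt); the growing delta is
# the skew AC_LL introduces while Vout (die-sense) stays the same.
pairs = [(1468, 1273), (1450, 1237), (1387, 1154)]
for curve_mv, vid_mv in pairs:
    print(f"curve {curve_mv} vs VID {vid_mv} -> delta {curve_mv - vid_mv} mV")
# -> 195, 213, 233 mV
```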
Internal throttling happens on the VID scale.
But the HWiNFO VID is not the real curve.
And it is definitely not the real supply.
Throttling and ICCMAX happen due to the internal curve.
Loadlines skew the result between request and delivery, but do not skew the curve itself.
What they do skew is allowing higher boost straps ~ aka a higher attempt at loading a higher clock strap, yet that alone does not mean anything whatsoever.
You should never trust voltages and never trust frequency.
Apart from having no ability to track it anyway, because of how slow consumer tools are.
Much, much more happens internally, and we absolutely do not want to break that with fixed voltages or a fixed clock.
Clock gating is needed, and voltage gating too.
EDIT:
Hence all of that + a bit more time.
I suggest going from the AC_LL "ASUS default" of 0.55ohm up to 0.62-0.65ohm. On Godlike samples that's up to 0.7 if not 0.8ohm.
And then a drop on the curve level.
Supply will be higher, hence the CPU will take more IA for all the parts that may need it, due to the increased memOC strain.
But the curve will be offset lower, to prevent voltage-max throttling. Which is one of many ways to throttle.
And we also have VRMAX + ICCMAX as voltage and amperage limiters ~ to preserve sample health.
So if you run into our limiters ~ which you definitely will "on stock" ~ then at the very least the CPU is not harmed by our little voltage push.
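As a rough first-order model only (a common approximation, not the CPU's exact internal algorithm): the voltage request is roughly the curve point plus AC_LL times the predicted current. Note that the BIOS field takes the value in milliohms, so 0.55 at 200A adds about 110mV under load:

# Rough first-order sketch, not Intel's exact internal algorithm.
def requested_voltage_mv(curve_point_mv, ac_ll_mohm, icc_amps, curve_offset_mv=0.0):
    # V_request ~= (curve point + curve offset) + AC_LL * Icc
    # mOhm * A = mV, so the units line up.
    return (curve_point_mv + curve_offset_mv) + ac_ll_mohm * icc_amps

# Raising AC_LL 0.55 -> 0.65 lifts the loaded request by ~20mV at 200A,
# while a -80mV curve offset (V/F point offset) pulls the whole thing back down:
print(requested_voltage_mv(1450, 0.55, 200))        # 1560.0 mV
print(requested_voltage_mv(1450, 0.65, 200, -80))   # 1500.0 mV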
I've put some thought into it, but it may be hard to follow. My bad for the limited explaining.
In any case, with what I think are safe limiters ~ you will definitely feel degraded perf until you undervolt.
Lifting the limits is not the way to go!
Keep working hard and the result is rewarding. It also leaves more headroom for the IMC, Ring and other critical parts that will see increased strain due to the OC.
Yet all in all, you can not change the IMC supply. You can only influence the parts around it.
And it's also a reason why CPUs are so hungry and "overvolted" at stock.
You need more margins and more voltage.
In reality the cores are decently efficient. But the delta between different samples of the same SKU is too big. Hence high voltage and limiters are set in place by default.
Well, for all other non-APEX boards too.
All CPUs are overvolted more than they need to be. But you have to have margins, hence the overvoltage at stock.
I think my approach is better~
Even if I slightly give it more voltage, our limiters are low.
Eh, ICCMAX could be a tad lower, but it's OK how it is.
So either you fix the curve, or you have degraded perf. Degraded lifetime should be prevented by the limiters, as best as possible.
Thermals do not cause degradation, and neither does wattage power draw. It's more complex than that.
If you want to see these BIOS settings, here they are:
[2024/01/08 21:27:57]
Ai Overclock Tuner [Auto]
Intel(R) Adaptive Boost Technology [Auto]
ASUS MultiCore Enhancement [Enabled – Remove All limits (90°C)]
SVID Behavior [Trained]
BCLK Frequency : DRAM Frequency Ratio [100:100]
Memory Controller : DRAM Frequency Ratio [1:2]
DRAM Frequency [DDR5-7200MHz]
Performance Core Ratio [By Core Usage]
1-Core Ratio Limit [58]
2-Core Ratio Limit [58]
3-Core Ratio Limit [57]
4-Core Ratio Limit [57]
5-Core Ratio Limit [56]
6-Core Ratio Limit [56]
7-Core Ratio Limit [55]
8-Core Ratio Limit [55]
Performance Core0 Specific Ratio Limit [Auto]
Performance Core0 specific Voltage [Auto]
Performance Core1 Specific Ratio Limit [Auto]
Performance Core1 specific Voltage [Auto]
*Performance Core2 Specific Ratio Limit [Auto]
Performance Core2 specific Voltage [Auto]
*Performance Core3 Specific Ratio Limit [Auto]
Performance Core3 specific Voltage [Auto]
Performance Core4 Specific Ratio Limit [Auto]
Performance Core4 specific Voltage [Auto]
Performance Core5 Specific Ratio Limit [Auto]
Performance Core5 specific Voltage [Auto]
Performance Core6 Specific Ratio Limit [Auto]
Performance Core6 specific Voltage [Auto]
Performance Core7 Specific Ratio Limit [Auto]
Performance Core7 specific Voltage [Auto]
Efficient Core Ratio [By Core Usage]
Efficient Turbo Ratio Limit 1 [44]
Efficient Turbo Ratio Cores 1 [Auto]
Efficient Core Group0 Specific Ratio Limit [Auto]
Efficient Core Group0 specific Voltage [Auto]
Efficient Core Group1 Specific Ratio Limit [Auto]
Efficient Core Group1 specific Voltage [Auto]
Efficient Core Group2 Specific Ratio Limit [Auto]
Efficient Core Group2 specific Voltage [Auto]
Efficient Core Group3 Specific Ratio Limit [Auto]
Efficient Core Group3 specific Voltage [Auto]
AVX2 [Auto]
AVX2 Ratio Offset to per-core Ratio Limit [Auto]
AVX2 Voltage Guardband Scale Factor [Auto]
Maximus Tweak [Mode 2]
DRAM CAS# Latency [32]
DRAM RAS# to CAS# Delay Read [42]
DRAM RAS# to CAS# Delay Write [16]
DRAM RAS# PRE Time [42]
DRAM RAS# ACT Time [54]
DRAM Command Rate [2N]
DRAM RAS# to RAS# Delay L [12]
DRAM RAS# to RAS# Delay S [8]
DRAM REF Cycle Time 2 [448]
DRAM REF Cycle Time Same Bank [Auto]
DRAM Refresh Interval [131071]
DRAM WRITE Recovery Time [24]
DRAM READ to PRE Time [12]
DRAM FOUR ACT WIN Time [32]
DRAM WRITE to READ Delay L [24]
DRAM WRITE to READ Delay S [10]
DRAM CKE Minimum Pulse Width [Auto]
DRAM Write Latency [30]
Ctl0 dqvrefup [154]
Ctl0 dqvrefdn [72]
Ctl0 dqodtvrefup [Auto]
Ctl0 dqodtvrefdn [Auto]
Ctl1 cmdvrefup [Auto]
Ctl1 ctlvrefup [Auto]
Ctl1 clkvrefup [Auto]
Ctl1 ckecsvrefup [Auto]
Ctl2 cmdvrefdn [Auto]
Ctl2 ctlvrefdn [Auto]
Ctl2 clkvrefdn [Auto]
Read Equalization RxEq Start Sign [-]
Read Equalization RxEq Start [Auto]
Read Equalization RxEq Stop Sign [-]
Read Equalization RxEq Stop [Auto]
ODT_READ_DURATION [Auto]
ODT_READ_DELAY [Auto]
ODT_WRITE_DURATION [Auto]
ODT_WRITE_DELAY [Auto]
DQ RTT WR [40 DRAM Clock]
DQ RTT NOM RD [40 DRAM Clock]
DQ RTT NOM WR [40 DRAM Clock]
DQ RTT PARK [34 DRAM Clock]
DQ RTT PARK DQS [34 DRAM Clock]
GroupA CA ODT [240 DRAM Clock]
GroupA CS ODT [0 DRAM Clock]
GroupA CK ODT [0 DRAM Clock]
GroupB CA ODT [60 DRAM Clock]
GroupB CS ODT [40 DRAM Clock]
GroupB CK ODT [40 DRAM Clock]
Pull-up Output Driver Impedance [34 DRAM Clock]
Pull-Down Output Driver Impedance [34 DRAM Clock]
DQ RTT WR [40 DRAM Clock]
DQ RTT NOM RD [40 DRAM Clock]
DQ RTT NOM WR [40 DRAM Clock]
DQ RTT PARK [34 DRAM Clock]
DQ RTT PARK DQS [34 DRAM Clock]
GroupA CA ODT [240 DRAM Clock]
GroupA CS ODT [0 DRAM Clock]
GroupA CK ODT [0 DRAM Clock]
GroupB CA ODT [60 DRAM Clock]
GroupB CS ODT [40 DRAM Clock]
GroupB CK ODT [40 DRAM Clock]
Pull-up Output Driver Impedance [34 DRAM Clock]
Pull-Down Output Driver Impedance [34 DRAM Clock]
Round Trip Latency Init Value MC0 CHA [Auto]
Round Trip Latency Max Value MC0 CHA [Auto]
Round Trip Latency Offset Value Mode Sign MC0 CHA [-]
Round Trip Latency Offset Value MC0 CHA [Auto]
Round Trip Latency Init Value MC0 CHB [Auto]
Round Trip Latency Max Value MC0 CHB [Auto]
Round Trip Latency Offset Value Mode Sign MC0 CHB [-]
Round Trip Latency Offset Value MC0 CHB [Auto]
Round Trip Latency Init Value MC1 CHA [Auto]
Round Trip Latency Max Value MC1 CHA [Auto]
Round Trip Latency Offset Value Mode Sign MC1 CHA [-]
Round Trip Latency Offset Value MC1 CHA [Auto]
Round Trip Latency Init Value MC1 CHB [Auto]
Round Trip Latency Max Value MC1 CHB [Auto]
Round Trip Latency Offset Value Mode Sign MC1 CHB [-]
Round Trip Latency Offset Value MC1 CHB [Auto]
Round Trip Latency MC0 CHA R0 [Auto]
Round Trip Latency MC0 CHA R1 [Auto]
Round Trip Latency MC0 CHA R2 [0]
Round Trip Latency MC0 CHA R3 [0]
Round Trip Latency MC0 CHA R4 [0]
Round Trip Latency MC0 CHA R5 [0]
Round Trip Latency MC0 CHA R6 [0]
Round Trip Latency MC0 CHA R7 [0]
Round Trip Latency MC0 CHB R0 [Auto]
Round Trip Latency MC0 CHB R1 [Auto]
Round Trip Latency MC0 CHB R2 [0]
Round Trip Latency MC0 CHB R3 [0]
Round Trip Latency MC0 CHB R4 [0]
Round Trip Latency MC0 CHB R5 [0]
Round Trip Latency MC0 CHB R6 [0]
Round Trip Latency MC0 CHB R7 [0]
Round Trip Latency MC1 CHA R0 [Auto]
Round Trip Latency MC1 CHA R1 [Auto]
Round Trip Latency MC1 CHA R2 [0]
Round Trip Latency MC1 CHA R3 [0]
Round Trip Latency MC1 CHA R4 [0]
Round Trip Latency MC1 CHA R5 [0]
Round Trip Latency MC1 CHA R6 [0]
Round Trip Latency MC1 CHA R7 [0]
Round Trip Latency MC1 CHB R0 [Auto]
Round Trip Latency MC1 CHB R1 [Auto]
Round Trip Latency MC1 CHB R2 [0]
Round Trip Latency MC1 CHB R3 [0]
Round Trip Latency MC1 CHB R4 [0]
Round Trip Latency MC1 CHB R5 [0]
Round Trip Latency MC1 CHB R6 [0]
Round Trip Latency MC1 CHB R7 [0]
Early Command Training [Auto]
SenseAmp Offset Training [Auto]
Early ReadMPR Timing Centering 2D [Auto]
Read MPR Training [Auto]
Receive Enable Training [Auto]
Jedec Write Leveling [Auto]
Early Write Time Centering 2D [Auto]
Early Read Time Centering 2D [Auto]
Write Timing Centering 1D [Auto]
Write Voltage Centering 1D [Auto]
Read Timing Centering 1D [Auto]
Read Timing Centering with JR [Auto]
Dimm ODT Training* [Disabled]
Max RTT_WR [ODT Off]
DIMM RON Training* [Disabled]
Write Drive Strength/Equalization 2D* [Auto]
Write Slew Rate Training* [Auto]
Read ODT Training* [Disabled]
Comp Optimization Training [Auto]
Read Equalization Training* [Auto]
Read Amplifier Training* [Auto]
Write Timing Centering 2D [Auto]
Read Timing Centering 2D [Auto]
Command Voltage Centering [Auto]
Early Command Voltage Centering [Auto]
Write Voltage Centering 2D [Auto]
Read Voltage Centering 2D [Auto]
Late Command Training [Auto]
Round Trip Latency [Auto]
Turn Around Timing Training [Auto]
CMD CTL CLK Slew Rate [Auto]
CMD/CTL DS & E 2D [Auto]
Read Voltage Centering 1D [Auto]
TxDqTCO Comp Training* [Auto]
ClkTCO Comp Training* [Auto]
TxDqsTCO Comp Training* [Auto]
VccDLL Bypass Training [Auto]
CMD/CTL Drive Strength Up/Dn 2D [Auto]
DIMM CA ODT Training [Auto]
PanicVttDnLp Training* [Auto]
Read Vref Decap Training* [Auto]
Vddq Training [Disabled]
Duty Cycle Correction Training [Auto]
Periodic DCC [Auto]
Rank Margin Tool Per Bit [Auto]
DIMM DFE Training [Auto]
EARLY DIMM DFE Training [Auto]
Tx Dqs Dcc Training [Auto]
DRAM DCA Training [Auto]
Write Driver Strength Training [Auto]
Rank Margin Tool [Auto]
Memory Test [Auto]
DIMM SPD Alias Test [Auto]
Receive Enable Centering 1D [Auto]
Retrain Margin Check [Auto]
Write Drive Strength Up/Dn independently [Auto]
LPDDR DqDqs Re-Training [Auto]
Margin Check Limit [Disabled]
tRDRD_sg_Training [Auto]
tRDRD_sg_Runtime [16]
tRDRD_dg_Training [Auto]
tRDRD_dg_Runtime [8]
tRDWR_sg [18]
tRDWR_dg [18]
tWRWR_sg [16]
tWRWR_dg [8]
tWRRD_sg [Auto]
tWRRD_dg [Auto]
tRDRD_dr [0]
tRDRD_dd [Auto]
tRDWR_dr [0]
tRDWR_dd [Auto]
tWRWR_dr [0]
tWRWR_dd [Auto]
tWRRD_dr [0]
tWRRD_dd [Auto]
tRPRE [Auto]
tWPRE [Auto]
tWPOST [Auto]
tWRPRE [Auto]
tPRPDEN [Auto]
tRDPDEN [Auto]
tWRPDEN [Auto]
tCPDED [Auto]
tREFIX9 [Auto]
Ref Interval [Auto]
tXPDLL [Auto]
tXP [Auto]
tPPD [Auto]
tCCD_L_tDLLK [Auto]
tZQCAL [Auto]
tZQCS [Auto]
OREF_RI [Auto]
Refresh Watermarks [High]
Refresh Hp Wm [Auto]
Refresh Panic Wm [Auto]
Refresh Abr Release [Auto]
tXSDLL [Auto]
tZQOPER [Auto]
tMOD [Auto]
CounttREFIWhileRefEn [Auto]
HPRefOnMRS [Auto]
SRX Ref Debits [Auto]
RAISE BLK WAIT [Auto]
Ref Stagger En [Auto]
Ref Stagger Mode [Auto]
Disable Stolen Refresh [Auto]
En Ref Type Display [Auto]
Trefipulse Stagger Disable [Auto]
tRPab ext [Auto]
derating ext [Auto]
Allow 2cyc B2B LPDDR [Auto]
tCSH [Auto]
tCSL [Auto]
powerdown Enable [Auto]
idle length [Auto]
raise cke after exit latency [Auto]
powerdown latency [Auto]
powerdown length [Auto]
selfrefresh latency [Auto]
selfrefresh length [Auto]
ckevalid length [Auto]
ckevalid enable [Auto]
idle enable [Auto]
selfrefresh enable [Auto]
Address mirror [Auto]
no gear4 param divide [Auto]
x8 device [Auto]
no gear2 param divide [Auto]
ddr 1dpc split ranks on subch [Auto]
write0 enable [Auto]
MultiCycCmd [Auto]
WCKDiffLowInIdle [Auto]
PBR Disable [Auto]
PBR OOO Dis [Auto]
PBR Disable on hot [Auto]
PBR Exit on Idle Cnt [Auto]
tXSR [Auto]
Dec tCWL [Auto]
Add tCWL [Auto]
Add 1Qclk delay [Auto]
MRC Fast Boot [Enabled]
MCH Full Check [Auto]
Mem Over Clock Fail Count [2]
Training Profile [Auto]
RxDfe [Auto]
Mrc Training Loop Count [2]
DRAM CLK Period [Auto]
Dll_bwsel [Auto]
Controller 0, Channel 0 Control [Enabled]
Controller 0, Channel 1 Control [Enabled]
Controller 1, Channel 0 Control [Enabled]
Controller 1, Channel 1 Control [Enabled]
MC_Vref0 [Auto]
MC_Vref1 [Auto]
MC_Vref2 [Auto]
Fine Granularity Refresh mode [Auto]
SDRAM Density Per Die [Auto]
SDRAM Banks Per Bank Group [Auto]
SDRAM Bank Groups [Auto]
Dynamic Memory Boost [Disabled]
Realtime Memory Frequency [Disabled]
SA GV [Disabled]
Voltage Monitor [Die Sense]
VRM Initialization Check [Enabled]
CPU Input Voltage Load-line Calibration [Auto]
CPU Load-line Calibration [Level 5]
Synch ACDC Loadline with VRM Loadline [Disabled]
CPU Current Capability [Auto]
CPU Current Reporting [Auto]
Core Voltage Suspension [Auto]
CPU VRM Switching Frequency [Auto]
VRM Spread Spectrum [Auto]
CPU Power Duty Control [Auto]
CPU Power Phase Control [Auto]
CPU Power Thermal Control [125]
CPU Core/Cache Boot Voltage [Auto]
CPU Input Boot Voltage [Auto]
PLL Termination Boot Voltage [Auto]
CPU Standby Boot Voltage [Auto]
Memory Controller Boot Voltage [Auto]
CPU Core Auto Voltage Cap [Auto]
CPU Input Auto Voltage Cap [Auto]
Memory Controller Auto Voltage Cap [Auto]
Fast Throttle Threshold [Auto]
Package Temperature Threshold [Auto]
Regulate Frequency by above Threshold [Auto]
IVR Transmitter VDDQ ICCMAX [Auto]
Unlimited ICCMAX [Auto]
CPU Core/Cache Current Limit Max. [Auto]
Long Duration Package Power Limit [300]
Package Power Time Window [Auto]
Short Duration Package Power Limit [320]
Dual Tau Boost [Disabled]
IA AC Load Line [0.14]
IA DC Load Line [Auto]
IA CEP Enable [Disabled]
SA CEP Enable [Disabled]
IA SoC Iccmax Reactive Protector [Auto]
Inverse Temperature Dependency Throttle [Auto]
IA VR Voltage Limit [1500]
CPU SVID Support [Auto]
Cache Dynamic OC Switcher [Auto]
TVB Voltage Optimizations [Enabled]
Enhanced TVB [Enabled]
Overclocking TVB [Boost Until Target]
Max Boost Target in MHz [Auto]
Overclocking TVB Global Temperature Offset Sign [+]
Overclocking TVB Global Temperature Offset Value [Auto]
Offset Mode Sign 1 [-]
V/F Point 1 Offset [0.00100]
Offset Mode Sign 2 [-]
V/F Point 2 Offset [0.00100]
Offset Mode Sign 3 [-]
V/F Point 3 Offset [0.00100]
Offset Mode Sign 4 [-]
V/F Point 4 Offset [0.00100]
Offset Mode Sign 5 [-]
V/F Point 5 Offset [0.00100]
Offset Mode Sign 6 [-]
V/F Point 6 Offset [0.03000]
Offset Mode Sign 7 [-]
V/F Point 7 Offset [0.08000]
Offset Mode Sign 8 [-]
V/F Point 8 Offset [0.08000]
Offset Mode Sign 9 [-]
V/F Point 9 Offset [0.08200]
Offset Mode Sign 10 [-]
V/F Point 10 Offset [0.08500]
Offset Mode Sign 11 [-]
V/F Point 11 Offset [0.09000]
Initial BCLK Frequency [Auto]
Runtime BCLK OC [Auto]
BCLK Amplitude [Auto]
BCLK Slew Rate [Auto]
BCLK Spread Spectrum [Auto]
Initial PCIE Frequency [Auto]
PCIE/DMI Amplitude [Auto]
PCIE/DMI Slew Rate [Auto]
PCIE/DMI Spread Spectrum [Auto]
Cold Boot PCIE Frequency [Auto]
Realtime Memory Timing [Disabled]
SPD Write Disable [TRUE]
PVD Ratio Threshold [Auto]
SA PLL Frequency Override [Auto]
BCLK TSC HW Fixup [Enabled]
Core Ratio Extension Mode [Disabled]
FLL OC mode [Auto]
UnderVolt Protection [Disabled]
Switch Microcode [Current Microcode]
Xtreme Tweaking [Disabled]
Core PLL Voltage [Auto]
GT PLL Voltage [Auto]
Ring PLL Voltage [Auto]
System Agent PLL Voltage [Auto]
Memory Controller PLL Voltage [Auto]
Efficient-core PLL Voltage [Auto]
CPU 1.8V Small Rail [Auto]
PLL Termination Voltage [Auto]
CPU Standby Voltage [Auto]
PCH 1.05V Voltage [Auto]
PCH 0.82V Voltage [Auto]
CPU Input Voltage Reset Voltage [Auto]
Eventual CPU Input Voltage [Auto]
Eventual Memory Controller Voltage [Auto]
Package Temperature Threshold [Auto]
Regulate Frequency by above Threshold [Auto]
Cooler Efficiency Customize [Keep Training]
Cooler Re-evaluation Algorithm [Normal]
Optimism Scale [100]
Ring Down Bin [Auto]
Min. CPU Cache Ratio [Auto]
Max. CPU Cache Ratio [49]
BCLK Aware Adaptive Voltage [Auto]
Actual VRM Core Voltage [Auto]
Global Core SVID Voltage [Auto]
Cache SVID Voltage [Auto]
CPU L2 Voltage [Auto]
CPU System Agent Voltage [Manual Mode]
- CPU System Agent Voltage Override [1.12000]
CPU Input Voltage [Auto]
High DRAM Voltage Mode [Enabled]
DRAM VDD Voltage [1.45000]
DRAM VDDQ Voltage [1.43000]
IVR Transmitter VDDQ Voltage [1.24000]
Memory Controller Voltage [1.21250]
MC Voltage Calculation Voltage Base [Auto]
VDD Calculation Voltage Base [Auto]
PMIC Voltages [Auto]
PCI Express Native Power Management [Enabled]
Native ASPM [Disabled]
DMI Link ASPM Control [Disabled]
ASPM [Auto]
L1 Substates [Disabled]
DMI ASPM [Disabled]
DMI Gen3 ASPM [Disabled]
PEG - ASPM [Disabled]
PCI Express Clock Gating [Enabled]
Hardware Prefetcher [Enabled]
Adjacent Cache Line Prefetch [Enabled]
Intel (VMX) Virtualization Technology [Disabled]
Per P-Core Control [Disabled]
Per E-Core Control [Disabled]
Active Performance Cores [All]
Active Efficient Cores [All]
Hyper-Threading [Enabled]
Hyper-Threading of Core 0 [Enabled]
Hyper-Threading of Core 1 [Enabled]
Hyper-Threading of Core 2 [Enabled]
Hyper-Threading of Core 3 [Enabled]
Hyper-Threading of Core 4 [Enabled]
Hyper-Threading of Core 5 [Enabled]
Hyper-Threading of Core 6 [Enabled]
Hyper-Threading of Core 7 [Enabled]
Total Memory Encryption [Disabled]
Legacy Game Compatibility Mode [Disabled]
Boot performance mode [Auto]
Intel(R) SpeedStep(tm) [Enabled]
Intel(R) Speed Shift Technology [Disabled]
Turbo Mode [Enabled]
Acoustic Noise Mitigation [Disabled]
CPU C-states [Auto]
Thermal Monitor [Enabled]
Dual Tau Boost [Disabled]
VT-d [Disabled]
Memory Remap [Enabled]
Enable VMD controller [Enabled]
Map PCIE Storage under VMD [Disabled]
Map SATA Controller under VMD [Disabled]
M.2_1 Link Speed [Auto]
PCIEX16(G5)_1 Link Speed [Auto]
PCIEX16(G5)_2 Link Speed [Auto]
PCIEX1(G4) Link Speed [Auto]
PCIEX4(G4) Link Speed [Auto]
M.2_2 Link Speed [Auto]
DIMM.2_1 Link Speed [Auto]
DIMM.2_2 Link Speed [Auto]
SATA Controller(s) [Enabled]
Aggressive LPM Support [Disabled]
SMART Self Test [Enabled]
M.2_3 [Enabled]
M.2_3 Hot Plug [Disabled]
SATA6G_1 [Enabled]
SATA6G_1 Hot Plug [Disabled]
SATA6G_2 [Enabled]
SATA6G_2 Hot Plug [Disabled]
SATA6G_3 [Enabled]
SATA6G_3 Hot Plug [Disabled]
SATA6G_4 [Enabled]
SATA6G_4 Hot Plug [Disabled]
PTT [Enable]
Intel(R) Dynamic Tuning Technology [Disabled]
PCIE Tunneling over USB4 [Enabled]
Discrete Thunderbolt(TM) Support [Disabled]
Security Device Support [Enable]
SHA256 PCR Bank [Enabled]
Pending operation [None]
Platform Hierarchy [Enabled]
Storage Hierarchy [Enabled]
Endorsement Hierarchy [Enabled]
Physical Presence Spec Version [1.3]
Disable Block Sid [Disabled]
Password protection of Runtime Variables [Enable]
Above 4G Decoding [Enabled]
Resize BAR Support [Enabled]
SR-IOV Support [Disabled]
Legacy USB Support [Enabled]
XHCI Hand-off [Enabled]
SanDisk [Auto]
LAN_U32G2_1 [Enabled]
U32G1_E5 [Enabled]
U32G1_E6 [Enabled]
U32G1_E7 [Enabled]
U32G1_E8 [Enabled]
U32G2X2_C3 [Enabled]
U32G2_5 [Enabled]
U32G2_6 [Enabled]
U32G2_7 [Enabled]
U32G2_P8 [Enabled]
U32G2X2_C9 [Enabled]
U32G1_E1 [Enabled]
U32G1_E2 [Enabled]
U32G1_E3 [Enabled]
U32G1_E4 [Enabled]
Network Stack [Disabled]
Device [N/A]
Restore AC Power Loss [Power Off]
Max Power Saving [Disabled]
ErP Ready [Disabled]
Power On By PCI-E [Disabled]
Power On By RTC [Disabled]
USB Audio [Enabled]
Intel LAN [Enabled]
USB power delivery in Soft Off state (S5) [Disabled]
Connectivity mode (Wi-Fi & Bluetooth) [Disabled]
When system is in working state [All On]
Q-Code LED Function [POST Code Only]
When system is in sleep, hibernate or soft off states [All On]
M.2_2 Configuration [Auto]
ASMedia USB 3.2 Controller_U32G1_E12 [Enabled]
ASMedia USB 3.2 Controller_U32G1_E34 [Enabled]
GNA Device [Disabled]
ASMedia Storage Controller [Enabled]
Windows Hot-plug Notification [Disabled]
ASPM Support [Disabled]
CPU Temperature [Monitor]
CPU Package Temperature [Monitor]
MotherBoard Temperature [Monitor]
VRM Temperature [Monitor]
Chipset Temperature [Monitor]
T_Sensor Temperature [Monitor]
DIMM.2 Sensor 1 Temperature [Monitor]
DIMM.2 Sensor 2 Temperature [Monitor]
Water In T Sensor Temperature [Monitor]
Water Out T Sensor Temperature [Monitor]
DIMM A1 Temperature [Monitor]
DIMM B1 Temperature [Monitor]
CPU Fan Speed [Monitor]
CPU Optional Fan Speed [Monitor]
Chassis Fan 1 Speed [Monitor]
Chassis Fan 2 Speed [Monitor]
Chassis Fan 3 Speed [Monitor]
Water Pump+ Speed [Monitor]
AIO Pump Speed [Monitor]
Flow Rate [Monitor]
CPU Core Voltage [Monitor]
12V Voltage [Monitor]
5V Voltage [Monitor]
3.3V Voltage [Monitor]
Memory Controller Voltage [Monitor]
CPU Fan Q-Fan Control [DC Mode]
CPU Fan Profile [Standard]
CPU Fan Q-Fan Source [CPU]
CPU Fan Step Up [Level 0]
CPU Fan Step Down [Level 4]
CPU Fan Speed Low Limit [200 RPM]
Chassis Fan 1 Q-Fan Control [Auto Detect]
Chassis Fan 1 Profile [Standard]
Chassis Fan 1 Q-Fan Source [CPU]
Chassis Fan 1 Step Up [Level 0]
Chassis Fan 1 Step Down [Level 0]
Chassis Fan 1 Speed Low Limit [200 RPM]
Chassis Fan 2 Q-Fan Control [PWM Mode]
Chassis Fan 2 Profile [Standard]
Chassis Fan 2 Q-Fan Source [Chipset]
Chassis Fan 2 Step Up [Level 0]
Chassis Fan 2 Step Down [Level 4]
Chassis Fan 2 Speed Low Limit [200 RPM]
Chassis Fan 3 Q-Fan Control [DC Mode]
Chassis Fan 3 Profile [Silent]
Chassis Fan 3 Q-Fan Source [CPU]
Chassis Fan 3 Step Up [Level 0]
Chassis Fan 3 Step Down [Level 0]
Chassis Fan 3 Speed Low Limit [200 RPM]
Water Pump+ Q-Fan Control [PWM Mode]
Water Pump+ Profile [Manual]
Water Pump+ Q-Fan Source [CPU]
Water Pump+ Step Up [Level 0]
Water Pump+ Step Down [Level 4]
Water Pump+ Speed Low Limit [Ignore]
Water Pump+ Point4 Temperature [70]
Water Pump+ Point4 Duty Cycle (%) [100]
Water Pump+ Point3 Temperature [50]
Water Pump+ Point3 Duty Cycle (%) [85]
Water Pump+ Point2 Temperature [40]
Water Pump+ Point2 Duty Cycle (%) [80]
Water Pump+ Point1 Temperature [25]
Water Pump+ Point1 Duty Cycle (%) [60]
AIO Pump Q-Fan Control [Auto Detect]
AIO Pump Profile [Full Speed]
CPU Temperature LED Switch [Enabled]
Launch CSM [Disabled]
OS Type [Other OS]
Secure Boot Mode [Custom]
Fast Boot [Enabled]
Next Boot after AC Power Loss [Fast Boot]
Boot Logo Display [Auto]
POST Delay Time [3 sec]
Bootup NumLock State [On]
Wait For 'F1' If Error [Enabled]
Option ROM Messages [Force BIOS]
Interrupt 19 Capture [Disabled]
AMI Native NVMe Driver Support [Enabled]
Setup Mode [Advanced Mode]
Boot Sector (MBR/GPT) Recovery Policy [Local User Control]
Next Boot Recovery Action [Skip]
BIOS Image Rollback Support [Enabled]
Publish HII Resources [Disabled]
Flexkey [Safe Boot]
Setup Animator [Disabled]
Load from Profile [7]
Profile Name [Test_New]
Save to Profile [1]
DIMM Slot Number [DIMM_A1]
Download & Install ARMOURY CRATE app [Disabled]
Download & Install MyASUS service & app [Disabled]
You're giving away free performance here.
ATC 16, the BIOS value is 12.
ATC 30, the BIOS value is 26.
RRD 8-12
WTR 4-24
4 at 8200MT/s is hard, but OK for 8000MT/s.
That is just extreme:
0.5ohm and a far too droopy curve.
Really it should be 0.62-0.65.
Only later can you lower it slightly.
14th gen is a different substrate with its own throttle system.
It is not the same as 13th gen.
The 13900KS has a nearly identical throttle system, but is limited to only 95°C per core.
14th gen is at 105°C per core, made possible by the improved throttle system.
Hence also the +200MHz boost.
The substrate is different. SA & Ring should behave differently.
Also use Cinebench R15 Extreme and Geekbench for dynamic load testing.
Try a jump of +/- 30mV on VDDQ and it will show itself.
Maybe the team messed with the slopes, causing such a scenario.
If it's VDDQ, the ME update is to blame.
Double-check the curve visually against your old data, to see if something changed there.
Otherwise it can be the slopes causing such behavior.
EDIT:
An ME change can cause SVID behavior changes and ODT changes.
But it will never mess with the slopes, unless the team reworked something (though there is no mention of memory compatibility improvements).
I can't load it fully; the timings cannot be loaded, so something changed there.
I played with CPU VDDQ and SA, but TBH I didn't try to move mem VDDQ; that was always 1.47V.
I will investigate it deeper. I don't think I will flash back or change the BIOS. It should work here too with slight improvements, since TM5 is okay.
I think they must have changed many things since they released the previous BIOS 2 months ago.
Would be nice if we got a real changelog. 😂
IVR VDDQ is far too high.
#1 & #11: heat/voltage issue ~ CPU side
#7: SNR voltage issue ~ CPU to MEM. Mostly VDDQ_CPU.
#0: MC-Link dropout, CPU to MEM
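For reference, the same mapping as a quick lookup (just restating the lines above, nothing beyond them):

# TM5 error numbers -> likely cause, exactly as listed above.
tm5_errors = {
    0:  "MC-Link dropout, CPU to MEM",
    1:  "heat/voltage issue ~ CPU side",
    7:  "SNR voltage issue, CPU to MEM (mostly VDDQ_CPU)",
    11: "heat/voltage issue ~ CPU side",
}
print(tm5_errors[7])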
I tried it. I can't set steps of 5, so for now I have:
VDD mem 1.56
VDDQ mem 1.52
IVR TX 1.40
IMC VDD 1.375 I believe; there it's 15mV steps, I think.
Freeze after 18 min. No errors up to the 18 min.
So with the new BIOS (0904) on the Apex Encore, I had to change my offsets a tad to bring V/F point 11 back to where it was: it changed slightly from needing 0.058 to 0.055. I didn't check the other offsets, but they look the same, I think. My memory OC also changed a bit from the profile I was using: tWTR_S & L, which I had left on auto, last trained to 18/8, and now in order to pass TM5 at all it needed 24/4. Not sure in the end what this BIOS changed or updated, but I assume it was mostly the ME update? Because even reverting back to 0081 I still needed that offset change and the tWTR_S & L change (maybe this was just me tired and testing late/forgetting, but I don't think so).