I was under the impression that the OS filled all unused RAM with its pages, thus speeding up the system in general. That is, if it's in memory, it's fetched much faster than from disk. Your opinions...
Richard
There is a System Read Cache.
All OSes have had it.
It was on my SPARC.
When I got my Mac G4 Quad Nostril, I did a checksum
on a file. I casually repeated the command (in Terminal).
The command finished in much less time. Mac OS X had it too.
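You can reproduce that experiment anywhere. A minimal sketch on a
Windows box, in PowerShell, assuming a large test file at the
hypothetical path C:\test\big.iso:

    PS> Measure-Command { Get-FileHash -Algorithm SHA256 C:\test\big.iso }
    # First run: the file is read off the disk (cold cache).
    PS> Measure-Command { Get-FileHash -Algorithm SHA256 C:\test\big.iso }
    # Second run: the blocks come out of the read cache, and the
    # TotalSeconds figure drops sharply.

Pick a file that fits comfortably in free RAM, or the cache can't
hold all of it between runs.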
I spotted the System Read Cache on Win2K next. The beauty of
the Windows implementation at the time is that they took it
seriously. Much of what you would want cached, was cached.
There was a real advantage to the System Read Cache back then.
Today, the System Read Cache is not as trusted as it once was.
In Windows at least, the behavior is not as good as it was in Win2K.
Perhaps they are ageing out the cache on purpose, to avoid stale/errored data.
Perhaps this is related to a distrust of computers that don't have ECC.
Should we trust a sector stored in RAM, if the sector was actually
read one week ago... and the RAM has no ECC? What is the background
error rate of the RAM? Well, for DDR4 or DDR5, the background
error rate is very low.
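A back-of-envelope way to frame it, in PowerShell. The FIT rate here
is a deliberately made-up placeholder (FIT = failures per billion
device-hours), not a measured DDR4/DDR5 figure:

    PS> $gbytes = 16 ; $hours = 168            # 16GB of RAM, one week
    PS> $mbits  = $gbytes * 8 * 1024           # capacity in megabits
    PS> $fitPerMbit = 1                        # placeholder, NOT a datasheet value
    PS> $mbits * $fitPerMbit * $hours / 1e9    # expected bit flips in that week
    0.022020096

Plug in whatever rate you actually believe, and the same formula tells
you how nervous to be about a week-old cached sector.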
The system write cache in Windows is booked RAM: when the cache fills,
Task Manager indicates that some of your memory is in use. You can
watch the memory graph "deflate" as the system write cache flushes out.
But the computer also won't let you use more than about 1/8th of memory
as write cache. There was at least one architectural hole you could
create as a user, where you could cause two processes in Windows
to get into a "death match" competition for RAM, and you could
actually freeze Windows. I watched this happen once, but I was
unable to react fast enough to kill one of the consumers in time :-/
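You can watch the write cache directly, too. The dirty data sits on
the Modified page list, and perfmon exposes that as a counter. A
sketch, with a scratch path of your choosing:

    PS> Get-Counter '\Memory\Modified Page List Bytes'   # baseline
    PS> $buf = New-Object byte[] 512MB                   # half a GB of zeroes
    PS> [IO.File]::WriteAllBytes('C:\test\scratch.bin', $buf)
    PS> Get-Counter '\Memory\Modified Page List Bytes'
    # The counter jumps, then drains back down as the lazy writer
    # flushes the file to disk.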
The System Read Cache is purgeable on demand, so it is the "True Free Lunch"
usage of RAM. Any demand for RAM you might be planning is not held
back by that feature.
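The size of that free lunch is visible as well; the read cache lives
on the Standby list, and the same counter set reports it:

    PS> Get-Counter '\Memory\Standby Cache Normal Priority Bytes'
    PS> Get-Counter '\Memory\Standby Cache Reserve Bytes'
    # Standby pages are repurposed the moment any process asks
    # for more RAM, which is why the lunch is free.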
*******
As a general observation, as Windows evolves, we have less and less
reliable information to go on.
I use Process Explorer as my Task Manager replacement, because at least it
shows me the "Memory Compression" process. It also lists CPU percentages
to two places past the decimal point -- essential for owners of high
core count computers. Otherwise, Task Manager sits there with a
list of "0% for everything", when you can tell from the PC fan
noise that something is burning cycles.
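The arithmetic behind that complaint is simple. On a hypothetical
64-thread machine:

    PS> $logicalCpus = 64
    PS> [math]::Round(100 / $logicalCpus, 2)
    1.56
    # One fully saturated core is only 1.56% of the whole machine.
    # Anything lighter than about a third of a core rounds down
    # to 0% on an integer-only display.

Two decimal places is the difference between seeing the culprit and
seeing a wall of zeroes.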
I use a Kill A Watt meter, connected to my daily-driver computer.
It reads 36W at idle. On days when "spooky stuff" is going on,
the power meter reads 70W. Then it's my job to figure out
"who is in the machine, and what are they doing".
This is how we live in the year 2024.
Nested virtualization is not working, as far as I can tell. I've tried
a couple of times to do a demo, but no luck. The machine could
have a Windows kernel and a Linux kernel. When I enter "wsl --shutdown",
I expect the Linux kernel to be shut off. But is it? Who
can say, when your Task Manager design is "from the last century".
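The closest I've found to an answer from the command line (a sketch
only; it reports distribution state, and I know of no supported way
to prove the utility VM itself is torn down):

    PS> wsl --shutdown
    PS> wsl --list --running   # should report no running distributions
    PS> Get-Process vmmem -ErrorAction SilentlyContinue
    # If vmmem is gone, the WSL2 utility VM has at least handed
    # its RAM back to the host.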
You'll notice there isn't even a decent architecture diagram for our OS.
Show me some rings. Show me Ring 3 and Ring 0. Show me how
nesting works -- or is supposed to work, if the driver is ever
completed.
With Hyper-V, the main OS is actually a guest of sorts: it runs
in the root partition, on top of the hypervisor. That's part
of what an inverted hypervisor brings with it. There *is* a
diagram of the early version of how that works.
But today, the box is just a vast quantity of mystery meat.
When the power meter on my PC reads 70W, I need to understand
where those electrons went. I'm a hardware engineer; fretting
the details is what I do.
Paul