Hi OM,
On Mon, Jun 20, 2011 at 01:11:02PM +0200, Ole-Morten Duesund wrote:
> Perhaps this might be relevant? From my dmesg:
> [21532581.898890] Initializing CPU#0
> [21532581.902734] Initializing HighMem for node 0 (00000000:00000000)
> [21532581.902740] Memory: 338928k/368640k available (2573k kernel code,
> 29052k reserved, 1394k data, 396k init, 0k highmem)
> [21532581.902746] virtual kernel memory layout:
> [21532581.902747] fixmap : 0xf5556000 - 0xf57ff000 (2724 kB)
> [21532581.902748] pkmap : 0xf5000000 - 0xf5200000 (2048 kB)
> [21532581.902749] vmalloc : 0xd7000000 - 0xf4ffe000 ( 479 MB)
> [21532581.902751] lowmem : 0xc0000000 - 0xd6800000 ( 360 MB)
> [21532581.902752] .init : 0xc13e0000 - 0xc1443000 ( 396 kB)
> [21532581.902753] .data : 0xc1283475 - 0xc13dfeb0 (1394 kB)
> [21532581.902754] .text : 0xc1000000 - 0xc1283475 (2573 kB)
> …
> [21532582.564578] Unpacking initramfs...
> [21532582.582204] Freeing initrd memory: 19440k freed
I looked a bit harder and found some of my VMs that do show the same
thing. In this case, here's one that is configured with 200M but only
sees 192M:
[ 0.000000] Memory: 175980k/204800k available (2573k kernel code, 28100k reserved,
1378k data, 396k init, 0k highmem)
[ 0.000000] virtual kernel memory layout:
[ 0.000000] fixmap : 0xf5556000 - 0xf57ff000 (2724 kB)
[ 0.000000] pkmap : 0xf5000000 - 0xf5200000 (2048 kB)
[ 0.000000] vmalloc : 0xcd000000 - 0xf4ffe000 ( 639 MB)
[ 0.000000] lowmem : 0xc0000000 - 0xcc800000 ( 200 MB)
[ 0.000000] .init : 0xc13dc000 - 0xc143f000 ( 396 kB)
[ 0.000000] .data : 0xc128359d - 0xc13dbeb0 (1378 kB)
[ 0.000000] .text : 0xc1000000 - 0xc128359d (2573 kB)
…
[ 0.006983] Unpacking initramfs...
[ 0.035816] Freeing initrd memory: 20464k freed
So, that seems to hold: 28100k reserved - 20464k freed = 7636k.
In fact, all my Debian squeeze VMs are like this, each missing about 8M.
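
Just to spell the arithmetic out, here's the same subtraction applied to
both sets of figures above (a quick throwaway Python snippet, nothing
clever):

# kB figures copied from the two dmesg extracts above
figures = [
    ("OM's VM",          29052, 19440),  # reserved, initrd freed
    ("my squeeze VM",    28100, 20464),
]

for name, reserved, initrd_freed in figures:
    print("%s: %dk still reserved once the initrd is freed"
          % (name, reserved - initrd_freed))
# -> 9612k and 7636k respectively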
Here's the equivalent from a Debian lenny VM with 256MiB RAM running
a -xen kernel (as opposed to the upstream pvops kernels used in
Debian squeeze and Ubuntu Lucid):
[ 0.004000] Memory: 241008k/270336k available (1846k kernel code, 21008k reserved, 739k
data, 196k init, 0k highmem)
[ 0.004000] virtual kernel memory layout:
[ 0.004000] fixmap : 0xf5555000 - 0xf57ff000 (2728 kB)
[ 0.004000] pkmap : 0xf5000000 - 0xf5200000 (2048 kB)
[ 0.004000] vmalloc : 0xd1000000 - 0xf4ffe000 ( 575 MB)
[ 0.004000] lowmem : 0xc0000000 - 0xd0800000 ( 264 MB)
[ 0.004000] .init : 0xc038f000 - 0xc03c0000 ( 196 kB)
[ 0.004000] .data : 0xc02cdb3b - 0xc03868a0 ( 739 kB)
[ 0.004000] .text : 0xc0100000 - 0xc02cdb3b (1846 kB)
…
[ 0.471727] Freeing initrd memory: 14352k freed
The interesting thing here is that this VM seems to have been given an
extra ~8M on top of what it was configured with: 256MiB = 262,144k, yet
it sees 270,336k.
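
Again, just the numbers:

configured_kb = 256 * 1024   # 256MiB expressed in kB -> 262144k
seen_kb = 270336             # total from the lenny VM's dmesg above
print("extra: %dk" % (seen_kb - configured_kb))
# extra: 8192k, i.e. exactly 8M over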
So I'm wondering whether this has always been the case: perhaps the
older -xen kernels were actually giving you slightly more RAM than
configured, and that behaviour has been reversed in the pvops kernels?
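
If anyone wants to compare their own guests, something along these lines
run inside each guest would do it. It's only a rough sketch: you have to
pass the configured size in MiB yourself, and MemTotal always sits a bit
below the configured figure because of the kernel's own reservations, so
it's mainly useful for comparing a -xen guest against a pvops one side
by side.

#!/usr/bin/env python
# Rough check: compare what the guest kernel ended up with (MemTotal
# from /proc/meminfo, in kB) against the size the VM was configured
# with (passed on the command line, in MiB).
import sys

def memtotal_kb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])
    raise RuntimeError("MemTotal not found in /proc/meminfo")

configured_kb = int(sys.argv[1]) * 1024
delta = memtotal_kb() - configured_kb
print("MemTotal is %dk %s the configured size"
      % (abs(delta), "above" if delta >= 0 else "below"))

Run it as e.g. "python memcheck.py 200" on the squeeze VM above (the
file name is arbitrary).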
I've also noticed in the past that VMs use up slightly more RAM on the
host than they report using; I'd put that down to overhead, but it could
also be this.
Cheers,
Andy