This post has been a long time coming, but recently I started working on some SPARC SuperCluster POCs with customers, and I am getting re-acquainted with my old friends Solaris and SPARC.
If you are a Linux performance guy, you have likely heard of HugePages. Huge pages increase the performance of large-memory machines by requiring fewer TLB entries. I am not going to go into the details of TLBs, but every modern chip supports multiple memory page sizes.
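To make the benefit concrete, here is some back-of-the-envelope shell arithmetic (the 30 GB SGA is just an illustrative number): mapping the same memory with 8K pages versus 2G pages changes the number of translations the TLB has to cover by several orders of magnitude.

```shell
# TLB entries (one per mapped page) needed to cover a 30 GB SGA
sga_bytes=$((30 * 1024 * 1024 * 1024))
pages_8k=$((sga_bytes / (8 * 1024)))                  # with 8K pages
pages_2g=$((sga_bytes / (2 * 1024 * 1024 * 1024)))    # with 2G pages
echo "8K pages: $pages_8k  vs  2G pages: $pages_2g"
```

Nearly four million translations versus fifteen; no TLB holds four million entries, so the small-page case is guaranteed to thrash.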
So how do you get huge pages with Solaris?
Do nothing – it is the DEFAULT with Oracle running on Solaris.
The “use_ism” parameter used to control this, but it has been moved to the “_underbar” category these days since there is really no reason whatsoever to change it. I remember doing tests back in the Solaris 8 days with and without ISM to show the performance differences, and truly it was, and still is, a good thing.
How are ISM/Huge pages used with Oracle running on Solaris?
At first, ISM pages were only used for the SGA, so OLTP-style environments benefited the most from ISM. With Oracle 10 on Solaris, large pages were also allowed for the PGA. So, if you had heavy PGA activity, like a HASH join or sort, you would benefit from larger page sizes as well.
With Solaris, it is easy to see the page sizes of any running process by simply running the “pmap(1)” command.
root@ssc401:~# pmap -xs 23189
Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
0000000100000000 64 64 - - 8K r-x-- oracle
0000000100010000 48 48 - - - r-x-- oracle
000000010001C000 64 64 - - 8K r-x-- oracle
000000010D420000 256 256 64 - 64K rwx-- oracle
000000010D460000 64 64 - - - rwx-- oracle
000000010D540000 2304 2304 2304 - 64K rwx-- [ heap ]
0000000380000000 262144 262144 - 262144 256M rwxsR [ ism shmid=0xf00007e ]
0000000390000000 65536 65536 - 65536 4M rwxsR [ ism shmid=0xf00007e ]
0000000400000000 31457280 31457280 - 31457280 2G rwxsR [ ism shmid=0x600007f ]
0000000B80000000 1572864 1572864 - 1572864 256M rwxsR [ ism shmid=0x600007f ]
0000000BE0000000 196608 196608 - 196608 4M rwxsR [ ism shmid=0x600007f ]
0000000C00000000 24 24 - 24 8K rwxsR [ ism shmid=0x7000000 ]
FFFFFFFF5A800000 16 16 - - 8K r-x-- libodm11.so
FFFFFFFF5A902000 8 8 8 - 8K rwx-- libodm11.so
FFFFFFFF60500000 64 64 - - 64K r-x-- libclsra11.so
FFFFFFFF60510000 24 24 - - - r-x-- libclsra11.so
FFFFFFFF7D1FC000 8 8 - - 8K r-x-- libsched.so.1
FFFFFFFF7D1FE000 8 8 - - 8K r-x-- libdl.so.1
FFFFFFFF7D300000 8 8 8 - 8K rw--- [ anon ]
FFFFFFFF7D400000 8 8 8 - 8K rw--- [ anon ]
FFFFFFFF7D500000 8 8 8 - 8K rw--- [ anon ]
FFFFFFFF7FFE0000 128 128 128 - 64K rw--- [ stack ]
Notice that the “text”, “heap”, “anon”, “stack”, and shared memory segments can all use different page sizes. In this case, the SGA is backed by 2G, 256M, 4M, and 8K ISM pages.
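You can sanity-check those Pgsz columns with a little shell arithmetic: dividing each ISM segment’s size (the Kbytes column, in KB) by its page size shows just how few translations back that ~30 GB SGA.

```shell
# Pages backing each ISM segment from the pmap output above (sizes in KB)
echo $((31457280 / 2097152))   # 30 GB segment on 2G pages  -> 15
echo $((1572864 / 262144))     # 1.5 GB segment on 256M pages -> 6
echo $((196608 / 4096))        # 192 MB segment on 4M pages  -> 48
echo $((24 / 8))               # 24 KB segment on 8K pages   -> 3
```

Seventy-two pages in total for the whole SGA, versus roughly four million if everything were on 8K pages.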
So what about Dynamic ISM? Should I use ISM or DISM?
So, Dynamic ISM was introduced to allow the SGA to be resized online. DISM is really just ISM memory that can be paged. This can be useful when you have HUGE memory machines and want to resize the SGA without taking down the instance. But why is this needed?
- Single-instance availability on HUGE machines that can dynamically add/replace MEMORY. Machines like the E10k/E25k/M9000/M10, etc. all allow you to add components on the fly without restarting Solaris. Let’s say you have a failing memory board. You could “shrink” the SGA so that it fits into the surviving space while you service the faulty board. Also, say you start with a half-populated machine. You can add memory and grow the SGA without stopping the instance.
- Consolidation or cloud-like services. In this environment, you can resize running instances on the fly in order to free up memory for new instances.
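For what it’s worth, the way an instance ends up on DISM rather than ISM is through the init.ora: when “sga_max_size” is set larger than “sga_target”, the instance uses DISM so the SGA can grow and shrink online within that ceiling. A minimal sketch (the sizes here are made up for illustration):

```
sga_max_size=40G   # ceiling the SGA may grow to; triggers DISM on Solaris
sga_target=30G     # current SGA size, resizable online up to sga_max_size
```

Leave “sga_max_size” unset (or equal to “sga_target”) and you stay on plain ISM, which is the default behavior described above.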
Personally, I don’t see a lot of use for DISM with the SuperCluster. If you have RAC, you don’t need DISM for availability reasons, and with cloud/consolidation I think multiple instances within a single server is not the best practice going forward. At one point you needed to use DISM for NUMA features, but that is no longer the case with recent versions of Oracle.