Problem: Oracle is reporting low I/O performance, and the recommendation is to tune the vol_maxio parameter.
Analysis: BigAdmin has a good article on tuning your I/O parameters. One that I'm concerned with is the vol_maxio. According to the article, "VERITAS recommends that this tunable not exceed 20 percent of kernel memory or physical memory (whichever is smaller), and that you match this tunable to the size of your widest stripe." We are currently using concatenated filesystems instead of stripes (I'm unsure why; Symantec helped us set it up that way).
This forum article from sunmanagers.org also references the stripe width, but notes that Oracle limits the maximum I/O size to 1M for release 8i.
If I check the maxio with adb:
# adb -k /dev/ksyms /dev/mem
physmem 1fb99f
maxphys/D
maxphys:
maxphys:        131072
vol_maxio/D
vol_maxio:
vol_maxio:      2048
We find that it's already set to 1M: vol_maxio is expressed in 512-byte sectors, and 2048 sectors × 512 bytes = 1 MB.
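The sector-to-byte conversion above can be sketched as a quick shell check. This is a self-contained sketch: the tunable value is hard-coded from the adb output above rather than read from a live kernel.

```shell
#!/bin/sh
# vol_maxio is expressed in 512-byte sectors; convert to bytes and MB.
# Sample value taken from the adb session above; on a live system you
# would capture this from adb/mdb output instead of hard-coding it.
VOL_MAXIO_SECTORS=2048

BYTES=$((VOL_MAXIO_SECTORS * 512))
MB=$((BYTES / 1024 / 1024))
echo "vol_maxio = ${VOL_MAXIO_SECTORS} sectors = ${BYTES} bytes (${MB} MB)"
```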
Given the limits on maxio specified by Veritas and Oracle, playing with the maxio number won't buy us much if the volume is already optimized. So, let's take a look at the volume.
Almost all the articles I've researched cite benefits from using Direct I/O, but Veritas uses Quick I/O by default (see the mount_vxfs manpage). Miracle Benelux has a good article describing other limitations when specifying Direct I/O, but there's no clarity on the differences between Direct I/O, Quick I/O (QIO), and Concurrent I/O (CIO). Blog O' Matty's article notes that Direct I/O was enabled on his system with the "mincache=direct" and "convosync=direct" mount options, but makes no reference to CIO or QIO.
Let's recap: I started out looking at maxio because Oracle said we weren't getting the I/O they expected. From there I was limited by Oracle and Veritas, so I checked the volume; many found benefits using Direct I/O, while we're using Quick I/O. In addition to changing this, I also need to check the Oracle buffer cache.
Testing: Now, let's check with the DBAs. According to them, Oracle allows I/O sizes larger than 1M, and the maximum, 32M, should be sufficient for Oracle. We set this accordingly in /etc/system on one of our test machines and rebooted:
* Increase the maximum value of IO for Oracle tuning
set vxio:vol_maxio=65535
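After the reboot, the new value can be checked the same way it was read earlier. The sketch below parses adb/mdb-style `vol_maxio/D` output; the output line is hard-coded as a sample so the sketch is self-contained, and 65535 sectors works out to roughly 32 MB, matching the DBAs' maximum.

```shell
#!/bin/sh
# Sample line in the style of the adb/mdb output shown earlier
# (hard-coded here as an assumption; on a live system you would
# capture the real output of a vol_maxio/D query).
OUTPUT="vol_maxio:      65535"

# Extract the decimal value and convert 512-byte sectors to bytes.
SECTORS=$(echo "$OUTPUT" | awk '{print $2}')
BYTES=$((SECTORS * 512))
echo "vol_maxio = ${SECTORS} sectors = ${BYTES} bytes"   # 33553920 bytes ~= 32 MB
```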
We moved a service group over to the node after rebooting, and the DBAs' tests came back with positive results.
I tried to remove QIO by adding "noqio" to the mount options, but Oracle didn't like it. It did, however, enjoy it when I added the "mincache=direct" and "convosync=direct" options.
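For reference, the change amounts to adding those options to the VxFS mount. A minimal sketch, assuming hypothetical device and mount-point names (oradg/oravol and /u01 are placeholders; substitute your own), either as a vfstab entry or applied to a live mount:

```shell
# /etc/vfstab entry (one line; device and mount point are hypothetical):
# /dev/vx/dsk/oradg/oravol /dev/vx/rdsk/oradg/oravol /u01 vxfs 2 yes mincache=direct,convosync=direct

# Or apply to a live mount without editing vfstab:
mount -F vxfs -o remount,mincache=direct,convosync=direct /dev/vx/dsk/oradg/oravol /u01
```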
Conclusion: Oracle I/O was optimized on a VXFS by setting vxio:vol_maxio to 65535 in /etc/system; it was further tuned by adding the "mincache=direct,convosync=direct" option to the mounts.
Why not use ODM (Oracle Disk Manager)?
ODM is used. But ODM is still based on ioctl. Tweaking vxio values will affect ODM (vxio:vol_maxio, vxio:vol_maxioctl, and vxio:vol_maxspecialio).