Create a RAC 10g R2 using VMware Workstation 5 (RHEL 4.4) (P1)

DBA Tips Archive for Oracle

by Jeff Hunter, Sr. Database Administrator


Contents

  1. Overview
  2. Install VMware Workstation 5
  3. Host Machine and Virtual Machine Configuration Overview
  4. Create Two Virtual Machines – (CentOS Enterprise Linux 4.4)
  5. Optional Virtual Machine Configuration Steps
  6. Configure Second Network Interface
  7. Install CentOS to New Virtual Machines
  8. Install VMware Tools – (optional)
  9. Network Configuration
  10. Configure Virtual Shared Storage
  11. Create “oracle” User and Directories
  12. Create Partitions on the Shared Storage Devices
  13. Configure the Linux Servers for Oracle
  14. Configure the “hangcheck-timer” Kernel Module

Overview

One of the most efficient ways to become familiar with Oracle10g Real Application Clusters (RAC) technology is to have access to an actual Oracle10g RAC cluster. In learning this technology, you will soon start to realize the benefits Oracle10g RAC has to offer, like fault tolerance, new levels of security, load balancing, and the ease of upgrading capacity. The problem, though, is the price of the hardware required for a typical production RAC configuration. A small two-node cluster, for example, could run anywhere from US$10,000 to well over US$20,000. This would not even include the heart of a production RAC environment – the shared storage. In most cases, this would be a Storage Area Network (SAN), which generally starts at US$8,000.

For those who simply want to become familiar with Oracle10g RAC, this article provides a very low-cost alternative: configuring an Oracle10g RAC system on my laptop (which is running Windows XP) using VMware Workstation 5. We will use VMware Workstation 5.5.3 to create and configure two virtual machines running CentOS 4.4 Enterprise Linux. The two virtual machines – (to be named vmlinux1 and vmlinux2) – will then be used for an Oracle10g RAC dual-node configuration.

Please keep in mind that we will not be configuring the Oracle RAC environment to use the operating system on the laptop directly (the host environment), but rather utilizing two virtual machines that will be hosted on this laptop. The virtual machines will be created using a product named VMware Workstation (release 5.5.3 for this article) and will host two Red Hat Enterprise Linux operating environments (actually CentOS 4.4) that will be used for our RAC configuration.

This article was written and tested using:

  • VMware Workstation 5.0
  • VMware Workstation 5.5
  • VMware Workstation 5.5.1
  • VMware Workstation 5.5.3

Keep in mind that with VMware Workstation 5.x, when either of the virtual machines is configured for clustering the shared disks, the VMware console will display a warning when starting the virtual machine. This warning can be safely ignored for our 10g RAC installation and configuration.

Each of the virtual machines will virtualize all of the hardware components required for an Oracle10g two-node RAC configuration. For example, each of the virtual machines will be configured with two network interfaces – one for the public network and a second (running on a separate subnet) for the interconnect. With VMware, the process of creating additional hardware components for it to virtualize (e.g. network interfaces) is effortless. The shared storage component, however, will be a bit trickier. As with the network interfaces, we will use VMware to virtualize several hard disks to be used for Oracle’s physical database files (data, online redo logs, control files, archived redo logs). The new hard drives will be created using VMware on the first virtual node (vmlinux1), while the second virtual node (vmlinux2) will be configured to share them.

When we are done, we will have a dual node cluster (each virtual machine will have a single processor), both running Linux (CentOS 4.4 or Red Hat Enterprise Linux 4), Oracle10g Release 2, OCFS2, ASMLib 2.0 with shared disk storage being virtualized through VMware.

This article does assume some familiarity with installing and creating virtual machines using the VMware Workstation 5 software as well as installing and configuring Oracle10g RAC. Details for the installation and virtual machine creation using VMware Workstation will not be provided in this document. I do, however, provide links (next section) to other articles of mine that do provide these detailed instructions.

It is imperative to note that this configuration should never be run in a production environment and that it is not supported by Oracle or any other vendor. In a production environment, Fibre Channel is the technology of choice, since it is the high-speed serial-transfer interface that can connect systems and storage devices in either point-to-point or switched topologies. It is also important to mention the significance of hosting the Oracle instances on multiple physical machines. Hosting the instances on multiple machines provides failover: access to your data will still be possible if one of the nodes in the cluster fails, and all surviving nodes will continue to service client requests. The next benefit is speed: application performance may be enhanced by the ability to break jobs up into separate tasks that can be serviced by multiple Oracle instances running on different nodes. The final benefit is scalability: by adding more nodes to the Oracle RAC cluster, it is possible to complete more jobs by running them simultaneously on separate servers.
Although I wrote this article and performed all testing using CentOS Enterprise Linux 4.4, these instructions should work with minor modifications for Windows 2000, Windows 2003, Solaris 9 (x86 Platform Edition), and Solaris 10 (x86 Platform Edition) with VMware Workstation 5.
As we start to go into the details of the installation, it should be noted that most of the tasks within this document will need to be performed on both servers. I will indicate at the beginning of each section whether the task(s) should be performed on both nodes or on only one.

Install VMware Workstation 5

VMware Workstation 5 can be obtained directly from the VMware website – http://www.vmware.com/download/ws/. A 30-day evaluation copy is available for download directly from the website. If you decide to purchase VMware Workstation, you can purchase it directly from VMware for US$189. For what this product can do, it is well worth the price.

In this article, I will not be providing details for installing VMware Workstation 5 on the Windows XP platform, since it is like installing any other type of software for Windows. If you would, however, like to see the step-by-step details for this type of install, I do have a separate article that provides all of the tasks (including screenshots) for a successful install at Installing VMware Workstation 5.0 – (Windows XP).


Host Machine and Virtual Machine Configuration Overview

Before diving into the instructions for creating the two new virtual machines, let’s first talk about the host machine and operating system that I have VMware Workstation installed on. The table below shows the configuration I will be using for the new virtual machines we will be creating in this article to support Oracle10g RAC.

Note that I have a 300GB external hard drive connected to my laptop. While the VMware Workstation software will be installed on the internal hard drive (C:), I will be using the external hard drive (M:) for all virtual machines and disks to be used for the shared storage.

Host Machine

  • Host Machine Name: melody.idevelopment.info – (192.168.1.106)
  • Host Operating Environment: Windows XP Professional
  • VMware Version: VMware Workstation – Release 5.5.3 (Build 34685)
  • Host Machine: Dell Inspiron 8600 Laptop
  • Memory: 2GB installed (each virtual machine will take 748MB from this 2GB)
  • Internal Hard Drive: 60GB
  • External Hard Drive: 300GB
  • Processor: 2.0 GHz
  • File System: NTFS
Guest Machine – Virtual Machine Configuration #1

  • Guest Operating Environment: CentOS Enterprise Linux 4.4
  • Guest Machine Name: vmlinux1
  • Public Name/IP – (eth0): vmlinux1.idevelopment.info – (192.168.1.111)
  • Interconnect Name/IP – (eth1): vmlinux1-priv.idevelopment.info – (192.168.2.111)
  • Memory: 748MB
  • Hard Drive: 25GB
  • Virtual Machine Location: M:\My Virtual Machines\Workstation 5.5.3\vmlinux1

Guest Machine – Virtual Machine Configuration #2

  • Guest Operating Environment: CentOS Enterprise Linux 4.4
  • Guest Machine Name: vmlinux2
  • Public Name/IP – (eth0): vmlinux2.idevelopment.info – (192.168.1.112)
  • Interconnect Name/IP – (eth1): vmlinux2-priv.idevelopment.info – (192.168.2.112)
  • Memory: 748MB
  • Hard Drive: 25GB
  • Virtual Machine Location: M:\My Virtual Machines\Workstation 5.5.3\vmlinux2
Guest Machine – Virtual Storage for Database / Clusterware Files

  • OCFS2 – Oracle Clusterware Files (Oracle Cluster Registry (OCR) file and Voting Disk) plus the shared SPFILE for the ASM instances. The volume will be mounted on /u02/oradata/orcl.
    M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk1.vmdk – 2GB
  • ASM Volume – Database Files – ORCL:VOL1 (+ORCL_DATA1)
    M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk2.vmdk – 12GB
  • ASM Volume – Database Files – ORCL:VOL2 (+ORCL_DATA1)
    M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk3.vmdk – 12GB
  • ASM Volume – Flash Recovery Area – ORCL:VOL3 (+FLASH_RECOVERY_AREA)
    M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk4.vmdk – 12GB
  • ASM Volume – Flash Recovery Area – ORCL:VOL4 (+FLASH_RECOVERY_AREA)
    M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk5.vmdk – 12GB
In addition to the two IP addresses that will be configured (virtualized by VMware) for each virtual machine, Oracle10g RAC will configure a virtual IP address for each of the virtual machines during the Oracle10g Clusterware installation process. You will not need to create a third network interface for the virtual IP address, as Oracle will bind it to the public network interface (eth0) of each virtual machine. The following table describes the virtual names and IP addresses that I will be using during the installation of Oracle10g Clusterware:

Oracle10g Public Virtual IP (VIP) addresses for eth0

  VMware Virtual Machine        Oracle10g Public Virtual Machine Name   Oracle10g Public Virtual IP Address
  vmlinux1.idevelopment.info    vmlinux1-vip.idevelopment.info          192.168.1.211
  vmlinux2.idevelopment.info    vmlinux2-vip.idevelopment.info          192.168.1.212

Create Two Virtual Machines – (CentOS Enterprise Linux 4.4)

After successfully installing the VMware Workstation software, you should now create two virtual machines to host CentOS Enterprise Linux. The following table describes the values I provided in the “New Virtual Machine Wizard” in order to create the new CentOS Enterprise Linux virtual machine(s). To begin the “New Virtual Machine Wizard”, start the VMware Workstation console and choose “[File] -> [New] -> [Virtual Machine]”.

If you are looking for an article that provides step-by-step details (including screenshots) for creating a new CentOS Enterprise Linux virtual machine, visit Creating a New Virtual Machine – (CentOS Enterprise Linux 4.2).
Guest Machine
Virtual Machine #1
Screen Value
Welcome Click [Next].
Select Appropriate Configuration Select [Custom].
Select a Virtual Machine Format Select [New – Workstation 5].
Select a Guest Operating System Select [Linux] / [Red Hat Enterprise Linux 4].
Name of Virtual Machine Set the virtual machine to [vmlinux1]. Also note that I am creating the new virtual machine on my external hard drive using the directory “M:\My Virtual Machines\Workstation 5.5.3\vmlinux1”.
Memory for the Virtual Machine Oracle10g requires a minimum of 512MB of RAM, although more memory is always better for performance. In my case, I do have the memory to spare on my laptop (2GB) and will be giving each virtual machine 748MB of memory.
Network Type Select [Use bridged networking].
Select I/O Adaptor Types Always select the default option chosen by the VMware installer. For my installation, this was [ATAPI] / [LSI Logic].
Select a Disk Select [Create a new virtual disk].
Select a Disk Type Always select the default option chosen by the VMware installer. For my installation, this was [SCSI].
Specify Disk Capacity Use a size of 25GB. I also checked the box to [Allocate all disk space now].
Specify Disk File You can use any filename here. I typically use the name Disk0.vmdk as in “M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk0.vmdk”.
Guest Machine
Virtual Machine #2
To configure the second virtual machine, follow the same steps provided in creating the first virtual machine (above) while substituting the machine name (and appropriate files / directories) from vmlinux1 to vmlinux2.

Optional Virtual Machine Configuration Steps

Since the new virtual machines will only be used to host Oracle, there are several devices that can be safely removed from the virtual machine. Having the virtual machine virtualize these unnecessary hardware components is a waste of resources that could be better spent running Oracle. I generally remove the floppy drive and sound card. For each virtual machine, select [Edit virtual machine settings] and navigate to the device you want to remove. The following screen shot shows how to remove the audio device:


Figure 1: VMware Virtual Machine Settings


Configure Second Network Interface

When configuring an Oracle RAC environment, each node should include one network interface for the public network and another network interface for private use by Oracle RAC (the interconnect). As mentioned in the Host Machine and Virtual Machine Configuration Overview section, the public network (eth0) will be 192.168.1.0 while the private network (eth1 / interconnect) will be 192.168.2.0. When creating both virtual machines, the VMware wizard will create eth0 – the public network. For each of the new virtual machines, we need to create a second network interface for the interconnect. VMware allows you to effortlessly create (or virtualize) a second network interface. For each virtual machine, select [Edit virtual machine settings] and click the [Add] button. This will bring up the “Add Hardware Wizard”. The following table identifies the values to be used to configure the second network interface:

Add Hardware Wizard – (Network Adapter)
Virtual Machine #1
Screen Value
Welcome Click [Next].
Hardware Type Select [Ethernet Adapter] from the list of hardware types.
Network Type Select [Bridged: Connected directly to the physical network]. Verify that the option for “Connect at power on” is checked and then click the [Finish] button to complete the wizard.
Add Hardware Wizard – (Network Adapter)
Virtual Machine #2
Use the same steps above to create a second network adapter for the second virtual machine.

Install CentOS to New Virtual Machines

Now that we have our two new virtual machines, the next step is to install CentOS Enterprise Linux to each of them. The CentOS Enterprise Linux project takes the Red Hat Enterprise Linux 4 source RPMs and compiles them into a free clone of the Red Hat Enterprise Server 4 product. This provides a free and stable version of the Red Hat Enterprise Linux 4 (AS/ES) operating environment that I can now use for testing different Oracle configurations. CentOS Enterprise Linux comes on four CDs. Use the links (below) to download CentOS Enterprise Linux 4.4. After downloading CentOS, you will then want to burn each of the ISO images to CD.

  CentOS Enterprise Linux

If you are downloading the above ISO files to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software, here are just two (of many) software packages that can be used:

  • UltraISO
  • Magic ISO Maker

To start, insert Disk #1 of CentOS Enterprise Linux into the physical CD-ROM drive and then power up the first virtual machine (vmlinux1). There are several ways to power up the virtual machine:

  • Click “Start this virtual machine”.
  • Click the power on button in the toolbar.

The following table describes the values I provided to install CentOS Enterprise Linux 4.4 to each virtual machine.

If you are looking for an article that provides step-by-step details (including screenshots) for creating a new CentOS Enterprise Linux virtual machine, visit Creating a New Virtual Machine – (CentOS Enterprise Linux 4.2).
Guest Machine
Virtual Machine #1
Screen Value
Start Installation Insert Disk #1 of CentOS Enterprise Linux into the physical CD-ROM. If any autostart windows show up on your Windows workstation/laptop, close them out. Now, power up the first virtual machine (vmlinux1). The CentOS Enterprise Linux installation should start within the virtual machine.
Boot Screen The first screen is the boot screen. At this point, you can add any type of boot options, but in most cases, all you need to do is press [Enter] to continue.
Test CD Media You can choose to verify the CD media in this screen. I know that the ISOs that I burnt to CD were OK, so I typically choose to [Skip] the media check. After checking your media CDs (or if you are like me and skip this process), the installer then starts to probe for your video device, monitor, and mouse. The installer should determine that the video driver to use is VMware. It will detect the monitor as Unknown (which is OK). It then probes and finds the mouse. Once this process is done, it will start the X Server.
Welcome After the installer starts the X Server, you should have the Welcome screen. Click [Next] to continue.
Language Selection The installer should choose the correct language by default.
Keyboard The installer should choose the correct keyboard by default.
Installation Type Select [Custom].
Disk Partitioning Setup Select [Automatic Partitioning]. When prompted with a dialog asking “Would you like to initialize this drive, erasing ALL DATA?”, answer [Yes].
Automatic Partitioning Select [Remove all partitions on this system]. Answer [Yes] when prompted with a warning dialog asking to confirm the delete operation.
Partitioning For most automatic layouts, the defaults should be fine. For example, the space allocated for /boot is always OK at 100MB. The installer will make the swap space equal to twice the amount of RAM configured for this virtual machine. For my example, this would be 748MB x 2 = 1,496MB, which is more than enough for the Oracle install. The remainder is left for the root file system. So for me, this is a nice layout and I will accept the defaults. Starting with RHEL 4, the installer will create the same disk configuration as just noted, but using the Logical Volume Manager (LVM). For example, it will partition the first hard drive (/dev/sda for my configuration) into two partitions – one for the /boot partition (/dev/sda1), with the remainder of the disk dedicated to an LVM volume group named VolGroup00 (/dev/sda2). The LVM volume group (VolGroup00) is then partitioned into two LVM partitions – one for the root file system (/) and another for swap. I basically check that it created at least 1GB of swap. Since I configured the virtual machine with 748MB of RAM, the installer created 1,496MB of swap.
Boot Loader Configuration Keep the default option to use the GRUB boot loader.
Network Configuration I made sure to install (or better yet, virtualize) both NIC interfaces (cards) in each of the Linux machines before starting the operating system installation. This screen should have successfully detected each of the network devices. First, make sure that each of the network devices is checked to [Active on boot]. The installer may choose to not activate eth1. Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for both eth0 and eth1, and that is OK. If possible, try to put eth1 (the interconnect) on a different subnet than eth0 (the public network):

eth0:
– Check OFF the option to [Configure using DHCP]
– Leave the [Activate on boot] checked ON
– IP Address: 192.168.1.111
– Netmask: 255.255.255.0

eth1:
– Check OFF the option to [Configure using DHCP]
– Leave the [Activate on boot] checked ON
– IP Address: 192.168.2.111
– Netmask: 255.255.255.0

Continue by setting your hostname manually. I used “vmlinux1” for the first node and “vmlinux2” for the second. Finish this dialog off by supplying your gateway and DNS servers.

Firewall Configuration Make sure to select [No firewall]. You may be prompted with a warning dialog about not setting the firewall. If this occurs, simply hit [Proceed] to continue.
Additional Language Support Nothing should need to be changed here.
Time Zone Selection Select your time zone.
Set Root Password Set your root password.
Package Group Selection NOTE: With some RHEL 4 distributions, you will not get the [Package Group Selection] screen by default. Instead, you are asked to simply [Install default software packages] or [Customize software packages to be installed]. Select the option to [Customize software packages to be installed]. This will then bring up the [Package Group Selection] screen. This is where you pick the packages to install. If you simply scroll down to the [Miscellaneous] section and select [Everything], this will install all packages. To simplify the installation, this is the option I typically choose. Doing this, you will get everything required for Oracle, but you will also get many packages that are not necessary for the Oracle install. Having these unwanted packages does not keep me up at night. If you don’t want to install everything, you can choose just those packages that are needed for Oracle. First, ensure that the [Kernel Development Libraries] and the [Development Tools] packages are selected. You must have these packages for Oracle to install.

If you will be installing Oracle9i or Oracle10g, then you will also need to select the [Legacy Software Development Libraries]. Oracle9i and Oracle10g need the older versions of gcc to compile, and these are included in the legacy package.

About to Install We are now ready to start the installation process. Click the [Next] button to start the installation. During the installation process, you will be asked to switch disks to Disk #2, Disk #3, and then Disk #4. Click [Continue] to start the installation process. Note that with CentOS 4.2 and CentOS 4.4, the installer will ask to switch to Disk #2, Disk #3, Disk #4, Disk #1, and then back to Disk #4.
Installation Complete At this point, the installation is complete. The CD will be ejected from the CD-ROM and you are asked to [Exit] and reboot the system.
Post Installation Wizard After the virtual machine is rebooted, you will be presented with a post installation wizard that allows you to make final configuration settings. Nothing really exciting here other than setting the Date/Time and Display settings.
Guest Machine
Virtual Machine #2
To configure the second virtual machine, follow the same steps provided to install CentOS Enterprise Linux (above) while substituting the following:

  • Node name: vmlinux2
  • Public IP Address (eth0): 192.168.1.112
  • Interconnect IP Address (eth1): 192.168.2.112
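Once both installs are complete, it is worth a quick sanity check (run as root on each node) that the interfaces actually carry the addresses entered above. This is a hedged sketch assuming the standard CentOS 4 tooling (`hostname`, `ifconfig`):

```shell
# Post-install network sanity check (run on each node as root).
hostname                           # should print vmlinux1 or vmlinux2
ifconfig eth0 | grep 'inet addr'   # public network:  192.168.1.111 / .112
ifconfig eth1 | grep 'inet addr'   # interconnect:    192.168.2.111 / .112
```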

Install VMware Tools – (optional)

Although this is an optional step, you really should install the VMware Tools for each new virtual machine. On a Linux guest, you can install VMware Tools within X or from the command line. Both options will be described in this section.

Installing VMware Tools within X

To install VMware Tools from X with the RPM installer:

  1. Choose [VM -> Install VMware Tools]. The guest operating system mounts the VMware Tools installation virtual CD.
  2. Double-click the VMware Tools CD icon on the desktop. NOTE: In some Linux distributions, the VMware Tools CD icon may fail to appear when you install VMware Tools within an X windows session on a guest. In this case, you should continue installing VMware Tools as described in “Installing VMware Tools from the Command Line with the Tar Installer”, beginning with step 3.
  3. Double-click the RPM installer in the root of the CD-ROM. (i.e. VMwareTools-5.5.3-34685.i386.rpm)
  4. Enter the root password (if prompted).
  5. Click [Continue]. The installer prepares the packages.
  6. Click [Continue] when the installer presents a dialog box saying “Completed System Preparation“. A dialog appears for Updating system, with a progress bar. When the installer is done, VMware Tools are installed. There is no confirmation or finish button.
  7. In an X terminal, as root (su -), configure VMware Tools.
    # vmware-config-tools.pl

    Respond to any questions the installer displays on the screen. At the end of the configuration process, the program asks for the new screen resolution. You should pick the same screen resolution you selected during the CentOS Enterprise Linux install. I used option 2 (“800×600”). NOTE: Be sure to respond [yes] if the installer offers to run the configuration program.

  8. After the configuration is complete, you must now reboot the virtual machine.
    # init 6
  9. After the virtual machine is rebooted and you log back in, you will notice that you no longer have to hit Ctrl-Alt to move between the virtual machine and the host operating system. Also, you will see that the mouse works more smoothly.
  10. Once you have verified the VMware Tools is installed and configured correctly, unmount the VMware Tools installation virtual CD. Choose [VM -> Cancel VMware Tools Install].

Installing VMware Tools from the Command Line with the Tar Installer

The first steps are performed on the host, within Workstation menus:

  1. Power on the virtual machine.
  2. After the guest operating system has started, prepare your virtual machine to install VMware Tools. Choose [VM -> Install VMware Tools]. The remaining steps take place inside the virtual machine.
  3. As root (su -), mount the VMware Tools virtual CD-ROM image, change to a working directory (for example, /tmp), uncompress the installer, then unmount the CD-ROM image. NOTE: Some Linux distributions automatically mount CD-ROMs. If your distribution uses automounting, do not use the mount and umount commands described in this section. You still must untar the VMware Tools installer to /tmp.
    # cd /tmp
    # mount -r /dev/cdrom /mnt

    NOTE: If you have a previous installation, delete the previous vmware-distrib directory before installing. The default location of this directory is /tmp/vmware-tools-distrib.

  4. Untar the VMware Tools tar file:
    # tar -zxf /mnt/VMwareTools-5.5.3-34685.tar.gz
    # cd /tmp/vmware-tools-distrib
    # umount /mnt

    NOTE: If you attempt to install a tar installation over an rpm installation – or the reverse – the installer detects the previous installation and must convert the installer database format before continuing.

  5. Run the VMware Tools tar installer:
    # cd /tmp/vmware-tools-distrib
    # ./vmware-install.pl

    Respond to the configuration questions on the screen. When the installation process begins, you can simply accept the default values for the first nine questions. Press [Enter] to accept the default value. At the end of the configuration process, the program asks for the new screen resolution. You should pick the same screen resolution you selected during the CentOS Enterprise Linux install. I used option 2 (“800×600”).

  6. After the configuration is complete, you must now reboot the virtual machine.
    # init 6
  7. After the virtual machine is rebooted and you log back in, you will notice that you no longer have to hit Ctrl-Alt to move between the virtual machine and the host operating system. Also, you will see that the mouse works more smoothly.
  8. Once you have verified the VMware Tools is installed and configured correctly, unmount the VMware Tools installation virtual CD. Choose [VM -> Cancel VMware Tools Install].

Network Configuration

Perform the following network configuration on all nodes in the cluster!
Although we configured several of the network settings during the installation of CentOS Enterprise Linux, it is important to not skip this section as it contains critical steps that are required for a successful RAC environment.

Introduction to Network Settings

During the Linux O/S install we already configured the IP address and host name for each of the nodes. We now need to configure the /etc/hosts file as well as adjust several of the network settings for the interconnect. I also include instructions for enabling Telnet and FTP services. Each node should have one static IP address for the public network and one static IP address for the private cluster interconnect. The private interconnect should only be used by Oracle to transfer Cluster Manager and Cache Fusion related data. Note that Oracle does not support using the public network interface for the interconnect. You must have one network interface for the public network and another network interface for the private interconnect. For a production RAC implementation, the interconnect should be at least gigabit (or better) and used only by Oracle.

I even provide instructions on how to enable root logins for both Telnet and FTP. This is an optional step. Enabling root logins for Telnet and FTP should never be configured for a production environment!

Enabling Telnet and FTP Services

Linux is configured to run the Telnet and FTP server, but by default, these services are not enabled. To enable the telnet service, login to the server as the root user account and run the following commands:

# chkconfig telnet on
# service xinetd reload
Reloading configuration: [  OK  ]
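Under the covers, `chkconfig telnet on` works by flipping the `disable` flag in the xinetd service file. The following sketch mimics that edit on a scratch copy so the effect can be seen anywhere; on the real server, chkconfig performs this edit on /etc/xinetd.d/telnet itself:

```shell
#!/bin/sh
# Demonstrate what "chkconfig telnet on" does to the xinetd service
# file, using a scratch copy instead of /etc/xinetd.d/telnet.
CONF=/tmp/telnet.demo
printf 'service telnet\n{\n\tsocket_type = stream\n\tdisable = yes\n}\n' > "$CONF"

# chkconfig's effective edit: turn "disable = yes" into "disable = no"
sed -i 's/disable[[:space:]]*=[[:space:]]*yes/disable = no/' "$CONF"

grep 'disable' "$CONF"    # the service is now enabled for xinetd
```

After the real edit, reloading xinetd (`service xinetd reload`, as above) makes it accept telnet connections.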

Starting with the Red Hat Enterprise Linux 3.0 release (and in CentOS Enterprise Linux), the FTP server (wu-ftpd) is no longer available with xinetd. It has been replaced with vsftpd, which can be started from /etc/init.d/vsftpd as in the following:

# /etc/init.d/vsftpd start
Starting vsftpd for vsftpd:         [ OK ]

If you want the vsftpd service to start and stop when recycling (rebooting) the machine, you can create the following symbolic links:

# ln -s /etc/init.d/vsftpd /etc/rc3.d/S56vsftpd
# ln -s /etc/init.d/vsftpd /etc/rc4.d/S56vsftpd
# ln -s /etc/init.d/vsftpd /etc/rc5.d/S56vsftpd
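Equivalently, on a stock Red Hat-style system you can let chkconfig create and manage these run-level links rather than making the symlinks by hand (a hedged alternative; it produces the same S-links under /etc/rc3.d, /etc/rc4.d, and /etc/rc5.d):

```shell
# Let chkconfig manage the run-level links for vsftpd.
chkconfig --add vsftpd            # register the init script if needed
chkconfig --level 345 vsftpd on   # enable at run levels 3, 4 and 5
chkconfig --list vsftpd           # verify that levels 3, 4, 5 show "on"
```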

Allowing Root Logins to Telnet and FTP Services

Now, before getting into the details of how to configure Red Hat Linux for root logins, keep in mind that this is VERY BAD security. Make sure that you NEVER configure your production servers for this type of login.

Configure Telnet for root logins

Simply edit the file /etc/securetty and add the following to the end of the file:

pts/0
pts/1
pts/2
pts/3
pts/4
pts/5
pts/6
pts/7
pts/8
pts/9

This will allow up to 10 telnet sessions to the server as root.

Configure FTP for root logins

Edit the files /etc/vsftpd.ftpusers and /etc/vsftpd.user_list and remove the ‘root’ line from each file.
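The same edit can be scripted with `sed`. The sketch below works on scratch copies in /tmp so it can be tried safely; on the real server you would point the loop at /etc/vsftpd.ftpusers and /etc/vsftpd.user_list themselves:

```shell
#!/bin/sh
# Remove the "root" entry from (copies of) the vsftpd deny lists.
for f in /tmp/vsftpd.ftpusers /tmp/vsftpd.user_list; do
    printf 'root\nbin\ndaemon\n' > "$f"   # stand-in file contents
    sed -i '/^root$/d' "$f"               # delete only the root line
    grep -q '^root$' "$f" || echo "root removed from $f"
done
```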

Configuring Public and Private Network

In our two-node example, we need to configure the network on both nodes for access to the public network as well as their private interconnect. The easiest way to configure network settings in Red Hat Linux is with the Network Configuration program. This application can be started from the command line as the “root” user account as follows:

# su -
# /usr/bin/system-config-network &
Do not use DHCP naming for the public IP address or the interconnects – we need static IP addresses!

Using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file. Both of these tasks can be completed using the Network Configuration GUI. Notice that the /etc/hosts entries are the same for both nodes.

Our example configuration will use the following settings:

Oracle RAC Node 1 – (vmlinux1)

Device  IP Address     Subnet         Gateway      Purpose
eth0    192.168.1.111  255.255.255.0  192.168.1.1  Connects vmlinux1 to the public network
eth1    192.168.2.111  255.255.255.0               Connects vmlinux1 (interconnect) to vmlinux2 (vmlinux2-priv)
/etc/hosts
127.0.0.1        localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.111    vmlinux1
192.168.1.112    vmlinux2

# Private Interconnect - (eth1)
192.168.2.111    vmlinux1-priv
192.168.2.112    vmlinux2-priv

# Public Virtual IP (VIP) addresses for - (eth0)
192.168.1.211    vmlinux1-vip
192.168.1.212    vmlinux2-vip
Oracle RAC Node 2 – (vmlinux2)

Device  IP Address     Subnet         Gateway      Purpose
eth0    192.168.1.112  255.255.255.0  192.168.1.1  Connects vmlinux2 to the public network
eth1    192.168.2.112  255.255.255.0               Connects vmlinux2 (interconnect) to vmlinux1 (vmlinux1-priv)
/etc/hosts
127.0.0.1        localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.111    vmlinux1
192.168.1.112    vmlinux2

# Private Interconnect - (eth1)
192.168.2.111    vmlinux1-priv
192.168.2.112    vmlinux2-priv

# Public Virtual IP (VIP) addresses for - (eth0)
192.168.1.211    vmlinux1-vip
192.168.1.212    vmlinux2-vip
Note that the virtual IP addresses only need to be defined in the /etc/hosts file (or your DNS) for both Oracle RAC nodes. The public virtual IP addresses will be configured automatically by Oracle when you run the Oracle Universal Installer, which starts Oracle’s Virtual Internet Protocol Configuration Assistant (VIPCA). All virtual IP addresses will be activated when the srvctl start nodeapps -n <node_name> command is run. Although I am getting ahead of myself, this is the Host Name/IP Address that will be configured in the client(s) tnsnames.ora file for each Oracle Net Service Name. All of this will be explained much later in this article!
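Behind the scenes, the Network Configuration tool simply writes the interface files under /etc/sysconfig/network-scripts. As a sketch of what the resulting static configuration for eth0 on vmlinux1 might look like (values taken from the table above; the exact set of lines the tool writes, such as a HWADDR entry, will vary by machine):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 - (vmlinux1, illustrative)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.111
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
```

A similar file (ifcfg-eth1, without the GATEWAY line) covers the private interconnect.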

In the screen shots below, only node 1 (vmlinux1) is shown. Be sure to apply all the proper network settings to both nodes!

Figure 2: Network Configuration Screen – Node 1 (vmlinux1)

Figure 3: Ethernet Device Screen – eth0 (vmlinux1)

Figure 4: Ethernet Device Screen – eth1 (vmlinux1)

Figure 5: Network Configuration Screen – /etc/hosts (vmlinux1)
Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from vmlinux1:

$ /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:0C:29:07:E6:0B
          inet addr:192.168.1.111  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe07:e60b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:170 errors:0 dropped:0 overruns:0 frame:0
          TX packets:146 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:14360 (14.0 KiB)  TX bytes:11875 (11.5 KiB)
          Interrupt:185 Base address:0x1400

eth1      Link encap:Ethernet  HWaddr 00:0C:29:07:E6:15
          inet addr:192.168.2.111  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe07:e615/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:197 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:14618 (14.2 KiB)  TX bytes:1386 (1.3 KiB)
          Interrupt:169 Base address:0x1480

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1962 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1962 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3226318 (3.0 MiB)  TX bytes:3226318 (3.0 MiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

About Virtual IP

Why do we have a Virtual IP (VIP) in 10g? Why does it just return a dead connection when its primary node fails? It is all about availability of the application. When a node fails, the VIP associated with it is supposed to be automatically failed over to some other node. When this occurs, two things happen.

  1. The new node re-arps the world indicating a new MAC address for the address. For directly connected clients, this usually causes them to see errors on their connections to the old address.
  2. Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately.

This means that when the client issues SQL to the node that is now down, or traverses the address list while connecting, rather than waiting on a very long TCP/IP time-out (~10 minutes), the client receives a TCP reset. In the case of SQL, this is ORA-3113. In the case of connect, the next address in tnsnames is used.

Going one step further is making use of Transparent Application Failover (TAF). With TAF successfully configured, it is possible to avoid ORA-3113 errors altogether!

Without using VIPs, clients connected to a node that died will often wait a 10 minute TCP timeout period before getting an error. As a result, you don’t really have a good HA solution without using VIPs.

Source – Metalink: “RAC Frequently Asked Questions” (Note:220970.1)
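To make the VIP discussion concrete, this design leads to a client-side tnsnames.ora entry along the following lines. This is a sketch only – the alias name (ORCLTEST), service name, and TAF parameter values below are illustrative, not the final configuration from this article:

```
ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmlinux2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
    )
  )
```

Note that the HOST entries reference the virtual IP host names, not the physical node names – this is what lets the client receive an immediate TCP reset and move to the next address when a node dies.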

Make sure RAC node name is not listed in loopback address

Ensure that the node names (vmlinux1 or vmlinux2) are not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:

    127.0.0.1        vmlinux1 localhost.localdomain localhost

it will need to be removed as shown below:

    127.0.0.1        localhost.localdomain localhost
If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation

Adjusting Network Settings

With Oracle 9.2.0.1 and onwards, Oracle makes use of UDP as the default protocol on Linux for inter-process communication (IPC), such as Cache Fusion and Cluster Manager buffer transfers between instances within the RAC cluster. Oracle strongly suggests adjusting the default and maximum send buffer size (SO_SNDBUF socket option) to 256KB, and the default and maximum receive buffer size (SO_RCVBUF socket option) to 256KB.

The receive buffers are used by TCP and UDP to hold received data until it is read by the application. A TCP receive buffer cannot overflow because the peer is not allowed to send data beyond the advertised window size. UDP, however, has no such flow control: datagrams that do not fit in the socket receive buffer are discarded, so a fast sender can overwhelm a slow receiver.

The default and maximum window size can be changed in the /proc file system without a reboot:

# su - root

# sysctl -w net.core.rmem_default=262144
net.core.rmem_default = 262144

# sysctl -w net.core.wmem_default=262144
net.core.wmem_default = 262144

# sysctl -w net.core.rmem_max=262144
net.core.rmem_max = 262144

# sysctl -w net.core.wmem_max=262144
net.core.wmem_max = 262144

The above commands made the changes to the currently running O/S. You should now make these changes permanent (applied on each reboot) by adding the following lines to the /etc/sysctl.conf file for each node in your RAC cluster:

# Default setting in bytes of the socket receive buffer
net.core.rmem_default=262144

# Default setting in bytes of the socket send buffer
net.core.wmem_default=262144

# Maximum socket receive buffer size which may be set by using
# the SO_RCVBUF socket option
net.core.rmem_max=262144

# Maximum socket send buffer size which may be set by using
# the SO_SNDBUF socket option
net.core.wmem_max=262144
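As a quick sanity check, the four values can be read back straight from /proc at any time (a sketch; no reboot is needed):

```shell
# Read the four socket buffer settings back from /proc to confirm them
for p in rmem_default wmem_default rmem_max wmem_max; do
    printf 'net.core.%s = %s\n' "$p" "$(cat /proc/sys/net/core/$p)"
done
```

Each line should report 262144 once the settings above have been applied.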

Check and turn off UDP ICMP rejections:

During the Linux installation process, I chose not to configure the firewall option. (By default, the option to configure a firewall is selected by the installer.) This has burned me several times, so I like to double-check that the firewall option is not configured and to ensure UDP ICMP filtering is turned off. If UDP ICMP is blocked or rejected by the firewall, the Oracle Clusterware software will crash after several minutes of running. When the Oracle Clusterware process fails, you will have something similar to the following in the <machine_name>_evmocr.log file:

08/29/2005 22:17:19
oac_init:2: Could not connect to server, clsc retcode = 9
08/29/2005 22:17:19
a_init:12!: Client init unsuccessful : [32]
ibctx:1:ERROR: INVALID FORMAT
proprinit:problem reading the bootblock or superbloc 22

When experiencing this type of error, the solution is to remove the UDP ICMP (iptables) rejection rule – or to simply turn the firewall option off. The Oracle Clusterware software will then start to operate normally and not crash. The following commands should be executed as the root user account:

  1. Check to ensure that the firewall option is turned off. If the firewall option is stopped (like it is in my example below) you do not have to proceed with the following steps.
    # /etc/rc.d/init.d/iptables status
    Firewall is stopped.
  2. If the firewall option is operating you will need to first manually disable UDP ICMP rejections:
    # /etc/rc.d/init.d/iptables stop
    
    Flushing firewall rules: [  OK  ]
    Setting chains to policy ACCEPT: filter [  OK  ]
    Unloading iptables modules: [  OK  ]
  3. Then, to ensure UDP ICMP rejections stay turned off after the next server reboot (they should always be off):
    # chkconfig iptables off

Configure Virtual Shared Storage

At this point, we have two virtual machines running Linux for our two node Oracle RAC environment. In this section, we get to take care of one of the most essential tasks in this article – configuring the shared storage for the RAC instances. First, ensure that both virtual machines are powered down. If either of the virtual machines is started, click in the virtual machine and select [Start] - [Turn off computer] - [Turn off].
Create Initial Virtual Disks – (from first node)

After both virtual machines are powered down, we start with the first virtual machine (vmlinux1) and use the "Add Hardware Wizard" to create five new virtual (SCSI) hard drives:

New Virtual Hard Drives
Virtual Machine #1
Virtual Hard Drive Size
M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk1.vmdk 2GB
M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk2.vmdk 12GB
M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk3.vmdk 12GB
M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk4.vmdk 12GB
M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk5.vmdk 12GB

The following table identifies the values to be used to configure the first virtual (SCSI) hard drive using the “Add Hardware Wizard”. Using vmlinux1, select the [Edit virtual machine settings] option in the VMware software console. Then click the [Add] button to bring up the “Add Hardware Wizard”. This process will need to be repeated four additional times to create all five virtual (SCSI) hard drives listed above:

Add Hardware Wizard – (SCSI Hard Disk)
Virtual Machine #1
Screen Value
Welcome Click [Next].
Hardware Type Select [Hard Disk] from the list of hardware types.
Select a Disk Select [Create a new virtual disk].
Select a Disk Type Select the [SCSI] option for the virtual disk type even if it is not identified as the “Recommended” drive by the VMware installer.
Specify Disk Capacity Specify [2GB] for the disk size. Also, check the option to [Allocate all disk space now].
Specify Disk File For the first disk, use the file name [Disk1.vmdk]. For my configuration, the file will be created in “M:\My Virtual Machines\Workstation 5.5.3\vmlinux1” by default.

Modify VMware Configuration File

In the above section, we created five (SCSI) virtual hard drives to be used as shared storage for our Oracle RAC environment. In this section, we will accomplish the following two tasks:

  1. Modify the VMware Configuration on vmlinux1 to enable disk sharing for the five (SCSI) virtual hard drives. We will also be creating (virtualizing) a second SCSI channel (HBA) for the five new (SCSI) virtual hard drives.
  2. Modify the VMware Configuration on vmlinux2 to identify and enable disk sharing for the five (SCSI) virtual hard drives. We will also be creating (virtualizing) a second SCSI channel (HBA) for the five new (SCSI) virtual hard drives.

Each of the two virtual machines we created has a "VMware Configuration File". For my environment, these files are located at:

M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Red Hat Enterprise Linux 4.vmx
M:\My Virtual Machines\Workstation 5.5.3\vmlinux2\Red Hat Enterprise Linux 4.vmx

With VMware Workstation 5.5 and higher, the name of the VMware configuration file has changed from rhel4.vmx to Red Hat Enterprise Linux 4.vmx.
The configuration file for vmlinux1 will already contain configuration information for the five new SCSI virtual hard disks:

...
scsi0:1.present = "TRUE"
scsi0:1.fileName = "Disk1.vmdk"
scsi0:2.present = "TRUE"
scsi0:2.fileName = "Disk2.vmdk"
scsi0:3.present = "TRUE"
scsi0:3.fileName = "Disk3.vmdk"
scsi0:4.present = "TRUE"
scsi0:4.fileName = "Disk4.vmdk"
scsi0:5.present = "TRUE"
scsi0:5.fileName = "Disk5.vmdk"
...

(The configuration file for vmlinux2 obviously will not at this time!) The configuration information for the five new hard disks (on vmlinux1) should be removed and replaced with the configuration information shown below.

This configuration data should be inserted in the VMware Configuration File (Red Hat Enterprise Linux 4.vmx) for both virtual machines. I generally append this text to the end of the configuration file for both virtual machines:

Modify VMware Configuration File to Configure Disk Sharing
Virtual Machine #1 and Virtual Machine #2
#
# ----------------------------------------------------------------
# SHARED DISK SECTION - (BEGIN)
# ----------------------------------------------------------------
# -  The goal in meeting the hardware requirements is to have a
#    shared storage for the two nodes. The way to achieve this in
#    VMware is the creation of a NEW SCSI BUS. It has to be of
#    type "virtual" and we must have the disk.locking = "false"
#    option.
# -  Just dataCacheMaxSize = "0" should be sufficient with the
#    diskLib.* parameters, although I include all parameters for
#    documentation purposes.
# -  maxUnsyncedWrites should matter for sparse disks only, and
#    I certainly do not recommend using sparse disks for
#    clustering.
# -  dataCacheMaxSize=0 should disable cache size completely, so
#    other three dataCache options should do nothing (no harm,
#    but nothing good either).
# ----------------------------------------------------------------
#

diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"

disk.locking = "false"

# ----------------------------------------------------------------
#   Create one HBA
# ----------------------------------------------------------------

scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1.virtualDev = "lsilogic"

# ----------------------------------------------------------------
#   Create virtual SCSI disks on single HBA
# ----------------------------------------------------------------

scsi1:0.present = "TRUE"
scsi1:0.fileName = "M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk1.vmdk"
scsi1:0.redo = ""
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"

scsi1:1.present = "TRUE"
scsi1:1.fileName = "M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk2.vmdk"
scsi1:1.redo = ""
scsi1:1.mode = "independent-persistent"
scsi1:1.deviceType = "disk"

scsi1:2.present = "TRUE"
scsi1:2.fileName = "M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk3.vmdk"
scsi1:2.redo = ""
scsi1:2.mode = "independent-persistent"
scsi1:2.deviceType = "disk"

scsi1:3.present = "TRUE"
scsi1:3.fileName = "M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk4.vmdk"
scsi1:3.redo = ""
scsi1:3.mode = "independent-persistent"
scsi1:3.deviceType = "disk"

scsi1:4.present = "TRUE"
scsi1:4.fileName = "M:\My Virtual Machines\Workstation 5.5.3\vmlinux1\Disk5.vmdk"
scsi1:4.redo = ""
scsi1:4.mode = "independent-persistent"
scsi1:4.deviceType = "disk"

#
# ----------------------------------------------------------------
# SHARED DISK SECTION - (END)
# ----------------------------------------------------------------
#

Power On Both Virtual Machines

After making the above changes, exit from the VMware Console (File -> Exit) then power on both of the virtual machines one at a time, starting with vmlinux1. With VMware Workstation 5.5 and higher, you will receive the error "Clustering is not supported for VMware Workstation. This setting will be ignored." upon starting each virtual machine:

This warning can be safely ignored. Acknowledge the dialog by clicking [OK].

During the Linux boot process (for all versions of VMware Workstation), the O/S will detect the new SCSI adaptor as an “LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI“. When prompted, hit any key to continue.

This will bring up the [Hardware Added] screen for the LSI Logic Ultra320 SCSI adaptor. Select the [Configure] button to register the new SCSI adaptor.


Create oracle User and Directories

Perform the following tasks on all nodes in the cluster!
I will be using the Oracle Cluster File System, Release 2 (OCFS2) to store the files required to be shared for the Oracle Clusterware software. When using OCFS2, the UID of the UNIX user “oracle” and GID of the UNIX group “dba” must be the same on all machines in the cluster. If either the UID or GID are different, the files on the OCFS2 file system will show up as “unowned” or may even be owned by a different user. For this article, I will use 175 for the “oracle” UID and 115 for the “dba” GID.
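A quick way to confirm the IDs match is to print them on each node and compare the output (a sketch; it reports "missing" on a machine where the user or group has not been created yet):

```shell
# Print the oracle UID and dba GID so they can be compared across nodes
uid=$(id -u oracle 2>/dev/null)
gid=$(getent group dba | cut -d: -f3)
echo "oracle UID=${uid:-missing}  dba GID=${gid:-missing}"
```

Run this on both vmlinux1 and vmlinux2 after the accounts are created; the two lines of output should be identical (UID=175, GID=115 for this article).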

Create Group and User for Oracle

Let's continue this example by creating the UNIX dba group and oracle user account along with all appropriate directories.

# mkdir -p /u01/app
# groupadd -g 115 dba
# groupadd -g 116 oinstall
# useradd -u 175 -g 115 -d /u01/app/oracle -s /bin/bash -c "Oracle Software Owner" oracle
# chown -R oracle:dba /u01
# passwd oracle
# su - oracle

Note that no -p option is passed to useradd: that option expects an already-encrypted password string, so the password is set with the passwd command instead.
When you are setting the Oracle environment variables for each RAC node, ensure to assign each RAC node a unique Oracle SID! For this example, I used:

  • vmlinux1 : ORACLE_SID=orcl1
  • vmlinux2 : ORACLE_SID=orcl2

Create Login Script for oracle User Account

After creating the "oracle" UNIX user account on both nodes, make sure that you are logged in as the oracle user and verify that the environment is set up correctly by using the following .bash_profile:

.bash_profile for Oracle User
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"
alias s="screen -DRRS iPad -t iPad"

export JAVA_HOME=/usr/local/java

# User specific environment and startup programs
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
export ORACLE_PATH=$ORACLE_BASE/dba_scripts/common/sql:.:$ORACLE_HOME/rdbms/admin

# Each RAC node must have a unique ORACLE_SID. (i.e. orcl1, orcl2,...)
export ORACLE_SID=orcl1

export PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH=${PATH}:$ORACLE_BASE/dba_scripts/common/bin
export ORACLE_TERM=xterm
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS10=$ORACLE_HOME/nls/data
export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp


Create Mount Point for OCFS2 / Clusterware

Finally, let’s create the mount point for the Oracle Cluster File System, Release 2 (OCFS2) that will be used to store the two Oracle Clusterware shared files. These commands will need to be run as the “root” user account:

$ su -
# mkdir -p /u02/oradata/orcl
# chown -R oracle:dba /u02

Create Partitions on the Shared Storage Devices

Create the following partitions from only one node in the cluster!

Overview

The next step is to create a single partition on each of the five shared virtual drives. As mentioned earlier in this article, I will be using Oracle's Cluster File System, Release 2 (OCFS2) to store the two files to be shared for Oracle's Clusterware software. We will then be using Automatic Storage Management (ASM) to create four ASM volumes; two for all physical database files (data/index files, online redo log files, and control files) and two for the Flash Recovery Area (RMAN backups and archived redo log files). The four ASM volumes will be used to create two ASM disk groups (+ORCL_DATA1 and +FLASH_RECOVERY_AREA) using NORMAL redundancy. The following table lists the partition that will be created on each of the shared virtual disks and what files will be contained on them.

Oracle Shared Drive Configuration

File System Type  Partition  Size   Mount Point        ASM Diskgroup Name    File Types
OCFS2             /dev/sdb1  2 GB   /u02/oradata/orcl                        Oracle Cluster Registry (OCR) File – (~100 MB)
                                                                             Voting Disk – (~20 MB)
ASM               /dev/sdc1  12 GB  ORCL:VOL1          +ORCL_DATA1           Oracle Database Files
ASM               /dev/sdd1  12 GB  ORCL:VOL2          +ORCL_DATA1           Oracle Database Files
ASM               /dev/sde1  12 GB  ORCL:VOL3          +FLASH_RECOVERY_AREA  Oracle Flash Recovery Area
ASM               /dev/sdf1  12 GB  ORCL:VOL4          +FLASH_RECOVERY_AREA  Oracle Flash Recovery Area
Total                        50 GB


Create Partition on Each Shared Virtual Disk

The fdisk command is used in Linux for creating (and removing) partitions. For this configuration, I will be creating a single partition on each of the five shared virtual disks:

When attempting to partition the new virtual disks, it is safe to ignore any messages that indicate:

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1566.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Disk 1

# fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261): 261

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Disk 2

# fdisk /dev/sdc
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1566, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1566, default 1566): 1566

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Disk 3

# fdisk /dev/sdd
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1566, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1566, default 1566): 1566

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Disk 4

# fdisk /dev/sde
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1566, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1566, default 1566): 1566

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Disk 5

# fdisk /dev/sdf
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1566, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1566, default 1566): 1566

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
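The five interactive sessions above can also be scripted. The sketch below is not from the original procedure – it feeds the same answers (n, p, 1, the default first and last cylinders, w) to fdisk through a pipe, and defaults to a dry run because the operation is destructive:

```shell
# Dry-run sketch of scripting the five fdisk sessions above.
# Unset DRY_RUN (and run as root) to actually partition the disks.
DRY_RUN=1
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    if [ -n "$DRY_RUN" ]; then
        echo "would partition $disk"
    else
        # n=new, p=primary, 1=partition number, two empty lines accept
        # the default first/last cylinders, w=write the table
        printf 'n\np\n1\n\n\nw\n' | fdisk "$disk"
    fi
done
```

Review the device names carefully before removing the dry-run guard; they must match the shared disks, not the O/S disk (/dev/sda).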


Verify New Partitions

From the first node (vmlinux1), use the fdisk -l command to verify the new partitions:

# fdisk -l

Disk /dev/sda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        3263    26105625   8e  Linux LVM

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   83  Linux

Disk /dev/sdc: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1566    12578863+  83  Linux

Disk /dev/sdd: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1566    12578863+  83  Linux

Disk /dev/sde: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        1566    12578863+  83  Linux

Disk /dev/sdf: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1        1566    12578863+  83  Linux

From the second node (vmlinux2), inform the kernel of the partition changes using partprobe and then verify the new partitions:

# partprobe
# fdisk -l

Disk /dev/sda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        3263    26105625   8e  Linux LVM

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   83  Linux

Disk /dev/sdc: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1566    12578863+  83  Linux

Disk /dev/sdd: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1566    12578863+  83  Linux

Disk /dev/sde: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        1566    12578863+  83  Linux

Disk /dev/sdf: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1        1566    12578863+  83  Linux

Configure the Linux Servers for Oracle

Perform the following configuration procedures on all nodes in the cluster!
Several of the commands within this section will need to be performed on every node within the cluster every time the machine is booted. This section provides very detailed information about setting shared memory, semaphores, and file handle limits. Instructions for placing them in a startup script (/etc/sysctl.conf) are included in section “All Startup Commands for Each RAC Node“.

Overview

This section focuses on configuring both Linux servers – getting each one prepared for the Oracle10g RAC installation. This includes verifying enough swap space, setting shared memory and semaphores, and finally setting the maximum number of file handles for the O/S. Throughout this section you will notice that there are several different ways to configure (set) these parameters. For the purpose of this article, I will be making all changes permanent (through reboots) by placing all commands in the /etc/sysctl.conf file.

Swap Space Considerations

  • Installing Oracle10g Release 2 requires a minimum of 512MB of memory.
    (An inadequate amount of swap during the installation will cause the Oracle Universal Installer to either “hang” or “die”)
  • To check the amount of memory you have, type:
    # cat /proc/meminfo | grep MemTotal
    MemTotal:       755284 kB
  • To check the amount of swap you have allocated, type:
    # cat /proc/meminfo | grep SwapTotal
    SwapTotal:     1540088 kB
  • If you have less than 512MB of memory (between your RAM and swap), you can add temporary swap space by creating a temporary swap file. This way you do not have to use a raw device or, even more drastic, rebuild your system.

    As root, make a file that will act as additional swap space, let's say about 300MB:
    # dd if=/dev/zero of=tempswap bs=1k count=300000

    Now we should change the file permissions:
    # chmod 600 tempswap

    Finally, format the file as swap and add it to the swap space (no mke2fs step is needed – mkswap writes the swap signature itself):
    # mkswap tempswap
    # swapon tempswap
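The memory and swap checks above can be combined into one small script (a sketch; it reads /proc/meminfo, so it works on any Linux system without special privileges):

```shell
# Sum RAM and swap (both reported in kB) and compare to the 512 MB minimum
mem=$(awk '/^MemTotal:/  {print $2}' /proc/meminfo)
swp=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
total_mb=$(( (mem + swp) / 1024 ))
if [ "$total_mb" -ge 512 ]; then
    echo "OK: ${total_mb} MB of combined memory and swap"
else
    echo "WARNING: only ${total_mb} MB - add swap before installing"
fi
```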

Setting Shared Memory

Shared memory allows processes to access common structures and data by placing them in a shared memory segment. This is the fastest form of Inter-Process Communication (IPC) available – mainly because no kernel involvement occurs when data is being passed between the processes. Data does not need to be copied between processes. Oracle makes use of shared memory for its Shared Global Area (SGA), which is an area of memory that is shared by all Oracle background and foreground processes. Adequate sizing of the SGA is critical to Oracle performance since it is responsible for holding the database buffer cache, shared SQL, access paths, and so much more.

To determine all shared memory limits, use the following:

# ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

Setting SHMMAX

The SHMMAX parameter defines the maximum size (in bytes) for a shared memory segment. The Oracle SGA is comprised of shared memory, and it is possible that incorrectly setting SHMMAX could limit the size of the SGA. When setting SHMMAX, keep in mind that the size of the SGA should fit within one shared memory segment. An inadequate SHMMAX setting could result in the following:

ORA-27123: unable to attach to shared memory segment

You can determine the value of SHMMAX by performing the following:

# cat /proc/sys/kernel/shmmax
33554432

The default value for SHMMAX is 32MB, which is often too small for the Oracle SGA. I generally set the SHMMAX parameter to 2GB using the following methods:

  • You can alter the default setting for SHMMAX without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/kernel/shmmax) by using the following command:
    # sysctl -w kernel.shmmax=2147483648
  • You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
    # echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf

Setting SHMMNI

We now look at the SHMMNI parameter. This kernel parameter sets the maximum number of shared memory segments system wide. The default value for this parameter is 4096. You can determine the value of SHMMNI by performing the following:

# cat /proc/sys/kernel/shmmni
4096

The default setting for SHMMNI should be adequate for our Oracle10g Release 2 RAC installation.

Setting SHMALL

Finally, we look at the SHMALL shared memory kernel parameter. This parameter controls the total amount of shared memory (in pages) that can be used at one time on the system. In short, the value of this parameter should always be at least:

ceil(SHMMAX/PAGE_SIZE)

The default size of SHMALL is 2097152 pages and can be queried using the following command:

# cat /proc/sys/kernel/shmall
2097152

The default setting for SHMALL should be adequate for our Oracle10g Release 2 RAC installation.

The page size in Red Hat Linux on the i386 platform is 4096 bytes. You can, however, use bigpages which supports the configuration of larger memory page sizes.

Setting Semaphores

Now that we have configured our shared memory settings, it is time to take care of configuring our semaphores. The best way to describe a semaphore is as a counter used to provide synchronization between processes (or threads within a process) for shared resources like shared memory. Semaphore sets are supported in System V, where each member of a set is a counting semaphore; when an application requests semaphores, it does so in "sets". To determine all semaphore limits, use the following:

# ipcs -ls

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

You can also use the following command:

# cat /proc/sys/kernel/sem
250     32000   32      128

Setting SEMMSL

The SEMMSL kernel parameter controls the maximum number of semaphores per semaphore set. Oracle recommends setting SEMMSL to the largest PROCESSES instance parameter setting in the init.ora file for all databases on the Linux system, plus 10. Oracle also recommends setting SEMMSL to a value of no less than 100.

Setting SEMMNI

The SEMMNI kernel parameter controls the maximum number of semaphore sets in the entire Linux system. Oracle recommends setting SEMMNI to a value of no less than 100.

Setting SEMMNS

The SEMMNS kernel parameter controls the maximum number of semaphores (not semaphore sets) in the entire Linux system. Oracle recommends setting SEMMNS to the sum of the PROCESSES instance parameter settings for all databases on the system, plus twice the largest PROCESSES setting, plus 10 for each Oracle database on the system.

Use the following calculation to determine the maximum number of semaphores that can be allocated on a Linux system. It will be the lesser of:

SEMMNS  -or-  (SEMMSL * SEMMNI)
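This "lesser of" rule can be checked against the live values in /proc/sys/kernel/sem, which lists SEMMSL, SEMMNS, SEMOPM, and SEMMNI in that order (a sketch):

```shell
# Sketch: report the effective system-wide semaphore limit, the lesser
# of SEMMNS and SEMMSL * SEMMNI. /proc/sys/kernel/sem lists the values
# in the order: SEMMSL SEMMNS SEMOPM SEMMNI.
read semmsl semmns semopm semmni < /proc/sys/kernel/sem
product=$((semmsl * semmni))
if [ "$semmns" -lt "$product" ]; then
    limit=$semmns
else
    limit=$product
fi
echo "Effective semaphore limit: $limit (SEMMNS=$semmns, SEMMSL*SEMMNI=$product)"
```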

Setting SEMOPM

The SEMOPM kernel parameter controls the number of semaphore operations that can be performed per semop system call. The semop system call (function) provides the ability to perform operations on multiple semaphores with one call. Since a semaphore set can contain up to SEMMSL semaphores, it is recommended to set SEMOPM equal to SEMMSL.

Oracle recommends setting the SEMOPM to a value of no less than 100.

Setting Semaphore Kernel Parameters

Finally, we see how to set all semaphore parameters. In the following, the only parameter I care about changing (raising) is SEMOPM. All other default settings should be sufficient for our example installation.

  • You can alter the default setting for all semaphore settings without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/kernel/sem) by using the following command:
    # sysctl -w kernel.sem="250 32000 100 128"
  • You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
    # echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf

Setting File Handles

When configuring the Red Hat Linux server, it is critical to ensure that the maximum number of file handles is large enough. The setting for file handles denotes the number of open files that you can have on the Linux system. Use the following command to determine the maximum number of file handles for the entire system:

# cat /proc/sys/fs/file-max
102563

Oracle recommends that the file handles for the entire system be set to at least 65536.

  • You can alter the default setting for the maximum number of file handles without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/fs/file-max) using the following:
    # sysctl -w fs.file-max=65536
  • You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
    # echo "fs.file-max=65536" >> /etc/sysctl.conf
You can query the current usage of file handles by using the following:

# cat /proc/sys/fs/file-nr
825     0       65536

The file-nr file displays three parameters:

  • Total allocated file handles
  • Currently used file handles
  • Maximum file handles that can be allocated
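The three fields can be broken out with a single read (a sketch; note that on 2.6 kernels the second field is reported as 0):

```shell
# Sketch: split the three file-nr fields into named variables.
# On 2.6 kernels the second (currently used) field is always reported as 0.
read allocated used max_handles < /proc/sys/fs/file-nr
echo "allocated=$allocated used=$used max=$max_handles"
```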
If you need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set properly. Usually for Linux 2.4 and 2.6 kernels it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:

# ulimit
unlimited
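Note that a bare ulimit reports the file-size limit; the per-process open-file limit relevant to file handles is queried with the -n flag (and -Hn for the hard limit):

```shell
# The per-process open-file limits (soft and hard):
ulimit -n
ulimit -Hn
```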

Setting IP Local Port Range

Configure the system to allow a local port range of 1024 through 65000. Use the following command to determine the current value of ip_local_port_range:

# cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000

The default value for ip_local_port_range is ports 32768 through 61000. Oracle recommends a local port range of 1024 to 65000.

  • You can alter the default setting for the local port range without rebooting the machine by making the changes directly to the /proc file system (/proc/sys/net/ipv4/ip_local_port_range) by using the following command:
    # sysctl -w net.ipv4.ip_local_port_range="1024 65000"
  • You should then make this change permanent by inserting the kernel parameter in the /etc/sysctl.conf startup file:
    # echo "net.ipv4.ip_local_port_range = 1024 65000" >> /etc/sysctl.conf

Setting Shell Limits for the oracle User

To improve the performance of the software on Linux systems, Oracle recommends you increase the following shell limits for the oracle user:

Shell Limit                                              Item in limits.conf   Hard Limit
Maximum number of open file descriptors                  nofile                65536
Maximum number of processes available to a single user   nproc                 16384

To make these changes, run the following as root:

cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

cat >> /etc/pam.d/login <<EOF
session required /lib/security/pam_limits.so
EOF

Update the default shell startup file for the “oracle” UNIX account.

  • For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file by running the following command:

cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF
  • For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file by running the following command:

cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF

Activating All Kernel Parameters for the System

At this point, we have covered all of the required Linux kernel parameters needed for a successful Oracle installation and configuration. Within each section above, we configured the Linux system to persist each of the kernel parameters on system startup by placing them all in the /etc/sysctl.conf file. We could reboot at this point to ensure all of these parameters are set in the kernel, or we could simply "run" the /etc/sysctl.conf file by executing the following command as root. Perform this on both Oracle RAC nodes in the cluster!

# sysctl -p

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_max = 262144
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Setting the Correct Date and Time on Both Oracle RAC Nodes

During the installation of Oracle Clusterware, the Database, and the Companion CD, the Oracle Universal Installer (OUI) first installs the software to the local node running the installer (i.e. vmlinux1). The software is then copied remotely to all of the remaining nodes in the cluster (i.e. vmlinux2). During the remote copy process, the OUI will execute the UNIX “tar” command on each of the remote nodes to extract the files that were archived and copied over. If the date and time on the node performing the install is greater than that of the node it is copying to, the OUI will throw an error from the “tar” command indicating it is attempting to extract files stamped with a time in the future:

Error while copying directory /u01/app/oracle/product/crs with exclude file list 'null' to nodes 'vmlinux2'.
[PRKC-1002 : All the submitted commands did not execute successfully]
---------------------------------------------
vmlinux2:
/bin/tar: ./bin/lsnodes: time stamp 2007-02-19 09:21:34 is 735 s in the future
/bin/tar: ./bin/olsnodes: time stamp 2007-02-19 09:21:34 is 735 s in the future
...(more errors on this node)

Please note that although this would seem like a severe error from the OUI, it can safely be disregarded as a warning. The “tar” command DOES actually extract the files; however, when you perform a listing of the files (using ls -l) on the remote node, they will be missing the time field until the time on the server is greater than the timestamp of the file.

Before starting any of the above noted installations, ensure that each member node of the cluster is set as closely as possible to the same date and time. Oracle strongly recommends using the Network Time Protocol feature of most operating systems for this purpose, with all nodes using the same reference Network Time Protocol server.

Accessing a Network Time Protocol server, however, may not always be an option. In this case, when manually setting the date and time for the nodes in the cluster, ensure that the date and time of the node you are performing the software installations from (vmlinux1) is less than all other nodes in the cluster (vmlinux2). I generally use a 20 second difference as shown in the following example:

Setting the date and time from vmlinux1:

# date -s "2/19/2007 23:00:00"

Setting the date and time from vmlinux2:

# date -s "2/19/2007 23:00:20"
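If you want to confirm the offset between the two clock settings, GNU date can convert each timestamp to epoch seconds; the following is a sketch using the example values above:

```shell
# Sketch: compute the offset in seconds between the two example
# timestamps using GNU date's -d option.
t1=$(date -d "2/19/2007 23:00:00" +%s)
t2=$(date -d "2/19/2007 23:00:20" +%s)
echo "Offset between nodes: $((t2 - t1)) seconds"
```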

The two-node RAC configuration described in this article does not make use of a Network Time Protocol server.


Configure the “hangcheck-timer” Kernel Module

Perform the following configuration procedures on all nodes in the cluster!

Oracle 9.0.1 and 9.2.0.1 used a userspace watchdog daemon called watchdogd to monitor the health of the cluster and to restart a RAC node in case of a failure. Starting with Oracle 9.2.0.2 (and still the case in Oracle10g Release 2), the watchdog daemon has been replaced by a Linux kernel module named hangcheck-timer, which addresses availability and reliability problems much better. The hangcheck timer is loaded into the Linux kernel and checks whether the system hangs: it sets a timer and checks it after a configurable amount of time. If a configurable threshold is exceeded, the module will reboot the machine. Although the hangcheck-timer module is not required for Oracle Clusterware (Cluster Manager) operation, it is highly recommended by Oracle.

The hangcheck-timer.ko Module

The hangcheck-timer module uses a kernel-based timer that periodically checks the system task scheduler to catch delays in order to determine the health of the system. If the system hangs or pauses, the timer resets the node. The hangcheck-timer module uses the Time Stamp Counter (TSC) CPU register, a counter that is incremented at each clock signal. The TSC offers much more accurate time measurements since this register is updated automatically by the hardware. Much more information about the hangcheck-timer project can be found here.

Installing the hangcheck-timer.ko Module

The hangcheck-timer module was originally shipped only by Oracle; however, it is now included with Red Hat Linux AS starting with kernel versions 2.4.9-e.12 and higher, so it should already be present on our systems. Use the following to verify that the module is included:

# find /lib/modules -name "hangcheck-timer.ko"
/lib/modules/2.6.9-42.EL/kernel/drivers/char/hangcheck-timer.ko

In the above output, we care about the hangcheck timer object (hangcheck-timer.ko) in the /lib/modules/2.6.9-42.EL/kernel/drivers/char directory.

Configuring and Loading the hangcheck-timer Module

There are two key parameters to the hangcheck-timermodule:

  • hangcheck-tick: This parameter defines the period of time between checks of system health. The default value is 60 seconds; Oracle recommends setting it to 30 seconds.
  • hangcheck-margin: This parameter defines the maximum hang delay that should be tolerated before hangcheck-timer resets the RAC node. It defines the margin of error in seconds. The default value is 180 seconds; Oracle recommends setting it to 180 seconds.
The two hangcheck-timer module parameters indicate how long a RAC node must hang before it will reset the system. A node reset will occur when the following is true:

system hang time > (hangcheck_tick + hangcheck_margin)
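With the values recommended above (a tick of 30 seconds and a margin of 180 seconds), the reset threshold works out as follows:

```shell
# Sketch: the hang duration that triggers a node reset, using the
# recommended hangcheck_tick=30 and hangcheck_margin=180 values.
tick=30
margin=180
threshold=$((tick + margin))
echo "Node resets after a hang longer than $threshold seconds"
```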

Configuring Hangcheck Kernel Module Parameters

Each time the hangcheck-timer kernel module is loaded (manually or by Oracle), it needs to know what value to use for each of the two parameters we just discussed (hangcheck_tick and hangcheck_margin). These values need to be available after each reboot of the Linux server. To do this, make an entry with the correct values in the /etc/modprobe.conf file as follows:

# su -
# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modprobe.conf

Each time the hangcheck-timer kernel module gets loaded, it will use the values defined by the entry I made in the /etc/modprobe.conf file.

Manually Loading the Hangcheck Kernel Module for Testing

Oracle is responsible for loading the hangcheck-timer kernel module when required. It is for this reason that it is not required to perform a modprobe or insmod of the hangcheck-timer kernel module in any of the startup files (i.e. /etc/rc.local).It is only out of pure habit that I continue to include a modprobe of the hangcheck-timer kernel module in the /etc/rc.local file. Someday I will get over it, but realize that it does not hurt to include a modprobe of the hangcheck-timer kernel module during startup.

So to keep myself sane and able to sleep at night, I always configure the loading of the hangcheck-timer kernel module on each startup as follows:

# echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.local

You don't have to manually load the hangcheck-timer kernel module using modprobe or insmod after each reboot. The hangcheck-timer module will be loaded by Oracle (automatically) when needed.

Now, to test the hangcheck-timer kernel module to verify it is picking up the correct parameters we defined in the /etc/modprobe.conf file, use the modprobe command. Although you could load the hangcheck-timer kernel module by passing it the appropriate parameters (e.g. insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180), we want to verify that it is picking up the options we set in the /etc/modprobe.conf file.

To manually load the hangcheck-timer kernel module and verify it is using the correct values defined in the /etc/modprobe.conf file, run the following command:

# su -
# modprobe hangcheck-timer
# grep Hangcheck /var/log/messages | tail -2
Feb 19 13:04:40 vmlinux2 kernel: Hangcheck: starting hangcheck timer 0.5.0 (tick is 30 seconds, margin is 180 seconds)
