Installing Oracle 12c Standard Edition RAC with Dbvisit Standby (Part 2)

This section will cover Part 2: Installing Oracle 12c Clusterware (Grid Infrastructure)

Note:  It is assumed that you have followed Part 1 in this series.

In this section I will show you two installations of the 12c Grid Infrastructure:

  • Installing Grid Infrastructure for a cluster environment
  • Installing Grid Infrastructure on a standalone environment

All my servers are configured at this stage. To summarize, the following prerequisite steps have been performed (see Part 1 for more details):

  • Oracle Linux 6.4 x86_64 was installed on all 3 servers
  • Network interfaces are configured correctly (eth0 is public, eth1 private)
  • DNS Server was updated with server names including SCAN name and IP range
  • Required Operating System (OS) packages installed
  • Required Kernel configuration and resource limits are set
  • Both “oracle” and “grid” users were created and correct groups assigned
  • The following groups were created: oinstall, asmadmin, asmdba, asmoper, dba, oper
  • Oracle ASMLib configuration is done and candidate disks partitioned and marked as ASM disks
  • NTP was configured and server time zones are the same
  • SSH user equivalence is set up for both the "grid" and "oracle" user accounts, allowing password-less authentication using public/private keys (a minimal example follows this list)
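
If user equivalence is not yet in place, it can be set up by hand as shown below (the example is for the "grid" user; repeat the same steps as "oracle", and on each node; the hostnames are from my environment). The installation media also ships an sshsetup directory with a helper script that can do this for you.

ssh-keygen -t rsa              # accept the defaults and use an empty passphrase
ssh-copy-id grid@kiwi101       # repeat for each node in the cluster
ssh-copy-id grid@kiwi102
ssh kiwi102 date               # should return the date without prompting for a password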

The Grid Infrastructure (GI) installation is based on two zip files, which you need to download from the Oracle website:

  • linuxamd64_12c_grid_1of2.zip (1,750,478,910 bytes) (cksum – 3177055641)
  • linuxamd64_12c_grid_2of2.zip (201,673,595 bytes) (cksum – 2753782116)

Once downloaded, unzip both files and you will end up with a "grid" directory, which contains the installer.  In my environment I placed the install files on an NFS server and mounted that on all my systems, so /install/12c/grid is available on all of them and the "grid" directory holds the GI install files.  This way there are no duplicate installation source files on each system; everything is in one place.
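
For reference, making the shared install area available looks something like the following on each server (the NFS server name and export path below are placeholders for my environment, so substitute your own):

mkdir -p /install
mount -t nfs <nfs-server>:/export/install /install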

Installing Grid Infrastructure for a cluster environment

When installing Grid Infrastructure (GI) for a cluster, it is recommended to use the cluster verification utility script supplied by Oracle as a pre-check, to make sure everything is configured correctly and ready for the installation to continue.  The script is located in the base of the installation directory; in my case it is /install/12c/grid/runcluvfy.sh (/install/12c is my NFS shared filesystem, where the "grid" install directory was created when the downloaded files were unzipped).

Example install location (install source) in my environment:

grid@kiwi101[/home/grid]: cd /install/12c
grid@kiwi101[/install/12c]: ls -al
total 4326400
drwxr-xr-x 4 root root       3896 Jul  3 15:45 .
drwxr-xr-x 6 root root       3896 Jun 26 11:22 ..
drwxr-xr-x 7 root root       3896 Jun 11 00:14 database
drwxr-xr-x 7 root root       3896 Jun 11 00:15 grid
-rw-r--r-- 1 root root 1361028723 Jun 26 11:24 linuxamd64_12c_database_1of2.zip
-rw-r--r-- 1 root root 1116527103 Jun 26 11:26 linuxamd64_12c_database_2of2.zip
-rw-r--r-- 1 root root 1750478910 Jul  1 10:40 linuxamd64_12c_grid_1of2.zip
-rw-r--r-- 1 root root  201673595 Jul  1 10:40 linuxamd64_12c_grid_2of2.zip
grid@kiwi101[/install/12c]: cd grid
grid@kiwi101[/install/12c/grid]: ls -l
total 512
drwxr-xr-x  4 root root 3896 Jul  3 15:47 install
drwxrwxr-x  2 root root 3896 May 25 08:44 response
drwxr-xr-x  2 root root 3896 May 25 07:29 rpm
-rwxr-xr-x  1 root root 5092 Dec 22  2012 runcluvfy.sh
-rwxr-xr-x  1 root root 7809 May 25 07:29 runInstaller
drwxrwxr-x  2 root root 3896 May 25 08:44 sshsetup
drwxr-xr-x 14 root root 3896 May 25 08:44 stage
-r-xr-xr-x  1 root root  500 Jun 10 17:52 welcome.html

The test I recommend you run is: ./runcluvfy.sh stage -pre crsinst -n <node1, .., nodeX>

This check will give you a pretty good idea if your environment is ready for the installation.  Any errors or failures should be investigated and corrected before attempting the GI installation.

If you want more verbose output, run the above command with the "-verbose" flag to get additional details.  There are many other options for this utility that I am not going to cover here; for more details please see the Oracle documentation.
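
For example, in my environment the verbose form of the pre-check would be:

./runcluvfy.sh stage -pre crsinst -n kiwi101,kiwi102 -verbose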

Below is an example of running the runcluvfy.sh script in my environment.  I am running this from node 1 (kiwi101).  Note that in my case there are two "failures", for the memory and swap space checks.  I have 4GB of memory allocated and 2GB of swap, which should be fine for this test installation, so I am ignoring these.

grid@kiwi101[/install/12c/grid]: ./runcluvfy.sh stage -pre crsinst -n kiwi101,kiwi102

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "kiwi101"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "192.168.40.0" with node(s) kiwi101,kiwi102
TCP connectivity check passed for subnet "192.168.40.0"

Node connectivity passed for subnet "10.5.5.0" with node(s) kiwi101,kiwi102
TCP connectivity check passed for subnet "10.5.5.0"

Interfaces found on subnet "192.168.40.0" that are likely candidates for VIP are:
kiwi101 eth0:192.168.40.101
kiwi102 eth0:192.168.40.102

Interfaces found on subnet "10.5.5.0" that are likely candidates for a private interconnect are:
kiwi101 eth1:10.5.5.101
kiwi102 eth1:10.5.5.102
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.40.0".
Subnet mask consistency check passed for subnet "10.5.5.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.40.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.40.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check failed
Check failed on nodes:
       kiwi102,kiwi101
Available memory check passed
Swap space check failed
Check failed on nodes:
       kiwi102,kiwi101
Free disk space check passed for "kiwi102:/usr,kiwi102:/var,kiwi102:/etc,kiwi102:/sbin"
Free disk space check passed for "kiwi101:/usr,kiwi101:/var,kiwi101:/etc,kiwi101:/sbin"
Free disk space check passed for "kiwi102:/tmp"
Free disk space check passed for "kiwi101:/tmp"
Check for multiple users with UID value 502 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"

Checking availability of ports "6200,6100" required for component "Oracle Notification Service (ONS)"
Port availability check passed for ports "6200,6100"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes

"domain" and "search" entries do not coexist in any  "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

Check for integrity of file "/etc/resolv.conf" passed

Time zone consistency check passed

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"

Starting check for /dev/shm mounted as temporary file system ...

Check for /dev/shm mounted as temporary file system passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Starting the GI Installation
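
Note that runInstaller is a graphical installer and needs a working X display. If you are connecting over SSH rather than working at the console, enable X forwarding first, for example (using my node name):

ssh -X grid@kiwi101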

We can now start with the installation by running the “runInstaller” file:

grid@kiwi101[/install/12c/grid]: ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 3065 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-26_03-52-12PM. Please wait ...grid@kiwi101[/install/12c/grid]:

The graphical installer will now start:

gi-1

From the first screen we accept the defaults to skip software updates and click on Next to continue.

gi-2

Select the first option, as we are installing Grid Infrastructure for a cluster, and click Next to continue. On the screen below, accept the default to configure a standard cluster.

gi-3

In this case we will select a typical installation, which is the default.

gi-4

The next screen requires more input. First add the SCAN name as "dev12scan", then add kiwi102's public and virtual hostname details as shown in 2 and 3 below.

gi-6

To verify your networks, click on "Identify network interfaces". In this case the interfaces were picked up correctly, so we can simply accept them and click Next to continue.

gi-7

In the next screen you need to specify the Oracle Base and Software Location (Oracle Home). From the Cluster Registry Storage Type dropdown, select “Oracle Automatic Storage Management” and provide your SYSASM password. This password should be complex when installing a production system. Finally make sure the OSASM group is “asmadmin” and then click next to continue.

gi-8
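
If the Oracle Base and Software Location directories you plan to enter here do not exist yet, it is worth creating them up front on both nodes and giving the "grid" user ownership. A sketch below, assuming a common /u01/app layout; adjust the paths to whatever you actually enter in the installer:

mkdir -p /u01/app/grid /u01/app/12.1.0/grid      # run as root on both nodes
chown -R grid:oinstall /u01/app/grid /u01/app/12.1.0
chmod -R 775 /u01/app/grid /u01/app/12.1.0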

The next step is to specify the ASM disk on which we will store the OCR and voting files. In this case I am not using any redundancy, so I select "External"; in production environments, Normal or High redundancy is recommended. I select my 2GB ASM disk labeled CRS_DISK1 and click Next to continue.

gi-9

The default path is acceptable for the Inventory directory so I just click on next to continue.

gi-10

When asked if I want to automatically run configuration scripts, I specify to use the root user credential and provide the root password. (I do like this new option in the installer!)

gi-11

We now get to the verification checks/results screen, and in my environment I received three warning messages. The first two relate to my memory and swap allocations; as previously mentioned, 4GB of memory and 2GB of swap is sufficient for my environment, so I can ignore these. The third warning message, however, was not expected: the verification looked at my selected ASM storage and complained about not finding a UUID for the disk. See below:

gi-12

However, investigation showed that the ASM disks do not in fact have a UUID assigned (see below), so this warning can also be ignored. I select the "Ignore All" check box and click Next to continue with the installation.

grid@kiwi101[/install/12c/grid]: blkid
/dev/xvda1: LABEL="/boot" UUID="38a1e4c2-b664-42c0-b466-49276cf50dd4" TYPE="ext4"
/dev/xvda2: LABEL="/" UUID="8a5b48b9-3ee8-489a-8061-45271d2b705d" TYPE="ext4"
/dev/xvda3: LABEL="SWAP-VM" UUID="2a343569-5d6b-4ebd-a200-318c96566963" TYPE="swap"
/dev/xvdb1: LABEL="/u01" UUID="05da5285-38a1-4bb3-b076-b69532209d16" SEC_TYPE="ext2" TYPE="ext3"
/dev/xvdc1: LABEL="CRS_DISK1" TYPE="oracleasm"
/dev/xvdd1: LABEL="ASM_DISK1" TYPE="oracleasm"
/dev/xvde1: LABEL="ASM_DISK2" TYPE="oracleasm"
/dev/xvdf1: LABEL="ASM_DISK3" TYPE="oracleasm"
/dev/xvdg1: LABEL="ASM_DISK4" TYPE="oracleasm"
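
If you want to double-check the ASMLib disks themselves, the oracleasm utility can list them and confirm that a given label is a valid ASM disk (run as root):

/usr/sbin/oracleasm listdisks
/usr/sbin/oracleasm querydisk CRS_DISK1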

I now arrive at the summary screen, do a quick review, and then select Install. The installation starts.

gi-13

gi-14

Approximately 80% of the way through the installation I am asked to confirm that I want to run the configuration scripts as the "root" user. I accept, and the installation continues.

gi-15

After a few minutes the installation is complete.

gi-16

As you can see, the installation is not that difficult; the key is to make sure your environment (operating system and networking) is configured correctly and that you have performed all the prerequisite steps.

Once the installation is complete, I update the "grid" user's .bash_profile on each node so that the environment is set automatically for the local ASM instance. All I do is add the following few lines to the file, log out and back in, and my environment is set so I can start executing commands:

On Node 1 (kiwi101), add the following to the "grid" user's .bash_profile:

export ORAENV_ASK=NO
export ORACLE_SID=+ASM1
. oraenv >> /dev/null
export ORAENV_ASK=YES

On Node 2 (kiwi102), add the following to the "grid" user's .bash_profile:

export ORAENV_ASK=NO
export ORACLE_SID=+ASM2
. oraenv >> /dev/null
export ORAENV_ASK=YES

Log out and back in for the changes to take effect, or simply run ". ~/.bash_profile".
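
A quick way to confirm the profile change has taken effect is to check the environment after logging back in, for example:

echo $ORACLE_SID      # should show +ASM1 on kiwi101 and +ASM2 on kiwi102
echo $ORACLE_HOME     # should point to the Grid Infrastructure home
which crsctl          # should resolve to a binary under the Grid Infrastructure home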

Now I have Grid Infrastructure installed on the two cluster nodes, but remember I have not configured my DATA and FRA disk groups yet. This will be done next, but before I show you how to do this, let’s have a quick look at the newly installed environment.

I run a few commands to verify the installation, and to have a quick look at the new software. Below is example output from a few commands I used:

As the “grid” user:

grid@kiwi101[/home/grid]: crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

grid@kiwi101[/home/grid]: crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       kiwi101                  STABLE
               ONLINE  ONLINE       kiwi102                  STABLE
ora.OCR_VOTING.dg
               ONLINE  ONLINE       kiwi101                  STABLE
               ONLINE  ONLINE       kiwi102                  STABLE
ora.asm
               ONLINE  ONLINE       kiwi101                  Started,STABLE
               ONLINE  ONLINE       kiwi102                  Started,STABLE
ora.net1.network
               ONLINE  ONLINE       kiwi101                  STABLE
               ONLINE  ONLINE       kiwi102                  STABLE
ora.ons
               ONLINE  ONLINE       kiwi101                  STABLE
               ONLINE  ONLINE       kiwi102                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       kiwi102                  STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       kiwi101                  STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       kiwi101                  STABLE
ora.cvu
      1        ONLINE  ONLINE       kiwi101                  STABLE
ora.kiwi101.vip
      1        ONLINE  ONLINE       kiwi101                  STABLE
ora.kiwi102.vip
      1        ONLINE  ONLINE       kiwi102                  STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       kiwi102                  STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       kiwi101                  STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       kiwi101                  STABLE
--------------------------------------------------------------------------------
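
A few more commands that are worth running at this point (output not shown here as it is environment specific; ocrcheck should be run as root):

olsnodes -n -s                # list the cluster nodes with their node numbers and status
crsctl query css votedisk     # show where the voting files are stored
srvctl status asm             # confirm the ASM instances are running on both nodes
ocrcheck                      # as root: verify the contents of the OCR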

Another great way to double check the installation is to make use of the cluster verification utility. Example below:

grid@kiwi101[/home/grid]: cluvfy stage -post crsinst -n kiwi101,kiwi102

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "kiwi101"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.40.0"
Node connectivity passed for subnet "192.168.40.0" with node(s) kiwi102,kiwi101
TCP connectivity check passed for subnet "192.168.40.0"


Check: Node connectivity using interfaces on subnet "10.5.5.100"
Node connectivity passed for subnet "10.5.5.100" with node(s) kiwi102,kiwi101
TCP connectivity check passed for subnet "10.5.5.100"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.40.0".
Subnet mask consistency check passed for subnet "10.5.5.100".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "10.5.5.100" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.5.5.100" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Time zone consistency check passed

Checking Cluster manager integrity...

Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.

Cluster manager integrity check passed

UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations

UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations

Default user file creation mask check passed

Checking cluster integrity...

Cluster integrity check passed

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations

Checking OCR config file "/etc/oracle/ocr.loc"...

OCR config file "/etc/oracle/ocr.loc" check successful

Disk group for ocr location "+OCR_VOTING" is available on all the nodes

NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Checking CRS integrity...

Clusterware version consistency passed.

CRS integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of ONS node application (optional)
ONS node application check passed

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "dev12scan"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Checking SCAN IP addresses...
Check of SCAN IP addresses passed

Verification of SCAN VIP and Listener setup passed

Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
PRVF-5413 : Node "kiwi102" has a time offset of 23633.2 that is beyond permissible limit of 1000.0 from NTP Time Server "202.46.181.123"
PRVF-5413 : Node "kiwi101" has a time offset of 23438.3 that is beyond permissible limit of 1000.0 from NTP Time Server "202.46.181.123"
Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Post-check for cluster services setup was successful.

Final Step: Create DATA and FRA disk groups

Before we continue with the installation of the database software, let's first create the ASM disk groups we will be using: DATA for the database files and FRA for the Fast Recovery Area.

This step can be done from either node 1 or node 2 in the cluster. From the console, or an SSH connection with X forwarding so that you can open a GUI, run the "asmca" command as the "grid" UNIX user. This will open the ASM Configuration Assistant. Below are screenshots showing how to add the additional disk groups:

To start the graphical ASM configuration utility, just type in asmca:

asmca-00

asmca-1

Once the GUI has loaded, you should see the OCR_VOTING disk group we configured during the GI installation. Now click on Create to add a new disk group. As you can see below, I first create the DATA disk group, again using external redundancy, select the two 10GB disks, and click OK to start the disk group creation.

asmca-2

asmca-3

Once created you will see two disk groups. I now repeat the step and create the FRA disk group. See details below:

asmca-4

asmca-5

I now have three disk groups, as shown in the GUI above. Important: note that the FRA disk group is only mounted on the first node (see the "State" column above). To mount the FRA disk group on both nodes, highlight it and then click on "Mount All"; the FRA disk group will then be mounted on all nodes.
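
If you prefer the command line for this, the disk group can also be mounted without the GUI; a quick sketch, assuming node 2 (kiwi102) is the node missing the mount:

asmcmd mount FRA                                        # run as "grid" on kiwi102
srvctl start diskgroup -diskgroup FRA -node kiwi102     # or from any node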

You can also use the ASMCMD command-line utility to show your disk groups by executing the "lsdg" (list disk groups) command:

asmca-6
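
As an aside, the disk groups themselves could also have been created without the GUI, from a SQL*Plus session connected to the ASM instance as SYSASM. The sketch below uses the ASMLib disk labels shown earlier, but the disk-to-group mapping is an assumption for illustration, so check which disks you actually want in each group:

sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:ASM_DISK1', 'ORCL:ASM_DISK2';
SQL> CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK 'ORCL:ASM_DISK3', 'ORCL:ASM_DISK4';

A disk group created this way is only mounted on the local ASM instance, so remember to mount it on the other node as shown above.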

The next step is to install Grid Infrastructure on a standalone environment.

Installing Grid Infrastructure on a standalone environment

In this section I am going to use a short video clip to show you the installation of 12c Grid Infrastructure on a standalone server.

Summary

You now have Grid Infrastructure installed and your disk groups configured. The next step is to install the Oracle 12c Database Software, and then create a database. This will be discussed in Part 3 of this series.

About Anton Els


With more than 13 years' experience with Oracle databases, and extensive Linux and Solaris knowledge, Anton is a highly motivated individual who enjoys working in a challenging environment. His certifications include Oracle 11g Database Certified Master.
