Installing Oracle 12c Standard Edition RAC with Dbvisit Standby (Part 1)

Since the release of 12c I am sure a lot of you, like me, have been enjoying playing with the newest addition to the Oracle Database family.  Although it is very exciting to learn about all the new features in this major release, the installation process, one of the critical factors in any good environment, is sometimes neglected.  For this reason I would like to focus, in this post, on the installation of Oracle Database 12c Standard Edition in a Real Application Clusters (RAC) configuration.  In short, a Standard Edition 12c RAC setup on Oracle Linux.  And to spice this up a little I am also going to install and configure the latest version of Dbvisit Standby and create a standby database.

As this topic contains a significant amount of information I will split it into four sections (posts) to help you follow it more easily:

  1. Part 1:  Environment Overview and Pre-requisite steps
  2. Part 2:  Installing Oracle Clusterware (Grid Infrastructure)
  3. Part 3:  Installing Oracle Database 12c
  4. Part 4:  Creating a Standby Database using Dbvisit Standby

Please Note:  This post is for educational purposes only, to provide you with more detail on the new 12c installation.  For production environments it is recommended that you review the Oracle 12c Installation documentation in detail.

Part 1:  Environment Overview and Pre-requisite steps

Before jumping into a new installation it is important to review the pre-requisite steps and requirements.  In this section I will provide you with an overview of the steps required in preparation for our Oracle 12c Standard Edition RAC installation on Oracle Linux 6 update 4 (64bit).

My test environment makes use of Oracle OVM 3.2.2 running on a Standalone Server with multiple hard drives configured for storage. The majority of these steps will be similar if you are using Virtualbox, or even physical servers for your installation.

In summary this is an overview of my environment:

[Image: 12c Oracle RAC environment overview]

I have done this 12c RAC installation twice: first using a clean Oracle Linux 6 update 4 installation with the “basic server” installation option selected, and secondly using the Oracle supplied OVM template (more details here). I am not going to go into the details of installing Oracle Linux, and assume that you are able to get a basic Oracle Linux 6.4 x86_64 installation running with two network interfaces configured on two different subnets for each of the two RAC nodes, plus one basic installation for the standby server, which will be a single instance standby database server.  For more details on installing Oracle Linux, see here for a great guide from Tim Hall to get you started.

At this stage it is a good idea to know what your server names and IP address allocations will be.  Below is an example of my configuration, including my cluster name, SCAN Name and IP addresses, which will be used:

[Image: 12c RAC networking overview – cluster name, SCAN name and IP addresses]

In this environment I am making use of a basic Linux DNS server.

From this point forward I will assume you have 3 Servers installed with the Basic Oracle Linux configuration.

I have never really had any issues with ASMLib and will be making use of it in this installation. For more details see here.

Step 1:  Create Required OS Users and Groups

One of the first steps I perform is to create the “oracle” and “grid” Operating System (OS) users and required OS groups on all the servers. I tend to do this as the first step before all the other pre-requisite steps such as installing the required software packages.

It is possible to install everything as one user, which ideally should be called “oracle”, but I personally prefer to split the installation up: one user, “grid” in this case, owns and manages the clusterware (Grid Infrastructure), while the user “oracle” owns and manages the Oracle Database software.

I first create the following OS Groups: oinstall, dba, oper, asmadmin, asmdba, asmoper

Example group creation:

groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmdba
groupadd -g 506 asmoper

Now you might ask why I specify the group id (gid).  Again, this is up to you, but it is recommended to use the same gids on all the servers, and I tend to use 501-506 for my Oracle software groups.  I do the same when creating the users: 501 for the “oracle” user id and 502 for the “grid” user id.

The next step is to create the “oracle” and “grid” users; example commands below:

useradd -u 501 -g oinstall -G dba,asmdba,asmoper,oper -c "Oracle Software Owner" oracle
useradd -u 502 -g oinstall -G asmadmin,asmdba,asmoper -c "Clusterware Owner" grid
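
As a quick, optional sanity check you can set the passwords now and confirm that the uids, gids and group memberships are identical on every server:

passwd oracle
passwd grid

# The output of these should match on all three servers
id oracle
id grid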

Step 2:  Installing the required Operating System packages.

I do like the option of using the Oracle public yum repositories to allow for quick and easy software installation for test systems. For more details on this see http://public-yum.oracle.com/.

By default in my Oracle Linux 6.4 installation the repositories are already set up and configured.

We can now make sure all the software is at the latest version by running “yum update” as the root user and answering yes if any updates are available.

yum update

Once this is done, I perform a reboot.

The next step is to install the packages required by Oracle.  This step has been made really easy if you are using Oracle Linux, as a single pre-install package is available:

oracle-rdbms-server-12cR1-preinstall.x86_64

Installing this package pulls in all the packages required for an Oracle installation.

If you do not want to install this package or do not have access to the public-yum repository you need to make sure the following software packages (or later versions) are installed (these are specific to Oracle Linux 6):

 binutils-2.20.51.0.2-5.11.el6 (x86_64)
 compat-libcap1-1.10-1 (x86_64)
 compat-libstdc++-33-3.2.3-69.el6 (x86_64)
 compat-libstdc++-33-3.2.3-69.el6.i686
 gcc-4.4.4-13.el6 (x86_64)
 gcc-c++-4.4.4-13.el6 (x86_64)
 glibc-2.12-1.7.el6 (i686)
 glibc-2.12-1.7.el6 (x86_64)
 glibc-devel-2.12-1.7.el6 (x86_64)
 glibc-devel-2.12-1.7.el6.i686
 ksh
 libgcc-4.4.4-13.el6 (i686)
 libgcc-4.4.4-13.el6 (x86_64)
 libstdc++-4.4.4-13.el6 (x86_64)
 libstdc++-4.4.4-13.el6.i686
 libstdc++-devel-4.4.4-13.el6 (x86_64)
 libstdc++-devel-4.4.4-13.el6.i686
 libaio-0.3.107-10.el6 (x86_64)
 libaio-0.3.107-10.el6.i686
 libaio-devel-0.3.107-10.el6 (x86_64)
 libaio-devel-0.3.107-10.el6.i686
 make-3.81-19.el6
 sysstat-9.0.4-11.el6 (x86_64)

As I am going to make use of ASMLib to make the management of my ASM storage a little easier I also install the following packages:

oracleasm-support.x86_64  (Available from public-yum)
oracleasmlib-2.0.4-1.el6.x86_64.rpm (Available from here - http://www.oracle.com/technetwork/server-storage/linux/asmlib/ol6-1709075.html)

The package “oracleasmlib-2.0.4-1.el6.x86_64.rpm” provides you with the tool oracleasm-discover, which is useful to have when using ASMLib, but more on this later.

Apart from the packages outlined above, I also tend to install the following utilities, as you end up using them on a regular basis and it is useful to install them now rather than later:

dstat, sysstat, glibc.i686, wget, parted, strace, tree, xterm, xclock, unzip and lsof

To install all of these packages, you can just run one command:

yum install oracle-rdbms-server-12cR1-preinstall.x86_64 oracleasm-support.x86_64 dstat sysstat glibc.i686 wget parted strace tree xterm xclock lsof unzip

I then download and install oracleasmlib-2.0.4-1.el6.x86_64.rpm as follows:

wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el6.x86_64.rpm
rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm

One final package to install is cvuqdisk-1.0.9-1.rpm, which is provided as part of the Grid Infrastructure installation media and located under ./grid/rpm/cvuqdisk-1.0.9-1.rpm.

In my environment I have downloaded the software into a directory /install/12c/ with the grid software located under the “grid” directory and the database software under “database”:

root@kiwi103[/]: cd /install/12c/grid
root@kiwi103[/install/12c/grid]: ls
install  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
root@kiwi103[/install/12c/grid]: cd rpm
root@kiwi103[/install/12c/grid/rpm]: ls
cvuqdisk-1.0.9-1.rpm
root@kiwi103[/install/12c/grid/rpm]: rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk               ########################################### [100%]

Step 3:  Kernel parameters and resource limits

The advantage of installing the package oracle-rdbms-server-12cR1-preinstall.x86_64 in the previous step is that it configures the recommended kernel settings for you in /etc/sysctl.conf.

Example extract from /etc/sysctl.conf:

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# oracle-rdbms-server-12cR1-preinstall setting for fs.file-max is 6815744
fs.file-max = 6815744

# oracle-rdbms-server-12cR1-preinstall setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128

# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmmni is 4096
kernel.shmmni = 4096

# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmall is 1073741824 on x86_64

# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104

# oracle-rdbms-server-12cR1-preinstall setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144

# oracle-rdbms-server-12cR1-preinstall setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304

# oracle-rdbms-server-12cR1-preinstall setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144

# oracle-rdbms-server-12cR1-preinstall setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576

# oracle-rdbms-server-12cR1-preinstall setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576

# oracle-rdbms-server-12cR1-preinstall setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500

If you did not install the oracle-rdbms-server-12cR1-preinstall package, you can set these values manually by editing the /etc/sysctl.conf file.  For example:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

If you modified these values, you can apply them to the running system by executing /sbin/sysctl -p as root.

The next step is to update the /etc/security/limits.conf, /etc/pam.d/login and /etc/profile configuration files by adding the following lines:

/etc/security/limits.conf:

oracle    soft  nproc    2047
oracle    hard  nproc    16384
oracle    soft  nofile   1024
oracle    hard  nofile   65536
oracle    soft  stack    10240
grid      soft  nproc    2047
grid      hard  nproc    16384
grid      soft  nofile   1024
grid      hard  nofile   65536
grid      soft  stack    10240

/etc/pam.d/login:

session    required     pam_limits.so

/etc/profile:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi

To make sure that all the changes above take effect, I do a reboot.  Strictly speaking this is not necessary; you only need to log the “oracle” and “grid” users out so that the new settings apply at their next login, but a reboot does not hurt.
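
Once logged back in, a quick optional check that the new limits are being picked up is to query ulimit for each account:

# Run as root; the values should reflect the limits.conf entries above
su - oracle -c "ulimit -a"
su - grid -c "ulimit -a"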

Step 4: Update DNS and /etc/hosts

To build a proper test environment you ideally need to configure a DNS server.  There are many blogs out there discussing this, so I am not going into the details; in my case I created a very basic DNS server on a separate Oracle Linux system.  In this DNS I configured my test domain oraclekiwi.com and made sure that all my servers are added to it.
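
As an illustration, the forward zone entries for this environment could look roughly like the extract below (a sketch assuming a BIND-style zone file for oraclekiwi.com; adjust to your own DNS setup).  The important part is that the SCAN name resolves to three addresses in a round-robin fashion:

; illustrative extract from the oraclekiwi.com forward zone
kiwi101        IN  A  192.168.40.101
kiwi102        IN  A  192.168.40.102
kiwi103        IN  A  192.168.40.121
kiwi101-vip    IN  A  192.168.40.103
kiwi102-vip    IN  A  192.168.40.104
dev12scan      IN  A  192.168.40.105
dev12scan      IN  A  192.168.40.106
dev12scan      IN  A  192.168.40.107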

I also added the entries below to each server's /etc/hosts file.  Note that I do not enable the SCAN IP addresses; they are commented out and only included here for reference.  They should, however, be added to your DNS server.

/etc/hosts

127.0.0.1 localhost.localdomain localhost localhost4
::1 localhost6.localdomain6 localhost6

# Public Network
192.168.40.101    kiwi101.oraclekiwi.com kiwi101
192.168.40.102    kiwi102.oraclekiwi.com kiwi102
#
# Private Interconnect
10.5.5.101      kiwi101-priv.oraclekiwi.com   kiwi101-priv
10.5.5.102      kiwi102-priv.oraclekiwi.com   kiwi102-priv
#
# Public Virtual Network
192.168.40.103    kiwi101-vip.oraclekiwi.com    kiwi101-vip
192.168.40.104    kiwi102-vip.oraclekiwi.com    kiwi102-vip
#
# SCAN
# 192.168.40.105  dev12scan.oraclekiwi.com      dev12scan
# 192.168.40.106  dev12scan.oraclekiwi.com      dev12scan
# 192.168.40.107  dev12scan.oraclekiwi.com      dev12scan
#
# Standby Server Details
192.168.40.121 kiwi103.oraclekiwi.com      kiwi103

Step 5:  Configure your storage

Even though this does sound like hard work, configuring the storage can actually be a quick and easy task.

  • First, we need to allocate the storage to each of the servers for the /u01 filesystem.  This is where the software (Grid Infrastructure and Oracle Database) will be installed.  I tend to use 20GB as a starting point, but I do not recommend using less than 15GB as this might become a limiting factor later when you want to apply patches or perform upgrades.  Usually 20GB to 30GB is good enough.
  • The second block of storage to allocate is the shared storage for both the primary nodes.  I am not going into the details here, but in summary under OVM you would create shared Virtual Disks and attach them to both the primary nodes (kiwi101/kiwi102 in this example).  In my example I add 5 disks:
      • 1 x 2GB used for the OCR/Voting files (which will be stored in ASM)
      • 2 x 10GB used for the DATA disk group
      • 2 x 5GB used for the FRA disk group
  • The third storage block to allocate is the standby server disks that will be used for the standby database.  I am going to use ASM, so I am attaching 4 disks of 10GB each as local storage to the standby server.

Note: The above sizing is small as this is a test environment, but in your case you might need a lot more for the DATA and FRA disk groups.  Also I will be using “External Redundancy” for my ASM storage, whereas in a production environment you might consider using “Normal Redundancy” – but be aware that this requires double the space.

Step 5.1.  Partition newly added disks

For the Primary RAC nodes I have the following devices:

/dev/xvdb   => this will be the /u01 device
/dev/xvdc   => this will be the OCR/Voting ASM disk
/dev/xvdd   => this is the ASM disk1
/dev/xvde   => this is the ASM disk2
/dev/xvdf   => this is the ASM disk3
/dev/xvdg   => this is the ASM disk4

Now as /dev/xvdc to /dev/xvdg are shared storage between the primary nodes, you do not have to partition them on both nodes – this just needs to be done on one of the nodes.  For the /dev/xvdb disk you need to create the partition and create a filesystem to be used for /u01 on all the nodes.

First, to list the disks you can just run “fdisk -l”.  Once you have identified your disks, you can run fdisk against a particular disk.  For example, to add a primary partition on /dev/xvdb:

root@kiwi101[/root]: fdisk /dev/xvdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x92035209.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): o  <<===========  
Building a new DOS disklabel with disk identifier 0x80eac877.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n          <<===========  
Command action
e   extended
p   primary partition (1-4)
p        <<===========  
Partition number (1-4): 1          <<===========  
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610
Command (m for help): w       <<===========  
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

From the above you can see that creating  a primary partition on /dev/xvdb is easy.  The steps are again summarised below:

  1. run “fdisk /dev/xvdb”
  2. type in “o” to create new partition table
  3. type in “n” to create new partition
  4. type in “p” to specify it is a primary partition you want to create
  5. type in “1” to specify the partition number
  6. press “return” (enter) to accept the default starting cylinder
  7. press “return” (enter) to accept the default last cylinder
  8. type in “w” to write the changes to disk (commit)

You now have a new primary partition. From here simply follow the exact same steps for the other 5 disks on the first node (in my case kiwi101).  Now, to make sure the second node knows about the newly created partitions on the shared disks, run the following command as the root user on the second node (kiwi102 in my example):

partprobe

The above command will produce warning messages for disks (devices) with mounted filesystems; these can be ignored.  Running “fdisk -l” on the second node will now show you all the disks and partitions, except for the disk /dev/xvdb which needs to be partitioned on the second node as well, as this is not a shared disk.

At this stage you might as well do the same on the standby server and create partitions on all the new disks attached to it.
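
If you prefer to script the partitioning rather than stepping through fdisk interactively for every disk, a rough sketch like the following feeds the same answer sequence (o, n, p, 1, default, default, w) to fdisk.  This is destructive and the device list below is specific to my environment, so double-check it before running anything like this:

# Run as root on the node where the disks are being partitioned
for DISK in /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg; do
  echo -e "o\nn\np\n1\n\n\nw" | fdisk $DISK
done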

Once completed you will end up with a partition on each disk; below is example output showing the partitions:

Node 1 – kiwi101

root@kiwi101[/root]: fdisk -l |grep "/dev/xvd"
Disk /dev/xvda: 12.9 GB, 12884901888 bytes
/dev/xvda1   *           2         503      514048   83  Linux
/dev/xvda2             504       10240     9970688   83  Linux
/dev/xvda3           10241       12288     2097152   82  Linux swap / Solaris
Disk /dev/xvdb: 21.5 GB, 21474836480 bytes
/dev/xvdb1               1        2610    20964793+  83  Linux
Disk /dev/xvdc: 2147 MB, 2147483648 bytes
/dev/xvdc1               1         261     2096451   83  Linux
Disk /dev/xvdd: 10.7 GB, 10737418240 bytes
/dev/xvdd1               1        1305    10482381   83  Linux
Disk /dev/xvde: 10.7 GB, 10737418240 bytes
/dev/xvde1               1        1305    10482381   83  Linux
Disk /dev/xvdf: 5368 MB, 5368709120 bytes
/dev/xvdf1               1         652     5237158+  83  Linux
Disk /dev/xvdg: 5368 MB, 5368709120 bytes
/dev/xvdg1               1         652     5237158+  83  Linux
Node 2 – kiwi102

root@kiwi102[/root]: fdisk -l |grep "/dev/xvd"
Disk /dev/xvda: 12.9 GB, 12884901888 bytes
/dev/xvda1   *           2         503      514048   83  Linux
/dev/xvda2             504       10240     9970688   83  Linux
/dev/xvda3           10241       12288     2097152   82  Linux swap / Solaris
Disk /dev/xvdb: 21.5 GB, 21474836480 bytes
/dev/xvdb1               1        2610    20964793+  83  Linux
Disk /dev/xvdc: 2147 MB, 2147483648 bytes
/dev/xvdc1               1         261     2096451   83  Linux
Disk /dev/xvdd: 10.7 GB, 10737418240 bytes
/dev/xvdd1               1        1305    10482381   83  Linux
Disk /dev/xvde: 10.7 GB, 10737418240 bytes
/dev/xvde1               1        1305    10482381   83  Linux
Disk /dev/xvdf: 5368 MB, 5368709120 bytes
/dev/xvdf1               1         652     5237158+  83  Linux
Disk /dev/xvdg: 5368 MB, 5368709120 bytes
/dev/xvdg1               1         652     5237158+  83  Linux

Step 5.2.  Create /u01 filesystem

The software will be installed in the new /u01 filesystem, which is local to each of the servers.  We have now partitioned the disk, and in my example the primary partition is known as /dev/xvdb1.

I am going to make use of ext3 for my filesystem.  The following command can be executed as the root user to create the filesystem:

root@kiwi103[/root]: mkfs.ext3 /dev/xvdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

We now have the filesystem created, and need to add this disk to our /etc/fstab to make sure /u01 is mounted at system boot.  With Oracle Linux there is a default, empty /u01 directory already created, so you can just use it, or you can create a new /u01 directory as the root user.
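
If the directory does not already exist on your system, creating the mount point is a one-liner as root:

mkdir -p /u01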

To add the newly created filesystem to /etc/fstab it is good to make use of the disk's UUID, which can be extracted with the “blkid” tool.  Below is an example of using this command to get the UUID for /dev/xvdb1, which is the disk I am interested in:

root@kiwi103[/root]: blkid |grep /dev/xvdb1
/dev/xvdb1: UUID="aa6c99b3-fb87-44a1-b775-9c9515d6faed" SEC_TYPE="ext2" TYPE="ext3"

I now use this UUID to add an entry into /etc/fstab for the new /u01 filesystem.  See the example fstab file below:

#
# /etc/fstab
# Created by anaconda on Tue Feb 26 02:50:25 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
LABEL=/                      /                       ext4   defaults          1 1
LABEL=/boot                  /boot                   ext4   defaults          1 2
LABEL=SWAP-VM                 swap                   swap   defaults          0 0
tmpfs                        /dev/shm                tmpfs   defaults,size=2G 0 0
devpts                       /dev/pts                devpts  gid=5,mode=620   0 0
sysfs                        /sys                    sysfs   defaults         0 0
proc                         /proc                   proc    defaults         0 0
tmpfs                        /tmp                    tmpfs   defaults,size=2G 0 0
UUID=aa6c99b3-fb87-44a1-b775-9c9515d6faed  /u01   ext3   defaults      1 2

Once the above line is added you can just type in “mount /u01” and you should now have a new /u01 filesystem ready for use.  Example:

root@kiwi103[/root]: mount /u01
root@kiwi103[/root]: df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            9.4G  1.5G  7.5G  17% /
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/xvda1            487M  121M  341M  27% /boot
tmpfs                 2.0G     0  2.0G   0% /tmp
/dev/xvdb1             20G  173M   19G   1% /u01

The next step is to configure ASMLib.

Step 5.3.  Configure Oracle ASMLib

This step assumes that you have followed the steps above, created the grid and oracle user accounts, and allocated your storage.  In this example I am using ASMLib; you are welcome to make use of udev instead, but you will then have to create rules files to set the correct permissions and provide path persistence.  This guide only covers the use of ASMLib.
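
For completeness, a udev-based alternative would be a rules file along the lines of the sketch below.  This is purely illustrative and is not used anywhere else in this guide: it matches on the kernel device names from my environment, whereas in a real setup you would normally match on a stable identifier (such as the disk's scsi_id/WWID) to guarantee persistence across reboots.

# /etc/udev/rules.d/99-oracle-asmdevices.rules  (hypothetical sketch only)
KERNEL=="xvdd1", OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="asmdisks/asm_disk1"
KERNEL=="xvde1", OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="asmdisks/asm_disk2"
KERNEL=="xvdf1", OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="asmdisks/asm_disk3"
KERNEL=="xvdg1", OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="asmdisks/asm_disk4"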

One of the key components in an Oracle RAC configuration is the shared storage.  In my example the “grid” user will manage this, and we need to make sure that this user has the correct access permissions to the disks – this is where ASMLib comes into play.

Configuring this is easier than you might think, and within a few commands you will actually be ready to install the Grid Infrastructure.

5.3.1       Configure “oracleasm”

The first step is to configure oracleasm, and this step should be executed on all the servers.   Start the configuration by running the “oracleasm configure -i” command as the root user and specifying the owner as grid with default group asmadmin.  Example:

root@kiwi101[/root]: oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

root@kiwi101[/root]: /etc/init.d/oracleasm restart
Dropping Oracle ASMLib disks:                              [  OK  ]
Shutting down the Oracle ASMLib driver:                    [  OK  ]
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Once you have run this on all nodes you can now create the ASM disks (effectively stamping the disk with an ASM label).

5.3.2       Create your ASM disks

This step only needs to be performed on node 1 in the RAC environment, and on the standby server. Note that I name the OCR/Voting ASM disk CRS_DISK1, and the rest of the disks, which will be used for the ASM DATA and FRA disk groups, ASM_DISK1 to ASM_DISK4.

Node 1 – kiwi101
root@kiwi101[/root]: oracleasm createdisk CRS_DISK1 /dev/xvdc1
Writing disk header: done
Instantiating disk: done
root@kiwi101[/root]: oracleasm createdisk ASM_DISK1 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
root@kiwi101[/root]: oracleasm createdisk ASM_DISK2 /dev/xvde1
Writing disk header: done
Instantiating disk: done
root@kiwi101[/root]: oracleasm createdisk ASM_DISK3 /dev/xvdf1
Writing disk header: done
Instantiating disk: done
root@kiwi101[/root]: oracleasm createdisk ASM_DISK4 /dev/xvdg1
Writing disk header: done
Instantiating disk: done

Once done with the disk creation on the first node you can then run “oracleasm scandisks” on the second node, so that it is made aware of the newly marked shared disks.
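
In other words, the only commands needed on kiwi102 at this point are the following (run as root; listdisks is just a verification):

oracleasm scandisks
oracleasm listdisks   # should now show CRS_DISK1 and ASM_DISK1 to ASM_DISK4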

Below are the commands I execute on the standby server to create the four ASM disks:

Standby Server – kiwi103
root@kiwi103[/root]: oracleasm createdisk ASM_DISK1 /dev/xvdc1
Writing disk header: done
Instantiating disk: done
root@kiwi103[/root]: oracleasm createdisk ASM_DISK2 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
root@kiwi103[/root]: oracleasm createdisk ASM_DISK3 /dev/xvde1
Writing disk header: done
Instantiating disk: done
root@kiwi103[/root]: oracleasm createdisk ASM_DISK4 /dev/xvdf1
Writing disk header: done
Instantiating disk: done

5.3.3       Review your configuration

You can now run the following commands to see your newly marked ASM disks:

root@kiwi101[/root]: oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

root@kiwi101[/root]: oracleasm listdisks
ASM_DISK1
ASM_DISK2
ASM_DISK3
ASM_DISK4
CRS_DISK1

root@kiwi101[/root]: oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:ASM_DISK1 [20964762 blocks (10733958144 bytes), maxio 64]
Discovered disk: ORCL:ASM_DISK2 [20964762 blocks (10733958144 bytes), maxio 64]
Discovered disk: ORCL:ASM_DISK3 [10474317 blocks (5362850304 bytes), maxio 64]
Discovered disk: ORCL:ASM_DISK4 [10474317 blocks (5362850304 bytes), maxio 64]
Discovered disk: ORCL:CRS_DISK1 [4192902 blocks (2146765824 bytes), maxio 64]

root@kiwi101[/root]: blkid |grep asm
/dev/xvdc1: LABEL="CRS_DISK1" TYPE="oracleasm"
/dev/xvdd1: LABEL="ASM_DISK1" TYPE="oracleasm"
/dev/xvde1: LABEL="ASM_DISK2" TYPE="oracleasm"
/dev/xvdf1: LABEL="ASM_DISK3" TYPE="oracleasm"
/dev/xvdg1: LABEL="ASM_DISK4" TYPE="oracleasm"

root@kiwi101[/root]: ls -al /dev/oracleasm/disks/
total 0
drwxr-xr-x 1 root root           0 Jul 24 15:38 .
drwxr-xr-x 4 root root           0 Jul 24 15:38 ..
brw-rw---- 1 grid asmadmin 202, 49 Jul 24 15:38 ASM_DISK1
brw-rw---- 1 grid asmadmin 202, 65 Jul 24 15:38 ASM_DISK2
brw-rw---- 1 grid asmadmin 202, 81 Jul 24 15:38 ASM_DISK3
brw-rw---- 1 grid asmadmin 202, 97 Jul 24 15:38 ASM_DISK4
brw-rw---- 1 grid asmadmin 202, 33 Jul 24 15:38 CRS_DISK1

Step 6.  Configure NTP (Server Date/Time)

In my example I am going to make use of NTP to keep my servers' date and time synchronized.  It is critical for an Oracle RAC environment that all your nodes are running the exact same date and time.  I follow the steps below on all my RAC nodes as well as on the standby server.  As my servers are located in New Zealand, I first make sure my timezone is correct.  This can be done by updating the zone in /etc/sysconfig/clock and running the following commands on all the servers.  You might need to adjust this to match your timezone.

Example:

root@kiwi101[/etc]: cat /etc/sysconfig/clock
ZONE="Pacific/Auckland"
UTC=true

root@kiwi101[/etc]: rm /etc/localtime
rm: remove symbolic link `/etc/localtime'? y
root@kiwi101[/etc]: ln -s /usr/share/zoneinfo/Pacific/Auckland localtime
root@kiwi101[/etc]: ls -al /etc/localtime
lrwxrwxrwx 1 root root 36 Jul 24 22:44 /etc/localtime -> /usr/share/zoneinfo/Pacific/Auckland

root@kiwi101[/etc]: date
Wed Jul 24 22:44:42 NZST 2013

I now update /etc/sysconfig/ntpd and add the -x option:

root@kiwi101[/etc]: cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"

I now update /etc/ntp.conf file to include servers from the New Zealand pool.  See here – http://www.pool.ntp.org/en/ for more details.

…
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.nz.pool.ntp.org
server 1.nz.pool.ntp.org
server 2.nz.pool.ntp.org
server 3.nz.pool.ntp.org
…

The final step is to start the ntp service and make sure it is scheduled to start at server boot time.

root@kiwi101[/root]: service ntpd start
Starting ntpd:
root@kiwi101[/root]: chkconfig ntpd on
root@kiwi101[/root]: chkconfig --list ntpd
ntpd                  0:off  1:off  2:on   3:on   4:on   5:on   6:off
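
Once ntpd has been running for a few minutes, you can optionally verify that it is actually synchronising against the pool servers:

ntpq -p    # the peer marked with a * is the server currently selected for synchronisation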

Note:  Make sure you have the same date/time (including timezone) configured on all your servers before installing the software or creating any databases.  This is not something you want to change once you have a fully configured RAC environment running.

Step 7.  Disable the firewall and SELinux

The next step is to disable the Linux firewall (iptables) and SELinux.  This can be done by executing the following:

Disable Firewall

chkconfig iptables off
chkconfig ip6tables off
service iptables stop
service ip6tables stop

To disable SELinux, edit the /etc/sysconfig/selinux file and set SELINUX=disabled.  For example:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
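
Keep in mind that the SELINUX=disabled setting only takes effect after a reboot.  If you want to relax SELinux on the running system straight away, you can also switch it to permissive mode:

getenforce       # show the current mode
setenforce 0     # switch the running system to permissive mode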

Step 8.  Create the required Oracle software directories

Before we get to the following sections, where I will show you how to install the Oracle Grid Infrastructure and the Database software, we first need to create the correct directories under /u01.  Don't worry too much about this, as it will make more sense when we do the software installations.

First we create the directories as follows on all the servers:

root@kiwi103[/root]: cd /u01
root@kiwi103[/u01]: ls
lost+found
root@kiwi103[/u01]: mkdir app
root@kiwi103[/u01]: cd app
root@kiwi103[/u01/app]: mkdir 12.1.0 grid oracle
root@kiwi103[/u01/app]: ls -l
total 12
drwxr-xr-x 2 root root 4096 Jul 24 23:05 12.1.0
drwxr-xr-x 2 root root 4096 Jul 24 23:05 grid
drwxr-xr-x 2 root root 4096 Jul 24 23:05 oracle
root@kiwi103[/u01/app]: cd /u01
root@kiwi103[/u01]: chown -R grid:oinstall app
root@kiwi103[/u01]: chmod -R 775 app
root@kiwi103[/u01]: cd app
root@kiwi103[/u01/app]: chown -R oracle:oinstall oracle
root@kiwi103[/u01/app]: ls -al
total 20
drwxrwxr-x 5 grid   oinstall 4096 Jul 24 23:05 .
drwxr-xr-x 4 root   root     4096 Jul 24 23:05 ..
drwxrwxr-x 2 grid   oinstall 4096 Jul 24 23:05 12.1.0
drwxrwxr-x 2 grid   oinstall 4096 Jul 24 23:05 grid
drwxrwxr-x 2 oracle oinstall 4096 Jul 24 23:05 oracle

The key is to have a structure as follows:

root@kiwi103[/]: tree /u01

/u01
├── app
│   ├── 12.1.0
│   ├── grid
│   └── oracle
└── lost+found

Also make sure that the permissions are correct: the grid user owns all the directories except /u01/app/oracle, which is owned by the oracle user, and oinstall is the default group for all of them.

Step 9.  Configure SSH User Equivalence for the “grid” and “oracle” Unix accounts

I tend to do this on all my Linux systems to allow me to connect between them without passwords, making use of keys to authenticate.  With later versions of Oracle Linux this is actually easy to set up, and all you need to do is use the ssh-keygen and ssh-copy-id executables.  It is important to set this up so that the “grid” user from node 1 can connect to the other systems (as the grid user) and the “oracle” Unix user can do the same (logging in as the oracle account).

To configure this you follow these steps:

Step 9.1.  Generate the ssh keys for each user

Under every user account (oracle and grid) run the following command, accept the defaults (just press enter/return) and do not add any passphrase: ssh-keygen -t dsa

Example (grid user on kiwi101):

grid@kiwi101[/home/grid]: ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Created directory '/home/grid/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
da:95:e3:28:0c:e4:7f:11:11:7e:74:85:86:ae:4f:a4 grid@kiwi101.oraclekiwi.com
The key's randomart image is:
+--[ DSA 1024]----+
|        o....o.  |
|       . o..o    |
|    .   o...     |
|   o     oo.     |
|    o   S++      |
|     + oE=..     |
|      = +o.      |
|       o  .      |
|                 |
+-----------------+

Once you have done this for the “oracle” and “grid” users on all the servers, you can continue to the second step which will copy the public key for each user to the authorized_keys file on the remote server.

Step 9.2: Update authorized_keys files

Many people attempt to do this step manually, but why not just use the ssh-copy-id command, as this makes it so much easier!  We need to run this command 3 times, on each of the servers, for each account.  You will notice that I include the standby server in this, as it will make it easier for me to move between the primary and standby servers – and for Dbvisit Standby version 6 this is a requirement as password-less SSH communication is used between the primary and standby sites.

Example:
On node 1 (kiwi101) I run the following as the grid user, providing its password when requested:

grid@kiwi101[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi101
grid@kiwi101[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi102
grid@kiwi101[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi103

Example output from above commands:

grid@kiwi101[/home/grid]: ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi101
The authenticity of host 'kiwi101 (192.168.40.101)' can't be established.
RSA key fingerprint is 2b:bf:97:96:3a:0d:57:be:23:76:af:9e:92:45:a2:d4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kiwi101,192.168.40.101' (RSA) to the list of known hosts.
grid@kiwi101's password:    <== provide grid password here
Now try logging into the machine, with "ssh 'grid@kiwi101'", and check in:

.ssh/authorized_keys
To make sure we haven't added extra keys that you weren't expecting:

grid@kiwi101[/home/grid]: ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi102
The authenticity of host 'kiwi102 (192.168.40.102)' can't be established.
RSA key fingerprint is 1d:2f:f6:d3:7c:41:d1:de:8a:fe:fe:d7:d1:77:de:09.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kiwi102,192.168.40.102' (RSA) to the list of known hosts.
grid@kiwi102's password:   <== provide grid password here
Now try logging into the machine, with "ssh 'grid@kiwi102'", and check in:

.ssh/authorized_keys

To make sure we haven't added extra keys that you weren't expecting:

grid@kiwi101[/home/grid]: ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi103
The authenticity of host 'kiwi103 (192.168.40.121)' can't be established.
RSA key fingerprint is ed:30:fc:69:ea:57:ec:8f:95:7a:0c:11:8d:57:ae:ef.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kiwi103,192.168.40.121' (RSA) to the list of known hosts.
grid@kiwi103's password:  <== provide grid password here
Now try logging into the machine, with "ssh 'grid@kiwi103'", and check in:

.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

Now do exactly the same for the grid user from kiwi102 and kiwi103, as well as for the oracle user on all systems.  Below is an example of the commands to execute:

grid@kiwi102[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi101
grid@kiwi102[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi102
grid@kiwi102[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi103

grid@kiwi103[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi101
grid@kiwi103[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi102
grid@kiwi103[/home/grid]:  ssh-copy-id -i /home/grid/.ssh/id_dsa.pub grid@kiwi103

oracle@kiwi101[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi101
oracle@kiwi101[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi102
oracle@kiwi101[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi103

oracle@kiwi102[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi101
oracle@kiwi102[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi102
oracle@kiwi102[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi103

oracle@kiwi103[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi101
oracle@kiwi103[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi102
oracle@kiwi103[/home/oracle]: ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@kiwi103

Once the above is done, you should be able to SSH between the systems without being asked for any passwords.  Below are two example tests run from kiwi101; you should be able to do the same from all systems without being prompted for passwords:

oracle@kiwi101[/home/oracle]: ssh oracle@kiwi101 "uname -n; date"; ssh oracle@kiwi102 "uname -n; date"; ssh oracle@kiwi103 "uname -n; date"
kiwi101.oraclekiwi.com
Fri Jul 26 15:03:18 NZST 2013
kiwi102.oraclekiwi.com
Fri Jul 26 15:03:18 NZST 2013
kiwi103.oraclekiwi.com
Fri Jul 26 15:03:16 NZST 2013

grid@kiwi101[/home/grid]: ssh grid@kiwi101 "uname -n; date"; ssh grid@kiwi102 "uname -n; date"; ssh grid@kiwi103 "uname -n; date"
kiwi101.oraclekiwi.com
Fri Jul 26 15:06:14 NZST 2013
kiwi102.oraclekiwi.com
Fri Jul 26 15:06:14 NZST 2013
kiwi103.oraclekiwi.com
Fri Jul 26 15:06:13 NZST 2013

It is important to get the above working without any issues before you continue with the installation.

Step 10:  (Optional) Install VNC server / allow X connections

This step is optional depending on your configuration and the additional components (package groups) you installed during your Oracle Linux installation.  If you installed a full graphical interface (desktop GUI) then you should be able to run the installation on the console of the servers, but I never actually install the full desktop GUI. I tend to install the base system and then just add a few packages if needed, such as xclock, which is a good way to test whether you will be able to run the Oracle installer.  I am using my Mac laptop to connect to these servers, and to test that I can open a GUI I just run the following from a terminal window (192.168.40.101 is kiwi101):

# ssh -X -C grid@192.168.40.101
grid@192.168.40.101's password:
Last login: Fri Jul 26 15:27:02 2013 from 192.168.40.138
grid@kiwi101[/home/grid]: xclock

Once the xclock opens, I know I am ready to start the install.

If you are installing from Windows, you can install “putty” and “xming” to allow you to open remote X connections, or you can install a VNC server on your Linux system and use a VNC viewer on your client PC to connect to the servers.  To install the VNC server you can use “yum install tigervnc-server”.
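
If you go the VNC route, getting a session running is roughly as follows (install the package as root, then run the remaining commands as the user who will perform the installation, for example grid):

yum install tigervnc-server   # as root, if not installed already
vncpasswd                     # set a VNC password for this user
vncserver :1                  # start a session on display :1, then connect your viewer to host:5901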

Step 11: Update .bash_profile for “oracle” and “grid” user

The last step in the preparation is to update the environment settings for the “oracle” and “grid” Unix accounts.  This is done by editing their .bash_profile files, located in their home directories.  There are many values you can set, but I tend to set just the minimum and then rely on “oraenv” and the /etc/oratab file to set the environment afterwards – more on this at the end of each software installation.  Before I start the installation I add the following lines to these users' .bash_profile files on all the servers:

For the “grid” UNIX account add:
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.1.0/grid
export ORACLE_TERM=xterm
For the “oracle” UNIX account add:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_TERM=xterm

Summary

One final step: reboot all servers to make sure all the settings and changes implemented above take effect.
Hopefully the above has provided you with a more in-depth view of the steps required before you start installing the Oracle software.

You are now ready to perform the next step, which will be the Oracle Grid Infrastructure installation.

About Anton Els


With more than 13 years' experience with Oracle databases, and extensive Linux and Solaris knowledge, Anton is a highly motivated individual who enjoys working in a challenging environment. His certifications include Oracle 11g Database Certified Master.
