Tuesday, March 19, 2013

Install the Oracle Grid Infrastructure software.

The Grid Infrastructure provides the clusterware that allows the RAC nodes to communicate, as well as the ASM software that manages the shared disks.
To begin, download the zip file from the Oracle software download website and unzip it on Orpheus. Make sure you are logged into Orpheus as the oracle user so that oracle owns the unzipped files.

There is one zip file for 11gR2 Linux 64-bit, called linux.x64_11gR2_grid.zip
Having unzipped it, we should have a directory called grid. Because the installer is graphical, it is important to be logged into Orpheus through the VM desktop and not through a non-graphical interface such as PuTTY or SecureCRT. You may also use a tool such as VNC.
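A minimal sketch of the unzip step, assuming the zip file was downloaded to a staging directory such as /home/oracle/stage (the actual download location does not matter):
[oracle@orpheus ~]$ cd /home/oracle/stage
[oracle@orpheus stage]$ unzip -q linux.x64_11gR2_grid.zip
[oracle@orpheus stage]$ ls -d grid
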
Change into the grid/sshsetup directory, where we will find a script called sshUserSetup.sh
We will launch this script as follows:
[oracle@orpheus sshsetup]$ ./sshUserSetup.sh \
-user oracle -hosts "orpheus eurydice" \
-noPromptPassphrase -confirm -advanced
The output of this script is also logged into /tmp/sshUserSetup_2012-10-29-20-19-55.log
Hosts are orpheus eurydice
user is oracle
Platform:- Linux 
Checking if the remote hosts are reachable

You will be prompted for the oracle password twice while the script runs.
The noPromptPassphrase flag means that the script will not prompt for a passphrase.
The confirm flag means that the script will automatically overwrite existing settings and permissions when it sets up the keys.
The advanced flag causes the script to set up passwordless connections between all listed hosts, not just from the current host to the targets.
If everything is successful you should see the script complete with the following:
------------------------------------------------------------------------
--orpheus:--
Running /usr/bin/ssh -x -l oracle orpheus date to verify SSH connectivity has been setup from local host to orpheus.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Mon Oct 29 20:21:21 PDT 2012
------------------------------------------------------------------------
--eurydice:--
Running /usr/bin/ssh -x -l oracle eurydice date to verify SSH connectivity has been setup from local host to eurydice.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Mon Oct 29 20:21:21 PDT 2012
------------------------------------------------------------------------
SSH verification complete.
[oracle@orpheus sshsetup]$
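
The advanced flag should have set up keys in both directions, so it is worth a quick manual check that passwordless SSH works between the nodes; none of these commands should prompt for a password:
[oracle@orpheus sshsetup]$ ssh eurydice date
[oracle@orpheus sshsetup]$ ssh orpheus date
[oracle@eurydice ~]$ ssh orpheus date
[oracle@eurydice ~]$ ssh eurydice date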

Now change into the grid directory, where we will find a script called runcluvfy.sh
We use this script to check that our cluster is ready for the Grid install. Invoke it as follows:
[oracle@orpheus grid]$ ./runcluvfy.sh stage -pre crsinst -n orpheus,eurydice

Performing pre-checks for cluster services setup 

Checking node reachability...
Node reachability check passed from node "orpheus"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

<output removed to aid clarity>

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP) passed

Pre-check for cluster services setup was successful. 
[oracle@orpheus grid]$
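
If any check fails, the script can be re-run with more detail. As a sketch, the -verbose flag reports every individual check, and the -fixup option should generate a fix-up script for problems cluvfy knows how to correct:
[oracle@orpheus grid]$ ./runcluvfy.sh stage -pre crsinst -n orpheus,eurydice -fixup -verbose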

If we have followed all the steps carefully, the script should report success and we are ready to start the Grid install.
We will now launch the Grid installer as follows:
[oracle@orpheus grid]$ ./runInstaller &

This will start the graphical installer and present the installation options.

We are going to select the first option, Install and Configure Grid Infrastructure for a Cluster.

On the next screen we select Advanced Installation

On the next screen we select the languages to install. I am happy with just English.

On the next screen we deselect Configure GNS and then define our cluster name, SCAN name, and SCAN port number. I am going to call my cluster underworld with a SCAN name of underworld-scan. This matches the SCAN definition we added to our DNS server back in Part VII.
I am defining the port for the SCAN listener as 1561, and not the suggested 1521.
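
Before moving on, it is worth confirming that the SCAN name still resolves; a quick check against the DNS setup from Part VII:
[oracle@orpheus grid]$ nslookup underworld-scan
This should return the SCAN address(es) defined back in Part VII; if it does not, the installer will fail its SCAN checks later.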

On the next screen we define our cluster nodes. Orpheus is added automatically but we need to add Eurydice manually by selecting Add and then defining the addresses eurydice and eurydice-vip. Again these should match the addresses we added to the DNS server back in Part VII.

On the next screen we define the Ethernet networks to use for our cluster traffic. Remember that eth0 is the interface we use to access the outside world and, since this is a laptop, its address changes; we therefore set eth0 to Do Not Use.
The eth1 network is the public network we set up on VMnet2, so we set that to Public.
The eth2 network is the private network we set up on VMnet3, so we set that to Private.
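
If you are not sure which interface sits on which subnet, a quick check of the addresses on each node will confirm the mapping (the subnets themselves are the ones chosen in the earlier networking parts):
[oracle@orpheus grid]$ /sbin/ip addr show eth1
[oracle@orpheus grid]$ /sbin/ip addr show eth2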

On the next screen we choose Automatic Storage Management (ASM)

On the next screen we should see the DATA ASM disk group listed. Set redundancy to External and check the ORCL:DATA candidate disk for the DATA disk group. Since this is a demonstration only, we are going to place all of our files, including the cluster registry (OCR) and voting files, into a single disk group.
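
If the ORCL:DATA disk does not appear as a candidate, confirm that ASMLib can see it on both nodes before adjusting the discovery string; the standard ASMLib commands are:
[root@orpheus ~]# /usr/sbin/oracleasm scandisks
[root@orpheus ~]# /usr/sbin/oracleasm listdisks
listdisks should print DATA on both Orpheus and Eurydice.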

On the next screen check Use same passwords for these accounts. Since this is a demo I usually set these to something very simple like oracle. But make a note of this password as you will need it in Part X.

The installer warns us that our password choice is not very secure. That’s okay, I don’t foresee anyone really trying to hack into this RAC cluster.

On the next screen select Do not use Intelligent Platform Management Interface (IPMI).

On the next screen we should see the OS groups for ASM management listed. These should be set as follows:
ASM Database Administrator (OSDBA) Group: asmdba
ASM Instance Administration Operator (OSOPER) Group: oinstall
ASM Instance Administrator (OSASM) Group: asmadmin
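
The drop-downs are populated from the groups the oracle user belongs to, so if one of these is missing, confirm the group membership set up in the earlier parts:
[oracle@orpheus grid]$ id oracle
The output should include asmdba, asmadmin and oinstall among the groups.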

On the next screen we choose where to install the Oracle Base and the Grid software. Note that the Grid software cannot be installed as a sub-directory of the Oracle Base.
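
For reference, a sketch of the layout used in this series; the Grid home below is the one that appears in the root.sh output later in this post, while the Oracle Base shown is a typical OFA location (adjust it if yours differs):
Oracle Base:  /u01/app/oracle
Grid home:    /u01/app/11.2.0/grid
Note that the Grid home is deliberately outside the Oracle Base directory tree.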

On the next screen we select a location for the Oracle Inventory.

On the next screen we can verify that we have defined everything correctly. If everything looks good then press Finish

The install process can take a while to complete. On my VM RAC it takes at least ten minutes. Don’t be alarmed if progress hangs at 65% for a while; this is normal, and you can check the second node to confirm that its free disk space is gradually decreasing as the software is copied across. If the progress remains static at 65% for over 30 minutes, you might have forgotten to disable the firewall.
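
If you do hit that situation, a quick way to check the firewall and, for this demo environment, disable it on both nodes (these are the standard iptables service commands on Oracle/Red Hat Enterprise Linux of this vintage):
[root@orpheus ~]# service iptables status
[root@orpheus ~]# service iptables stop
[root@orpheus ~]# chkconfig iptables off
Repeat the same commands on Eurydice.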

Once the software install finishes we are presented with a dialog to run some scripts as the root user.
STOP! This is the part of the process where most people make mistakes!
It is extremely important that we run the scripts in the order listed, and that we wait for each script to complete on Orpheus before we run it on Eurydice.
First we will run the orainstRoot.sh script on Orpheus:
[root@orpheus grid]# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

Next we can run the same script on Eurydice.
Now we run the root.sh script on Orpheus. This will take a while to complete and MUST be allowed to run to completion before we start the execution on Eurydice.
Given the critical nature of this step, I have elected to show you the full output of the script on both my VM nodes. First Orpheus:
[root@orpheus grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-10-29 20:50:56: Parsing the host name
2012-10-29 20:50:56: Checking for super user privileges
2012-10-29 20:50:56: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'orpheus'
CRS-2672: Attempting to start 'ora.mdnsd' on 'orpheus'
CRS-2676: Start of 'ora.mdnsd' on 'orpheus' succeeded
CRS-2676: Start of 'ora.gipcd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'orpheus'
CRS-2676: Start of 'ora.gpnpd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orpheus'
CRS-2676: Start of 'ora.cssdmonitor' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'orpheus'
CRS-2672: Attempting to start 'ora.diskmon' on 'orpheus'
CRS-2676: Start of 'ora.diskmon' on 'orpheus' succeeded
CRS-2676: Start of 'ora.cssd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'orpheus'
CRS-2676: Start of 'ora.ctssd' on 'orpheus' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'orpheus'
CRS-2676: Start of 'ora.crsd' on 'orpheus' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 2e7657066f434fb3bf49ddfec8560948.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2e7657066f434fb3bf49ddfec8560948 (ORCL:DATA) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'orpheus'
CRS-2677: Stop of 'ora.crsd' on 'orpheus' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'orpheus'
CRS-2677: Stop of 'ora.asm' on 'orpheus' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'orpheus'
CRS-2677: Stop of 'ora.ctssd' on 'orpheus' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'orpheus'
CRS-2677: Stop of 'ora.cssdmonitor' on 'orpheus' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'orpheus'
CRS-2677: Stop of 'ora.cssd' on 'orpheus' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'orpheus'
CRS-2677: Stop of 'ora.gpnpd' on 'orpheus' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'orpheus'
CRS-2677: Stop of 'ora.gipcd' on 'orpheus' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'orpheus'
CRS-2677: Stop of 'ora.mdnsd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'orpheus'
CRS-2676: Start of 'ora.mdnsd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'orpheus'
CRS-2676: Start of 'ora.gipcd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'orpheus'
CRS-2676: Start of 'ora.gpnpd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'orpheus'
CRS-2676: Start of 'ora.cssdmonitor' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'orpheus'
CRS-2672: Attempting to start 'ora.diskmon' on 'orpheus'
CRS-2676: Start of 'ora.diskmon' on 'orpheus' succeeded
CRS-2676: Start of 'ora.cssd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'orpheus'
CRS-2676: Start of 'ora.ctssd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'orpheus'
CRS-2676: Start of 'ora.asm' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'orpheus'
CRS-2676: Start of 'ora.crsd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'orpheus'
CRS-2676: Start of 'ora.evmd' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'orpheus'
CRS-2676: Start of 'ora.asm' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'orpheus'
CRS-2676: Start of 'ora.DATA.dg' on 'orpheus' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'orpheus'
CRS-2676: Start of 'ora.registry.acfs' on 'orpheus' succeeded

orpheus     2012/10/29 20:56:44     /u01/app/11.2.0/grid/cdata/orpheus/backup_20121029_205644.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4008 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Now we can execute the root.sh script on Eurydice:
[root@eurydice grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-10-29 20:57:56: Parsing the host name
2012-10-29 20:57:56: Checking for super user privileges
2012-10-29 20:57:56: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE 
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node orpheus, number 1, and is terminating
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'eurydice'
CRS-2677: Stop of 'ora.cssdmonitor' on 'eurydice' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'eurydice'
CRS-2677: Stop of 'ora.gpnpd' on 'eurydice' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'eurydice'
CRS-2677: Stop of 'ora.gipcd' on 'eurydice' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'eurydice'
CRS-2677: Stop of 'ora.mdnsd' on 'eurydice' succeeded
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'eurydice'
CRS-2676: Start of 'ora.mdnsd' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'eurydice'
CRS-2676: Start of 'ora.gipcd' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'eurydice'
CRS-2676: Start of 'ora.gpnpd' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'eurydice'
CRS-2676: Start of 'ora.cssdmonitor' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'eurydice'
CRS-2672: Attempting to start 'ora.diskmon' on 'eurydice'
CRS-2676: Start of 'ora.diskmon' on 'eurydice' succeeded
CRS-2676: Start of 'ora.cssd' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'eurydice'
CRS-2676: Start of 'ora.ctssd' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'eurydice'
CRS-2676: Start of 'ora.drivers.acfs' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'eurydice'
CRS-2676: Start of 'ora.asm' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'eurydice'
CRS-2676: Start of 'ora.crsd' on 'eurydice' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'eurydice'
CRS-2676: Start of 'ora.evmd' on 'eurydice' succeeded

eurydice     2012/10/29 21:01:14     /u01/app/11.2.0/grid/cdata/eurydice/backup_20121029_210114.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4008 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

The critical part of the Eurydice install process is the line:
An active cluster was found during exclusive startup, restarting to join the cluster
If everything looks okay, click the OK button in the installer dialog.

The Grid install is now complete.

If you want to check that both nodes are connected, launch the asmca tool and check that ASM instances are shown on both Orpheus and Eurydice.
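Besides asmca, the clusterware itself can confirm that both nodes and both ASM instances are up; a short sketch, run as the oracle user from the Grid home used above:
[oracle@orpheus ~]$ /u01/app/11.2.0/grid/bin/crsctl check cluster -all
[oracle@orpheus ~]$ /u01/app/11.2.0/grid/bin/olsnodes -n
[oracle@orpheus ~]$ /u01/app/11.2.0/grid/bin/srvctl status asm
The first command should report the cluster services online on both orpheus and eurydice, and the last should show an ASM instance running on each node.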
