Oracle 9i RAC Single Node Installation on Red Hat Linux 7.1
Steve Bourgeois, Polaris Database Systems
Last Updated 05/09/2003
Created 07/22/2002
- Introduction
- Environment
- Red Hat Linux Configuration
- Oracle 9i Pre-Installation Steps
- Install the Oracle 9i RAC Software
- Configure and Start Oracle Cluster Manager Processes
- Configure the Oracle 9i Net Listener
- Oracle RAC Database Configuration and Creation
- Conclusions
- References
1. Introduction
- Why Bother Doing This?
I've wanted to experiment with Oracle9i Real Application Clusters (RAC) for
quite some time now. Although I worked with Oracle Parallel Server
(the predecessor to RAC) starting with version 6.2 in 1992, I haven't yet been
fortunate enough to work with a client implementing 9i RAC (as
of this writing).
As someone who is always interested in learning about and tinkering with the
latest Oracle technology, I decided to set up a low-cost environment
where I could experiment with RAC. Seeking to use a home-built
Intel-based PC as the server platform, I downloaded copies of Red
Hat Linux 7.1 and Oracle9i Database Release 1 (9.0.1) to build
my environment.
This document
outlines the steps to install and configure Oracle 9i RAC on a
single Linux node. It assumes the reader has a working knowledge
of Oracle RDBMS, Oracle RAC, and Linux technology.
All of the
sample scripts and files referenced in this document can be downloaded
here.
- What About Oracle 9i RAC for Multiple
Nodes?
After doing
some research, I wasn't able to find a frugal solution to this
problem. The simplest approach would be to use a supported solution
like the Network
Appliance Filer storage appliance, but I don't happen to have
one of those hanging around in my spare computer parts closet.
I looked into
several open source software solutions like NBD,
ENBD,
DRBD,
and PVFS.
Unfortunately, each solution either wasn't designed to support
concurrent access from multiple nodes or didn't support parallel
concurrent writes. Interestingly, the author of DRBD indicates
that adding
this support wouldn't require much effort.
The most promising
software solution was Sistina's
Global File System (GFS) for Linux. Apparently, GFS was an
open source project until August 2001, when Sistina decided to
make it a commercial product. The GFS open source developers countered
by starting the OpenGFS
project, which deserves checking out at some point.
Finally, it is possible to use common SCSI controllers and disks to create
a dual-hosted storage solution by putting multiple SCSI controllers
on the same SCSI bus with a Y cable and external SCSI bus termination.
The Linux High Availability HOWTO: Attaching Shared Storage has a nice
section that details how to do this.
2. Environment
Motherboard: Gigabyte GA-8IG (Intel 845G chipset)
Processor: 2.26 GHz Pentium IV
Memory: 512 MB
Disk Controller: Promise ATA-133
Disks: (2) Quantum lct15 30 GB ATA-66
Operating System: Red Hat Linux 7.1, Workstation install, kernel 2.4.2-2 patched to 2.4.9-34
Swap Space: 1024 MB
Oracle Software: Oracle RAC 9.0.1, base version 9.0.1 patched to 9.0.1.3
3. Red Hat Linux Configuration
I chose Red
Hat Linux 7.1 because it is easily downloaded via the internet (unlike
SuSE Linux) and
is a certified operating system for Oracle 8.1.7 and 9.0.1.
If you need
the Red Hat installation media, you can download iso images directly
from RedHat
or from a mirror
site and burn your own copy. If you don't have access to a broadband
internet connection or a cdrom burner, you can buy the cdrom set
from Amazon.com
or have a friend burn a copy for you.
If you need
help installing Red Hat Linux, the installation documentation can
be found at RedHat.
I chose a Workstation install from the Red Hat installer,
which installs the X Windows software and development tools necessary
for a successful Oracle installation.
I installed the following rpm packages and patches on top of the base
Red Hat 7.1 install. If you patch your Red Hat installation,
please be sure to install the kernel and glibc rpm files appropriate
for your CPU architecture. For this installation, I used the i686
rpm files since my CPU is a Pentium IV.
kernel-2.4.9-34.i686.rpm
glibc-2.2.4-24.i686.rpm
filesystem-2.1.0-2.1.noarch.rpm
binutils-2.10.91.0.4-1.i386.rpm
cpp-2.96-85.i386.rpm
e2fsprogs-1.26-1.71.i386.rpm
e2fsprogs-devel-1.26-1.71.i386.rpm
gcc-2.96-85.i386.rpm
gcc-c++-2.96-85.i386.rpm
gcc-g77-2.96-85.i386.rpm
gcc-objc-2.96-85.i386.rpm
glibc-common-2.2.4-24.i386.rpm
glibc-devel-2.2.4-24.i386.rpm
initscripts-5.84.1-1.i386.rpm
kernel-headers-2.4.9-34.i386.rpm
kernel-source-2.4.9-34.i386.rpm
mkinitrd-3.2.6-1.i386.rpm
modutils-2.4.13-0.7.1.i386.rpm
SysVinit-2.78-17.i386.rpm
telnet-server-0.17-18.1.i386.rpm
wu-ftpd-2.6.1-16.7x.1.i386.rpm
xinetd-2.3.3-1.i386.rpm
rsh-server-0.17-25.i386.rpm (from Red Hat Linux 7.1 install cdrom #1)
Before patching the kernel, it is highly recommended to create a boot floppy
with the current kernel as user root using the mkbootdisk command:
# /sbin/mkbootdisk --device /dev/fd0 2.4.2-2
Once the rpm's are downloaded to a directory on the Linux server, they should be
checked for corruption before being installed. If
an rpm file does not return the message "md5 OK", then the
rpm is corrupt and should be downloaded again:
# /bin/rpm -K --nogpg *.rpm
The rpm's can
be installed as user root with the following commands:
# /bin/rpm -Uvh e2fsprogs*.rpm
# /bin/rpm -Uvh modutils*.rpm
# /bin/rpm -Uvh filesystem*.rpm
# /bin/rpm -Uvh mkinitrd*.rpm SysVinit*.rpm initscripts*.rpm
# /bin/rpm -Uvh gcc*.rpm cpp*.rpm
# /bin/rpm -Uvh kernel-headers*.rpm kernel-source*.rpm
# /bin/rpm -Uvh glibc*.rpm
# /bin/rpm -ivh kernel-2.4.9-34.i686.rpm
# /bin/rpm -Uvh binutils-2.10.91.0.4-1.i386.rpm
Once the rpm's
are installed, the /etc/lilo.conf file should be updated to include
an entry for the new kernel.
The lilo.conf
file from my system can be found here.
Note:
My motherboard's onboard ATA controller is not recognized properly
by Linux, so I installed a separate Promise ATA-133 controller card.
Therefore, the append="ide2=0x9400,0x9802 ide3=0x9c00,0xa002"
line may not be necessary in your lilo.conf file. If you are using ATA devices
and don't have an additional ATA controller installed, your boot and root partitions would be on hard drives hda, hdb,
hdc, or hdd.
If you are using SCSI devices, your boot and root partitions would
be on hard drives sda, sdb, sdc, sdd, etc.
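For reference, a minimal sketch of what the added lilo.conf entry might look like is shown below. The root device and the append line are assumptions based on my hardware layout; adjust both for your system.
image=/boot/vmlinuz-2.4.9-34
    label=linux-2.4.9-34
    read-only
    root=/dev/hde1
    append="ide2=0x9400,0x9802 ide3=0x9c00,0xa002"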
Once the /etc/lilo.conf
file is updated, the boot loader should be installed and the machine
should be rebooted with the new 2.4.9-34 kernel:
# /sbin/lilo
# /sbin/shutdown -r now
I also installed
some networking services rpm's as user root with the following commands:
# /bin/rpm -Uvh xinetd*.rpm
# /bin/rpm -Uvh rsh-server*.rpm
# /bin/rpm -Uvh telnet-server*.rpm
# /bin/rpm -Uvh wu-ftpd*.rpm
Once these modules are installed, the rexec, rlogin, rsh, telnet, and wu-ftpd files
in the /etc/xinetd.d directory should be edited so that the "disable"
option is set to "no".
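If you prefer to make the change from the command line rather than with an editor, a one-liner along these lines should work (a sketch; it assumes perl is present, which it is with a Workstation install):
# /usr/bin/perl -pi -e 's/disable\s*=\s*yes/disable = no/' \
    /etc/xinetd.d/rexec /etc/xinetd.d/rlogin /etc/xinetd.d/rsh \
    /etc/xinetd.d/telnet /etc/xinetd.d/wu-ftpd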
Once the changes
are complete, the xinetd daemon should be restarted.
# /etc/init.d/xinetd restart
4. Oracle 9i Pre-Installation Steps
We must perform
a few steps before launching the Oracle Universal Installer:
- Configure Linux Kernel Parameters
As the root
user, I set the SHMMAX (the maximum allowable size of a shared
memory segment) parameter to be 2 GB:
# /bin/echo 2147483648 > /proc/sys/kernel/shmmax
This change
is not persistent across system reboots, so this command should
be added to the end of the /etc/rc.d/rc.local script to ensure
the change is made at system startup. The text I appended to the
rc.local file in my environment can be found here.
- Create the oracle account and the OSDBA
group
The oracle
account is the Oracle software owner, and the OSDBA group is the
privileged operating system group. They can be created as user
root with the following commands:
# /usr/sbin/groupadd -g 200 dba
# /usr/sbin/useradd -d /home/oracle -g dba -s /bin/bash -u 500 oracle
As user root,
set the password for the oracle account. You will be prompted
for the new password and confirmation of that password.
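For example:
# /usr/bin/passwd oracle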
- Configure and Start the Linux Watchdog
Timer
The Oracle
Cluster Manager software uses the watchdog timer to reboot the
machine when the /dev/watchdog device isn't written to within
a given timeout period. It should be configured as the root user
with the following commands:
# cd /dev
# ./MAKEDEV watchdog
# /bin/chmod 600 /dev/watchdog
# /bin/chown oracle.dba /dev/watchdog
# /sbin/insmod softdog soft_margin=60
Loading the
softdog module is not persistent across system reboots, so the
insmod command should be added to the end of the /etc/rc.d/rc.local
script to ensure the module is loaded at system startup. The text
I appended to the rc.local file in my environment can be found
here. You can
verify that the softdog module is loaded by executing the lsmod
command and looking for a "softdog" entry.
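For example:
# /sbin/lsmod | grep softdog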
- Create the /var/opt/oracle directory
After the
Oracle software is installed, the root.sh script that is generated
attempts to create the RAC configuration file srvConfig.loc in
this directory. Unfortunately, the OUI doesn't create this directory as part of the installation process. This appears to be an Oracle bug, but we can work
around it by creating the directory as the root user with the
following commands:
# /bin/mkdir -p /var/opt/oracle
# /bin/chown oracle.dba /var/opt/oracle
# /bin/chmod 755 /var/opt/oracle
- Create and Bind Raw Disk Partitions
for the Database
Since this will be a single-node RAC environment, it really isn't necessary
for the database to reside on raw devices. However, because a clustered file
system for Linux has not yet been released (as of this writing), I decided
to make the environment as realistic as possible by using raw
devices.
Although I
didn't install every Oracle option, I did write the creation scripts
to create a disk partition for each tablespace or device that
might be asked for during the installation process. Since the
database will reside on raw devices, each partition is 1 MB larger
than the corresponding tablespace size to account for overhead.
The file use, file size, disk partition, and raw device layout are shown below:
File Use    Size     Partition     Raw Device
SRVCONFIG   100 MB   /dev/hdf5     /dev/raw/raw1
CTLFILE1    25 MB    /dev/hdf6     /dev/raw/raw2
CTLFILE2    25 MB    /dev/hdf7     /dev/raw/raw3
REDO1_I1    25 MB    /dev/hdf8     /dev/raw/raw4
REDO2_I1    25 MB    /dev/hdf9     /dev/raw/raw5
REDO1_I2    25 MB    /dev/hdf10    /dev/raw/raw6
REDO2_I2    25 MB    /dev/hdf11    /dev/raw/raw7
SYSTEM      500 MB   /dev/hdf12    /dev/raw/raw8
UNDO_I1     100 MB   /dev/hdf13    /dev/raw/raw9
UNDO_I2     100 MB   /dev/hdf14    /dev/raw/raw10
TEMP        100 MB   /dev/hdf15    /dev/raw/raw11
USERS       100 MB   /dev/hdf16    /dev/raw/raw12
TOOLS       25 MB    /dev/hdf17    /dev/raw/raw13
DRSYS       90 MB    /dev/hdf18    /dev/raw/raw14
INDEX       70 MB    /dev/hdf19    /dev/raw/raw15
EXAMPLE     160 MB   /dev/hdf20    /dev/raw/raw16
CWMLITE     100 MB   /dev/hdf21    /dev/raw/raw17
SPFILE      5 MB     /dev/hdf22    /dev/raw/raw18
NODEMON     1 MB     /dev/hdf23    /dev/raw/raw19
Since I needed
to create 19 partitions and don't happen to be using LVM software,
I simply created a single large extended partition in which I
could create 19 logical partitions.
The first
step was to manually create block devices /dev/hdf17 - /dev/hdf23
with the mknod command since they aren't created as part
of the default Linux installation.
Please note
that it is extremely important to use the correct major and minor
numbers for the disk drive and partition numbers in your environment.
In my environment, I am using disk hdf, so I can find the major
and minor numbers for /dev/hdf17 by checking the values for /dev/hdf16
with the ls command:
# ls -l /dev/hdf16
brw-rw---- 1 root disk 33, 80 Mar 23 2001 /dev/hdf16
Based on this
result, I know that I need to create /dev/hdf17 with major 33
and minor 81, /dev/hdf18 with major 33 and minor 82, etc. The
script that I used to create the block devices for my environment
can be found here.
The commands
to create the devices for my environment would be:
# /bin/mknod /dev/hdf17 b 33 81
# /bin/mknod /dev/hdf18 b 33 82
# /bin/mknod /dev/hdf19 b 33 83
# /bin/mknod /dev/hdf20 b 33 84
# /bin/mknod /dev/hdf21 b 33 85
# /bin/mknod /dev/hdf22 b 33 86
# /bin/mknod /dev/hdf23 b 33 87
The next step
is to actually create the logical partitions. Although they can be created
interactively with fdisk, I wrote a script to create the
logical partitions on disk hdf. My script can be found here.
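As a rough illustration of the approach (not my actual script), fdisk can be driven non-interactively by feeding it answers on standard input. The example below creates a single 101 MB logical partition on /dev/hdf and assumes the extended partition already exists; the answers are n (new partition), l (logical), an empty line for the default starting cylinder, +101M for the size, and w to write the partition table:
# /sbin/fdisk /dev/hdf <<EOF
n
l

+101M
w
EOF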
The final
step is to bind the raw Linux character devices to the logical partitions
(block devices) we just created with the raw command and
set file permissions and ownership.
For example,
to bind the first raw device /dev/raw/raw1 to block device /dev/hdf5:
# /usr/bin/raw /dev/raw/raw1 /dev/hdf5
# /bin/chmod 600 /dev/raw/raw1
# /bin/chown oracle.dba /dev/raw/raw1
The script
I used to bind all of my raw devices can be found here.
This change
is not persistent across system reboots, so all of the bind commands
should be added to the end of the /etc/rc.d/rc.local script to
ensure the bindings are done at system startup. The text I appended
to the rc.local file in my environment can be found here.
You can verify the bindings by running the raw command
in query mode:
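# /usr/bin/raw -qa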
You can optionally
create symbolic links to the raw devices to give your Oracle database
files meaningful names. The script that I used to create symbolic
links in /u01/oradata/rac to the raw devices can be found here.
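As an illustration, the first link (using the rac_raw_srvconfig name referenced later in this document) could be created as follows; the directory and link names are simply a naming convention, not a requirement:
$ /bin/mkdir -p /u01/oradata/rac
$ /bin/ln -s /dev/raw/raw1 /u01/oradata/rac/rac_raw_srvconfig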
- Create Oracle Database Configuration
Assistant (DBCA) Raw Device Map File
The map file
tells the DBCA which raw devices to use for each of the typical
tablespaces created during database creation. It is simply a list
of tablespace name variables and the corresponding raw device.
The DBCA raw
device map file for my environment can be found here.
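A hypothetical excerpt, assuming the simple tablespace-name=raw-device format described above (the variable names shown here are illustrative and must match what the DBCA expects):
system=/dev/raw/raw8
users=/dev/raw/raw12
temp=/dev/raw/raw11
undotbs1=/dev/raw/raw9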
- Install Blackdown JDK Version 1.1.8
v3
Oracle 9i
uses the Blackdown JDK. Download the JDK distribution from Blackdown.org.
As user root,
extract the distribution from the archive:
# /usr/bin/bzip2 -dc jdk118_v3-glibc-2.1.3.tar.bz2 | /bin/tar xf -
This will
create directory jdk118_v3, which should be moved to the /usr
directory as user root:
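# /bin/mv jdk118_v3 /usr
(This assumes the archive was extracted in your current working directory.)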
When the Oracle Universal Installer prompts you for the JDK location (in a
later step), the directory /usr/jdk118_v3 should be used.
- Set Oracle Environment Variables
Set the Oracle
environment variables by adding the appropriate entries to the
startup file for the oracle user. In my environment I added the
following entries to the $HOME/.bashrc
file since I was using bash as my shell:
PS1="`/usr/bin/whoami`@`/bin/hostname -s`: "
export PS1
ORACLE_BASE=/d01/app/oracle
export ORACLE_BASE
ORACLE_SID=rac1
ORACLE_HOME=/d01/app/oracle/product/9.0.1
ORACLE_TERM=xterm
export ORACLE_HOME ORACLE_SID ORACLE_TERM
NLS_LANG=american_america.WE8ISO8859P1
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG ORA_NLS33
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/oracm/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
PATH=.:$ORACLE_HOME/bin:$PATH
export PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native
export THREADS_FLAG
# Workaround for Oracle bug #2062423
# It deals with a glibc compatibility problem with Red Hat 7.1 and Oracle 8i
if [ -f /usr/i386-glibc21-linux/bin/i386-glibc21-linux-env.sh ];
then
LD_ASSUME_KERNEL=2.2.5
export LD_ASSUME_KERNEL
. /usr/i386-glibc21-linux/bin/i386-glibc21-linux-env.sh
fi
EDITOR=/bin/vi
export EDITOR
umask 022
alias db='. /usr/local/bin/oraenv'
alias cdpath='CDPATH=:.:$ORACLE_BASE/admin/$ORACLE_SID:$ORACLE_BASE/admin/rac:$ORACLE_HOME:$HOME;export CDPATH'
alias psid='ps -ef|grep $ORACLE_SID'
alias pso='ps -ef|grep oracle'
These environment variables will be set automatically the next time you log in as
the oracle user. The .bashrc file for the oracle user can also
be sourced manually with the following command:
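$ . $HOME/.bashrc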
5. Install the Oracle 9i RAC Software
The Oracle Universal
Installer (OUI) requires an X Windows interface to perform an interactive
installation. The OUI can be run directly on the database server,
or on any X Windows-capable computer.
- Mount the Oracle9i CDROM
The default
cdrom mount point for Red Hat Linux is /mnt/cdrom. The cdrom can
be mounted as user root with the following command:
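# /bin/mount /mnt/cdrom
(This assumes the default cdrom entry in /etc/fstab created by the Red Hat installer; otherwise specify the device and mount point explicitly.)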
- Run the Oracle Universal Installer
Since the Oracle9i distribution contains multiple cdrom's, the OUI should
NOT be run from a command line whose current working
directory is within the file system of the mounted Oracle9i cdrom.
Otherwise, it will not be possible to eject the current Oracle9i
cdrom when the OUI prompts for the next one.
The Oracle
Universal Installer can be run as user oracle with the following
command:
$ /mnt/cdrom/runInstaller
- At the Welcome screen, click
Next.
- Accept the default Oracle Inventory
Location: $ORACLE_BASE/orainventory (/u01/app/oracle/orainventory
in my environment) and click Next.
- Enter UNIX Group Name dba (created
in Oracle 9i Pre-Installation Steps) and click
Next.
- If prompted to do so, run the /tmp/orainstRoot.sh
script in a separate window as root. Click Continue once the
script has completed.
- At the File Locations screen
accept default values for ORACLE_HOME and source directories
and click Next.
- At the Available Products screen
choose "Oracle9i Database 9.0.1.0.0" and click Next.
- At the Installation Types screen
choose Custom installation type and click Next. Please
note that Oracle RAC can only be installed as a Custom installation
type.
- At the Available Product Components
screen, select "Oracle9i Real Application Clusters 9.0.1.0.0"
(I also deselected Oracle Spatial and the Oracle Enterprise
Manager products for my environment) and click Next.
- At the Component Locations screen
accept the default directories for Oracle Universal Installer
and Java Runtime Environment and click Next.
- At the Shared Configuration File
Name screen enter the raw device for shared configuration
file name (/d01/oradata/rac/rac_raw_srvconfig in my environment)
and click Next.
- At the Cluster Node Selection
screen, do not add any additional entries for remote nodes and
click Next.
- At the Privileged Operating System
Groups screen, accept the default values for the OSDBA and
OSOPER groups (dba in my environment) and click Next.
- At the Choose JDK Home Directory
screen, enter the path /usr/jdk118_v3 (installed in Oracle
9i Pre-Installation Steps) and click Next.
- At the Summary screen click Install
to continue. Mount additional cdrom's when prompted to do so
by the OUI.
- Oracle bug 1843232 may cause the following
error message to be raised:
"Error in invoking target ntcontab.o of makefile $ORACLE_HOME/network/lib/ins_net_client.mk"
From Oracle Metalink note 158104.1:
- From the command line, make a backup
copy of the $ORACLE_HOME/bin/genclntsh script as user oracle.
- Edit the genclntsh script and remove
the ${LD_SELF_CONTAINED} flag from the ld command in the
Create Library section.
- Run the genclntsh script from the
command line.
- Click on Retry (in the OUI) to continue.
- When prompted to do so, run the $ORACLE_HOME/root.sh
script in a separate window as root. Click OK once the script
has completed.
- At the End of Installation screen
click Exit and confirm Yes to complete the software installation.
6. Configure and Start Oracle Cluster Manager Processes
According to
the Oracle9i Real Application Clusters Installation and Configuration
Release 1 (9.0.1) manual, the Oracle Cluster Manager is:
" An operating system-dependent
component that discovers and tracks the membership state of nodes
by providing a common view of cluster membership across the cluster."
- Configure the Node Monitor Configuration
File
Edit the $ORACLE_HOME/oracm/admin/nmcfg.ora
file as user oracle to contain the following entries. The nmcfg.ora
file from my environment can be found here.
DefinedNodes=luna.polarisdb.com (the local node name)
CmDiskFile=/d01/oradata/rac/rac_raw_nodemon (the device bound to the node monitor raw partition)
- Configure the Oracle Cluster Manager
Argument File
I quickly found that the OCM cluster heartbeat daemon (watchdogd) is a bit
aggressive about shutting down my server. When the system
was under heavy load (during database creation), I would see "ping
came too late" messages in the $ORACLE_HOME/oracm/log/wdd.log
file, followed by the inevitable "Shutting down the entire node..."
To give myself
a little more breathing room, I increased the margin time for
the watchdogd daemon to 30 seconds by adding the -m 30000
flag to the watchdogd entry in the $ORACLE_HOME/oracm/admin/ocmargs.ora
file as user oracle. The ocmargs.ora file from my environment
can be found here.
watchdogd -g dba -m 30000
Please note
that while this may be an appropriate workaround on a system you
are experimenting with, the server can still be rebooted by watchdogd.
Oracle Metalink note 166830.1, Setting up Real Application
Cluster (RAC) environment on Linux - Single node, provides
a nice overview of the steps to modify the Linux watchdog timer
to skip the reboot completely.
- Start the Oracle Cluster Manager and
Node Monitor
Run the ocmstart.sh
script as user root with the following commands:
# ORACLE_HOME=/d01/app/oracle/product/9.0.1
# export ORACLE_HOME
# $ORACLE_HOME/oracm/bin/ocmstart.sh
- Initialize the raw device used by the
Oracle Global Services Daemon
The Global
Services Daemon allows the srvctl utility to perform system management
tasks. To initialize the raw device (specified in the /var/opt/oracle/srvConfig.loc
file), execute the following command as user oracle:
$ $ORACLE_HOME/bin/srvconfig -init
- Start the Global Services Daemon
Execute the
gsd program as user oracle with the following command:
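$ $ORACLE_HOME/bin/gsd &
(This assumes the gsd startup script that ships under $ORACLE_HOME/bin in 9.0.1; verify the exact name in your installation.)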
These commands can be appended to the /etc/rc.d/rc.local script so that they
are executed automatically at system startup. The text I appended
to the rc.local file in my environment can be found here.
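Pulling the various rc.local additions together, the tail of the script ends up looking roughly like the sketch below. The ORACLE_HOME path and raw device bindings come from the earlier sections; running the GSD through su as the oracle user is my assumption about how that last step might be automated.
# Oracle 9i RAC additions to /etc/rc.d/rc.local (sketch)
/bin/echo 2147483648 > /proc/sys/kernel/shmmax
/sbin/insmod softdog soft_margin=60
# re-bind the raw devices (repeat for /dev/raw/raw2 through /dev/raw/raw19)
/usr/bin/raw /dev/raw/raw1 /dev/hdf5
/bin/chmod 600 /dev/raw/raw1
/bin/chown oracle.dba /dev/raw/raw1
# start the Oracle Cluster Manager and the Global Services Daemon
ORACLE_HOME=/d01/app/oracle/product/9.0.1
export ORACLE_HOME
$ORACLE_HOME/oracm/bin/ocmstart.sh
/bin/su oracle -c "$ORACLE_HOME/bin/gsd &"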
7. Configure the Oracle 9i Net Listener
Run the Oracle
Net Configuration Assistant as user oracle with the following
command:
$ unset LANG
$ $ORACLE_HOME/bin/netca
If the OCM processes are not running (see
Configure and Start Oracle Cluster Manager Processes),
you may see the following error returned after executing netca:
"CM:CmThreadAttach: socket not connected (error=2)
cannot allocate memory for -1 nodes"
You can verify that the OCM processes are running by executing the
ps command and looking for "oranm" and "oracm"
entries:
$ ps -efw|egrep 'oranm|oracm'
We are now
ready to configure the Oracle 9i Net Listener.
- Select Cluster Configuration
and click Next.
- Click on Select all Nodes and
click Next.
- Select Listener Configuration
and click Next.
- Select Add and click Next.
- Choose the default listener name LISTENER
and click Next.
- Select protocols TCP and IPC and click
Next.
- Select the default port (1521) and click
Next.
- Enter the database name rac for
the IPC key value and click Next.
- Select NO when asked to configure
another listener and click Next and Finish to exit the Network
Configuration Assistant.
8. Oracle RAC Database Configuration and Creation
We are now ready
to configure and create a RAC database.
- Generate the RAC Database Creation
Scripts
At this point,
we are ready to generate the scripts to create our RAC database.
For simplicity, I decided to use the Oracle Database Configuration
Assistant (DBCA) to generate the scripts. The scripts I used to
create my environment can be found here.
The first
step is to set the DBCA_RAW_CONFIG environment variable to point
to the DBCA raw device map file created in Oracle
9i Pre-Installation Steps. To
set DBCA_RAW_CONFIG and run the DBCA, execute the following commands
as user oracle:
$ DBCA_RAW_CONFIG=$HOME/dbca_raw.map
$ export DBCA_RAW_CONFIG
$ $ORACLE_HOME/bin/dbca
- Choose Oracle Cluster Database
and click Next. (Note that this option will not appear if the
Global Services Daemon is not running as described in Configure
and Start Oracle Cluster Manager Processes).
- Choose Create A Database and
click Next.
- Select nodes (luna in my environment)
to configure as part of the RAC database and click Next.
- Choose the New Database template
and click Next.
- Enter the global database name (rac.polarisdb.com
in my environment) and click Next.
- Deselect the database features not desired
and click Next. In my environment, I kept Example Schemas
to get the Human Resources and Sales History data.
- Click on Additional Database Configurations
and deselect unwanted options. I chose to deselect Oracle Intermedia
and the resulting DRSYS tablespace, but kept Oracle JVM. Click
on OK and click Next.
- Select Dedicated Server Mode
and click Next.
- Accept the default initialization parameters
for now (151 MB SGA). The persistent init.ora parameter file
name should reflect the value defined when the DBCA_RAW_CONFIG
environment variable was set. Click on the DB Sizing
tab and select Database Character Set WE8ISO8859P1. Click
Next.
- At the Database Storage screen,
verify that the file names are set as per DBCA_RAW_CONFIG and
change the tablespace sizes to be in sync with the raw device
sizes:
File        Default Size    New Size
EXAMPLE     10 MB           160 MB
INDX        25 MB           70 MB
SYSTEM      325 MB          400 MB
TEMP        40 MB           100 MB
TOOLS       10 MB           25 MB
UNDOTBS1    200 MB          100 MB
USERS       25 MB           100 MB
REDO1       100 MB          25 MB
REDO2       100 MB          25 MB
Click Next to continue.
- Uncheck the Create Database checkbox
and check the Database Creation Scripts and click Next.
This will generate the database creation scripts, but will not
run them. The scripts will be created in the $ORACLE_BASE/admin/rac/scripts
directory. Click on Finish.
- Temporarily Disable Oracle RAC Support
Because of the problems with the watchdogd daemon I described in Configure
and Start Oracle Cluster Manager Processes,
I temporarily disabled RAC support to work around the many system
reboots I experienced during the database creation process.
To temporarily disable RAC support and relink the oracle binary, execute the
following commands as user oracle:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_off
$ make -f ins_rdbms.mk ioracle
- Execute the Database Creation Scripts
We are finally ready to execute the database creation scripts generated in Oracle
RAC Database Configuration and Creation! I modified the DBCA-generated
scripts to reduce the memory footprint of the SGA and modified the postDBCreation.sql
script to create symbolic links to the init.ora files for each
instance. I also created the postDBCreation2.sql script to create
the redo thread and the undo tablespace for the second RAC instance,
configure the server parameter file (SPFILE) for RAC, and create
an Oracle password file for the second RAC instance. The scripts
I used to create my environment can be found here.
- Re-enable Oracle RAC Support
Once the database
creation is complete, RAC support can be re-enabled by executing
the following commands as user oracle:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_on
$ make -f ins_rdbms.mk ioracle
- Start Up Both Instances in Parallel Mode
The instance
names in my environment were rac1 and rac2. Execute
the following commands to start up each RAC instance in shared
mode:
$ . oraenv
ORACLE_SID = [*] ? rac1
$ $ORACLE_HOME/bin/sqlplus "/ as sysdba"
startup
exit

$ . oraenv
ORACLE_SID = [rac1] ? rac2
$ $ORACLE_HOME/bin/sqlplus "/ as sysdba"
startup
exit
9. Conclusions
In this document
I have outlined the steps I took to install Oracle 9i RAC on an
Intel PC with Red Hat Linux 7.1. It proved to be an interesting
and educational experience due to the number of installation and
configuration issues that I had to work around. Of course, we all
have to deal with challenging environments when working with Oracle.
If you have
any questions or comments about this document or want to report
any errata or omissions, please feel free to email
me.
10. References
- Oracle9i Real Application Clusters Concepts Release 1 (9.0.1),
Oracle Part No. A89867-01, June 2001
- Oracle9i Real Application Clusters Installation and Configuration Release 1 (9.0.1),
Oracle Part No. A89868-01, June 2001
- Oracle9i Release Notes Release 1 (9.0.1) for Linux Intel,
Oracle Part No. A90356-02, September 2001
- Setting up Real Application Cluster (RAC) environment on Linux - Single node,
Oracle Metalink Note 166830.1, July 2002
- Upgrading the Linux Kernel on Red Hat Linux systems,
Red Hat, Inc.