




May 2004






Procedure for upgrading Dell Oracle 9i RAC-Linux 2.2.1 to the Dell Oracle 9i RAC-Linux 2.3 solution stack



The purpose of this document is to describe a procedure for upgrading a system installed via the Oracle 9i Deployment CD 2.2.1 to the Dell Oracle 9i RAC 2.3 solution stack.


Assumptions: The following requirements should be met.


  • System is installed via Oracle 9i Deployment CD 2.2.1.

  • The user has the Red Hat® Advanced Server 2.1 Quarterly update 3 CDs.

  • Before starting this procedure, the user has backed up the database and other relevant files.

  • No packages that can impact the functioning of Oracle have been removed or modified by the user.

    Step by Step Instructions

    Follow the instructions below and execute them on each node.

    Shutting down services

  1. Shut down Oracle Instances

    1. Log in as oracle and shut down all Oracle instances running on your system (a minimal shutdown sketch follows this list).

    2. Shut down the listener and the Global Services Daemon by typing:

      1. lsnrctl stop

      2. gsdctl stop

    3. Log in as root and stop the Oracle Cluster Manager and the cfs service by typing:

      1. service ocmstart stop

      2. service cfs stop (if you are using the cfs service to mount OCFS volumes).

      3. umount -a -t ocfs (if you mounted the OCFS volumes manually).



      4. chkconfig --level 345 cfs off (to prevent the cfs service from starting at boot time)
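
    For reference, the following is a minimal sketch of shutting down one instance as the oracle user; the instance name racdb1 is only an example, so substitute your own ORACLE_SID on each node (or use your usual method, such as srvctl, instead).

      $ export ORACLE_SID=racdb1
      $ sqlplus "/ as sysdba"
      SQL> shutdown immediate
      SQL> exit
      $ ps -ef | grep ora_ | grep -v grep     (prints nothing once all instances on the node are down)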



    Installing the new kernel

  1. If you are using QLogic host bus adapters, open the file /etc/modules.conf in a text editor and comment out any lines related to the QLogic drivers, which look like the following:





  • alias scsi_hostadapter96 qla2200_6x

  • alias scsi_hostadapter97 qla2300_6x

  2. Edit /etc/modules.conf and change all scsi_hostadapter entries that use megaraid to megaraid2. In other words, change the entry:

    alias scsi_hostadapter1 megaraid

    to:

    alias scsi_hostadapter1 megaraid2
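
    To confirm the change, you can list the megaraid entries in the file; the output below is illustrative, and your adapter numbers may differ:

      # grep megaraid /etc/modules.conf
      alias scsi_hostadapter1 megaraid2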



  3. Find out the version of the kernel on your system by running the following command:



    # uname -r

    The kernel you should currently be running is 2.4.9-e.24smp.



  4. Mount the Dell Deployment CD version 2.3 and run install.sh from the top-level directory:

    #mount /dev/cdrom

    #/mnt/cdrom/install.sh



  5. Go to the directory /usr/lib/dell/dell-deploy-cd/scripts and run the 005-copy-cds.pl command. You will be prompted for all three Red Hat 2.1 Quarterly Update 3 CDs, so keep them ready.

    # cd /usr/lib/dell/dell-deploy-cd/scripts

    # ./005-copy-cds.pl



  6. When this command finishes, all of the Red Hat RPMs from the Quarterly Update 3 CDs should have been copied to the directory /usr/lib/dell/RH-errata. Now run the following commands:

    # cd /usr/lib/dell/dell-deploy-cd/scripts/framework_scripts

    #./20-sort-RPMS.pl

    # cd /usr/lib/dell/RH-errata/kernel



  7. If the system is running the smp kernel, run the following command:



    # rpm -ivh kernel-smp-2.4.9-e.34.i686.rpm --nodeps

    If the system is running the uniprocessor (up) kernel, run the following command:



    # rpm -ivh kernel-2.4.9-e.34.i686.rpm --nodeps

    If the system is running the enterprise kernel, run the following command:



    # rpm -ivh kernel-enterprise-2.4.9-e.34.i686.rpm --nodeps

    The command should execute successfully.
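
    Because the new kernel is installed with rpm -i rather than -U, the old kernel remains installed as a fallback. As an illustrative check on an smp system, querying the package should now list both releases:

      # rpm -q kernel-smp
      kernel-smp-2.4.9-e.24
      kernel-smp-2.4.9-e.34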





  8. If you are using QLogic host bus adapters, uncomment the lines that you commented out in step 1 of this section and change qla2300_6x to qla2300. Your modules.conf should then look like the following:

    Before

    #alias scsi_hostadapter97 qla2300_6x

    After

    alias scsi_hostadapter97 qla2300





  9. Upgrade the kernel headers by running the following command:

    # rpm -Uvh kernel-headers-2.4.9-e.34.i386.rpm



  10. Upgrade your kernel-source package by running the following command:



    # rpm -ivh kernel-source-2.4.9-e.34.i386.rpm



  11. Edit the 'default' parameter in /boot/grub/grub.conf so that it points to the e.34 kernel entry.
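
    The excerpt below is only an illustration of what the relevant grub.conf entries typically look like after the e.34 kernel rpm has been installed; the title strings, disk designations, and root device are examples and will differ on your system. Set 'default' to the index (counting from 0) of the e.34 entry.

      default=0
      timeout=10
      title Red Hat Linux Advanced Server (2.4.9-e.34smp)
              root (hd0,0)
              kernel /vmlinuz-2.4.9-e.34smp ro root=/dev/sda2
              initrd /initrd-2.4.9-e.34smp.img
      title Red Hat Linux Advanced Server (2.4.9-e.24smp)
              root (hd0,0)
              kernel /vmlinuz-2.4.9-e.24smp ro root=/dev/sda2
              initrd /initrd-2.4.9-e.24smp.img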



  12. Now reboot your system and you should be booted into the new kernel.



    Upgrading PowerPath

    Since PowerPath is not kernel-ABI compliant, it is necessary to uninstall PowerPath while booted into the old kernel and to reinstall it after booting into the new kernel.





  1. Type ‘service PowerPath stop’ to stop the PowerPath service



  2. Uninstall PowerPath 3.0.3 while booted into your 2.4.9-e.24 kernel by typing the following command:

    #rpm -e `rpm -qa | grep EMC`
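
    You can confirm that the packages were removed by repeating the query; it should now produce no output:

      # rpm -qa | grep EMC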



  3. Now reboot your system and you should be booted into the new kernel.



  4. Re-install PowerPath 3.0.3 by running the following command, substituting the name of the PowerPath 3.0.3 rpm package:

    #rpm -ivh <PowerPath 3.0.3 rpm package>




  5. Type service PowerPath start to start the PowerPath service.



  6. Type the following command and make sure that all of your PowerPath devices are visible:



    # cat /proc/partitions
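
    With PowerPath active you should see emcpower pseudo-devices listed alongside the regular sd devices. One way to filter for them is shown below; the device names are illustrative and depend on your storage configuration:

      # awk '{print $4}' /proc/partitions | grep emcpower
      emcpowera
      emcpowera1
      emcpowerb
      emcpowerb1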

    Upgrading to AS Quarterly Update 3



  1. Verify that the megaraid2 driver has been loaded and is in use by typing:

    #lsmod | grep megaraid2

    Verify that the output of the above command includes megaraid2



    Note: You should be using the megaraid2 driver that is included with the Red Hat kernel. In the case of the 2.3 solution stack, the validated and supported version is supplied by the Red Hat 2.4.9-e.34 kernel rpm.



  2. Upgrade your system to the latest quarterly update by one of the following methods:

    1. Upgrade from the Red Hat Network (RHN) using the up2date command (recommended)

    2. Upgrade manually by typing the following commands:



    # cd /usr/lib/dell/RH-errata/

    # mkdir temp

    # mv -f *rhn*.rpm temp/

    # mv -f up2date-*.rpm temp/

    # rpm -Uvh perl-suidperl-5.6.1-36.1.99ent.i386.rpm --force

    # cd /usr/local/dell/bin

    # ./250-quarterly_update (Note: Please be patient; this command takes a long time to finish because it updates all of the required packages. Do not terminate the process.)

    Upgrading to the latest drivers



  1. Type ./275-rpms_dkms to install Dynamic Kernel Module Support (DKMS).



  2. Type ./300-rpms_network to install the networking drivers.



  3. Type service network stop to stop the network. Type lsmod to determine whether the tg3 driver is loaded. If the tg3 driver is loaded, type rmmod tg3 (the equivalent command sequence is shown below).
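
    The equivalent command sequence is shown below; run rmmod only if the lsmod check actually lists tg3.

      # service network stop
      # lsmod | grep tg3
      # rmmod tg3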



  4. If you have Intel NICs, first unload the old Intel driver (e1000). The old driver name can be obtained by typing the lsmod command. Once you have obtained it, unload it by typing:

    #rmmod <driver name> (where <driver name> is typically e1000)

    Load the new Intel NIC driver by typing:



    #modprobe e1000




  5. If you have Broadcom NICs, type the following commands:

    # rmmod bcm5700

    # modprobe bcm5700





  6. To verify that the correct NIC drivers have been loaded, type the following:

    # ethtool -i ethX (where X is the Ethernet device number)

    The output of the above command should indicate the Intel® driver version to be 5.2.17.3 and the Broadcom driver version to be 6.0.5 (illustrative output below).
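
    Illustrative output for an Intel interface (the bus-info value will differ on your system):

      # ethtool -i eth0
      driver: e1000
      version: 5.2.17.3
      firmware-version: N/A
      bus-info: 02:04.0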





  7. Type ./305-rpms_advanced_network to install the advanced networking drivers.



  8. If you are using basp, reload the basp module by typing:

    # rmmod basp

    # modprobe basp

    To verify that the basp module has been loaded, type:

    lsmod




  9. Type ./310-rpms_network_apps to install the NIC teaming drivers.



  10. Type ./335-rpms_apps to install the Dell applications.



  11. Type service network start to start the network service.

    Upgrading Oracle Cluster File System



  1. If the system is using Oracle Cluster File System (OCFS), execute the following steps on each node to upgrade OCFS to the latest version, 1.0.9-12.

  2. Go to the directory /usr/local/dell/bin and run the following command:

    #./340-rpms_ocfs

    This command should upgrade your system with the latest OCFS RPMs.
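
    You can confirm the upgrade by listing the installed OCFS packages; each OCFS package reported should be at version 1.0.9-12 (the exact package names depend on your kernel flavor):

      # rpm -qa | grep -i ocfs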



  3. Edit the file /etc/fstab and add entries similar to the following:

    /dev/sdb1 /u01 ocfs _netdev 0 0

    The first field is the device name and the second is the mount point; these will differ depending on your hardware configuration. /dev/sdb1 and /u01 are just examples.

    Add one line for each shared device you are using as an OCFS volume. For example, if you are using PowerPath and three shared volumes, your /etc/fstab should look something like:

    /dev/emcpowera1 /u01 ocfs _netdev 0 0

    /dev/emcpowerb1 /u02 ocfs _netdev 0 0

    /dev/emcpowerc1 /u03 ocfs _netdev 0 0





  4. Now run the following commands as root.

    Make sure the network is up. If not, type:

    # service network start

    Load OCFS by typing the following commands:

    # service ocfs start (This will start the OCFS and load the module)

    # mount -a (This mounts the entries just added to /etc/fstab.)



  5. Type the following command and verify that your shared storage mount points have been mounted:

    # mount
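
    Each OCFS volume from /etc/fstab should appear in the output with type ocfs, for example (the device and mount point shown are illustrative):

      /dev/emcpowera1 on /u01 type ocfs (rw)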



    Upgrading to Oracle 9.2.0.4



  1. Upgrading Oracle Cluster Manager: On node 1, perform the following steps:



  1. If you are not logged into the XServer, type the following commands:

#startx

#xhost +




  2. Log in as oracle and type the following commands:

#mkdir $ORACLE_HOME/9204

#cd /usr/lib/dell/dell-deploy-cd/oracle-patchset/patchset

#cp p3095277_9204_LINUX.zip $ORACLE_HOME/9204

#cd $ORACLE_HOME/9204

#unzip -c p3095277_9204_LINUX.zip | cpio -idmv

Mount and copy the first Oracle® CD (9.2.0.1 Disk 1) to the hard drive:

# mkdir -p /oracle_cds/Disk1

# mount /dev/cdrom

# cp -r /mnt/cdrom/* /oracle_cds/Disk1

Run the Oracle® Universal Installer:

#/oracle_cds/Disk1/runInstaller

The Oracle® Universal Installer starts.



  1. In the Welcome window, click Next.

  2. In the File Locations window, enter the source path and destination and click Next.

  3. The source path is $ORACLE_HOME/9204/Disk1/stage/products.jar. The destination path should be $ORACLE_HOME.

  4. In the Available Products window, select Oracle9iR2 Cluster Manager 9.2.0.4 and click Next.

  5. In the Public Node Information window, enter the public node names and click Next.

  6. In the Private Node Information window, enter the interconnect node names and click Next.

  7. Click Install in the Summary window. A brief progress window appears, followed by the End of Installation window.

  8. Click Exit and confirm by clicking Yes.

  2. Disabling the Watchdog Timer:

  1. On each node, as user oracle, edit the disable_watchdog.sh script, located in the /usr/lib/dell/dell-deploy-cd/scripts folder. The first line of code in the script says:

OH=/opt/oracle/product/9.2.0

Change it to the path you have set as $ORACLE_HOME. For example:

OH=/opt/oracle/product/server32/9.2.0


  2. Run this script to disable the watchdog timer and configure the hangcheck timer by typing:

./disable_watchdog.sh



  3. Starting Oracle Cluster Manager: On each node, log in as root and perform the following steps to start and verify the oracm.

  1. Type $ORACLE_HOME/oracm/bin/ocmstart.sh at the command prompt. (The oracm process starts).

  2. Type ps -ef | grep oracm at the command prompt to verify that the processes are running. (The output shows multiple instances of oracm running).

NOTE: If multiple instances of oracm are not running, repeat steps 1 and 2.
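
    Illustrative output (the PIDs, timestamps, and ORACLE_HOME path will differ on your system; the extra grep -v simply hides the grep command itself):

      # ps -ef | grep oracm | grep -v grep
      root      2171     1  0 10:15 ?        00:00:00 /opt/oracle/product/9.2.0/oracm/bin/oracm
      root      2173  2171  0 10:15 ?        00:00:00 /opt/oracle/product/9.2.0/oracm/bin/oracm
      root      2174  2173  0 10:15 ?        00:00:00 /opt/oracle/product/9.2.0/oracm/bin/oracm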

  4. Upgrading Oracle Universal Installer to Version 2.2.0.18: Installation of Oracle 9i patchset 9.2.0.4 requires Oracle Universal Installer version 2.2.0.18. On node 1, perform the following steps to upgrade the Oracle Universal Installer:

  1. Log in as oracle.

  2. Type the following commands:

cd $ORACLE_HOME/bin

./runInstaller

The Oracle Universal Installer starts.


  1. In the Welcome window, click Next.

  2. In the Cluster Node Selection window, select all nodes and click Next. In the File Locations window, enter the source path and destination and click Next.

  3. The source path is $ORACLE_HOME/9204/Disk1/stage/products.jar. The destination path should be $ORACLE_HOME.

  4. In the Available Products window, select Oracle® Universal Installer 2.2.0.18 and click Next.

  5. In the Component Locations window, review the information and click Next.

  6. Click Install in the Summary window. A brief progress window appears, followed by the End of Installation window.

  7. Click Exit and confirm by clicking Yes.

  8. On all nodes, create symbolic links by typing the following commands:

# cd $ORACLE_BASE/oui/bin/linux/

# ln -s libclntsh.so.9.0 libclntsh.so (if you get an error like “ln: `libclntsh.so': File exists”, ignore it and continue to the next step.)




  5. Installing Oracle9i Patchset 9.2.0.4: On node 1, perform the following steps to install the 9.2.0.4 patchset:

  1. Type the following commands:

# cd $ORACLE_HOME/bin

# ./runInstaller (The Oracle® Universal Installer starts.)



  1. In the Welcome window, click Next.

  2. In the Cluster Node Selection window, select all nodes and click Next.



  3. In the File Locations window, verify the source path and destination and click Next. The source path is $ORACLE_HOME/9204/Disk1/stage/products.jar. The destination path should be $ORACLE_HOME.

  4. In the Available Products window, select Oracle9iR2 Patch Set 3 9.2.0.4.0 and click Next.

  5. Click Install in the Summary window.

  6. When prompted, run root.sh. A brief progress window appears, followed by the End of Installation window.

  7. In the End of Installation window, click Exit and confirm by clicking Yes.

  8. Type lsnodes at the command prompt. All of the node names that are in your cluster appear on the screen. For example, for a four-node cluster, the output is as follows:

node1

node2


node3

node4


If all of the node names do not appear, see "Starting Oracle Cluster Manager" in step 3 above.


  6. Now your system is ready to use Oracle 9i R2 9.2.0.4.




THIS INFOBYTE IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Dell and the Dell Logo are trademarks of Dell Inc. Oracle and Oracle9i are trademarks of Oracle Corporation. Red Hat is a registered trademark of Red Hat, Inc. Linux is a registered trademark of Linus Torvalds. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell disclaims proprietary interest in the marks and names of others.


© Copyright 2004 Dell Inc. All rights reserved. Reproduction or translation of any part of this work beyond what is permitted by U.S. copyright laws without the written permission of Dell Inc. is unlawful and strictly forbidden.

REVISION 1.0, 03/21/04

DELL Proprietary & Confidential

