
Upgrade CRS and Database 10.2.0.1 to 10.2.0.5

Anup - Thursday, October 6, 2022

First Task: Upgrade CRS

  • Database Name: PROD

  • Instances: PROD1  Node name: racnode1

  • Instances: PROD2  Node name: racnode2

  • Oracle Database Home and ASM Home: /export/home/oracle/db

  • Oracle CRS Home: /export/home/oracle/crs
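For convenience, the shell environment for this walkthrough can be set up as sketched below. The paths are the ones listed above; adjust them to your own installation before running any of the commands in this article.

```shell
#!/bin/sh
# Session environment for this walkthrough.
# Paths are the ones listed above; adjust for your installation.
ORACLE_HOME=/export/home/oracle/db     # Database and ASM home
ORA_CRS_HOME=/export/home/oracle/crs   # Clusterware (CRS) home
PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export ORACLE_HOME ORA_CRS_HOME PATH

echo "DB home : $ORACLE_HOME"
echo "CRS home: $ORA_CRS_HOME"
```

With both `bin` directories on the PATH, `crsctl`, `srvctl`, and `sqlplus` can be invoked without the leading `./` used in the transcripts below.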

Step 1: Check the status of CRS and its resources.


cd $ORA_CRS_HOME

cd bin/

./crsctl check crs


CSS appears healthy

CRS appears healthy

EVM appears healthy


./crs_stat -t



Step 2: Check the CRS active and software versions.


bash-3.2$ ./crsctl query crs activeversion

CRS active version on the cluster is [10.2.0.1.0]


bash-3.2$ ./crsctl query crs softwareversion

CRS software version on node [racnode1] is [10.2.0.1.0]


bash-3.2$ srvctl status asm -n racnode1

ASM instance +ASM1 is running on node racnode1.


bash-3.2$ srvctl status asm -n racnode2

ASM instance +ASM2 is running on node racnode2.


Step 3: Shut down the instance on the first node, racnode1.


We stop only one node at a time since we are applying the patch in a rolling fashion.

./srvctl stop instance -d PROD -i PROD1

./srvctl stop asm -n racnode1

./srvctl stop nodeapps -n racnode1

 

Cross-verify that all the resources on racnode1 (the PROD1 instance, ASM, and node applications) are down.

 

./crsctl check crs

CSS appears healthy

CRS appears healthy

EVM appears healthy

 

./crs_stat -t

 

Name           Type           Target    State     Host

------------------------------------------------------------

ora....D1.inst application    OFFLINE   OFFLINE

ora....D2.inst application    ONLINE    ONLINE    racnode2

ora.PROD.db    application    ONLINE    ONLINE    racnode2

ora....SM1.asm application    OFFLINE   OFFLINE

ora....E1.lsnr application    OFFLINE   OFFLINE

ora....de1.gsd application    OFFLINE   OFFLINE

ora....de1.ons application    OFFLINE   OFFLINE

ora....de1.vip application    OFFLINE   OFFLINE

ora....SM2.asm application    ONLINE    ONLINE    racnode2

ora....E2.lsnr application    ONLINE    ONLINE    racnode2

ora....de2.gsd application    ONLINE    ONLINE    racnode2

ora....de2.ons application    ONLINE    ONLINE    racnode2

ora....de2.vip application    ONLINE    ONLINE    racnode2

 

Step 4: Download and unzip patch 8202632 (the 10.2.0.5 patch set).

  • Run runInstaller from the unzipped directory.
  • Select the 10g CRS home path that needs to be upgraded.
  • The patch will be applied to this home.

Once the patching is done, Oracle prompts you to run two scripts as the root user:

Script 1: Shut down the CRS daemons by issuing the following command:

/export/home/oracle/crs/bin/crsctl stop crs

Script 2: Run the shell script located at:

/export/home/oracle/crs/install/root102.sh

This script will automatically start the CRS daemons on the patched node upon completion.
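Putting the rolling approach together, the per-node order of operations can be sketched as a dry run. The script below only prints the planned commands, nothing is executed; the names are the ones used in this article. Note that runInstaller itself is run once (from racnode1), while the two root scripts are then run on each node in turn.

```shell
#!/bin/sh
# Dry-run sketch of the rolling CRS patch order for this cluster.
# It only prints the planned commands; nothing is executed.
DB=PROD
CRS=/export/home/oracle/crs
plan=""
i=1
for NODE in racnode1 racnode2; do
    plan="$plan
# --- $NODE ---
srvctl stop instance -d $DB -i ${DB}$i
srvctl stop asm -n $NODE
srvctl stop nodeapps -n $NODE
# then, as root, the two scripts prompted by the installer:
$CRS/bin/crsctl stop crs
$CRS/install/root102.sh"
    i=$((i + 1))
done
echo "$plan"
```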

 

bash-3.2$ ./crsctl query crs activeversion

CRS active version on the cluster is [10.2.0.1.0]

bash-3.2$ ./crsctl query crs softwareversion

CRS software version on node [racnode1] is [10.2.0.5.0]

bash-3.2$

 

You can see above that the CRS active version is still shown as 10.2.0.1.0. It changes to the new version only after the scripts mentioned above (Script 1 and Script 2) are run on the second node (racnode2) as well.

Now, let's proceed to the next node (racnode2).

Node 2:

Stop the node applications on racnode2:

bash-3.2$ ./srvctl stop nodeapps -n racnode2

 

bash-3.2$ ./crsctl query crs activeversion

CRS active version on the cluster is [10.2.0.1.0]

bash-3.2$ ./crsctl query crs softwareversion

CRS software version on node [racnode2] is [10.2.0.1.0]

 

Now, run on racnode2 the two scripts that Oracle prompted for on racnode1 after the patch installation.

 

Script 1: Shut down the CRS daemons by issuing the following command:

/export/home/oracle/crs/bin/crsctl stop crs

Script 2: Run the shell script located at:

/export/home/oracle/crs/install/root102.sh

This script will automatically start the CRS daemons on the patched node upon completion.

Verify that the CRS active version and the software version have changed to 10.2.0.5.0 on both nodes.

bash-3.2$ ./crsctl query crs activeversion

CRS active version on the cluster is [10.2.0.5.0]

bash-3.2$ ./crsctl query crs softwareversion

CRS software version on node [racnode2] is [10.2.0.5.0]

bash-3.2$


Second Task: Upgrade the Database


Step 1: Stop the services.

 

bash-3.2$ ./srvctl stop database -d PROD

bash-3.2$ ./srvctl stop asm -n racnode1


bash-3.2$ srvctl stop asm -n racnode2

bash-3.2$ ./srvctl stop nodeapps -n racnode1

bash-3.2$ ./srvctl stop nodeapps -n racnode2

bash-3.2$

 

Step 2:

From the unzipped patch, run the runInstaller file on racnode1.

Select the 10g Database home that needs to be upgraded and verify the home path.




Step 3:

Now, let's start the Nodeapps and the ASM instances on both the nodes.

 

bash-3.2$ ./srvctl start nodeapps -n racnode1

bash-3.2$ ./srvctl start nodeapps -n racnode2

bash-3.2$ ./srvctl start asm -n racnode1

bash-3.2$ ./srvctl start asm -n racnode2

bash-3.2$

 

Step 4:

Database Upgrade.


bash-3.2$ export ORACLE_HOME=/export/home/oracle/db

bash-3.2$ export PATH=$ORACLE_HOME/bin:$PATH

bash-3.2$ export ORACLE_SID=PROD1

 

 

 

sqlplus / as sysdba

 

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:02:55 2013

 

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

 

Connected to an idle instance.

 

SQL> startup nomount

ORACLE instance started.

 

Total System Global Area 926941184 bytes

Fixed Size 1270748 bytes

Variable Size 247467044 bytes

Database Buffers 675282944 bytes

Redo Buffers 2920448 bytes

SQL>



Step 5:

Create a pfile from the spfile so that the cluster parameters mentioned below can be removed.

SQL> create pfile='/export/home/oracle/initPROD1.ora' from spfile;

 

File created.

 

SQL> shut immediate

ORA-01507: database not mounted

ORACLE instance shut down.


Step 6:

Now, remove or comment out the cluster parameters below in the newly created pfile.


vi /export/home/oracle/initPROD1.ora

#REMOVE THESE PARAMETERS#

#*.cluster_database_instances=2

#*.cluster_database=true

#PROD2.instance_number=2

#PROD1.instance_number=1
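Instead of editing the pfile by hand in vi, the same four parameters can be commented out with sed. The sketch below demonstrates this on a sample copy in /tmp; for the real run, point PFILE at the pfile created in Step 5 (/export/home/oracle/initPROD1.ora).

```shell
#!/bin/sh
# Comment out the RAC parameters in a pfile in one pass.
# Demonstrated on a sample file; set PFILE to the real pfile
# created in Step 5 for the actual run.
PFILE=/tmp/initPROD1.ora.sample
cat > "$PFILE" <<'EOF'
*.cluster_database_instances=2
*.cluster_database=true
PROD2.instance_number=2
PROD1.instance_number=1
*.db_name='PROD'
EOF

# Prefix the cluster-related lines with '#', leaving everything else intact
sed -e 's/^\*\.cluster_database/#&/' \
    -e 's/^PROD[12]\.instance_number/#&/' "$PFILE" > "$PFILE.new"
cat "$PFILE.new"
```

After reviewing "$PFILE.new", replace the original pfile with it.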

 

Step 7:

Start up the instance PROD1 (on racnode1) in upgrade mode using the modified pfile.

 

[oracle@racnode1 dbs]$ sqlplus / as sysdba

 

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:09:21 2013

 

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

 

Connected to an idle instance.

 

SQL> startup upgrade pfile='/export/home/oracle/initPROD1.ora';

ORACLE instance started.

 

Total System Global Area 926941184 bytes

Fixed Size 1270748 bytes

Variable Size 247467044 bytes

Database Buffers 675282944 bytes

Redo Buffers 2920448 bytes

Database mounted.

Database opened.

 

Step 8:

Once the database is started in UPGRADE mode, run the catupgrd.sql script to carry out the upgrade process.


SQL> spool '/export/home/oracle/10204upgrd.txt';

SQL> @/export/home/oracle/db/rdbms/admin/catupgrd.sql


Step 9:

  • Once the script has executed successfully, shut down the database and open it normally using the above pfile.

  • Run the utlrp.sql script from the rdbms/admin path under the Oracle DB home to recompile the invalid objects.


[oracle@racnode1 dbs]$ sqlplus / as sysdba

 

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:33:08 2013

 

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to:

Oracle Database 10g Release 10.2.0.4.0 - Production

With the Real Application Clusters option

 

SQL> select status,instance_name from v$instance;

 

STATUS   INSTANCE_NAME

------------ ----------------

OPEN MIGRATE PROD1

 

SQL> shut immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> exit

Disconnected from Oracle Database 10g Release 10.2.0.4.0 - Production

With the Real Application Clusters option

 

[oracle@racnode1 dbs]$ sqlplus / as sysdba

 

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:33:53 2013

 

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

 

Connected to an idle instance.

 

SQL> startup pfile=/export/home/oracle/initPROD1.ora

ORACLE instance started.

 

Total System Global Area 926941184 bytes

Fixed Size 1270748 bytes

Variable Size 247467044 bytes

Database Buffers 675282944 bytes

Redo Buffers 2920448 bytes

Database mounted.

Database opened.


SQL> @/export/home/oracle/db/rdbms/admin/utlrp.sql

 

SQL> shut immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL>

Step 10:

Now, let's start the instance PROD1 using the global spfile.


startup

ORACLE instance started.

 

Total System Global Area 926941184 bytes

Fixed Size 1270748 bytes

Variable Size 247467044 bytes

Database Buffers 675282944 bytes

Redo Buffers 2920448 bytes

Database mounted.

Database opened.

SQL>

SQL> select status,instance_name from v$instance;

 

STATUS       INSTANCE_NAME
------------ ----------------
OPEN         PROD1

 

SQL> select * from v$version;

 

BANNER

---------------------------------------------------------------

Oracle Database 10g Release 10.2.0.4.0 - Production

PL/SQL Release 10.2.0.4.0 - Production

CORE 10.2.0.4.0 Production

TNS for Linux: Version 10.2.0.4.0 - Production

NLSRTL Version 10.2.0.4.0 - Production



Step 11:

Now that the PROD database is upgraded, let's check the status of the instances:


[oracle@racnode1 dbs]$ srvctl status database -d PROD

Instance PROD1 is running on node racnode1

Instance PROD2 is not running on node racnode2

[oracle@racnode1 dbs]$

 

 

Start the second instance, PROD2:

srvctl start instance -d PROD -i PROD2

 

 

This completes the upgrade activity.

