Download the Grid, DB, and latest CPU/RU software.
Ensure at least 81 GB of free space in the mount point.
Ensure the Oracle Grid Infrastructure software has been downloaded and copied to the server.
Ensure every disk group has its ASM compatibility (COMPATIBLE.ASM) set to at least 12.
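To verify the attribute, you can query the disk group settings from the ASM instance and raise them if needed; a minimal sketch, assuming a disk group named DATA and a SYSASM login:
sqlplus / as sysasm
SELECT name, compatibility, database_compatibility FROM v$asm_diskgroup;
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1.0.0.0';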
Go to the 19c RAC dump directory and unzip the V982068-01 grid software using the command below:
unzip V982068-01.zip -d /oracle/grid/19c
Go to /oracle/grid/19c and verify the clusterware:
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /oracle/grid/12cr2 -dest_crshome /oracle/grid/19c -dest_version 19.0.0.0.0 -fixup -verbose
Issue: cluvfy may flag minor pre-upgrade warnings.
Solution: These can be ignored if they do not block the upgrade.
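To make the cluvfy run easier to review, you can capture its output to a file and filter for problems; a sketch (the log path /tmp/cluvfy_pre19c.log is an assumption):
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /oracle/grid/12cr2 -dest_crshome /oracle/grid/19c -dest_version 19.0.0.0.0 -fixup -verbose | tee /tmp/cluvfy_pre19c.log
grep -iE 'failed|warning' /tmp/cluvfy_pre19c.log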
Check GRID Infrastructure software version and Clusterware status:
cd /oracle/grid/12cr2/bin
bash-3.2$ ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
bash-3.2$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
bash-3.2$ ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.DATA.dg ora....up.type ONLINE ONLINE racnode1
ora....11.lsnr ora....er.type ONLINE ONLINE racnode1
ora....N1.lsnr ora....er.type ONLINE ONLINE racnode1
ora.asm ora.asm.type ONLINE ONLINE racnode1
ora.cvu ora.cvu.type ONLINE ONLINE racnode2
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE racnode1
ora.oc4j ora.oc4j.type ONLINE ONLINE racnode2
ora.ons ora.ons.type ONLINE ONLINE racnode1
ora.prod.db ora....se.type ONLINE ONLINE racnode1
ora....SM1.asm application ONLINE ONLINE racnode1
ora....E1.lsnr application ONLINE ONLINE racnode1
ora....E1.lsnr application ONLINE ONLINE racnode1
ora....de1.gsd application OFFLINE OFFLINE
ora....de1.ons application ONLINE ONLINE racnode1
ora....de1.vip ora....t1.type ONLINE ONLINE racnode1
ora....SM2.asm application ONLINE ONLINE racnode2
ora....E2.lsnr application ONLINE ONLINE racnode2
ora....E2.lsnr application ONLINE ONLINE racnode2
ora....de2.gsd application OFFLINE OFFLINE
ora....de2.ons application ONLINE ONLINE racnode2
ora....de2.vip ora....t1.type ONLINE ONLINE racnode2
ora....ry.acfs ora....fs.type ONLINE ONLINE racnode1
ora.scan1.vip ora....ip.type ONLINE ONLINE racnode1
Check Database status and configuration:
bash-3.2$ ./srvctl status database -d PROD
Instance PROD1 is running on node racnode1
Instance PROD2 is running on node racnode2
bash-3.2$ ./srvctl config database -d PROD
Database unique name: PROD
Database name:
Oracle home: /oracle/db/12cr2
Oracle user: oracle
Spfile: +DATA/prod/spfileprod.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: PROD
Database instances: PROD1,PROD2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
Upgrade the Grid Infrastructure from 12cR2 to 19c by executing gridSetup.sh:
Update OPatch in the new 19c grid home to the latest version, as sketched below.
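OPatch is refreshed by unzipping patch 6880880 over the home's OPatch directory; a sketch, assuming the Linux x86-64 zip p6880880_190000_Linux-x86-64.zip has been downloaded to /home/oracle:
cd /oracle/grid/19c
mv OPatch OPatch.bak
unzip -q /home/oracle/p6880880_190000_Linux-x86-64.zip
./OPatch/opatch version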
Increase the mount point size to 85 GB.
Download and unzip the latest RU patch 34416665: GI RELEASE UPDATE 19.17.0.0.0.
Go to /oracle/grid/19c.
To apply only Release Updates:
./gridSetup.sh -applyRU <RU_PATCH_LOCATION>
Example:
./gridSetup.sh -applyRU /home/oracle/34416665
Log location: /oracle/grid/19c/cfgtoollogs/opatch
Issue:
PRCT-1011 : Failed to run "asmcmd". Detailed error: kgfnGetConnDetails requires 4 parameters at
Solution:
1. Check the 'asmcmd' login on both nodes:
export ORACLE_HOME=/oracle/grid/12cr2
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=+ASM2
asmcmd
kgfnGetConnDetails requires 4 parameters at /oracle/grid/12cr2/lib/asmcmdbase.pm line 5704.  <-- this is the error
2. Execute the following on both nodes:
/usr/bin/make -f /oracle/grid/12cr2/rdbms/lib/ins_rdbms.mk client_sharedlib libasmclntsh12.ohso libasmperl12.ohso ORACLE_HOME=/oracle/grid/12cr2
3. Run "root.sh" again on both nodes:
/oracle/grid/12cr2/root.sh
4. Check the 'asmcmd' login on both nodes:
export ORACLE_HOME=/oracle/grid/12cr2
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=+ASM2
asmcmd
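Once asmcmd opens cleanly on both nodes, a quick sanity check of the disk groups (a sketch):
asmcmd lsdg
asmcmd ls +DATA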
When prompted, run the rootupgrade.sh script on each node in the cluster that you want to upgrade.
Run the script on the local node first.
The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
After running the rootupgrade.sh script, click the OK button.
Issue:
rootupgrade.sh failed.
Validate the log file: $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_xxxx.log
Fix the issue and rerun rootupgrade.sh with the -force option.
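For example, on the failing node (a sketch; the rootcrs log file name pattern is an assumption):
tail -100 /oracle/grid/19c/cfgtoollogs/crsconfig/rootcrs_racnode1_*.log
/oracle/grid/19c/rootupgrade.sh -force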
Check the Clusterware upgrade version:
cd /oracle/grid/19c/bin
./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
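You can also confirm the software version staged on each node (a sketch):
./crsctl query crs softwareversion racnode1
./crsctl query crs softwareversion racnode2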
./srvctl status database -d PROD
Instance PROD1 is running on node racnode1
Instance PROD2 is running on node racnode2
Validate the services after CRS upgrade:
./crsctl status resource -t
Then check the status of the clusterware globally as follows:
# crsctl check cluster -all
**************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Install the 19c RAC database software
Check the pre-upgrade status:
cd /home/oracle/19c
./runcluvfy.sh stage -pre dbinst -upgrade -src_dbhome /oracle/db/12cr2 -dest_dbhome /oracle/db/19c -dest_version 19.0.0.0.0
Go to the 19c database dump directory and unzip the database software into the new home:
cd /home/oracle/19c
unzip V982063-01-DB.zip -d /oracle/db/19c
cd /oracle/db/19c
./runInstaller
Upgrade the RAC database manually
Run the preupgrade utility after setting the 12cR2 environment variables:
export ORACLE_HOME=/oracle/db/12cr2
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=PROD1
/oracle/db/19c/jdk/bin/java -jar /oracle/db/19c/rdbms/admin/preupgrade.jar TERMINAL TEXT
Run the pre-upgrade steps, including preupgrade_fixups.sql, as shown below.
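A sketch of running the generated fixup script on node 1; the path mirrors the postupgrade_fixups.sql location used later in this post (adjust 12chome to your environment):
sqlplus / as sysdba
@12chome/cfgtoollogs/PROD/preupgrade/preupgrade_fixups.sql
exit;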
Copy the dbs and network files from the old home to the new home (on both servers), as sketched below.
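A minimal sketch, assuming the old and new homes used throughout this post and default file names (run on both nodes, adjusting the instance suffix; the password file name orapwPROD1 is an assumption):
cp /oracle/db/12cr2/dbs/initPROD1.ora /oracle/db/12cr2/dbs/orapwPROD1 /oracle/db/19c/dbs/
cp /oracle/db/12cr2/network/admin/*.ora /oracle/db/19c/network/admin/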
Update the pfile with the Oracle 19c locations from node 1. First, copy the spfile out of ASM:
ASMCMD> pwd
+DATA/PROD
ASMCMD> cp spfileprod.ora /home/oracle
copying +DATA/PROD/spfileprod.ora -> /home/oracle/spfileprod.ora
Also set cluster_database=false in the spfile, from node 1:
SQL> alter system set cluster_database=false scope=spfile sid='PROD1';
System altered.
Enable RAC on new 19c home in node1
export ORACLE_HOME=/oracle/db/19c
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=PROD1
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_on
make -f ins_rdbms.mk ioracle
Shut down the database and remove it from OCR, using the old home:
export ORACLE_HOME=/oracle/db/12cr2
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=PROD1
srvctl stop database -d PROD
srvctl remove database -d PROD
Remove the database PROD? (y/[n]) y
Upgrade the database from the new home, on the 1st node only:
export ORACLE_HOME=/oracle/db/19c
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus / as sysdba
create pfile from spfile='+DATA/PROD/spfileprod.ora';
startup upgrade pfile=initPROD1.ora
exit;
You can run the upgrade using either of the following commands.
# Regular upgrade command.
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl $ORACLE_HOME/rdbms/admin/catupgrd.sql
# Shorthand command.
$ORACLE_HOME/bin/dbupgrade
[oracle@racnode1 bin]$ ./dbupgrade
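catctl.pl, and therefore the dbupgrade wrapper, accepts a parallelism option; for example, running with 4 parallel processes (a sketch):
$ORACLE_HOME/bin/dbupgrade -n 4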
Update the timezone, gather stats, and run the post-upgrade script generated by the preupgrade utility.
Update timezone
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE pfile=initPROD1.ora;
SET SERVEROUTPUT ON
DECLARE
l_tz_version PLS_INTEGER;
BEGIN
SELECT DBMS_DST.get_latest_timezone_version
INTO l_tz_version
FROM dual;
DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
DBMS_DST.begin_upgrade(l_tz_version);
END;
/
SHUTDOWN IMMEDIATE;
STARTUP pfile=initPROD1.ora;
-- Do the upgrade.
SET SERVEROUTPUT ON
DECLARE
l_failures PLS_INTEGER;
BEGIN
DBMS_DST.upgrade_database(l_failures);
DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
DBMS_DST.end_upgrade(l_failures);
DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
END;
/
-- Check new settings.
SQL> SELECT * FROM v$timezone_file;
FILENAME VERSION CON_ID
-------------------- ---------- ----------
timezlrg_32.dat 32 0
COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20
SELECT property_name, property_value
FROM database_properties
WHERE property_name LIKE 'DST_%'
ORDER BY property_name;
PROPERTY_NAME PROPERTY_VALUE
------------------------------ --------------------
DST_PRIMARY_TT_VERSION 32
DST_SECONDARY_TT_VERSION 0
DST_UPGRADE_STATE NONE
sqlplus / as sysdba
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
exit;
# AUTOFIXUP
sqlplus / as sysdba
@12chome/cfgtoollogs/PROD/preupgrade/postupgrade_fixups.sql
exit;
Set cluster_database back to true in the spfile (+DATA/PROD/spfileprod.ora):
alter system set cluster_database=true scope=spfile;
Shut down the database.
Add the database to CRS from the new home:
srvctl add database -d PROD -o /oracle/db/19c -p '+DATA/PROD/spfileprod.ora' -role PRIMARY
srvctl add instance -d PROD -i PROD1 -n racnode1
srvctl add instance -d PROD -i PROD2 -n racnode2
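Registering the database with srvctl does not start it; bring it up from the new home before checking the status (a sketch):
srvctl start database -d PROD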
Check database status and configuration:
bash-3.2$ ./srvctl status database -d PROD
Instance PROD1 is running on node racnode1
Instance PROD2 is running on node racnode2
[oracle@racnode1 bin]$ ./srvctl config database -d PROD
Database unique name: PROD
Database name: PROD
Oracle home: /oracle/db/19c
Oracle user: oracle
Spfile: +DATA/prod/spfileprod.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: PROD1,PROD2
Configured nodes: racnode1,racnode2
Database is administrator managed
./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.DATA.dg ora....up.type ONLINE ONLINE racnode1
ora....ER.lsnr ora....er.type ONLINE ONLINE racnode1
ora....N1.lsnr ora....er.type ONLINE ONLINE racnode2
ora....N2.lsnr ora....er.type ONLINE ONLINE racnode1
ora....N3.lsnr ora....er.type ONLINE ONLINE racnode1
ora.MGMTLSNR ora....nr.type ONLINE ONLINE racnode1
ora.OCR.dg ora....up.type ONLINE ONLINE racnode1
ora.asm ora.asm.type ONLINE ONLINE racnode1
ora.cvu ora.cvu.type ONLINE ONLINE racnode2
ora.mgmtdb ora....db.type ONLINE ONLINE racnode1
ora....network ora....rk.type ONLINE ONLINE racnode1
ora.oc4j ora.oc4j.type ONLINE ONLINE racnode2
ora.ons ora.ons.type ONLINE ONLINE racnode1
ora.prod.db ora....se.type ONLINE ONLINE racnode1
ora....SM1.asm application ONLINE ONLINE racnode1
ora....E1.lsnr application ONLINE ONLINE racnode1
ora....de1.ons application ONLINE ONLINE racnode1
ora....de1.vip ora....t1.type ONLINE ONLINE racnode1
ora....SM2.asm application ONLINE ONLINE racnode2
ora....E2.lsnr application ONLINE ONLINE racnode2
ora....de2.ons application ONLINE ONLINE racnode2
ora....de2.vip ora....t1.type ONLINE ONLINE racnode2
ora.scan1.vip ora....ip.type ONLINE ONLINE racnode2
ora.scan2.vip ora....ip.type ONLINE ONLINE racnode1
ora.scan3.vip ora....ip.type ONLINE ONLINE racnode1
[oracle@racnode1 bin]$
To stop the cluster on all nodes, when required:
./crsctl stop cluster -all