Don't miss a single episode of the series that includes this post!
S1E1. In-place patching, the most common and also the most risky method
S1E2. Out-of-place patching, the recommended and most efficient method
S1E3. Creation of Gold Image for patching Oracle Single Instance Database
S2E1. Patching Oracle RAC Database with Gold Image
S2E2. Out-of-place patching of Oracle Grid Infrastructure (Oracle Restart)
S2E3. Creation of Gold Image for Oracle Grid Infrastructure (Oracle Restart) patching
S3E1. Patching Oracle Single Instance Database with AutoUpgrade
S3E2. AutoUpgrade: Creation and use of Gold Images
S3E3. AutoUpgrade: Rolling Patching a RAC Database with Gold Image
Earlier this season, we demonstrated how AutoUpgrade 26.2 allows you to download patches and Gold Images, install new Oracle Homes, and patch the database. One scenario remains: rolling patching a database in RAC.
In this episode, I’ll demonstrate how to patch a two-node RAC cluster from Oracle 19c RU 19.24 to RU 19.30. Before you begin, please review this Season 2 post on the manual process to refresh your understanding; it will help you follow the steps in this episode more effectively.
Starting Point
We have two servers, named node1 and node2, on which Oracle Database Server 19c with release update 19.24 has been installed in an Oracle Home named DBHome1, supporting a RAC database named orcl.
$ echo ${ORACLE_HOME}
/u01/app/oracle/19.0.0/db_1
$ srvctl status database -db orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2
Gold Image Creation
Our goal is to apply RU 19.30, so the following patches must be included:
| Patch | Description |
|---|---|
| 38661284 | OCW release update 19.30.0.0.0 |
| 39024581 | Database MRP 19.30.0.0.260317 |
| 38523609 | OJVM release update 19.30.0.0.260120 |
| 38586770 | JDK bundle patch 19.0.0.0.260120 |
| 38844733 | Datapump bundle patch 19.30.0.0.0 |
$ cat download.cfg
global.global_log_dir=/stage/autoupgrade/log
global.keystore=/stage/autoupgrade/keystore
global.folder=/stage/autoupgrade/patches
patch1.patch=RU:19.30,OPATCH,OCW,MRP,OJVM,JDK,DPBP
patch1.platform=LINUX.X64
patch1.gold_image=NO
$ java -jar autoupgrade.jar \
-config download.cfg \
-patch -mode download
AutoUpgrade Patching 26.2.260205 launched with default internal options
Processing config file ...
Loading AutoUpgrade Patching keystore
AutoUpgrade Patching keystore is loaded
Connected to MOS - Searching for specified patches
-------------------------------------------------------------
Downloading files to /stage/autoupgrade/patches
-------------------------------------------------------------
DATABASE RELEASE UPDATE 19.30.0.0.0(REL-JAN260130)
File: p38632161_190000_Linux-x86-64.zip - VALIDATED
OPatch 12.2.0.1.49 for DB 19.0.0.0.0 (Jan 2026)
File: p6880880_190000_Linux-x86-64.zip - VALIDATED
OJVM RELEASE UPDATE 19.30.0.0.0
File: p38523609_190000_Linux-x86-64.zip - VALIDATED
DATAPUMP BUNDLE PATCH 19.30.0.0.0
File: p38844733_1930000DBRU_Generic.zip - VALIDATED
GI RELEASE UPDATE 19.30.0.0.0(REL-JAN260130)
File: p38629535_190000_Linux-x86-64.zip - VALIDATED
JDK BUNDLE PATCH 19.0.0.0.260120
File: p38586770_190000_Linux-x86-64.zip - VALIDATED
DATABASE MRP 19.30.0.0.260317
File: p39024581_1930000DBRU_Linux-x86-64.zip - VALIDATED
-------------------------------------------------------------
$ cat create_oh_tmp.cfg
global.global_log_dir=/home/oracle/autoupgrade/log
global.folder=/stage/autoupgrade/patches
install1.patch=RU:19.30,OPATCH,OJVM,DPBP,OCW,JDK,MRP
install1.platform=LINUX.X64
install1.download=no
install1.target_home=/u01/app/oracle/19.0.0/TMP
install1.home_settings.oracle_base=/u01/app/oracle
install1.home_settings.edition=EE
install1.home_settings.inventory_location=/u01/app/oraInventory
install1.home_settings.home_name=DBHomeTMP
install1.home_settings.osdba_group=dba
install1.home_settings.osbackupdba_group=backupdba
install1.home_settings.osdgdba_group=dgdba
install1.home_settings.oskmdba_group=kmdba
install1.home_settings.osracdba_group=racdba
install1.home_settings.ru_apply=yes
$ java -jar autoupgrade.jar \
-config create_oh_tmp.cfg \
-patch -mode create_home
AutoUpgrade Patching 26.2.260205 launched with default internal options
Processing config file ...
+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
Type 'help' to list console commands
patch> Job 100 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Jobs restored [0]
Jobs pending [0]
# Run the root.sh script as root for the following jobs:
For create_home_1 in server1 -> /u01/app/oracle/19.0.0/TMP/root.sh
Please check the summary report at:
/home/oracle/autoupgrade/log/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade/log/cfgtoollogs/patch/auto/status/status.log
After running root.sh as instructed, we create the Gold Image from the temporary home (ORACLE_HOME must point to /u01/app/oracle/19.0.0/TMP):
$ ${ORACLE_HOME}/runInstaller -silent -createGoldImage \
-destinationLocation /stage/autoupgrade/patches \
-name db_home_19_30-gold.zip \
-exclFiles ${ORACLE_HOME}/.patch_storage
$ ls -la /stage/autoupgrade/patches
total 12982824
drwxrwxr-x 1 Mar 17 19:22 .
drwxrwxr-x 1 Mar 17 20:15 ..
-rw-r--r-- 1 Mar 17 19:22 db_home_19_30-gold.zip
-rw-r--r-- 1 Apr 26 2019 LINUX.X64_193000_db_home.zip
-rw-rw-r-- 1 Mar 17 13:48 p38523609_190000_Linux-x86-64.zip
-rw-rw-r-- 1 Mar 17 13:49 p38586770_190000_Linux-x86-64.zip
-rw-rw-r-- 1 Mar 17 13:49 p38629535_190000_Linux-x86-64.zip
-rw-rw-r-- 1 Mar 17 13:48 p38632161_190000_Linux-x86-64.zip
-rw-r--r-- 1 Mar 17 14:09 p38661284_1930000OCW_Linux_x86-64.zip
-rw-rw-r-- 1 Mar 17 13:48 p38844733_1930000DBRU_Generic.zip
-rw-rw-r-- 1 Mar 17 13:49 p39024581_1930000DBRU_Linux-x86-64.zip
-rw-rw-r-- 1 Mar 17 13:48 p6880880_190000_Linux-x86-64.zip
-rw-rw-r-- 1 Mar 17 13:49 patches_info.json
Installation using the Gold Image
Since we now have a Gold Image containing all the required patches, we instruct AutoUpgrade to use it for the installation by setting the gold_image=ALL parameter in the configuration file.
Since the servers are isolated from the internet, we use the folder parameter to specify that the patches are stored in an NFS-shared directory.
$ cat create_oh.cfg
global.global_log_dir=/home/oracle/autoupgrade/log
global.folder=/NFS/autoupgrade/patches
install1.patch=RU:19.30,OCW
install1.platform=LINUX.X64
install1.gold_image=ALL
install1.target_home=/u01/app/oracle/19.0.0/db_2
install1.home_settings.oracle_base=/u01/app/oracle
install1.home_settings.edition=EE
install1.home_settings.inventory_location=/u01/app/oraInventory
install1.home_settings.home_name=DBHome2
install1.home_settings.osdba_group=dba
install1.home_settings.osbackupdba_group=backupdba
install1.home_settings.osdgdba_group=dgdba
install1.home_settings.oskmdba_group=kmdba
install1.home_settings.osracdba_group=racdba
As pointed out in the previous episode, AutoUpgrade requires a file that maps the labels of known patches to their corresponding patch numbers. This file, named aru-bug-map.json, must be created manually with the OCW patch details (hopefully this will be fixed in a future version).
ARU=/home/oracle/autoupgrade/log/cfgtoollogs/patch/auto/aru
mkdir -p ${ARU}
cat <<EOF | tee ${ARU}/aru-bug-map.json
[{"prefix":"OCW","version":"19.30","bugNumber":38661284}]
EOF
With everything set up, we can start installing the software; since this is a cluster, AutoUpgrade performs the installation on all available nodes.
$ ${ORACLE_HOME}/jdk/bin/java -jar autoupgrade.jar \
-config create_oh.cfg \
-patch -mode create_home
AutoUpgrade Patching 26.2.260205 launched with default internal options
Processing config file ...
Oracle Grid Infrastructure detected. Target Oracle home will be a RAC DB. To change that, use "home_settings.binopt.rac=NO" configuration setting.
+-----------------------------------------+
| Starting AutoUpgrade Patching execution |
+-----------------------------------------+
Type 'help' to list console commands
patch> Job 100 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Jobs restored [0]
Jobs pending [0]
# Run the root.sh script as root for the following jobs:
For create_home_1 in node1 -> /u01/app/oracle/19.0.0/db_2/root.sh
For create_home_1 in node2 -> /u01/app/oracle/19.0.0/db_2/root.sh
Please check the summary report at:
/home/oracle/autoupgrade/log/cfgtoollogs/patch/auto/status/status.html
/home/oracle/autoupgrade/log/cfgtoollogs/patch/auto/status/status.log
We finish the procedure by running root.sh on both servers.
[node1]
# /u01/app/oracle/19.0.0/db_2/root.sh
[node2]
# /u01/app/oracle/19.0.0/db_2/root.sh
Patching the Database
In summary, we have a database (orcl) with two instances (orcl1 on node1 and orcl2 on node2) that are serving the applications. Additionally, we have installed Oracle Database Server 19c with RU 19.30 in the DBHome2 Oracle Home.
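Before moving on, it can be useful to confirm that the new home actually contains the expected patch level. A quick sketch, using the paths from this walkthrough:

```shell
# List the patches installed in the new Oracle Home (DBHome2);
# the output should include the 19.30 release update and the
# bundle patches downloaded earlier in this post.
/u01/app/oracle/19.0.0/db_2/OPatch/opatch lspatches
```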
Preliminary steps
Because the Datapump bundle patch is “non-rolling”, AutoUpgrade would normally apply it by shutting down the entire database. However, since we don’t want to stop the services and we are sure there are no active Data Pump jobs, we instruct it to patch instance by instance using the rac_rolling=force parameter.
$ cat patch_1930.cfg
global.global_log_dir=/home/oracle/autoupgrade/log
patch1.sid=orcl1
patch1.source_home=/u01/app/oracle/19.0.0/db_1
patch1.target_home=/u01/app/oracle/19.0.0/db_2
patch1.restoration=no
patch1.rac_rolling=force
patch1.drain_timeout=wait
Since we want to control when each instance is shut down, we use the drain_timeout=wait parameter so that AutoUpgrade requests our confirmation first.
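Before confirming each shutdown, it is worth reviewing how your services are configured for connection draining. A sketch, assuming a hypothetical service named app_svc (adjust names to your environment):

```shell
# Review the drain-related settings of the service before letting
# AutoUpgrade stop an instance:
srvctl config service -db orcl -service app_svc

# Optionally relocate the service yourself before answering "proceed",
# giving sessions up to 60 seconds to disconnect gracefully:
srvctl relocate service -db orcl -service app_svc \
  -oldinst orcl1 -newinst orcl2 -drain_timeout 60
```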
Now we can begin the patching process:
$ java -jar autoupgrade.jar \
-config patch_1930.cfg \
-mode deploy
AutoUpgrade 26.2.260205 launched with default internal options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 CDB(s) plus 2 PDB(s) will be processed
Type 'help' to list console commands
upg>
Shutdown - node1
upg> Relocated instance orcl1 services.
To continue, run: proceed -job 101
upg> proceed -job 101
upg> Continuing with restarting instances for job 101
$ srvctl status database -db orcl
Instance is being stopped on node node1
Instance orcl2 is running on node node2
$ srvctl status database -db orcl
Instance orcl1 is not running on node node1
Instance orcl2 is running on node node2
$ srvctl status database -db orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2
Shutdown - node2
As with the first node, AutoUpgrade notifies us that the services on the orcl2 instance were relocated and gives us the command to continue.
Note: Sessions on the orcl2 instance will be reconnected to the orcl1 instance, provided that the services are properly configured.
upg> Relocated instance orcl2 services.
To continue, run: proceed -job 101
upg> proceed -job 101
upg> Continuing with restarting instances for job 101
$ srvctl status database -db orcl
Instance orcl1 is running on node node1
Instance is being stopped on node node2
$ srvctl status database -db orcl
Instance orcl1 is running on node node1
Instance orcl2 is not running on node node2
$ srvctl status database -db orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2
Catalog patching
Finally, AutoUpgrade updates the database catalog by running the datapatch utility.
upg> Job 101 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Jobs restored [0]
Jobs pending [0]
Please check the summary report at:
/home/oracle/autoupgrade/log/cfgtoollogs/upgrade/auto/status/status.html
/home/oracle/autoupgrade/log/cfgtoollogs/upgrade/auto/status/status.log
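Once the job finishes, we can double-check that datapatch registered the new RU in the dictionary. A minimal sketch, run from any node with the environment pointing at the new home:

```shell
# Query dba_registry_sqlpatch to confirm the 19.30 patches were
# applied successfully (STATUS should be SUCCESS for each entry).
sqlplus -s / as sysdba <<'EOF'
set linesize 200 pagesize 100
column description format a60
select patch_id, status, description
from   dba_registry_sqlpatch
order  by action_time;
EOF
```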
Final Thoughts
Patching is a recurring task, so it pays to document it properly and, wherever possible, automate it, with the aim of minimizing the likelihood of errors and service interruptions.
Throughout this series we have been refining the process: we started with out-of-place patching, moved on to generating Gold Images, and finally automated as much as possible with AutoUpgrade, which, though it still has a few minor issues, is well on its way to becoming an essential tool.
So, what are you waiting for? Give it a try!