Create a Backup Job Definition - Oracle

ECX provides application database copy management through application-consistent backup creation, cloning, and recovery. ECX copy management leverages the snapshot and replication features of the underlying storage platform to create, replicate, clone, and restore backups of Oracle databases. Archive log destinations as well as universal destination mount points are supported. Archived logs are automatically deleted upon reaching defined retention.

ECX auto-discovers databases and enables backups only of eligible databases. To be eligible for backup, application databases must reside on supported storage platforms.

The following options are available for Oracle Backup jobs:

RMAN Integration - Oracle Recovery Manager (RMAN), a command-line and Enterprise Manager-based tool, is the method preferred by Oracle database administrators for backup and recovery of Oracle databases, including maintaining an RMAN repository. The retention of RMAN cataloged data is managed by settings in Oracle. ECX automates cataloging of Oracle database backups in the RMAN recovery catalog, enabling database administrators to leverage RMAN for verification and advanced recovery.

Data Masking - Data masking is used to hide confidential data by replacing it with fictitious data. This feature is used when making data copies for DevTest or other use cases.

Log Backup - The log backup feature enables continuous backups of archive logs to a specified destination. Archive log retention is managed by settings in RMAN. ECX leverages archived logs to enable point-in-time recovery of databases and help meet recovery point objectives (RPOs).

BEFORE YOU BEGIN:

ORACLE DATABASE CONSIDERATIONS:

  • To ensure that filesystem permissions are retained correctly when ECX moves Oracle data between servers, ensure that the user and group IDs of the Oracle users (for example, oracle, oinstall, dba) are consistent across all of the servers. Refer to Oracle documentation for recommended uid and gid values. A sketch of one way to compare these IDs across servers follows this list.
  • If Oracle data resides on LVM volumes, you must stop and disable the lvm2-lvmetad service before running Backup or Restore jobs. Leaving the service enabled can prevent volume groups from being resignatured correctly during restore and can lead to data corruption if the original volume group is also present on the same system. To disable the lvm2-lvmetad service, run the following commands:

    systemctl stop lvm2-lvmetad

    systemctl disable lvm2-lvmetad

    Next, disable lvmetad in the LVM config file. Edit the file /etc/lvm/lvm.conf and set:

    use_lvmetad = 0

  • Note that Oracle databases must be registered in the recovery catalog before running an Oracle Backup job utilizing the Record copies in RMAN recovery catalog feature.
  • In your Linux environment, if Oracle data or logs reside on LVM volumes, ensure the LVM version is 2.0.2.118 or later.
  • For Oracle 12c databases, Oracle Storage Snapshot Optimization allows backups to be created without placing the database in BACKUP mode. All associated snapshot functionality is supported.
  • NOARCHIVELOG databases are not eligible for point-in-time recovery. NOARCHIVELOG databases can only be recovered to specific or latest versions. If upgrading from a previous version of ECX, the associated Oracle Inventory job must be re-run after upgrading to discover NOARCHIVELOG databases. A sketch for checking whether a database runs in ARCHIVELOG mode follows this list.
  • When the option to create an additional log destination is selected, ECX automatically purges the logs under this new location after each successful backup. For IBM SVC, ECX purges logs after a FlashCopy operation but not after a Global Mirror operation. If both FlashCopy and Global Mirror are enabled for a database (whether in separate job definitions or the same), ECX purges the logs after the FlashCopy operation only. For databases that are protected only by a Global Mirror workflow, ECX does not purge the logs at all so they must be deleted using a retention policy externally managed by a database administrator, for example, using RMAN. Note that in any case, ECX does not purge logs from other log destinations so they must also be externally managed.
  • If an Oracle Inventory job runs at the same time as, or shortly after, an Oracle Backup job, copy errors may occur due to temporary mounts that are created during the Backup job. As a best practice, schedule Oracle Inventory jobs so that they do not overlap with Oracle Backup jobs.
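
A minimal sketch of comparing Oracle user and group IDs across servers is shown below; it assumes SSH access to the application servers, and the host names app-server-1 and app-server-2 are placeholders for your own servers.

    #!/bin/sh
    # Compare the numeric IDs of the Oracle OS user and groups on two servers.
    # Host names below are examples only; substitute your own servers.
    for host in app-server-1 app-server-2; do
        echo "== $host =="
        # 'id' prints the uid, primary gid, and supplementary groups (oinstall, dba, ...)
        ssh "$host" id oracle
        # getent shows the numeric gid recorded for each group
        ssh "$host" getent group oinstall dba
    done
    # If the uid or gid values differ between servers, align them (for example
    # with usermod -u / groupmod -g) before running Backup or Restore jobs.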
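
To confirm whether a database runs in ARCHIVELOG mode, and is therefore eligible for log backup and point-in-time recovery, a query along the following lines can be run as the Oracle OS user. This is a sketch only; it assumes ORACLE_HOME and ORACLE_SID are already set for the database being checked.

    #!/bin/sh
    # Report the log mode of the current database (ARCHIVELOG or NOARCHIVELOG).
    # Assumes ORACLE_HOME and ORACLE_SID are set for the database to check.
    "$ORACLE_HOME/bin/sqlplus" -s / as sysdba <<'EOF'
    SET HEADING OFF FEEDBACK OFF
    SELECT log_mode FROM v$database;
    EXIT
    EOF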

CONSIDERATIONS:

  • Note that point-in-time recovery is not supported when one or more datafiles are added to the database in the period between the chosen point in time and the time that the preceding Backup job ran.
  • For email notifications, at least one SMTP server must be configured. Before defining a job, add SMTP resources. See Register a Provider.
  • One or more schedules can also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule.

To create an Oracle Backup job definition:

  1. Click the Jobs Monitor tab. Expand the Database folder, then select Oracle.
  2. Click New, then select Backup. The job editor opens.
  3. Enter a name for your job definition and a meaningful description.
  4. From the list of available sites, select one or more providers to back up. Expand Oracle home directories to view associated application databases.
  5. Note: You cannot select a database that is not eligible for protection. Hover your cursor over the database name to view the reasons the database is ineligible, for example, database files, control files, or redo log files stored on unsupported storage.
  6. Select an SLA Policy that meets your backup data criteria.
  7. Click the job definition's associated Schedule Time field and select Enable Schedule to set a time to run the SLA Policy. If a schedule is not enabled, run the job on demand through the Jobs Monitor tab. Repeat as necessary to add additional SLA Policies to the job definition.
  8. If configuring more than one SLA Policy in a job definition, select the Same as workflow option to trigger multiple SLA Policies to run concurrently.
  9. Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow option. Define an RPO frequency when creating an SLA Policy.
  10. To create the job definition using default options, click Create Job. The job runs as defined by your triggers, or can be run manually from the Jobs Monitor tab.
  11. To edit options before creating the job definition, click Advanced. Set the job definition options.
  12. Maximum Concurrent Tasks
  13. Set the maximum number of concurrent transfers between the source and the destination.
  14. Skip IA Mount points and/or databases
  15. Enable to skip Instant Disk Restore objects. By default, this option is enabled.
  16. Record copies in RMAN local repository
  17. Enable to create a local backup of the Recovery Manager (RMAN) catalog while the Oracle Backup job runs. RMAN catalogs can be used for backup, recovery, and maintenance of Oracle databases outside of ECX.
  18. Record copies in RMAN recovery catalog
  19. If Record copies in RMAN local repository is selected, select Record copies in RMAN recovery catalog to also create a remote RMAN catalog. Select an eligible Remote Catalog Database from the list of available sites. Select a Recovery Catalog Owner from the list of available Identities, or create a new Recovery Catalog Owner, then click OK.
  20. Note that Oracle databases must be registered in the recovery catalog before running an Oracle Backup job that uses the Record copies in RMAN recovery catalog feature. A sample registration sketch follows this procedure.
  21. Job-Level Scripts
  22. Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-level. A script can consist of one or many commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for Windows-based virtual machines.
  23. In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts view on the Configure tab. See Configure Scripts.
  24. Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by entering them one at a time in the field, then clicking Add. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed. For parameter examples, see Using State and Status Arguments in Postscripts.
  25. Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about script return codes, see Return Code Reference.
  26. Select Continue operation on script failure to continue running the job if a command in any of the scripts associated with the job fails.
  27. Enable Job-level Snapshot Scripts
  28. Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the snapshot postscript runs after all associated snapshots complete. A script can consist of one or many commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for Windows-based virtual machines.
  29. In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts view on the Configure tab. See Configure Scripts.
  30. Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by entering them one at a time in the field, then clicking Add. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed. For parameter examples, see Using State and Status Arguments in Postscripts.
  31. Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about script return codes, see Return Code Reference.
  32. _SNAPSHOTS_ is an optional parameter for snapshot postscripts that provides a comma-separated value string containing all of the storage-based snapshots created by the job. The format of each value is as follows: <registered provider name>:<volume name>:<snapshot name>. A sample postscript that parses this string follows this procedure.
  33. Select Continue operation on script failure to continue running the job if a command in any of the scripts associated with the job fails.
  34. Optionally, expand the Notification section to select the job notification options.
  35. SMTP Server
  36. From the list of available SMTP resources, select the SMTP Server to use for job status email notifications. If an SMTP server is not selected, an email is not sent.
  37. Email Address
  38. Enter the email addresses of the status email notification recipients. Click Add to add each address to the list.
  39. To edit Log Backup options before creating the job definition, click Log Backup. If Create additional archive log destination is selected, ECX backs up database logs then protects the underlying disks. Select resources in the Select resource(s) to add archive log destination field. Database logs are backed up to the directory entered in the Universal destination directory field, or in the Directory field after resources are selected. The destination must already exist and must reside on storage from a supported vendor.
  40. The default option is Use existing archive log destination(s). Note that ECX automatically discovers the location where Oracle writes archived logs. If this location resides on storage from a supported vendor, ECX can protect it. If the existing location is not on supported storage, or if you wish to create an additional backup of database logs, enable the Create additional archive log destination option, then specify a path that resides on supported storage. When enabled, ECX configures the database to start writing archived logs to this new location in addition to any existing locations where the database is already writing logs.
  41. Note: NOARCHIVELOG databases are not eligible for log backup as they do not have archive logging enabled.
  42. If multiple databases are selected for backup, then each of the servers hosting the databases must have its destination directory set individually. For example, if two databases from Server A and Server B are added to the same job definition, and a single destination directory named /logbackup is defined in the job definition, then you must create separate disks for both servers and mount them both to /logbackup on the individual servers. A sketch of preparing such a mount point follows this procedure.
  43. If the No archive logs / Use existing archive log destination(s) option is selected, ECX does not automatically purge any archived logs. The retention of archived logs must be managed externally, for example, using RMAN. To support point-in-time recovery, ensure that the retention period is at least long enough to retain all archived logs between successive runs of the Oracle Backup job. A sample RMAN retention sketch follows this procedure.
  44. If the Create additional archive log destination option is selected, ECX automatically manages the retention of only those archived logs that are under the new destination specified in the job definition. After a successful backup, logs older than that backup are automatically deleted from the ECX-managed destination. Even in this case, ECX does not control the deletion of archived logs in other pre-existing destinations so they must still be managed externally as described above.
  45. If the Create additional archive log destination option is selected, ECX makes a one-time configuration change to the database to add the specified location as a log_archive_dest_<num> parameter in the database's archive log destinations. If you delete the ECX job definition, the database parameter is not affected, so if you want to stop using the log destination, you may need to manually disable this parameter. A sketch of disabling the parameter follows this procedure.
  46. To edit Data Masking options before creating the job definition, click Data Masking. If enabled, ECX mounts snapshot copies of the protected database onto a user-specified staging server. Select resources to be masked from the list of available databases, select a backup to mask, and an Oracle home where masking takes place. Set a trigger, then in the Enter path to masking command on Oracle Server field, enter the full path to an external script or tool to perform the data masking. For example, /home/oracle/tools/maskDatabase.sh.
  47. ECX spins up a clone of the database on the staging server, then executes the user-specified command to perform masking. When the command completes successfully, ECX cleans up the clone database, and catalogs and saves the masked copies which are then available for selection in the DevOps workflow of ECX Restore jobs.
  48. Note: The Oracle homes selected for protection must be different from the Oracle home where masking takes place in the job definition.
  49. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as defined by your schedule, or can be run manually from the Jobs Monitor tab.
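
Registering a database in the RMAN recovery catalog, which is required before using the Record copies in RMAN recovery catalog option, can be done from the database server as sketched below. The connect string rcatowner/rcatpwd@rcatdb and the SID orcl are placeholders; substitute the recovery catalog owner, password, and catalog database for your environment.

    #!/bin/sh
    # Register the target database in an RMAN recovery catalog before running a
    # Backup job that uses the "Record copies in RMAN recovery catalog" option.
    # The connect string and SID below are placeholders; substitute your own
    # recovery catalog owner, password, and catalog database service name.
    export ORACLE_SID=orcl
    "$ORACLE_HOME/bin/rman" target / catalog rcatowner/rcatpwd@rcatdb <<'EOF'
    REGISTER DATABASE;
    REPORT SCHEMA;
    EXIT;
    EOF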
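
The following is a minimal sketch of a snapshot postscript that consumes the _SNAPSHOTS_ parameter. It assumes ECX passes the expanded value to the script as its first argument; the log file path is arbitrary.

    #!/bin/sh
    # Example snapshot postscript: log each snapshot created by the job.
    # Assumes the expanded _SNAPSHOTS_ value is passed as the first argument,
    # e.g. "svc1:vol_data:snap_0001,svc1:vol_logs:snap_0002".
    SNAPSHOTS="$1"
    LOG=/tmp/ecx_snapshots.log

    # Split the comma-separated list and break each entry into its three fields.
    echo "$SNAPSHOTS" | tr ',' '\n' | while IFS=':' read -r provider volume snapshot; do
        echo "$(date '+%F %T') provider=$provider volume=$volume snapshot=$snapshot" >> "$LOG"
    done
    exit 0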
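
Preparing the additional archive log destination on each database server might look like the sketch below. The device name /dev/mapper/logbackup_vol and the /logbackup mount point are examples only; the disk must be new and empty, provisioned from a storage provider supported by ECX, and the steps repeated on every server hosting a selected database.

    #!/bin/sh
    # Prepare the additional archive log destination on one database server.
    # Device name and mount point are examples; the disk is assumed to be new
    # and empty, and must come from a storage provider supported by ECX.
    DEVICE=/dev/mapper/logbackup_vol
    MOUNTPOINT=/logbackup

    mkfs.xfs "$DEVICE"                      # create a filesystem on the new disk
    mkdir -p "$MOUNTPOINT"
    mount "$DEVICE" "$MOUNTPOINT"
    chown oracle:oinstall "$MOUNTPOINT"     # let the Oracle user write archived logs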
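
Where ECX does not purge archived logs (for example, when Use existing archive log destination(s) is selected), retention must be handled outside ECX, typically with RMAN. One possible approach is sketched below; the seven-day window is only an example and must be long enough to cover the interval between successive Oracle Backup job runs.

    #!/bin/sh
    # Example external retention of archived logs using RMAN.
    # The 7-day window is an example; keep at least all logs generated between
    # successive runs of the Oracle Backup job to preserve point-in-time recovery.
    "$ORACLE_HOME/bin/rman" target / <<'EOF'
    CROSSCHECK ARCHIVELOG ALL;
    DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
    EXIT;
    EOF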
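
To stop using an ECX-created log destination after deleting the job definition, the corresponding log_archive_dest_<num> parameter can be cleared in the database. The sketch below uses log_archive_dest_2 as an example slot; verify which destination ECX actually configured before changing anything.

    #!/bin/sh
    # Inspect the archive log destinations and disable the one that is no longer
    # needed. log_archive_dest_2 is an example slot; verify the correct number
    # for your database before running the ALTER SYSTEM statements.
    "$ORACLE_HOME/bin/sqlplus" -s / as sysdba <<'EOF'
    SHOW PARAMETER log_archive_dest
    ALTER SYSTEM SET log_archive_dest_2='' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_state_2='DEFER' SCOPE=BOTH;
    EXIT
    EOF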

NEXT STEPS:

 

