OADP 1.5 release notes
The release notes for OpenShift API for Data Protection (OADP) 1.5 describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
Note
For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQ.
OADP 1.5.4 release notes
OpenShift API for Data Protection (OADP) 1.5.4 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.5.3. OADP 1.5.4 introduces a known issue and fixes several Common Vulnerabilities and Exposures (CVEs).
Known issues
- Simultaneous updates to the same `NonAdminBackupStorageLocationRequest` objects cause resource conflicts
-
Simultaneous updates by several controllers or processes to the same `NonAdminBackupStorageLocationRequest` objects cause resource conflicts during backup creation in OADP Self-Service. As a consequence, reconciliation attempts fail with `object has been modified` errors. No known workaround exists.
Resolved issues
- OADP 1.5.4 fixes several CVEs
OADP 1.5.3 release notes
OpenShift API for Data Protection (OADP) 1.5.3 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.5.2.
OADP 1.5.2 release notes
The OpenShift API for Data Protection (OADP) 1.5.2 release notes list resolved issues.
Resolved issues
- Self-signed certificate for internal image backup should not break other BSLs
-
Before this update, OADP would only process the first custom CA certificate found among all backup storage locations (BSLs) and apply it globally. This behavior prevented multiple BSLs with different CA certificates from working correctly. Additionally, system-trusted certificates were not included, causing failures when connecting to standard services.
With this update, OADP now:
- Concatenates all unique CA certificates from AWS BSLs into a single bundle.
- Includes system-trusted certificates automatically.
- Enables multiple BSLs with different custom CA certificates to operate simultaneously.
- Processes CA certificates only when image backup is enabled (the default behavior).
This enhancement improves compatibility for environments using multiple storage providers with different certificate requirements, particularly when backing up internal images to AWS S3-compatible storage with self-signed certificates.
OADP 1.5.1 release notes
The OpenShift API for Data Protection (OADP) 1.5.1 release notes list new features, resolved issues, known issues, and deprecated features.
New features
- The `CloudStorage` API is fully supported
-
The `CloudStorage` API feature, available as a Technology Preview before this update, is fully supported from OADP 1.5.1. The `CloudStorage` API automates the creation of a bucket for object storage.
- New `DataProtectionTest` custom resource is available
-
The `DataProtectionTest` (DPT) is a custom resource (CR) that provides a framework to validate your OADP configuration. The DPT CR checks and reports information for the following parameters:
- The upload performance of the backups to the object storage.
- The Container Storage Interface (CSI) snapshot readiness for persistent volume claims.
- The storage bucket configuration, such as encryption and versioning.
Using this information in the DPT CR, you can ensure that your data protection environment is properly configured and performing according to the set configuration.
Note that you must configure `STORAGE_ACCOUNT_ID` when using DPT with OADP on Azure.
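As a rough illustration of the checks listed above, a DPT CR might look like the following sketch. The `spec` field names and all values here are illustrative assumptions, not an authoritative schema; check the installed CRD for the actual fields.

```yaml
# Minimal DataProtectionTest sketch; spec field names and values are
# illustrative assumptions -- verify against the installed CRD.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionTest
metadata:
  name: dpt-sample
  namespace: openshift-adp
spec:
  backupLocationName: dpa-sample-1     # BSL to validate (assumed field name)
  uploadSpeedTestConfig:               # upload performance check (assumed)
    fileSize: 100MB
    timeout: 120s
  csiVolumeSnapshotTestConfigs:        # CSI snapshot readiness check (assumed)
    - volumeSnapshotSource:
        persistentVolumeClaimName: my-pvc
        persistentVolumeClaimNamespace: my-app
```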
- New node agent load affinity configurations are available
-
- Node agent load affinity: You can schedule the node agent pods on specific nodes by using the `spec.podConfig.nodeSelector` object of the `DataProtectionApplication` (DPA) custom resource (CR). You can add more restrictions on the node agent pod scheduling by using the `nodeAgent.loadAffinity` object in the DPA spec.
- Repository maintenance job affinity configurations: You can use the repository maintenance job affinity configurations in the DPA CR only if you use Kopia as the backup repository. You can configure the load affinity at the global level, affecting all repositories, or for each repository, or use a combination of global and per-repository configuration.
- Velero load affinity: You can use the `podConfig.nodeSelector` object to assign the Velero pod to specific nodes. You can also configure the `velero.loadAffinity` object for pod-level affinity and anti-affinity.
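The affinity options above can be sketched in a DPA fragment such as the following. The `loadAffinity` objects are assumed to take the standard Kubernetes `nodeSelector` shape (the release notes mention `nodeSelector` and `matchExpressions` elements); the labels and values are illustrative.

```yaml
# Sketch of Velero and node agent load affinity in the DPA;
# labels and values are illustrative assumptions.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      loadAffinity:                 # pod-level affinity for the Velero pod
        - nodeSelector:
            matchLabels:
              node-role.kubernetes.io/worker: ""
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadAffinity:                 # extra restrictions for node agent pods
        - nodeSelector:
            matchExpressions:
              - key: example.io/backup-node   # illustrative label
                operator: In
                values: ["true"]
```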
- Node agent load concurrency is available
-
With this update, you can control the maximum number of node agent operations that can run simultaneously on each node within the cluster. This enables better resource management, optimizing backup and restore workflows for improved performance and a more streamlined experience.
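A sketch of what such a concurrency setting can look like in the DPA follows. The field names mirror the upstream Velero node agent load-concurrency shape (a global cap plus optional per-node overrides) and are an assumption, not verified against this release's schema.

```yaml
# Sketch: limiting simultaneous node agent operations per node.
# Field names follow the upstream Velero load-concurrency shape;
# verify against your installed DPA schema.
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadConcurrency:
        globalConfig: 2              # default cap for every node
        perNodeConfig:
          - nodeSelector:
              matchLabels:
                kubernetes.io/hostname: big-node-1   # illustrative node
            number: 4                # higher cap for a larger node
```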
Resolved issues
- `DataProtectionApplicationSpec` overflowed the annotation limit, causing potential misconfiguration in deployments
-
Before this update, the `DataProtectionApplicationSpec` used the deprecated `PodAnnotations`, which led to an annotation limit overflow. This caused potential misconfigurations in deployments. In this release, `PodConfig` has been added for annotations in pods deployed by the Operator, ensuring consistent annotations and improved manageability for end users. As a result, deployments are more reliable and easier to manage.
- Root file system for OADP controller manager is now read-only
-
Before this update, the `manager` container of the `openshift-adp-controller-manager-*` pod was configured to run with a writable root file system. As a consequence, this could allow tampering with the container's file system or the writing of foreign executables. With this release, the container's security context has been updated to set the root file system to read-only while ensuring necessary functions that require write access, such as the Kopia cache, continue to operate correctly. As a result, the container is hardened against potential threats.
- `nonAdmin.enable: false` in multiple DPAs no longer causes reconcile issues
-
Before this update, when a user attempted to create a second non-admin `DataProtectionApplication` (DPA) on a cluster where one already existed, the new DPA failed to reconcile. With this release, the restriction of the Non-Admin Controller installation to one per cluster has been removed. As a result, users can install multiple Non-Admin Controllers across the cluster without encountering errors.
- OADP supports self-signed certificates
-
Before this update, using a self-signed certificate for backup images with a storage provider such as MinIO resulted in an `x509: certificate signed by unknown authority` error during the backup process. With this release, certificate validation has been updated to support self-signed certificates in OADP, ensuring successful backups.
- `velero describe` includes `defaultVolumesToFsBackup`
-
Before this update, the `velero describe` command output omitted the `defaultVolumesToFsBackup` flag. This affected the visibility of backup configuration details for users. With this release, the `velero describe` output includes the `defaultVolumesToFsBackup` flag information, improving the visibility of backup settings.
- DPT CR no longer fails when `s3Url` is secured
-
Before this update, `DataProtectionTest` (DPT) failed to run when `s3Url` was secured because the DPT CR lacked the ability to skip or add the `caCert` in the spec field. As a consequence, data upload failed due to an unverified certificate. With this release, the DPT CR has been updated to accept and skip the CA certificate in the spec field, resolving SSL verification errors. As a result, DPT no longer fails when using a secured `s3Url`.
- Adding a `backupLocation` to the DPA with an existing `backupLocation` name is not rejected
-
Before this update, adding a second `backupLocation` with the same name in the `DataProtectionApplication` (DPA) caused OADP to enter an invalid state, leading to Backup and Restore failures due to Velero's inability to read Secret credentials. As a consequence, Backup and Restore operations failed. With this release, duplicate `backupLocation` names in the DPA are no longer allowed, preventing Backup and Restore failures. As a result, duplicate `backupLocation` names are rejected, ensuring seamless data protection.
Known issues
- The restore fails for backups created on OpenStack using the Cinder CSI driver
-
When you start a restore operation for a backup that was created on an OpenStack platform using the Cinder Container Storage Interface (CSI) driver, the restore job fails, preventing you from successfully recovering your application's data and state from the backup. Note that the initial backup succeeds only after the source application is manually scaled down. No known workaround exists.
- Data mover pods are scheduled on unexpected nodes during backup if the `nodeAgent.loadAffinity` parameter has many elements
-
Due to an issue in Velero 1.14 and later, the OADP node agent only processes the first `nodeSelector` element within the `loadAffinity` array. As a consequence, if you define multiple `nodeSelector` objects, all objects except the first are ignored, potentially causing data mover pods to be scheduled on unexpected nodes during a backup. To work around this problem, consolidate all required `matchExpressions` from multiple `nodeSelector` objects into the first `nodeSelector` object. As a result, all node affinity rules are correctly applied, ensuring data mover pods are scheduled to the appropriate nodes.
- OADP Backup fails when using CA certificates with aliased command
-
The CA certificate is not stored as a file on the running Velero container. As a consequence, the user experience is degraded because of the missing `caCert` in the Velero container, requiring manual setup and downloads. To work around this problem, manually add the certificate to the Velero deployment. For instructions, see "Using cacert with velero command aliased via velero deployment".
- The `nodeSelector` spec is not supported for the Data Mover restore action
-
When a Data Protection Application (DPA) is created with the `nodeSelector` field set in the `nodeAgent` parameter, the Data Mover restore partially fails instead of completing the restore operation. No known workaround exists.
- Image stream backups partially fail when the DPA is configured with `caCert`
-
An unverified certificate in the S3 connection during backups with `caCert` in the `DataProtectionApplication` (DPA) causes the `ocp-django` application's backup to partially fail and results in data loss. No known workaround exists.
- Kopia does not delete the cache on the worker node
-
When the `ephemeral-storage` parameter is configured and a file system restore is running, the cache is not automatically deleted from the worker node. As a consequence, the `/var` partition overflows during the backup restore, causing increased storage usage and potential resource exhaustion. To work around this problem, restart the node agent pod, which clears the cache. As a result, the cache is deleted.
- Google Cloud VSL backups fail with Workload Identity because of invalid project configuration
-
When performing a `volumeSnapshotLocation` (VSL) backup on Google Cloud Workload Identity, the Velero Google Cloud plugin creates an invalid API request if the Google Cloud project is also specified in the `snapshotLocations` configuration of the `DataProtectionApplication` (DPA). As a consequence, the Google Cloud API returns a `RESOURCE_PROJECT_INVALID` error, and the backup job finishes with a `PartiallyFailed` status. No known workaround exists.
- VSL backups fail for the `CloudStorage` API on AWS with STS
-
The `volumeSnapshotLocation` (VSL) backup fails because the `AZURE_RESOURCE_GROUP` parameter is missing from the credentials file, even if `AZURE_RESOURCE_GROUP` is already mentioned in the `DataProtectionApplication` (DPA) configuration for the VSL. No known workaround exists.
- Backups of applications with `ImageStreams` fail on Azure with STS
-
When backing up applications that include image stream resources on an Azure cluster using STS, the OADP plugin incorrectly attempts to locate a secret-based credential for the container registry. As a consequence, the required secret is not found in the STS environment, causing the `ImageStream` custom backup action to fail. This results in the overall backup status being marked as `PartiallyFailed`. No known workaround exists.
- DPA reconciliation fails for the `CloudStorageRef` configuration
-
When a user creates a bucket and uses the `backupLocations.bucket.cloudStorageRef` configuration, the bucket credentials are not present in the `DataProtectionApplication` (DPA) custom resource (CR). As a result, the DPA reconciliation fails even if the bucket credentials are present in the `CloudStorage` CR. To work around this problem, add the same credentials to the `backupLocations` section of the DPA CR.
Deprecated features
- The `configuration.restic` specification field has been deprecated
-
With OADP 1.5.0, the `configuration.restic` specification field has been deprecated. Use the `nodeAgent` section with the `uploaderType` field to select `kopia` or `restic` as the uploader type. Note that Restic is deprecated in OADP 1.5.0.
OADP 1.5.0 release notes
The OpenShift API for Data Protection (OADP) 1.5.0 release notes list new features, resolved issues, known issues, deprecated features, and Technology Preview features.
New features
- OADP 1.5.0 introduces a new Self-Service feature
-
OADP 1.5.0 introduces a new feature named OADP Self-Service, enabling namespace admin users to back up and restore applications on the OpenShift Container Platform. In the earlier versions of OADP, you needed the cluster-admin role to perform OADP operations such as backing up and restoring an application, creating a backup storage location, and so on.
From OADP 1.5.0 onward, you do not need the cluster-admin role to perform the backup and restore operations. You can use OADP with the namespace admin role. The namespace admin role has administrator access only to the namespace the user is assigned to. You can use the Self-Service feature only after the cluster administrator installs the OADP Operator and provides the necessary permissions.
- Collecting logs with the `must-gather` tool has been improved with a Markdown summary
-
You can collect logs and information about OpenShift API for Data Protection (OADP) custom resources by using the `must-gather` tool. The `must-gather` data must be attached to all customer cases. This tool generates a Markdown output file with the collected information, which is located in the `must-gather` logs clusters directory.
- `dataMoverPrepareTimeout` and `resourceTimeout` parameters are now added to `nodeAgent` within the DPA
-
The `nodeAgent` field in the Data Protection Application (DPA) now includes the following parameters:
- `dataMoverPrepareTimeout`: Defines the duration that the `DataUpload` or `DataDownload` process waits. The default value is 30 minutes.
- `resourceTimeout`: Sets the timeout for resource processes not addressed by other specific timeout parameters. The default value is 10 minutes.
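A sketch of these parameters in the DPA follows. The duration string format is an assumption based on Velero's Go-style durations; the defaults are as stated above.

```yaml
# Sketch: nodeAgent timeout parameters in the DPA; the duration string
# format is an assumption based on Velero's Go-style durations.
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      dataMoverPrepareTimeout: 30m   # default: 30 minutes
      resourceTimeout: 10m           # default: 10 minutes
```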
- Use the `spec.configuration.nodeAgent` parameter in the DPA for configuring the `nodeAgent` daemon set
-
Velero no longer uses the `node-agent-config` config map for configuring the `nodeAgent` daemon set. With this update, you must use the new `spec.configuration.nodeAgent` parameter in a Data Protection Application (DPA) to configure the `nodeAgent` daemon set.
- Configuring the DPA with the backup repository configuration config map is now possible
-
With Velero 1.15 and later, you can now configure the total size of a cache per repository. This prevents pods from being removed because of running out of ephemeral storage. See the following new parameters added to the `NodeAgentConfig` field in the DPA:
- `cacheLimitMB`: Sets the local data cache size limit in megabytes.
- `fullMaintenanceInterval`: Controls the removal rate of deleted Velero backups from the Kopia repository. The default value is 24 hours. The following override options are available:
  - `normalGC`: 24 hours
  - `fastGC`: 12 hours
  - `eagerGC`: 6 hours
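A sketch of these parameters follows. Their placement under `nodeAgent` in the DPA and the `fullMaintenanceInterval` value format are assumptions; verify against the installed schema.

```yaml
# Sketch: per-repository cache and maintenance settings; placement
# and value formats are assumptions.
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      cacheLimitMB: 1024               # local data cache limit in MB
      fullMaintenanceInterval: fastGC  # normalGC (24h) | fastGC (12h) | eagerGC (6h)
```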
- Enhancing the node-agent security
-
With this update, the following changes are added:
- A new `configuration` option is now added to the `velero` field in the DPA.
- The default value for the `disableFsBackup` parameter is `false` or not set. With this update, the following options are added to the `SecurityContext` field:
  - `Privileged: true`
  - `AllowPrivilegeEscalation: true`
- If you set the `disableFsBackup` parameter to `true`, the following mounts are removed from the node-agent:
  - `host-pods`
  - `host-plugins`
- The node-agent always runs as a non-root user.
- The root file system is changed to read-only.
- The following mount points are updated with write access:
  - `/home/velero`
  - `/tmp/credentials`
- The `SeccompProfileTypeRuntimeDefault` option is used for the `SeccompProfile` parameter.
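The hardening option described above uses the `spec.configuration.velero.disableFsBackup` field, for example:

```yaml
# Sketch: opting out of file system backup, which removes the privileged
# host-pods and host-plugins mounts from the node-agent.
spec:
  configuration:
    velero:
      disableFsBackup: true
```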
- Adds DPA support for parallel item backup
-
By default, only one thread processes an item block. Velero 1.16 supports a parallel item backup, where multiple items within a backup can be processed in parallel.
You can use the optional Velero server parameter `--item-block-worker-count` to run additional worker threads to process items in parallel. To enable this in OADP, set the `dpa.Spec.Configuration.Velero.ItemBlockWorkerCount` parameter to an integer value greater than zero.
Note
Running multiple full backups in parallel is not yet supported.
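In DPA YAML, the Go-style `dpa.Spec.Configuration.Velero.ItemBlockWorkerCount` parameter corresponds to a field along the following lines; the exact YAML key casing is an assumption derived from the Go field name.

```yaml
# Sketch: enabling parallel item backup; the YAML key casing is assumed
# from the Go field name ItemBlockWorkerCount.
spec:
  configuration:
    velero:
      itemBlockWorkerCount: 4   # a value greater than zero adds worker threads
```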
- OADP logs are now available in the JSON format
-
With the release of OADP 1.5.0, the logs are now available in the JSON format. This helps by providing pre-parsed data to an Elastic log management system.
- The `oc get dpa` command now displays the `RECONCILED` status
-
With this release, the `oc get dpa` command now displays the `RECONCILED` status instead of displaying only `NAME` and `AGE`, to improve the user experience. For example:

    $ oc get dpa -n openshift-adp
    NAME            RECONCILED   AGE
    velero-sample   True         2m51s
Resolved issues
- Containers now use `FallbackToLogsOnError` for `terminationMessagePolicy`
-
With this release, the `terminationMessagePolicy` field can now set the `FallbackToLogsOnError` value for the OpenShift API for Data Protection (OADP) Operator containers such as `operator-manager`, `velero`, `node-agent`, and `non-admin-controller`. This change ensures that if a container exits with an error and the termination message file is empty, OpenShift uses the last portion of the container log output as the termination message.
- Namespace admin can now access the application after restore
-
Previously, the namespace admin could not execute an application after the restore operation.
The execution failed with the following errors:
- `exec operation is not allowed because the pod's security context exceeds your permissions`
- `unable to validate against any security context constraint`
- `not usable by user or serviceaccount, provider restricted-v2`

With this update, this issue is now resolved and the namespace admin can access the application successfully after the restore.
- Specifying status restoration at the individual resource instance level using the annotation is now possible
-
Previously, status restoration was only configured at the resource type level using the `restoreStatus` field in the `Restore` custom resource (CR). With this release, you can now specify the status restoration at the individual resource instance level using the following annotation:

    metadata:
      annotations:
        velero.io/restore-status: "true"

- Restore is now successful with `excludedClusterScopedResources`
-
Previously, on performing the backup of an application with the `excludedClusterScopedResources` field set to `storageclasses` and `Namespace`, the backup was successful but the restore partially failed. With this update, the restore is successful.
- Backup is completed even if it gets restarted during the `waitingForPluginOperations` phase
-
Previously, a backup was marked as failed with the following error message:

    failureReason: found a backup with status "InProgress" during the server starting, mark it as "Failed"

With this update, the backup is completed if it gets restarted during the `waitingForPluginOperations` phase.
- Error messages are now more informative when the `disableFsBackup` parameter is set to `true` in the DPA
-
Previously, when the `spec.configuration.velero.disableFsBackup` field of a Data Protection Application (DPA) was set to `true`, the backup partially failed with an error that was not informative. This update makes the error messages more useful for troubleshooting, for example, by indicating that `disableFsBackup: true` is the cause of the issue in a DPA, or that a non-administrator user does not have access to a DPA.
- AWS STS credentials are now handled in the `parseAWSSecret` function
-
Previously, AWS credentials using STS authentication were not properly validated.
With this update, the `parseAWSSecret` function detects STS-specific fields, and the `ensureSecretDataExists` function is updated to handle STS profiles correctly.
- The `repositoryMaintenance` job affinity config is available to configure
-
Previously, the new configurations for the repository maintenance job pod affinity were missing from a DPA specification.
With this update, the `repositoryMaintenance` job affinity config is now available to map a `BackupRepository` identifier to its configuration.
- The `ValidationErrors` field is cleared once the CR specification is correct
-
Previously, when a schedule CR was created with a wrong `spec.schedule` value and was later patched with a correct value, the `ValidationErrors` field still existed. Consequently, the `ValidationErrors` field displayed incorrect information even though the spec was correct. With this update, the `ValidationErrors` field is cleared once the CR specification is correct.
- The `volumeSnapshotContents` custom resources are restored when the `includedNamespaces` field is used in `restoreSpec`
-
Previously, when a restore operation was triggered with the `includedNamespaces` field in a restore specification, the restore operation completed successfully but no `volumeSnapshotContents` custom resources (CRs) were created and the PVCs were in a `Pending` status. With this update, the `volumeSnapshotContents` CRs are restored even when the `includedNamespaces` field is used in `restoreSpec`. As a result, an application pod is in a `Running` state after the restore.
- OADP Operator successfully creates a bucket on AWS
-
Previously, the container was configured with the `readOnlyRootFilesystem: true` setting for security, but the code attempted to create temporary files in the `/tmp` directory using the `os.CreateTemp()` function. Consequently, while using AWS STS authentication with the Cloud Credential Operator (CCO) flow, OADP failed to create the temporary files required for AWS credential handling, with the following error:

    ERROR unable to determine if bucket exists. {"error": "open /tmp/aws-shared-credentials1211864681: read-only file system"}

With this update, the following changes are added to address this issue:
- A new `emptyDir` volume named `tmp-dir` is added to the controller pod specification.
- A volume mount is added to the container, which mounts this volume to the `/tmp` directory.
- For security best practices, the `readOnlyRootFilesystem: true` setting is maintained.
- The deprecated `ioutil.TempFile()` function is replaced with the recommended `os.CreateTemp()` function.
- The unnecessary `io/ioutil` import, which is no longer needed, is removed.
For a complete list of all issues resolved in this release, see the list of OADP 1.5.0 resolved issues in Jira.
Known issues
- Kopia does not delete all the artifacts after backup expiration
-
Even after deleting a backup, Kopia does not delete the volume artifacts from `${bucket_name}/kopia/${namespace}` on the S3 location after the backup expires. Information related to the expired and removed data files remains in the metadata. To ensure that OpenShift API for Data Protection (OADP) functions properly, this data is not deleted, and it exists in the `/kopia/` directory, for example:
- `kopia.repository`: Main repository format information such as encryption, version, and other details.
- `kopia.blobcfg`: Configuration for how data blobs are named.
- `kopia.maintenance`: Tracks the maintenance owner, schedule, and last successful build.
- `log`: Log blobs.
For a complete list of all known issues in this release, see the list of OADP 1.5.0 known issues in Jira.
Deprecated features
- The `configuration.restic` specification field has been deprecated
-
With OpenShift API for Data Protection (OADP) 1.5.0, the `configuration.restic` specification field has been deprecated. Use the `nodeAgent` section with the `uploaderType` field to select `kopia` or `restic` as the uploader type. Note that Restic is deprecated in OpenShift API for Data Protection (OADP) 1.5.0.
Technology Preview features
- Support for HyperShift hosted OpenShift clusters is available as a Technology Preview
-
OADP can support and facilitate application migrations within HyperShift hosted OpenShift clusters as a Technology Preview. It ensures a seamless backup and restore operation for applications in hosted clusters.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.