From YouTube: Velero Community Meeting / Open Discussion - Sept 22, 2020


Status Updates
nrb
Documenting plugin release process
Need to get Azure and CSI plugins released
Stop trying to push docker images on forks
Reviews on main repo.
Thinking about ways to make Velero more concurrent/faster as part of v1.6; no concrete suggestions yet. I owe a high-level goals doc per discussion last week.
Maintainers please take a look at https://github.com/vmware-tanzu/velero-plugin-for-csi/pull/70; needed for the next version of the CSI plugin
carlisia
Last week worked on v1.5.1 release
Reviews
Piling up: more reviews, GH issues + Helm chart
Working on the CLI side of the download request migration to kubebuilder/controller-runtime.
bridget
Worked on some improvements to our release scripts following the 1.5.1 release
Now looking at internal build processes
dsmithuchida
Resource blocking in the vSphere plugin for Project Pacific internal resources
OpenSource project approvals for GVDDK (currently part of Astrolabe) and Data Generator (Kibishii) test tool

Discussion Topics
nrb: Defining prerelease to GA timeframes
From last week:
Prereleases ~1 week between each?
RC to GA, ~2 weeks?
Phuong - 3 month release cycle is acceptable for them, waiting for features on that timeframe is reasonable. Longer is too much.
The RC was helpful to integrate and test with their product. It meant updating to the actual release was just removing some characters.
1 week seemed reasonable to them, but if they hit bugs it may not be enough time.
Dylan - Red Hat’s Konveyor lags a little. They want to support older Kubernetes releases like v1.7 due to supporting OpenShift 3. (this isn’t determined for upstream Velero)
Konveyor handles OpenShift 3-to-4 and 4-to-4 migrations.
Red Hat also has OADP for OpenShift data protection, and their Velero fork is used there, too.
Red Hat likes the 3 month release cycle, especially for OADP. Backwards compatibility in Konveyor is trickier.
For OADP, they tried the RC for basic tasks.
poojita: Recover OpenShift’s native resource: DeploymentConfig
The Velero restore error seen is captured below:
Velero failed to restore namespace frank3: {"namespaces":{"frank3":["error restoring imagetags.image.openshift.io/frank3/httpd-example:latest: ImageTag.image.openshift.io \"httpd-example:latest\" is invalid: spec: Required value: spec is a required field during creation"]}}. Velero restore '51685651-5369-51e1-88df-8977874919ca-2020-09-16-08-53-48-frank3' failed: {"phase": "PartiallyFailed", "warnings": 6, "errors": 1}
Dylan - There’s an OpenShift plugin that can help restore these CRDs on vanilla Velero installs. https://github.com/konveyor/openshift-velero-plugin
For ImageTag, the plugin skips restoring these resources entirely.
Plugin recreates ImageStream, which then recreates the ImageTag.
ImageTag is a new, undocumented resource in OpenShift v4.4.
The plugin is used in the context of the migration product (Konveyor) and data protection (OADP).
Can file GitHub issues on that repo if you have issues/questions.
OADP bundles Velero and the AWS plugin on OpenShift. The benefit here is that it backs up the images to the S3 bucket.
The plugin can't do this by itself right now; OADP sets up an image repo that the plugin doesn't orchestrate.
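For reference, the skip-ImageTag behavior described above can also be approximated on a vanilla Velero install by excluding the resource from the restore. A minimal sketch of a Restore spec using Velero's `excludedResources` field; the restore and backup names here are hypothetical:

```yaml
# Sketch: skip ImageTag objects so the restored ImageStream
# can recreate them, similar to what the OpenShift plugin does.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: frank3-restore          # hypothetical name
  namespace: velero
spec:
  backupName: frank3-backup     # hypothetical backup name
  includedNamespaces:
    - frank3
  excludedResources:
    - imagetags.image.openshift.io
```

The CLI equivalent would be `velero restore create frank3-restore --from-backup frank3-backup --exclude-resources imagetags.image.openshift.io`.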
Alay - Plugin dependencies are challenging
If you deploy Velero by itself, without a wrapper, there are challenges in connecting plugins to external systems such as StatefulSets or Deployments.
Dave: The difference between the vSphere plugin and OADP is at what point in the lifecycle they're active.
May be able to extend plugins via the Velero app operator that VMware is working on.
RH’s requirements:
Before the plugin runs, ensure the dependency is healthy
During/after backup, ensure the dependency is healthy.
If it’s not, short circuit the operation instead of trying the full backup operation and failing.
Narashima - Took a backup w/ Velero and tried to restore it to another cluster. ReplicaSets/Deployments are getting duplicated in the new cluster.
kubectl get shows multiple entries.
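One possible cause, offered here as an assumption rather than a diagnosis from the meeting: if both Deployments and their ReplicaSets are restored, the Deployment controller may spin up a fresh ReplicaSet alongside the restored one, which shows up as duplicates in `kubectl get replicasets`. A sketch of a Restore spec that excludes ReplicaSets and lets the restored Deployments recreate them (names are hypothetical):

```yaml
# Sketch: restore Deployments only and let the Deployment
# controller recreate ReplicaSets, avoiding duplicates.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: migrated-restore              # hypothetical name
  namespace: velero
spec:
  backupName: source-cluster-backup   # hypothetical backup name
  excludedResources:
    - replicasets.apps
```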

Shoutouts
Slow week for contributions, but thanks to mikkael for the PR to allow users to change the container’s timezone