From YouTube: Velero Community Meeting - September 7, 2021
A
Hello, everyone, and welcome to the Velero community meeting for September 7, 2028... 2021, not '28; we're not that far into the future. We don't have flying cars yet. All right, we're going to go through some status updates, and then we'll see if we have some discussion topics; if people are joining today, we move those items up. So first up we have Bridget.
B
Hi everyone. So since our last meeting, my focus has mainly just been trying to close things out for the 1.7 release, which we're hoping to do, and to get a release candidate out today. Although an issue has come up which I'd like to discuss, to see whether it will impact that release candidate; but we can discuss that afterwards.
B
Yeah, I just put in a link to the things that I fixed or that we're currently investigating. So this particular issue was: we were backing up downward API volumes with restic, and this exposes the same issue that we saw with backing up and restoring projected volumes.

So yeah, that will go into 1.7. Just looking at the participants, I don't know if we want to chat about this now; I had planned to put this in as a discussion topic, but since it's only the five of us, and Raphael and Fong aren't here, maybe we can jump into the discussion of this now.
B
Yeah, so this is something I'm currently working on; we'll chat about it in discussion topics. That's it for me.
C
Okay. As Bridget mentioned, we're rolling towards RC1 for 1.7, and it passed all the automated tests on AWS and vSphere, so we're starting to move forward with the manual testing. One question we were going to put to the maintainers is what availability they have to take on testing additional platforms or stuff like that; we'll bring that up later on. Let's see: Item Snapshotter, pushing the PR towards actually being ready, but we won't merge that until 1.8, so no rush, but probably a good discussion on that next week. And then I had community support; there wasn't anything too weird this week. So that's it for me.
A
Sounds good, thank you, Dave. Does anyone have anything else they want to share about what they've been working on? Eleanor? Scott?
A
All right. Since Raphael isn't here, or Fong, we're going to punt 4080 as well until next week. Dave, did you want to discuss 3708?
C
Yeah, actually, I was going to toss a couple of things out there for the plugin hooks, just for initial discussion. So this is adding pre- and post-restore hooks that could happen for the whole backup, and I'm considering asking that this be something a little more generalized, so we can add pre and post hooks on different types of objects: not just the cluster itself, but maybe as we hit a namespace or something. Any thoughts on that from anybody?
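To make the idea concrete, a generalized hook registration might look something like the sketch below. This is purely illustrative; the scope names and interface are hypothetical, not an existing Velero API.

```go
package hooks

import "context"

// HookScope is a hypothetical way to express what a hook fires around:
// the whole backup, a single namespace, or an individual resource.
type HookScope string

const (
	ScopeBackup    HookScope = "backup"
	ScopeNamespace HookScope = "namespace"
	ScopeResource  HookScope = "resource"
)

// RestoreHooks generalizes the whole-backup pre/post restore hooks in the
// PR under discussion: a plugin registers a scope, and the restore calls
// it as it enters and leaves each object of that scope.
type RestoreHooks interface {
	Scope() HookScope
	PreRestore(ctx context.Context, name string) error
	PostRestore(ctx context.Context, name string) error
}
```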
C
Okay, I'll kick that around with Raphael. Scott, go ahead.
E
Yeah, if we're ready to move to the next issue: I know you pointed out that there's that PR out there to deal with the namespace remapping on PVs. Basically, the background for this is a couple of things. First of all, we're already doing this if there's a snapshot: if you change namespaces on your restore and there's a snapshot for the PV, then we're updating the reference to the PVC. And we ran into an issue with this.
E
One of our migration use cases is something that we call a PV move, where we're not doing a snapshot or a restic backup; we're just backing up the Kubernetes resources for the PV and PVC and then restoring them to a different cluster. The main use case for this is an external NFS volume: the idea being that we have an NFS server outside the cluster, and we want to migrate an application that mounts it to another cluster.
E
So in that case, when we're not changing namespaces, everything works fine. We just do a normal backup of the PV and PVC, we do a normal restore, and obviously we shut down the application that's surrounding that in the source cluster; and then in the destination cluster we're pointing to that same external NFS share. Then we tried the same use case but changed the namespace in the new cluster.
E
So instead of going to the same name and namespace in the new cluster as it was in the old cluster, we use the namespace remapping feature on restore. In that case it fails, because we're not updating that claim reference namespace. So in the destination cluster the PVC shows up as pending, because it says the PV is already in use by another PVC. Of course it's not.

The reason is that the matching doesn't match up: it sees the same name, but the namespaces are different. So this PR basically makes the namespace-remapping restore work the same way as a same-namespace restore.
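In Go terms, the fix being described amounts to rewriting the PV's claim reference namespace along with the PVC. A minimal sketch, with a hypothetical helper name rather than the PR's actual code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// remapClaimRef rewrites a PV's claimRef when a restore remaps namespaces.
// Without this, the restored PVC in the new namespace never binds: the PV
// still claims a PVC of the same name in the old namespace, so the PVC
// sits pending because the PV looks "already in use" by another PVC.
func remapClaimRef(pv *corev1.PersistentVolume, namespaceMap map[string]string) {
	if pv.Spec.ClaimRef == nil {
		return
	}
	if target, ok := namespaceMap[pv.Spec.ClaimRef.Namespace]; ok {
		pv.Spec.ClaimRef.Namespace = target
		// Binding also matches on UID, so clear the stale identity and let
		// the controller rebind to the freshly restored PVC.
		pv.Spec.ClaimRef.UID = ""
		pv.Spec.ClaimRef.ResourceVersion = ""
	}
}

func main() {
	pv := &corev1.PersistentVolume{
		Spec: corev1.PersistentVolumeSpec{
			ClaimRef: &corev1.ObjectReference{Namespace: "app", Name: "data"},
		},
	}
	remapClaimRef(pv, map[string]string{"app": "app-new"})
	fmt.Println(pv.Spec.ClaimRef.Namespace) // prints: app-new
}
```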
E
Okay, right. If the concern is pointing at the actual underlying volume, for example this external NFS volume: I wonder, though, do we have the same problem even when you don't change namespaces? If I restore a PV and a PVC, and that PV points to the external NFS share, there's nothing guaranteeing that the other cluster I backed it up from isn't still live and pointing there.
C
That's true, though we expect that the storage system, if it's going to enforce anything, will keep you from doing a multi-attach. So that's kind of a basic issue that we already have, but we've got it covered. But what Bridget ran into: we were looking through this, like, hey, this hasn't changed the volume handle, so what's going to happen? And so Bridget tried it, yeah, and even with ReadWriteOnce volumes...
B
Right, yeah. So I was trying to start with the case where you might do your backup and remap into a different namespace, but you still have the existing workload that's still there. So all of a sudden now you have two PVs.
E
Yeah, and I guess the issue is that if the cluster you're restoring to is the same one you're backing up from, this is only a concern in this remapping case, because in the case where you're not remapping, it's already there. And maybe the point that I was making on the PR was that some of this risk already exists even if you're not remapping namespaces: if you're restoring to a different cluster, you could have two different applications pointing to the same underlying storage in different clusters.
E
Right, exactly. So yeah, if it's an EBS volume, because they're different nodes, that would error out on you. If it's an NFS volume or something like that, then you're not going to have a storage provider that's managing it.
E
I guess... because I'm just trying to think about it: here's a case where, if we do this restore to a different cluster and we keep the namespace the same, everything works fine for this kind of, quote, PV move use case, because Velero just lets it happen; and because, from Velero's point of view, it has no way of knowing whether this is a disaster recovery situation and the PV and PVC are gone, or, as in this case, it's a different cluster.
E
But in the changed-namespace case, I guess the concern there is... I don't know if there's a way for Velero to check. In other words, the risk here is that we're changing the reference to point to something different, so that if the original PV is already there, then you have this, you know, two-PVCs-or-two-PVs-pointing-to-the-same-volume situation.
E
It would cover our... well, I think it would cover our case. In general, obviously, it would fail in kind of sporadic ways. In other words, if you're migrating an application from cluster A to cluster B, and you're changing namespaces because the namespace you were using is in use, it's possible that the same name is in use by a completely unrelated PVC.
E
You know, if I migrate an application, but some different version of that same application is in the same namespace, and that's why I'm remapping namespaces, it could fail in those cases. But in the general case of "I'm just creating this in a new cluster and I want a different namespace, but there's no contention," then this would narrow the number of failure situations for us. In other words, what if we allow this remapping to work, but only if the thing we're mapping away from doesn't exist in the cluster?
E
Yeah, yeah, exactly. So I guess, if you want to think about the different scenarios: what's happening before this PR is that it basically always fails, whether or not there's contention; any time you do a changed namespace with this PV move use case, it's going to fail.

It sounds like the concern is that, with the PR as written, it's going to apparently succeed in all cases, but there might be rare situations, or maybe not so rare, that's the question, where there's data corruption, and that's the thing we have to avoid. So there may be a solution in between here: instead of just blindly changing the reference, we do a check to see, does the thing that we're changing away from already exist in the cluster?

If it does, then we don't change it; and if it doesn't exist, then there's no contention in this cluster, so we go ahead and change it.
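A sketch of that in-between check, assuming a standard client-go clientset and a hypothetical helper name:

```go
package restore

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// safeToRemap reports whether the PVC we are "mapping away from" is absent
// from the target cluster. If it is absent there is no contention, so
// rewriting the PV's claim reference cannot leave two live PVCs pointing at
// the same underlying volume.
func safeToRemap(ctx context.Context, client kubernetes.Interface, originalNamespace, pvcName string) (bool, error) {
	_, err := client.CoreV1().PersistentVolumeClaims(originalNamespace).Get(ctx, pvcName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // nothing to step on; go ahead and remap
	}
	if err != nil {
		return false, fmt.Errorf("checking for existing PVC: %w", err)
	}
	return false, nil // original PVC is still live; leave the claim reference alone
}
```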
E
I mean, I created a PVC and the PV that pointed to this EBS volume, and then I created another one that pointed to the same EBS volume; at that level, unless you want to examine every PV in the cluster and compare, I think that kind of basic sanity checking would at least handle the normal, default case: I back up in this cluster, I restore in this cluster to a different namespace, not realizing that I have volumes here that are going to be stepping on each other. I think that's the case we're concerned about here: the case where you're restoring to the same cluster, and you're changing the namespace, and your new PVCs in your new namespace and your old PVCs in your old namespace are pointing to the same underlying volumes and stepping on each other, in ways that, if you're not using something like EBS, you won't know about; you'll just have the same kind of failure you'd have if you managed NFS shares incorrectly, mounted into too many places, or whatever. So checking for that before doing this remapping ought to prevent that problem. And I think any of these more obscure, harder-to-come-up-with failure cases are probably ones we're already running the risk of anyway, with any restore of a PV that's not from a snapshot or restic.
B
I think... so, one thing that came up as we were looking at this was the idea of folks maybe just trying Velero out, maybe checking whether it worked before they commit to it, and then, in doing that process, they end up, you know, doing something...
E
The things you're doing... so you're probably not configuring restic, you're probably not doing snapshots, you may or may not even know you have volumes in the namespace you're testing, and then you're going to restore to the same cluster. And yeah, that's the kind of combination where this change would probably be risky. That makes sense, I guess.
E
And then, I think... I'd have to go back and look at the PR and refresh my memory of how I did it, but I think we can just look in the cluster to say, before we change this: is it currently pointing to an actual, valid PVC? If it is, then we don't change it, because that would create two pointing to the same place. If it's not pointing to an existing PVC, then I think, in that same-cluster use case...
...it's probably reasonably certain that you're not stepping on yourself somehow. And I think the only easy way to fail that check would be to take the same backup and restore it... or to take two backups of the same namespace, after deleting the original namespace, and restore them to two different namespaces; that would still be risky. I don't know whether that's a use case that is common, or one that we need to worry about, or whether it's even a reasonable thing to do in the first place. And obviously, from our point of view, that means we don't enable this particular use case of restoring to a different cluster, changing the namespace, and pointing to the same external storage. I mean, we're only really doing this for NFS, for the most part, because of the EBS concern: for EBS we're going to do snapshots, or we're going to do restic, or that kind of thing anyway. The main use case for this is NFS, where you're saying: hey, this is external to the cluster, I'm just migrating, and I'm not doing a backup for backup purposes, so there's no need to copy all this data. That's the use case where we're doing this PVC move...
C
...without copying the data. We've had requests to do this, and we could add a flag to say --migrate or something like that, or adopt-existing, yeah. But I hesitate to add yet another flag, yeah.
E
I mean, right now we've kind of just relied on the fact that Velero, if it tries to do a snapshot and it can't, because either it's not a snapshottable system or it doesn't have a volume snapshot location, then it just defaults to copying... you know, backing up and restoring the PV definition. And in fact that EBS use case, you know, when we try to move...
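That fall-through behavior reduces to a decision like the following. This is a paraphrase with made-up field names to show the shape of the logic, not Velero's actual code:

```go
package backup

// volumeInfo captures the facts the backup needs about one volume.
type volumeInfo struct {
	resticAnnotated     bool // pod opted into restic for this volume
	snapshottable       bool // provider supports snapshots for this volume
	hasSnapshotLocation bool // a VolumeSnapshotLocation is configured
}

type strategy string

const (
	useRestic    strategy = "restic"
	useSnapshot  strategy = "snapshot"
	resourceOnly strategy = "resource-only" // back up just the PV/PVC objects
)

// pick mirrors the default described above: if a snapshot isn't possible,
// the backup simply carries the PV definition along with no data copy,
// which is exactly what the external-NFS "PV move" use case relies on.
func pick(v volumeInfo) strategy {
	switch {
	case v.resticAnnotated:
		return useRestic
	case v.snapshottable && v.hasSnapshotLocation:
		return useSnapshot
	default:
		return resourceOnly
	}
}
```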
E
So that's something that we may want to look at at some point, to sort of explicitly enable that, but at this point it really hasn't been a priority. So the use case that I'm envisioning with this PR is really mainly NFS and other kinds of static storage situations, where we can't do a snapshot anyway, and where we don't want to do restic, because we don't need to go through that overhead.
C
I mean, the other thing we could do, which I hesitate to suggest, would be to check and see if it's ReadWriteMany and go ahead with it in that case, but I think that introduces a little too much "if it's Wednesday, this thing works; if it's Tuesday..."
E
Yeah, yeah, yeah. I guess the other thing is that, in the case that Bridget saw where this was causing problems...
B
So, part of the... I think one of the checks that it does is, I think, a check to see whether the PV already exists, and if it does, it gives it a new name. So it renames it.
E
Oh, that's right. Oh no, you're right, you're right, that's right. Which, again, is something you want in the case where you're doing a snapshot, for example. Okay, I've got it. So I guess that's another possibility: maybe we update the PVC mapping, but we don't offer to rename the PV. I don't know if that would solve this in a simpler way, because it still doesn't prevent against some other...
E
I wonder if, instead of checking things, what if we just don't do the PV renaming part and only update the PVC mapping? Because in the case where you're doing a backup and then doing an immediate restore to a new namespace, that PV will exist, so then it just wouldn't restore; it would be the usual Velero behavior there, like we're seeing now. So then, if we did that, we'd have limited the scope of this PR to just updating that PVC claim reference.
E
Right: either error out, or Velero would put a message in the log saying, hey... because when Velero attempts the restore, you get the "already exists" error, and then it checks to see that it's different, and then you get the log message saying, you know, "PV already exists in the cluster and it's different from the backed-up version," and that's just the warning saying we're not doing anything with it.

So in that case we're not creating a second PV if one already exists; but if it doesn't exist, we're updating the PVC namespace remapping. So again, the use case here is: I do a backup of a namespace with a PV and PVC; whether it's to a new cluster, or to the same cluster after deleting that PV and PVC, the PV doesn't exist, and I restore to a new namespace; it'll work, and it'll update that PV-to-PVC reference. So that works. But...

...the PV is going to restore anyway, and it's now pointing to the PVC that you restored, rather than to a non-existent PVC or to some existing PVC that's pointing elsewhere. Because you're restoring the PV and PVC together, they're now pointing to each other, as you'd expect them to. But if we don't do the rename of the PV, I think we remove the possibility of creating that "two PVs pointing to the same place in the same cluster" problem.
E
So, instead of adding new checks to this PR, I think the answer is to just remove about half of the functionality: instead of doing both the PV rename and the PVC claim reference update, let's just update the PVC reference and not do the PV rename. And that's the only part that my use case really needed; I was just trying to be more complete, not realizing it just did the wrong thing there. So, okay, okay.
E
That's fine, yeah, no worries. And this is one of those, too, where, you know, the version in place in this PR... because we were hitting bugs with it, the fork that we use for our builds had already included it. But again, for the migration use case this bug is not a concern; but once we upgrade...
...to 1.8, I can abandon this local commit and just use the upstream version of it, and then that goes away. So getting it in for 1.8 fixes the problem upstream, and at the same time we're not waiting on it for a release, because we've got this somewhat flawed version, but it isn't flawed in our use case. So, you know, but yeah, I'll get that updated. That makes sense.
B
Yeah, yeah. If everyone's okay for us to dive into this... just because I started to look at this and realized that it is affecting backups on 1.22.
B
So I just wanted to see if this is something that we need to potentially try and get in for 1.7, just because I think it might trip up a few people. I'm still trying to understand it, but basically the issue is that there are folks who had v1beta1 CRDs in their cluster, and they've gone through an upgrade process to use the v1 version of the CRD. But because they've done this upgrade to v1, and there are fields in the CRD which were removed in v1, the CRDs that they have in their cluster now aren't valid. When Velero is doing a backup, it checks for these fields to try and determine... it checks for that field to say: oh, this is a v1beta1 CRD, I should back it up using this version of the client. But then, because they've gone through this process and they've upgraded their clusters to 1.22, the v1beta1 API doesn't exist.
B
And then, if you just scroll down to the very bottom, to the very last comment, so that we can look at one of these CRDs, actually...
A
Sure thing. Remote... remote-controlling me while I share my screen is a bit hard, so, yeah.
B
Okay, there we go. Can everyone see my screen? I'm just going to make it bigger. Okay. So the issue we have is... yeah. So they are trying to do a backup on 1.22, so Kubernetes 1.22, and they're getting this because Velero is attempting to use the v1beta1 CRD API to back up their CRDs, but this API no longer exists on Kubernetes 1.22.
B
It's been removed now. So this person pointed out that Velero does some magic to detect whether it should use the v1beta1 API or the v1 API when backing up the CRDs, and some of the things that it checks for are: does it have this preserveUnknownFields field set? Does it have a non-structural schema? Does it have a single version?
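Paraphrased in Go against the apiextensions v1beta1 types, the heuristic has roughly this shape (this mirrors the checks described above, not Velero's exact source):

```go
package backup

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

// looksLikeV1beta1 reports whether a CRD can only be expressed by the
// v1beta1 API: it preserves unknown fields, or carries a NonStructuralSchema
// condition, or declares a single top-level version (a v1beta1-only field).
func looksLikeV1beta1(crd *apiextv1beta1.CustomResourceDefinition) bool {
	if crd.Spec.PreserveUnknownFields != nil && *crd.Spec.PreserveUnknownFields {
		return true
	}
	for _, cond := range crd.Status.Conditions {
		if cond.Type == apiextv1beta1.NonStructuralSchema && cond.Status == apiextv1beta1.ConditionTrue {
			return true
		}
	}
	if len(crd.Spec.Versions) == 0 && crd.Spec.Version != "" {
		return true
	}
	return false
}
```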
B
They gave an example of one of their CRDs, so it has this non-structural schema condition set.
C
No, what I'm thinking is... I mean, one possibility is we could fall back: if the v1beta1 API fails, you say, okay, fine, I'll use v1, which might not be bad. But then, if this is going to fail anyway on either backup or restore, I'm not sure there's a point in doing too much more work, other than simply logging that the CRD is invalid and moving on.
B
Yeah, this is just... okay, I put this out there just because I wasn't really sure what we should be doing. One of my thoughts was to potentially check for the status here, and if there's some way we can detect that the CRD is invalid, then we'd be like, okay...
C
No, I think that... yes, they're useful. So I think, one: if we can back it up using the v1 APIs and restore it, and it goes back to at least its original broken state, we're probably good; that would probably be a good thing to do. So if, in the code, we get an error back like "the v1beta1 API doesn't exist," we say: okay, fine, then we'll flip off that flag and we'll just move forward using the v1 API, yeah.
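A sketch of that fallback, with a hypothetical function name and the simplifying assumption that any error from the legacy endpoint means we should try v1 (a real fix would distinguish "API removed" from transient errors):

```go
package backup

import (
	"context"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// fetchCRD tries the v1beta1 client first, mirroring the current detection
// logic, and flips to the v1 client when the legacy endpoint is unavailable,
// as on Kubernetes 1.22 where apiextensions.k8s.io/v1beta1 is gone.
func fetchCRD(ctx context.Context, c apiextclient.Interface, name string) (runtime.Object, error) {
	if crd, err := c.ApiextensionsV1beta1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{}); err == nil {
		return crd, nil
	}
	// Fall back rather than failing the whole backup.
	return c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
}
```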
E
So if you're on, you know, a Kubernetes 1.12 cluster and you do a backup, when you restore that CRD it's going to fail unless you create the CRD before... I mean, in one situation, and in fact I've been trying to fix a bug in our migration product for that purpose: to basically warn the user ahead of time to say, hey, your CRD versions between the source and destination clusters are incompatible, because you have v1beta1 on the source and you don't have it on the destination, and then letting them know: hey...
E
You have these CRDs that are involved in your namespaces; you need to create those ahead of time before you restore to those clusters, because the backup will include the CRD, and if you try to restore that, it will show up with a restore error. But of course, if the CRD already exists in the cluster, it'll just be the standard Velero warning, like we were talking about before: this thing already exists in the cluster.
E
So I just know, from working on that bug, that yes, if you have a backup with the v1beta1 CRD in it, when you try to restore that to a 1.22 cluster, that resource will not restore, because that API version doesn't exist in the cluster.
C
Can we try this, just as a simple test? So we've got the whole YAML for their CRD; if we just do a kubectl apply of this, what happens?
B
Yeah, yeah, so I think we have... I'm trying to think. We have, like, some other action that takes place elsewhere in the cluster that does a remapping of the CRD versions. I'll take a look and see what it's doing, to see if there's some case that it could be catching. But yeah, I think that there is potential for us to...
C
...what this is doing is: we grab it as an unstructured object and then walk across it looking for these magic fields, which, you know, is good, but it doesn't mean that we can... you know, that's all this is doing.
B
So do you think we should still try to investigate? Maybe I'll take the steps to investigate the behavior and see what's going on.
B
Yeah, yeah, maybe, yeah. We could probably put something a little bit more verbose in the logging, just to indicate what's going on; because there's something like "I'm going to attempt to back this up as v1beta1," maybe just expand on that a little bit to say "because this particular state on the CRD is true."
B
Okay. But I don't... yes, I wasn't really sure if this is something that we would want to try and fix for 1.7. I'll take a quick look at it this afternoon, but I don't think there's a reason to hold off the release for something like this.
E
So I had a quick question, since we're talking about 1.7 and CRDs. I know when we came up with the plan to release 1.6.3 with the v1 version of the Velero CRDs, we had said tentatively at that point that for 1.7 we were probably going to drop the v1beta1 support going forward. Is that not the case? Are we going to keep those at this point?
E
Okay, so 1.7 will also support v1beta1; so our minimum supported version is still, I think, 1.12.
A
That's it right now. Unfortunately, I didn't capture the contributor shout-out, so I'll do that next week.