From YouTube: Velero Community Meeting/Open Discussion - June 22, 2021
A: Hello everyone, and welcome to the Velero community meeting and open discussion. Today is June 22nd, 2021. If you haven't filled out who you are in the attendee list yet, please do so. We've got some status updates, then some discussion topics and, of course, contributor shout-outs. I have the first status update, so I'll just continue on here.
A: So, as we talked about, I think it was last week, we're revamping the community meetings. We're going to have community meetings in the US time zone, and we'll have community meetings in the Asia-Pacific time zone, Beijing time specifically. We had a poll out, and the poll showed two different times that worked for pretty much everyone who responded. So thank you.
A
Everyone
who
responded
super
super
nice
of
you
and
we
have
decided
that
we're
gonna
do
a
community
meeting
early
morning
in
beijing,
so
8
am
in
beijing,
which
will
be
8
p.m.
Eastern
here
in
the
us,
so
anyone
who
wants
to
join
in
from
the
us
feel
free
to
do
so,
and
anyone
who
wants
to
join
in
from
asia,
pacific
or
the
beijing
or
taiwan
or
wherever
you
are
over
there
feel
free
to
join
as
well.
So
we're
sending
those
out
we're
going
to
start
that
schedule
in
july.
A
So
we'll
have
the
first
and
third
tuesday
is
going
to
stay
just
like
this,
so
we'll
have
it
at
12
p.m.
Eastern
9
a.m,
pacific
on
tuesdays,
just
as
regular,
so
first
and
third,
and
then
every
second
and
fourth
tuesday,
we'll
move
over
to
the
beijing
time
zone.
So
they
would
be
on
tuesday
evenings
here
in
the
us
wednesday
morning
over
in
the
beijing
time
zone.
A: Any comments? All right, next up we have Bridget.
B: Hi everyone. For this week I have been primarily focusing on getting the Velero 1.6.1 release ready. We spoke about it during last week's community meeting: there had been an issue discovered with performing restic backups and restores on Kubernetes 1.21, so we're working to try and get that out as soon as possible. The branch is ready, and all the changes have been merged into the release branch. I'm now just running some tests prior to creating the tag, so that will be out very soon.
A: Awesome, that's great to hear. Any questions or comments for Bridget, or on the 1.6.1 release?
D: Okay, so the problem I'm working with is in an OpenShift cluster, but the pattern I see might show up in any user application. Any application can have this pattern: you have an operator in a namespace, and that operator is probably listening to some CR, for example, and based on that CR it will create deployments, pods, etc. So here is what you see specific to OpenShift.
D: For example, the OADP operator will listen to the Velero CR, and when we have a configuration change in the Velero CR, the OADP operator will, based on that change, adjust the Velero deployment within the OADP operator namespace. So that is an example of how the pattern of an operator and a CR works in that context, and that looked very nice, and so on and so forth.
D
But
when
we
go
into
the
restore
right
in
the
restore
context-
and
you
see
that
if
I
happen
to,
if
I
happen
to
restore
the
operator-
part
first
right-
the
operator
part
will
grab
you
see
the
cr
in
that
namespace
and
it
will
based
on
that
and
do
some
operations
like
you
know,
change
the
deployment
delete
the
deployment
you
know
etc.
Right,
however,
we
are
still
in
the
restoring
process
at
that
time
and
how
the
level
restore
also
doing
restoring
pop
restoring
deployment
whatever.
D
So
this
kind
of
interaction
that
there
seem
to
be
like
a
two
workload
running
in
in
parallel.
At
that
point
right,
one
one
operation
is
running
by
the
operator
who
trying
to
do
something
with
the
deployment
based
on
the
cr
and
another
workflow
that
is,
the
valero
is
trying
to
restore
all
the
deployment
of
the
positive
health
right.
So
in
the
open
chip
context,
the
openshift
look
at
the
some
of
the
owners
reference
to
decide
whether
to
restore
or
not
that
a
certain
entity.
D
For
example,
you
could
see
the
part
belong
to
a
deployment
and
the
deployment
belongs
to
something
else.
Then
it
will
not
touch.
Will
not
even
restore
the
part,
for
example
right,
however,
that
relationship
the
the
the
the
backup
vendor
like
us
have
to
be
aware
exclusively
openshift
in
the
openshift
case,
the
openshift
does
you
know
no
overall
paths,
however,
if
the
user,
the
the
the
user
applications,
do
have
an
additional
relationship
like
that
that
openshift
plugin
doesn't
aware
or
our
backup
vendor
isn't
aware,
then
it
will
interact.
D
It
will
affect
the
restore
workflow
that
we
have.
So
that
is
the
problem
I
observed,
and
I
want
to
bring
out
here.
I
don't
know
if
I
have
any
solution
for
it,
but
one
one
of
the
things
that
I
hope
that
we
can
achieve
here
is
that
when
I
bring
out
this
topic,
maybe
we
can
come
up
with
some
kind
of
convention
to
how
to
do
that.
D
So
if
the
applications
happen
to
have
that
kind
of
operator
cr,
you
know
pattern,
hopefully
that
they
follow
some
patterns
so
that
the
up
the
backup
vendor
like
me,
being
able
to
come
up
with
some
generic
code,
that's
able
to
adapt
to
that
kind
of
workflow.
C: Velero doesn't need to back up anything except the resources themselves, but then we get into these race conditions on restore. We also have things like data services that are operator-driven, and we'd like to be able to expose not just backup and restore of the resource, but also application-specific backup and restore.
C
One
one
model
is
for
the
operators
themselves
to
start
exposing
out,
like
snapshot
apis
and
data
extraction
apis
and
I've
been
kind
of
pushing
that
along
with
the
astrolabe
stuff.
Not
everybody
may
do
that,
so
we
may
need
to
come
up
with
other
things.
Like
you
know,
you
could
imagine
having
you
know,
people
can
write
back
up
item
action,
plug-ins
as
it
is
that
would
handle
some
of
this,
but
the
difficulty
is
as
an
end
user
you're
suddenly
installing,
like
a
whole
bunch
of
plug-ins.
C
So
you
really
want
to
move
more
towards
like
a
self-discovery
model,
where
the
backup
application
can
look
at
the
kubernetes
namespace
and
understand
what
to
do,
and
you
know
we
have
like
pre
and
post
backup
execution,
hooks,
pre
and
post
service.
Maybe
I
don't
know
so.
There's
definitely
a
lot
to
figure
out
here
we're
working
with
some
of
it
in
our
own
management
cluster
right
now
is
we
have
a
procedure,
for
example,
where
we
can
pause
the
operators
there's
a
flag
you
can
set
in
the
kubernetes
resource.
C
It
says,
pause
the
management
pause
cappy
from
doing
stuff,
and
then
you
can
backup
and
restore,
and
during
that
state
the
opera.
The
controllers
are
supposed
to
not
try
to
reconcile.
So
we
can
wait
until
everything
has
been
restored
and
then
say:
okay
now
go
into
the
reconciled
and
start
dealing
with
it.
The
ordering
is
tricky
because
there's
no
right
now
a
lot
of
it
we're
doing
alphabetically.
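The pause-restore-unpause flow being described can be sketched as follows. This is a minimal illustration, assuming a Cluster API style `spec.paused` field (other operators may use an annotation instead), and the helper names are hypothetical, not any real API:

```python
# Sketch of the "pause reconcilers, restore, unpause" idea discussed above.
# The spec.paused field mirrors what Cluster API uses; treat it as an
# assumption for any other operator.

def set_paused(resource: dict, paused: bool) -> dict:
    """Return a copy of the resource with its pause flag toggled."""
    updated = dict(resource)
    updated["spec"] = {**resource.get("spec", {}), "paused": paused}
    return updated

def restore_with_pause(resources: list[dict], restore_one) -> list[dict]:
    """Pause every resource, restore everything, then unpause."""
    paused = [set_paused(r, True) for r in resources]   # controllers stop reconciling
    restored = [restore_one(r) for r in paused]         # safe to recreate objects now
    return [set_paused(r, False) for r in restored]     # resume reconciliation
```

The point of the ordering is that no controller reconciles while objects are being recreated, which is exactly the race described earlier in the discussion.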
D: Yeah, I also looked into that ordering, and I saw that currently in the [inaudible] restore path we do have a way to specify the order of the restore, but only across types, not among items of the same type. I forget exactly the name of that parameter, but there's an array where we can specify: restore this type first, then the other types, and so on and so forth.
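The type-level ordering being recalled here is, if memory serves, Velero's restore resource priorities setting: types named in a priority list restore first, in order, and everything else follows alphabetically (matching the "alphabetically" behavior mentioned earlier). A sketch of that logic, with an illustrative priority list rather than Velero's actual default:

```python
# Hypothetical priority list for illustration only.
PRIORITIES = ["customresourcedefinitions", "namespaces", "secrets", "configmaps"]

def restore_order(resource_types: list[str],
                  priorities: list[str] = PRIORITIES) -> list[str]:
    """Order resource types: prioritized ones first, the rest alphabetically."""
    prioritized = [t for t in priorities if t in resource_types]
    remainder = sorted(t for t in resource_types if t not in priorities)
    return prioritized + remainder
```

Note that, as said in the discussion, this only orders across types; it gives no ordering guarantee among items of the same type.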
D: I don't know if that would solve this problem, but it would be a nice thing to have. If you can say that before we restore the operator we restore these other things first, and then we restore the operator, in that way we can kind of guarantee that when the operator comes up it will not interfere with the other stuff. So that is one way, but I don't know if that would be the solution for this problem; still, it would be something nice to have.
D: One of the things I played with, because in OpenShift there is an internal registry, and the internal registry is actually running inside a namespace and it has a PVC: I can create a PVC and put that PVC behind an internal image, and then we save the image, and so on, for example if I want to back up the internal registry by migrating it to another namespace.
E: Yeah, let me jump in there. There are a couple of things going on, and some of them are very specific to OpenShift. The one that's not specific to OpenShift is really the operator-based backup and restore; we've hit this problem before in pure Kubernetes as well. You talked about Postgres before, and a perfect example is the Crunchy Postgres operator.
E: If you restore an operator-based application like Crunchy Postgres, you will run into problems out of the box, especially depending on your storage provider, because we restore all the pods, and then the operator goes and creates a new pod, and they're both bound to the same PVC. So how have we solved this with our plugin?
E: If a pod has an owner reference (operator-based resources always get deployed with owner refs), we'll skip the restore. If you're using something like restic, that will probably break things, because then the annotations for restic are kind of on the wrong resource. But you're talking about internal images for OpenShift; I think there's probably a way you can get around this without dealing with the operator problem, and this is a very specific conversation.
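The owner-reference check just described boils down to logic like the following sketch. It is not the actual plugin code (which would be a Go restore item action); it only shows the decision rule: skip any item that something else owns, on the assumption that the owner will recreate it.

```python
# Decision rule: restore an item only if nothing owns it. Owned items
# (pods created by an operator or controller) are skipped so the owner
# can recreate them after restore.

def should_restore(item: dict) -> bool:
    owner_refs = item.get("metadata", {}).get("ownerReferences", [])
    return len(owner_refs) == 0
```

As noted in the discussion, this rule interacts badly with restic-style pod volume backups, since the volume annotations live on the very pod being skipped.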
E
So
I'm
happy
to
take
this
elsewhere
if
we
need
to,
but
we
actually,
when
we
talk
about
backing
up
the
internal
images
in
openshift,
we
do
not
typically
typically
tell
people
to
back
up
the
registry
namespace
itself.
What
we
instead
do
is
we
have
a
plugin
for
the
image
stream
resource
and
every
time
you
backup
a
namespace
which
contains
image
streams.
E
If
that
image
stream
references
an
internal
image,
we
essentially
stand
up
a
a
transient,
docker
registry,
with
that's
backed
by
the
s3
storage.
So
whatever
the
bsl
storage
is
using,
we
create
a
registry
with
environment
variables
that
point
to
that
same
storage
provider,
and
then
we
essentially
just
like
docker
pull
docker
push
the
image
into
the
s3
bucket
in
its
own
folder.
Then
at
restore
time
we
can
pull
from
the
s3
bucket
and
restore
the
image
that
that
is
like
kind
of
a
pretty
involved
workaround.
D: Yeah, I'm aware of that data path of backing up the image streams by going to S3; I think we've been working together on that one. This is another experiment that I have, trying to see if I can find another way of backing it up without going to S3. For example, if a customer wants to exclusively use our system without going to S3, then we are able to supply them with a solution.
C: Yeah, and we're going to keep on hitting this, and there are also certain configurations that probably can't be restored. We had one person come to us: they had stood up an NFS server inside their cluster, then made that a PV provider, and then they had PVs that were in the NFS server. So they backed it all up just fine, but when it came down to restore, the NFS server didn't come up first, and we were trying to allocate PVs with storage classes that pointed to the NFS server, and they were just stuck.
D: So let me try to summarize what I'm hearing so far. It sounds to me like there are a few ways to get around this. Number one: if an application has a pre-hook and a post-hook, we can just say that in the pre-hook we pause the operator's operation before doing the restore, and after the restore completes we unpause the operator, so that it continues to work and will not interfere with our restore workflow.
D
You
know,
restore
workflow
right,
so
that's
kind
of
one
one,
one
one,
one
idea.
The
other
idea
that
I
I
also
heard
is
the
the
idea
of
you
know
putting
the
restore
in
a
specific
order.
For
example,
if
we
can
restore
the
operator
last,
the
other
thing
will
you
know
we
saw
a
person
in
the
operator
will
restore
last
night.
It
will
not
interfere
with
the
restoring
of
the
other.
A: Well, okay, let's move on to the next discussion topic, which is Bridget's.
B: Yeah, so Carlisia asked about this on our development channel on Slack, asking if anyone was working on the upgrade task to move the Velero CRDs to the v1 API. I took another look at the release schedule for 1.22, which is when the v1beta1 version of the CRD API, which we're currently using, will be removed from Kubernetes. That is going to take place at some point in August, which means I feel this is something we should be prioritizing and trying to get out quite soon.
B: But I know that in the issue we have tracking this, Nolan had mentioned that we might be able to leverage Carvel for it, and I don't know if that's going to be something we can make use of in this time frame. So I just want to make sure that we're aware of it and prioritizing it; I don't really know what the strategy should be for fixing this.
B: This is my first time working with something where you're having to manage potentially two versions of a Kubernetes API, so I just wanted to bring it up to see if anyone else had any input on this, or, I guess, to make sure that we're aware of it, so we don't get bitten by it in a month and a half, especially since we just had another issue come up on Kubernetes 1.21. I'm keen to not have another blocking issue at release.
C: Yeah, it sounds like we need to go off and think this through. I thought one of the issues with upgrading the CRDs might be breaking compatibility with older Kubernetes.
B: Yes, so if we upgrade, then we're essentially locked in: we can only run on Kubernetes 1.16 and later, I think, since that was when the v1 API was introduced. I don't know whether it's possible to have something where, at install time, you look at what API is available in the Kubernetes server you're installing into and then pick based on that, but I don't know what impact that would have further down the chain in the rest of the Velero code base. Like I said, it's not something I have huge experience with, but we have Frankie and Nolan here; he's done a lot of work looking at different API versions and restoring between them.
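The install-time idea floated here (inspect which API versions the target cluster serves, then emit CRDs against the newest supported one) could look roughly like this sketch. The function name is hypothetical; a real implementation would query the cluster's discovery endpoint rather than take a list:

```python
# Choose which apiextensions version to install CRDs against, preferring
# the v1 API (available on Kubernetes 1.16+) and falling back to v1beta1
# for older clusters. Input is assumed to come from API discovery.

def pick_crd_api_version(server_versions: list[str]) -> str:
    for candidate in ("apiextensions.k8s.io/v1", "apiextensions.k8s.io/v1beta1"):
        if candidate in server_versions:
            return candidate
    raise RuntimeError("cluster serves no known apiextensions version")
```

As noted in the discussion, the hard part isn't this check; it's carrying two CRD manifests (and two validation behaviors) through the rest of the code base.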
B: Yeah, so my understanding (I know Scott has mentioned it in the past) is that they currently have a workaround, because the current version of Velero is no longer compatible with Kubernetes 1.11 and earlier; they have a workaround to make it work with versions of Kubernetes earlier than 1.12, so 1.11 and earlier. So I think this is going to have an even bigger impact.
A: On v1, I think this ties back to discussions we've had before as well; you both mentioned it. Where do we stop? Is 1.16 the earliest version that we would support? We don't have any statements like that, and I think it would be good to put that into writing for sure; this could be a forcing function for that.
C: Yeah, maybe it's something we should put out on the community mailing list and just ask for input, because I think the solution of simply upgrading to the v1 APIs is, you know, a week's worth of work, not even that, but then there's coming up with a solution that supports both sets of APIs.
C: That is potentially a lot. It may even be something where we do this in 1.6.1, and we say there will be a 1.6.2, and if there is demand for supporting older versions, then we say: okay, don't use 1.6.2, there's very little in it that's going to affect you anyway, and in 1.7 we come up with a better solution. That's a possibility too.
C: Yeah, so maybe, Bridget, just send a note out to the mailing list, or Eleanor; maybe it's better for Eleanor to do it. Just send a note out to the mailing list saying: hey, we're looking at upgrading the CRDs, and we're looking at moving Velero's Kubernetes support forward to 1.16, and we're looking for feedback on whether this will impact people.
B: I was just about to volunteer some time there, just in case you want to pair on it. That's absolutely fine; sounds good, thank you.
B: Awesome, thanks for the input, everyone. It's just that this was asked in the Slack channel, and I realized it wasn't something that we were actively working on right now, and it's coming up pretty quickly, so I'd like to be ahead of it.
A: All right, then we're going to dive into contributor shout-outs. Does anyone want to drive this with me?
B: I think some of these might have been included last week, but we'll go through them. So this fix is, I believe, from Pankaj Patil. We had an issue on the website where the RSS link at the bottom, in the footer of the website, wasn't always going to the right location, so he fixed the prefix in our website config, and that link is now fixed and works.
B: So thank you very much. And then this one is from Daniel Jiang, who is one of our new contributors based on the Beijing team here at VMware. This was an issue with the website where the latest release information link on the front of the website was not updated to point to the latest release; that was something we missed during the 1.6 release cycle.
B: So thank you very much, Daniel, for fixing that. And then this one, yeah, we covered it last week, but thanks again; it's going out in the 1.6.1 release as well. This change from Scott fixes an issue with the ordering of the CRDs and CRs upon restore: the fix makes sure that we have the CRDs restored into the cluster first, so that we can perform custom resource validation whenever the CRs are being restored.
A: Awesome, thank you. And lastly, I want to give a big shout-out to Abby. Abby is heading out on maternity leave next week, so she will be out of the community meetings and out of the Velero project for a few months. I just want to give a big shout-out to Abby for her hard work on all the technical documentation and the website and everything for Velero here. So thank you, Abby.
A: Right, so with that, thank you everyone for joining. Have a fantastic rest of the week. Talk to you soon, bye.