From YouTube: Velero Community Meeting - June 27, 2023
D
You only completed the status portion first, like... sure. Oh.
B
I see, that was a discussion. Yes, yeah, you had that under the status in the meeting notes there, but yeah, if you move it to the discussion topics... I guess we got the shout-out as well in status.
C
Reviews: we had some PRs for the plugin side, and some of them were for the data mover backup and restore, and that's it.
E
This is part of the upcoming milestone, and I am currently also working on the JSON substitutions work. So if we could expedite the review for this PR, I could focus fully on the other work, because if the reviews come later for both PRs, it will be slightly harder for me to manage. So yeah, just a request again to the community to review this particular PR. Otherwise, in terms of status updates, the JSON substitution work is also ongoing.
B
Okay, great, and I'll make sure to have a look at that PR later today as well.
D
Yeah, this is more of a question, because we started some testing of this, especially in the context of KubeVirt. Most of the PVs will be presented as block-type PVs to the pod, so we tested a little bit with Velero. I don't think the file system backup is supported for this, right? That's how it looks, as of now at least.
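For context, KubeVirt VM disks are typically claimed with `volumeMode: Block`, so the pod sees a raw device rather than a mounted filesystem, which is why a file-walking backup doesn't apply. A minimal, hypothetical claim:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk            # hypothetical KubeVirt VM disk claim
spec:
  volumeMode: Block        # presented to the pod as a raw device, not a filesystem
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
```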
B
I've never tried it myself. I suspect what you're saying is true; I haven't actually tried it, but yeah, I imagine that on the Restic side that's probably not going to work, or Kopia, but...
B
You can correct me if I'm wrong on that. I think the focus now is basically anything that currently works with the CSI plugin; I believe that's kind of the limited definition of the scope. So if it's supported with the CSI plugin alone in 1.11, the expectation is we should support that with the data mover in 1.12. Oh, shoot. Was that correct?
B
So anything that's not CSI-based, that doesn't support CSI snapshots, is not going to be supported in the 1.12 data mover.
D
Right, right, got it. So I think, because the snapshot seems to work, I'm guessing that it would be possible to do the data mover backup in 1.12, probably.
B
Worth noting, and if it's something you want to test out: as of last week, it was announced that all of the code needed on the backup side of the data mover for 1.12 is merged on main. So you should be able to do a data mover backup on the main branch, but not a restore yet.
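For anyone wanting to try the backup side on main, opting a backup into the built-in data mover looks roughly like this; `snapshotMoveData` is the field name from the 1.12 design, so treat it as an assumption, and the namespace name is hypothetical:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-datamover-backup
  namespace: velero
spec:
  includedNamespaces:
    - my-app               # hypothetical application namespace
  snapshotMoveData: true   # move CSI snapshot data into the backup storage location
```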
D
This matters for the CloudCasa customers we're supporting, so I just wanted to test it. Yeah, we'll do more testing, sure.
D
Okay, sure. I mean, even if you do the data movement with Kopia, and we are doing a similar thing in CloudCasa, there is always a bit of a downside, in the sense that Kopia will have to read the entire device, at least initially, or every time, to find the blocks. So obviously it's not super efficient.
C
Curious if anyone on the call has attempted a backup with the built-in data mover.
B
I have not yet; that's something I'd hoped to get to sooner, but yeah, I haven't tried it yet. I'd like to try it myself too, but I haven't been able to.
B
Yeah, that was the status update last week: it was announced that everything needed for the backup side has been merged to main, and what's remaining is the restore-side work.
E
To test it out right now... I mean, if I want to test it out, how should I, honestly?
B
You ask a good question. I don't actually know; I haven't really been following the helm chart side regularly, because, you know, I don't release it myself. So I don't know if the helm chart is up to date with what's on main right now or not, you know.
F
Yeah, and I think we can also start doing this on our end, the CloudCasa side, as well, to start testing these pieces.
B
Yeah, that'd be great. The more people that have run through this as we're going through this process, the more bugs we'll find before release, which will obviously improve the quality of 1.12 when it comes out.
F
Yeah, and we've done a similar implementation on top of Velero as well, so we probably have a list of bugs that we hit. We'll see if we can reproduce those, based on past experience.
B
Were there any other topics that anyone wants to discuss?
D
A quick addition to what Satya said: I haven't gone through the PR in detail, about how the data movement is being done, but my guess is the snapshot is obviously going to be restored, and then you do the copy from the snapshot. Is that how it works?
B
Yeah, basically, the CSI plugin creates the CSI snapshot, the VolumeSnapshot and VolumeSnapshotContent. Then we create a backup pod in the Velero namespace that mounts a new PVC that uses that CSI volume snapshot, actually a clone; the volume snapshot is the data source. So we create a new PVC based on that snapshot, and then we use Kopia to copy that into the backup storage location. That's the way it works.
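The flow described here maps onto a standard CSI construct: a PVC whose `dataSource` points at the VolumeSnapshot. A rough illustration (the object names are hypothetical, not the actual objects Velero creates):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datamover-clone        # hypothetical temporary clone PVC
  namespace: velero
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSource:                  # hydrate the new PVC from the CSI snapshot
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: app-data-snapshot    # snapshot taken by the CSI plugin
```

The backup pod then mounts this clone, and Kopia reads from it and writes to the backup storage location.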
D
Yeah, that is how we work as well, and you typically run into some issues when you try to create that PVC from the snapshot. For example, take the Longhorn case: when you create the PVC from a Longhorn snapshot, in some cases, or probably in all cases actually, it copies the data as it restores. They don't have, or at least as of now they haven't implemented, a very efficient creation of a volume from a snapshot.
D
So you end up with Longhorn, behind the scenes, creating the volume by copying the data, and that will take some time. So you might have to introduce some timeouts that are configurable for users, because the PVC may not be available in time for you. Is there any timeout in the current code? How long do you wait, and is it indefinite?
C
So let's say you're using CephFS or RBD, and you use the shallow copy option and things like that, which are proprietary things; that doesn't take much time, it only switches pointers or something. So in that case, yeah, I agree that it should be a configurable value, and we are trying to use the resource timeout in some places. And in the Backup V2 API we introduced two timeouts: one was the sync frequency timeout, and what was the other one, Scott? Do you remember?
B
You know, that's actually, I guess, for any of the item operations. But when you create the PVC, then there's the part where we copy that PVC data, the cloned PVC, into S3, or into the backup storage location. So that's another operation that can take some time, and we have this configurable timeout, because for a huge volume that might take a long time, obviously, especially uploading off-cluster.
B
As with Restic and Kopia, we now have a file system upload timeout. By default it's set, I think, to one hour or four hours; I think it's four hours. If you have very large volumes, you may need to increase that. But the intent there is that you don't want a situation where an operation is truly stalled, is never going to finish, and Velero doesn't have any timeout, so it just says it's in progress forever.
D
I agree, but even with the current four-hour timeout, I always felt that the right solution is not to have the timeout on the complete operation; the timeout should always be: if there is no progress for X amount of time, abort the job or cancel the job. As long as there is progress, the...
B
The problem, especially with the plugin operations portion, is that, depending on the plugin, depending on what your API is for these operations, you may not necessarily have any reliable information about things progressing, because with VolSync we didn't have that.
B
Because this is a pluggable system. So, for example, in OADP 1.2 we have our own data mover that's based on the BackupItemAction V2, where we're using VolSync behind the scenes, and in that case we don't really have a lot of insight, while an upload is in progress, into, okay, it's 30% done, it's 40% done, this many bytes have been copied. So it's hard to tell between update cycles whether we truly made progress or not.
B
Well, the plugin has a progress reporting mechanism, but that also depends on, you know...
B
When you write a plugin, you write a Progress method, and that plugin knows how to, you know... but, for example, if the plugin's way of determining progress is to look up some CR that's monitoring this progress and being controlled by some other controller, then that controller may not update progress in a timely manner, so the Velero plugin may not have minute-by-minute insight that, oh, we're still making progress.
B
What I'm saying is, because it's a pluggable system, ideally you'd want to have progress reporting in a timely manner at every stage of the workflow. The reality is, depending on what external tools you're using, what external packages you're plugging in here, you may be working with something that doesn't give that kind of timely update. So a timeout based on "oh, we haven't seen any updates in, you know, 20 minutes, we're going to call this stalled"... but it may just be...
F
It could be, Scott, just to make it an idle progress timeout, right? We have a couple of other backup products that do that, so it will only time out if it's idle. So if there are other pieces, plugins that don't offer, you know, progress information, those are the only ones that it will time out on. It won't punish the good plugins.
B
If the longest part of your upload, you know, whether your longest-running plugin is one of these... like in the OADP case with VolSync, where all of our waiting time is waiting on VolSync uploads for a backup: you might have one volume that's much larger than the rest, which means the bulk of that, say, one hour that you're waiting is going to be for that one plugin on that one file system, and you might not get any updates for an hour.
B
So if you say we're going to time out for being idle, but the underlying controller that's uploading is still actually uploading, it's just not telling you what it's doing, then you might prematurely hit that timeout. So that's where, yeah...
F
That's when... but we do time out at four hours anyway today, right? So the thought is really to say: let the timeout logic kick in only if there is no progress, because that's the backstop.
B
That's something to consider. I mean, I would say at this point it's possibly more relevant, or equally relevant, to the Restic and Kopia backup processing versus the data mover, but right now it's just based on time since the beginning of the backup, and it...
B
Makes sense. I mean, this may be something for a design proposal: to say, hey, it might make more sense to treat it as a timeout where, you know, instead of a maximum time for the whole backup, you're thinking in terms of a maximum time without progress.
B
That's right, yeah, that does make sense. I don't know if that would be something that might make sense as a configurable option, where you could use one or the other. You know, it still might make sense to have this kind of overall "hey, if this backup has taken more than 24 hours..."
B
"...you know, stop, report an error, we need to fix something." But maybe if you have both an idle timeout and a total timeout, you could have one of them be optional; I don't know. But I mean, I think this is something that might make sense as a design, kind of RFE-type, proposal, just to say, you know... because again, there might be implications both on the plugin side and on the Velero Restic/Kopia side, as a kind of change in the way we would approach that.
F
When we did a similar implementation... I vaguely remember, and again this is off the top of my head, not looking through issues and stuff: I think most of our changes, or deviations from the original implementation, centered around supporting EBS, supporting Longhorn, and supporting block PVs. So we'll look at those volume types to see if we can find anything that may or may not work, and we'll share it back with you. Okay.
D
Yeah, I mean, in the context of... yeah, okay, there's one more thing that I remember, about the creation of the PV from a snapshot as a source. We got a request from a customer in this regard. These are temporary volumes, right? We are going to remove them as soon as the copy is done, the backup is done. So Longhorn, by default, if you create this PVC, gives you like two replicas, three replicas, whatever, which really doesn't make sense.
D
So we implemented storage class mapping for this phase, where users can provide... you know, like, Velero supports storage class mapping in restore, right? Right, so I don't know if this path, this new data movement path, supports storage class mapping.
B
I don't believe that for the temporary PVs we currently support storage class mapping; I think we only use that on the restore side still.
D
Right, and that is what we thought too, so we didn't do anything. But we got a request saying there's no point in, you know, Longhorn using two replicas for this temporary PV, so can you do the mapping? So we implemented that, and the customer is creating another storage class for Longhorn, where they explicitly set the number of replicas to one, and we use that for the temporary volume. So some of these things, yeah, we will probably submit all these.
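The customer workaround described, a dedicated single-replica Longhorn class for the temporary clones, would look roughly like this (illustrative; `numberOfReplicas` is a Longhorn StorageClass parameter, and the class name is hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"   # temporary backup clones don't need replica redundancy
```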
B
Basically, as an optional thing: if you set this, like you said, it's a way of improving performance for these temporary PVs, which you don't expect to be around, you know, any longer than the backup. And so there might be different requirements there, which I've...
B
I think that's another case where submitting an RFE design, a design kind of proposal, would make some sense, because I would agree. I think at this point in the timeline for 1.12, it's probably not something we would get into 1.12.0, but this is something that would make sense as an enhancement, you know, kind of going forward.
D
Understood. And one final question, going back to block devices: this is in the context of OADP. I know that you have OpenShift Virtualization, right? So do you test OADP with OpenShift Virtualization actively, or how does that work?
C
No, it depends on the storage class. So currently we have CSI: if the CSI driver supports it, then we support it. That's the tagline.
B
Okay, were there any other topics, or any follow-ons from previous topics, we wanted to cover now?
G
I have a request there, and I was hoping to get it reviewed and get some attention on that. Is this the right forum to do that, or no? Yeah, and if...
B
If you want, I think the best way to get visibility there is to go to the HackMD for this meeting and add it to the status. That way we can, you know, go back to look at that; we can link straight to it. Or, you know, in the meantime, if you just want to put it in the chat now, I can open it up so I can look at it later.
G
Yes, I can do that and just put it in the chat. In fact, I think somebody already reviewed it once and gave a few comments, and I have followed up.
G
Yes, it's the same, and I'll just put it in the chat as well. That'll...
D
Scott, by the way, I see Tiger's entry also here for status. I don't know, did we already talk about...
B
Oh, oh, sorry, that was added. Yeah, Tiger, go ahead and introduce it; that'll be great.
A
Hello, were you calling me?
B
Yeah, you... oh, I'm sorry. Yeah, you had a status update; I think it was added after we got onto the discussion topic. So if you want to just, you know, make your request...
A
Yeah, I'm just requesting a review of a PR in the GCP plugin.
A
It's for clusters that are not GKE clusters, such as OpenShift on GCP. So yeah, just requesting a review. Okay, yeah, if you guys have any questions, you can leave a comment. Yeah, that's it for me.
B
Okay, sounds good. Any other status updates or discussion topics to address today?
B
Okay, well, thanks everybody again for joining. Let's end the meeting now, and, as always, ping us in Slack if anything else comes up, you know, PR reviews or anything else.