From YouTube: Velero Community Meeting - July 25, 2023
A
It will be super cool if you add yourself, so we can keep track of who's rejoining and who is interested in our project. With that, I'm not sure if it makes sense for me to continue, because there are many topics from other folks, so I'm going to give the mic to Scott.

B
Yeah, sure. On the pure upstream side, I've mostly just been involved in the ongoing code reviews. In my environments, I'm trying to test out Kopia and also the data mover based on Kopia in an OADP environment, to see what works and what doesn't there. I've actually been involved in a Slack discussion about it: I'm running into an issue where I'm getting an error from the Kopia backup repository init saying that the repository is not empty, even though I haven't used Kopia before.

B
It may be a configuration issue. I started the conversation on the Velero dev Slack channel, and I'm now talking internally at Red Hat to someone who's already done this successfully using just the builds from the upstream release, not in the OADP context. So I'm working through that, but basically we're trying to make sure that all this stuff works just as well in the OADP environment as it does in a pure Velero environment.

C
Scott, we did test the block (sorry, not the block, the regular filesystem PVC) using the latest data mover, and it worked.

B
Yeah, yeah. It must be something, I'm guessing, in either the configuration or maybe something weird in my bucket that isn't coming up very well in the errors, even though this bucket works just fine for Restic, for example. It may also be a configuration issue, since instead of using the standard Velero installer, we're using the OADP operator, which is based on the installer.

B
So there may be some configuration issues there as well; I'm just trying to work through those now. But I'm running into the same problem for both the data mover and plain Kopia file system backup, because it's happening on the validation of the backup repository itself. Basically, when it comes to repo init, I'm getting an error from the library code saying that there's already data in the bucket. So I'm not really sure exactly what it is yet; again, maybe misconfiguration, or maybe something weird with my bucket permissions.

B
I just know that this bucket's been fine for Restic backups and everything else. So I've got to look at the bucket side and also maybe the configuration side; still working through that.

C
Sorry, are you sharing this bucket between multiple different backups?

B
Yeah, I mean, this is a bucket I've been using, so I'd have existing backups from Restic, for example. But I'm assuming that should still work, because, for example, if you've got your bucket and you're using Restic, you make a backup, then reconfigure Velero to use Kopia and restart, that should work. I know the code, for example, decides whether to use Restic versus Kopia for a restore based on what's in the backup, so it should be able to use the bucket for both.

D
Just to add to what Scott is saying: Scott, I tried it out and I was also getting the same error, the init error. It didn't work for me as well.

B
If he can reproduce it, I'm going to look at his setup, because it's not even getting to taking the backup. What's happening is that when you create the backup, you create the backup repository, and then the backup repository init and validation is what's failing. I even went in and deleted the velero/kopia folder in the backup repository in S3, then deleted my BackupRepository and tried again. So it seems like it's doing something on initialization, maybe creating something, which it then complains is not empty. I'm not sure; maybe the non-empty test is somehow looking elsewhere in the bucket. So in your bucket, where you hit the same issue, was it a...

B
I know; that's why, when he gets his environment up again, I was going to look at the metadata: what are the server args and what is in the BackupRepository. Because that was the thing that was weird to me, which I didn't understand and haven't looked into enough yet: the actual backup repository for Kopia still has a Restic identifier URL in it, which uses the Restic path. So I don't know if that's what's messing it up, and if so, how that's getting in there.

B
Okay, especially in your backup, because you had an empty bucket, nothing in there, and started with the Kopia uploader type. Unless there's some server arg we're missing.

B
Yeah, yeah, I'm assuming there's something off in my environment or the way we're setting it up, because it just isn't working at all from the beginning. Velero has been using Kopia since 1.10, but on the OADP side we were still focusing on Restic, so we haven't officially been supporting it, though we are testing it, and we're just now starting to do that. So there may be some issues in the OADP environment, where configuration-wise there's something we're missing that a pure Velero install has.

B
It's by now two releases in.

B
Well, yeah, a count would work too. But basically, right now what it does is use the Kopia list-blobs call and pass in a callback that always throws the same error. So basically, if the blob list is empty it returns nil, and if it's not empty, it returns this predetermined error.

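A minimal Go sketch of the init-time emptiness check Scott describes here, with illustrative names: blobStorage, ensureEmpty, and memStore are assumptions standing in for Kopia's actual storage interface, not its real API.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"strings"
)

// blobMetadata and blobStorage are a hypothetical, minimal view of a blob
// store; Kopia's real interface has more methods and different names.
type blobMetadata struct{ ID string }

type blobStorage interface {
	// ListBlobs calls cb for every blob whose ID starts with prefix and
	// stops early if cb returns a non-nil error.
	ListBlobs(ctx context.Context, prefix string, cb func(blobMetadata) error) error
}

// errNotEmpty is the predetermined sentinel that the callback throws as
// soon as a single blob is seen, which is the pattern described above.
var errNotEmpty = errors.New("repository is not empty")

// ensureEmpty returns nil when the location holds no blobs, and errNotEmpty
// as soon as ListBlobs yields one. Note that if the store ignores the
// configured prefix, pre-existing Restic data elsewhere in the bucket
// would trip this check, which matches the failure being debugged.
func ensureEmpty(ctx context.Context, st blobStorage, prefix string) error {
	err := st.ListBlobs(ctx, prefix, func(blobMetadata) error {
		return errNotEmpty
	})
	if errors.Is(err, errNotEmpty) {
		return fmt.Errorf("cannot initialize: %w", errNotEmpty)
	}
	return err
}

// memStore is a tiny in-memory stand-in used only to exercise the check.
type memStore struct{ ids []string }

func (m memStore) ListBlobs(ctx context.Context, prefix string, cb func(blobMetadata) error) error {
	for _, id := range m.ids {
		if strings.HasPrefix(id, prefix) {
			if err := cb(blobMetadata{ID: id}); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	ctx := context.Background()
	fmt.Println(ensureEmpty(ctx, memStore{}, ""))                             // <nil>
	fmt.Println(ensureEmpty(ctx, memStore{ids: []string{"restic/cfg"}}, "")) // not empty
}
```
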
D
Yeah, and in the logs we see the make-directory error, right, for the /udmrepo path.

B
I want to look at the actual spec, all the fields, to see whether this Restic stuff we're seeing in there is unexpected. Also, I guess there's a question: for OADP, are we setting a prefix in the bucket that's not the default?

B
Got it. So maybe the repo code is not looking in the prefix or something; I don't know, that's a possibility at least. It may not be the case, but that is one area where I know what we're doing is different from the default.

B
We have some automated testing already on the OADP side, and I know we had some conversations about trying to get some of that in upstream, but I'm not sure how far along that is right now.

B
Right, and these are the kinds of things where, especially with options that not everybody uses, that are relatively uncommon, a new feature is brought in and it works with all the defaults, and we've tested certain commonly used options, but this prefix, for example, may not be very commonly used.

B
It was something we added a while back because of some of our migration use cases early on with MTC: we were using the bucket not just for Velero but for storing image information, the image streams from OpenShift. So we were hitting errors because there were these directories Velero wasn't expecting.

B
So we added the prefix so that everything's under the prefix, and it may just be that some of the Kopia code is not looking at that field or honoring it. So that is something to look into.

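A small Go sketch of the behavior being questioned here: building repository object keys that honor an optional bucket prefix. The objectKey helper is illustrative only, not Velero's actual implementation.

```go
package main

import (
	"fmt"
	"path"
)

// objectKey builds the full key for a repository object, honoring an
// optional bucket prefix the way a BackupStorageLocation's prefix is
// meant to be honored. path.Join drops empty segments, so an unset
// prefix degrades cleanly to "<repoDir>/<name>".
func objectKey(prefix, repoDir, name string) string {
	return path.Join(prefix, repoDir, name)
}

func main() {
	// With no prefix, keys land at the bucket root, the default layout.
	fmt.Println(objectKey("", "kopia/myrepo", "kopia.repository"))
	// With a prefix (as OADP sets for migration use cases), everything
	// must live under it; code that ignores the prefix would look at the
	// bucket root instead and could see unrelated, pre-existing data.
	fmt.Println(objectKey("velero", "kopia/myrepo", "kopia.repository"))
}
```
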
B
Exactly, so that's good. We just need to make sure that it works, and if that's the problem, it'll be a pretty easy fix.

A
Thank you. Next one on the list, we have Ashu.

E
I hope you folks can hear me. So, basically, in terms of status updates: I worked with Wenkai, and we tried out the Azure plugin in the Azure national clouds. There are certain configuration changes, like we need to set the cloud name for these scenarios, but we tried it out and it's working as expected. It was just a sanity test, and I think the VMware team was not able to get hands-on access to these environments, so we were able to validate at least Azure China Cloud for now.

E
Second, on the status update side: the resource modifier, or JSON substitutions, PR is checked in, and there is a follow-on website docs PR, so I'm just requesting the maintainers to review it. Third is the e2e test for resource modifiers; it's in the works, and I will publish it by next week. I hope that should not be a blocker for the 1.12 release; I hope it's okay for these e2e tests to be late-binding. The last two are mainly discussion items.

E
I will probably park them since I think we have more agenda; we can discuss them towards the end. The first is around a topic we discussed way back: how can we really make CSI snapshotting parallel in Velero's core code base when we invoke the plugin for snapshotting? We had certain limitations in terms of the current Velero flow, and we had 1.12 commitments for data movers, so we never really picked it up or thought through what the actual solution for this would be.

E
So I wanted to bring this topic up. And second is around Tiger's PR for the existing resource policy of "recreate"; I think we have been discussing a lot of things in the comments. If we have time in this meeting, we can maybe discuss whether folks have other thoughts, or discuss it more. So that is the top level for these two; we can pick them up towards the end when other folks are done with their agenda.

E
We actually discussed a lot of limitations that we have in the current code base, and basically we don't really have the actual solution for how to fix it right now. This will require input from the maintainers on what the right approach is. I'm thinking maybe we can discuss this towards the end, if that's okay; I think we'll go deeper then.

A
Thank you, Ashu. Next thing is, I've finally managed to take the time to create the new reminders; I hope that fits. I've created two of them, one 15 minutes before the meeting and one exactly at meeting time, just FYI, in case you miss the first. The next one is Shivam.

A
Thank you. And on the discussion topics: I'm going on vacation in one week, which means I have to skip a few community meetings. I can set up the community meetings to be recorded automatically and then just upload them to YouTube if needed, or someone can step in for me on these dates, 8th of August, 22nd of August, and 5th of September, to lead the meetings and record them.

F
I don't have a dog in this fight. I don't think I can do the eighth, but I could do the 22nd.

A
The thing is, you have to use the host key, which I have to share with you beforehand. But it's perfectly fine for anyone to use that key, join, just hit record, and handle the session.

B
Recently it's worked fine, but at one point, I remember when I was trying to host it, sometimes the code would work and sometimes it wouldn't. So if a couple of us are available for that, then if it doesn't work for one of us, the other can try. I don't know what was going on with the system; maybe for external people it's somehow a little bit less reliable, I don't know.

B
Yeah, the first couple of times, like last year when we went through this, this was a problem: basically every week one of us couldn't do it and one of us could. The most recent time I hosted, a month or two ago, it was fine. So I don't know if this is one of those flakes where it sometimes happens, or if the problem's been fixed.

A
Okay, thanks for that. Next thing: for KubeCon Shanghai, the notification is in six days. So, if you want, please drop a line if your talk is accepted, so we can write up a blog post about it and highlight the presentation. The same goes for KubeCon Chicago, but for that we have until the 28th of August, I think, for the notifications. And about blog posts: do we need to drop some blogs or something more about the 1.11.1 release?

B
It fixed some bugs, some of them edge cases. I think one of the more significant ones was that non-running pods will no longer cause backup errors; I know that's one, and on the OADP side we've actually had customers hit that before. It's a bug fix, not a significant feature or anything. And there was another one that I've seen before: there was an issue with progress updates.

C
For 1.12, are we still talking about August 30th? I remember in the dev channel there was some discussion; somebody posted that the release candidate itself might be delayed or something, right?

A
I'm not hearing that well. I'll search for it, give me a sec.

C
Okay, yeah, a few things. One is that PR for the OR label selector CLI that we submitted; I think Scott approved it, and quite a number of people have been asking for this, even on the forums. There is currently not an easy way to pass the label selectors, so I hope there will be a second approval soon so that it can be merged. Just wanted to bring that up.

C
The second: this request came from somebody a couple of days back, and I just want to run it by the team; I think it's quite reasonable. In file system backups, you really don't need the emptyDir volume backups, right? These are supposed to be temporary, ephemeral, and all that, and currently there is no way to skip them. So what I'm thinking is: is it easy enough to add this kind of exclusion to the resource policies?

C
The resource policies currently support various criteria, such as capacity or even the driver type and things like that; for example, NFS and CSI sections can be added to the resource policies. I was just wondering if emptyDir could be added to the resource policies, and based on that, whether we can skip those volumes.

B
Yeah, I don't know if it supports that currently. I would agree that that is the logical place for it, so I think it's worth trying with what's there now, and if it works, that's great. If it doesn't, that would still be the obvious place to put support for it.

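A hypothetical Go sketch of where such a check could slot into volume-policy matching; the SkipEmptyDir condition is an assumption for illustration, not an existing Velero resource-policy field.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// volumeConditions sketches how an emptyDir condition could sit alongside
// the existing volume-policy checks (capacity, storage class, nfs, csi).
// SkipEmptyDir does not exist in Velero today; this is only the shape
// such a check might take.
type volumeConditions struct {
	SkipEmptyDir bool
}

// matches reports whether the policy says to skip this volume.
func (c volumeConditions) matches(vol corev1.Volume) bool {
	return c.SkipEmptyDir && vol.EmptyDir != nil
}

func main() {
	policy := volumeConditions{SkipEmptyDir: true}
	scratch := corev1.Volume{
		Name:         "scratch",
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	}
	fmt.Println("skip scratch volume:", policy.matches(scratch)) // true
}
```
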
B
Got it, yes. So in that case, it's probably... and again, I think that was just the limited scope of the initial version of it.

C
The last one is the block PVC. We tried that, and what happens is that in the backup, I think it tries to bring it back from the snapshot as a filesystem PVC; that's where the problem currently is, and obviously that fails, so we're looking into it. The non-block PVC, as I said already, with an empty, brand-new bucket and no prefix, is working, and for the block PVC, that's the error we identified.

C
My guess, based on this error, is that if you just fix the type, wherever the type is somehow coming through as filesystem, it would probably proceed. But you wouldn't know unless you make that change and give it a try, right? Yeah.

B
And I guess the related thing to figure out as part of this process would be: if you use Kopia file system backup, the Kopia backup without the data mover, does that also work with block PVCs? But I'm assuming that whatever needs to be fixed is going to be pretty similar for both, because it's using the same infrastructure.

C
In the case of file system backups, we already identified that it doesn't work, because you're looking at the one directory on the host, right, that directory.

F
Okay, maintainer's question to you: if we're that close to possibly having block volumes work with the data mover, is that something we'd want to focus on and get into the 1.12 release, and maybe even delay it a bit for?

B
That would probably be a larger conversation. I don't have a firm opinion there as to what makes more sense.

F
Raghu, I don't see an issue or PR on the block volume work; can we cast some light on this?

B
I'd say the sooner we can get a PR up, the more likely it is we can get it into 1.12. I don't know that it makes sense to delay a release over it, so if we can get it in early enough that we don't have to delay the release, that would be ideal.

B
If it's a question of "okay, we now have a fix, but it jeopardizes the release," that's when we can have the conversation.

C
We need to be able to assign the issues, because we are working on a few of them. If I could assign the GitHub issues to CloudCasa developers... I don't think I can do that now.

C
Even approvals: I don't know if I can approve PRs. Recently I approved a few documentation PRs, but "approved" in the sense that I just said "looks good to me"; I don't know if I can officially approve them. So it would be easier if I got those permissions.

A
I'm sorry, I was... got it. Can we create like a different group? Do we want to introduce a contributors kind of layer of permissions, or what?

A
I think we can do that with proper grouping, like a new team, and add some folks to it. Do we have an idea who wants to be, who will be, in that group?

A
Okay, but first we have to see how that reflects the governance doc that we have in place.

E
Yeah, I can jump into those if there's nothing else. Okay, so for the first one, I think I'll just speak directly; I highlighted the points and will give a quick recap as well. The first issue I was talking about is kind of resurfacing as we go towards 1.12. It is about snapshotting using the CSI plugin: even with the data mover and migrating the CSI plugin to BIA v2, the backup operation is still very slow.

E
It takes a long time because, in the core backup flow, we invoke the CSI snapshot and wait for the snapshot handle to appear, and in the async flow of BIA v2 for the CSI plugin, we wait for the ready-to-use condition. But in a generic scenario, getting the snapshot handle can take time in certain situations.

E
For example, if the CSI driver is bottlenecked, say it has too many snapshots to deal with, or there are other performance issues going on in the cluster. We have personally seen this with a lot of our customers who have large clusters running. So what ends up happening is, say a backup has 50 PVs and the CSI driver is not in very good shape.

E
In the worst case, it could take 10 to 15 minutes for the backup to complete, which is not really acceptable in terms of performance, or good in terms of consistency of the overall backup. So we discussed various approaches: what if we take all the snapshots at once, invoke them at once, and track them in the async workflow?

E
We discussed various approaches before 1.11. Just to highlight a few of the issues for folks on the call: the PVC snapshot, as of today, is tied to the hooks. You run the pre-hook, you take the PVC snapshot, and then you run the post-hook. So, in all logical sense, the snapshots have to complete before you invoke the post-hooks. But consider that flow: pre-hook, then snapshots of, let's say, five PVCs for a given deployment, and then the post-hook.

E
We could at least parallelize those calls, those five snapshots: if we invoked them through the CSI plugin at once and polled them in parallel, I think we would get much better performance than we have today. So yeah, that is a conversation I wanted to start, but I think there might be concerns around the CSI plugin and how it can handle parallel calls.

E
I am also not very sure: if we call the CSI plugin's Execute phase 10 times instantly, within one second, will it work properly? Because all those calls happen over gRPC and it's running separately in a container. That is one side of things, and secondly, I just want to get more thoughts from folks.

C
...at the pod level, because the hooks happen at the pod level, right? So you can snapshot multiple PVCs, and I think we should have a couple of configuration parameters there. If some CSI drivers are not able to handle too much parallelism, we can always introduce some number: how many parallel PVC snapshots you can handle. The default today is one, right? That is what is happening now. The other extreme is all the PVCs for the given pod.

C
Whatever that number is, we can issue that many in parallel, and users can always set it to somewhere in the middle, depending on how their CSI driver works. But given those configuration parameters, I think it's a good idea to issue the calls for all the PVCs for a given pod, because we have seen backups get stuck, and not just for 10 to 15 minutes. If the backup configuration is wrong, for example, and you have like 20 PVCs where each takes 10 minutes, the backup doesn't fail completely until those 20 things fail one by one, timing out 10 minutes each. That's a very bad user experience, yeah.

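A minimal Go sketch of the bounded-parallel idea being discussed, using golang.org/x/sync/errgroup. The names snapshotAll and the simulated createSnapshot are illustrative assumptions, not Velero or CSI plugin code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

// createSnapshot stands in for the per-PVC work: create the
// VolumeSnapshot, then poll until the snapshot handle appears.
func createSnapshot(ctx context.Context, pvc string) error {
	time.Sleep(100 * time.Millisecond) // pretend to wait on the driver
	fmt.Println("snapshot handle ready for", pvc)
	return nil
}

// snapshotAll issues snapshots for all of a pod's PVCs with a configurable
// parallelism limit: limit=1 reproduces today's one-at-a-time behavior,
// limit=len(pvcs) is the other extreme mentioned above.
func snapshotAll(ctx context.Context, pvcs []string, limit int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(limit) // at most `limit` snapshot calls in flight
	for _, pvc := range pvcs {
		pvc := pvc // capture loop variable (pre-Go 1.22 semantics)
		g.Go(func() error { return createSnapshot(ctx, pvc) })
	}
	// All snapshots must complete before the post-hook runs, so we block
	// here, matching the pre-hook, snapshots, post-hook ordering.
	return g.Wait()
}

func main() {
	pvcs := []string{"pvc-1", "pvc-2", "pvc-3", "pvc-4", "pvc-5"}
	if err := snapshotAll(context.Background(), pvcs, 3); err != nil {
		fmt.Println("backup failed:", err)
	}
}
```
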
B
It's not just pods; it's also because of the additional-items functionality. You're doing the PVC, and then we return the volume snapshot and we return the PV, so you kind of go down the tree for each one.

B
I guess you could, instead of creating the snapshot, have your plugin action add something to a list somewhere and then have some later action that pulls that list and does something. But the idea of actually saying "okay, plugins are going to take a list": the problem is not just plugins taking one item versus a list, it's the order, the way plugins are called.

B
It's not that we say "okay, I want to run this plugin on this thing." You don't start with the plugin and then call it individually on the individual items that plugin matches. Rather, you back up an item, and as part of backing up that item, you call all the plugins that match that item.

B
Then, if you have additional items returned, you back up those additional items, and for each one you call the plugins on the items, so you have that chain that goes down. So that whole framework doesn't really work at all with the idea of actions that apply to a bunch of things at once.

B
Yeah, in that case, what you would probably do... even in core Velero it would still be a lot of work. I guess the first change would be to just make it the last thing you do before you save, because we already have special casing.

B
So that's a lot of infrastructure you'd have to basically recreate from scratch if you were to do this, because right now it all just uses the plugin framework, and the plugin framework provides an easy way of saying "this item needs that item, and that item needs this other item," and they all get pulled into the backup for you.

D
No, no, not BIA v2. I'm talking about the controller's finalize phase, where we do a second patch to S3, right? But...

B
Yeah, because again, right now it just goes down the chain: I'm backing up a PVC, that means I need to pull in the PV, that means I also need to create a VolumeSnapshot, and then there's a VolumeSnapshot plugin which pulls in the VolumeSnapshotContent. If we were going to pull that out of the plugin framework, then that special casing for PVCs would need to do all of that manually.

B
This thing means that thing, and that thing means the other thing. We'd have to build similar logic, but specialized to PVCs, to say "okay, I need to create the volume snapshot and then back that up, but then I need to call plugins on that thing," because everything that is backed up has plugins that could run on it. So all of that would have to be duplicated in the core Velero code to get all those things done.

B
If we were to pull CSI into that, it would just add more code and possibly make it more difficult to follow. That could be worked out by just doing it right, but we would have to be clear that this is providing a clear advantage that we can't get otherwise, and then we can find a way to make it work. But there are a lot of edge cases.

C
Yeah, but I think even if you want to use the plugin, I'm wondering: is there any way that, instead of waiting synchronously for the snapshot handle, the asynchronous progress-reporting mechanism can be used to report back?

C
What I'm saying is, say there are 10 PVCs. Today, what happens is that the snapshot happens for the first PVC, then we wait synchronously for the snapshot handle to be ready, then it comes back and the second PVC is issued. What I was saying is that for these 10 PVCs, we don't have to wait for the snapshot handle synchronously.

B
The asynchronous mechanism means the backup controller is not waiting: when you use the asynchronous mechanism, we don't come back until the finalize phase. If a plugin returns synchronously with the asynchronous operation happening in the background, we go and finish the entire backup and then come back and check on it.

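A simplified, illustrative Go rendering of that flow; asyncItemAction and finalize here are invented stand-ins for the idea, not Velero's actual BIA v2 plugin interface or controller code.

```go
package main

import "fmt"

// operationProgress mimics the idea of async-operation status.
type operationProgress struct {
	Completed bool
	Err       string
}

// asyncItemAction sketches the shape described above: Execute kicks off
// long-running work and returns an operation ID without blocking;
// Progress is polled only later, in the finalize phase.
type asyncItemAction interface {
	Execute(itemName string) (operationID string, err error)
	Progress(operationID string) (operationProgress, error)
}

// finalize sketches the controller side: nothing blocks while items are
// collected; the waiting happens here, after the whole backup is walked.
func finalize(a asyncItemAction, opIDs []string) {
	for _, id := range opIDs {
		for {
			p, err := a.Progress(id)
			if err != nil || p.Completed {
				fmt.Println("operation", id, "done, err:", err)
				break
			}
		}
	}
}

// fakeAction completes after a few polls, just to exercise the loop.
type fakeAction struct{ polls int }

func (f *fakeAction) Execute(item string) (string, error) { return "op-" + item, nil }
func (f *fakeAction) Progress(id string) (operationProgress, error) {
	f.polls++
	return operationProgress{Completed: f.polls >= 3}, nil
}

func main() {
	a := &fakeAction{}
	id, _ := a.Execute("pvc-1") // returns immediately
	finalize(a, []string{id})   // the wait happens here, at the end
}
```
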
C
So one option could be to somehow tweak, or maybe think about, some kind of asynchronous mechanism, instead of this synchronous wait, but still have control in the main controller: not waiting until the end, but somewhere in the middle. Anyway, I know this is preliminary; I'll have to read up a little more on the current asynchronous mechanism, and then I can come back with a more concrete position.

B
Yeah, and I guess the question is how you communicate it. The order here, I'm trying to remember: we back up a pod, and then that pod returns the list of PVCs as additional items, is that right?

B
So we have this list of 10 PVCs, and we're returning them as additional items. Now, for each item, each PVC, we just call backup-item on that PVC. When we call the plugin on that PVC, it will have a reference to the PVC.

B
It will also have a reference to the backup, but it doesn't have the list. So we'd have to find a way to communicate this list of 10 PVCs and their status.

E
Okay, let me just ask this one question. I understand that how we fit it into the code is the question we need to look at, but let's say, for example, we can invoke 10 snapshots at once: we take 10 PVCs and call the CSI plugin for all of them at once. Do you see any scale limitations or issues arising with 10 concurrent calls to the plugin, 10 Execute calls to the CSI plugin at once? Do you see a concern there?

B
Because 10 calls to the existing plugin would mean that whatever additional items are returned from the plugin would then have to be processed by the backup controller. So again, that wouldn't be concurrent, because the backup controller itself is going through all this stuff in a single thread. It would almost need to be one call to the plugin that then somehow worked on 10 at once. I was just thinking...

B
So it may be the case where, if you have a list of 10 PVCs... yeah, I guess that's the hard part, if you want 10 calls at once.

B
It's almost like... I was going to say, the plugin could start something asynchronously and have something that waited. But again, that only works if you somehow knew that this was the 10th call and therefore this one should wait synchronously for all 10 to be returned. The challenge there would be: how do you know what all of those are? Because you also have these additional items...

B
You know, the volume snapshots and so on. So I don't know how that's going to work in terms of the framework.

B
Actually, here's a possibility, I just thought of something. Instead of a PVC plugin that acts on each PVC and then creates the volume snapshots, what if you created a CSI pod plugin that listed the PVCs, pulled them from the cluster, started the snapshot process, and just handled it all at the pod level? In other words, instead of the pod plugin returning the PVCs and then each plugin being called on each PVC, what if the pod plugin ran the snapshots?

B
For example, I guess one use case here would be: you've got a cron job that creates a job pod once a day, and its logs go to this PVC that gets rebound every time. You still want that backed up, and Velero file system backup does not back it up.

B
That's a downside, but the data mover would back it up, because it's based on CSI. But again, that's an edge case: if you have someone with hundreds of volumes, chances are the majority are not unmounted. I think in this case, if you were to have a pod plugin that did the PVCs in parallel, then you would still have the PVC plugin, which would say "hey, is there a pod already handling this?"

B
If so, don't do anything; if there's not a pod handling it, then you do what you're doing now. And then have the pod plugin take on that snapshotting work so it can do it in parallel: kick things off, then go back and loop over and wait for the whole set to be done, rather than waiting on each snapshot handle one at a time. So I think the pod plugin may be the only way to do all the PVCs for a pod at once.

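A Go sketch of the claim handshake being proposed, with hypothetical names throughout: claimedAnnotation, podPluginSnapshot, and pvcPluginExecute are assumptions for illustration, not existing Velero plugins.

```go
package main

import "fmt"

// claimedAnnotation marks a PVC as handled by a pod-level plugin; the
// annotation key is entirely hypothetical.
const claimedAnnotation = "velero.example/snapshot-claimed-by-pod"

type pvc struct {
	name        string
	annotations map[string]string
}

// podPluginSnapshot claims and snapshots every PVC mounted by a pod; real
// code would kick off the CSI snapshots here, in parallel.
func podPluginSnapshot(podName string, pvcs []*pvc) {
	for _, p := range pvcs {
		p.annotations[claimedAnnotation] = podName
		fmt.Printf("pod plugin snapshotting %s\n", p.name)
	}
}

// pvcPluginExecute is the per-PVC action doing the reverse check: skip
// anything a pod plugin already claimed; otherwise fall back to today's
// single-PVC path, which covers PVCs not mounted by any running pod. The
// combination aims at every PVC being handled exactly once.
func pvcPluginExecute(p *pvc) {
	if _, claimed := p.annotations[claimedAnnotation]; claimed {
		fmt.Printf("pvc plugin: %s already handled, skipping\n", p.name)
		return
	}
	fmt.Printf("pvc plugin snapshotting unmounted %s\n", p.name)
}

func main() {
	mounted := &pvc{name: "data", annotations: map[string]string{}}
	orphan := &pvc{name: "cron-logs", annotations: map[string]string{}}
	podPluginSnapshot("app-pod", []*pvc{mounted})
	pvcPluginExecute(mounted) // skipped: claimed by the pod plugin
	pvcPluginExecute(orphan)  // handled: no pod mounts it
}
```
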
C
I also want to quickly bring up, in the same context, that I've seen some issues about improving the performance of the node agent backups, even in the new data mover. How much parallelism is there? For example, in the current node agent, I think one node agent pod is handling everything, so I've seen one issue asking: can we use Kubernetes jobs to parallelize this a bit more? How does the new data mover work? Do things happen in parallel there?

B
Yeah, and we've actually had a conversation about this, even internally at Red Hat, in terms of the transition. For example, our OADP data mover, the one that's not part of this and that we're eventually trying to move away from, runs in a single pod, but we run multiple of those, say 10 to 15, in parallel, where we set a limit. The switch to the Velero data mover means it's based on the node agent instead.

B
So if you have a cluster with many nodes, you get some parallelism, because right now we're doing just one upload at a time per node for the node agent. If you have 20 nodes in your cluster, you're getting a fair amount of parallelism.

B
But if you've got a small cluster with two or three nodes and a bunch of volumes, then you're not getting a lot of throughput. I think there's no inherent design reason why we can't increase that; in previous conversations with other maintainers, it was more a question of testing and confidence. It's one of the things that, in the time frame of 1.12...

B
...they didn't really have time to fully work out whether there were any threading issues with increasing that beyond one. I think the hope, certainly on our side, is very much that we would like to make it configurable, to allow you to say "okay, instead of just one per node, we want three or four or five per node" or whatever.

B
The work to get there is, first of all, to change one to two, test it out, and work through any threading bugs we run into if we have any problems. Once we've fixed any problems, or there are none and it's stable with multiple uploads in parallel, then we just need to add a configuration parameter.

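A minimal Go sketch of what that per-node knob might look like, using a counting semaphore; the concurrentUploads setting and runUploads function are assumptions, not Velero's node agent code.

```go
package main

import (
	"fmt"
	"sync"
)

// runUploads bounds the number of concurrent volume uploads on a node.
// concurrentUploads=1 matches today's one-upload-at-a-time behavior;
// raising it is the configurable parallelism discussed above.
func runUploads(volumes []string, concurrentUploads int) {
	sem := make(chan struct{}, concurrentUploads)
	var wg sync.WaitGroup
	for _, v := range volumes {
		wg.Add(1)
		sem <- struct{}{} // blocks while `concurrentUploads` are in flight
		go func(vol string) {
			defer wg.Done()
			defer func() { <-sem }()
			fmt.Println("uploading", vol) // real code: data mover upload
		}(v)
	}
	wg.Wait()
}

func main() {
	// With 4 volumes and a limit of 2, at most two uploads run at once.
	runUploads([]string{"vol-a", "vol-b", "vol-c", "vol-d"}, 2)
}
```
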
B
So that will not be in 1.12. I'd like to see it in 1.12.1, but since it's a new parameter, I don't know whether that can go into a patch release; I'm not sure. But that's where we are with it: in 1.12 it'll just be one upload per node agent pod.

B
So you get some parallelism if you have a bunch of nodes; otherwise, there's room for improvement.

E
Got it. Thanks for that idea; I think we'll think on it and come back, and we'll see if you can also add any thoughts on the pod plugin idea.

B
Right, and then the PVC plugin would also need some work. Basically, we need some way of determining what marks a PVC as handled by the pod plugin, so the PVC plugin can do the reverse of that: either "okay, the pod plugin is handling this, I'm not going to do anything here," or "this is an unconnected PVC, it's not being handled, let's do this one." And then we just need to make sure that basically every PVC gets handled, and also that no PVC gets handled twice.

E
This is great, thanks for this. I also had the second discussion topic, but I think we're already out of time. Tiger, if you want to discuss anything over chat, we can do that too. We have a lot of comments on the PR, and I think we need to get together to finalize it: we need to decide something, like whether we pause on this or take it in a different direction.

E
I just wanted to discuss that in this forum, the recreate option PR, but yeah, I don't think we have time for it today. We can...

B
...do that in Slack, right? Yeah.