From YouTube: Velero Community Meeting - Jan 18, 2022

A
Hello everyone, and welcome to the Velero community meeting. Today is Tuesday, January 18th of 2022. Happy new year, and welcome back, everyone. Today we just have some status updates; if anyone has any other status updates, let's add those in there. But let's get started with Bridget.

B
Hi everyone, happy new year. So I am kicking things off today with maybe a little bit of a sad announcement: I have made the decision to leave VMware, and as such, I'm going to be stepping down as a maintainer for the Velero project.

B
My last day is going to be next Friday, so Friday the 28th of January. I just want to say a huge thanks to everyone on the Velero project and in the community. This is my first time working on an open source project that's had such an active community, so it's been a really amazing learning experience. Thank you very much, everyone, for making that experience really great.

B
Before I finish up: I have been working on a data mover design, and I'm going to be handing that over to the other maintainers at VMware, and then closing out anything else that's still assigned to me. But yeah, that's it. Thank you very much, everyone.

C
Yeah, definitely a sad announcement on this side. You've been so great to work with, just so brilliant and so helpful, and we're definitely going to miss you here. So thank you for all your hard work and your contributions.

C
Well, I'll have to make sure to take advantage of your last days here with Velero.

B
Definitely, yeah. I'd be happy to hang out and chat and share whatever I can before I go. But thanks very much for the comments, and for being really fun to work with.

A
Thank you again, Bridget. You've been awesome.

E
I've loved working with you, Bridget, and luckily we live close together, so we'll probably be seeing each other after this. But I'm so glad we got to work together on Velero.

A
All right, thank you, Bridget. Next up, Dave.

D
So I'm actually leaving VMware as well. I won't be leaving the Velero project as such: I'm going to Kasten, and they've asked me to remain as a maintainer for both Velero and Astrolabe. I'll be cutting back a bit on my involvement with Velero and probably ramping up more on the Astrolabe side. So I think we'll need to submit a PR, because that means Kasten will actually be becoming part of the maintainer group for Velero. I think that's pretty exciting, and for me personally it's exciting.

E
Well, needless to say, I'm glad that this is not goodbye, as much as we've enjoyed working together.

D
Yeah, I should be here for the community meetings. I may be doing a bit less on code, but we really want to get the Astrolabe stuff rolling and work on getting the integration with Velero, so that we have it available all over the place.

A
Thank you, Dave, for everything you've done, and we'll see you on the other side.

A
All right, I do have one announcement. I'm not leaving VMware, but Nancy Lancaster, who's on the call here (she's not on camera today), will be taking over as the Velero community manager starting in February, so we're doing a transition period here over the next couple of weeks. You will see emails from her, Slack messages, and things like that, as she transitions into becoming a full-time community manager for Velero.

F
Yeah, I'm super excited to take on the Velero community and the Astrolabe community. Looking forward to working with everybody.

A
And we are also looking to add a few more people from the engineering crew over in China, who have been working on Velero for the past couple of months; we're looking to add those as maintainers as well, so we'll go through the regular maintainer promotion process.

A
All right, we do have some discussion topics today, so let's dive into those. The first one that I wanted to bring up is from a conversation that we had last week: the Velero 1.8 release date. Eleanor, do you want to talk a little about when we're looking at the Velero 1.8 release now?

E
Yes, I can do that. We've been planning a release right at the end of the month.

E
When we had Dave and Bridget, we thought that they could handle any issues that might arise from the release, but with all the transitioning we figured that it would make the most sense to hold the release until after Spring Festival, so the Beijing team is available just in case. We don't expect any bugs, and we're doing lots of testing, but just in case, we don't want a panic situation. So we've landed on February 14th, a Valentine's Day release, as the new release date.

E
The plan is... I think I can do this from memory; I might have to look it up. Basically, our plan is to cut an RC build, RC2, before then. I'll try to pull up the dates, but the point is we'll cut RC2 beforehand; the hope is to have everything done so that we just need to mark it as a release on February 14th. So if folks do want to do early integration testing, be in contact: we'll have a release candidate that should be the same as the final one.

E
And sorry, I might have... what I wanted to pull up... never mind, my email is too slow; Outlook is really hating me today. I was hoping to post the exact dates. You know what I'll do: I can edit the document to note the RC2 dates. And I believe RC1 was cut yesterday, FYI.

A
So, with 1.8 having an imminent release, we also wanted to collect requirements, ideas, and features for 1.9 and the future as well; it doesn't necessarily have to be for 1.9, but if there's anything that you would like to add, we do have a GitHub discussion now open here. Nancy opened this last week.

A
So if you have any ideas of things that you would like to see within Velero for 1.9, or 1.10, or 2.0 in the future, please voice them here. It can be just small ideas, something that you would like to see: better color output from the CLI commands, or maybe an easter egg, Frankie. Just add in whatever you would like to see here.

A
It
doesn't
have
to
be
a
full
on
design
document,
of
course,
but
just
short
ideas
that
we
can
then
work
with
and
see
how
they
fit
into
the
the
already
existing
1.9
roadmap
that
eleanor
and
the
team
has
been
working
on
here,
see
if
there's
anything
else
that
we
can
add
in
here.
If
there's
anything
that
ties
into
what
you
would
like
to
see-
or
maybe
we
could
move
some
of
the
ideas
over
to
the
next
release
planning-
and
things
like
that.
A
This is our first time using GitHub Discussions for this; previously we've used surveys, but we wanted to be a little more transparent immediately, and that's why we're using Discussions here. So just put your ideas up there, and then we can have a look at them.

G
Hello everyone. I think I'm going to go ahead and add this design as part of the Velero 1.9 wish list. Just to give a recap to everyone on the call: this is a design that we've been talking about to have special plugins, that is, additional plugin types for pre and post backup and restore. If you're familiar with the code, there are existing hooks for doing the backup and restore per item.

G
These run before the backup and after the backup, before the restore and after the restore. The advantage of those is that you can trigger a bunch of stuff that isn't the backup and restore itself: imagine expanding clusters, shrinking clusters, and so on. There's a bunch of use cases for those. So anyway, on the status of this design:

G
I've been incrementally adding some of the feedback, and the last I heard we're waiting on the maintainers to decide if there's anything missing from it. I'm really happy to add whatever is missing and hopefully get this in as part of 1.9, because, full transparency, I do have a forked version of Velero already running with those plugins. So hopefully adding that code that we have privately into upstream is not going to be... well, in my vision...

G
Everything is... you have to be careful; it's a little bit complicated because of the moving parts. But I hope it's not going to be super difficult for our team, which is the one me and Frankie work for. And please, please, please review it. I really want to get this going.

G
Then you asked to have another maintainer chime in, because the holdup is the logging part on the client side. I said I can add all the capabilities on the client side, so that when you want to download the logs from the backup and restore, the logs from those plugins are included.

G
The logs are always going to be part of the bucket on the controller side; the question is the ability to download the logs on the client side. If it makes sense, all we need is a decision on how to do it, and then we move on.

G
So the design, if I boil it down, the net-net is: do the logs really need to be part of the client, or is just sitting on the server fine? And at this point the proposed changes are almost... it's thousands of lines. So if we keep adding all this, the merge request can get pretty massive.

E
Well, it was definitely on our 1.8 roadmap to get this in, so we will keep pushing to get it in. I'm assuming we might have one of the Beijing... well, we'll see. If we don't have a volunteer here, then we can have one of the Beijing maintainers review this. So I will help push for that.

E
But thank you for bringing this up; I'm glad you brought it to our attention. We have a lot of things in the air with Velero, and I guess things fall through the cracks sometimes. So thank you for raising this again.

A
All right. And yes, we do have time; we have one more here.

H
Hey folks, can you guys hear me okay? (Yes, we can.) Awesome. So I have a PR up. It's pretty hacky, but one of the things that I've noticed in some of our testing is that if you have a plugin that takes some period of time, backups can exponentially scale up in time.

H
If I understand correctly how Astrolabe's integration into Velero works... I was working using a backup item plugin and looking at how long that could take, and I was kind of asking whether that's how it was actually going to be integrated, all those other things. Just to bring this to the attention of folks.

H
I don't know if this is something that's been explored, but I have a PR up that solves some of the problem. It's pretty hacky, and I think somebody would probably need to take it over, but I just want to bring this up and have a discussion about it, because I think it's probably pretty important for everyone here.

H
I think it's the only one that I have open, so yeah, that one. Effectively, what I did, folks, is I asynced out the item backup and then added a channel to write out to the tar file. So basically, each item backup is going off on its own and being backed up, and only when we have to write to the tar file does it all come back and coalesce. Once we've written out everything to the tar file, then the backup is complete. That seems to have allowed us to expand out enough that we see a pretty healthy benefit when a plugin takes a period of time. When the plugin doesn't take a period of time, there's almost no benefit, almost no distinction; it's just when plugins take some time for enough resources.

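[For reference, a minimal Go sketch of the pattern Sean describes here, with hypothetical names; it illustrates the approach, not the actual PR. Item backups fan out to goroutines, and one channel funnels the results to the tar writer, since only the tar writes need to be serialized.]

```go
package main

import (
	"archive/tar"
	"os"
	"sync"
)

// backedUpItem is a hypothetical result type: the serialized bytes of one
// backed-up item plus the path it should have inside the backup tarball.
type backedUpItem struct {
	path string
	data []byte
}

// backupItemsAsync runs each item backup in its own goroutine and funnels
// the results through one channel; backupOne stands in for the per-item
// work (plugin calls, etc.), which is where the slow plugins overlap.
func backupItemsAsync(items []string, backupOne func(string) backedUpItem, tw *tar.Writer) error {
	results := make(chan backedUpItem)
	var wg sync.WaitGroup
	for _, item := range items {
		wg.Add(1)
		go func(item string) {
			defer wg.Done()
			results <- backupOne(item)
		}(item)
	}
	// Close the channel once every item backup has come back to coalesce.
	go func() { wg.Wait(); close(results) }()

	// Single writer: tar.Writer is not safe for concurrent use, so all
	// writes happen here; the backup is complete when the channel drains.
	for r := range results {
		hdr := &tar.Header{Name: r.path, Mode: 0o644, Size: int64(len(r.data))}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(r.data); err != nil {
			return err
		}
	}
	return tw.Close()
}

func main() {
	tw := tar.NewWriter(os.Stdout)
	_ = backupItemsAsync([]string{"pods/a", "pvcs/b"}, func(name string) backedUpItem {
		return backedUpItem{path: name + ".json", data: []byte("{}")}
	}, tw)
}
```

[Note that this sketch spawns one goroutine per item with no upper bound; the worker-count idea discussed later in the meeting is one way to cap that.]
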
H
So I just wanted to bring this up, also in the context of how Astrolabe is going to be working, which looked to me, at least in the one example I saw, like a backup item plugin.

D
Yeah, so the thinking right now for the Astrolabe stuff is that we're going to initially use it for the volume snapshotting. My plan was to use the item snapshotter I'm trying to get merged, because backup item action isn't really good for this; it's not really designed to take snapshots. It's designed to mutate resources on backup or on restore. Item snapshotter is actually for taking snapshots: it returns some snapshot info that we can then store in the backup.

D
So
we
get
the
same
effect
as
we
were
getting
with
the
volume
snapshotter
plugins,
where
there's
like
a
volume
snapshots
json
that
you
can
look
at
and
say
what
snapshot.
So
the
idea
would
be
to
have
astrolabe
behind
an
item
snapshot
and
that
should
be
equivalent
performance
to
the
existing
snapshot
plugins.
So
there
will
still
be
a
grpc
layer,
but
it
will
be
at
the
astrolabe
layer.
We
won't
do
it.
We
don't
need
to
do
a
double
in
direction.
D
The
item
snapshotter
for
astrolabe
can
be
linked
in
as
part
of
valero,
but
then
the
call
to
the
astrologe
plug-in
would
be
through
grpc.
So
that
that
performance
should
be
equivalent
as
far
as
things
like
data
movement,
if
we
have
snapshots,
we
can
defer
the
date
of
movement
until
later
in
the
backup
and
do
it
async
to
the
main
backup
which
is
kind
of
how
the
v-sphere
plug-in
works
right
now,
it's
essentially
how
evs
works
under
the
covers,
azure,
etc.
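[To make the distinction concrete, here is a rough Go sketch of the two plugin shapes being contrasted. These interfaces are illustrative assumptions only, with simplified stand-in method names; the real definitions are Velero's BackupItemAction plugin and the item snapshotter design under review.]

```go
package plugin

import "k8s.io/apimachinery/pkg/runtime"

// Illustrative shapes only; not Velero's actual plugin APIs.

// backupItemAction mutates a resource during backup (or restore); it is
// not designed for taking snapshots.
type backupItemAction interface {
	// Execute returns a possibly modified item plus any additional items
	// that must also be backed up.
	Execute(item runtime.Unstructured) (runtime.Unstructured, []string, error)
}

// itemSnapshotter exists specifically to snapshot an item, returning
// snapshot info that Velero can persist with the backup, much like the
// volume-snapshots JSON written by volume snapshotter plugins.
type itemSnapshotter interface {
	SnapshotItem(item runtime.Unstructured) (snapshotID string, err error)
	CreateItemFromSnapshot(snapshotID string) (runtime.Unstructured, error)
	DeleteSnapshot(snapshotID string) error
}
```
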
D
So
I
think
in
the
short
term,
we
won't
see
an
impact
from
the
asteroid
stuff
because
it
should
pretty
much
be
the
same.
It
should
be
equivalent
performance
to
the
existing
volume.
Snap
shutters
does
that.
That
makes
sense.
D
So there's backup... basically, we run through backup item actions on the objects, and now we can also run through an item snapshotter, but it's going to be on a per-resource basis. So a PVC should trigger the item snapshotter, and then we'll go down and do the PVC stuff. Maybe we actually take a CSI snapshot, or we may decide we don't have CSI snapshotting and we're going to go all the way down to the EBS level or the vSphere level.

D
So
I
don't
see.
I
don't
expect
a
big
change
in
in
the
amount
of
time
to
snapshot
a
single
volume.
They
should
be
pretty
close,
so
I
don't
think
we'll
see
an
impact
there
as
we
move
along.
We
add
other
things
that
can
be
snapshotted.
We
may
get
a
little
bit
more
there.
We
shouldn't
be
doing
well.
The
restic
backup
is
kind
of
the
big
one
that
holds
things
up
and
if
we
can
start
to
run
those
in
parallel,
you
know
where
we
have
to
like
block
things.
D
That
would
be
good
and
I
think
what
your
change
was
doing
was
it
was.
It
was,
it
was
running
like
the
the
backup
items
more
in
parallel,
and
so
the
question
is:
are
you
getting
if
you
don't
have
like
a
weird
plug-in,
that's
holding
things
up
for
a
long
period
of
time?
Are
you
seeing
a
performance
improvement.
H
You
see
a
small
performance
improvement,
but
it
is
around
when
one
item
basically
takes
a
period
of
time
multiple
times.
So,
let's
say,
like
our
back,
for
instance,
takes
some
period
of
time.
There's
if
you
have
a
bunch
of
our
back
stuff,
you'll
get.
You
know
a
significant
reduction
okay,
so
I
haven't.
D
Checked
it
so
our
back!
What
our
back
reading
the
rbac
rules
is
slow
or.
H
H
H
D
D
H
H
I
would
probably
agree
with
that.
I
guess
the
question
was
if
it
was
going
to
be
a
backup
item,
but-
and
this
is
where
I,
where
my
concern
was-
it
was
going
to
be
a
backup
item,
then
sitting
and
waiting
for
a
period
of
time
for
each
pvc
to
happen
when
you
could
be
doing
other
work
right
right,
so
yeah
shouldn't
be
any
worse
than
it
is
currently
sure.
I
It sounds like the backup item kind of situation with Astrolabe, because what you tested was something that takes two seconds 15,000 times, but we might have a similar slowdown if something takes, say, 30 seconds only 500 times. I think that's kind of what you're talking about with the Astrolabe side.

H
I think this is what I'm getting at, Dave: the larger concept, or my understanding of the larger concept, of Astrolabe is that you're going to be adding protected entities for more than just data, right? We're talking about adding a protected entity for an operator or something like that, which could take some period of time, right?

H
So for the future of this, I'm worried about the performance. This is probably not an immediate concern for Astrolabe as it is today and what it's being used for; I think this is a concerning question for the future, say a year down the road. If we continue on the path that we're on and we don't solve this particular problem, we could run into some problems.

D
I agree. I think we've already been hit up with things like... well, for one, just the serializing of backups is hitting people. Currently we serialize, for example, all of the snapshot actions, or all the actions, right? So parallelizing should enable us to have multiple snapshots in flight, or multiple Astrolabe operations, or multiple backup item actions in flight. So I agree that we should be doing this, and the question is: where do we bite ourselves when we get things running in parallel? We've got things like pre and post restore hooks, and there's the report, and I don't know that we ever got to the bottom of it.

D
I
I
don't
know
that
it
ever
got
resolved
like,
as
I
said,
daniel
was
looking
into
it
like
I
remember
that
issue
it
turned
out
to
be
different
than
we
initially
thought
it
was
so
then
daniel
came
back
and
said
because
I
I
think
the
the
issue
has
reported
that
he
said
oh,
but
it's
not
doing
it,
it's
actually
doing
it.
The
thing
we
thought
was
the
problem
wasn't
the
problem,
but
then
we
said
well,
it's
still
not
working
and
here's
what
we're
seeing
and.
D
So we'll have to dig in. But that's kind of a basic problem that we'd have to make sure we're not breaking: things like pre and post hooks on the pods assume that the pre-hook happens, then a sequence of actions happens, and those could be executed in parallel in between.

I
I think, just to go beyond even the hooks, one of the things this kind of relates to... I don't know whether what Sean's done here affects this or not, but, for example, one of the things on the restore side is the additional items, which say: hey, before you restore this thing, these other things need to be done. So when we're doing items and actions in parallel, how does that interact with that?

I
...the ordering. But again: you're restoring some item, and the restore plugin says, oh wait a minute, I need this thing first, so it returns additional items. What the code currently does, in its single-threaded way of doing it, is it finds that item, restores that item, and runs that item's actions.

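[A simplified, self-contained Go sketch of that single-threaded flow, with hypothetical names; it is not Velero's actual restore code. Restoring one item inlines the restore of whatever additional items its plugins report as prerequisites.]

```go
package main

import "fmt"

// restoreAction stands in for a restore item action plugin: given an item,
// it may report additional items that must be restored first.
type restoreAction func(item string) (additionalItems []string, err error)

// restoreItem sketches the single-threaded flow described above: when a
// plugin returns additional items, each one is found and restored inline,
// before the current item is applied to the cluster.
func restoreItem(item string, actions map[string][]restoreAction, done map[string]bool) error {
	if done[item] {
		return nil // already restored as someone else's prerequisite
	}
	done[item] = true
	for _, act := range actions[item] {
		additional, err := act(item)
		if err != nil {
			return err
		}
		for _, dep := range additional {
			if err := restoreItem(dep, actions, done); err != nil {
				return err
			}
		}
	}
	fmt.Println("restored", item) // stand-in for creating the resource
	return nil
}

func main() {
	actions := map[string][]restoreAction{
		// A hypothetical plugin: the pod needs its PVC restored first.
		"pods/web": {func(string) ([]string, error) { return []string{"pvcs/web-data"}, nil }},
	}
	_ = restoreItem("pods/web", actions, map[string]bool{})
}
```
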
D
That could be a feature flag, yeah, or we could add it as an option. I think it would be good for us to do some actual analysis.

I
Yeah, I don't know that we know it's a breaking change; I think we can say that it might be a breaking change. In other words, we're not necessarily changing APIs here (although actually we are changing APIs if we make it optional). But what we're saying is that there are a lot of things in the code that we really haven't thought about in the context of multi-threading, and usually that means you've got thread-related bugs there the first time you try something multi-threaded.

I
You
had
a
bunch
of
problems
that
we
have
to
fix,
but
you
know,
but
so
yeah,
but
but
I
don't
know
that
we've
identified
anything
specific
that
we're
saying
this
needs
to
be
fixed.
We're
just
saying,
there's
a
bunch
of
things
that
we
have
no
idea,
whether
it's
a
problem
but
among
those
bunch
of
things,
I'm
sure
a
few
of
them
at
least
are
problems.
D
So
you
know
the
path
that
nolan
and
I
have
been
thinking
of-
was
to
start
well.
So
so
we
weren't
that
concerned
about
her
backup
performance.
So
that
may
be
a
concern,
but
that
wasn't
what
we
were
worried
about
at
the
time.
D
But
we
were
getting
concerns
over
not
being
able
to
have
multiple
backups
in
flight
simultaneously,
because
people
are
running
the
issues
with
that,
like
a
single
backup
could
take
an
hour,
but
if,
if
it
means
that
you
know
like
you,
have
a
cluster
with
multiple
namespaces
you're
doing
like
you
know
each
namespace
individually
and
you
want
to
back
up
each
one
on
the
hour.
But
you
know
one
of
the
namespace
takes
an
hour.
Then
it
blocks
everything
else,
and
so
we
were
looking
at.
D
How
can
we
get
backups
to
run
in
parallel
and
we
were
looking
at
very
coarse-grained
parallelism
where
we
could
do
something
like
say
yeah,
you
know
these.
You
know
we
can
get.
That
was
part
of
the
manifest
proposal
was
to
have
this
manifest.
We
could
pull
out
at
the
beginning
of
the
backup
and
then,
if
we're
starting
another
backup,
we
could
pull
its
manifest.
We
could
say:
is
there
any
overlap?
If
there's
no
overlap,
they
can
run
in
parallel
so
that
there
would
be
no
need
for
locking
was
one
thought.
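[A short Go sketch of that coarse-grained check, under the assumption that each backup's manifest can be reduced to the set of namespaces it touches; all names here are hypothetical.]

```go
package main

import "fmt"

// manifest is a hypothetical summary, pulled at the start of a backup, of
// everything that backup will touch.
type manifest struct {
	namespaces map[string]bool
}

// overlaps reports whether two backups touch any namespace in common.
// Disjoint backups need no locking against each other and can run in
// parallel; overlapping ones stay serialized.
func overlaps(a, b manifest) bool {
	for ns := range a.namespaces {
		if b.namespaces[ns] {
			return true
		}
	}
	return false
}

func main() {
	running := manifest{namespaces: map[string]bool{"team-a": true}}
	incoming := manifest{namespaces: map[string]bool{"team-b": true}}
	fmt.Println("must serialize:", overlaps(running, incoming)) // false: safe to parallelize
}
```
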
D
So
I
think
the
idea
of
running
things
in
parallel
is
the
right
idea.
The
question
is:
where
do
we
put
the
effort
to
get?
You
know
our
bang
for
buck,
because
we
were,
we
were
thinking
along
the
lines
that
getting
backups
running
in
parallel
when
there
was
no
overlap,
would
get
us
a
some.
You
know
like
a
low
hanging
fruit.
It's
like
yeah,
we
can.
We
can
get
a
speed
up
for
people
running
multiple
backups
quickly
without
too
much
risk,
because
if
we
can
say
look
we
can
compare.
D
We
can
see
that
there
is
no
need
for
locking
between
these
two.
Then
you
don't
need
to
do
any
locking.
So
that
was
one
thought
and
I
think
we
should.
We
should
definitely
be
considering
you
know
continuing
on.
Maybe
just
have
like
a
parallelism
or
performance
series
that
we're
we're
looking
into
and
we're
discussing,
and
we
kind
of
go
back
and
look
through
the
code
and
try
to
figure
out
what
needs
to
be
serialized
and
what
does
not
need
to
be
serialized.
D
You
know
like
in
theory
all
the
volume
snapshots
should
be
able
to
run
in
parallel,
but
there's
probably
something
like
well,
if
we're
doing
a
group
for
a
node
or
something
we
should
do
them
together
or
not
space
them
out
too
much
or
or
something
like
that.
So
it'd
be
worthwhile
to
like
think
through.
Some
of
that,
so
maybe
have
a
continuing
discussion
topic
and
we
can
just
work
through
that.
What
is
that?
How
does
that
sound.
G
So
so
this
is
raphael.
I
know
john
has
mentioned
the
feature
flag.
I
particularly
like
feature
flag,
but
there's
a
problem
when
you
keep
creating
new
feature
flags,
never
removing
them,
and
there
is
the
enable
api
groups
for
one
classic.
Actually
gonna.
Add
this
to
the
layer
1.9.
G
If
you
can
get
rid
of
that,
it
had
feature
flag,
is
a
way
to
protect
like
the
stability
of
the
code,
the
code,
when
you
add
new
stuff,
I'm
just
thinking
here
out
loud
if
we
should
create
a
generic
feature,
flag
called
experimental,
which
is
a
catch-all
for
all
this
kind
of
stuff.
Instead
of
have
a
feature
flag
for
parallel
for
performance
timeout,
we
create
a
feature,
feature
flag,
called
experimental
right
and
then
over
time
we
have
this
potentially
break
or
open
pandora
box
for
a
chain
for
for
problems.
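[A sketch of what that catch-all could look like, assuming a feature-flag registry along the lines of the one behind Velero's --features flag; the "experimental" flag itself is only Rafael's proposal here, and all names are illustrative.]

```go
package main

import "fmt"

// enabled stands in for a feature-flag registry like the one behind
// Velero's --features flag; names here are illustrative only.
var enabled = map[string]bool{}

// isEnabled reports whether a named feature flag was turned on.
func isEnabled(flag string) bool { return enabled[flag] }

// backupItems picks the code path: everything unstable hides behind the
// single catch-all "experimental" flag proposed above, instead of one
// never-removed flag per feature.
func backupItems(items []string) {
	if isEnabled("experimental") {
		fmt.Println("parallel item backup of", len(items), "items")
		return
	}
	fmt.Println("serial item backup of", len(items), "items")
}

func main() {
	backupItems([]string{"pods/a", "pvcs/b"}) // serial: flag off by default
	enabled["experimental"] = true
	backupItems([]string{"pods/a", "pvcs/b"}) // parallel path
}
```
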
H
Oh, all I was saying is: thinking of this as a parallelism feature, I would consider that we should probably have a workers option, a number of workers for each set of things that we're parallelizing. If you want to turn off parallelization, you set the worker number to one or zero, and that will put it back to just one. I think that would be...

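[A sketch of that knob; the flag name is hypothetical. A bounded worker pool where the worker count doubles as the on/off switch, since one worker reproduces today's serial behavior.]

```go
package main

import (
	"flag"
	"fmt"
	"sync"
)

// A hypothetical per-stage knob; 1 (or 0) means serial, i.e. parallelism off.
var itemWorkers = flag.Int("backup-item-workers", 1, "workers per set of parallelized things")

// forEachItem runs fn over items with at most n workers. With n <= 1 it
// degenerates to the current one-at-a-time behavior, so no separate
// enable/disable flag is needed.
func forEachItem(items []string, n int, fn func(string)) {
	if n < 1 {
		n = 1
	}
	work := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for it := range work {
				fn(it)
			}
		}()
	}
	for _, it := range items {
		work <- it
	}
	close(work)
	wg.Wait()
}

func main() {
	flag.Parse()
	forEachItem([]string{"pods/a", "pvcs/b", "secrets/c"}, *itemWorkers, func(it string) {
		fmt.Println("backing up", it)
	})
}
```
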
A
So what I'm hearing here: looking at feature flags, adding an experimental flag, adding a parallel flag, adding a worker configuration option or similar. Can someone please add what we've discussed into the issue here?

I
You know, item actions: that's a completely different thing, although with similar goals to what Dave was talking about, which was parallelizing backups and restores to run at the same time.

I
And are you only backing up the items for a given group resource at the same time, or is it across all? Because that's...

I
So, in other words, with your change here: is there a case where a pod and a PVC will be backed up at the same time, in parallel? Or is it only all the pods in parallel and then all the PVCs in parallel?

I
It
so
so
there
we
will
have
to
keep
in
mind
some
of
those
concerns
they
mentioned
around
the
priority
because
yeah,
because
right.
H
I
Kind
of
priority
that
we
have
where
certain
types
have
to
go
first
and
then
other
ones
at
least
on
restore.
I
know,
that's
relevant,
I
think
on
the
backup
as
well.
So
that
may
be
a
concern
here
that
we'll
have
to
work
through.
G
Yeah
we
got
to
have
a
function
with
additional
items
and
and
make
them
in
a
group.
Maybe
we
should
parallel
the
group,
not
the
item,
you
know
what
I
mean.
We
should
have
the
concept
of
group
of
items,
for
example,
if
I
back,
if
I
I'm
going
to
back
up
a
cluster
that
has
a
hundred
namespaces
that,
and
then
you
know
you
you,
you
paralyze
the
name
space
backing
up,
not
every
single
item
inside
of
the
namespace,
because
then,
and
of
course,
for
rb
for
certain
stuff
is
cluster
level
objects.
D
I think this is good, and I would like to work it from: let's go back to what requirements we currently know we have. For example, with pre and post hooks, the operations in between have to happen between those things, so we can't run the pre...

D
Yeah, so that's something where we have to ensure those kinds of things. And are there other places? I think in general, if we do a pod, even without pre and post hooks, we'd probably expect that the disks get backed up by the time the pod finishes, or something.

I
Because restore is definitely the harder one of these, for that reason, right?

I
And I think that's why the restore is more problematic: there are cases where there are dependencies between resources. You're less likely to hit that on backup, although I seem to recall some issues even on backup where those dependencies were relevant, so those are cases we have to make sure are handled. But yeah, the backup case is definitely easier than the restore one, although, again, if you're parallelizing entire backups and entire restores...

I
That's, again, that other issue, separate from the items within a backup. But even in those cases, I think the most problematic of these seems to be parallelizing items within a given restore. So it sounds like we're all saying that's the last thing we're going to look at here, because of those challenges. But the challenges for items within a backup and the challenges for multiple backups in parallel do have some overlap: the reentrancy part of the item actions is definitely a common problem between the two. The inter-item dependencies obviously only apply to the parallelizing of items, not so much the parallelizing of backups. That's where, again...

D
I don't know that it's necessarily a problem; it's more a matter of, you know, is there any interleaving? I guess it probably shouldn't be a problem. But definitely with the pre and post hooks on a pod, you definitely need to...

D
Otherwise, it's not correct. It may not be, but that's what it needs to be doing, because that's the whole point: the pre-hook is supposed to let you, for example, quiesce the pod, do something, and then run the post-hook to say, okay, the pod's ready to go again. And if you're still doing stuff that needed the pod to be quiesced when you unquiesce it, you're broken.

J
Yes, it did too. I'm betting my money on this, because one of my features is relying on it.

I
Red Hat came in... it appears that it was not working properly, and that's still open.

D
Right, but I mean, that's a basic contract, I think, between the pre and post hooks: we do a pre-hook right now simply on a pod (though we could do it on other things), and by the time we get to the post-hook, all those things that happen in between should be done. It's very simple locking, but that's essentially what's going on right now. And we also have things like reentrancy on hooks: if we have multiple backups running, then re-entering the pre-hook, like where we freeze the file system...

D
So
if
we
have
a
pre-hook
that
does
an
fs
freeze
and
we
run
it,
you
know
in
backup
a
and
then
backup
b
starts
and
it
does
it
as
well
and
then
backup
b
runs
faster
for
whatever
reason
it
unfreezes
it,
and
you
know
whatever
right
I
mean
so.
Those
are.
Those
are
the
kind
of
things
we
need
to
to
look
for
as
we
go
through
with
parallelism.
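[To make that hazard concrete, one illustrative way to guard against it, purely a sketch and not Velero code: reference-count the freeze per volume, so backup B's post-hook cannot thaw a filesystem that backup A still needs frozen.]

```go
package main

import (
	"fmt"
	"sync"
)

// freezeGuard reference-counts fsfreeze per volume so that overlapping
// backups don't unfreeze a filesystem another backup still needs frozen.
type freezeGuard struct {
	mu     sync.Mutex
	frozen map[string]int
}

// Freeze runs the real fsfreeze only for the first overlapping caller.
func (g *freezeGuard) Freeze(volume string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.frozen[volume] == 0 {
		fmt.Println("fsfreeze --freeze", volume) // stand-in for the pre-hook command
	}
	g.frozen[volume]++
}

// Unfreeze thaws only once the last overlapping backup is finished.
func (g *freezeGuard) Unfreeze(volume string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.frozen[volume]--; g.frozen[volume] == 0 {
		fmt.Println("fsfreeze --unfreeze", volume) // stand-in for the post-hook command
	}
}

func main() {
	g := &freezeGuard{frozen: map[string]int{}}
	g.Freeze("pvc-1")   // backup A's pre-hook: actually freezes
	g.Freeze("pvc-1")   // backup B's pre-hook: already frozen, just counted
	g.Unfreeze("pvc-1") // backup B finishes first: volume stays frozen
	g.Unfreeze("pvc-1") // backup A's post-hook: actually thaws
}
```
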
H
To allow you to play it... yeah, okay. If you look at the item backup, lines 123 through about 181 (they might be a little off, because I think I'm looking at my branch), you'll see that it's just based on that one particular item. An item is a GVR, name, namespace mapping, right? So I don't know how it would wait for every PVC unless it was doing something else.

H
So
if
somebody
could
help
me
understand
how
this
is
working,
maybe
it's
working,
because
we
order
things
in
a
certain
way,
but
I
don't
understand
how
it's
backing
up
three
pvcs
after
executing
the
pre-hook
and
also
the
pod
based
on
this
code.
I
just
don't
understand
it.
Maybe
there's
a
additional
items
that
are
being
returned.
If
somebody
could
help
me
understand
that
that
would
help
me
understand
this
parallelization
problem
that
you
guys
are
talking
about.
D
That's what should happen if you take a snapshot: basically, once we take a snapshot, it's fire-and-forget. We take a snapshot, that's good; once the snapshot returns, you know that that point in time has been captured, and then things happen in the background. But Velero doesn't care about them currently.

D
So,
like
ebs,
uploads
in
the
background,
easter,
plug-in
uploads
in
the
background
yes
should
be,
but
that
was
the
bug
that
was
required
should
be
happening
between
the
pre
and
post
hooks,
because
we're
we're
explicitly
telling
people
do
things
like
an
fs
freeze
to
freeze
this.
While
we
back
up
the
volume
with
rustic
so
that
it
doesn't
change,
we
get
a
consistent
backup
and
then
do
an
fs
freeze
and
the
fs
unfreeze
in
the
post
hole,
and
so,
if
it's
not
ensuring
that
the
rest
is
happening
there,
it's
completely
broken.
D
Let's see... so, looking here, we have pod volume backups; it appended. I assume so. I mean, everything's serialized, right? We can't do more than one thing. So if we have pre-hooks to execute, we're doing pre-hooks, okay, and then here we build the set of restic volumes to back up.

D
Then we go through and we execute actions (I think that will actually cause snapshots to happen, if there were any), and then here we're doing backup pod volumes, and this should cause all the restic backups to occur.

H
Yeah, it seems to me that we have a pretty good set of functions that encapsulate given actions correctly, and as long as we think about those from the start, then that would be a decent place to start to think about it. So, yeah.

D
...what a real production environment is going to be like? I don't think so.

D
We turn it on as a general feature, with the option to turn it off, and people will use it and see the benefits; but then, if they have a problem, the turn-it-off flag is a really good thing, because then you can say, oops, we missed something. But by the time we set it up, we can say, hey, this is useful. Then, yeah... go, Jonas. Sorry.

A
Yeah, I just want to make sure that we're mindful of the time. This is a fantastic discussion, and I'll make sure to send out the recording here quickly, so everyone else can look at this as well, because I think this is a really important one. Sean, I see you already added one item to the ideas and features for 1.9; could you add this in as well, so we have that captured?

A
Yeah, I think this would be a really good one to look at for the, well, experimental... for the next release, at least.

A
All right, with that, that was the last discussion topic of today. Awesome discussion, everyone; thank you. Thank you, everyone, for joining today, and have a fantastic rest of the week. Next week we're back in the evening here on the East Coast, in the morning over in Beijing, so please join us then at 8:00 PM Eastern if you can. Have a good one.