From YouTube: Velero Community Meeting - Feb 21, 2023
A: All right, we're recording. Hello everyone, my name is Julian Vasilif and I'm the community manager for Velero. Today is February 21st, and this is the official community meeting. So, in general, be nice to each other, and please follow the code of conduct. I've pasted in the chat the link to our community meeting notes, and I'll do it once again, so if you'd like to add topics there so we can discuss them, or if you have some updates, or you want to address some PRs or some issues, that's the place to do it. So with that, I'm gonna share that same document.
B: This is gonna be a super short item. Just a couple of weeks ago I chatted with Scott about the idea of extending the new include and exclude resource filters with something called field selectors. The idea is to resolve some of the existing issues where users have been requesting the ability to filter resources by name, as a starting field. So yeah, I just put together a proposal, hoping to get some feedback from folks. I've also been chatting with Daniel and Ming on a different pull request proposal, which is related.

B: We're just trying to see, you know, if this is potentially a duplication of the other proposal that's being put together, or if this is an extension of another PR that I believe Xun put together as well. But overall, one way or the other, on our side we basically just want a way to be able to filter resources by some of the existing resource fields, like name, starting with name.

B: Basically, the main problem behind it that we're trying to solve is the backup of custom resource definitions. So yeah, if folks have a few minutes here and there, I'd appreciate some feedback on it.

B: The idea is really just to change the proposal that Xun put together, around changing the type of the filters from a slice of strings to, you know, some sort of slice of selectors or something, using the existing Kubernetes API machinery package, which has support for field selectors and all the other things related to selection. So yeah, that's me, thanks.
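A minimal sketch of the field-selector matching idea discussed above, using the upstream apimachinery fields package. The filter string, the field set built from an item, and how such a selector would be surfaced on the Backup spec are assumptions for illustration, not part of the actual proposal.

```go
// Hypothetical illustration only: evaluating an include/exclude filter
// expressed as a Kubernetes field selector against an item's metadata.
// The selector syntax comes from k8s.io/apimachinery; the filter string and
// how Velero would expose it are assumed.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// e.g. a user-supplied filter string on the Backup spec (hypothetical field).
	sel, err := fields.ParseSelector("metadata.name=my-crd-name")
	if err != nil {
		panic(err)
	}

	// Fields extracted from a candidate item during backup.
	item := fields.Set{"metadata.name": "my-crd-name", "metadata.namespace": ""}

	fmt.Println(sel.Matches(item)) // true -> item passes the filter
}
```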
F: Well, I will add more comments in the discussion under the PR by Ming.

F: But currently my thought is I slightly prefer we add more complexity in that data structure, rather than adding more changes to the backup spec here. First, I don't think that will be merged into 1.11, because for 1.11 we are pretty close to FC, and we want to implement what has already been planned and merged, which is Xun's design regarding adding separate filters for namespace-scoped resources and cluster-scoped resources, and that was focused on the type of the resource rather than names.

F: And if we merge that change, that will introduce a big change compared to what Xun has proposed. And in addition to that, I think your only concrete scenario is adding an additional custom resource definition to the backup which does not have a custom resource, right?
B: Yeah, and also, I think the custom resource definition is the first one; there are also cluster roles and sometimes, yeah, other weird stuff, like policies, for example, that don't come up, but yeah.

F: But do you think that would be doable by adding a new field into the data structure proposed by Ming? I'm slightly inclined not to add additional attributes to the spec of the backup, especially in terms of filters, because there are already too many filters, and oftentimes users, or even we, are confused about which filter applies, or when those filters exist together, how they conflict with each other and how Velero will behave.
B: Yeah, you know, at the end of the day I'm okay with either way. I guess I get your point; it's just the balance between bloating the API versus, you know, a more familiar user experience. So yeah, as we discussed in Ming's PR, I think.

B: You know, I think I didn't really read into the existing comments and stuff like that, but I think there is room for it; there are definitely valid use cases for policy-type concepts.

B: And the only reason why I bring up the field selectors, and why I think we should tack them on top of Xun's PR, whether now or later, is because, I guess, my thought is: we are already changing it, right?
B: We are already making a very big change, so either way, whether it gets added on top of the new filter requests or the resource policies.

B: I think there is some flexibility there on our side. So yeah, maybe we can just continue to discuss it during the discussion session.

F: Yeah, but you're okay that this one doesn't go into 1.11, right? Because that would be my concern: that's not really an incremental change, because it is a breaking change to whatever Xun is going to implement in version 1.11, right? So that's a little, yeah, problematic.
H: I think it would be better, sorry, yeah, I was going to say the same thing. I think however we implement it, we're talking about something that hasn't been fully designed yet, and we're three weeks away from feature complete. So either way we're talking about something that we want to get the design for, but it probably won't be implemented until, I mean, it definitely won't be implemented in 1.11.

H: The question is, if we accept that we're going to implement this in 1.12, whether it's still the right place to put it on top of the existing design; that'll be another change in 1.12 or whenever we implement it. I guess my point is that I think we need to decide where the logical place is, from a user API point of view, to put it, regardless of release schedules, but knowing that either way this is a change.
B: Yeah, I'm totally fine if we don't get it into 1.11. Realistically speaking, we know that's not going to happen, and hence at this stage I'm just hoping to get some comments there in the design, in the proposal. Daniel, what you just said was great; I guess, for posterity, I think we could just, let me just add it to the pull request.

F: Sure, yeah, I'll do that. And really appreciate you sharing your thoughts; I think that's a really valid requirement. But yeah, thank you. Let's continue.

A: No, okay, that's good.
H: Yeah, so this is the ongoing item action v2 controller work. The PR has been out there, and now we've actually had quite a bit of review and changes over the last week. At this point, let me just kind of describe where I am and where the discussion is going, for context, because I know it's also a discussion topic.

H: That was actually a good observation, because even on the OADP side we had a problem: one of the things that we need to do there is that the data mover backup resource has certain annotations that we're adding during that process, when VolSync is pushing that into the storage location, and those annotations are needed when we restore the item. So we definitely need some mechanism so that, at the end of those asynchronous operations, once we declare they're all done and we've set the status, we persist those into object store.
H: So where I went with that for the next iteration was basically: we already have the mechanism for adding additional items from a backup item action plugin, and we use that, for example, for the CSI plugin anyway, and that is the mechanism, again on the OADP side, that we were using for the data mover backup to make sure it got included in the backup. So I added that, and what I did was set up a mechanism so that we had a new finalized phase.

H: I think on this part, the fact that we need a controller to act on the backup at the end, there's no disagreement. Basically, what happens in the workflow is that the backup controller finishes and the backup is in "waiting for plugin operations". We have an async action monitoring plugin that, on some configurable schedule, calls the plugin's Progress and checks all those operations; when all the operations are done, the backup is transitioned into the finalized phase, and then on finalize we have a controller.
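A rough sketch of the polling-and-transition logic being described: an async action monitoring step that checks each operation's progress and, once everything reports done, hands the backup to the finalize controller. The phase names, types, and Progress signature here are assumptions made for this sketch, not the actual Velero code.

```go
// Illustrative only: a simplified check that polls asynchronous plugin
// operations and moves a backup into a "Finalizing" phase once they are all
// done. Types, phase names, and the Progress callback are invented here.
package main

import "fmt"

type OperationStatus struct {
	Done bool
	Err  string
}

// Operation stands in for one in-progress async plugin action.
type Operation struct {
	ID       string
	Progress func() OperationStatus // in real life this calls back into the plugin
}

type Phase string

const (
	PhaseWaitingForPluginOperations Phase = "WaitingForPluginOperations"
	PhaseFinalizing                 Phase = "Finalizing"
)

// checkOperations returns the next phase for a backup given its operations.
func checkOperations(current Phase, ops []Operation) Phase {
	for _, op := range ops {
		if st := op.Progress(); !st.Done {
			return current // at least one operation still running; stay put
		}
	}
	return PhaseFinalizing // everything done; hand off to the finalize controller
}

func main() {
	ops := []Operation{
		{ID: "op-1", Progress: func() OperationStatus { return OperationStatus{Done: true} }},
		{ID: "op-2", Progress: func() OperationStatus { return OperationStatus{Done: true} }},
	}
	fmt.Println(checkOperations(PhaseWaitingForPluginOperations, ops))
}
```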
H: That controller picks it up, and this finalize controller, at a high level, what it needs to do is persist into object store those resources that were modified during those operations, which happened potentially after the original backup processing began. So what the PR is currently doing is, and initially I was thinking, okay, I need to write a controller here that basically gets these resources out of the cluster and persists them to the backup. One thing I realized when I implemented it is that there are actually a bunch of things that we do when we back up an item. One of them is doing the GVR negotiation to figure out a preferred version and then backing up whatever versions are in the cluster; that's all handled in the current backup item code. We also have plugins that currently run on resources, and my thinking here was, if the user has a general plugin that operates on everything, that puts, say, a certain annotation, or may need to do certain things for compliance reasons, ideally you'd want that to run on everything, including these items. And my thinking was that it was actually more straightforward to go through and use that backup item logic that already exists to do that.
H: One thing that I did have to do, which I think was behind some of the comments in the PR about complexity, is that when you run through in the finalized phase there are certain aspects of that that we want to skip. You know, we don't want to be pulling in other things, additional items, at that point.

H: We're not messing with restic or any of that, you know, snapshots. So backup item does a bunch of things, and there are a few of those that we want to skip the second time around, but most of what it does we still want to do. And the other thing is that backup item handles the logic around streaming the file to the tarball and figuring out the file name.
H: All of that, you know, because we actually have to create a temporary file in the file system and then read the file and then write it out; so all of that logic is already there in backup item. And the other thing was on the restore side. My original idea, too, was, oh, we'll just modify the tarball and re-upload it, but that's actually very complicated, because you'd have to stream through the tarball for each item and re-stream it out. So basically what I did was: there are two tarballs, and we extract them in order, so any files that are in the second one that are also in the first just overwrite them, and files in the second that are not in the first just get added. Either way, the result is the equivalent of modifying the original tarball, except we don't have to upload the whole thing twice.
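A minimal sketch of the "two tarballs, extracted in order" idea just described, assuming plain tar archives and a shared extraction directory. The archive names and layout are placeholders, not Velero's actual format, and the sketch skips gzip handling and path sanitization.

```go
// Illustrative only: extracting two tar archives into the same directory, in
// order, so entries in the second archive overwrite same-named entries from
// the first. Paths and archive names are placeholders.
package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

func extract(tarPath, destDir string) error {
	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()

	tr := tar.NewReader(f)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		if hdr.Typeflag != tar.TypeReg {
			continue // keep the sketch simple: regular files only
		}
		target := filepath.Join(destDir, hdr.Name)
		if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
			return err
		}
		out, err := os.Create(target) // os.Create truncates, so later archives win
		if err != nil {
			return err
		}
		if _, err := io.Copy(out, tr); err != nil {
			out.Close()
			return err
		}
		out.Close()
	}
}

func main() {
	dest := "/tmp/restore-scratch"
	// Extracting in order means the "finalize" tarball overrides the original.
	for _, t := range []string{"backup.tar", "backup-finalize.tar"} {
		if err := extract(t, dest); err != nil {
			panic(err)
		}
	}
}
```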
H: So that's where we are now with the PR, and again, I know there were some concerns about the complexity of having those checks in backup item about whether it's finalized. But looking at this now, and thinking about it with the comments in the back and forth: if we have to create a whole new workflow for this, to create this completely separate tarball that's not part of the backup, then the restore needs more workflow around that as well, because then we have to treat them separately. One advantage of this approach is that restore doesn't care whether something was in the original tarball or in the finalized tarball; the only place in the restore that cares is the extract method. Once we extract, it's just one file system area, and all of the archive package code that accesses those on the restore side just accesses them. So that's kind of where we are with that.
F: So earlier today I chatted with you, but since there's some misunderstanding, I wish to first clarify your scenario with you. Are you saying you modified the resource during the async operation and you want to put that in the backup tarball? So the resource you are putting in the backup during, or after, the async operation, is that part of the user's workload?

H: If you think about it, I think it's analogous to what the existing CSI plugin does; there's no asynchronous action there, but the CSI plugin creates the volume snapshot resource, and that volume snapshot resource didn't exist before you did the backup with the CSI plugin.
I: Yeah, actually, when we talk about whether the data belongs to the workload, or is the user's data, we need to see if the data is from the user's workload or application or namespace. Actually, for the CSI case it is something like that; even the volume snapshot is from the user's namespace, right?

H: You're saying that in the scenario you're talking about, the snapshot backup, sorry, the backup operator, is going to be in the Velero namespace. I mean, so I'm trying to...
I: And also, some of it is built-in data from Kubernetes. Most importantly, imagine a case where we don't have Velero and we want to restore some data: if the data belongs to the user namespace, that is data we must restore anyway. But if the data doesn't belong to the user's workload, it means it may be some data created by Velero, you know, to maintain some internal state, and that data is not actually necessary for the user, for restoring the user's workload. Okay, so let me just add one more thing.

I: Why do we need to distinguish the two kinds of data? Because these two kinds of data have different life cycles and different, something like, security requirements; for example, we may want to filter them. And actually, even for the current CSI implementation, I don't think it is a good behavior, because anyway we can delete the volume snapshot and volume snapshot content, and then we just need to back up some metadata inside the volume snapshot and volume snapshot content. So even for the CSI example it's not so necessary to put them into the backup tarball; I don't think that is a good behavior either, but anyway it is the current behavior. Coming back to the backup data: we want to differentiate the user's workload data and the internal data that is there for Velero itself. I think that's a good thing, because we can manage the data in a different way, and when we add a new feature we want to avoid putting unnecessary data in the backup tarball.
I: That is why, that is about the persistence. And the other thing is, once we treat the data as backup data, as data in the backup tarball, it means we need to go through the entire backup workflow, and that is where we come to revisit backup item, backing up in the finalizing phase. If we don't treat that kind of data as backup data, we will not need to call the backup again in the finalize phase. So that's okay.

H: I guess, first, we're actually not doing the whole backup workflow; backup item is just one step in the backup workflow, and that's the only one that's being repeated. But I had a question: for our OADP data mover backup, is that in the user workload namespace, or is that in the Velero namespace?
H: Well, yes, but the equivalent of the snapshot backup for the OADP data mover that we're implementing on top of 1.11, we're putting that in the user namespace.

H: Okay, that's why I'm making the distinction. There's the Velero data mover that we're designing now, which we're going to be implementing post-1.11, but then OADP has the data mover that we're already using, which we're modifying to use the async plugins in 1.11 to build our OADP 1.2, which is going to be released right after 1.11. That OADP data mover, which exists before the Velero data mover, is creating what we're calling a volume snap..., a data mover backup, which is similar, I think, in function to the way that's going to be implemented now, and it's in the user namespace. So that's why the CSI plugin example, yeah, I mean, it may be that we need both aspects.
H: I mean, the thing that I realized initially, because my first thought for finalize was, let's just directly back this up and not use anything from the backup code, but I realized that backup item was doing a lot of things that we needed, and it would have actually been more code to write and more implementation to do that from scratch, because we already have the code. For example, one of the things that backup item does is, once you have the item, it handles pulling the item temporarily from the cluster, and when it streams it out, it already has the code for that, because you pass the tar writer into backup item; it figures out the right file name using the item collector, especially with the GVR thing, because again, Velero deals with the whole preferred version versus other versions, and if you're restoring to a different Kubernetes version than the one the backup was taken from, sometimes you have to do that version negotiation.
I: Yeah, actually, I see one thing, which is that OADP's data mover is creating this kind of CR, I mean the snapshot backup CR, in the user's namespace, so it is more convenient for the OADP data mover to treat it as backup data and handle it in the same way, like the additional items.

I: But actually, when we think about this, I don't think it's a good behavior, because as a backup tool we should avoid adding or modifying anything in the user's workload namespace as much as we can, right? So it's better to put the data into the Velero namespace or somewhere else.
H: Although I think that depends on your use case; I think those are both valid use cases. There are cases where, like you said, you don't want it in the user namespace, you want to put it in the Velero namespace if possible, but there are certain things, and I think snapshot backups have to be in the same namespace, because Kubernetes requires that. So that's, you know, what this is.
I: Actually, for the Velero built-in one, I mean the POC we created previously, we also faced the same problem, and there is some way we can move the internal, intermediate objects, like the volume snapshot or any PVC created from the snapshot, into the Velero namespace, and I think that's a good practice. But I think that's another thing; that's aside from this. When we look at the whole thing, I think, first of all, we should avoid...

I: First of all, the data that needs to be finalized, that is not ready until the async operation is done, is not data we back up from the user namespace; it's definitely some data created by Velero or created by the plugin, right? That is the first thing. And that's only for backup; for restore it's another story, because we don't even need to persist anything for restore, so I think restore is another story; this is just for backup. That data is definitely created by Velero, and if it is created by Velero, why should we call backup item again for this kind of data, since this kind of data has nothing to do with the backup workflow, right?
H: I think that, again, depends on the more general case, because this also has to support, you know, a user writing, for example, a custom asynchronous plugin that does something other than data movement. Are we expecting that user to create things in the Velero namespace? Are we expecting that user to create things in the application namespace? I don't...

F: I think that's something we need to decide. I mean, do we consider it a very general use case that a user will create additional resources during the async operation which have to be put into the backup tarball? Is that a very general or common use case, or is that really for the data mover only? If it's data mover only, I can think of a few workarounds without having to write to the backup tarball again.
H: Well, and again, just to be clear, when I say, you know, update the tarball, we're actually not trying to modify an existing tarball, we're actually creating a second tarball, but that's more of an implementation detail; logically you're updating the backup. But, for example, one asynchronous plugin that would make sense in an OpenShift context, because right now we're doing this in a synchronous plugin and it slows things down:

H: OpenShift has an internal Docker registry, and one of the things that we want to do in backup is to copy those images to some backup registry, and that's an action that might take a while. Right now we're doing it in a regular backup item action, which slows backups down.

H: That's another thing that would be a good candidate for an asynchronous plugin, so we would need to create a CR for that, and again, there's that question: does that go in the user namespace, or does Velero say the officially supported thing to do is that plugins that create things like this should put them in the Velero namespace?
H: It seems to me that, especially if it's a custom plugin that's not in core Velero, adding things to the Velero namespace might not be considered a good idea.

I: For the plugin, I mean... yeah, I think that is another issue we need to discuss further.

H: But again, you mentioned the CSI plugin; you're actually saying, and I think that's out of the scope of this, that the CSI plugin ought to be putting that in the Velero namespace and not in the user workload namespace. But since that's not an asynchronous plugin...
H: If we did that, then we would need to figure out a way to get that, and, Shubham, correct me if I'm wrong, but I believe that one of the ways we're doing the Red Hat data mover is that we actually have that as a plugin written on the volume snapshot, right, not on PVC.

I: When we talk about this question, we say we have the user namespace and we have the Velero namespace; actually, for the plugin data mover or plugin items, we could also have another namespace, created by the plugin itself, right? Well, I think, first of all, as a backup tool we should avoid putting anything into the user's namespace if at all possible. And secondly, we can put it into the Velero namespace.
I: And if there is any problem for the plugin item to be put into the Velero namespace, we can create a new namespace for it. For the CSI case, as you said, we could create the volume snapshot object in the Velero namespace or another namespace, but outside of the user namespace.

H: Yeah, but again, it still seems to me that, even if it's in the Velero namespace, since the purpose of the backup item function in the item backupper is to take something in a cluster and extract it to a tarball...
Whether.
H
Just
saying
that
we
we
we
would
have
to
if,
if
you
don't
want
to
use
that
for
this,
then
we
would
have
to
re-implement.
Probably
you
know
a
good
portion
of
that
functionality
again
for
this
new
special
case,
because
basically
the
only
thing
is
that
in
the
finalized
in
the
existing
PR
we're
using
that
same
because
everything
we're
doing
from
backup
item
is
something
that
we
potentially
need.
You
know
we
have
to
figure
out
the
right
gvr
to
use
and
deal
with
the
preferred
version
stuff.
H
We
got
to
stream
it
out
to
a
turbo
and
in
the
general
case,
even
though
the
the
data
mover
may
not
need
it,
the
ability
to
call
user
plugins
on
everything
we
back
up,
you
know
is
something
that's
available
and
that
will
be
functionality
would
be
losing
if
we
didn't
use
that
I
I
I'm,
just
not
I'm
I'm,
having
a
hard
time.
Seeing
a
downside
to
this,
because
to
do
a
completely
different
workflow
would
be
a
lot
more
code
to
write
potentially.
I: More complicated, yeah. Actually, we have a lot of things that are not clear when we talk about future things, so that is hard to decide right now; that is one fact. And the other thing is, if there are no blocking issues, I would suggest that we take the simplest way.

I: That is, we find a way that is just good enough for the data mover, which will be the first user of the async operation plugin, we use that simple way to implement the PR, and then we discuss further in future for our future requirements. Does that make sense?

H: It's just that, from what I'm understanding, doing it as suggested in the PR, the alternative to what we're currently doing might actually be more complicated; what we're doing now reuses more code that we already have, so the additional code needed is smaller.
F: Yeah, I think whether we call the item backupper again in this finalizer controller is an implementation detail. But we really want to clarify the workflow: whether we need to support persisting additional items that are only available after the async operation finishes. I think that's a very important decision, and that's something we didn't consider when doing the design, yeah.
H: Either way you look at it, you're going to have two tarballs: you have the original backup, and then you have a tarball with everything that was written in the finalized phase. So one of the differences between what this PR does and what you're proposing is that the mechanism you're proposing would completely skip persisting it the first time around, and it would be only in the second tarball.

H: One reason why you might want to go ahead and put it in the tarball anyway, while you're going through everything, is that if you have a scenario with a huge number of Kubernetes resources in a backup but a small number of small volumes, by the time you finish that backup process you might actually be done, because of the last thing we do before we persist the backup:
H: We call Progress on all of the asynchronous operations and get status, because if some of those are already done at that point, we don't have to call status on them later. If all of them are already done, the backup's complete; we don't have to go through any of that waiting two minutes to poll for progress, because we make that first Progress call before we persist the backup the first time.

H: So the one advantage of including those items in the backup itself is that, if the only asynchronous actions you have run fairly quickly, especially if they happen early in the backup process, you may not need to do a second pass; you may already have everything.
H: So if you explicitly exclude that, you're not really saving much time the first time around, and you might be extending the backup; that's one difference. Another difference is whether these items that are created by the asynchronous actions are run through the backup process, including running plugins on them. And I bring up the CSI example just to show that, because when you return additional items, there are async plugins and then, like the CSI plugin, plugins that are not asynchronous.

H: So the fact that additional items returned from regular plugins get plugins run on them is essential. One advantage, from a simplifying point of view, of the way we've written the async plugin support so far is that we use that existing additional items infrastructure to get things added to the backup. There's no new code needed to add those things to the backup, because we already have the additional items infrastructure; the only change that I've added is another Boolean to say "update additional items".
H: So when the finalize runs, it goes through all of those asynchronous operations, and for any of those operations that has that set, we have a list of items that came from that additional items list, and we update those. This allows us to use a lot of existing infrastructure without having to start over.

H: And again, this also means that on the restore side everything is in the same place, whether it came from finalize or from the first pass; once you've created that temp directory and extracted the tarballs, from that point forward everything is treated similarly. You can set your restore priority appropriately, so that if the things relating to your plugin need to be restored first, that can be early in the restore priority, and if they need to be restored later, they can be later in the priority, depending on your plugin and what your items are. I could see that making a difference.
H: So all of that reuses as much of the existing infrastructure as we can. Really, the only new thing we need to do is create that finalize step, which just iterates over the items, and again we're able to use most of the existing infrastructure, because the item collector also figures out all the appropriate versions of the resources to create.
I: Yeah, actually, for the current problem we have a very simple way to solve it. If we say that we want to keep the current code and we don't want to make many changes, we don't need to change anything.

I: Actually, we just need a way to identify the CRs in the finalizing phase that belong to this async operation, and then persist those CRs; that is the simplest way, and we have multiple ways to find those CRs. And if we cannot make all of these things, the whole picture, clear, I suggest we just take the current way, and that doesn't even impact the current situation that the OADP data mover is using.
I: I think the additional items returned by the Execute method, that doesn't even impact that, because if the OADP data mover wants to keep the current behavior, it can continue to use the additional items. We just want to add one more mechanism in the async operation finalize controller, or finalize module, or something like that, to find those CRs, and we persist them.
H: Well, see, that's actually what the finalize does now; it's just that when you say "and persist that", the easiest way to persist something right now, rather than writing something from scratch that does everything we need, is, you know, I made a modification to backup item that will persist it, handling the GVR preferred version stuff, getting it into the right file and generating the tarball; all of that's already handled by the existing code. But...

H: Well, that's the advantage of using it. And again, we're not going through the backup workflow; we're using the backup item function, which is also used by the backup workflow, because it already handles this: you pass in a tar writer.

H: You use the item collector to figure out the set of GVRs that you're working with, and backup item already goes in there, and if it's the preferred version, it writes it out to one directory, and if it's not, it writes it out using that version, and all of that is already handled by backup item; it does all that negotiation.
H: That's why, if you look at the PR, I pass in that finalized Boolean: we go through, we call backup item, but when finalize calls it, it passes in the finalized Boolean, and so it skips things that are not relevant, because we basically want a streamlined version of backup item that calls the backup item actions but doesn't add more additional items; we don't need any additional items at this point, we've already gone through, we already know what our items are. We just say: any additional items we discard, anything relating to async we discard, and we're just calling the plugins. Because the thing is, the async plugins are kind of the exception; what most plugins do is modify the YAML. You get the item in the backup, you add an annotation, you remove an annotation, you set a field, you clear a field.
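A minimal sketch of the kind of "modify the YAML" plugin behavior being described: a backup item action that just adds an annotation to the item it receives. A real Velero plugin implements the plugin framework's BackupItemAction interface; this sketch only shows the mutation itself on an unstructured object, and the annotation key is made up.

```go
// Illustrative only: the typical shape of a backup item action that mutates
// the item's metadata, here by adding an annotation.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// addAnnotation sets key=value in metadata.annotations of an unstructured item.
func addAnnotation(item *unstructured.Unstructured, key, value string) {
	annotations := item.GetAnnotations()
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations[key] = value
	item.SetAnnotations(annotations)
}

func main() {
	item := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "ConfigMap",
		"metadata":   map[string]interface{}{"name": "example", "namespace": "demo"},
	}}

	// Hypothetical audit-style annotation added during backup.
	addAnnotation(item, "example.io/backed-up-by", "my-plugin")
	fmt.Println(item.GetAnnotations())
}
```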
H: You know, you might have some requirement that says, hey, this field has to be removed because the cluster added it. And so, because finalize discards additional items and anything like that, we just call the plugins to get the modifications to the resources, if there are any. In most cases, I mean in the case of the snapshot backup, you're probably not going to have any plugins that run on it, so when you iterate over the actions it'll be an empty list, but there could be some custom plugin.
H: A user might create something like a plugin on the OADP data mover backup; you know, right now we don't have this, but we can imagine the OADP data mover itself deciding, on finalize, that we want that field there; otherwise the finalized version is going to remove something that was there in the original one. And so you end up with this situation where, if the backup finishes early, you have it there, but if it takes longer, you lose it. And again, yeah, sorry, go ahead.
I: Yeah, one question, Scott. We're talking about the async plugin wanting to modify some fields; you mean that it modifies the objects in the backup, I mean the objects in the user's namespace, like...
H: Not the async plugin. What I mean is, the async plugin is probably going to be a plugin on, for example, PVC or PV, that creates this data mover backup. But I'm saying you might have a plugin on the data mover backup itself, and that would not be an async plugin, that would just be a normal plugin; it might modify some field in it.

H: Or another example would be, maybe a user has some audit plugin that runs on everything that goes into a backup, that records some data they need. That would run on all these things; it would add that annotation or whatever.

H: And in most cases these plugins are probably not going to exist, or if they do, they won't be doing much, but the infrastructure is there. So this is one of these cases where, I think, since we're already using backup item anyway, we support it.
H: If you rip that out, you have to document that these are exceptions to plugins and they don't work here, which we could do. I mean, I don't know that we have any essential use cases that require these plugins, but we might be breaking edge cases for some users. But I think the more important reason to use backup item is that it handles the version negotiation and preferred version and the streaming and all of that, and it allows the restore side to not care whether an item came from finalize or from the original backup.
I: Yeah, actually, if the modification can be decided or done very quickly, then in the current workflow of the backup everything is fine. But if we have that in the long-running async operation, there will be a problem, because we will do everything, I mean, just as we mentioned, the recursive thing, everything again in the finalize phase.

I: Because the current backup item actions have a recursive behavior: you call one action and it returns additional items, and...
H: No, that's why additional items were explicitly one of the things, if you look at my PR where I pass in that new finalize Boolean: if you call backup item with finalize set to true, then we ignore, we discard, additional items, to avoid that recursive problem.

H: I think there's a regular way of running it, and then, when you're running on finalize, you say, okay, this is the final step, so we skip certain steps. I think that's less confusing than creating a whole, I mean, because the other option would be to create a whole new call stack where, instead of calling backup item, we call finalize item, and then finalize item copies and pastes all the code. Because, basically, backup item, just to make up some numbers:
H: Say backup item has 300 lines of code in it and we need 150 of those. You can either add that flag and exclude certain things along the way, or you can copy and paste 150 lines of code, and then any time you have a bug, say relating to tarball creation, you have two places to fix it, and that's going to be a lot more error-prone and harder to maintain.
F: Yeah, but again, I think that's how we implement it; first we need to reach consensus regarding the workflow. So, do we want to support async operations returning additional items or not? Originally my answer was no, but I didn't realize there was a requirement for this, yeah.
H: Since async operations are backup item actions, and since the existing v1 API for backup item actions allows additional items to be returned, I think it would be a lot more confusing if you say, okay, you can return additional items, but... Because if you don't do that, then you have to create two ways: you have additional items, which is the standard way of just saying, hey, include this in the backup, and for async, right now, that still applies.

H: We say, oh, include this in the backup, but I added another Boolean that says "update additional items", and if that's set to true, that means finalize cares about those items. That allows us to reuse the existing API for additional items. The alternative would be to create a brand new field, say "async additional items", and then you have to make those decisions about whether you include them in the backup the first time around or not, and if you don't...
H: So when we do the backup and we're calling backup item the first time on the item that triggers the async operation, the Execute in the plugin creates that CR.

H: It passes that CR back as an additional item; in this PR it also passes an operation ID that references the operation, and, if you need this item to be updated in the final tarball from finalize, there's a Boolean that says "update additional items on finalize".
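A rough sketch of the return shape being described: an additional item plus an operation ID and an "update additional items on finalize" flag. The type and field names below are invented for illustration; the actual v2 API in the PR may differ.

```go
// Illustrative only: roughly the information a v2 backup item action's Execute
// would hand back for an asynchronous operation, per the discussion above.
package main

import "fmt"

// ResourceIdentifier points at a Kubernetes object to include in the backup.
type ResourceIdentifier struct {
	Group, Resource, Namespace, Name string
}

// ExecuteOutput is what the plugin returns to Velero.
type ExecuteOutput struct {
	UpdatedItem     map[string]interface{} // the (possibly modified) item itself
	AdditionalItems []ResourceIdentifier   // extra items to pull into the backup
	OperationID     string                 // handle for polling Progress later
	// If true, the additional items are re-backed-up during the finalize phase
	// so that changes made by the async operation end up in the tarball.
	UpdateAdditionalItemsOnFinalize bool
}

func main() {
	out := ExecuteOutput{
		AdditionalItems: []ResourceIdentifier{{
			Group: "example.io", Resource: "datamoverbackups",
			Namespace: "app-ns", Name: "dmb-1234",
		}},
		OperationID:                     "dmb-1234",
		UpdateAdditionalItemsOnFinalize: true,
	}
	fmt.Printf("%+v\n", out)
}
```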
I: So then we need to call the entire backupper again, I mean call back into the backup, to handle the additional item after the finalize phase, right?
H: Oh, well, no, no; that's handled in the original case, before we're finalized. That additional item is passed back, along with the operation ID, in the first run-through. It's a backup item action, so anything in additional items in the backup workflow, we call backup item on. So, for example, the CSI plugin creates a volume snapshot; we then call backup item on the volume snapshot.
H: Or you might have a plugin that pulls in some other cluster-scoped resource. So the additional items is regular backup workflow; that's how we pull it in the first time. But that additional Boolean that we pass back, that says update additional items, tells Velero, because we're also creating this map, this list, of all of the asynchronous operations that we need to check status on, and that map will list those additional items if they have the update-on-finalize flag set to true.

H: So when you finalize, we now have a very limited workflow: we basically want to just create a tarball with only those items in it, the items that are listed in that operations list. We're not refreshing the items that started the asynchronous actions; we're not running backup item on the items whose plugins triggered the async operations.
H: We're running it on those additional items. In other words, we're calling backup item on the snapshot backup or the data mover backup, not on the PVC or the volume snapshot or whatever item the async plugin acted on. These are not async plugins at this point; if we run plugins, we're just calling backup item on those things that were modified by the async operations and that need their final status and annotations and whatever else persisted.
I: Yeah, but that is for a different kind of data, a different kind of item; but in the code implementation you must...
H: The additional items that you returned, that you expect the asynchronous operation to modify, those are the things that you're looking at on finalize. When you call finalize, you take that list from the operation list and say, okay, this list of data mover backups or snapshot backups or some custom image backups, whatever those items are, we want to back up just those items, create a smaller tarball with just those items, and upload that to object store.

H: And using the same file names. That's another advantage of using backup item: you guarantee that the file names used are the same, because those file names include the resource version in them, and the preferred-version handling covers all the version negotiation stuff. So that second tarball uses all the same file names as the first, which means they line up when we extract them on restore.
I: Yes, I think here it's hard to decide; we have some big differences of vision. For the user of the backup async operation, I think the data mover is going to be the first user, and actually, if we only talk about the data mover, there is nothing that needs to be updated, no objects.
H: Yeah, although when we say users, we need to think about the data mover that we're building for Velero 1.12, but also the OADP data mover that we're building on top of 1.11. And I think, Shubham, you said that we do need to have that updated during the finalize phase, because there are annotations that we're saving on those that we need on restore, right?
F: If that's for the data mover only, there's a workaround; for example, we can label it in some way, so that when the async operation finishes, the controller stores everything related to the data mover in a tarball, and you can use that as a reference during the restore, okay?

F: With a label, say. Again, I think we need to decide whether this is a really common use case we want to support when we introduce this async mechanism; I think that's the more important thing we need to decide right now, right?
H: Yeah, I would say that if it's a use case required by the first implementation of it, I think that counts as a common use case, because it's essentially one of two; we only have two concrete use cases right now, so if one of those two needs it, that's when we have to handle it.

H: So right now, what we're doing, yeah, the plugin itself can decide whether it needs that. If the thing that the async plugin creates and that we then monitor is not needed on restore, if whatever is in that Kubernetes object is not needed by Velero to restore it, then the Boolean is false by default, we don't update that on finalize, and we're good. This is just an optional capability.
H: I think, Daniel, you were saying one option to do this as a workaround is that we could create a label, tell plugins, hey, you have to use this label, and then the finalize would just look for the label.

H: I actually think building it into the API is probably safer, because then we have this built in to say, hey, just look at the additional items; those are the things we need. And then the plugin, instead of labeling an item, returns it as an additional item and sets the flag to true on it.
H: Right, yeah, well, we do, yes. And if that Boolean is false, then it's identical to v1: we don't even look at it again on finalize, which means we save it the first time, but we don't look at it again. The API change for v2 is that we have this "update additional items" field; if it's true, we initially treat it identically to v1 additional items and then update it on finalize.
I: Yeah, that comes down to the question I mentioned in the comments: even for the async operation, the additional items returned by the Execute method, as long as they are returned by Execute, they are not part of the finalize case until the operation goes to the finalizing phase.

I: So, if we have a use case that requires modifying the items returned by the Execute method during the async operation, that is one case. If not, we don't need to care about that, because we can still allow the additional items returned by Execute; we just make one explicit restriction, that those items are persisted by the backup immediately, without waiting for the finalizing phase.
H: The issue is that additional items still have to go through the regular process, so they get inserted into the tarball even if you delay uploading the tarball, because the tarball contents are generated item by item, by streaming to that tar writer.

H: So once you get to finalize, you either have to create a second tarball, or you have to recreate the first one by re-reading through the tar stream again. So you still have the problem of creating a second tarball versus the first. And I think delaying writing the tarball is a bad thing, because that makes the existing Velero problem of crashes breaking backups worse, because right now...
I: Well, yeah, actually, we don't need to delay persisting all the backup data; we just want to delay persisting the additional items returned by the Execute method.
H: But again, there's no downside to just including it anyway the first time. In other words, whether you have two tarballs and the updated item is in both tarballs, or you have two tarballs and the updated item is only in the second, the...
F: The end result is the same, yeah. I think there's a lot of detail we need to figure out; I don't think we can clarify all of it in this meeting, and, by the way, it's already midnight in Beijing, so I don't think we can really figure it all out at this moment. But I have one last comment: if there's an async operation and we somehow know that we need to modify the backup tarball in the finalization phase, we should make that tarball explicitly look temporary, so we know that it is incomplete and there is some ongoing...

H: ...change to it. We know that because the backup state is not, you know, as long...
H: Well, we'll see about that, but that's the purpose of the backup phase, because remember we upload the backup metadata too; the backup phase is the key there to tell you whether a backup is complete.

H: In fact, because of that, the backup sync controller, with this PR, does not sync backups until they're at a terminal phase. So if you have a second cluster that shares your backup storage location, that cluster will not pull down this backup until it is in the completed phase.

H: So I don't think the tarball needs to be different, because the other point is that we're only going to upload a second tarball if there are operations that are incomplete...
H: ...at the end of the first pass, the end of that processing, and those operations have indicated to Velero, with that Boolean field, that they need to be updated.
I: Yeah, actually, if we think like this, we don't put it in the same backup tarball and we don't need to split the backup tarball; everything would look quite simple if the finalize phase just adds another tarball, right?
H: But again, I actually think it adds complexity to skip those. I think the original backup should use the existing backup workflow and back up everything. The fact that we're going to be updating it later in finalize with a new tarball is not something that the backup controller needs to worry about, because that adds complexity to...

F: ...the backup workflow, right, right. Because I think what Scott eventually wants is to be able to make modifications to the backup a while after the async operation finishes; that's the workflow, right?
H: Logically it's the same, no; I mean, there are two physical tarballs, because, you know...

I: Okay, let me...
H: In the bucket, yes, but there's no ambiguity here, because it's like restic: when you do a backup, it's incremental internally; restic has the first snapshot and the second one is incremental on top of that. Same here; all these snapshots are, logically, incremental.

H: We may eventually need to do that anyway, to support Velero being more resilient.
H: I think that's inefficient, because that requires you to stream the entire tarball again and then regenerate a new tarball. And, because I actually started doing it that way, it's going to require a lot more changes to backup item, because we're going to have to pass things back from the middle, like file names, because remember, backup item might write two files to the tarball, one for the preferred version and one for the regular version, so we'd have to return those file names back instead of streaming them.
I: Yeah, regarding the tarball, I think my suggestion is just that we do it like the current pod volume backups or volume snapshots: we just add another file alongside the backup tarball.

F: If you do it that way, that other file needs to be very use-case specific, just like pod volumes.
I: Yeah, let me give one example, and forgive me if I take things too far. We have all the backup data, I mean the Kubernetes objects we need to back up in the backup tarball, and now we have other async operations that will persist something later, when the async operations finish after, for example, four hours. And, you know, in future we may have immutability of the repo, and users may prefer that their data becomes immutable as soon as possible. So for that kind of backup data, I mean the Kubernetes objects...
I: ...we can make them immutable very quickly, as soon as the Execute method returns, and for the other part we can wait until the async operation goes to the finalize phase. But that only applies to the async operation data; for the backup data we could still meet users' requirements without waiting.
F: I think Scott's point is that the RIA plugins can also handle the resources in the new tarball.
H: And again, the advantage there is, not even the question of combining the data, but if we're able to extract to the same place, because we've generated the tarballs in the same way, then there's only one place. If we extract these additional items into the same directory structure as the original, then restore doesn't care whether an item came from an async item or a regular item; it looks the same.
F: So, Scott, I don't think we can really reach consensus in this meeting, but I think if you need to do the combine, we should do the combine at backup time so that we have one backup tarball.

H: To me that's an implementation detail; we can even change that after this PR. And that'll also give us an advantage: we'll see how much actual work it is, because it's separate.
H: I just think that would slow things down and make things less efficient, and I don't really see any advantages to doing that, because the end result is the same. We're treating this second tarball as an incremental backup rather than as another full backup, so we don't have to stream the entire thing twice to the backup storage location, even though that was a concern with some...
F: I don't think we can reach agreement here, but yeah, I'm still not quite convinced that we need to do that. First, I didn't realize that we need to handle additional item data that's generated only in the async operation; that's not something covered in the design originally, but now we need to handle it. I think we need to be careful here instead of rushing to get things to work.
H: To re-stream it, I'm going to have to modify backup item to return those file names, because basically what we would have to do to re-stream is open the original tarball stream and read through it file by file, and for each file look at your finalized list to see, do I have a new version of this file? If I do, stream the new version of the file; otherwise stream the old file and move on to the next file. So we'd have to generate some additional metadata in the finalize workflow.
H: That's going to add quite a bit of complexity to the workflow side, on the backup, and you're not really going to save much on the restore side, because right now, with two tarballs, the restore-side changes are limited to, basically, the extract method needing to take a slice instead of a single file. That's it; on restore, you just have to extract the two tarballs, because you extract them in order: you extract the first tarball and then you go through the second tarball, and we assume that everything in the second tarball takes priority, and so we just extract it. There's no additional logic needed other than extract one and then extract two.
H: So all that comparison is going to be on the backup workflow side; if we have to go through and regenerate the tarball, that's where we have to do a lot of comparison.

H: Because golang is very inflexible in the way it handles this: once you've created the tarball, you essentially have to open it as another reader and just go through it, read it file by file as a stream, and look at each file name.
H: You read the header, you check the file name, and then, if it's not one that's updated, you read that file and stream it to the second tarball; and if the header is for a file that matches the list you generated of all the things that need updates, then you stream out the new file and discard the first one.
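A compact sketch of the rewrite-the-tarball alternative just described: copy the original archive entry by entry, substituting any entry whose name appears in an "updated" set. Archive names, entry paths, and the source of the updated bytes are placeholders for illustration.

```go
// Illustrative only: merging updates into a tar archive by re-streaming it,
// the alternative being weighed against simply shipping a second,
// incremental tarball.
package main

import (
	"archive/tar"
	"io"
	"os"
)

// rewrite copies src to dst, replacing the contents of any entry whose name is
// present in updates with the new bytes.
func rewrite(src, dst string, updates map[string][]byte) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	tr := tar.NewReader(in)
	tw := tar.NewWriter(out)

	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		if newData, ok := updates[hdr.Name]; ok {
			hdr.Size = int64(len(newData)) // header must match the new payload
			if err := tw.WriteHeader(hdr); err != nil {
				return err
			}
			if _, err := tw.Write(newData); err != nil {
				return err
			}
			continue
		}
		// Unchanged entry: copy header and bytes straight through.
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := io.Copy(tw, tr); err != nil {
			return err
		}
	}
	return tw.Close()
}

func main() {
	updates := map[string][]byte{
		// Placeholder entry name and content for an item updated on finalize.
		"resources/datamoverbackups/namespaces/app-ns/dmb-1234.json": []byte(`{"status":"Completed"}`),
	}
	if err := rewrite("backup.tar", "backup-merged.tar", updates); err != nil {
		panic(err)
	}
}
```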
H: So, I mean, it's not exactly hard to write the code to do that, but it's going to be a lot of processing, and it's going to add more to the workflow in backup item, where we'd have to make finalize behave differently, because in the finalize case we'd have to store these file names, yes, the file name and the byte stream, the byte slices that we need to stream out, but not actually stream them yet, and then, at the end of processing all those backups, we'd have to open the original tarball and replace those files in it by creating a new tarball. That's all doable, but I don't know that the end result actually helps us much. That was actually my first approach: I started writing that code, and then I realized it was a big mess, and I decided, I thought, that it was cleaner to just add a second, kind of incremental, tarball.
A: Something else, like, purely for that discussion? What do you think? I don't want to interrupt the flow and the whole discussion, but, yeah, obviously it's not going to be decided now, and we have a few other topics to discuss.
A: So if you can squeeze them into the next 10 to 15 minutes, that would be super cool, so everyone can have their time back. I'm sorry for that. So, Anshul, can you brief us on your two topics, please?
J: Sure. We reviewed various approaches and have kind of closed on a final approach for it, and I'm requesting the community to review it further, and requesting Scott and Shubham to give any more comments in the discussion we were already having, so that we can try to close this as soon as possible.

F: For that one, by the way, as for the implementation, I don't think that will go into 1.11, and you're okay with that, right?
J: Yeah, that is fair. If it's going in the next release, that should be fair, but, I mean, from my end I'll try to raise it as quickly as possible. It's okay if you don't cherry-pick it into 1.11, yeah.

H: I guess the only point I think Dan was making is that we have some fairly big decisions that we were just discussing that we need to work out and finish in the next three weeks, and that's kind of the priority right now. We still need to look at this, obviously, but right now we're trying not to have to push any dates back any more, if we want 1.11.
A: Cool, and your next one?

J: Yeah, the second one is a new proposal that I've just started. Basically, as of today in Velero we have a bunch of plugins which do certain substitutions; for example, we have a storage class mapping plugin, and there was a new PR which also brought in an image mapping plugin, where you say, change this image from the current one to another, a test-to-dev kind of scenario. So, basically, what I'm realizing is we have...
J: ...we're having plugins for each specific use case, for, let's say, storage classes, images and whatnot, right? So this is a proposal to basically introduce a more generic way of changing things in the YAMLs, whatever you have in the YAML that has been backed up, during restore, so that all these modifications are easier to do and user friendly in some sense, and we don't end up creating hundreds of plugins for each specific scenario that each user comes up with. No one has reviewed it yet.
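A minimal sketch of the kind of generic, user-configurable substitution being proposed: a rule that targets a field path in a backed-up object and swaps one value for another at restore time. The rule shape and field names are invented for illustration; the actual proposal may use a different format.

```go
// Illustrative only: applying a simple "replace value at this path" rule to an
// unstructured Kubernetes object, as a stand-in for the generic substitution
// mechanism being proposed.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// SubstitutionRule describes one replacement to perform during restore.
type SubstitutionRule struct {
	Path     []string // e.g. spec.storageClassName
	OldValue string
	NewValue string
}

func apply(obj *unstructured.Unstructured, rule SubstitutionRule) error {
	current, found, err := unstructured.NestedString(obj.Object, rule.Path...)
	if err != nil || !found || current != rule.OldValue {
		return err // nothing to do (or a type mismatch surfaced as err)
	}
	return unstructured.SetNestedField(obj.Object, rule.NewValue, rule.Path...)
}

func main() {
	pvc := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "PersistentVolumeClaim",
		"metadata":   map[string]interface{}{"name": "data", "namespace": "demo"},
		"spec":       map[string]interface{}{"storageClassName": "gp2"},
	}}

	// e.g. a storage-class mapping expressed as a generic rule.
	rule := SubstitutionRule{Path: []string{"spec", "storageClassName"}, OldValue: "gp2", NewValue: "standard"}
	if err := apply(pvc, rule); err != nil {
		panic(err)
	}
	fmt.Println(pvc.Object["spec"])
}
```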
F: Yeah, I think the first one I have already touched on, and the second one is regarding Ming's PR about the resource filter. Do we have any outstanding comments, or maybe we can go offline and check. If everyone is okay, I want to push to merge this one, so that we can have the end-to-end flow delivered in 1.11 before FC.

F: So please, I know we've had a lot of good discussion with Yvonne and some other folks on GitHub, so Scott and Shubham, if you have time, please take a look. If there are no additional comments, I think we'll probably try to merge this one this week, the 5773, I think.
A: All right, okay, that's a super quick end, thank you. Anyone else who wants to, I think we have some new folks on the call, do you want to share something about yourself? I can see Matthew for the first time.

A: Sorry for that. All right, I think we're right on time, like half an hour later, but that's cool. For the last thing, folks, like Scott and Daniel, do you want me to schedule something to discuss the backup item action stuff, or will you do that between the two of you, or yeah?
A: Cool. And two things from my side. If someone is going to join KubeCon in Amsterdam in April, please drop me a line, so I can know the number; we're planning some stuff, and if we know the number, we can plan accordingly. And my second thing is, I'm going to approach you, maybe over mail or Slack or whatever: we want to write a blog about Velero and have the different companies that are in the community, like VMware or Microsoft or whoever bases their products or their services on top of Velero, so we can have a good community use case story from the main contributors to the project. So yeah, I'll reach out to you on this one, so we can write up a nice blog about that and show off how some of the different companies work on this.
A: Okay, that was everything from my side. Thank you, everyone. Last call if someone wants to bring something up... If not, okay, thank you, have a great rest of the day, and talk to you in two weeks. All right, bye.