From YouTube: Velero Community Meeting - Jan 18, 2023
A
First, some status updates. We're planning to create the v1.10.1 patch release, and the release will be around January 20th; the issue list can be viewed at this link. The other items are for v1.11, where we are focusing on issue investigation and also some design items.
A
The first one is the data movement: we have created a demo for the built-in data movement, and the next target is to discuss the design and finalize it so that we can create a design PR. The resource include and exclude design item is ongoing, and we have created a draft PR. The other one, the volume type filter, is also ongoing with a draft design.
A
That is it for the work items. The other thing is that the Beijing team will be on holiday from the 21st to the 29th. That's it for status.
As for personal updates, I myself am working on the data mover POC, investigating some issues and fixes, and reviewing some PRs assigned to me for v1.11; some PRs have been submitted. That's it from my side. Over to you.
B
Yeah, I'm also working on the data movement POC, and I recorded a demo to show the whole workflow. The demo shows that we can migrate an app from an AKS cluster to an EKS cluster; if you're interested, you can watch the video. Second, I submitted a draft design for handling backup volumes by resource filter. That's it from my side.
C
I'm working on separating the resource filter into namespace and cluster scope. The design document, I think, is pretty detailed and in discussion; thanks for Scott's comments, and I will modify the document as soon as possible. That's all.
E
I think we're at this point: I had suggested that we may want to focus just on the backup side. We may not need this on the restore side, because the restore side is not quite as complicated in terms of the kinds of filters people use, since you're limited to the backup. That might limit the scope of what we have to do in 1.11, if we just do this on the backup side. The other ongoing comment was the question of whether we should eventually deprecate the old parameters.
E
My thinking is: we create the new parameters now. Once users have had a chance to use them, we can decide whether we want to eventually deprecate the old parameters and just keep the new ones, or whether we need to keep them both in parallel. We just need validation for 1.11 that, for any given backup, if they specify the new parameters they must not specify the old ones, and vice versa. They're mutually exclusive: either the three old parameters or the four new ones; you have to pick one or the other.
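The mutual-exclusivity rule described here can be sketched in Go. This is a minimal illustration, not Velero's actual API: the struct and field names are assumptions standing in for the three old filter parameters and the four new scope-separated ones.

```go
package main

import (
	"errors"
	"fmt"
)

// BackupSpecFilters is a hypothetical, simplified view of the filter
// fields under discussion; real Velero field names may differ.
type BackupSpecFilters struct {
	// old-style parameters
	IncludedResources       []string
	ExcludedResources       []string
	IncludeClusterResources *bool
	// new scope-separated parameters
	IncludedNamespaceScopedResources []string
	ExcludedNamespaceScopedResources []string
	IncludedClusterScopedResources   []string
	ExcludedClusterScopedResources   []string
}

// validateFilters enforces the rule from the meeting: a single backup
// may use the old parameters or the new ones, never both.
func validateFilters(f BackupSpecFilters) error {
	oldUsed := len(f.IncludedResources) > 0 ||
		len(f.ExcludedResources) > 0 ||
		f.IncludeClusterResources != nil
	newUsed := len(f.IncludedNamespaceScopedResources) > 0 ||
		len(f.ExcludedNamespaceScopedResources) > 0 ||
		len(f.IncludedClusterScopedResources) > 0 ||
		len(f.ExcludedClusterScopedResources) > 0
	if oldUsed && newUsed {
		return errors.New("old and new resource filter parameters are mutually exclusive")
	}
	return nil
}

func main() {
	mixed := BackupSpecFilters{
		IncludedResources:              []string{"deployments"},
		IncludedClusterScopedResources: []string{"clusterroles"},
	}
	// Mixing the two styles is rejected.
	fmt.Println(validateFilters(mixed) != nil)
}
```

A backup using only one style passes validation; a backup mixing both gets an error at submission time.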
D
Okay, thank you. I think next we can focus on updating the document to clarify the expected behavior, and after this design is merged, we can implement the backup workflow first.
D
I mean, eventually, if we introduce the new parameters, we want them to provide the same experience for the restore workflow as well. So if time permits, we should implement that too.
E
That makes sense. I was just saying, if we do just the backup side first, that's the bulk of the pain here for users. In other words, there's a real user need on the backup side, where you're trying to decide whether you want to include cluster scope but only certain resources, and all those complicated use cases don't really exist on the restore side.
E
I don't mean we don't want it; I just mean that the need is not as great there. If we do just the backup side, that's probably 80% of the problem; if we get both of them in, that's fine as well. I guess the other point was that we had some discussions last week and this week about what we wanted to change, specifically around the cluster-scoped resource handling: for things that aren't mentioned, we only include the relevant ones. All of those new discussions, starting last week, I don't believe are in the document yet; they're just in the comments.
C
Yes, I think there's nothing else coming to mind regarding this. I think it's already pretty clear.
D
Okay. I really like the last section, where you provide a lot of examples and explain the expected behavior; we just need to make updates to that and clarify them.
A
Okay, thanks. On to the next one, Daniel.
D
I have been working on a few issues assigned to me for 1.11. The IRSA issue looks like a usage problem; I've done some tests on my side, and things seem to work. The only gap is that we don't have a good way for the user to set the service account; I think we don't have an easy way to do that now, so I'm adding an option. If the user sets this service account, it will be put into the spec of the deployment.
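The option being described can be sketched as follows. This is an illustrative Go sketch, not Velero's install code: the option and field names are assumptions; the idea is simply that a user-supplied service account name overrides the default in the deployment's pod template spec.

```go
package main

import "fmt"

// PodSpec is a minimal stand-in for the deployment's pod template spec.
type PodSpec struct {
	ServiceAccountName string
}

// InstallOptions models the new flag being discussed; an empty value
// means "keep the default service account".
type InstallOptions struct {
	ServiceAccountName string
}

// applyOptions copies the user's service account into the pod spec
// only when one was explicitly provided.
func applyOptions(opts InstallOptions, spec *PodSpec) {
	if opts.ServiceAccountName != "" {
		spec.ServiceAccountName = opts.ServiceAccountName
	}
}

func main() {
	spec := PodSpec{ServiceAccountName: "velero"} // default
	applyOptions(InstallOptions{ServiceAccountName: "velero-irsa"}, &spec)
	fmt.Println(spec.ServiceAccountName)
}
```

With IRSA, the annotated service account is what lets the pod assume the IAM role, which is why exposing it as an option matters.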
D
It seems that when Velero triggered the restic command, it fed the us-east-2 region into the restic command, but restic somehow ended up trying to hit the bucket or the URI with us-west-1, which is really weird. I think there may be some bug in restic in certain settings in AWS, but I'm not sure. You mentioned something; if you scroll up a little bit, Scott mentioned there was a bug in restic?
E
I was just trying to think about things to try. As I was suggesting, there might have been a permission issue, because it looked like, with restic and the AWS API, it was trying to redirect you to the right region and we were hitting a permission bug. It looked like we might be missing one of those AWS actions in the permission policy. So the suggestion was: if what I was including here was not in the policy, add it to the policy and try again, because it's possible. But I noticed the comment after mine said that it was already there, so that wasn't the problem.
D
Do you recall the link to the issue? I know it's been a while.
E
Yeah, I guess so. I remember looking into this and saying, okay, this should be working fine, and that's why I was looking into the bucket permission issue, because the redirect was failing. Maybe there was a permissions problem, so there may still be a permission issue involved, because I believe AWS (not Velero) is supposed to be doing a redirect to the right region in this particular use case, and maybe that's failing. But I'm not really sure.
D
Okay, so I'll continue communicating with the author of the issue; if he doesn't respond in a few weeks, let me close this one. I think there may be some path where, if we do not set any parameter, restic will try to guess the default region when communicating with S3. But in this case, from the log, we have explicitly told restic to init a repository in us-east-2.
E
Yes. The item operations JSON PR was one there's been a fair amount of review and comment back and forth on. I think we're at the point now where most of those recent comments fall into a couple of different categories, and I think in both cases they're really outside the scope of this PR; we're kind of talking about next steps.
E
One of them is worth mentioning. There was one issue that came up: if we're uploading the backup, along with this item operations file, to the object storage in the BSL while we're in the WaitingForPluginOperations state, the backup sync controller could potentially pull that backup down on another cluster, and we don't want that.
E
In the current code, we only upload a backup to the BSL when it's complete, so the backup sync controller just grabs all the backups. Now we need to upload the backup metadata earlier.
E
That is, while we're still in WaitingForPluginOperations, we want to upload it, so that if Velero restarts after it goes on to the next backup, we don't lose this backup.
E
As part of that implementation, we need to update the backup sync controller so that if the backup is in the WaitingForPluginOperations state, we don't sync it down. In other words, if you have a second cluster that shares a BSL, and cluster one uploads the backup because it's moving on while just waiting for those operations to finish, then on cluster two, when the backup sync controller runs, it grabs all the backups from the BSL.
E
It
needs
to
add
to
filter,
to
check
the
backup
phase,
and
if
it's
not,
if
it's
not
a
completed
backup,
then
we
don't
want
to
sync
it
yet.
So
that's
one
thing:
that's
not
part
of
this
PR,
the
the
need
to
do
it
came
out
of
the
discussion,
but
that'll
actually
be
part
of
that
implementation
task
where
we
actually
do
the
update
where
we
modify
the
controllers.
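The phase filter described here can be sketched in Go. This is a minimal illustration of the intended behavior, not the actual controller code: the phase constant names are stand-ins for Velero's real API values.

```go
package main

import "fmt"

// BackupPhase mirrors the idea of Velero's backup phases; the names
// here are illustrative.
type BackupPhase string

const (
	PhaseCompleted                  BackupPhase = "Completed"
	PhaseWaitingForPluginOperations BackupPhase = "WaitingForPluginOperations"
)

type Backup struct {
	Name  string
	Phase BackupPhase
}

// syncableBackups models the change discussed for the backup sync
// controller: skip any backup still waiting for plugin operations, so
// a second cluster sharing the BSL never syncs an incomplete backup.
func syncableBackups(all []Backup) []Backup {
	var out []Backup
	for _, b := range all {
		if b.Phase == PhaseCompleted {
			out = append(out, b)
		}
	}
	return out
}

func main() {
	backups := []Backup{
		{Name: "daily-1", Phase: PhaseCompleted},
		{Name: "daily-2", Phase: PhaseWaitingForPluginOperations},
	}
	for _, b := range syncableBackups(backups) {
		fmt.Println(b.Name) // only the completed backup is synced
	}
}
```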
E
The second set of discussions, which has come up more recently, is about the possibility that, depending on what design we end up going with for the Velero data mover, there may be some need for plugin-specific metadata to be added to the interface.
E
The use cases we'd identified outside of that didn't need it yet, and the design we approved back in December didn't include it. So my thinking here is: we need to finish that discussion around the data mover, and if it turns out that we need some additional changes to this interface, then we need to create a separate design PR.
E
In that PR we'd decide what we want there, and once it's approved and implemented that way, the current work based on the previously approved design can continue without being blocked. But we realize there may be some follow-on work we need to do, depending on how we end up deciding what we want for the data mover upstream in Velero.
E
If we go with one option, we'll need some additions to this interface soon, possibly even in this V2, and in that case it'll again be a follow-on PR that can be done. In other words, if we implement this as designed now and then do a follow-on, smaller PR with that additional change, that's going to be a lot easier to fit into the review process without risking the 1.11 date. So my thinking is that the current code should be based on the previously approved design.
E
So my hope is to get this PR approved and merged soon, hopefully before the 21st when the Beijing office goes on holiday; to capture any needs relating to the data mover and this additional plugin-specific metadata in a separate design PR that can be started; and then I can, next week or whenever, start working on that backup controller modification to actually implement the waiting logic and all of that. I think the issues that were brought up were good issues.
E
I
think
we,
the
first
set
the
category
that
I
was
talking
about
a
couple
minutes
ago
about
backup
scene
controller
and
making
sure
that
you
know,
if
you
sync
to
another
cluster
or
whatever,
that
that
cluster
doesn't
start
trying
to
check
status
of
operations.
That's
that's
actually
an
important
point
that
I
think
was
missed
in
the
original
discussion
and
when
I
do
Implement
that
in
the
controller
workflow
that
needs
to
be
taken
into
account.
E
We
need
to
have
that
discussion
around
data
mover
and
then
draft
to
design
PR
once
we
know
exactly
what
we
need,
and
so
we
can
say:
okay,
here's
an
additional
field
that
we
need
to.
For
example.
If
this
is
you
know
what
we're
talking
about
needs
to
be
returned
in
the
progress
which
then
Valero
will,
you
know,
add
to
the
progress
operation,
progress
struct?
Maybe
it's
a
config
map,
a
map
or
something
like
that,
but
once
we
identify
the
need,
you
know,
then
the
requirements
we
can
get
a
design
PR
in
place.
E
So
we
can
see
what
structure
we're
going
to
need
to
modify.
Do
we
need
to
modify
any
of
the
the
plugin
API
calls
as
well
and
kind
of
work
through
that
as
a
follow-on
VR,
and
if
this
is
all
implemented
initially,
then
that
following
work
will
actually
be
relatively
small,
because
it'll
just
be
take
these
things
that
we've
already
implemented
add
one
more
field
here,
one
more
field
there.
E
You
know
whatever
logic
we
need,
and
that
can
be
some
follow-on
work,
that
if
we
need
it
for
111,
we
can
include
it
in
there
and
if
it
turns
out
it's
not
something
that's
needed
until
112,
then
it
can
be
a
V3.
You
know
API
change.
A
Okay, so yeah, for the two proposals, I think we can discuss this further in the data mover design discussion.
A
For the current topics, I think for most of them we haven't come to a conclusion yet, but I believe we will finish all the topics and can merge the PR within this week.
E
Now that we've had those discussions around the follow-up work we need to do, let's go back and look at this specific PR and figure out whether what's in it is good as is. Can we approve it and merge it this week, hopefully, or make changes as necessary if there's anything that needs to be fixed? Then I can get started working on the backup controller side.
A
I also agree that even if we want to add some additional design to this JSON persistence, we will create another design, and the current design in v1.11 is not affected, because the data movement will be implemented in the next release, not the current one.
E
Right, that makes sense, because basically this is going to be a set of changes that will be relatively small compared to all this work. We change the design, and then, once we've approved and merged it, we can decide whether this is something we need in 1.11 and whether we have time to do it in 1.11.
D
So
so,
as
for
the
backup
scene,
controllers,
Behavior
I
think
that
should
also
be
somehow
reflected
in
the
design
right.
The
decision
regarding
yeah
like
like
they
are
waiting
for
the
a
plugin
operation
or
if
in
progress,
the
backup
will
not
be
synced
I.
E
Yeah, that's one thing that's not there; you're right. For that part I can do a follow-on design PR that will basically be an edit to the overall design. We have the actual async design that we approved in December, and I made that small design PR change later in December that we approved.
E
I can do a similar small change here to basically call out the backup sync controller, because backup sync wasn't addressed in that design; we didn't think about it, and now that this came out with this PR, we realized, hey, we want to make sure we don't sync these things until they're completed. I can add a section to the document about it: we have sections for backup workflow controller changes and restore controller changes, so I just need to add a section for the backup sync controller.
E
Small
section
that
will
spell
out
basically
the
backup
scene
controller,
will
filter
out
backup,
sitter
and
waiting
for
plug-in
operations
and
will
not
sync
those
yeah.
D
But
for
the
just
to
clarify
for
the
complete
a
completed
backup,
do
we
want
to
sync
the
item
operations,
Json
or.
E
Also,
well,
we
well,
we
don't
think
any
of
those
files
because
see
when
the
backup
scene
controller
just
pulls
down
the
backup
CR
metadata,
okay,
because
because
the
file
storage
also
includes
you
know,
rustic
backups,
and
it
includes
log
files
and
several
other
things.
None
of
those
things
get
synced
the
backup
scene.
Controller
just
creates
the
Valero
backup
CR,
oh
and
I,
think
it
does
create
pod
volume
back
yeah,
backup,
CRS
and
pod
volume
backups.
Those.
E
Sync,
everything
else
goes
through
the
download
request,
so,
for
example,
if
I
do
Valero
describe,
that
makes
it
download
creates
a
download
request,
which
then
pulls
data
from
the
BSL.
So
if
I
do
Valero
logs
through
CLI,
that
does
a
download
request
to
get
those
logs
and
we
only
pull
those
down
as
needed
from
the
API.
E
So
these
new
Fields
I'm,
adding
in
this
PR
to
add
to
the
the
download
request
metadata
that
says
we
can
also
pull
down
item
operations
that
also
will
not
be
synced
as
part
of
the
backup
that
will
only
be
pulled
down
through
a
download
request.
E
You
know,
which
is
needed
from
which,
which,
like
Florida
described,
for
example,
would
fall
down.
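The download-request mechanism being extended here can be sketched as follows. This is an illustrative Go sketch, not Velero's real types: the kind names and the object-store key layout are assumptions showing how a new target kind for item operations would slot in alongside logs and contents.

```go
package main

import "fmt"

// DownloadTargetKind loosely mirrors Velero's DownloadRequest target
// kinds; KindBackupItemOperations stands in for the new kind this PR
// adds. The exact names in the real API may differ.
type DownloadTargetKind string

const (
	KindBackupLog            DownloadTargetKind = "BackupLog"
	KindBackupContents       DownloadTargetKind = "BackupContents"
	KindBackupItemOperations DownloadTargetKind = "BackupItemOperations"
)

type DownloadRequest struct {
	Backup string
	Kind   DownloadTargetKind
}

// objectKey sketches how a request maps to an object under the
// backup's prefix in the BSL; the key layout here is illustrative.
func objectKey(r DownloadRequest) string {
	suffix := map[DownloadTargetKind]string{
		KindBackupLog:            "-logs.gz",
		KindBackupContents:       ".tar.gz",
		KindBackupItemOperations: "-itemoperations.json.gz",
	}[r.Kind]
	return "backups/" + r.Backup + "/" + r.Backup + suffix
}

func main() {
	// e.g. what a `velero describe` for backup "b1" might request
	fmt.Println(objectKey(DownloadRequest{Backup: "b1", Kind: KindBackupItemOperations}))
}
```

The point of the design is that these files live next to the backup in the BSL but are only fetched on demand, never synced by the sync controller.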
E
So that's not a concern; you're right, we don't need to sync that. We just need to sync the backup metadata, and that's not changing. This PR is really just modifying the API to add those new fields to the download request and to the object store so that Velero is able to pull them down when it needs to.
E
The
syncing
is
only
referencing
the
backup
itself,
because
I
I
was
actually
looking
in
the
backup
scene
controller
after
this
PR
discussion
to
make
sure
I
understood
what
it
was
doing
and
basically
what
the
backup
scene
controller
does.
Is
it
grabs
a
list
of
all
the
backups
from
the
BSL?
It
filters
out.
D
Yeah
and
another
common
regarding
the
B2,
because
you
mentioned
I,
haven't
followed
up
the
review
of
the
pr
very
closely,
but
is,
in
the
other.
We
considered
some
cat
in
the
V2
background,
restore
API,
I.
Think
the
the
reason
for
us
to
work
on
the
B2
backup
API
is
that
we
want
to
support
the
backup
item.
Action.
Plugin
API
is
to
support
the
data
movement
yeah.
So
if
there's
a
gap
between
the
V2
Bia
and
the
the.
E
There might be. The thing is, because the data movement design is still ongoing, it sounds like from the comments there might be a gap; we might need some additional information. We're not sure yet, and that's why I'm saying this.
E
1.11 will be on V2, so even if it's a separate design PR and a separate implementation PR, if it's all done before we release 1.11, it'll all go into V2; it'll just be a modification of the existing V2. That will actually be a lot less work than creating a V3 later, because then we'd have to create all those new directories and new files and adapters.
E
If we decide we don't need it for this release, only for the next release, and we make it a V3, that's actually going to be more development work, though on a looser schedule, because then we're going to have to make all those V3 directories and V3 copies of all of this stuff.
E
B2,
but
it's
still
going
to
be
a
separate
design,
PR
and
then
an
implementation.
That's
going
to
follow
up
with
this
that
way,
I'm
not
blocked
on
implementing
what
we've
already
defined.
Meanwhile,
once
we
have
a
definition
for
what
we
want
for
it,
if
we
decide,
we
need
these
additional
fields
for
data
mover
movement,
but
then.
E
Know
we
don't
need
a
V3
as
long
well,
I
guess,
there's
two
things.
First
of
all,
if
we
get
it
done
in
this
release,
we
definitely
definitely
don't
need
to
be
three.
If
we
do
it
in
the
next
release
we
may
or
may
not.
E
One
thing
we've
learned
in
the
past
is
that
if
you
already
have
a
struct
that
you're
passing
in
this
part
of
the
API
and
you're,
adding
a
new
field
to
the
struct,
that's
optional,
you
don't
necessarily
need
to
create
a
V3
because
and
if
you
add
an
optional
Fields,
then
the
old
code
that
doesn't
reference
that
field
is
backwards
compatible.
So,
for
example,
several
releases
ago
before
we
had
blood
conversioning,
we
ended
up
modifying
the
restore
item,
action,
execute
output,
struct
to
add
some
optional
fields
to
support
something
new
we
were
working
on.
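The optional-field pattern can be sketched in Go. This is a minimal illustration, not the real Velero plugin API: the struct, field, and helper names are assumptions showing why adding a nil-able field leaves old callers untouched.

```go
package main

import "fmt"

// ExecuteOutput sketches the pattern described above: an existing
// struct in the plugin API gains a new optional (pointer) field. Old
// plugins that never set or read the new field keep working unchanged.
type ExecuteOutput struct {
	UpdatedItem string
	// OperationID is the "added later" field; nil means the plugin
	// predates it, so nothing breaks for older code.
	OperationID *string
}

func describe(o ExecuteOutput) string {
	if o.OperationID == nil {
		return o.UpdatedItem + " (no async operation)"
	}
	return o.UpdatedItem + " (operation " + *o.OperationID + ")"
}

func main() {
	old := ExecuteOutput{UpdatedItem: "pod/nginx"}
	id := "op-42"
	newer := ExecuteOutput{UpdatedItem: "pod/nginx", OperationID: &id}
	fmt.Println(describe(old))
	fmt.Println(describe(newer))
}
```

Because the zero value (nil) means "not set", code compiled against the older struct definition behaves identically, which is exactly the backwards-compatibility argument being made.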
D
I see, but my point is: if we end up releasing V3 in 1.12, hypothetically, there will be no point in implementing against V2. I know that when there's a breaking change we're supposed to bump the version, but the issue is that if we release V2 in 1.11 and we know we're going to make changes to that plugin, we should somehow tell users that we may introduce new changes to it and that it may not be ready yet.
E
We
could
do
that
if
we
do
I
mean
I,
don't
think
I
think
our
plugin
versioning
process
hasn't
really
specified.
That
I
mean
right,
I
think
the
key
here
is.
If
we
say
this
is
a
feature:
that's
not
GA,
that's
you
know
considered,
you
know
beta
or
you
know
not
stable,
because
if
we
say
V2
is
not
GA
users
should
not
rely
on
it
long
term.
E
So we can have that discussion, and that may be the right answer, but let's first get the design together, because it may turn out that we're just adding an optional field that won't break backwards compatibility.
E
If we anticipate needing a V3, then we can have that discussion: do we need the overhead of a V3, or is there a way to release V2 without the expectation of stability for that one interface? I don't know if we have a way of doing that cleanly; it's going to be messy from a user and customer communication point of view if we do, so hopefully we don't need to. That's why I'm just saying, let's work on the existing implementation.
E
G
E
Let's
also
get
that
discussion
going
in
the
context
of
data
movement
in
the
context
of
what
changes
we
need
to
the
backup
item
action
once
we
know
what
those
changes
are,
then
we
can
say:
okay,
this
is
a
minor
enough
change
that
we
don't
think
we
would
need
a
V3
anyway,
we're
good
we're
safe
and
if
we
think
it's
something
that
could
break
backwards
compatibility,
then
we
want
to
seriously
consider
first
of
all,
can
we
get
it
into
111,
because
if
we
can
make
it
into
111
we're
again
we're
covered.
E
Only
situation
where
we
have
this
messy
question
of
do
we
modify
V2
versus
Omega.
V3
is
if
we
have
a
breaking
change
that
doesn't
make
it
by
111,
then
we
have
to
have
to
have
that
conversation,
but
if
it's
not
going
to
be
breaking
change,
we're
good
and
if
we
can
get
it
into
111
we're
good.
D
Yeah,
so
so
I
think
one
option
is
that,
but
for
better
what
what
I
really
prefer
is
that
whatever
we
decide
to
support
as
biiv2
Ria
V2,
we
should
make
sure
they
work
with
the
that
movement
implementation
in
the
future.
E
And
that's
why
I'm
saying
we
need
to
have
that
design
discussion
around
what
chance
you
know.
In
other
words,
if
the
data
movement
we're
just
movement
design,
discussions
determines
that
we're
missing
something
in
the
currently
approved
biav2
rav2.
Then
we
need
to
pretty
much
have
that
next
discussion
of
okay,
what's
missing,
what
do
we
need
to
add
and
and
again
if
what
we're
adding
is
just
oh,
our
are
you
know,
operation
progress,
structs
need
some
new
fields
to
cover
it
and
adding
those
new
Fields
is
all
we
need
to
cover
it.
E
Then
we
just
need
to
write
that
PR
and
get
it
in,
because
that's
a
pretty
small
change,
there's
no
reason
not
to
do
it
now.
I
I'm,
not
saying
I,
don't
want
to
do
it
in
111
I'm,
just
saying
I
don't
want
to
block
this
current
work
based
on
something
we
haven't
decided
yet
yeah.
E
As soon as we know what we want, let's implement it.
D
Well,
what
I'm
trying
to
propose
is
that
maybe
we
should
just
supposed
to
be
in.
In
the
end
we
found
there's
a
gap
yeah,
it's
possible.
We
make
some
change
in
the
coast
so
that
E2
is
hidden
from
the
end
user.
Somehow
we
told
them,
it's
not
ready.
So
it's
you,
you
won't
be
implementing
it
because
it
won't
work
end
to
end
either
way
so
yeah
that
may
be
a
safer
place.
Then
we.
E
Okay, that's fine too. It just still needs to be functional, because regardless of where we're going with the data movement in Velero itself, I know on the Red Hat OADP side we are going to be relying on this new API in 1.11.
E
D
E
D
E
We
release
those
together,
you
know,
in
other
words,
whatever
IDP
we
released
with
112
will
have
the
data
movement
plugins
released
with
it.
So
because
we're
controlling
those
releases
together,
I
think
for
our
purposes
the
backwards
compatibility.
There
is
less
of
a
concern
in
general
for
any
user
of
Valero.
E
You
know
you're
writing
a
plug-in
and
you're,
not
tying
your
release
to
Valero
releases.
That's
when
the
backwards
compatibility
matters
a
lot
more
because
you
don't
want
someone
upgrading,
just
Valero
and
all
of
a
sudden,
their
plugins
breaking
and
that's
why,
like
you
said,
if
we
anticipate
having
to
change
the
V2
plugins,
then
we
want
to
make
sure
we're
not
advertising
those
as
as
stable,
okay.
So
on
the
flip
side,
if
we,
if
we
do
decide
that
the
changes
we
need
are
small
enough,
then
we
can
just
get
them
in
then.
D
So
it
seems
that
you
were
making
a
change
to
your
IDP
AP
plugins
in
parallel
with
the
V2
development.
So
could
you
share
me
the
date
for
o
ADP?
You
need
this
V2
API.
So
is
there
normally
a
Time
window
between
the
V
1.11
and
the
oadp
we're
consuming
or
Bolero
1.11.
E
First
of
all,
we
have,
we
haven't,
actually
started
updating
the
plugins
to
use
the
V2,
yet
just
because
we
only
got
the
V2
plug-in
merged,
we're.
E
But I think in general what we try to do is time our releases so that if 1.11 comes out at a certain time, the OADP release that is based on 1.11 would come out some amount of time after. Wes is on the call; you might have a better idea of what we generally want to target here. I know for 1.11 it may not match that, but the question was: how do we normally handle this?
E
I don't have a definite answer to that question. I do know that we are considering internally the fact that there may be a delay, hopefully not more than a couple of weeks, and we understand that an upstream release target date is just a target date.
E
If,
if
you
hit
a
block
or
bug
you
know
as
you're
doing
final
QE,
you
can
clearly
we
have
to
fix
the
bug
you
know,
and
so
so
all
of
that
is
something
that
we're
aware
of.
E
Okay,
no
I,
I,
don't
I,
don't
know
I,
don't
have
a
more
definite
answer
right
now
for
you,
but
that's,
but
but
these
are
all
issues
that
we're
considering
and
and
that,
but
that's
also
why
I
want
to
make
sure
that
you
know
we
don't
get
blocked
and
delayed
in
implementing
the
parts
of
this
design
that
we've
already
agreed
on,
while
we're
still
trying
to
work
out
some
final
details
that
we
also.
E
In
so
of
course,
but
I
I
think
we're
I
think
we're
good
here.
As
long
as
we
consider
that
you
know
this
is
a
feature
again.
You
know,
we've
already
had
several
PR's
there's
going
to
be
one
for
backup
controllers,
one
for
ReStore
controllers.
So
this
is
not
the
kind
of
thing
you
want
to
put
one
huge
PR
for
everything
it
wants.
Anyway,
it's
easy
to
review
and
the
smell
like
chunks.
E
So
this
new
bit
of
changes
that
we
need
that
we're
talking
about
about
adding
a
new
field
or
whatever
we
need
to
do
there
that'll
be
a
separate
design,
separate
PR
that
will
be
relatively
small,
we'll
get
that
approved.
So
hopefully
that
can
all
still
make
it
in
the
same
time
frame,
but
because
we're
doing
that
as
a
separate
step,
if
it
if
it
takes
us
longer,
for
example,
to
decide
what
we
want
there
and
we
decided
that
we're
not
going
to
put
that
in
112..
E
E
But
I
think
if
we
can
agree
on
what
we
need
here
relatively
soon,
I,
don't
think,
there's
a
real
risk
here,
because
you
know,
if
we're
really
just
talking
about
adding
a
new
field
to
a
struct
that
allows
plugins
to
add
their
own
data
structures
in
some
kind
of
config
map
or
something
you
know.
That's
not
the
kind
of
thing
that
should
seriously
delay
release.
You
know
we're
not
talking
about
major
workflow
changes.
I
As we draw closer to the 1.11 release, it sounds like it would be a really smart idea to review the V2 API against what's being planned for the native data mover and to really be sure that we have everything. And we would be totally understanding of the release slipping a couple of weeks to save months of work later in another release; that's totally fine.
E
The next part is relatively quick in comparison. The RestoreItemAction implementation PR, 5569, is now ready for review; it was dependent on the BackupItemAction one getting submitted first, which was merged already. That basically does the restore side in terms of the API changes needed for this as well. So I would say, between these two, again:
E
E
You
know
we
obviously
need
to
get
it
reviewed
relatively
soon,
but
I'm,
not
that's
not
going
to
block
me
until
I
finish
the
backup,
workflow
and
then
move
on
to
the
restore
workflow.
So
between
those
two,
the
item
operations-
Json,
that's
already
had
some
review-
is
the
priority
to
get
merged
first,
but
it
would
be
great
to
get
their
startime
action
reviewed
as
soon
as
possible
as
well
just
so
that
that's
off
the
radar
and
I
can
focus
just
on
the
next
steps.
The
third
one
is
a
small
one.
E
That's
the
Valero
plug-in
example.
That
basically
adds
a
V2
back
of
Action
plugin.
It
doesn't
do
anything
with
the
new
the
new
functions.
Yes,
it
doesn't
create
any
asynchronous
actions,
because
there's
no
controller
logic
that
implements
that.
Yet
this
the
existing
plugins
were
already.
Basically,
the
execute
call
was
just
creating
a
log
message
so
that
you
could
see
when
you
add
a
plug-in
it
worked.
It
ran
on
your
items,
I
added
a
second,
so
there's
a
V1
and
V2
plug-in.
E
So
this
this
allows
you
to
show
that
when
you
add
this
plug-in
image
to
Valero,
both
the
V1
and
the
V2
plugins
both
work,
they
both
execute.
You
see
log
files
or
both
because
of
that
adapter.
That's
in
the
API
that
allows
V1
plugins
just
to
still
work,
even
though
Bolero
was
you
know
using
V2
now,
okay,
so.
E
This
is
just
the
the
you
know.
The
plugin
example
allows
people
to
test
with
this.
So
that's
again,
that's
you
know
a
good
one
to
get
get
merged,
get
reviewed,
but
it's
not
blocking
me
just
because
you
know
if
I
need
to
if
I
need
to
test
with
this
I
can
just
use
the
pr
you
know
Branch.
So
nothing
else
depends
on
that
being
marriage
other
than
you
know,
someone
else
being
able
to
access
it
easily
to
test
things
with.
A
Okay, yes, it's a great example. For the V2 one, I'm also reviewing it, and I think it can be merged very soon. Anyway, for all the progress monitoring items and PRs, I will try to get them into a good state before the holiday.
A
Yeah
anything
I
didn't
know:
I
react
very
quickly.
Scott,
you
can
p
me
directly
in
the
channel
and
I
will
have
a.
E
Great
fight
yeah,
if
we
can
get
these
marriage
before
the
holiday
that'd
be
great
and
then
hopefully,
when
you
get
back
from
holiday,
I'll
have.
A
And the next one, please.
G
Oh yeah, I'm updating the restore process for the metadata to include the finalizers and managedFields. In these changes, we will change the way the metadata is restored: we'll exclude some of the fields explicitly, such as uid, ownerReferences, and resourceVersion, and all other fields will be included by default. After this change, I think we will not lose important metadata when no filter is added for the metadata parts.
H
Yes,
I'm
working
on
reproducing
and
debugging
a
QVC
random
failure
in
Natalie
as
highly
it
should
have
some
deal
with
the
low
I
o
performance
of
network
storage
or
other
resources,
and-
and
it's
still
on
in
under
investigating
that's
all
from
my
side.
A
Okay,
thanks
now
that's
for
the
Fatal
update
and
for
the
discussions,
the
first
one,
I
I,
think
away.
We
really
have.
We
don't
have
time
to
discuss
this
in
the
current
meeting,
so
but
I
think
it
is
better
to
start
the
meeting
and
soon.
So
what
do
you
guys
think?
Maybe
we
can
set
up
a
separate
meeting
under
the
at
the
moment,
discussion,
for
example
tonight
in
baking
time
so
I.
A
And yeah, then, what about tomorrow?
A
Tomorrow. So what about the time?
E
Would you do it tomorrow at the same time as today's meeting, or tomorrow evening your time?
E
Tomorrow evening my time, I'm available. That will work for me, yeah, yep.
A
There
is
any
issue
for
the
oximeter
demo,
the.
E
The demo recording video, yeah. We couldn't get access to it; it gave us permission denied.
A
Okay, okay.
E
So it'll be the same time: 8 A.M. Beijing time tomorrow, 7 P.M. Eastern US time tomorrow.
E
We need to start the discussions and make sure we start getting these things addressed, but let's make sure we get everything correct rather than, like I said, rush. This is something that, for the most part, we're not going to be implementing in 1.11, but we want it to be well defined, and, to the previous point, we want to make sure any API changes that come out of this discussion can hopefully make it into this release cycle. Correct?
A
Yes, we want to make it in parallel with the design, and we will finalize... well, I mean, we created the PR before we finalize everything. Yeah, then that's it for this item. The next one is from Chumin. So, Chumin, this is for the volume filter design, right?
B
Currently, Velero doesn't have a flexible way to filter volumes. There are two scenarios. The first one is that users want to back up, or skip backing up, some volumes in different namespaces. Currently, users can use the opt-in or opt-out approach one by one, or use label selectors, but if there are a lot of volumes, that will be a lot of work.
B
The second scenario is that currently Velero is not accurate enough for users to choose one specific volume, other than by patching the embedding pod with labels or annotations. So I think we should have a way to let the user select one exact target volume with a specific resource selector; that would be very useful.
B
If users build on top of Velero for secondary development, they can use their own console or UI, with a drop-down list to choose the resources to back up. Currently we have a lot of filters, like included namespaces, label selectors, or annotations and labels, but they are not applicable for handling volumes.
B
We need a general way to filter volumes. Currently we are working on the backup process, and, time permitting, we can consider the restore process as well. Currently we just handle volumes, not other resources, and the plan covers the volume attributes that we support. There are two user cases. The first one is that the user wants to skip PVs that match the following requirements.
B
Another scenario is that the user has two PVs, one for the DB and one for the logs, and they only want to back up the DB data, so they need a specific rule selector to accurately select the volume to back up. Our high-level design is that we introduce a new flag, --from-file, when running the velero backup create command; it will read one JSON file, and the JSON file includes all the defined rules for the current backup.
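As a rough sketch, rules from such a JSON file might be evaluated against the two-PV scenario above like this. The schema and every field name here are assumptions for illustration only; the real format is still a draft design under discussion.

```python
import json

# Hypothetical rules file for the draft --from-file flag discussed above.
# The JSON schema and field names are made up for illustration; the real
# schema has not been finalized.
rules_json = """
{
  "includedVolumes": [
    {"group": "", "resource": "persistentvolumeclaims",
     "namespace": "app", "name": "db-data"}
  ],
  "excludedVolumeAttributes": {
    "storageClassName": ["slow-nfs"]
  }
}
"""

def volume_selected(pvc: dict, rules: dict) -> bool:
    """Decide whether a PVC should be backed up under the rules."""
    # attribute-based exclusion (e.g. skip a whole storage class)
    attrs = rules.get("excludedVolumeAttributes", {})
    if pvc["spec"].get("storageClassName") in attrs.get("storageClassName", []):
        return False
    # group/resource/namespace/name selection of one exact volume
    included = rules.get("includedVolumes", [])
    if not included:
        return True  # no explicit include list: everything else is in
    return any(
        r["namespace"] == pvc["metadata"]["namespace"]
        and r["name"] == pvc["metadata"]["name"]
        for r in included
    )

rules = json.loads(rules_json)
db = {"metadata": {"namespace": "app", "name": "db-data"},
      "spec": {"storageClassName": "fast-ssd"}}
logs = {"metadata": {"namespace": "app", "name": "log-data"},
        "spec": {"storageClassName": "fast-ssd"}}
# Only the DB volume matches the include rule, so only it is backed up.
```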
B
And for selecting one specific resource, we introduce a GVR-style resource filter: we use group plus resource plus name to choose one specific Kubernetes resource. The other way is filtering on some specific volume attributes, where we follow the defined data structure of the PersistentVolume spec.
B
Take CSI here, for example; we can look into the inner data structure.
B
So if we put the rules directly into the backup spec, it will become more and more complicated, and the CR size will get bigger and bigger. So we want to store the JSON rules in a ConfigMap, and have the backup CRD reference that ConfigMap. The ConfigMap will look like this, and it is equivalent to a generated backup command like this.
B
And the ConfigMap name is a fixed prefix plus the backup name, in the Velero namespace. The lifecycle of the ConfigMap: if we use the velero backup create command with the --from-file flag, then we will generate the ConfigMap, and if the backup is deleted, the ConfigMap should be removed. The resource filter ConfigMap should also be persisted into object storage, and should be synchronized automatically when it is restored.
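The ConfigMap naming and contents just described might look roughly like this. The "resource-filter-" prefix, the label key, and the "rules.json" data key are all placeholder assumptions, since the draft has not fixed an exact convention.

```python
# Sketch of the per-backup rules ConfigMap described above.
# The prefix, label key, and data key below are made-up placeholders.

VELERO_NAMESPACE = "velero"

def filter_configmap_name(backup_name: str) -> str:
    # one ConfigMap per backup, derived from the backup name
    return f"resource-filter-{backup_name}"

def build_filter_configmap(backup_name: str, rules_json: str) -> dict:
    """ConfigMap manifest holding the JSON filter rules for one backup."""
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {
            "name": filter_configmap_name(backup_name),
            "namespace": VELERO_NAMESPACE,
            # labeling with the backup name lets a controller find and
            # remove the ConfigMap when the backup itself is deleted
            "labels": {"velero.io/backup-name": backup_name},
        },
        "data": {"rules.json": rules_json},
    }

cm = build_filter_configmap("nightly-db", '{"includedVolumes": []}')
```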
B
Because the resource filter ConfigMap is only referenced by the backup CR, the rules in the ConfigMap are not intuitive, so we need to integrate the rules into the output of the velero backup describe command, making it more readable. For the compatibility part, we already have the existing kinds of resource filters, and for our new volume resource filters there will be some situations that we should consider.
D
I have a few comments. I don't think we want this ConfigMap to control all the resources; I think initially we may just focus on the PVs and the volumes, because otherwise there will be conflicts and this work will be really complicated. And I think, in detail, the PV and the volume sections may conflict with each other, right?
D
You may choose in the PV section to include some storage class, but in the volume section you exclude it, or vice versa; what's the logic between these two sections? We might give more concrete examples of what will happen, to clarify the design. And as for the referencing, I think it would be better to add a field in the backup CR to explicitly reference this ConfigMap, rather than use a naming convention, which is a hidden contract, right?
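The explicit-reference option suggested here could look roughly like this; the "resourceFilterConfigMap" field name is hypothetical, shown only to contrast with the hidden naming-convention approach.

```python
# Hypothetical backup spec with an explicit reference field, instead of
# the controller deriving the ConfigMap name from the backup name.
backup = {
    "apiVersion": "velero.io/v1",
    "kind": "Backup",
    "metadata": {"name": "nightly-db", "namespace": "velero"},
    "spec": {
        # made-up field: an explicit pointer keeps the link visible in
        # the spec instead of relying on a naming convention
        "resourceFilterConfigMap": "nightly-db-rules",
    },
}
```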
D
So, generally, the goal is to allow users to set more flexible filters in terms of PVs when they are creating the backup, rather than asking them to annotate or add labels to their existing workloads. I think we got several issues from customers of VMware, or users, saying that they don't want to touch their workloads, but they also want to implement flexible rules for backing up PVs.
D
That's why we want to introduce something like this: so that we can provide more flexibility for users to select a PV to snapshot, but do not require them to make changes to their existing workloads. I think that's the biggest goal for this design. In terms of implementation, I think we may not have to implement all of these include/exclude filters.
D
We can just include, say, type or storage class, but we should make sure this filter field format is flexible enough that we can support more types of filters in the future. I think that would be good enough for 1.11. So again, we need to reach agreement in terms of the design, and then we implement a part of it; I think that's good enough.
A
Oh yes, and thanks, Chumin; let's continue reviewing this PR offline. I think we have visited all the topics for today, and we can finish here for today's meeting. I will set up the separate meeting and send the invitation for the data movement discussion. Thanks all, and have a good day or a good evening. Bye.