From YouTube: Velero Community Meeting - July 27, 2022
A
Okay, hello everyone. Today is July 27th, and this is the Velero community meeting. Let's start, and first let's do some status updates. For 1.10, the Kopia integration design PR has been merged, so the work items have been started and currently look to be on track; thanks for the review of the PR. For the 1.10 repo work, we continued some related refactorings, and one PR has also been submitted, with some more interfaces included; see the PR for more details.
B
Yeah, merged the PR making some changes to the CRDs according to the design of the Kopia integration, adding the uploader type to the PodVolumeBackup and PodVolumeRestore CRDs.
B
As for the ResticRepository CRD, I added the required fields, like the repository type and such, and also renamed it from ResticRepository to BackupRepository. I've done some manual testing and verified everything works. After that, this week I'll add a flag or parameter to the Velero binary to support the default uploader type.
B
So that's — and in addition to that, I also reviewed PRs, and that's all my status.
A
Okay, thanks, thank you. From my side, I'm working on the Kopia integration related work items; one PR has been submitted for the storage configuration part. Next, I will continue to work on items like the Kopia library and the repository provider, and the PRs will be submitted in the coming weeks.
A
Yes, that's from my side. Over to you.
A
Kopia progress, that is from my side. Okay, thanks, and thank you. Okay, I am working on a volume snapshot location refactor and also fixing the issue...
D
...51 and 40 — this is to fix the scenario where restic...
A
...behavior breaks when using a customized CA cert.
A
Okay, thanks. And Danfeng?
D
Maybe AKS, EKS, or GCP — we are running all the E2E tests in those managed Kubernetes services. That's all from my side.
B
So, Danfeng, are you using our internal tool to...
D
Yes, yes, I'm looking at it, and I use it manually to create some, like AKS, yeah.
B
But then, after the test, will the Kubernetes cluster be kept, or will we remove them and create new ones every night?
C
I'm also working on the Kopia integration items, especially splitting out the restic package, and I also submitted a PR for that.
E
Oh yeah, so on the item action progress monitoring: I just responded to some feedback today. Regarding the case where a backup is failed, rather than partially failed — it was pointed out that we don't currently do that unless it's at validation, but there's no guarantee we'll never do that in the future. So I'm thinking, since we've added a cancel option — which, again, the plugins are not required to honor — we should probably, if we do fail a backup that's been in progress, rather than at the validation phase...
E
...we probably should call cancel. But again, at this point there are no situations in the current controllers that do that. I still think it's a good idea, because if we say a backup failed, we're saying there's nothing useful here. If we're providing an option to allow plugins to cancel an ongoing operation, we should go ahead and call it to give them that opportunity. But again, right now...
E
...we don't think that will happen, because the only time the current controller fails a backup, other than partially failing it, is at the validation stage, which is early on. Once we're processing individual items, failures go to partial failure — partial failure means some number of items in the backup failed, or in the restore failed. So it's a change that makes sense in the design, but it probably won't make a whole lot of difference in the initial implementation, though it could have future implications.
E
I still need to go back and look at some of the PRs that Dave has put in place, in terms of the actual implementation side, just to see if there are any other parts of this design that I may have missed, or that may have changed, that we need to take into account. I would also still like to get some feedback directly from Dave, since I'm modifying his original design.
E
Well, I brought this up at the other time zone's meeting last week, and Orlan said he would reach out today to see if he's available to give feedback. If...
E
...he does not have the bandwidth, in that case we'll go without it. But Orlan's gonna reach out to Dave in case he has some time to give some direct feedback, just because I'm modifying his original design. There may be some of those open questions we have where he had good reasons for wanting things a certain way that we're not understanding, that weren't spelled out in the design properly. So hopefully we get some feedback from Dave, but if we don't, that's okay; we can go without it.
E
So that's where I am with that. On the volume snapshot location credentials work, there was some feedback there as well. In particular, there were some internal parameters that were added to a struct that we really didn't need; I'm in the process of refactoring that. I also realized, when doing that — because I was starting from kind of where those PRs were when they were closed out and reverted...
E
...that there have been some changes to the backup storage location credentials code, mainly around validation, that I need to look at to see if I need to incorporate those same changes for the snapshot location. So I will be revising that PR over the next week or so and updating it.
E
We don't right now. Basically, I think, for the native snapshot upload — you know, the volume snapshotters, where you're doing an AWS snapshot and uploading it — as I understand it, those operations really can't be canceled; they're going to the cloud. Basically, at some point you delete the snapshot and it goes away. But if we're talking about this more flexible approach with the BackupItemAction and RestoreItemAction, we have these ongoing operations.
E
We call the Progress API method to get an answer back, and there are certain situations where we may want to cancel. One of them is going to be if we get some kind of — because the idea is to have a timeout, like with restic: after one hour or whatever period, at some point we say we're not going to wait anymore, we're going to declare this failed. And that's where the proposal in the item action progress design is to have a Cancel method.
E
Now, it could be that the underlying action — because in a lot of cases it's either an upload, or we're creating a CR for some other controller to handle — might not support cancel. So Cancel may be a no-op, but it provides a point in the API to say: if this operation can be cancelled, this gives the plugin the opportunity to do it.
E
So if the plug-in is interfacing with an API on another controller's or operator's behalf, it can issue a cancel request too. The Cancel function in the API on the BackupItemAction or RestoreItemAction would then be called, and the plugin would handle it; the plugin also has the option of basically just returning.
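As a rough illustration of the mechanism being discussed — this is not the actual Velero API; the type and method names below are hypothetical — a v2-style item action with a progress check and a best-effort Cancel might be sketched as:

```go
package main

import "fmt"

// OperationProgress is a hypothetical status record returned by Progress.
type OperationProgress struct {
	Completed bool
	Err       string
}

// ItemActionV2 sketches the v2 item-action surface discussed here: alongside
// the existing Execute method (elided), a plugin reports progress for a
// long-running operation and may honor a best-effort Cancel.
type ItemActionV2 interface {
	Progress(operationID string) (OperationProgress, error)
	Cancel(operationID string) error
}

// noopAction models a plugin whose backing system cannot cancel anything:
// Cancel simply returns, and Velero moves to PartiallyFailed regardless.
type noopAction struct{}

func (noopAction) Progress(operationID string) (OperationProgress, error) {
	return OperationProgress{Completed: false}, nil
}

func (noopAction) Cancel(operationID string) error {
	// Best effort: nothing we can do, so just return.
	return nil
}

func main() {
	var a ItemActionV2 = noopAction{}
	p, _ := a.Progress("op-1")
	// On timeout Velero would call Cancel and fail the item either way.
	fmt.Println(p.Completed, a.Cancel("op-1") == nil)
}
```

The point of the sketch is the shape of the contract: Cancel's return value does not change the backup's fate, matching the "courtesy, not guarantee" semantics described below.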
B
Okay, so I think that should be added to 4768, right? It's also one important use case we should cover, if we decide we need to support it in the first delivery.
A
And one question from me — sorry if I missed anything. So here we have the snapshot phase and we have the upload phase, and I think the cancel should be for the upload case. So for snapshots we don't need cancel, right? I mean, as between the snapshot phase and the upload phase.
E
Basically, when the backup or restore is in progress, we have the InProgress phase and we're running the plug-ins: we're either creating backup records that are going to go into object storage, or we're issuing the restore create calls. So we're going through the backup or the restore, and when we get to the end...
E
...that's the point where we iterate over all of those ongoing operations to say, hey, is this done — check progress, check progress. If everything's done at that point, we just skip straight from InProgress to Completed. If, at that point, one or more of those uploads or other operations still reports in progress, we're not done yet, and this goes into that waiting phase.
E
That was the one Dave originally called Uploading, and we had this longer, kind of more generic, name. That's the phase where the next backup or restore can begin, and we go back and check periodically. So, as we're checking periodically, if we hit the point where all the uploads or other operations are completed, then we move to done. The only time cancel really comes into play, under normal circumstances, would be if we had a timeout, just like with the restic upload.
E
We have this one hour — or four hours by default — as a timeout period, and if the restic PodVolumeRestore isn't completed after four hours, then right now we have a partial failure for the restore, with an error message: timed out waiting for restic resource to complete. I imagine a very similar situation with these upload-progress kinds of BackupItemAction and RestoreItemAction operations if we hit that timeout — this should be user-configurable; maybe start at four hours and let the user change it.
E
If we hit that timeout and we're still waiting — it's not done yet — then we do two things: we issue a Cancel API call, which gives the plugin the opportunity to cancel if it can, and we go to PartiallyFailed for the backup or restore with a message, kind of like we do with restic, saying, you know, operation timed out waiting on plugin operation, or something like that.
A
So it looks like we don't want to, you know, distinguish — I mean, by the wait phase — to decide which phase could be cancelled; we just accept cancel during all of the execution.
E
If there is a situation that's going to cause Velero to move that backup or restore to Failed rather than PartiallyFailed — which we don't currently do, but if we did move to a complete failure, because that's Velero saying this backup or this restore is useless, nothing useful happened here, you have to start over — that's where we would also want to call cancel on all of those operations that are not yet complete. You know, if the operations are complete...
E
You
really
can't
cancel
until
it's
already
done,
but
if
there's
an
ongoing,
you
know
upload
operation
or
ongoing.
You
know
image
copy
operation
from
some
plug-in
that
handles
images
that
has
not
completed
and
we
hit
the
timeout
valero
is
going
to
stop
waiting
for
it.
Blairo
is
going
to
declare
the
backup
or
the
restore
partially
failed,
and
this
item
in
particular
is
going
to
be
listed
as
a
failed
item.
A
Okay, yeah. So if the cancel cannot be finished, or cannot be completed at that time, it just returns without any... you know. Right.
E
There's
no
guarantees
on
the
council
and-
and
I
know,
there's
one
thing
you
mentioned-
you
know,
for
example,
if
you,
if
you're
doing
a
a
snapshotting
of
an
you,
know
ebs
value
in
aws
and
you're
starting
that
upload.
I
don't
think
that
upload
itself
to
you
know
you
start
the
upload.
I
don't
think
that
can
be
canceled,
so
you
know
so
cancel
is
really
just.
I
got
a
best
effort
thing.
It
gives
the
plug-in
author
a
place
to
cancel
an
operation
if
they
have
the
ability
to
cancel
it,
because.
E
At
this
point,
whether
it's
a
uploader
plug-in
for
snapshots
or
some
other
plugin
that
you
know
you're
at
this
one,
interacting
with
some
other
external
controller
external
operator,
so
you're
at
the
mercy
of
that
api,
does
that
operator
have
the
ability
to
cancel
operations?
If
it
does,
then,
when
you
get
the
council
call,
you
can
issue
it,
but
some
operators
may
not
allow
you
to
cancel
things
that
are.
E
...in flight — you know, you can't cancel a backup in progress. So if the operator that the plug-in is talking to can't cancel anything, it might be a no-op; you know, Cancel might just return without doing anything. Velero, at this point, doesn't really care.
E
Basically,
bolero's.
The
cancel
call
is,
is
valero's
way
of
letting
a
plug-in
cancel
operations
that
it
knows
how
to
cancel
whether
it
can
or
can't
cancel
the
operation
doesn't
have
any
impact
on
the
status
of
the
back
of
a
restore
we're
still
going
to
fail
to
say
we
were
already
going
into
partially
failed
states
so
and
the
cancel
is
really
a
courtesy
to
the
plugin
to
say
you
know
if
you
can
cancel
this
thing
you
might
want
to,
because
we
don't
need
this
anymore.
E
We've
declared
a
failure
and
it's
a
way
of
kind
of
resource.
You
know
usage
saving
right
more
than
anything
else,
because
it's
it's
a
way
of
telling
the
plug-in
hey.
You
can
stop
doing
this
thing
now,
if
you
can't,
but
bolero
is
going
to
move
forward
anyway.
After
he
bluro
says,
hey
cancel
this
thing.
If
you
can
and
then
it
goes
on
it
marks,
this
plug-in
operation
is
failed.
This
is
one
of.
C
B
Okay, yeah, but I just want to point out that, from the perspective of the data mover module, I think that should be a capability, right? It should expose or define a way for how a user may, you know, issue this cancel command. And yeah, I agree that the plugin is in the best position to do it, right.
E
The only time we're going to be issuing a cancel operation is if we hit the timeout and a plug-in operation, including the data mover's, hasn't completed yet. And Shubham, if you're on the call — I'm not sure what your plan was for what Cancel would do. Could it cancel something in this first version of the data mover? Yeah.
E
Right, but I mean, for the backup item action or data mover plug-in that you're envisioning: is there something that makes sense to do to actually make that cancel happen — you know, to tell that controller that's handling this data mover operation, "okay..."?
B
Yeah, so I agree that, from the end-to-end point of view, we do not expose a sub-command, or some other way the user can cancel the data movement. But from the data mover perspective, I think we should design a contract, like: if you want to cancel it, this is how you do it, right?
E
I think so, the idea being that this data mover CR that the backup action plugin is creating — to tell the data mover, "hey, copy this volume to this other place" — that operation can be cancelled, right? And Velero will be able to call that when a backup times out, for example. And because that hook is already there, you know, maybe in some future Velero — 1.11, 1.12, whatever — we...
A
Personally, I have another question that I just put in a comment, and, you know, I need to check Scott's comments later; let's discuss offline.
E
Okay, yeah, that's fine. And also, just to mention as well: I'll be working tomorrow and then I've got four days off, so I'll be back next Wednesday. So it's just a few...
C
Yeah, I think — I just wanted to ask whether we can merge that data mover phase one PR. I...
B
Yeah, yes, I added a few comments — okay, regarding the use case, if you can check. And also, in another discussion with Xing: she has some disagreement regarding — if you scroll down, okay — she added a comment regarding the lifecycle of this moved content. I think we should make sure all the comments are resolved. Okay.
C
I wanted to ask you guys to give feedback on Scott's item action progress monitoring, because we'll be basing the data mover design off the new plugin types, like the new version BackupItemAction and RestoreItemAction.
E
Oh, and so, your last point about starting the CRD design: I would say the comments that I was responding to were kind of talking at the details level, so I would say go ahead — if you're comfortable starting the CRD design, just realize that the design here is not yet finalized, so those CRDs might still change, especially since we're still awaiting Dave's feedback.
E
I think at this point I'm not expecting huge changes. The other thing, though — sorry, the CRD design is for the data mover, yeah — but one thing the item action progress design depends on for implementation is the plug-in versioning, because we're talking about changing, oh...
E
The
apis
of
plugin
types
so
until
that
volume,
snap,
shutter,
plug-in
and
backup
by
the
connection,
we're
still
imaging
plug-in
v1
refactoring
is
done,
which
we've
already
started
with
backup
item
action.
You
know
those
have
to
be
in
place
before
we
can
get
the
v2
plug-ins
you
know
put
in
place
so.
E
The thing is, we really need to get — you know, we have this kind of long set of dependencies here. We have to get the plug-in versioning, you know, making progress, in order to get this design approved.
A
Okay, one question from me: for the data mover design, here we have the requirements PR, right? And...
A
...then I think the data mover design will be a combination of multiple parts — for example, it depends on the item action and progress monitoring, and also, you mentioned, the CRDs. So my question is, do we...
A
We
have
overall,
I
mean
design
for
not
requirement,
but
for
for
the
overall
design
for
the
data
mower,
and
then
we
have
the
detailed
design
in
each
of
these
things
like
this
one
or
this
one
or
we
just
have
a
detailed
design
for
them
for
these
ones,
and
we
don't
have
an
overall
design.
C
So
what
I
was
planning
was
like
once
the
phase
one.
I
addressed
the
feedback
on
phase
one
I'll
try
to
do
the
crd
design,
pr,
along
with
the
implementation
using
the
new
plug-in
version
types.
So
that
would
be
another
pr.
C
B
B
C
A
That would be great. I think we need to have that overall design, yeah, yeah.
A
Let's — yeah, thanks a lot. Let's go to the topics; the first ones are from Daniel. So, Daniel, can you go through them? Yeah. The first one is...
B
...regarding a comment by Phong, when he was doing the plugin versioning refactor. There's a concern that after we make this refactor, all the third-party plugins will have to be recompiled. I totally think that's acceptable; I'm not sure what you guys think. Yeah.
E
And I was actually talking a little bit about this at the support meeting too, kind of to get more detail, because I was looking at our plug-ins for OpenShift. In our code, of course, we don't reference those plug-in interface definitions — we don't import them directly — but we are implementing that interface by adding the Execute method and everything it applies to. But Phong was pointing out that the other Velero code that we're importing would embed dependencies in it, which would — so...
E
I
think
the
impact
is
not
on
the
code
in
the
plug-ins,
but
rather
the
the
plug-in
itself
would
need
to
be.
You
know
built
again
where
the
godot
mod
dependency
on
valero
will
need
to
be
1.10
to
make
sure
it
pulls
in
the
updated.
You
know
redesigned
because
and
the
base
the
root
cause
here
is
that
those
interfaces
that
the
underlying
and
plug-in
infrastructure
are
using,
for
example,
when
you
register
the
plug-in
in
the
server
those
are
moving
to
new
go
packages,
and
so
the
valero
code-
that's
pulling
it
in
pro.
E
We
haven't
confirmed
this.
I
guess
by
actually
trying
it
yet,
but
I
I
think
the
the
the
the
kind
of
gut
feeling
is
that
a
plug-in
compiled
with
valero
one
nine,
for
example,
if
you
added
it
to
valero
110,
as
you
know,
with
add
plugin,
there's
going
to
be
a
mismatch
there
and
all
you
would
need
to
do-
is
to
take
your
old
plugin
update,
go.mod
to
point
to
valero
110.
As
your
dependency,
you
know,
read
your
dependencies
rebuild,
I
think
you're
good.
You
don't
have
to
change
your
foot,
your
code.
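For illustration only — the module path below stands in for a plugin author's own module, and the exact version tag is hypothetical since 1.10 was unreleased at the time of this meeting — the go.mod bump being described is just:

```
module github.com/example/velero-plugin   // hypothetical third-party plugin

require github.com/vmware-tanzu/velero v1.10.0   // bumped from v1.9.x
```

followed by `go mod tidy` and a rebuild of the plugin image; no source changes are needed.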
E
And I think we'll have plenty of time to test this, so we can say for sure in the upgrade notes: you know, any plugins you add need to be recompiled using Velero 1.10 in go.mod as a minimum. That's kind of where I am with this right now. I think once we get a version that's working, it should be easy enough to test, even just in the topic branch, to say: hey, if I try to register my existing plugin...
E
...you need to be pulling in 1.10 or newer, because there are changes in the API code underneath what your plug-in is pulling in. And I think, related to that — I was also pointing this out to Phong on Slack...
E
We
want
to
make
sure
that
in
this
release,
we
even
even
for
plug-ins
that
we're
not
adding
a
v2
for
yet
we
want
to
get
all
the
plug-in
types
refactored
into
this
new
v1
format,
so
that
once
you
move
to
110,
you
should
be
fine
to
use
those
plugins
in
111
112
whatever,
even
though
we
add
new
versions,
because
all
the
baseline
plug-in
versions
have
been
refactored
into
the
new
infrastructure
all
at
once.
So
this
is
a
one-time
pane.
This
is
not
an
every
release
that
we
change,
plug-in
interfaces,
pain.
D
I
agree
that
it
is
a
one-time
change
and,
and
the
error
that
we
will
get
related
to
that
it
cannot
find
a
plug-in
with
with
the
service
name
right.
I
I
wonder
if,
if
that,
if
there's
any
way
that
we
can
avoid
recompiling
the
plugin
for.
D
If
we,
if
we
keep
the,
if
we
keep
the
for
the
for
the
version,
one
right
instead
of
refracting
it
can
we
just
keep
the
further
for
the
version,
one
keep
it
as
is,
but
for
version
two
and
so
on
so
forth,
we're
gonna
go
into
the
new
structure.
So
that
way
we
don't
have
to
even
recompile
the
existing
one.
So
existing
one.
This
work,
as
is
without
any
compile,
but
from
version
two.
Then
it
needs
this
new
thing.
Could
that
be
I
mean
I'm
just
I'm
just
throwing
out
the
idea
here.
E
Right
right,
yeah,
I
guess
the
question
is:
is
there
a
way
to
do
the
refactoring
so
that
the
version
one
keeps
the
current
package
naming
and
then
only
version
two
needs
the
new
one
that
that
may
be
possible.
Yes,.
B
E
E
E
That is, instead of v1, just use "generated" like before — generated, or whatever it is, is going to be, basically, the directory, the package, that you want to stand in for all the plugin types. And then, once you move to v2, that's when we get to the, you know, generated/backupitemaction/v2, or whatever.
E
If
we
can
do
it
so
that
the
v1
of
everything
is
in
its
current
location
and
we
can
modify
the
files
accordingly
to
get
that
just
just
for
the
proto
proto
generator
stuff
to
work.
Because
of
those
changes
that
we
realize
we
need,
because
because
we
need
those
package
directives
and
all
that
in
those
files,
so
that
when
we
do
add
the
b2
things
get
generated
properly.
E
But
it
may
be
that
you
can
put
everything
back
and
keep
it
in
the
original
location
for
now,
and
so
that
so
that
so,
in
other
words,
your
pr
refactors
for
v1
will
make
those
inline
changes
that
you
need
for
the
package
generation.
Make
the
changes
to
the
the
builder
code
that
we
made.
E
But
keep
the
package
name
and
the
directory
location
the
same
as
it
was
before,
so
that
there's
no
change
for
the
v1
plugins
and
then
it's
just
the
b2
and
beyond
would
get
the
new
subdirectories,
but
because
we
made
those,
you
know:
go
packaging
changes
to
the
proto
files
like
shared
network,
whatever
those
are
now
ready
to
be
able
to
be
imported
from
other
directories.
So
also
you
know,
if
you're,
if
your
backup
item
action,
v2
protophile
needs
to
import
the
b1,
it
can.
B
I'm
not
sure
I
follow
everything
else,
that's
got,
but
I
I
personally,
I
think
we
need
that
v1
directly
and
move
the
v1
code
into
that
directory.
That's
my
point,
and
another
point
I
would
like
to
raise
is
that
I
think
it's
totally
acceptable
for
the
plugin
developer
to
compile
their
code
regularly.
B
Otherwise
they
may
hit
other
issues
like
golang,
cve
or
any
other
change.
We
making
lateral
may
break
their
code,
so
I
think
they
should.
I
mean
in
each
minor
release,
recompel
them
plug-in
to
make
sure
the
plugin
work
with
the
latest
version
of
level.
I
think
that's
totally
acceptable
and
expected.
D
Oh
yeah,
I
also
have
this
silly
silly
question
right.
I
assume
that
we
we
now
need
to
recompile
the
plugin
to
make
it
work
with
the
new.
You
know
reflector
code
right
from
a
deployment
point
of
view
from
the
customer.
They
already
is
using
valero
and
they're
already
using
the
plug-in
essays
and
what
is
the
step
they
need
to
do
to
to
forward
it.
They
have
to
update
both
valero
and
the
valero
plug-in
binary
right,
yes
right!
Well,
I
mean
I
mean
they're
having.
E
To
update
the
valero
binary
to
update
the
new
version
right
because
to
go
from
1.9
to
1.10
means
you
have
to
go
through
the
upgraded
process
you
have
to
get
the
new
valero
1.10
image
and
the
the
plug-ins
are
added
as
just
in
a
container.
So
so
they
have
their
own
images.
E
I
know,
for
example,
for
the
openshift
plug-ins.
We
do
release
a
new
version
of
that
when
we
release
a
new
version
of
odp,
because
we
usually
have
at
least
we
know
some
bug
fixes
in
there
anyway,
and
when
we
update
the
new
valero
version,
we
do
update
that
below
our
imports
as
well
to
make
sure
we're
getting
the
latest
bolero
code,
because
you
know
I
don't
want
my
plug-in
that
I've
updated.
You
know
for
valeria,
one
nine
to
still
be
referencing
bolero,
one,
seven
or
blurry
one,
eight
code
there.
E
So
to
daniel's
point
yes
plug-in.
Authors
should
be
updating
their
plug-ins
when
they
go
to
a
new
layer
version
just
to
make
sure
no,
that's
not
that
they
should
have
to
and
and
the
plug-in
should
break.
But
this
is
the
one-time
breakage
because
we're
refactoring
the
plug-ins
to
be
more
flexible,
so
this
is
kind
of
take
the
pain
now
so
that
you
can
now
use
this
plug-in
until
you
update
it
to
the
v2
or
the
v3.
E
So
I
don't
think
that's
an
unreasonable.
The
thing
that
we're
trying
to
avoid
in
plug-in
versioning
is
to
require
users
to
change
their
code
change
the
source
code,
and
this
won't
require
that
this
will
just
require
them
to
update
their
godot
mod
to
say:
okay,
the
delayer
dependency
now
is
1.10,
not
1.9.
E
You
know,
gomod
update,
you
know,
sort
of
you
know,
go
my
tidy
and
then
rebuild,
and
then
you
know
at
that
point
you
have
a
new
image
to
add.
As
an
as
a
plugin,
you
know,
bluro
add
plug-in
with
the
new
image
and
you
gotta
remove
the
old
one,
but
that
should
be
all
they
need
to
do
is
to
resist
rebuild
their
their
plug-in
image,
using
the
updated
valero
dependency
and
their
valero
install.
B
D
D
D
Okay, in that case, I think I need to have some short code for a backup item action plug-in, and then I will recompile it with this new code to test it on my side. Right now it's broken, and I...
D
Any
I
think
I
will
reach
out
to
scott
to
get
the
open
ship
plug-in,
because
I
know
that
openshift
plug-in
have
that
backup
action
item
right.
I
mean
backup
either.
E
Action
yeah,
I
mean
I
mean
if
you're
using
an
openshift
cluster
and
you
should
build
these
are
open
to
plug
it.
If
you're,
if
you're,
not
an
openstack
cluster,
then
that's
gonna,
be
you
know
you,
you
could
even
I
mean
they're.
I
guess
the
csi
plug-in
uses
backup
item
action.
You
could
use
that
one
even
to
test
with.
D
I
do
not
have
any
plug-in
right
now.
E
D
D
I do have the DRO one, but that is for the storage, okay — not the BackupItemAction, right. Okay, I will reach out to Scott.
E
Later
you
could
even
take
the
valero
plug-in
examples,
you
know
project
and
then
build
your
own
kind
of
you
know
hello.
E
Might
be
easy
to
test
because
then
you
don't
have
to
worry
about
the
dependencies
anything
anything
else.
The
other
thing
is
that
you
know
see
when
you
when
you're
writing
a
plug-in
against
the
release
version
of
valero.
You
just
put
go
down
mod,
you.
You
know
you
can't
it's
a
valero,
you
know
v1.9.0
or
1.10.
whatever,
because
you're
working
against
unreleased
bolero,
you
know
you're
going
to
need
the
gomod
replace
so
that
you
can
point
to
your.
E
You
know
your
commit
and
your
repo
for
the
the
code
that
includes
this
because
that's
not
that's
not
in
name
even
yet,
but
that's
you
know
just
because
you're
just
in
other
words
the
godot
mod
for
your
plug-in.
If
you
want
to
pull
in
the
latest
version,
plug-in
information
you're
going
to
have
to
you
know,
there's
not
a
valero
release
in
github.
That
has
that
yet
so
you're
going
to
have
to
to
do
the
go,
mod,
replace
and
point
to
your
specific
fork.
E
You
know
a
branch
or
whatever
but
yeah
once
you
do
that
and
then,
when
you
do,
you
know
the
mighty
and
then
you
build
it's
going
to
pull
in.
Instead
of
valero
release,
it's
going
to
pull
in
from
your
dev
branch
with
these
changes
and
that's
going
to
get
that
built
in
to
to
that
plug-in
image.
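As a sketch of the go.mod replace being described — the fork path is a placeholder, and the pseudo-version shown is the kind of string `go get`/`go mod tidy` generates for an untagged branch commit, not a real one:

```
require github.com/vmware-tanzu/velero v1.9.0

// Build against an unreleased dev branch of a fork instead of the release;
// the pseudo-version below is produced by the Go tooling for your commit.
replace github.com/vmware-tanzu/velero => github.com/your-fork/velero v0.0.0-20220727120000-0123456789ab
```

With the replace in place, the plugin build pulls the refactored plug-in framework code from the fork rather than from a published Velero tag.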
D
Okay, I'll give it a try, and I will continue to update you on Slack.
D
No problem, I will get this plugin example and play around with it.
B
Okay, so the next item — I think, yeah, it's just a small one: whether we want to create a separate channel to have a more, you know, concentrated discussion regarding the data mover. Shubham, please keep us updated when the PR is merged, yeah. And the third one is regarding the community meeting time. Scott...
B
Is
there
any
disagreement
regarding
the
community
meeting
time
in
the
u.s
time,
loan.
E
So
I
I
brought
this
up
last
week
as
well,
when
it
was
just
red
hat
people
and
I
think
was
there
and
yeah
and
orlan
was
here
and
what
I
I
just
kind
of
reiterated.
What
I
said
on
the
comment
thread,
which
is,
I
think,
it's
pretty
obvious,
that
we
said
that
we
need
the
two
time
zones,
because
there's
no
one
time
that
works
for
everybody
perfectly.
E
My
thinking
is
what
we
want
is
to
have
a
meeting
set
up
so
that
twice
a
month.
You
know
everybody
has
a
time,
that's
convenient
for
them
twice
a
month
and
then
twice
a
month,
there's
a
time
that's
slightly
inconvenient,
but
maybe
you
can
make
it.
The
problem
with
the
current
schedule
is
that
the
u.s
centric
time
zone
meeting
is
midnight
your
times
in
ambition.
So
that's
not
acceptable.
There's
no
way
you're
ever
going
to
join
the
meeting
then,
and
my
thinking
was
if
we
moved
it
two
hours
earlier,
maybe
three
hours
earlier.
E
That
would
still
be
inconvenient
for
you,
those
two
times
a
week
two
times
a
month,
but
it
would
be
close
enough,
convenient
that
if
you
had
to
join
the
meeting
because
it
was
an
important
topic
or
whatever
you
might
be
able
to
do
it
yeah
keep
debating
time
as
it
is,
but
take
the
u.s
centric
time
move
two
hours
three
hours
earlier
or
two
hours
earlier,
and
I'm
thinking
two
hours
earlier
might
be
better
because
for
fong
and
others
on
the
us
west
coast
they're
three
hours
behind
us,
so
that
would
be
7
a.m,
for
them:
10
a.m,
for
for
people
on
the
east
coast
and
then
10
p.m.
E
For
you
and
then
so
that
would
you
know
again,
it
would
be
some
improvement
for
beijing
and
nine
o'clock
would
be
fine
for
me
three
hours
earlier,
but
I
just
think
that
would
be
harder
for
anyone
on
the
west
coast
to
ever
join
that
meeting.
But
you
know
the
west
coast.
People
also
have
a
convenient
time
now,
so
I'd
be
open
either
way.
You
know,
I
think
9
a.m,
eastern
or
10
a.m,
eastern,
which
is
either
two
hours
or
three
hours
earlier
than
the
current
meeting,
would
be
fine
for
us.
E
We
just
need
to
decide
which
of
those
two
options
makes
the
most
sense.
The
other.
The
other
timing
factor
in
this
is
that
this
is
the
meeting
that
arlen
normally
runs.
We're
talking
about
he's,
actually
going
to
be
out
for
the
month
of
august
and
the
first
week
in
september.
E
E
E
For the next three U.S.-centric meeting times, because Orlan's gonna be gone: the one next week someone else is going to be leading, and then the two after that I'll be leading, and then after that Orlan's back. And I think that's the point — when everyone's back, we just need to make that final decision and say, let's have this time, again either two hours or three hours earlier. I'm fine either way with those two options, and I think the Beijing time should stay the way...
E
It is, because I think this is the time that's the best compromise: basically everybody but Orlin can come to this meeting, and that's because it's like three in the morning for him, I think. And then the other time zone we can make a little bit earlier, and then, you know, maybe some of the Beijing people can come when they need to, but there would be no expectation that they always come, because it is still pretty late. Agreed.
B
I just have a question: it seems that 10 a.m. is slightly preferred, but you have this daylight saving time change, right? So, yeah.
E
Let me think about what that would be. So yeah, that means, if we're doing it at 10 a.m., for example, that's 10 p.m. for you, and then, when we switch off of daylight saving time, that makes it nine... Oh, I see, so the problem is that it then becomes 11 o'clock for you after daylight saving time. I mean, yeah, I think, yeah. Now the thing is, I mean, it would be fine for me to do it.
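The daylight-saving shift being discussed here can be checked directly; as a sketch, a short Python snippet using the standard-library zoneinfo module (the specific dates below are just illustrative examples on either side of the U.S. switch):

```python
# The same 10 a.m. New York meeting lands at a different Beijing hour
# depending on whether U.S. daylight saving time is in effect.
from datetime import datetime
from zoneinfo import ZoneInfo

new_york = ZoneInfo("America/New_York")
beijing = ZoneInfo("Asia/Shanghai")

# A date during U.S. daylight saving time (EDT, UTC-4)
summer = datetime(2022, 7, 27, 10, 0, tzinfo=new_york)
# A date after the switch back to standard time (EST, UTC-5)
winter = datetime(2022, 12, 7, 10, 0, tzinfo=new_york)

print(summer.astimezone(beijing).strftime("%H:%M"))  # 22:00, i.e. 10 p.m. Beijing
print(winter.astimezone(beijing).strftime("%H:%M"))  # 23:00, i.e. 11 p.m. Beijing
```

So a fixed 10 a.m. Eastern slot drifts an hour later for Beijing attendees in winter, which is the concern raised above.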
D
Actually, I'm on the West Coast, but I regularly wake up about 5 or 6 a.m. anyway. I cannot speak for everyone else, of course.
D
I think Dave used to come regularly; I think he's moved to Kasten now.
E
That's one of the things: because Orlin was going to reach out to him to ask about the PR comments, I was hoping Orlin would also ask if he had plans to come to meetings too. Because again, if he wants to join the meetings, then I want to accommodate him, absolutely. But if he has no intention of joining the meetings, you know, we don't really.
E
I mean, we need to accommodate the people that we expect to be here and that are making an effort to come here, and if that means we need to be at nine instead of ten, and that's okay with Fong.
E
But again, my concern is that there are other people on the West Coast that would otherwise want to join. But, you know, as Samuel said, we've got the twice-a-month Beijing time, which doesn't work for the West Coast. So given the number of people on the team from Beijing, if nine would make them more likely to come than ten, I'd be inclined to suggest that. But you know, again, that's just, you know.
D
I think we can try, for now. I mean, I guess we can try to move it to 9 a.m. Eastern time, yeah.
E
Talking to Orlin before last week, I think the plan was we wouldn't actually make the change until he got back and was hosting meetings again, just because we're kind of in this substituting-hosts-week-to-week mode right now, and I think changing the meeting time at the same time as doing that might cause confusion. So I would hope that by the end of September we'd be on a new time; I just don't know what exactly the right time to do it would be.
E
I mean, the other reason nine works better for me right now is that we currently have an OEP meeting scheduled at 10, but we could move it if we had to. I'm just saying, for my current schedule, nine is more free than ten.
E
In that decision, yeah, I guess he needs to be involved in the final decision. But I already basically said the same thing last week, in the meeting that he was on, that I'm saying now; it's just that we didn't have as many people here. But you know, it should be in the recording as well.
E
But I think everyone has agreed that the Beijing time should basically stay as is; that's probably the best compromise right now. Yeah, agreed.
B
Yeah, great. So yeah, let's re-discuss when Orlin is back and make the decision then. Thanks.
C
Yeah, so I'll be targeting parallel backup/restore operations for Velero, whichever release it ends up going in.
B
I discussed this with Dylan. I think I first clarified that we from VMware don't have time to work on that in the 1.10 time frame; I think the top priority is the data mover design and some PoC. But he mentioned that if you have bandwidth, you can start.
E
I think getting some design together makes sense, because then, if we have an idea of the design that we've been discussing, in the 1.11 time frame we're ready to start implementing. Because this is the kind of thing that's going to take a while; I think the design process is going to take some time, because we're going to have a lot of back and forth and different ideas, and have to reconcile conflicting use cases and priorities and all that. So the design part of the process is going to take some time as well.
B
Right. I feel it will be like the data mover in 1.9: we thought we could finish the data mover design in 1.9, but we didn't. So I think let's not make any commitment regarding the parallelism, but we can start discussing and thinking about this. I think that's the right approach. What do you guys think?
B
Yes, right, because in the Kopia integration we are also making changes in the backup-related resource controllers, more or less. So what we want to avoid is conflict.
A
Okay, I think we have covered all the items. Any questions, or anything else anyone wants to mention here?