From YouTube: Velero Upload Progress review discussion - Feb 25, 2021
A: Let's just get started. Okay, so this is the meeting for reviewing upload progress monitoring, so let's dive into it. This is a design that I'd like to get into the 1.7 release, and the goal here is to start having Velero track the progress of snapshot uploads that are happening in the background. Currently we have several different plugins that upload data in the background, and eventually we want to move the actual data movement up into Velero itself, where it's more appropriate. In order to do that, we need this ability to monitor things. We'll get some incremental progress out of this right away, because we'll be able to do some things we can't do now, and it will enable some changes in the architecture as we come along.
A: I think we're all relatively familiar with the existing VolumeSnapshotter plugin architecture. Through the VolumeSnapshotter API we say "take a snapshot," then it returns back to Velero, and Velero continues on to the next thing. When all the snapshots have finished, and when all the Kubernetes metadata has been written into the object store, the backup is completed.
A: But we have existing examples where that's not the whole story. For example, with AWS EBS you'll ask EBS to take a snapshot. It will return immediately, but the snapshot may not be usable until sometime in the future, when it's actually persisted into S3. So there's a window there where your Velero backup can show as completed.
A: But if you were to have a storage catastrophe during that time, the Velero backup would show as completed while it might still be uploading in the background. And then for Restic: Restic currently doesn't use snapshotting, and it just blocks everything until it finishes.
A: We probably won't be able to push that into the background until we marry Restic, or Restic-like things, with snapshotting, but it'd be nice to be able to do that. Really, our goal on the snapshotting side is to take the snapshots as close together as possible, so we don't want to slow anything down on the snapshot side; we might just move things along.
A: We want to be able to say that if your backup isn't really persistent, or it's not usable, it shouldn't appear as completed. And we don't want to change everything in the volume snapshots and so forth; at this point we're not going through and unifying BackupItemActions and VolumeSnapshotters.
A: There are two snapshot paths right now that do not go through the VolumeSnapshotter: CSI snapshotting and vSphere snapshotting. These both go through BackupItemActions, so we need to be able to handle BackupItemActions as well as VolumeSnapshotters that can have things ongoing in the background.
A: So I made a little state diagram of what things would look like and where we go. We always start off as a new backup, and then once the backup gets rolling...
A: We're going to have an Uploading state and an UploadingPartialFailure state. We may not go to those; it's quite possible to go directly to the Completed state. And actually there's one path I didn't draw here, which I should: it's possible during the upload to go from Uploading to UploadingPartialFailure. So, is everybody good with the state diagram here?
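The state diagram under discussion can be sketched as a set of phases with an allowed-transition table. This is only an illustration of the phases named in the meeting, not Velero's actual types, and the transition table is an assumption that includes the Uploading to UploadingPartialFailure edge noted as missing from the drawn diagram.

```go
package main

import "fmt"

// Phase is a hypothetical backup phase, following the states discussed above.
type Phase string

const (
	PhaseNew                     Phase = "New"
	PhaseInProgress              Phase = "InProgress"
	PhaseUploading               Phase = "Uploading"
	PhaseUploadingPartialFailure Phase = "UploadingPartialFailure"
	PhaseCompleted               Phase = "Completed"
	PhasePartiallyFailed         Phase = "PartiallyFailed"
	PhaseFailed                  Phase = "Failed"
)

// transitions is an assumed transition table; Completed, PartiallyFailed,
// and Failed are terminal states with no outgoing edges.
var transitions = map[Phase][]Phase{
	PhaseNew:                     {PhaseInProgress, PhaseFailed},
	PhaseInProgress:              {PhaseUploading, PhaseUploadingPartialFailure, PhaseCompleted, PhasePartiallyFailed, PhaseFailed},
	PhaseUploading:               {PhaseUploadingPartialFailure, PhaseCompleted},
	PhaseUploadingPartialFailure: {PhasePartiallyFailed},
}

// CanTransition reports whether moving from one phase to the next is allowed.
func CanTransition(from, next Phase) bool {
	for _, p := range transitions[from] {
		if p == next {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(CanTransition(PhaseUploading, PhaseUploadingPartialFailure)) // true
	fmt.Println(CanTransition(PhaseCompleted, PhaseUploading))               // false
}
```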
B: This makes sense, but I'm not sure how useful the UploadingPartialFailure state will be, unless we're able to clearly define what "partial failure" means. But we can keep it for now.
A: Right now we have two end states: PartiallyFailed and Completed. I initially designed this without UploadingPartialFailure, and if we just had Uploading, we would have to keep the fact that the end result was going to be PartiallyFailed somewhere, and I didn't have a good somewhere. If I kept it in memory, then if we restart for some reason, the uploads may be completely fine at the end, but we wouldn't be able to mark the backup as PartiallyFailed. Or we could always mark it as PartiallyFailed because we crashed, but that might not be true, because we may have done everything correctly and the server was simply restarted in the middle.
A: We could, but then we'd have to add in commands to actually restart the upload. So right now I think we're going to do a little bit of surgery in here, while we're working on this, to handle our restart cases. Because right now, correct me if I'm wrong, but I believe that if we restart while backups are in progress, they simply remain InProgress.
A: Yeah, so as part of this, anything that's InProgress should go directly to Failed on restart, while anything in Uploading or UploadingPartialFailure may actually still be continuing: the upload could continue on while the Velero server is being restarted.
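The restart rule just described, InProgress goes straight to Failed while in-flight uploads are left to finish on their own, can be sketched as a small reconciliation function. The function and phase strings are illustrative, not Velero's actual code.

```go
package main

import "fmt"

// PhaseOnRestart applies the restart rule discussed above: a backup caught
// InProgress when the server restarts is marked Failed, while backups in the
// uploading phases are left alone, since the upload may still be running in
// the background and will report its own success or failure.
func PhaseOnRestart(phase string) string {
	switch phase {
	case "InProgress":
		// The server died mid-backup; we can't resume it, so fail it.
		return "Failed"
	case "Uploading", "UploadingPartialFailure":
		// Background uploads keep going across a server restart;
		// keep polling and let them finish.
		return phase
	default:
		// Terminal and not-yet-started phases are untouched.
		return phase
	}
}

func main() {
	for _, p := range []string{"InProgress", "Uploading", "Completed"} {
		fmt.Println(p, "->", PhaseOnRestart(p))
	}
}
```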
A: That's phase two; that would be the data mover. But yeah, our existing stuff, like the vSphere plugin's data movers, run in the background and they...
A: ...write to CRs for upload and download, and they have...
C: That part is really only for stuff like the vSphere plugin that doesn't rely on cloud APIs; it has to do some work itself, and this is just giving us a channel to know where it's at.
A: Yes, well, we can also monitor the EBS uploading. So what we should be able to do, when we do a restart, is say, okay... I think our logs are going to be stored off somewhere, I hope. We'll have to look a little bit at restart, but restart should enable us to recover.
A: I think if we're in the middle of a backup and we restart, we simply say it failed. But if we're in these phases where something could be going on in the background, we're going to expect the background tasks to pick up and mark their own success or failure, and then we should be able to watch them. So we'll want to clean up our restart logic as part of this project.
E: It tells me I'm at that phase, at that point of the process, which for me is extremely useful when I look at this diagram. And by the way, thank you, because this makes everything so much easier to understand. Can I request that if this diagram needs correction, you keep it updated?
A: We'll correct it, yes. This should also become part of our main documentation.
E: I'd prioritize that, because it is so extremely helpful, at least for everybody who is very visual; I am like that, anyway. When I look at this, I wonder: is it possible that we are conflating two things? That is the question: the state of the backup at any given moment, versus the reason for a change of state.
E: So, for example, maybe the states would be New, InProgress, PartiallyFailed, Failed, and Completed. And then if we say it's in progress, we can say what operation is going on: it's uploading. Then maybe we can simplify the transitions, and if it's partially failed, we can say...
A: Well, I think we could simplify the state diagram if we added a separate current-state or current-operation field, perhaps. I'm not sure we gain a lot by that; we'd have to add a new field.
E: I think the new field you have in mind is the same new field that I have in mind. I don't know what it would be called, but it's something that will have to be updated.
E: Maybe this is the phase two you were talking about. For example, if a snapshot is being uploaded and the backup is complete, but the snapshot is still going on, is there a way for us to make a call to that API and continuously find out whether it's done, and then come back and say, now it's done done?
A: Okay, we can think about that. Let me think that through; I want to make sure that... so.
C: We talked before about a field where, in the case that we couldn't upload the logs, we basically have no information. Having a field that would say, hey, it's partially failed or failed because we couldn't upload the logs would be helpful; that's separate from this snapshot upload, but still, having that description field would be useful. I'm specifically thinking about the case where we couldn't upload logs for a backup because something happened, and there's really no way to tell that right now.
A: Something failed on closing, and...
C: Yeah, so I think that might be tangential to this particular design, but I think that sort of field would be useful. Okay, so I'll...
E: I think it would be useful if, and I am volunteering to hash this out with you, I'd be glad to, this is a very interesting problem to solve and I think it's also very beneficial, anyway: I think it would be interesting to maybe open a spreadsheet and just list, with this design...
E: I know, I know, but before we get to redrawing the diagram: we have this diagram, so let's work with it. Let's walk through different use cases, happy paths and otherwise, that touch every dead end and every endpoint. I think it's going to be helpful if we do that, you know what I mean? Have at least one use case that ends at every endpoint here that we're looking at. So you're proposing...
E: Oh, so what I'm thinking of is a brainstorming exercise. It's not to formalize this in a spreadsheet; it's just to walk through the cases, because I can't hold all the cases in my head at the same time for one model and then another one; I particularly would get lost. So I'm thinking it will be easier to communicate, and easier to see the trade-offs.
A: I think it'd be a good thing to do. I don't think it changes much in the way of operations, right? We still have this state. So I think it's really good to go through and make sure that our existing state transitions are sensible.
A: I think that's helpful, because this diagram doesn't explain, for example, that you don't go to InProgress until we actually start working on the backup.
G: Yeah, that's one of the questions I have in mind too: when we transition from InProgress to Uploading, for example, what operations can be done on the backup at that point in the workflow? There's no point in having another state unless it tells you something, even though I know what switching into Uploading means.
G: Switching into Uploading means the snapshot has been created, and I can start using the snapshot, read-only or something like that. Then after the upload finishes, I can do something else. For example, even in the Uploading state I can read it, but I can't do anything else with it. When it's actually complete, then I can do a lot of stuff, like cloning and so on and so forth.
G: So a different state only makes sense if it can be useful for something. If it's not useful for something, then just keep saying InProgress until it's completely done.
E: You still need to look things up to make sure what condition the backup is in, but it's going to be useful under certain conditions, so you can do something with that backup. And then there are other cases where, okay, it's just not useful at all; I just have to find out why, go fix it, and I need another backup.
B: I think, yes, that's a worthwhile exercise to do. What I would caution us against is increasing the number of states things can be in, which means it'll be harder to implement and there'll be more chances of bugs, like implementing a state machine diagram with ten states.
C: Another wrench I'll throw in here is that a lot of other Kubernetes objects use lists of conditions. When you're creating a Pod or a Deployment, they say, okay, I'll add a condition that one pod came up, and I'll add another condition that another pod came up. So they don't just have one phase; they have a list that can be used.
C: CRDs have this; Pods and Deployments have this. It does make the logic a little weird, because in Velero we have to check, on `velero install`, whether the Deployment has gone through these two or three phases, and the same with CRDs: does it have these two or three conditions? But that's another format to consider.
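The conditions pattern C describes looks roughly like this on a status object. The field layout follows the usual Kubernetes condition shape (type, status, reason, message), but the condition types and the `BackupStatus` wrapper below are invented for illustration, not Velero's API.

```go
package main

import "fmt"

// Condition mirrors the common Kubernetes condition shape; real objects
// also carry a lastTransitionTime.
type Condition struct {
	Type    string
	Status  string // "True", "False", or "Unknown"
	Reason  string
	Message string
}

// BackupStatus shows how a conditions list could sit alongside a single
// phase field. The condition types used below are hypothetical.
type BackupStatus struct {
	Phase      string
	Conditions []Condition
}

// FindCondition returns the condition with the given type, if present.
func FindCondition(s BackupStatus, condType string) (Condition, bool) {
	for _, c := range s.Conditions {
		if c.Type == condType {
			return c, true
		}
	}
	return Condition{}, false
}

func main() {
	s := BackupStatus{
		Phase: "Uploading",
		Conditions: []Condition{
			{Type: "SnapshotsTaken", Status: "True", Reason: "AllSnapshotsCreated"},
			{Type: "UploadsComplete", Status: "False", Reason: "UploadsInProgress"},
		},
	}
	if c, ok := FindCondition(s, "UploadsComplete"); ok {
		fmt.Println(c.Type, c.Status)
	}
}
```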
A: It's useful, but it's also tricky, because we get things like: I'm going to turn off this system, I mean literally physically go turn the power off, and the guys are going to come in and haul it away for e-waste. If we mark it Completed because it's usable, but it's not durable, now you're in a bad situation. In fact it's even more so in the cloud, because if you simply delete all your resources, things can go bad. I think EBS snapshots are good, though.
E: The second and last comment that I have, and I'll share it so you can move on: there is an issue that talks about this from back when I think Andy was even on the project. I'm sure no one remembers; I unfortunately didn't look it up before this meeting. But it talks about a bunch of gotchas, because this has been a desired behavior for a long time, and the challenge is, you know: do we upload, how...
A: So the idea here is that we'll continue on with our existing behavior: we'll go ahead and call all of our BackupItemActions and VolumeSnapshotters just the way we currently do, and they're going to return snapshot IDs just as they currently do, but then we'll be able to monitor those snapshot IDs with the plugins, optionally.
A: So a plugin needs to be able to provide status. We'll also need to be able to handle cancellation and deletion: if the backup gets deleted while it's in the Uploading phase, we need to go through and whack all the snapshots as part of the deletion, and the plugins need to be able to either cancel the upload, or delete the snapshot at the end, or whatever; they need to be responsible for actually cleaning things up. And then, yeah, when all the snapshots...
A: We talked about the PartiallyFailed state, so that's this. And then if it goes to Failed, the expectation is that we will stop or delete all the snapshots that are associated with it. I'm not sure if that's the right thing; maybe that should really be something that happens as part of the actual deletion.
A: Oh, they just need to... they need to delete on the failure, yeah.
C: I believe if you delete a backup in a Failed state, it will go clean everything up. But just being in a Failed state doesn't, by default, clean up all the stuff that it made.
A: So I think we'll change it back to removing things when we delete the backup, and then in the future we could look at whether or not to halt things as it moves to Failed.
F: Yeah, no, it's just... there's a difference, though, between deleting the resources and stopping any further ones from being created. If we've initiated the upload part, then maybe we don't necessarily need to continue with the upload, but you still want to keep the existing resources that have already been created.
A: Right. Or, like in vSphere, we need to delete the snapshot, and that could stop the upload, but you could have a usable snapshot even though it hasn't been uploaded.
C: Yeah, it's been an issue not only for users and customers but for our team: getting good information to help troubleshoot can be involved. So yeah, things like this, things like adding a description field to failures.
A: Well, the thing that drove me nuts was when I was working on the Azure testing. The Azure plugin would fail because we didn't allocate enough memory to the pod, but it only failed when we were actually uploading to the object store. So at that point we had marked the backup as, well, we hadn't; the backup is complete.
A: We'd stopped logging; we closed the logs and started to upload them to the object store, but then it failed, and so we marked it as failed. But there was nothing in the logs that indicated why it actually failed, and you had to go back and dig through the pod logs to see that the server returned an error, and that was why it actually failed.
C: Right, okay, so I'm talking about two different things. That's the case where a description field would be useful: we cannot make a log, we cannot upload it to object storage, so we should write that in the description field. There are errors shown when you do a backup or restore describe; for example, when you describe a restore it'll say we couldn't restore an object because one was already there. Those kinds of errors are there, but we're not...
C: I think it's perhaps more accurate to say we don't report every kind of error there, just because we don't have the space; no, we shouldn't, but anything that actually causes a...
F: Very side point, but is there any way we could potentially change how the logs are currently managed, so that you essentially have two writers for the logs: one that can be streamed during the backup while it's in progress, and one that eventually gets written to object storage? I'm just thinking about a separate project that I'd worked on in the past.
D: Some of these steps could be published as events during the backup. We could see that it attempted to upload the backup logs and then it failed, like "failed to upload backup logs, retrying," and then maybe it retries a couple of times. I think what would be interesting is if I could just kubectl describe the backup and kind of watch what it's doing, instead of having to wait for it to move to a failed or completed state and then look at the logs. Seeing what it's currently doing would help both with determining these failures and with making the descriptions easier to state, because you could give a higher-level description without having to get super specific.
A: So, Nolan raised the idea of using a conditions list; some of the other things you're doing might fit into this. What do you think?
C: Well, I think what Shawn's referring to is the Kubernetes event structure, which Pods and everything also have. I actually looked into this last year, maybe late 2019, but we didn't go forward with it. I think it's a valid idea, though. Events in Kubernetes are meant to be transient, so it would...
C: It would be useful for watching a backup or restore live, but I think we would still need the logs for history. But yeah, I'm very much in agreement; we should explore it.
C: You make the log too, so I think we would combine that with Bridget's kind of idea, so that we do both. I don't think we could rely solely on the Kubernetes events, and I don't want to capture them, because that's an anti-pattern with what everybody else does. But if we log as well, I think we get both things.
C: Yeah, and this goes back to the constraints under which Velero has been designed so far. We could do all this; we could put a little web server that fronts it, or however. But I think to accomplish that we would need a Velero service that people would have to plumb through. Right now, a good thing about Velero is that you just submit stuff to the Kubernetes API server. There's nothing special: you don't have to set up any routes or anything, there's no Ingress, which is nice from our perspective.
C: So I would love to hear ideas on how we make that easier, because I know it was a design philosophy at the beginning to just not require people to do all that Ingress and Service stuff. But it's probably worth revisiting.
A
Yeah,
it
opened
up
a
whole
can
of
worms.
It's
also
like
you
know.
The
the
opposite
argument
is
what
we're
getting
some
smaller
is
like
hey.
I
may
not
have
access
to
that
object,
store
that
we're
actually
writing
into
from
my
desktop.
I
do
have
access
to
the
cluster,
but
the
object
store
itself
is
firewalled
off
for
me,
for
whatever
reason.
A
Right
I
mean
for
a
for
in
you
know,
for
a
in
what
on-prem
store.
There's
no.
C
Guarantees?
Yes,
yes,
we
like
we
had
the
the
recent
requests
about
proxies,
there's
no
guarantee
that
we
could
get
it
that
we're
not
going
to
be
banned
in
the
middles,
so
yeah
there's,
I
could
see
a.
I
could
see
an
argument
for
for
saying:
valero
needs
an
ingress
and
a
service
that
we
can
hit.
I
think
that's
a
big
design
change,
but
it
enables
some
other
use
cases.
A
Is
really
good,
but
it's
a
little
off
so
the
idea
here,
let's
see
so
so,
there's
one
kind
of
really
major
change.
That
happens
in
the
way
that
we
interact
with
the
object
store,
and
that
is
that
we
might
upload
a
bunch
of
stuff
to
the
object
store.
So
we
would
put
in
like
the
the
tarball,
the
log
files.
A
We
can
prevent
like
different
valeros,
looking
at
the
backup
storage
location
from
like
fighting
with
each
other,
because
we
put
things
up
there
that
are
not
in
the
completed
or
not
in
the
final
phase,
the
reconcile
should
pull
them
down
and
then
the
local
you
know
the
the
other
server
would
start
looking
at
this
resource
and
go
oh,
you
know
I
should
be
checking
to
see
the
upload
progress,
so
that's
kind
of
a
pretty
major
change
here
that
we
may
want
to
think
through
some
more.
A
But
so
the
idea
here
is
that,
after
we've,
after
we've
taken
all
of
our
snapshots
after
we've
uploaded
the
kubernetes
metadata
that
when
we're
in
one
of
these
uploading
phases,
we
then
go
back
and
pull
the
plugins
on
all
of
their
snapshot,
ids
that
we
have
and
if
they
all
say
we're
done,
then
we're
done
and
we
move
forward.
A
They
need
to
retain
that
state,
but
then
yeah
once
we
we
could
just
pull
them
all
again
and
they
should
come
back
and
if
they
all
stay
completed,
then
we're
done.
If
not,
we
have
our
in-memory
list
of
what
is
completed
and
we
just
pull
in
on
the
things
that
are
not
done
yet.
A
So
the
idea
here
is
that
we'll
change
the
or
add
some
things
to
the
plug-in
apis.
So
we
have
a
upload
progress
structure
that
we
can
ask
them
for,
and
this
will
just
return.
It
has
a
flag
to
say
whether
or
not
things
are
completed.
There's
an
error
to
say
you
know
if
there
was
an
error
that
happened,
and
then
we
have
this
items
completed
items
to
complete
this
could
be
bytes.
This
could
be
objects,
it's
whatever
it
is.
A
Similarly,
for
the
backup
items,
but
we
chatted
about
this
a
bit
of
whether
we
add
this
to
backup
item
action
as
a
new
method
or
whether
we
introduce
a
new
plugin
and
the
current
model
we've
been
doing
is
we've
actually
been
adding
new
plugins.
A
So
we
have
like
delete
item
action
is
a
separate
plugin
from
backup
item
action,
so
this
is
kind
of
following
the
same
model
where
we
have
a
backup
item
progress,
but
I
think
I
could
go
either
way,
but
I
was
just
I
think,
nolan
and
I
bounce
this
around
a
little
bit
and
it's
like
hey.
Let's
just
keep
it
parallel
to
the
existing
plug-in
structure.
A
You
know
munging
records
and
twiddling
bits
and
have
snapshotting
be
explicitly
snapshotting
and
not
just
oh.
By
the
way,
our
backup
item
action
happened
to
do
a
snapshot,
and
then
we
can
go
back
to
tracking
all
of
our
snapshots
like
right.
Now
we
have
the
volume
snapshots
file,
that's
part
of
the
backup,
but
that
doesn't
get
updated
properly
for
backup
item
actions
that
start
doing
snapshots.
A: If we add the progress to BackupItemAction, then why is DeleteItemAction hanging out by itself, and actually, why is RestoreItemAction separate? But, you know, there are good reasons for that. If we put it as its own plugin, it's like, why did you...
B: I think we should just rationalize the versioning of the plugin gRPC APIs.
C: Actually, Scott has revised that design, and it will be a breaking API change, so I've asked him to... I know you're interested in this, and I think Scott is too; we will need help to get the plugin versioning API out early in the 1.7 release. Scott was actually implementing his old design and realized, oh, this won't work.
C: Yeah, it's posted on the core repo. Before he proposed it, he pinged me and asked what he should do to reconcile this, and I told him to update the design. But yeah, for 1.7, I think one of the very first things we'll have to do is make sure we get the plugin versioning rationalized.
G: Yeah, I'm interested in that, because for my contexts work I want to add a context into that, to have a plugin timeout. But yeah, with the versioning it would be great, so the implementer can know which one to use and so on and so forth.
A: Yes, so that's good. As for which way we actually implement the APIs, I'm willing to go either way. Does anybody see any issue with the basic algorithm, where we snapshot everything and then we're polling the plugins in the background? And, you know, timeouts, or polling intervals, and so forth?
C: This is equivalent to the CSI field of "ready to use."
B: For the AWS plugin.
B: Yeah, fair enough. I'm not sure if the AWS driver is implementing CSI snapshotting; I don't know if that changed for AWS.
A
We
made
a
bunch
of
progress
anyway,
so
we're
almost
to
time
so
there's
no
changes.
I
don't
believe,
there's
any
actual
changes
in
the
velo
backup
format
itself,
except
for
you
know,
adding
the
new
phase
and
putting
off
when
we
upload
the
backup
record.
A
little
discussion
here
on
what
things
would
do
pretty
much.
We
talked
about
the
backup,
workflow
and
then
on
restart.
We
discussed
that
a
little
bit
that
we
would
go
ahead
and
look
for
backup
resources
that
are
in
uploading
oops.
C: I have one final question: have you checked this with Shing, to see that this will fit in, or if she has any comments?
C: It's not this, but it is a... this depends on that, so yep.