From YouTube: Velero Community Meeting - Sept 29, 2021
B: My pleasure. So, of course, first of all a reminder: please add yourself to the attendees if you have not done so. And without further ado, we'll have some status updates. Okay, I see that Raphael has a discussion topic, and if there's time I would like to give the Velero strategy presentation that I've been working on, talking about the strategy that we, at least the VMware team members, are probably going to be pursuing.
B: So without further ado, let me go back to here. Dave, I forgot: do I share anything, or do you just talk as I show this?
C: I think if you've got the agenda, that's fine. Yeah, so with RC2 being done, we said that at that point we're going to start looking at merging PRs again. So PRs are kind of open game for maintainers to approve and merge. I took the item snapshotter PR, no changes, took it out of draft, and tagged a bunch of, I think all of, the maintainers to review. If you have time and you're interested, please take a look and comment; we'll have time throughout the 1.8 time frame to make changes. We're also looking at doing some other things.
C: So, you know, we've had our end-to-end tests, and one of the things we've never really done is verify that clusters are actually healthy after being backed up and restored. So we're looking at integrating Sonobuoy, which is another open source project that does things like Kubernetes conformance tests, into the Velero end-to-end tests. We'll have that both for standard Kubernetes clusters, which it currently covers, and we're looking at being able to add plug-in tests for things like our TKG management clusters. This may be something that's of interest for OpenShift or Rancher as well, so that we can have backup and restore and then tests of the full thing. So that's pretty much it for me.
D: Yeah, last week we tagged version 1.7 RC2, and if there are no urgent blocking issues, very likely this commit will be used for the 1.7.0 GA. We will also GA the new AWS, GCP, Azure, and CSI plugins around September 30th, before the end of this week. After that, I'll switch gears to Velero 1.8.
D: One thing I want to mention is that after we switch to distroless as the base image of Velero, some custom plugins may fail after upgrades due to missing dependent libraries. I know that Fong has seen that issue and he's working on it, so please keep an eye on that. I will also add a line in the release notes about this problem.
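As a hypothetical illustration of the failure mode described here (not something shown in the meeting): a plugin binary that links against a shared library the new minimal base image no longer ships will fail at startup. One workaround is to copy the needed libraries into the plugin image explicitly. The image name, paths, and the libgit2 example below are all assumptions for illustration only:

```dockerfile
# Illustrative only: image names, paths, and the library are
# hypothetical. Check `ldd <plugin-binary>` for the real list of
# shared libraries your plugin needs.
FROM debian:bullseye-slim AS libs
RUN apt-get update && apt-get install -y --no-install-recommends libgit2-1.1

# Hypothetical custom plugin image that broke after the base change.
FROM example.com/my-velero-plugin:v1
# Copy the missing shared library into the plugin image explicitly.
COPY --from=libs /usr/lib/x86_64-linux-gnu/libgit2.so.1.1 /usr/lib/x86_64-linux-gnu/
```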
B: Great, thanks Daniel. Wenkai?
F: So we had raised another question about those vulnerabilities that we saw in 1.6.0. So has this 1.7.0 gone through scanning? Is it scanned with any scanning tool?
D: Yeah, we have scanned it using our internal tools. I think I can share this with you. There is still a vulnerability in glibc, but that's the only one. Let me check.
E: However, if a customer adds a Velero plugin, that brings in more libraries, and those might include some vulnerabilities beyond the package that we include, right? Correct, yeah. So it's important that we scan it, and the customer, I mean the developer, can as well.
D: We used our scanning tool to scan these images, and the vulnerability is in glibc, which is still part of the base image. I think we are following up with Debian on getting it fixed. It's marked as won't-fix, and I think that's a mistake, but we are keeping an eye on it, because it still depends on Debian to fix it.
G: Yes, I'm doing some work with Daniel for the Velero 1.7 release, and after that I'm starting to do some investigation of the 1.8 items, which is what I'm doing currently.
H: Hi everyone. So I've had to spend some of my time looking at some of our internal release processes over the last week, but I think I'm done with that now. I'm going to be looking ahead to 1.8, and as part of that, the first thing I'm going to look at is picking up the plug-in versioning work again, the design doc.
H: I think there were another couple of comments left on it which I need to address, but I'd like to get that merged pretty soon, and then I'll sync with Fong on getting the next stages of the re-architecting of the code in place, so that other changes that rely on it can make use of that.
B: Okay, moving on to discussion topics. Raphael, tell us about pre and post backup and restore plugins.
I: All right, thank you. Can I share my screen very quick? I noticed that last time I spoke about this topic I think I lost the audience, because it was way too much information. So this is an attempt to show what those plugins are about.
I: So here's my attempt to explain where I'm proposing those actions, what I'm calling pre-backup actions and post-backup actions, are going to happen. This is what's happening today during the backup, okay? It's a sequence of events: the new backup request comes in, there is a validation of the request, then the volume snapshots, then the final backup phase is determined, then we persist to the object storage. That's oversimplifying all the code. What we are proposing is that the pre-backup actions happen around here, after the backup check: if the backup exists, then we execute the actions, and then we actually do the backup of the resource items. The post-backup actions execute after the backup is fully persisted to the object storage. Now, I had a conversation with Dave.
I: Dave is working on the upload process, and you can correct my understanding, but Dave is pretty much going to break this sequence here. Basically, we're going to do the backup of the resource items and upload, and start doing the volume snapshots and uploads in parallel, doing other stuff in parallel. But nevertheless, the post-backup actions have got to happen after the full backup is done. It's very similar with the restore; I'm not going to go over the details here, but that's pretty much it.
I: The phase is going to be something very simple: in progress, completed, and so on. And then the backup status struct is going to add an optional array of these, because remember, you can have multiple plugins, right? So it's not going to be one status for all the pre-backup and post-backup actions, because you can register multiple plugins, and the same goes for the restore status. So those are the big changes in the design.
I: This is based on some conversations I had with you, especially Dave, thanks Dave, and I'm still refining it. I would really like to have this land in Velero 1.8 once we approve the design. We have a prototype already running, by the way, not with the statuses though, but with the plugins actually executing. That's all I have.
E: Oh yeah, I haven't read the design yet, but do you specify, when we execute the action, whether it's supposed to execute some plug-in code, or is it supposed to plug in some external calls that we can hook in?
I: You load it like any other plug-in, and they're going to be executed inside the pod that the controller is running in. Yes.
I: So today, with hooks, you have to say they're going to be executed per resource item. Let's say you want a hook for a pod or a hook for a PV; you're already inside the backup or inside the restore. With these, you don't give a resource item to execute against. It will execute once: before the backup or after the backup, before the restore or after the restore. One classic example, and there are a bunch of examples we can take advantage of: let's say you want to change something.
I
After
restore
you
want
to
configure
dns.
Of
course
you
can
have
another
script
after
the
valero
restore
to
execute
those.
Of
course
you
can
do
whatever
you
want,
but
what
I'm
those
plugins
happen
is
valero
triggers
those
events.
So
let's
say
you
create
a
valero
plugin
to
talk
to
a
dns
provider,
then
change
the
dns
after
you
do
restore,
for
example,
or
before
the
restore
you
can
increase
the
cluster
size.
J: Another question about that. For example, with the existing, say, RestoreItemAction plug-in, there are two ways those are put in place. There are the external plug-ins that we add as an init container, but we also use restore item actions in Velero core for certain things that are just part of a restore. Once these are in place, I imagine the same for these pre and post backup and restore actions.
I: Yeah, this is for the community to decide, but I do see a lot of core plugins that could take advantage of that. You know, you could have a knob to turn Velero on and off; you don't need to kill the controller. For example, to stop Velero, we can create a plug-in where you just give it a config map, yes or no, and then it stops executing. There are so many others; it opens up a lot of possibilities here.
C: I mean, it's going to follow the existing plug-in setup, pretty much. The plug-in manager knows about internal plug-ins and then looks for the ones in the plugins directory, so we should be able to handle both types. And then I think there's another thing we should put on the list.
C
It
may
not
be
done
right
away,
but
we
should
really
be
able
to
execute
a
command
in
a
container
and
we
should
be
able
to
specify
a
container
and
like
spins
up
a
pod
to
execute
that
so
that
we
get
for
end
users.
We
can
get
some
benefit
as
well,
where
they
can
write
a
script
or
something
to
execute
in
the
container.
C: Right, yeah, because we're currently doing that on a per-pod basis, right? The current pre and post backup hooks are doing that, and I think we should extend that feature so end users are able to use it pretty easily.
H: Yeah, I was just going to say, if you had something where it could be a script, like Raphael, you mentioned having a config map or something, or if you could load some data in with the script and have it execute that. Because I think at the minute all we have are just single commands that can be executed, and I think it's tricky for users to do that well.
I: Oh yeah, this PR is just to create the hooks, and then after that we can improve it and create other plugins.
I: Yeah, I do have a plug-in that quiesces and unquiesces applications; that's part of our project. But whether we're going to open that up to the community is a different conversation.
E: A quick question, sorry: does this add a new type of plug-in? Because we currently have, you know, backup item plugins et cetera. Is it a new type of plugin for this hook update?
I: Yeah, there are going to be four new types. I'm going to call them pre-backup action, post-backup action, pre-restore action, and post-restore action. So, four new hooks.
B: Okay, back to me then; I've got the last two action items. Let me re-share. A quick note about the 1.8 roadmap: we've finalized it a little more. I did not get my act together enough to make a PR, so apologies; I should have that ready by next week.
B: That being said, it's basically what I think I've talked about the last few weeks: things that are in the 1.8 milestone in the repo. The 1.8 roadmap is a subset of these, I think I can say with confidence. So by next week I should be able to show you what team members are thinking of working on, in top priority order. Obviously this really encapsulates what the VMware team members will be working on, so input from any other Velero community members is welcome.
B: Okay, so we've been hinting about this for a long time, but we have been thinking big picture: what are we trying to accomplish with Velero? What is our strategy? So I want to show you what we're thinking and get your feedback on it.
B: This is a common thing to say, I know, but right now we feel like data protection is particularly fragmented across different Kubernetes providers and different clouds. We worry, and frankly I think we've been seeing it with at least VMware customers, that people are going to be hesitant about really moving workloads to Kubernetes until they're sure they have the data protection they feel they need.
B: So we really want to unify data protection and make it easy to use, and then we'll see more Kubernetes users, which benefits the whole community. In terms of what we want to protect: right now, as you all know, with Velero at least, we protect Kubernetes metadata and persistent volumes, and we think we should be looking much, much bigger.
B: We want to protect all apps running in Kubernetes, whether they're in production environments or in development environments, whether they're made by a specific developer or they're a big product. We want everything in Kubernetes to be protected.
B: We want all data stored in Kubernetes to be protected, whether it's in a Cassandra database or a Postgres database or what have you. And we also want all the Kubernetes infrastructure to be protected, so people can bring up their infrastructure again, configured the same way, if necessary. I'm most familiar with Cluster API (CAPI) clusters, but I'm sure there are other examples as well.
B: We think it's really important to have a range of data protection offerings. We love having Velero as this open source option, but we think that for Kubernetes to succeed as a whole, it has to move beyond just folks who want to use open source software. We of course want to encourage enterprise users to use Kubernetes, so we probably need to make sure that the enterprise data protection vendors can protect them too.
B: We also want to make it really easy for app developers to define how best to back up their app. As you know, right now, at least with Velero, the way we back up apps is literally to snap the PVs and back up the metadata, and that does not necessarily capture the state stored in an app. So we want app developers to be able to tell us how to back up their app, and maybe control how their app is quiesced, or whatever is needed to take their backup.
B: We also like the idea, as not just production workloads but obviously development environments move to Kubernetes, of a world where it'd be really nice for application developers to be able to self-serve backup and restore. Like: hey, I'm going to try something new; let's do a backup, and I can restore my environment if necessary. Also, we feel like we're seeing a world where more and more app developers are also app operators, and so wouldn't it be nice.
B: If, when they're deploying a new version of their app, they could quickly roll back and restore a previous version if necessary. We want this to be self-service, without having to open a ticket with a platform team to do it. And lastly, we want to make sure it's easy to migrate data and apps from one cluster to another, even across different clouds and platforms.
B: So this is ambitious, and I want to show you how we're thinking of achieving it, using architecture diagrams. First we'll talk about our current state, and then, by showing what we think the future state should be, that will show you visually what we're trying to accomplish. But before I do that, let me pause; I've covered a bunch of stuff. Any questions?
B: Okay, so, and tell me if this is too small; tell me if you folks can't see it. This is how we see the current landscape. Let's start from the bottom. As you know, we back up metadata and volumes. I'm going to focus on volumes, because I think you all understand the metadata side especially, so it will kind of get shunted to the side for now.
B: The way we back up volumes currently is, frankly, we tend to use either the Velero volume snapshotters or restic, and that's what's used to snap the volumes. Generally, what calls that is Velero, whether independent or embedded Velero, depending on who we're talking about. So let's work our way up here. First, Tanzu certainly uses Velero to back up Tanzu things, and then we go down this path with OpenShift, Rancher, Anthos; I assume those call it too.
B: Of course, there are other ways besides using the Velero infrastructure; Kasten by Veeam, for example, has its own way of backing up volumes. And lastly, customers can automate by calling directly into Velero, via the Velero CLI, or by calling these vendors. These vendors also all have specific controls and specific dashboards to control their products. So, any questions or comments on this current landscape? This is one way to look at it; I'm sure there are other ways to draw this diagram, but are we aligned on this?
B: Great. So here is what we propose the future should be. Let's start at the bottom again. We still have volumes, but we have much more now, because we want to cover more things. We want to have databases, or data service providers; we want apps that developers have written; and we want to protect infrastructure. I put CAPI, that is, Cluster API, for instance, but also other Kubernetes clusters. And so our first big proposal is that we want this south-facing Astrolabe API.
B: The idea is that part of Velero currently, specifically in the Velero plug-in for vSphere, there's this Astrolabe infrastructure that takes care of snapshotting and has, I believe, some data mover capability as well. We want to decouple that and make it a clear API that is in charge of snapshotting whatever needs to be snapshotted. This API is really powerful for two reasons. One, because then everything here can choose to implement it, to customize how it will be snapshotted.
B: They don't have to; the default, for anything, will be to snapshot the persistent volumes and copy the metadata as usual. But something like Cassandra can specifically implement this Astrolabe south-facing API and specify how it should be backed up. I just learned today that Cassandra uses something called Medusa to do its backups, so, for instance, and this is all tentative, a Cassandra database could specify Medusa in its implementation of how it wants to be backed up. The same goes for apps.
B: A developer can specify how to back up their app by implementing this API, and you can definitely have a complex object. So, for instance, forgive me, I know Tanzu best, we could have a Tanzu management cluster, a CAPI cluster. Actually, sorry, I can do this distribution-agnostic.
B: We can have a CAPI management cluster and say: hey, CAPI management cluster, back yourself up. Its implementation of how to back itself up will be to iterate through its objects, which would probably be its workload clusters and other parts of it. Then each workload cluster is asked: hey, how do you back yourself up? And the workload cluster can iterate through its own objects and say: back up this volume this way, back up this database this way. So this API gives us a common way to have everything defined.
B: Everything defines how to back itself up. Plus, we are really hoping, well, Velero obviously will be calling it, because this is just breaking out that Velero infrastructure. We already know Veritas and Dell PowerProtect are using Velero and, frankly, are using bits of this. We're really hoping, and we've heard tentatively good things, that Veritas and PowerProtect will both implement this API and use it. And we're chatting with others, Kasten and Veeam and others.
B: We've got no commitments yet, but we're really hoping to encourage other data protection vendors to use this API, because then, (a) they can protect all of these things that implement it, and (b) it's kind of great for customers. If a customer was covered by Dell PowerProtect for other environments, then when they move to Kubernetes, great, they can just use PowerProtect and everything will still be backed up.
B: For the how and the where question, I'm going to defer to Dave to answer that.
C: It's very similar to what we're currently doing inside the plug-in. For example, on vSphere, our snapshot path for PowerProtect and Velero goes down to the plug-in, we take a snapshot, then we give that back, and PowerProtect figures out how to move things. So we're going to keep this ability to separate data movement from snapshotting where it's appropriate, but without, for example, having to trigger a Velero backup.
C
So,
like
the
kate's
cluster
box,
that's
actually
that
piece
of
valero
that
currently
serializes
a
kubernetes
cluster,
the
item
backup
essentially,
and
so
that
moves
to
being
a
open
api
where
you
don't
have
to
do
a
valero
backup
in
order
to
trigger
the
serializer
and
valero,
continues
to
have
its
plug-ins
inside.
But
then
it's
able
to
call
out
to
like
additional
astrolabe
things.
C: So that, for example, we have TDS databases where we're providing Postgres, and snapping the volumes may not be the best way to do that. We're looking at how we, for example, have the operator provide a backup and restore capability with a common API that we can then trigger from Velero, from the Kubernetes serializer, through the Astrolabe APIs.
F: Yeah, so it looks like there will be hooks, right, on how we move the data once it's snapshotted?
C
It's
snapshotted
yeah
and
there's
there's
things
built
into
the
apis
that
have
there's
like
multiple
transports
that
could
be
provided,
so
you
can
provide
different
transports
like
on
vista.
We
might
return
a
v8p
transport
and
eventually
we'd
like
to
return,
say
an
s3
transport,
but
you
can
pick
which
one
you
want
to
use
because
they're
really
just
how
do
I
get
to
the
snapshot
on
evs?
We
may
return
like
evs
direct
is
available
and
we
also
have
a
way
to
get
to
it
through.
You
know
like
a
s3
style,
api.
C: Well, that's something that we can define; there are various ways to do this, right? There's the Kubernetes app thing that's going on, where operators can define resources that represent instances of apps. Kasten has the Kanister technology that lets them define a thing to be backed up. And we can even do it from Velero, where we say: hey, include namespace X. These are all different ways to do that.
C: It can continue to do exactly what it's doing right now; we can continue to just back up plain old Kubernetes resources and volumes if that works for the application, not a problem. But if we have a more sophisticated application that has sequencing needs and so on, like Harbor, right, we're looking at backing up Harbor: we need to snap the Postgres database, we need this apparatus. So how do we do that?
C
And
how
do
we
make
that
available
to
something
like
valero
or
any
of
our
other
partners,
because
we'd
like
to
do
is
like
we
figure
out
one
way
to
back
up
harbor
and
we
don't
have
to
re-implement
that
we
don't
have.
You
know
like
power.
Protect,
doesn't
have
to
re-implement
that
that's
the
goal
there
and
so
yeah.
You
can
have
things
that
are
just
a
bunch
of
kubernetes
resources
and
bunch
of
pvs
and
we'll
continue
to
back
them
up
the
way
we're
currently
doing
them.
F: So that CBT discussion we're having in the working group, right, would that fall somewhere here?
C
So,
as
an
interim
thing,
so
the
isolated
api
is
right.
So
right
now,
what
we're
missing
in
csi
is
just
take
volumes.
If
nothing
else
right
so
on
on
psi,
we
can
get
a
snapshot,
we
can
clone
the
snapshot
to
a
volume
and
we
can
delete
the
snapshot,
but
we
don't
have,
for
example,
like
a
standard
way
to
get
access
to
the
data
and
we
don't
have
kind
of
standard
ways
to
get
the
change
block
tracking
between
snapshots.
C
B: Excellent, thank you for the discussion. So we've talked a bit about this API that really helps us snapshot things that need to be snapshotted, but there's another part of data protection, which is: how do you control the data protection provider? Right here we've got different data protection providers, as we've talked about. This is obviously driven by Tanzu's needs a bit, but we like this idea of driving it to be an open source API to benefit the community at large.
B
Basically,
we've
got
these
kubernetes
distributions,
tanzu
openshift,
rancher,
anthos,
etc,
and
we
all
have
the
same
needs
for
data
protection
and
so
we're
imagining
a
not
huge,
a
limited
api
thing,
but
things
that
might
do
a
backup
cluster,
restore
cluster
backup,
namespace
restore
namespace
with
the
idea,
then
that,
ideally,
these
data
protection
providers
would
implement
these
simpler
api
commands
for
data
protection
and,
of
course,
yeah
and
and
so
that
way,
then
as
a
standard
across
kubernetes.
B
If
we
can
manage
this
this
way,
we
again
customers,
kubernetes
users,
have
this
freedom,
then,
when
they
choose
a
kubernetes
distribution,
they're
not
limited
to
one
provider
or
another,
or
they
they
get
a
very
nice
integration
without
as
much
effort
by
the
providers
by
the
distributions
etc.
So
we
get
this
nice
clean
way
to
swap
out
a
provider
and
put
in
the
provider
choice
for
a
particular
customer,
and
I
will
note,
of
course,
that
this
api
customer
automations
can
still
call
it
and
for
more
complicated
measures.
C: I mean, there's a million things you can do; each vendor has different schedulers or constraints or things, right, or being able to check the compliance of every backup. These are all areas where the products are different, and we want them to remain that way.
B
Exactly
and
so,
for
these
more
complicated
calls
vendor
specific
controls
like
the
dell
power
protect
dashboard
will
still
be
the
thing
that's
used
by
maybe
a
backup
admin,
but
for
simpler
data
production
calls.
It
would
go
through
this
api
now
before
we
talk
about
this
open
up
to
questions,
so
first
of
all,
it
would
be
open
source
is
what
we're
proposing.
I
want
to
talk
quickly,
maybe
address
the
elephant
in
the
room
about
why
everyone
would
want
it.
B
So
I
think
it's
very
clear
why
kubernetes
users
would
benefit
because
they
suddenly
get
standardization
for
data
protection
over
two
apis
across
the
community,
so
more
options
more
standardization,
I
think
generally
very
much
benefits
the
kubernetes
user
and
then,
therefore
they
can
choose
who
they
like
who
they
want
to
use.
B
Sorry
about
that
the
distributions
benefit,
because
suddenly
now,
instead
of
having
to
work
hard
to
integrate
first
veritas
and
then
valero
and
dell
power
protect,
all
the
distributions
suddenly
have
integrated
all
the
data
protection
providers
who
implement
this
north
facing
api,
so
yay
for
them.
And
if
anyone
who
uses
this
astrolabe
api,
if
we
can
get
this
going
as
a
standard,
then
all
these
things
that
were
formerly
not
backupable
in
a
good
way
are
now
easy
to
snapshot.
B: I would push back on that and point out that there's still a lot of room for these data protection providers to differentiate themselves, in terms of their storage and storage tiering; there are a number of things that are independent of how they interact with the distributions and how they snapshot bits, where they can still differentiate themselves.
B: I think this one will be the easier sell, because it's already being used by a number of providers and it allows access to Kubernetes infrastructure that I think Velero has had a head start on. We're less sure about this other one, but it's something we want to explore further. So before I get into a few more slides, let me ask: are there questions, especially technical questions, around this API or the whole proposal?
D
So
for
any
kubernetes
distributions
to
accomplish
the
real
world
backup
and
restore
work,
it
seems
that
they
still
need
to
deal
with
the
vendor
specific
controls
right.
So
now
they
need
to
call
the
north
facing
oss
api,
but
at
the
same
time
they
still
need
to
deal
with
vendor-specific
controls.
C
Well,
they
don't
have
to
call
the
vendors,
they
don't
have
to
call
either
one,
so
they
can
use
everything
through
the
vendor
specific
controls
if
they
want
to,
because
you
know
all
these
things
already
provide
ways
to
save
backup
they
can
also,
if
their
needs
are
simple
right.
So,
like
a
lot
of
people
like,
for
example,
like
valero
baler,
will
mostly
run
through
this
north
facing
api.
C
There's
not
going
to
be
a
lot
of
extras
for
valero,
but
what
we
see
in
a
lot
of
larger
environments
is
there's
already
a
data
protection
system
in
place,
especially
in
on-prem
environments.
We've
got
power,
protect
installed,
we've
got
net
backup
installed,
there's
a
team
that
takes
care
of
that.
That
manages
you
know
the
tape
tiering
and
all
that
stuff.
But
from
the
kubernetes
point
of
view,
hey
do
you
want
to
learn
about
net
backup?
Do
you
want
to
learn
about
veritas?
C: But we do want to have this experience where, if the user says, yes, I've installed Tanzu and I've installed Velero and this is good, but suddenly I need a much bigger system to handle this, there's a nice upgrade path for them. It shouldn't suddenly be: oh no, because you switched to PowerProtect, this and this don't work, or whatever, right? That's, selfishly, our...
D
Our
goal
yeah
I
understand,
but
but
if
you
think
into
the
details,
the
reason
users
want
net
backups,
for
example,
instead
of
valero,
because
they
want
that
backup,
specific
things.
So
how
I'm
just
a
little
curious
how
this
north
facing
api
can
cover
all
these
vendor-specific
things.
C: Yeah, that's an example. Or, for example, you've installed a tape system; Velero doesn't do anything with tape, but that's all stuff you're going to control through the existing vendor APIs. Or, when do I tier things off: say we have some on-prem storage and it tiers off to an object store at Amazon.
C
Those
are
all
vendor-specific
controls.
You
know
when
is
this
happening?
How
is
this
going
to
happen?
B: Oh, just a quick note: we're also seeing more of a movement towards developers having more autonomy as they do their development, so we may see a lot of these simpler API commands, like backup cluster and restore cluster, being used by developers in development environments, whereas the hardcore backup would still be owned by the backup vendor, maybe using the vendor-specific controls.
B: One thing that's common for our Tanzu customers is that we're oftentimes talking to an enterprise customer, and they want to continue using the data protection vendor they're already using for other environments. This is where I think it really benefits everyone. It benefits the Kubernetes distributions, because we get someone to move to Kubernetes who was maybe resistant.
B
The
data
protection
provider
gets
to
keep
their
customer
but
cover
a
new
area,
and-
and
so
because
it's
all
swappable
we've
got
all
these
integrations
already
in
place
or
because
everyone's
using
an
api
we've
got.
These
integrations
easily
in
place
as
opposed
to
having
to
laboriously
build
out
each
one.
C: One open question is whether we go down the Velero route, where you write things like backup or restore resources, or move towards something that's more policy-based, where you can set policies on the clusters: back me up every day using this backup policy. Those are things we're going to have to figure out, and that's something where we'll talk with all of the potential implementers of this, including you, right? Because a big part of this is that, originally...
C
This was just going to be for Tanzu usage and be available for TMC plus Tanzu customers, and as we were having this discussion internally, it was like, well, why don't we just open source this and make sure it is available everywhere, and let everybody else (you know, not just the data protection vendors, but also the other distributions in the community) have a say in what this needs to do. So that's why we're going down this path, but it's not very well set in stone, other than that it should be run through Kubernetes.
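The imperative-versus-policy design question raised above could be sketched roughly like this. Everything here (resource kinds, field names, the selection logic) is a hypothetical illustration, not an actual Velero or north-facing API definition.

```python
# Hypothetical sketch of the two API styles under discussion:
# imperative one-shot backup resources vs. declarative policies
# that select clusters by label. Illustrative names only.

def imperative_backup(cluster: str) -> dict:
    """Imperative style: 'back up this cluster now' (Velero-like)."""
    return {"kind": "Backup", "cluster": cluster}

def expand_policy(policy: dict, clusters: list[dict]) -> list[dict]:
    """Declarative style: a policy selects clusters by label and
    implies a recurring backup for each matching cluster."""
    selector = policy["selector"]
    return [
        {"kind": "ScheduledBackup",
         "cluster": c["name"],
         "schedule": policy["schedule"]}
        for c in clusters
        if all(c["labels"].get(k) == v for k, v in selector.items())
    ]

clusters = [
    {"name": "prod-1", "labels": {"env": "prod"}},
    {"name": "dev-1", "labels": {"env": "dev"}},
]
policy = {"selector": {"env": "prod"}, "schedule": "@daily"}
# Only prod-1 matches the selector, so only it gets a scheduled backup.
print(expand_policy(policy, clusters))
```

The trade-off the speakers are weighing is visible even in this toy form: the imperative call is simple for a developer to invoke, while the policy is something an operator sets once on a fleet of clusters.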
B
Yeah, but just to emphasize what Dave said: the south-facing Astrolabe API has been, I think, percolating in Dave's brain for years now, so we've got a better sense of that. This north-facing API is much fresher, and in both cases we want input, but especially the north-facing one is very amorphous right now.
B
B
B
Whichever data protection vendor the user wants: the two APIs together really let the user choose whichever vendor they want, and ideally they fit into the whole architecture. Allowing app developers to easily define how best to back up their apps: that Astrolabe API, the south-facing one, lets you define that. The self-service backup is, as we talked about, really what the north-facing API allows. And that easy migration is especially the south-facing API; we'll get that. We won't go into those details, but feel free to ask another time. Now, to best fulfill this Kubernetes data protection strategy.
B
Both of these APIs ideally need to be widely adopted, perhaps even becoming Kubernetes community standards like CSI and CNI. Very ambitious goals, so we'll see if we manage that. Some next steps now. Actually, I should have put this here. For sure, we need to start bringing this to the data protection working group, so maybe I'll even just call that out here: go to the DP working group. That is definitely on our agenda, specifically for that south-facing API.
B
We'll talk about this more, hopefully next week, but we have data mover work coming in 1.8; Bridget's going to be working on that. Especially, we definitely do want to do this Astrolabe decoupling, as we've been calling it, thinking that it will be Velero 2.0 sometime next year. I assume that it'll be out, but no commitments yet. Oh, and we want to work with data protection vendors to understand what may be needed in the API, to perhaps get them to adopt it, to encourage adoption.
B
Oh, and I put it here, okay, so I didn't forget. Sorry, it's late at night for me; it's 9 p.m. And of course we want to work with the data protection working group on the north-facing API as well. Yeah, okay, so it's already an approved Tanzu strategy, so we know we're going to do this. So really, our next steps are: we want to talk with the OpenShift folks and Rancher and Anthos to see if other Kubernetes distributions want to do this.
B
B
So that's that for next steps. I apologize; this is the first time I've given this presentation, so I clearly did not think through the slides as well as I should have. And I just want to call out: we are not going to abandon Velero by any means, even though we are taking the infrastructure in a bit of a new direction. We still have some big features on the long-term roadmap. Things like: we want to replace restic with Kopia, to eliminate a dependency on kind of a less stable backup path and to get incremental backup and CBT (changed block tracking).
B
B
We constantly hear about multi-cluster support. Tentatively, right now, we're thinking of scoping it to a single CAPI (Cluster API) management cluster and its associated workload clusters, but that's very much not defined yet. We oftentimes hear about encryption at rest; it's pretty important to a lot of users. We're considering increasing performance through parallelization: both parallelizing within a single backup, that is, parallelizing the resources that are being backed up, and then having multiple backups running.
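The two levels of parallelization being considered here (resources within one backup processed concurrently, plus several backups running at once) could be sketched roughly as below. The backup function and resource names are made up for illustration and are not how Velero is actually implemented.

```python
from concurrent.futures import ThreadPoolExecutor

def backup_resource(resource: str) -> str:
    # Stand-in for real work: snapshotting or serializing one resource.
    return f"backed up {resource}"

def run_backup(name: str, resources: list[str]) -> dict:
    """First level: parallelize within one backup, so the resources
    being backed up are processed concurrently."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(backup_resource, resources))
    return {"backup": name, "items": results}

def run_backups(backups: dict[str, list[str]]) -> list[dict]:
    """Second level: run multiple backups at the same time."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        return list(pool.map(lambda kv: run_backup(*kv), backups.items()))

all_backups = {
    "nightly": ["deployments", "configmaps", "pvcs"],
    "adhoc": ["secrets"],
}
for result in run_backups(all_backups):
    print(result["backup"], len(result["items"]))
```

In a real backup engine the interesting part is what this sketch ignores: ordering constraints between resources and contention on the storage backend, which are presumably why the feature is still only under consideration.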
B
At the same time, we've got a pretty big bug backlog right now; we're not loving that, so we'd really love to work that down in the GitHub repo. We'd love to improve our documentation, and more things. None of these things (we'll talk about 1.8 next meeting) are scheduled yet, to be very clear, but they are absolutely on our minds. We are not going to suddenly stop working on Velero. So, we only have three minutes today, but certainly this is an open plea.
B
B
F
Not strategy, but we are seeing a lot of, I guess from the customer side, vulnerabilities; security being in focus. So I guess one of the questions that's being asked is: is there any support statement, in the sense (I know it's an open source community) of what the timelines are within which vulnerabilities will be addressed, and questions like those.
B
Oh, you're also from Dell? Cool. Yeah, I think the short answer is we don't have any published timelines for addressing security vulnerabilities. I don't have a good answer; great question. Does anyone else on the call have a better answer?
C
Well, I would say that at this point we've seen very few actual vulnerabilities in Velero. That doesn't mean they aren't there, but those aren't what's been reported. What people do is they'll do a scan: they scan the container, and they say, oh, this library, you know, has a bug in it. And it's like, yes, it does, except we don't use that library. So I think distroless will cut down on that a lot, and then it's mainly going to be a matter of upgrading to the latest
C
Go, because that's going to be the thing that mainly bites us, along with things like glibc in the distroless containers. So that's going to cut down on the number of these reports. And then, you know, if there's actually a Velero vulnerability where it's like, hey, if I write this resource I get to, I don't know, erase every disk in the system, I think we would respond to that very rapidly.
C
C
F
In a case like that, should we respond saying it's a false positive, kind of thing? That it won't affect us: even though the library is there in the image, it is not used.
C
You know, that's a perfectly reasonable way to respond. The other thing is, with the distroless move, as soon as you're ready with the plugins to work in the distroless environment, you can say: hey, you know, we cleaned up a bunch of these things; you can look at any of these CVEs.
C
F
Okay, yeah. So then, we are using Twistlock. So I guess, well, that's the plan: we will scan 1.7 with Twistlock, and then, I guess, maybe next meeting we can discuss what we see.
C
Yeah, yeah, definitely. And we just have to see how we move forward. I don't know; I think we try to use the very latest distroless image, and that's pretty much coming out of Google, so hopefully they're moving pretty quickly on fixing those things.
B
Okay. And as a security note, I will mention that internally, VMware is doing a threat analysis of Velero with TKG, Tanzu Kubernetes Grid.
B
Nothing has been turned up so far. Certainly, if something is, I'm sure we'll address it, but tentatively, we are having some additional security experts look at Velero in the context of other VMware things, so that should give a little bit of additional comfort from a security point of view.
B
So, without further ado: I see we're a minute over. Thank you all very much; have a wonderful night or morning, depending on your time zone, and we'll see you all, I believe, next week. Or, some of you we'll see next week; some of you will be sleeping.