From YouTube: Velero Community Meeting - June 18, 2019
Description
During this meeting we discuss the CSI prototype plugin, read-only storage locations, and restic improvements.
Keep tabs on the community calls here: https://github.com/heptio/velero-community
A
Hi everyone, and welcome to this week's Velero community meeting. Thank you all for joining. In this meeting we make sure that we talk about the upcoming releases, the work that we're doing and the stuff that we're working on; we do demos and all kinds of things. So if you have any questions throughout the meeting, please either voice them in the chat or speak up. This is being recorded.
B
All right, welcome everyone. The main things we wanted to cover today are a couple of demos of some of the work that we've been working on recently. If you attended the last meeting, you know that we've started working on the 1.1 release. We decided on an initial scope there and started working on those items, so we're going to start with demos of a couple of those items.
C
So up here, I'm showing we've got a namespace called demo with a pod called app, and that is linked to a PVC; that's what we're going to be backing up today. Down here, I'm just showing that all the pod is doing is writing the current time into this data.out file, so it's just getting some data in there before I start up the Velero server.
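The workload being described can be sketched as a manifest. Everything below (image, file path, PVC and container names) is an assumption reconstructed from the description, not taken from the demo itself:

```yaml
# Hypothetical reconstruction of the demo workload: a pod named "app" in the
# "demo" namespace that appends the current time to a file on a PVC.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "while true; do date >> /data/out; sleep 5; done"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-pvc   # assumed PVC name
```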
C
Here we go. The snapshot source shows the snapshot handle, which is the name of the snapshot in AWS; the restore size, how big it will be; the driver; and when it was made. We have a deletion policy, which I will get into afterwards. There are actually some bugs around this that will need to be fixed for Velero to function like it currently does with its own plugins. And there's also a pointer to the snapshot that created this.
C
I think I know what happened here: there's actually a bug that I introduced in the plugin last night, my apologies. What's supposed to be happening is, inside of our Velero CSI plugin repo, we have a plugin, a CSI snapshotter for PVCs, and that will map a volume snapshot as a data source for the PVC. But here, I think a change I made late last night is actually preventing this from happening.
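The mapping being described, a VolumeSnapshot used as the data source of a restored PVC, looks roughly like this. The names are hypothetical and the API shape follows the alpha-era CSI snapshot feature:

```yaml
# Sketch of a PVC restored from a CSI VolumeSnapshot; names are made up
# and are not from the demo.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-pvc-restored
  namespace: demo
spec:
  dataSource:
    name: app-pvc-snapshot          # the VolumeSnapshot to restore from
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```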
C
On the deletion policy here: this deletion policy of Retain says that when a VolumeSnapshot is deleted in a namespace, the VolumeSnapshotContent should be retained. But if I delete the VolumeSnapshotContent object, then the snapshot on EBS, this one right here, is also deleted. This is something I'm discussing with the CSI team upstream, to make sure that administrators can leave snapshots behind if they would like; that would get us closer to the current Velero behavior. If these snapshots are deleted, then Velero can't really restore using CSI.
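The object being discussed can be sketched like this; the field layout follows the alpha-era CSI snapshot API, and the driver name, handle, and object names are illustrative, not from the demo:

```yaml
# Sketch of a VolumeSnapshotContent with the Retain deletion policy: deleting
# the namespaced VolumeSnapshot keeps this object (and the cloud snapshot),
# but deleting this object still deletes the snapshot on EBS, which is the
# bug being discussed.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotContent
metadata:
  name: snapcontent-example
spec:
  deletionPolicy: Retain
  csiVolumeSnapshotSource:
    driver: ebs.csi.aws.com
    snapshotHandle: snap-0123456789abcdef0   # hypothetical EBS snapshot ID
  volumeSnapshotRef:
    name: app-pvc-snapshot
    namespace: demo
```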
C
B
C
Three months out, yeah. Okay, so not the one happening this week; I think there are some blockers on that release right now. Our current goal is to track with the upstream CSI status: taking our alpha plugin and moving it to beta when they're in beta, and then seeing what changes, if any, we need to make to core Velero to make the support more seamless.
A
C
At the same time, CSI is not fully released, so we can't just completely switch over, and there are also some concerns with getting your CSI drivers set up; right now it's kind of a process and kind of a pain. I imagine managed Kubernetes services will set this up for you, so EKS on Amazon, and GKE, and AKS will all set up the appropriate CSI drivers for you, but right now there's a lot of legwork to get that set up. So I don't see our plugins going away anytime soon. As for the long-term goal:
C
It would be awesome if we could deprecate them, but I don't see that happening in the near term at all, just so we can provide a smoother transition. Also, speaking of transition: CSI has a method of importing snapshots to be managed by CSI. I haven't looked into doing that for you, but that is a thing it can do; that's a use case they're considering. Cool, thanks. Yeah.
B
I would definitely echo everything Nolan said. For us, the biggest win here with being able to integrate with CSI snapshots is that any provider that implements the CSI interface now automatically becomes able to integrate with Velero. So we no longer have to write a custom plugin for Velero, or have a contributor write a plugin; you just kind of get snapshot support for free.
B
So if you're already on a platform that Velero supports, then this may be less exciting to you, at least in the short to medium term. But for all those other CSI implementations that exist out there, once we get this work locked down, you'll have an integration with Velero for free, so we're definitely excited to be able to support all those additional platforms. I also want to reiterate that the work we're doing right now is really a prototype for us in the Velero space.
B
So we're learning about CSI and snapshots, and we're thinking about how we actually want to do the integration with Velero long-term. The plugin approach that we've used right now is our idea for implementing this as a prototype in a kind of non-intrusive way, but the implementation is certainly subject to change as we continue to learn about it and figure out how we want to design it.
A
B
So I wanted to do a demo of a pull request that I worked on a couple of weeks ago. The idea is that in Velero in the past, in particular if you have maybe two clusters set up, you might have backups that you took in the first cluster, cluster A, that you want to restore into another cluster, cluster B. To support this:
B
We had a restore-only flag for the Velero server, so you could put the Velero installation that's in cluster B into restore-only mode. That way you could safely connect it to cluster A's backup storage location and only be able to restore those backups, but not be able to accidentally garbage-collect them, delete them, or create new backups in cluster A's backup location. And so this worked
B
okay, but the issue with it was that this flag, the restore-only flag, was basically at the Velero server level. What we realized is that a lot of times you would just want to put a single backup storage location into restore-only or read-only mode. So you might have cluster B continuing to take backups into its own backup storage location, and you would want that to continue to happen.
B
But you would want to be able to configure it to restore backups from cluster A's backup location, and this PR basically makes that change. It adds support for putting individual backup storage locations into read-only mode, so that you don't have to do it for the entire Velero server. So I have a little demo; I'll share my screen and we can go through it.
B
All right, let's see. First of all, the backup locations that I have right now: I just have a single backup location configured, called default. You can see here that we now have an access mode displaying through the CLI, and this one is currently in read/write. I already have a backup here for nginx, just using the demo workload, but I can create another one, just to show that this is actually in read/write mode. So I'll just call it nginx-2, and include the namespace nginx-example.
B
All right, so that's done; now we have two backups in our read/write location. What I did before the demo is I set up another bucket in object storage that has some backups in it. What I'm going to do now is configure this Velero instance to connect to that backup location, but to connect to it only in read-only mode. So I'll show you the YAML that I'm going to apply.
B
So this is a backup storage location called old-cluster; you can imagine these backups were created from some other cluster, and we actually want to be able to use one of them to restore into the current cluster. The important thing here is that in the spec for this backup storage location, you'll see that I've specified the access mode, and it's set to read-only. This is kind of the analog of the restore-only flag, and other than that it looks just like an existing backup storage location.
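The manifest being applied would look roughly like this. The `accessMode` field is the one being demonstrated; the bucket name and Azure config values are assumptions, not read from the screen:

```yaml
# Hypothetical reconstruction of the read-only backup storage location from
# the demo.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: old-cluster
  namespace: velero
spec:
  provider: azure
  objectStorage:
    bucket: velero-old-cluster
  accessMode: ReadOnly   # analog of the old server-wide --restore-only flag
  config:
    resourceGroup: my-resource-group     # assumed Azure settings
    storageAccount: mystorageaccount
```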
B
All right, so the backup storage location was created. Now if I just look at backup locations, we'll see that I have two of them: my default one is still around and it's in read/write mode, but this new old-cluster one is in read-only mode. And I believe that Velero just saw that there was a new backup location and actually synced those backups into the cluster. So if we now do a velero backup get, we'll see we still have nginx and nginx-2, which I just showed.
B
If you look in the log on the bottom here, you'll actually see that Velero is saying it can't be garbage-collected, because the backup storage location is currently read-only. This is what you would want if you had some old backups sitting around, all of a sudden realized you needed to restore them, and didn't want them to get, importantly, deleted. So I guess the other thing I'll show here is, let's say you go to create another backup.
B
So what happens is, I tried to create a backup in the old-cluster storage location, which we know is currently in read-only mode, and you'll see that it says the backup completed with the status FailedValidation. If we actually look at what happened here, right at the top, the phase is FailedValidation, and we get an error that says a backup can't be created because the storage location, old-cluster, is currently in read-only mode. So again, we're not allowed to create new backups.
B
Just while this is deleting: any backup storage locations that don't have the access mode specified will default to read/write, so they'll function just like backup storage locations do today. It's a fully backwards-compatible change, and we will also be keeping the restore-only flag around on the Velero server command, probably until 2.0.
B
So you're certainly still able to use it if you want to, but we hope that this gives you a kind of finer-grained way to manage your backup storage locations. All right, so the nginx-example namespace is gone now; the PV is still here, but that should be okay. I'm actually just going to manually delete it to make sure it doesn't interfere with the restore.
B
All right, the restore is completed. So if we look at namespaces now, we've got our nginx-example namespace back, and everything seems to be good there. I'm not going to go into the details of the workload, because it's not super important for this demo. The last thing I'll show is that you can definitely change the access mode for your storage location. So maybe, you know, I'm done restoring from this and I want to flip it back to read/write mode so that I can actually create new backups in it.
B
I can just go in here and switch this to read/write, and now, if I just do a velero backup-location get, we'll see it's in read/write mode. What you'll actually see happening here shortly, once Velero picks up that this backup location is in read/write mode, is that these two backups will both get garbage-collected, because it is now a writable location and they're both expired. But it will take a second for it to rerun its check for expired backups. So that pretty much covers the demo.
B
Hopefully this is a useful feature. I think, definitely for folks who have multiple backup storage locations and are doing migrations across clusters, this can be useful, and it helps prevent you from having to flip the server into full-on read-only mode. So, any questions on that?
D
Otherwise they can just read the question as well. Hey, sorry, I was muted. Yeah, I'm new, so this is kind of an ignorant question, but I just wanted to ask: where are the backups physically stored? Is it like external storage? I saw somebody answered, I guess, but could you speak a little bit to that, just because I'm new?
B
So Velero stores backups in an object storage system. The cluster I'm running on right now is in Azure, so I'm using Azure Blob Storage, but typically you would configure, you know, an S3 bucket, or Azure Blob, or Google Cloud Storage, whatever platform you're using. And if you look at, say, for my demo, the default backup location, it basically stores the configuration for where to store backups.
B
So in Azure there's a concept of a resource group, so this specifies the resource group. There's a storage account, which is effectively a collection of object storage buckets, so I've specified the name of the account, and then I've actually given it the bucket name here; that tells Velero where those backups would go. If I look at the similar configuration for this other backup storage location, you'll see it's got similar information, and the only difference is that it's actually using a different bucket here, called velero-old-cluster.
D
B
Velero does need to authenticate. When you install Velero, you essentially provide it with a set of IAM credentials for your platform, and that IAM account needs to be configured with a policy that gives it access to these object storage buckets. We have examples in the documentation that can walk you through what a typical policy or setup would be for those, but you do need to create an IAM account and essentially pass it to Velero through a secret in Kubernetes.
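A sketch of how those credentials typically reach Velero: a Kubernetes Secret holding a credentials file. This is an AWS-style illustration; the secret name, key, and file format are assumptions and vary by provider:

```yaml
# Hypothetical credentials Secret in the velero namespace; the "cloud" key
# holds a provider credentials file, here in AWS shared-credentials format.
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: velero
type: Opaque
stringData:
  cloud: |
    [default]
    aws_access_key_id = <YOUR_KEY_ID>
    aws_secret_access_key = <YOUR_SECRET>
```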
A
E
I don't have a demo, but maybe I can have a demo next time of what I'm working on. We have quite a few issues to improve restic's usability as well as performance, and the one I'm working on now is to change the way we store restic backups. Right now, the way we do it is we have a custom resource called PodVolumeBackup.
E
That reflects the restic backup, but instead of storing that in our object storage, like we do for volume snapshots, we instead make an annotation on the corresponding pods with the snapshot ID, and that's what we use when we need to restore restic backups. So we are changing that: we are going to serialize the PodVolumeBackups and store them in object storage as a separate tarball, part of the backup, and that's what we're going to use to restore restic backups.
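The pod-annotation approach being replaced looks roughly like this. The annotation key and values shown are an assumption about the convention, not quoted from the meeting:

```yaml
# Sketch of the old mechanism: the restic snapshot ID recorded directly on
# the backed-up pod as an annotation, keyed by volume name.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: demo
  annotations:
    snapshot.velero.io/data: "1a2b3c4d"   # <volume-name>: <restic snapshot ID>
```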
E
We are going to maintain backwards compatibility, so basically we will check if the serialized file is there; otherwise we're going to continue using the pod annotation to restore. And that, like the other legacy behavior I mentioned, is going to go away with 2.0. Yeah, I can maybe show that next time. And on our Velero board there are other issues related to restic, if anybody is interested.
B
Sure, I didn't necessarily want to get into too much detail here, but I did want to, yeah. I just wanted to remind folks that we use a tool called ZenHub for doing our release planning, and this is a link to our board. This is the best way to get an up-to-date view on what we're currently working on and what the status of everything is.
B
So, quickly, the way to orient to this is that if you look right in the middle of the screen, you'll see a pipeline called Release Backlog, and this is everything that we currently have in scope for the current release. These are all just GitHub issues, and you can see each of those has a milestone of 1.1, and I think we've pretty much assigned all of these out to team members.
B
So this is essentially what we're currently planning to work on in the release but haven't started yet. Then, as you work your way from here over towards the right, there's an In Progress pipeline, so anything that we're actively working on right now for the release is in there. Review/QA is anything
B
that's essentially been submitted as a pull request and is waiting for review, and then Closed Issues is everything that we've already completed. So this is the best way to get that snapshot of what we're working on. We're not necessarily saying that we 100% will complete everything in the release backlog; this is our intended scope, but we certainly reserve the right to change that if things are taking longer, or, on the flip side, if we have extra time, we may pull additional items into there.
A
Next up: so we posted this a while ago. Tom Spoonamore, who's on the call here, created an issue regarding letting us know how you use Velero. We have gotten a lot of really good comments in here; if you have some other use cases, please let us know. We've gotten a bunch of them here, so this is really amazing. We love hearing about how the community uses Velero, and we would like to use these as use cases on the new website as well, going forward. So this is awesome.
B
Absolutely. We've gotten a few contributions, and it's possible this list may not be a hundred percent up to date, because I was out for a few days and may have missed any recent contributions, but I wanted to shout out a couple of contributions that came in. The first one, from will hem (apologies for butchering any of these GitHub handles), was a contribution to add support for wildcards when you're including and excluding resources in Velero.
B
So you can use this if you're creating a backup and you want to include all of the resources from a particular API group: instead of having to list them out one at a time, you can now say include resources "*.foo.com", and that will capture all of the resource types from that API group. So this is a really nice usability improvement.
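Expressed as a Backup spec, the wildcard include described above looks roughly like this; foo.com is the speaker's placeholder API group, and the backup name is made up:

```yaml
# Sketch of a backup that captures every resource type in one API group
# via the new wildcard support.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: foo-group-backup
  namespace: velero
spec:
  includedResources:
  - "*.foo.com"   # all resource types in the foo.com API group
```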
B
It's something we've had on the backlog for a while and just haven't been able to get to, so thanks a lot for this PR; it'll be shipping in 1.1, so folks will be able to take advantage of it then. We had another one come in around adding support for having multiple AWS profiles within the AWS credentials file. This can be really useful if you have multiple backup storage locations and you want to use a different set of AWS credentials per backup storage location.
B
So what you can do is, when you're creating the credentials file that you're providing to Velero through a Kubernetes secret, you can just have multiple profiles in there; each one has a name and then a set of credentials associated with it. Then, for your backup storage location, you can specify a new config key that says which profile should be used when accessing that particular backup storage location. So this adds support for using different credentials for different buckets, essentially, if you're on the AWS platform.
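A sketch of that setup, assuming a credentials file containing a `[default]` and a `[secondary]` profile; the bucket, profile, and region names are illustrative:

```yaml
# Hypothetical backup storage location that selects the [secondary] profile
# from the shared AWS credentials file via the new "profile" config key.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: secondary
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-secondary
  config:
    region: us-east-1
    profile: secondary   # which profile in the credentials file to use
```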
B
A
All right, I think that concludes today's community meeting. Thank you all so much for joining, thank you for the great demos, and thank you for the questions here. This will be up on YouTube shortly, so I'll post that in the Velero Slack channel over on the Kubernetes Slack as well, and we'll see you all in a couple of weeks. Thank you so much, everyone.