From YouTube: Velero Community Meeting - July 16, 2019
A
Hi everyone, and welcome to this episode of the Velero community meeting. My name is Jonas Rosland, and I'll be your host today. We have a bunch of stuff that we want to cover, and we want to make sure that we get time in for your questions as well. So please, if you have any questions, just unmute yourself and ask away, or if you would rather type them out in the chat, you can do that as well; if you can't speak up, just let me know and I'll read them out.
A
Alright, I'm going to share the agenda here so you all can see what we're going to be talking about today. Give me one second... and sharing. Alright, so the agenda for today's community meeting: we're going to start off talking about the user survey and a webinar we have coming up, then we've got Steve, who's going to do a demo of a new plugin, I'm going to do a little deep dive into a CSI support proposal with Nolan, then there's a design proposal for running each backup and restore in its own worker pod, and then Carlisia will be talking about adding the pod volume backups to object storage as well. So we'll start off with the Velero user survey. This is something that we talked about in a previous community meeting as well: we will be doing a user survey.
A
This is not just for Velero users, but rather anyone that is interested in backup and recovery, data migration, or disaster recovery for Kubernetes environments. It will be launching soon, most likely on Thursday, so we'll make sure everyone knows about it then. And this coincides with the other thing we've got going on on Thursday, which is the Velero webinar. The CNCF, the Cloud Native Computing Foundation, hosts weekly webinars for the many really awesome open source projects that they host, essentially, and we have been accepted to host one such webinar; that's going to be on Thursday. So on Thursday, please join us. We've got Steve here, and Tom, who are going to be presenting, and if you have questions after this meeting that you want to ask Steve or Tom during the webinar, please do so. Please sign up and join the webinar on Thursday. Any questions regarding the survey or the webinar?
B
Looks good. Okay, cool. So yeah, I wanted to run through a quick demo of a feature that we merged into master fairly recently, and this is something that we've been hearing a decent amount of demand for from users. The feature basically adds the ability in Velero to change the storage class of a persistent volume or persistent volume claim as you're restoring it into a cluster, and there are a couple of different use cases for this.
B
One may be that you just want to change the storage class, so you want to switch to using a different storage class for your persistent volume. But it also comes up a lot in cross-cluster migrations, and especially cross-provider migrations. So imagine you're migrating a workload from an AWS cluster into a GKE cluster: in AWS you may have a storage class called gp2, but in GKE you're not going to have a storage class by that name.
B
In the velero namespace I'm currently running the master image of Velero; this feature has been merged into master, but it hasn't yet gone out in an official tagged release. So you can check it out using master if you want to, but we wouldn't recommend using this in a production cluster yet. As for the namespaces I have: I have the standard nginx example namespace set up, so that's the workload I'm going to be backing up and restoring here. And if we take a look at our persistent volumes and our storage classes, you can see...
I have one persistent volume here, and this is being used in that nginx example. This is wrapping a little bit, but the storage class is managed-premium. And then, if you look down here at the set of storage classes that I have in the cluster, in addition to a default I have a managed-premium storage class and also a managed-premium-retain storage class. So what I'm going to end up doing is a backup and restore of the nginx example namespace that switches the storage class from managed-premium to managed-premium-retain. The way we implemented this is as an additional restore item action plugin, an in-tree plugin that operates during restores, and you can see it using the new velero plugin get command. So if you look right here at the list of restore item actions, this change-storage-class plugin is the new plugin that implements this logic.
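For readers following along, the command being run here is just the new plugin listing command mentioned above:

```bash
# List the plugins registered with the Velero server, grouped by kind;
# the restore item actions include the new change-storage-class plugin.
velero plugin get
```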
I'll kick off the backup and let that run. The other thing I want to show is how this plugin is configured, which is through a ConfigMap in the velero namespace. So I have this one ConfigMap here; let's take a look at the YAML for it. We'll have this documented so that it's clear how to set it up, but the most important part is that within the data for the ConfigMap we just have a really simple mapping.
You have a key whose name is the current storage class that you want to remap during your restore, and the value is the new storage class that you want to assign. So in this case we're saying that any persistent volume or persistent volume claim that's using the managed-premium storage class should be restored using the managed-premium-retain storage class. Let's just check and make sure that the namespace has been deleted and that the persistent volume has been deleted.
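For reference, a minimal sketch of the ConfigMap from this demo. The name is arbitrary; the labels follow the plugin-configuration convention the restore item action uses to discover it, and the exact details are worth double-checking against the docs Steve mentions once they land:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Any name works; the plugin finds this ConfigMap via its labels.
  name: change-storage-class-config
  namespace: velero
  labels:
    # Marks this ConfigMap as plugin configuration.
    velero.io/plugin-config: ""
    # Names the plugin this configuration belongs to.
    velero.io/change-storage-class: RestoreItemAction
data:
  # <storage class to remap>: <storage class to restore with>
  managed-premium: managed-premium-retain
```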
So that's all good to go. The next thing I'm going to do is just create a restore from my demo backup, and since the ConfigMap already exists, there's nothing more I need to specify here; it'll be picked up as part of the restore. Then I'll just put a watch on persistent volumes so we can see as the PV comes up... alright, so that new persistent volume has been created.
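The two commands in this step look roughly like the following; `demo` stands in for whatever the backup created earlier in the demo was actually named:

```bash
# Restore from the existing backup; the storage class mapping is picked
# up automatically from the ConfigMap, so no extra flags are needed.
velero restore create --from-backup demo

# Watch persistent volumes as the restore recreates them with the
# remapped storage class.
kubectl get persistentvolumes --watch
```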
C
I had a quick question about that. In some of our migration scenarios we're actually in the process of doing something similar, but our code is letting users choose, on a per-PVC basis, whether they want a different storage class in the new cluster. Again, it's a similar use case: you're going to a new version that might not have the same storage classes, or you might want to go from, say, Gluster to Ceph or something like that. How is this plugin different from the rest?
C
B
C
B
A
Maybe, yeah, a little bit.
D
Yeah, perfect. Okay, so the background here is, if you've been listening to past community meetings: over the last months we've been trying to investigate supporting CSI volumes and the currently-alpha CSI snapshot API that has been developed in SIG Storage and the CSI working group. This proposal is in a new format that we're using for design proposals.
Anybody can make these, but this proposal is coming out of some of the work I did on a prototype plugin for supporting CSI snapshots. So we have here the high-level design and also a detailed design on what the plugins are actually going to do. These plugins are currently slotted to be included in-tree.
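For context, the alpha snapshot API under discussion looks roughly like this; a CSI-aware plugin would create one such object per PVC it snapshots. The resource names below are illustrative placeholders, and the API shape is the v1alpha1 version current at the time of this meeting:

```yaml
# Alpha CSI snapshot API (v1alpha1); names are illustrative and not
# part of the Velero proposal itself.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: nginx-logs-snapshot
  namespace: nginx-example
spec:
  # Selects the VolumeSnapshotClass, and therefore the CSI driver.
  snapshotClassName: csi-snapclass
  # The PVC to snapshot.
  source:
    kind: PersistentVolumeClaim
    name: nginx-logs
```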
A
I just shared the GitHub link as well, the link to the pull request there. So yeah, please comment if you have any comments there; that would be really appreciated. Thank you so much, Nolan. All right, moving on to Steve again, with the design proposal for running each backup and restore in its own worker pod.
B
Yeah, so I put up a draft design document for kind of an architectural change to how Velero runs backups and restores. Currently, as most of you probably know, Velero is deployed as a Deployment into your cluster with a single pod replica, and that pod runs all of the Velero controllers that are watching for Velero custom resources and acting on them.
This draft document proposes an approach where, rather than running backups and restores within the main Velero server that's running the controllers, we would change Velero so that it actually spawns a new pod for each backup and each restore; we're referring to these as worker pods. The idea is that the main Velero server would basically just be responsible for being informed of new backup or restore custom resources and then spawning one of these worker pods to go actually execute the backup or the restore.
There are definitely some nice benefits to this approach. It makes concurrency pretty trivial, so it makes it really easy to run multiple backups and restores at the same time. It also makes the scalability characteristics pretty good: at that point you're basically just limited by the total amount of resources that you have in your cluster to run these worker pods.
So you can scale up as much as you need to. The other nice benefit you get is that, since you have a single pod running a backup or restore, it makes it really easy to stream the logs as the backup or restore is being run. Essentially, what you're doing is just running kubectl logs with the -f flag on the pod that's responsible for running your backup or restore, and that's something that we don't support today.
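In other words, under this proposal, tailing a running backup would reduce to something like the following, where the worker pod name is a hypothetical example:

```bash
# Stream logs live from a hypothetical worker pod while its backup runs;
# -f (follow) keeps printing output as it is written.
kubectl --namespace velero logs -f backup-worker-abc123
```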
Today you have to wait until the backup or restore finishes before you can really see the logs, so that's definitely a nice secondary benefit of this approach. So anyway, the design doc is up, and we've started to have some feedback and discussion on it. It's absolutely a draft and a work in progress, and there are some alternate approaches discussed that I would really like to get as much input as possible on before we definitively choose one approach. So yeah, please check out this pull request.
A
I have a few. So first off, this looks like it will be solving several of the things that have been brought up in community meetings in the past two months: being able to handle concurrent backups, and then also, as you said, streaming logs, which I believe was brought up just two community meetings ago. We really wanted that to be in here, so it's great to see this. My main question here would be: what's the...
B
Yeah, it's a good question. I mean, you're no longer in a world where you just have a single pod (assuming you're not using restic), which is a pretty simple deployment model. And I would say that particularly in the development workflow, a lot of times Velero developers, rather than actually deploying the Velero server to a cluster to run...
...that would be able to be pulled into your cluster. So it does potentially make that development scenario a little bit more complex. On the other hand, having the worker process for a backup or restore as its own command essentially means that you could run that command directly
in your development environment, as kind of an integration test if you're making any changes to it. So there are some alternate ways that you could approach that; that's definitely one consideration we've looked at. Beyond that, nothing else major springs to mind. Nolan or Adnan or Carlisia, was there anything else that had come up so far?
D
I had a question on there that I don't think you've gotten to yet, but in terms of streaming logs, there is still a disconnect there, in that restic backups are going to be performed by the daemon set pods. I think I had asked on the proposal if you'd given any thought to moving that restic logic into the worker pods, so that those get scheduled on the node that they need to be on and do all the restic logic as well as all the kube API logic.
B
Yeah, I hadn't seen that, so I haven't really thought about it. It could be tricky, though, because in a single Velero backup you may be using restic to back up volumes for pods that are on multiple nodes, and so, unless you end up having multiple workers, one per relevant node, that may be tricky. So yeah, I'll have to look at that some more, but sure, it could be complicated. Yeah, no problem.
A
B
I'd say it's still up in the air a little bit. I mean, I wanted to get the draft design doc out because it's something I've been thinking about for a little while, and we've been talking about it in the team for a little while. I wanted to get the design doc out well ahead of 1.2 planning, so that if we wanted to execute on it for 1.2, we would feel like we were in a good state to be able to do that. But I think we still need to do some prioritization and decide
B
If
this
is,
you
know
one
of
the
big
features
that
we
want
to
tackle
and
1.2,
as
opposed
to
some
other
things.
So
I
definitely
like
to
you
know
kind
of
get
agreement
on
the
approach
so
that
whenever
we
we
do
decide
that
this
is
a
good
thing
to
go
after
that,
we're
kind
of
ready
to
go
but
I'm
not
yet
committing
to
doing
it
for
1.2
I
do
think
that
it
is
a
nice
kind
of
foundational
change
that
that
makes
some
other
future
things
easier,
and
so
there
are
kind
of
some
dependencies
there.
E
B
This one: creating CronJobs for scheduled backups. This was just something that had come up as we were talking about this idea of using a pod per worker process. We currently have essentially our own cron scheduler running within Velero, but we could use a similar approach where, when we get a schedule object (a Schedule custom resource) defined, Velero actually just creates a Kubernetes CronJob to handle the cron scheduling for it.
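A rough sketch of that idea, purely illustrative since none of it is designed yet: for each Velero Schedule, the server would generate a Kubernetes CronJob along these lines (the image, arguments, and entrypoint are all hypothetical):

```yaml
# Hypothetical only: a CronJob that Velero might generate from a
# Schedule custom resource under the worker-pod proposal.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: velero-schedule-daily
  namespace: velero
spec:
  # Copied from the Schedule's cron expression.
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: backup-worker
            image: velero/velero:latest
            # Illustrative subcommand; the real worker entrypoint is
            # part of the open design discussion.
            args: ["backup-worker", "--schedule", "daily"]
```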
A
F
Yes, so I guess I'm going to be working on restic-related things. Right now I'm wrapping up the feature that adds the pod volume backups as a zipped file to the object storage, so that we can deal with them as objects when we try to do the syncing and restores. I think I talked about this in the previous meetup recording.
F
So
the
next
thing,
though,
is
we
need
to
check
it
for
when
Rasik
has
cell
locks
and
undo
them,
so
basically
I'm
going
to
be
doing
those
some
write-up
for
sort
of
a
design
document.
For
that,
and
that's
going
to
be
the
next
thing,
I'm
working
on
if
anybody's
interested
and
give
me
feedback
right
on
that
issue
right
there.
That's
it.
B
Yeah, so, Scott, I know you had asked on Slack about when we were planning to release 1.1. We've definitely been less date-driven in general for our releases: we've identified a scope that looks to us like roughly two to three months of work, and then typically we've just released once we've gotten done with that set of work. So our current intent is to get 1.1 out somewhere around the end of summer.
B
Yeah, I can cover this. I just wanted to give some shoutouts to some of the folks who have had PRs merged; I think these are all the ones since our last community meeting. The first one is Tichenor, who added some better documentation that covers all the requirements for using Velero with the restic integration on OpenShift. Given that restic runs as a daemon process that needs a host path mount, there are some special requirements there, so we appreciate the improved documentation
for OpenShift users. We then had Pragya Parob, who started to add support into our build and release process for Power architecture, and so now, when we ship releases, we will be publishing the Velero binary built for this architecture on GitHub, and we also have support for building Docker images for this architecture.
Then we had JW Matthews, who fixed a bug in the velero version command. I believe the issue was that it wasn't respecting the namespace flag, so if you were running Velero in a non-standard namespace, this command didn't work; we got a bug fix for that, so I appreciate it. And then, last but not least, we had TL Camp, who submitted a PR, basically an enhancement to the velero install command, to be able to specify pod annotations that you want to apply to the Velero pods, and this can be really useful:
if you're running on AWS, for example, and you're using kube2iam, you need to add some annotations to the pods, and there was previously no easy way to do that through velero install without actually editing the YAML. This new flag allows you to do that, and it keeps the simplicity of the velero install workflow while letting you integrate your kube2iam setup with it. So yeah, we really appreciate that contribution; it's definitely something that several users were interested in.
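For example, combining the new install flag with kube2iam's role annotation looks something like this; the bucket name and role ARN are placeholders, and the remaining install flags are elided:

```bash
# Install Velero with an extra annotation applied to its pods; kube2iam
# reads iam.amazonaws.com/role to assign the AWS IAM role. The bucket
# and ARN below are placeholders.
velero install \
  --provider aws \
  --bucket my-velero-bucket \
  --pod-annotations iam.amazonaws.com/role=arn:aws:iam::123456789012:role/velero
```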
A
All right. So, as you all know, this community meeting is run on the first and third Tuesday of every month, and the division here between July and August means that it's actually going to be three weeks until the next community meeting, which will happen on August 6th. We will see you all then. Have an awesome rest of the week, everyone, and have a great day. Bye, everyone!