From YouTube: Velero Community Meeting - April 16, 2019
Description
In this community meeting, the Velero team talks about the first alpha of version 1.0.0, super exciting!
A
Hi everyone, and welcome to the Velero community meeting. I hope you're all having a fantastic week so far. My name is Jonas Grossman, and I'll be your moderator for today. We have a pretty well-packed agenda, some really interesting stuff that we want to talk about. If you have any questions, feedback, or comments on anything here today, please just voice them or put them in the chat.
B
So you can see here an exhaustive list of all the changes. Definitely read the highlights section; those are a few of the biggest new features or changes. Actually, since Nolan and Carlisia did a lot of work on a couple of those, I'll give them a chance to talk about those features. So Nolan, do you want to talk briefly about velero install?
C
Yeah, so velero install is a new command we're adding, mostly for our quickstart documentation. This will get you up and running on GCP, AWS, and Azure more quickly, without having to apply a bunch of YAML and go in and edit those files. We're still talking about some behaviors on it, but it should mostly be there, and I've got a PR noted in the HackMD for documentation for it.
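For reference, a minimal sketch of what the quickstart flow looks like with the new command, per the 1.0-era docs; the bucket, region, and credentials file below are placeholders:

```bash
# Placeholder bucket/region/credentials; adjust for your environment.
velero install \
    --provider aws \
    --bucket my-velero-bucket \
    --secret-file ./credentials-velero \
    --backup-location-config region=us-west-2 \
    --snapshot-location-config region=us-west-2
```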
D
Sure. So we refactored the plugin code. The main impact is for plugin authors: when you update to the new version, you do have to recompile your plugins to run with what's going to be our version 1.0, because we changed the version protocol.
D
What else? Yeah, so Steve also added some code so that Velero plugin-side errors will contain stack trace information, and panics will be handled much better. We also had an external contribution so that for restore plugins we keep the original item that's being restored; before, we didn't do that. And I was actually out on Friday, so I'm not totally clear if the name collision change went in.
D
It did? Good. So before, we didn't check to see if there was a naming collision for plugin names, and now we do. As you're trying to add a plugin, if one already exists with that name, it will show up in the log and you won't be able to add it again. And with that change we also added a new format for the plugin name.
D
So now you can sort of namespace plugins with a subdomain prefix in the name. It's prefix/name, that's the format, and the prefix has to be a valid DNS subdomain name; that's all documented. That's just to make it easy for you to have multiple plugins, even if they want to use the same name.
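As a sketch of what that looks like from a plugin author's side, assuming the 1.0-era plugin framework package; the prefix, plugin name, and constructor below are illustrative, not from the meeting:

```go
package main

import (
	"github.com/heptio/velero/pkg/plugin/framework"
	"github.com/sirupsen/logrus"
)

func main() {
	// Register under "<DNS-subdomain prefix>/<name>", so two vendors can each
	// ship an "object-store" plugin without the names colliding.
	framework.NewServer().
		RegisterObjectStore("example.io/object-store", newObjectStore).
		Serve()
}

// newObjectStore would return your ObjectStore implementation; elided here.
func newObjectStore(logger logrus.FieldLogger) (interface{}, error) {
	logger.Info("initializing example.io/object-store")
	return nil, nil // a real plugin returns its ObjectStore here
}
```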
B
I think that pretty much covers it, thanks. Yeah, so we're feeling good about these features, but there's also a whole host of other changes that went in around user experience and stability. So we're hoping that this release is a pretty good one that helps folks be more successful with Velero. Definitely worth noting that there are some breaking changes.
B
This is a 1.0, and the biggest area of change is that we're completely removing all vestiges of the old Ark API and any related code. As of 1.0 there's only the Velero API group, no more references to Ark, so a lot of the annotations, labels, etc. that we had been continuing to support during 0.11 that referenced Ark are no longer supported. We'll have detailed upgrade instructions.
B
If you're already on 0.11, you should be in pretty good shape to upgrade to 1.0, but if you're not yet on 0.11, it's definitely recommended to upgrade to that before making the jump to 1.0. I don't think we need to go through all the details of the changelog, but definitely take a look so you can see what those changes are. And so with that, Nolan, I know you already sort of covered the stuff you've been working on.
C
Excuse me, not really, I guess. The one thing is, we the team are fairly new to Helm, and I do know there are some issues with testing the Helm chart upstream. So if anybody would want to take a look at that and help us and Joseph figure out what the end-to-end test failures are, that would be super helpful.
B
It should be mostly internal refactoring; it shouldn't affect users or really plugin developers. Yeah. So I'd like to spend a few minutes talking about this issue that I have linked, number 1371, if you could open that, Jonas. I think I talked about this a little bit in the last meeting, but I wanted to cover it again.
B
One of the issues that we'd like to tackle for 1.0 is rationalizing the phases that are used for backups and restores, and this issue summarizes the current state. Today, backups or restores can end in one of three phases. The first phase there is FailedValidation, which essentially means that the backup or restore spec was invalid.
B
An example of this: if you create a restore and you specify a backup to restore from that doesn't actually exist in the cluster, it'll fail validation. If you create a backup and you say I want to include namespace A and I also want to exclude namespace A, that's an invalid backup spec. Those will move into the FailedValidation phase.
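For a concrete sense of what trips validation, the two examples above look roughly like this on the CLI (names are placeholders; the include/exclude flags are the standard ones):

```bash
# Restore that references a backup which doesn't exist -> FailedValidation
velero restore create --from-backup no-such-backup

# Backup that both includes and excludes the same namespace -> invalid spec
velero backup create bad-backup \
    --include-namespaces ns-a \
    --exclude-namespaces ns-a
```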
B
Then, for backups or restores that do get processed, we have the Completed and Failed phases, and the meaning of these phases is a little bit inconsistent between backups and restores. For backups, Completed means there were no errors executing the backup: every item was backed up successfully and the tarball was uploaded to object storage. For restores, though, a restore can be Completed even if there are errors restoring individual items.
B
So if you execute a restore and most of your resources are restored successfully, but maybe one persistent volume fails to restore, the restore will still end in the Completed phase, but it will have an error count greater than zero, and if you run velero restore describe you'll see the errors that happened. So definitely some inconsistency there. Similarly, Failed is slightly different between the two operations.
B
For backups, if there's at least one error, the backup ends up as Failed. It's possible that even if the backup has failed, there's still a tarball and metadata in object storage, so it may still be restorable. But similarly here, if a single resource fails to back up, say a persistent volume snapshot fails for some reason, the backup will end up as Failed. For restores, though, Failed means there was some kind of fatal error, for example not being able to download the backup tarball from object storage.
B
So yeah, some inconsistency here, and we really want to make the phases consistent. If you look at the meaning of the phases, it really seems like there are three distinct conditions that we want to capture. There's the completed-with-no-errors result, where the entire backup or the entire restore completed successfully and there were no errors, and I'm proposing that we start using the Completed phase to mean no errors were encountered.
B
Then there's the condition where there's some kind of fatal error. For a backup, this could be that the backup fails to upload to object storage; for restores, it could mean that the backup fails to download from object storage, so essentially there is no result. I'm proposing that the Failed phase mean a fatal error was encountered. And then there's this in-between state, where the backup or the restore largely executed, but perhaps some individual resources failed to back up or restore.
B
It's like the persistent volume case I was talking about before, and I think we want to introduce a new phase that captures this state where it's not a complete success, but it's also not a complete failure; it's somewhere in between. I think this is useful because, for the user, it differentiates between a total failure and a partial failure.
B
A couple of options for this phase are CompletedWithErrors or PartiallyFailed, so it kind of depends on whether you look at it as a glass-half-full or glass-half-empty kind of thing. If anyone has input on the most descriptive name here, that'd be useful, and also if folks have input on whether it makes sense to add this third phase or whether something else would make more sense, please add comments to this issue; it's issue 1371 on GitHub.
B
There are a couple of additional items that we might tackle after 1.0, but basically adding this third phase, and then changing the code so that backups and restores are assigned the correct phase based on what happens, is work we'd like to do for 1.0. So if anyone has comments or questions on that now, definitely jump in; otherwise feel free to comment on the GitHub issue. I'm definitely going to be starting work on this pretty soon, in the next day or two, so timely feedback is definitely appreciated.
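Sketching the proposal as API constants; PartiallyFailed is just one of the two candidate names discussed, and the exact Go identifiers here are assumptions, not settled API:

```go
// BackupPhase is the string-typed status phase on the Backup resource;
// Restore would get an analogous set. This is a sketch of the proposal.
type BackupPhase string

const (
	// The spec was invalid; nothing was processed.
	BackupPhaseFailedValidation BackupPhase = "FailedValidation"
	// Everything succeeded with no errors at all.
	BackupPhaseCompleted BackupPhase = "Completed"
	// The operation largely ran, but some individual items errored.
	// "CompletedWithErrors" is the glass-half-full alternative name.
	BackupPhasePartiallyFailed BackupPhase = "PartiallyFailed"
	// A fatal error occurred (e.g. the tarball upload failed); no usable result.
	BackupPhaseFailed BackupPhase = "Failed"
)
```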
A
All right, I have one comment on completed-with-errors, just from an end-user perspective. If I do something and it's completed with errors, I would be less inclined to investigate what those errors were, because it was kind of completed. But if something was partially failed, I think that could make me think: oh my god, something failed, that's not good, I need to investigate what that is. That resonates with me at least, so PartiallyFailed would be my go-to here. Yeah.
E
I just saw another comment about this, because I've run into it a bit myself from the plugin side. Right now, for a backup, any error fails it, and for a restore you still get this kind of Completed. I just know in our use case, any backup error would be something that would definitely concern me and say something went wrong here, whereas I'm running into situations where restore errors actually would be expected, for example when there's a reference to another object in a different namespace that isn't included.
E
Well, that's going to be an error because we can't restore it, but it doesn't mean we should fail, you know? So, I don't know, some way of indicating that to the user, maybe the messages show this, but some way on the plugin side of saying: okay, if we have an error of this kind, it's okay, it doesn't mean the whole thing failed.
B
Yeah. So I think long-term, the way we're thinking about this is: we create a log file for each operation that runs, for each backup and each restore, and you can write leveled logs to that. So you can write info-level logs, warn-level logs, or error-level logs, and in that scenario that you described, probably what we would want to do is have, you know...
B
That's not very well supported right now; I think everything in restores is basically logged at the info level right now. But ultimately, I think that's where we want to go, and then just make it easy for users to see the results of those logs and pull out the warnings or the errors in the log.
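From the plugin side, that could look something like the sketch below, assuming the plugin holds the logrus.FieldLogger that Velero hands it; since most restore output lands at info level today, this is aspirational, and the type and method names are illustrative:

```go
package plugin

import "github.com/sirupsen/logrus"

type restorePlugin struct {
	log logrus.FieldLogger // the logger Velero provides to the plugin
}

func (p *restorePlugin) reportMissingRef(name string) {
	// An expected, tolerable condition: surface it as a warning rather than
	// an error, so it doesn't read as a failed restore.
	p.log.Warnf("related object %q not found in backup; continuing", name)
}

func (p *restorePlugin) reportFatal(err error) {
	// A genuinely broken restore step: log it at error level.
	p.log.WithError(err).Error("failed to restore item")
}
```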
E
This is a slightly different case. The case I'm anticipating specifically is: I have a resource where, in my plugin, I realize there's something else that is needed, and so in the additional items entry that we return from the plugin I say, okay, I also want this thing restored. So the Velero restore goes and says, here's an additional item, let's restore this thing, and if that additional item is one that's not in the backup, for example, then it fails at the Velero level, saying I couldn't restore this object.
E
Right, exactly. And so that's sort of the case where, maybe even if it can't make 1.0, maybe there's a way in that additional items field we return back of saying: okay, these are items that, if they're here, we need them restored now. In other words, maybe there's a case where this is a required item, and if you don't find it, it's a hard failure, versus: if it's here, I want it now.
B
So obviously there's that phases issue that I just talked about. Then a couple of issues around Helm: there's a PR open right now, being driven by a user, to update the chart, and Nolan's been looking at that and providing input, and then we'll obviously get the Helm chart ready for 1.0 as well, so that when we release we should have a working Helm chart.
B
There's the issue that Jonas was just highlighting around improving handling of persistent volumes that have a Retain reclaim policy. I think it's possible that this one may get pushed out of 1.0; it's not necessarily a breaking change, but the main thing to fix is around doing restores using restic when you have a persistent volume with a Retain reclaim policy. For now it's a P1, so I'm planning to tackle it, but we'll see how that goes.
B
For sure. I mean, there's a PR up for it; I guess there's no related issue. The issue there is that even if you're running 0.11, it's possible that if you have backups that were created with a previous version of Ark, or of Velero, they will still have a file in object storage called ark-backup.json, and we will no longer be supporting that as of 1.0.
B
So I have a PR up that will actually rewrite those metadata files in object storage as velero-backup.json files. If you have backups that were created with a version of Velero prior to 0.11, you'll need to run this tool before upgrading to 1.0. We will definitely have that out; we're planning to release it as a 0.11.1.
B
Basically, the way we're thinking about it is that it will migrate you to be fully current with the 0.11 format, and once you're there, upgrading to 1.0 is pretty straightforward. So we'll definitely update on that as it progresses in the next couple of weeks. And beyond that we have plenty of P2s and other things in the backlog, so our work won't be done when we complete these things.
A
Wrapping up, talking about how all of you on the call here can get involved with the Velero project: the first one is an issue that Tom Spoonmore opened up. This is regarding letting us know how you use Velero. It would be awesome if you all could just chime in and say, hey, we're using Velero for X, Y, and Z, we're doing cool things.
A
Of course, we also have the Velero channel, and we are growing pretty much every day now; there are a bunch of people in there. We are over 800 people in the Velero channel in the Kubernetes Slack, which is awesome, and I see a lot of activity there, a lot of users helping other users, which is really cool. So continue doing that, and make sure it's a pleasant place for everyone to talk about anything Velero. Steve, do you want to talk about testing out the alpha?
B
It would be great to have you actually update them and recompile them against the 1.0 interfaces and give us feedback there, as well as obviously executing them and making sure that things still work. The velero install command, which Nolan talked about, is a brand new command, so definitely use that if you can to get Velero set up in your clusters, once the Helm chart is updated.
F
Hey, I actually have a quick question, because we've been trying to track master as closely as possible, and we noticed some changes that just got merged into the 1.0 alpha release that broke restic backups for us. Louie just solved the problem like 45 minutes ago, so we submitted a PR.
B
Yeah, we're definitely planning to cut additional alphas, so we would probably release that in the next one, an alpha 2, which would probably come as soon as next week. I haven't actually seen the issue that got pushed; I've definitely tested restic backups myself and didn't see an issue, so there must be some difference in environment, but yeah, I'll go look at it.
F
Actually, it's just because we followed the PR that changed, or added, prefixes to provider names. So in our backup storage location, if you kept the provider name as just aws, everything works, but we changed it to velero/aws, and then there's just some small snippet of code in the restic config Go file that was missing.
B
I just wanted to go through some of the folks who have contributed to the project recently, in the last couple of weeks. The first one is from Amon W and James King: they submitted a PR to enable you to disable individual controllers through a CLI flag. They had some specific scenarios where they wanted to disable, I think, the schedule controller and maybe one other, and so there's now a flag where you can specify arbitrary controllers that you want disabled.
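If memory serves, that lands on the server command as something like the line below, but the exact flag spelling and controller names are assumptions worth checking against the PR:

```bash
# Hypothetical invocation; verify the flag name and controller names in the PR.
velero server --disable-controllers=schedule
```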
B
We had a PR from AI pochi to update the docs to indicate how to use restic if you're running on RancherOS; there's a little bit of configuration that needs to change there, so we got a docs PR in for that. Scott Seago, I know you're on the call, contributed a PR allowing restore item actions to indicate that the restore of an item should be skipped. This is really useful if there are specific conditions under which you don't want to restore an item.
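A sketch of how a restore item action might use that, assuming the 1.0-era plugin types; the struct name and the skip condition are hypothetical:

```go
package plugin

import "github.com/heptio/velero/pkg/plugin/velero"

type skipAction struct{}

func (a *skipAction) AppliesTo() (velero.ResourceSelector, error) {
	// Hypothetical scope: only consider pods.
	return velero.ResourceSelector{IncludedResources: []string{"pods"}}, nil
}

func (a *skipAction) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
	if shouldSkip(input) {
		// Tell Velero not to restore this item at all.
		return velero.NewRestoreItemActionExecuteOutput(input.Item).WithoutRestore(), nil
	}
	return velero.NewRestoreItemActionExecuteOutput(input.Item), nil
}

// shouldSkip is a hypothetical predicate; a real action would inspect input.Item.
func shouldSkip(input *velero.RestoreItemActionExecuteInput) bool {
	return false
}
```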
B
We have Mike, I guess, who added a notice, I think in our plugin example repository, around the versioning of plugin examples, indicating that the master branch in the example repo matches the master branch of Velero, not 0.11, because there are some breaking changes there. And then Joseph Courses is driving the Helm updates right now; that's still a work-in-progress PR, but we definitely appreciate him working on those updates. So thanks to everyone who's contributed, it's really helpful.
A
All right, well, thank you everyone for joining today. Thank you to the Velero maintainers, thank you to the Velero community, and a big shout-out to everyone who's participating in these calls and helping the community grow; we really, really appreciate it. And with that, I wish you all a very awesome week, I hope you have a great Tuesday, good night.