From YouTube: Velero Community Meeting - April 2nd, 2019
Description
Velero Community Meeting - April 2nd, 2019
First Community meeting as Velero!
This meeting covers work that's been done in the past few months, the roadmap for 1.0, and community efforts and contributions.
Meeting information and notes can be found here:
https://github.com/heptio/velero-community
A: Hi everyone, and welcome to the Velero community meeting. I hope you're all having a fantastic week. April Fools' was yesterday, so you probably got fooled more than once; I did, several times. Hope you all had fun. My name is Jonas Rosland and I'll be the moderator of this community meeting. If you have any questions, please unmute yourself and ask, or put them in the chat. That's about all I wanted to say. We do have the Velero team here with us today.
B: Absolutely. So, you know, it's been a little while since we've had a community meeting, so I did want to start with a recap of what we've done over the past few releases and then talk a little bit about where we're planning to go in the future. Going back to the middle of last year, 2018: we released 0.9 in the middle of 2018, and that was the release that introduced the restic integration.
B: This was really intended to enable users who didn't have an integrated snapshot API, and maybe weren't using one of the public clouds or were using different types of persistent volumes. It enabled them to back up the data in their volumes, and we thought that was pretty important, because we wanted anybody to be able to use Velero to back up not just Kubernetes config but also the data in their volumes.
B: After that we shipped point releases with, you know, bug fixes that came up and some small usability enhancements and features. At the same time as we were doing those point releases, we were also working toward 0.10, which is the release that we pushed out towards the end of last year. Again, there were a whole bunch of bug fixes, features, and so on in that release, but the headline feature was that we moved away from the singleton config that we had in previous releases and moved to backup storage locations.
B: So some users wanted to be able to put some of their backups in one bucket and some in a different bucket, and this enabled that. It's also really useful for the case where you have, say, a second cluster, and you want that cluster to be able to both pull backups from a primary cluster and create new backups of its own. Having backup storage locations enables you to configure, basically, pulling in backups from one location and also pushing backups to a second location. So yeah.
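
(For readers following along: a minimal sketch of that pull/push setup, assuming the backup-location CLI commands discussed later in this meeting. Names and buckets are placeholders, and the --access-mode flag may postdate 0.10, where this lived in BackupStorageLocation YAML.)

    # On the secondary cluster: a read-only location pointing at the primary
    # cluster's bucket, plus a writable default location of its own.
    velero backup-location create primary \
        --provider aws --bucket primary-cluster-backups --access-mode ReadOnly
    velero backup-location create default \
        --provider aws --bucket secondary-cluster-backups
    # Backups created on this cluster are pushed to its own location.
    velero backup create nightly --storage-location default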
B: That was the main feature in 0.10, which we pushed out at the end of 2018. Then, as you've probably seen, we pushed out 0.11 a few weeks ago, maybe a month ago by now, and that release was primarily focused on our rename and rebranding. As most of you probably know, we were formerly Heptio Ark, and we've rebranded to Velero.
B: We've definitely been talking about this for a while, and so we're kind of putting a stake in the sand and saying that we're going to ship a 1.0 in the May timeframe. We'll definitely spend some time talking more about that during the rest of the session. So hopefully that puts everything in context over the last year or so, and from there I think we can move on to talking in a little more detail about what we're working on right now.
C: So one thing we've noticed is that it can be hard for users to keep track of which of the example files we have they should use for installing Velero into their cluster, and it also means we have to run around and update our docs and the YAML files a lot. So what we've decided to do is introduce this velero install command.
C: The main thing that's blocking me at the moment is testing it out and making sure it works, so that should be getting merged fairly soon. Once that's mostly ready, I'm also going to jump in on a PR for getting v0.11 out as a Helm chart; I'm going to make sure everything's all right with that and see if we can't roll up any existing issues into that PR.
B: Yeah, and I just want to jump in and reiterate that we see the velero install command as a super easy, straightforward on-ramp for new users, to help them install Velero for the first time or, if they're following a happy-path installation, to just have a really easy way to do that.
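
(A rough illustration of that on-ramp, with placeholder values; the flags below reflect the install command roughly as it shipped around 1.0, so treat them as an approximation.)

    velero install \
        --provider aws \
        --bucket my-velero-backups \
        --secret-file ./credentials-velero \
        --backup-location-config region=us-east-1 \
        --snapshot-location-config region=us-east-1 \
        --use-restic   # optional: also enable the restic integration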
B: On the Helm chart: we haven't spent a lot of time working on that, but we do want to make sure that going forward folks have a good experience with the Helm chart. So, like Nolan said, we're going to be taking a look at it and making sure that the documentation is good, that all the configuration parameters are exposed, and that it provides as good an experience as we want. We think both of these avenues for getting Velero installed are useful ones.
D: So the first thing we did was some refactoring in Velero to make it so that when plugin authors import Velero, the footprint of the dependency is a lot smaller now. With that, some of the interfaces have changed, so people will have to update and recompile their plugins, and we spruced up our examples and changed them to conform to the new interface. What else? Also with this change, we thought it was necessary to make it so that Velero will halt the server...
D: ...if you try to run the server with a plugin version that is not a match for the server. So, for example, if you have an existing plugin that you haven't recompiled and you try to run it with Velero 0.11 or with the master version, you'll immediately see that your plugin is not compatible, so you have that notice. We also now check whether the plugin name you're trying to register is a duplicate of an existing plugin, so right away...
D: ...you'll know if that is the case, so you can change it. We renamed the BlockStore to VolumeSnapshotter, and that's relevant because if you're writing a plugin for that, you will have to use this new name for that type. We made improvements to errors: now you get stack traces, so you know the error location from within the plugin; before, you wouldn't be able to know that, and now you will. And we now pass the original restore item, the unmodified item, to restore item actions.
D: So if you use a selector, for example, you get the transformed item at restore time, but you'll also get the original; before you didn't get that, and now you do. We're also passing metadata to plugins; that's a work in progress, it hasn't been done yet. Another work in progress is that we will enable the possibility of enabling and disabling plugins. I think that covers most of what you will notice on the plugin side.
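
(On that last in-progress item: the plugin management commands that landed around this time look roughly like the sketch below; the image name is a made-up placeholder.)

    # Register an out-of-tree plugin with the Velero server, or remove it again
    velero plugin add example.com/me/my-velero-plugin:v0.2.0
    velero plugin remove example.com/me/my-velero-plugin:v0.2.0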
B: Yeah, that was a pretty good list. You know, I'm definitely excited about the smaller dependency footprint. I know Carlisia has been wrapping up the PR to update our examples to use the new set of dependencies, and I think the binary size gets cut in half there, and, I don't know, a couple dozen packages no longer need to be imported. So it's just much simpler and a much better experience for plugin authors, which is really cool.
B: A lot of this stuff we knew would necessitate some breaking changes to the plugin interface, and so we really wanted to focus and get it in before 1.0. So hopefully, with these changes, those plugin interfaces are relatively stable and are something that we can stand behind and continue to build upon going forward.
A: Awesome. Any questions regarding the plugins work that's been going on here? All straightforward? All right, let's move into version 1.0. There's going to be a big release, a big announcement, and I want you all to really listen in here, because this is going to be the big one. So Steve, take it away.
B: Yeah, so like I mentioned before, our next big milestone is this 1.0 release, and we're tracking towards a May timeframe for getting that out. We've been working on this for a decent amount of time, and so at this point we're really trying to put the finishing touches on the release.
B: A little better? Yeah, okay, and I'll actually shrink this a little bit. So first of all, for folks who haven't seen this before, ZenHub is kind of an overlay on GitHub that lets you set up a Kanban board for basic project planning and tracking, and we've been using it to help manage our backlogs. If you haven't seen it before, there are two ways that you can use ZenHub.
B: If you're using Google Chrome, you can install an extension, and then when you go to github.com it will actually add an extra tab in the user interface, so that you don't have to go to a separate location; we have some documentation on that in the repo. So anyway, just to show what we have in this board: we have a few different swim lanes, but I'll start by focusing on the sprint backlog here.
B: Everything that's in here, we're saying, is a must-have for 1.0, and so these are all labeled as p1-important, which basically means we need to complete them before we ship 1.0. The lanes to the right are things that are in progress or currently under review; obviously all of those need to get closed out as well. But if we look at what's still in the backlog, the first issue is the Helm updates, and Nolan will be picking that up pretty soon.
B: Just under that there's a related issue, a specific issue that a user had with the Helm chart, so we'll be taking a look at that and making sure we close it. This one is around updating our install documentation to use CLI commands for setting up locations: basically, rather than requiring you to apply a YAML file from the repo, we'll be using the velero backup-location create command and the velero snapshot-location create command. Hopefully this will streamline things; some of this will probably also be covered by the velero install command.
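
(Concretely, the docs would move toward something like this sketch, with placeholder names and regions, instead of kubectl-applying example YAML.)

    velero backup-location create default \
        --provider aws --bucket my-velero-backups --config region=us-east-1
    velero snapshot-location create default \
        --provider aws --config region=us-east-1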
B: This one here, issue 602, is finalizing the protobuf definitions. Our plugins basically use a gRPC interface, and we want to make sure the proto definitions we have there are ones we're happy with and can go ahead with for 1.0, so this is basically just a review of those interfaces. And then two more on the bottom here. Issue 1151 is around persistent volumes that have a Retain reclaim policy; right now Velero doesn't always handle these in the way a user would necessarily expect.
B: It's a little bit complicated. And then we also want to make sure there are essentially safeguards in place so that, if you already have a backup in object storage, there's no way Velero would overwrite it. This came up, I think, probably a couple of releases ago, and we've put a bunch of fixes and safeguards in place to minimize the likelihood of it happening.
B: We haven't heard reports of it in quite a while, I would say, but we do want to add a little more code just to make sure there's no way the backups would ever be overwritten. So those are the work items still to do, and the in-progress stuff we've mostly covered; there are also a couple of smaller bug fixes in here that I'm probably not going to talk about. Then, beyond that, going to the left...
B: ...here we have a general backlog, and what's currently in it is a whole bunch of issues that we've labeled as p2. The way we're thinking about this is that these are all nice-to-haves for 1.0. So as we have time, or as new contributors come on and have some time, this would be a logical set of issues to tackle, but we don't necessarily need to resolve all of them in order to ship 1.0.
B: We'll probably talk a little more about this in the next section, but yeah, if you're looking for an issue to work on and you want to contribute, this backlog is definitely a good place to start. I think that's everything I wanted to cover ZenHub-wise, and then I'll jump back to the markdown notes.
B: There's one other issue, and I just moved it into in-progress this morning because I started working on it. This is an issue that actually covers a couple of different things. We started with the idea that we wanted to consolidate the warning and error reporting for restores into a single location.
B: If you've used restores before: right now there are a few different places where errors can be reported. The first is that if there's a validation error, meaning there's something wrong with your restore spec (maybe you specified a backup that doesn't exist, or there's something wrong with your selectors), we'll actually move that restore to the FailedValidation phase, and the details of those errors are reported in the restore's status, under a validation errors field. So that's the first place.
B: We then additionally have a per-restore log, which is a file that gets uploaded to object storage and contains all the info-level logs about the restore. And then additionally we have a separate file containing any of the warnings or errors that come up while executing a restore. This is a separate file that goes into object storage; its name is the restore name followed by "-results", it's gzipped, and the contents of this file will be output.
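
(For context, that is the information surfaced by the restore commands; the restore name below is a placeholder and the flags are approximate.)

    # Phase, validation errors, and warning/error counts from the -results file
    velero restore describe my-restore --details
    # The full info-level restore log pulled from object storage
    velero restore logs my-restore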
B: Essentially, if we're unable to back up any single item that should be backed up according to the backup spec, the backup ends up as Failed. You'll still have a tarball in object storage and still have logs, but the phase will be Failed; it only ends as Completed if essentially everything that should have been backed up was correctly backed up. With restores it's a little bit different.
B: Today, if a single item fails to be restored, we still end that restore in the Completed phase, but it will have an error in this results file. So if you run velero restore describe, you'll see that it ended with a phase of Completed but with more than one error reported.
B: I think there are some things we'll do for 1.0 and some things that may push out until after 1.0, but I would say, on the stuff we've already talked about, this is kind of the one big issue that remains. So if this is something that's interesting to you, or that you have input on, definitely feel free to read through the issue and add some comments to it. With that, I think I'll turn it back to Jonas for a minute.
F: [inaudible question]
B: Yeah, I can answer that. It's definitely a limitation we're aware of, and it's something we want to work on; we were actually brainstorming a few weeks ago about what the best way to do this would be. I would say it's not something we're going to be able to cover for 1.0, just given where we are with the timeframe, but I think it's a pretty important thing to look at after 1.0.
B: We were brainstorming some ideas around actually running each backup as a separate pod, or potentially as a Kubernetes Job object, and that way you could really easily take advantage of the parallelism you get through native Kubernetes. We have an issue in the repo tracking that, and there's probably some recent discussion on it. So I would say look for that issue in the repo, and if you have particular use cases or particular reasons why this would be a valuable feature, add them there.
G: ...our backup/restore work. Another thing we're interested in is backing up the actual container images that exist inside the cluster, so you can deploy a Docker registry, run a backup and restore, and have all those images actually copied over to the new cluster. We have done a few things upstream; Scott has primarily been the one submitting PRs upstream with some fixes around the plugin work that we did. We also have another engineer, Jason Leon, who has really been investigating production applications getting backed up and restored, trying to minimize downtime.
G: To do that, we've submitted a PR upstream to restic to improve the restoration process; we're trying to get to a point where the incremental backup feature is mirrored on restore, so you don't have to wait for all the data to be copied. If you have a large application, you could be talking hours of downtime, and we're trying to minimize that as much as possible. The upstream stuff has mainly been around the plugin side right now, but we're getting to a point where a plugin is capturing...
G: ...most of the resources we're interested in, and so what we want to do now is take some of these hacky changes we've done in our fork and start submitting agnostic changes upstream. We have a couple of issues open upstream that we want to address. One of them is how the automatic merging of service accounts works, mainly because right now it doesn't work with generated names, which OpenShift heavily relies on.
G: We intend to tackle that issue; right now we have a workaround, and we're trying to get an RFC out as soon as possible. The other issue that I think we're primarily interested in is custom certs for the S3 API, since we might be using something other than Minio for an S3 storage solution. I think those are the two key ones we're probably going to start tackling first upstream; the other one is probably well beyond the scope for now, and it's really just for us.
G: So yeah, to give a recap: we've primarily focused on the plugins right now, we're starting to wrap that work up, and we're going to start submitting some stuff upstream. I also saw you guys mentioned the Retain policy on the PVs in the sprint backlog; we're definitely interested in that, so maybe we can get one of our engineers to tackle that issue as well. But we'll keep you guys in the know, and right now we're looking around the plugin stuff.
C: So, Dylan, one thing I would say is the object graph stuff is definitely not 1.0, but if your team's interested in doing some design work on that and starting a discussion, we could definitely go over that on the Google Group, because I think that's something that's going to take a little while to brainstorm.
G: Absolutely, and we really have one component that depends on owner references, so we kind of thought we could take that component, find a solution that works for it, build off of that, and submit some design stuff upstream to see how it holds up. It might not be sufficient for every component, and we might not get to it for another month or so, but yeah, I think that's a good idea; we'll help with the design process as much as we can. Cool.
F: [inaudible question]

G: We're kind of using it with the idea that the first backup is probably going to take a long time, and we're trying to keep things in sync as much as possible, so that when you're ready to quiesce the application and get it onto a new cluster, the downtime is minimized as much as possible. The idea is that we keep running backup after backup to get all the missing chunks, and then, right before you're ready to switch, you do one more backup; it should be a small delta of a few kilobytes.
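
(A rough sketch of that sync-then-cut-over flow with the restic integration; names are placeholders, and the annotation shown is the opt-in mechanism restic used in this era of Velero.)

    # Opt the pod's volume into restic backup
    kubectl -n myapp annotate pod/mydb backup.velero.io/backup-volumes=data
    velero backup create myapp-sync-1 --include-namespaces myapp   # first pass: full data copy
    velero backup create myapp-sync-2 --include-namespaces myapp   # later passes: small deltas
    # Quiesce the app, take one final small backup, then restore on the target cluster
    velero backup create myapp-final --include-namespaces myapp
    velero restore create --from-backup myapp-final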
F: [inaudible question]

G: Well, of course, with restic, you know, more files is a lot more computationally intensive. But as I mentioned, we did find a pretty good solution for improving the restoration process: we cut a restore down by about 70 percent just by analyzing the chunks that haven't changed. I think that's going to be in the next restic release; I think it's been merged.
B: That's awesome. Cool that you submitted that patch upstream. I know restic 0.9.4 also had a big improvement to the restore process; as far as I understand, it had been single-threaded prior to that, and they finally merged this multi-threaded restore they'd been working on for a while. We definitely saw big improvements with 0.9.4, but it sounds like we have even better improvements to look forward to. So that's awesome.
C: Yeah, a quick note about the other kinds of snapshots: the main three cloud providers offer asynchronous snapshot APIs. The reason those go faster is that they give us a snapshot ID and Velero just kind of moves on, with the rest happening in the background, whereas restic doesn't do that and we have to wait for it to be done. So that's where that difference comes from. Okay.
B: I can run through this, and I'll probably just share my screen again so everyone can look at the same thing; one second. All right. So the first thing is, we've been talking about 1.0, and we're definitely planning to ship a bunch of pre-release versions of 1.0, so we'll probably start with an alpha.
B: There have been a number of relatively invasive changes, so it would be huge for anyone who has some time to help do that testing and make sure 1.0 is as stable as possible. Beyond that, I pulled together a list of open issues we have that would be great for someone who's interested in contributing to get involved with, and I've tried to break it out by different areas.
B: I know different folks have different areas of interest, so I'll just do a quick run-through of the bullets I have here. It starts with Azure support: for anyone who's running Velero on Azure, there are a number of open issues in the backlog. There's this first one, which is actually an open PR that basically just needs testing; it's around adding support for availability-zone-based disks in Azure.
B: So if that's something that's interesting to you: I think we've finally updated the Azure SDK enough that we can actually do this, so it's a matter of working through that code change. There are also a couple of things around how you actually authenticate to the Azure APIs. On GCP or GKE, we've had a couple of people interested in being able to do cross-project snapshot restores; there's an open issue there that would be a great one if you're interested in that, and we've got a couple of bug fixes there as well.
B: Then we also want to do some improvements around how we inform the user when something goes wrong. This issue is a little bit generic, but it would be great to do some research here and look at how we can improve error messages and improve directions to the user about where they might want to look for additional information.
B: As many of you know, we export a bunch of Prometheus metrics, and there are a number of issues in the backlog for adding new metrics. This is a really great place for a first-time contributor to get involved, so I've linked a couple of them here, and there may even be one or two more issues in the backlog.
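
(If you want to see what is exported today before picking one of those issues up, one quick way to peek, assuming the default metrics port of 8085:)

    kubectl -n velero port-forward deploy/velero 8085:8085 &
    curl -s localhost:8085/metrics | grep '^velero_'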
B: Then a handful of general things. We have two issues here that are kind of usability issues or bugs: one around handling really long restore names, where we're having some issues with labels in that case, and then adding support for the not-equal operator in label selectors. And the other thing I grouped under this is really just joining us in Slack.
B: We try to stay on top of the channel and be responsive to folks, but it just keeps growing, and so, if you don't necessarily want to contribute code but you're familiar with using Velero, it would be awesome if you could join the channel and just help answer folks' questions every once in a while. So hopefully that gives folks a bit of an entry point if they're interested in contributing but maybe didn't yet know where; hopefully this will point you in the right direction.
A: Perfect. I just want to give a quick update here on the Velero channel. The Velero channel in the Kubernetes Slack is one of the bigger ones: there are over 800 people in there, and there are a little over 60,000 people in the Kubernetes Slack. If you do some math, that equates to 1.3 percent of everyone in the Kubernetes Slack being in the Velero channel. So we have a lot of people in there; thank you to everyone who's been a part of that.
B: I'm happy to cover that while I have it up here. I just wanted to give a shout-out to folks who have contributed recently, so I pulled a list of all the external contributors who have had PRs merged since we shipped 0.11, and I've listed the GitHub handles here. The first one was a change to the restore item action plugins to pass the original item, the unmodified item from the backup tarball, which Carlisia mentioned a few minutes ago.
B: We really appreciate that PR to get that change in; it's helpful in certain scenarios for restore item actions. We had a bug fix contributed here: it turns out that during a restore we were actually keeping every file from the backup tarball open, and someone basically hit the open-file limit and discovered there was a bug, and so they fixed it. That was really great.
B: We had a couple of folks notice that when we did restores of snapshots on AWS, the volume ID we configured the persistent volume with didn't include the availability zone, and for certain scenarios it was important to have this. Kubernetes actually supports specifying the volume ID either with or without the AZ, so either one's fine from the Kubernetes perspective, but we got a change here to include the AZ, and, you know, more information is probably better than less, so this helped those folks out.
B: We had a contribution to improve performance: there was a case where we were recompiling a regex over and over again, and this enhancement was to compile it only once. Matt Stumpf contributed a change around encrypted snapshots and how they intersected with IAM policies: we now actually set the encrypted field appropriately when we're doing a snapshot restore, so that any IAM policies there understand whether or not that snapshot is encrypted.
B: Previously, if there were any API groups or any resources that returned errors from discovery, for whatever reason, basically the whole restore process would essentially just fail. We're now gracefully handling those: we log any groups that fail when we hit the discovery endpoint, but we continue on with backing up or restoring the groups that we can use. So I just wanted to say thanks to all those folks who have contributed; every contribution helps make the tool better, and we really appreciate it.
E: I'm happy to. Just, you know, we wanted to throw an issue out there, kind of a non-issue issue, just for people to put comments on saying how they're using Velero. We would love to hear more about how customers are using Velero and what value you're getting out of it on a day-to-day basis, and perhaps to put in there...
E: ...improvements that you'd like to see, something like that. It was basically just an informal way for us to poll the community, and we'd certainly love to hear more from the community going forward; that was one way to kind of spur conversation. So please just add a comment to that issue; that would be fantastic.
A: So the first comment here is from laid Roo. They ran Ark to protect stateful production workloads for almost a year, so they've been a long-time user, and now they have finished migrating to Velero. They're an internal service provider, giving their customers the capacity to self-enroll in backups, guided by backup policy.
A: "It simply rocks," so that's great accolades. Dharma here: they are currently using it to back up Kubernetes objects in Azure and AWS, and they're very interested in using it as a migration tool to migrate workloads between clusters and different cloud accounts, maybe even between self-managed and cloud-managed clusters. And lastly, we've got Harshal: they use Velero to take hourly backups of all application resources in their clusters.
A: They will also be using it to perform critical migrations, such as a change of Kubernetes networking plugins on a running production cluster, or moving a production environment to EKS. Both of these were achieved without incurring any downtime, which I think is just amazing. So a big thank you to everyone out there in the community helping make this possible; this is amazing work from everyone.
B: And I'll just jump in there and reiterate: the more feedback we can get from users, the better we can direct our future efforts. It's really helpful to hear how folks are using it, where any stumbling blocks are, and what could be improved; it just helps us figure out where we want to spend our development time. So the more feedback, the better, I would say.
E: Absolutely. And you know, one of the things brought up in one of those comments was using Velero as a migration tool, and that is a topic that Steve and I are going to cover on a webinar, actually this Thursday. So yeah, it's a very legitimate use of Velero, for migrating applications from one cluster to another, or for basically rehydrating a new cluster. So look for that; if you're interested, sign up. We'd love to include you in that presentation.