From YouTube: Velero Community Meeting/Open Discussion - June 8, 2021
A
Hello everyone, and welcome to the Velero community meeting / open discussion. Today is June 15th, 2021. Please enter your name and affiliation into the HackMD. We've got some status updates and then some interesting discussion topics, and I see you've added the contributor shout-outs already, Bridget.
B
Excellent, yeah. I went ahead and ran this just because I think we missed a few from previous weeks, so I modified some of the code and stuff to try and pick up on those.
A
Thank you. All right, so if you have any discussion topics or any status updates that anyone wants to talk about, please add them in. But right now, let's start with Bridget.
B
Okay, hi everyone. The main thing I've been working on this week is getting things ready to do a Velero 1.6.1 release, a patch release for Velero.
B
So thanks to Ali Patel last week for helping to diagnose and fix an issue that we discovered when backing up on Kubernetes clusters. The fix he contributed is in, but there's another part of that bug that will need some fixes in order to make backing up on Kubernetes 1.21 with restic work correctly.
B
So we want to get a release out for that as soon as possible, and then there are a few other fixes that will go in there, including one from Scott as well. Other than that, I've been helping to onboard the new team members that we introduced last week, and the design doc for the plugin versioning has gone up. I just saw before the meeting that Scott has added some comments, so thank you very much for that.
B
There were also some comments left last week, so thank you both. I think I've addressed those earlier comments, but I will also address Scott's later today.
C
And I just wanted to clarify, to make sure I understood the follow-up work needed. I know we have the fix so that a backup taken on 1.21 will then restore properly, and I think the issue is that previously existing backups, taken without that fix, will still fail on restore even with that fix, because that fix is only on the backup side. We also want the equivalent fix on the restore side.
B
That is correct, yes. I think I need to do a little bit more investigation, but I think what we can do is, at the point when we're restoring, check what the volume source is as well and do the equivalent fix. Obviously it's not ideal to have to add that in, but I think we need to put something in there so that backups which have already been taken aren't broken and can still be restored from.
A
Yeah, so that's a pretty big issue for sure. So would that be kind of an on-the-fly fix, then? When you restore from an existing backup that's broken, you'd be fixing that backup on the fly. Is that correct?
B
Yeah, so if you've already taken some backups which are broken, the fix that we need to introduce would help solve that.
C
And also, I think it's worth pointing out that this bug pre-exists Kubernetes 1.21. We've had the notion of these kinds of volumes before; it's just that 1.21 uses them more frequently for common things. Another point here: it's good that we have the end-to-end tests. It was the E2E test that first exposed this bug for us.
D
Agreed. Also, the E2E test was run manually, I guess, and that only shows how valuable it is to have our E2E tests running in CI, which is certainly a goal; especially with the new team members, they might focus on that early on. So yes, agreed. Thank you to everyone who wrote the E2E tests, and yay that they caught this bug.
A
Awesome. When are we looking at releasing 1.6.1?
B
As soon as possible, hopefully. There was a PR created by Scott which is still awaiting one more review, but I'll ping Dave and try to get him to review it and get that in as well, because that was a regression with CRD restores, so that's something that also needs to go into this release. But then we also need to get the PR up for that issue that I've listed as well.
A
Okay, awesome, yeah. We definitely want to send that out as a message to the distribution list and Twitter and Slack and everything, so people know that they need to update.
B
Yeah, so I posted in the velero-users channel about the issue that had been discovered, along with a workaround. So yes, we'll notify everyone once the patch release is out.
A
Yeah, that'd be great, just so people are aware and know that fixes are coming very soon.
A
All right, any other questions or comments for Bridget?
A
We have a unanimous vote here for eight to ten a.m. Beijing time, so that will be in the evening here on Tuesday, early Wednesday morning in Beijing. We'll send that out and set up a new community meeting rotation, so I'll send out invites to everyone and updates to all the calendar invites. We're also doing an update to the Velero office hours.
A
It was scheduled for two hours before; we're shortening that to one hour based on the engagement we've gotten from the office hours. We love having the office hours, it's super fun, but we see that we very rarely go longer than one hour, so we're going to shorten it to one hour, and we hope that works for everyone.
A
Any questions on the community meeting for the Asia-Pacific time zones? All right, thank you everyone for voting, really appreciate it. All right, discussion topics.
B
Okay, so I'm going to lead these discussions on behalf of Dave. The first one was making a change in Velero to use a VMware-owned registry for hosting the Velero images. So rather than relying on Docker Hub for image distribution, we would instead use the images we're already also pushing to projects.registry.vmware.com, and use that by default. So I think we just wanted to get a sense of:
B
Would that cause any issues for anyone? I think our aim with this would still be that all image building and pushing would be triggered from GitHub, so there's nothing that would be triggered from within VMware infrastructure. It would all take place on GitHub, but push to that registry and use that registry by default.
A
So
we
used
a
project,
harbor
cncf
project.
Of
course,
I
think
you
all
know
harper
and
yeah.
We
use
this
for
for
a
lot
of
our
other
open
source
projects
as
well,
and
I
know
that
andrea,
which
is
now
a
cncf
project,
also
uses
it,
because
we're
we're
seeing
more
and
more
projects
moving
away
from
docker
hub
people
are,
companies
are
in
organizations
are
hitting
limits
within
their
polls
and
everything
so
yeah,
it's
just
an
option
here.
Scott.
C
Yeah, I was just saying we ran into the same thing with our upstream stuff with Docker Hub, the limits. We're trying to avoid using it even for base images now, for that reason. For our Konveyor work, again, everything's triggered by GitHub and we're using quay.io for our repo hosting there. Same idea: infrastructure we control, but triggered by all the GitHub open source stuff.
A
Sure, yeah. So the builds would essentially stay the same; I mean, we would utilize the same functionality that we do right now, Bridget, right? So everything's built the same, we just push to a different registry.
B
Yes, yeah. And then the Velero binaries, whenever they generate the deployment manifests, would obviously be using that registry by default as well, yeah.
C
And I guess we'll need to change any places in the code that make that assumption, like tests, for example, that pull from docker.io right now. That would be changed to point to this location.
B
Yes, yeah. There are a few other places; for example, when doing restic restores, the pods get the restic restore helper. So there are definitely a few images and places that need to be updated, but that would all be incorporated as part of this work.
B
At the minute, for example, I myself as a maintainer don't have credentials to our Docker Hub instance, so all of our image building at the moment is triggered by tags being pushed to the repo. I anticipate that the same workflow would still be used: if you are a maintainer with access to do the release process, such as pushing tags to GitHub, that's the entry point into that workflow.
B
So we'd still want to have that same level of access. But, like, for myself, I can't directly push to Docker Hub or anything, and in the same way I wouldn't be able to directly push to this registry either. Okay.
A
Yeah, so any maintainer would essentially have the same access. A follow-up here: who's responsible for fixing it if things go wrong?
B
Well, I think it would depend on what has gone wrong. These are good questions. I guess at this stage it's probably going to be easier for someone within VMware to have the access needed to fix those things, so I'm not quite sure yet what the full process would look like. Was there something specific that you were thinking of, in terms of something going wrong?
E
No, not really, nothing specific. It was more like: if a maintainer outside of VMware does do the release process and something goes wrong.
E
Who is then responsible for fixing that, if the maintainer does not have access to the underlying bits that make those builds happen? Yeah, that was more what I was getting at: just making sure that there was at least somebody ready to take on that work if something happened.
B
Yes. I think, for better or worse, our release process is quite involved at the minute, and so we typically tend to have somebody on standby to help resolve these issues. At the moment it tends to be folks who are at VMware who are doing those releases.
B
And
but
if
we
wanted
to
have
like
our
other
maintainers,
getting
involved
in
that
process,
and
obviously
we
would
go
through
that
with
them.
Make
sure
that,
like
they're
able
to
run
that,
I
think
great
you
raise
a
great
point
about
if
things
do
go
wrong,
because
it
will
happen
eventually,
I'm
sure-
and
we
need
to
make
sure
that
we
have
processes
in
place
in
order
to
handle
that.
So
maybe
that
is
just
like
having
someone
with
access
available
during
the
release
process.
A
That is currently also the process, right? If something were to happen right now on Docker Hub, we would still need someone with the correct credentials and access to fix things if they went wrong. Yes, that is the case.
B
Yeah, I imagine we'll go through the design process with this before we make any changes. I think Dave's main reasoning for raising this at this stage was that, before we start down the path of creating the design and making the changes, we want to make sure we capture any concerns or blockers that would prevent this from taking place.
A
Sounds good. I mentioned Antrea, who made this switch late last year; they switched all their public images over to projects.registry.vmware.com.
A
They still had a few images on Docker Hub for their CI process, and they're slowly moving those over as well. But if we have certain images on Docker Hub for that purpose, I think they don't need to be moved over immediately. Just wanted to mention it, yeah.
B
That's good to know, yeah. And I think, as part of the design process, we'll hopefully go through all of that and figure out what needs to move and at what point. We also have some other open issues that we would like to address regarding being able to specify different registries for particular images. So, for example, whenever you install Velero at the moment, you can choose which image you want to use.
B
So you can use your own registry already, if using Docker Hub is a blocker for you, but we also want to make it configurable so you could use a different registry for, for example, that restic restore helper I mentioned. At the minute that's all hard-coded, and that's something that we want to make configurable as well.
B
So
there
are
a
few
image-related
tasks
that
we
might
end
up
just
kind
of
bundling
together
as
part
of
this
work.
B
Yeah, so at the minute, whenever you do a velero install, you can override the image that is being used, but we want to make all the other images that Velero ends up using configurable as well, so that you can customize your installation and instruct Velero to use a particular image from a particular registry.
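As a rough illustration of that direction, here is a minimal Go sketch of resolving every image reference through one overridable helper. The function name, constants, and default values are hypothetical stand-ins for discussion, not Velero's actual install code:

```go
package main

import "fmt"

// Hypothetical defaults; the proposal discussed above would point these
// at the VMware-owned Harbor registry rather than Docker Hub.
const (
	defaultRegistry         = "projects.registry.vmware.com/velero"
	serverRepo              = "velero"
	resticRestoreHelperRepo = "velero-restic-restore-helper"
)

// imageRef builds a full image reference. An empty registry falls back
// to the default, so every image Velero uses (not just the server
// image) could be pointed at a private registry at install time.
func imageRef(registry, repo, tag string) string {
	if registry == "" {
		registry = defaultRegistry
	}
	return fmt.Sprintf("%s/%s:%s", registry, repo, tag)
}

func main() {
	// Default: pulled from the project's own registry.
	fmt.Println(imageRef("", serverRepo, "v1.6.1"))
	// Override: e.g. an air-gapped user mirrors the restic restore
	// helper into an internal registry.
	fmt.Println(imageRef("registry.example.com/mirror", resticRestoreHelperRepo, "v1.6.1"))
}
```

The point of the sketch is the single seam: one resolver that every image lookup goes through, so a user-supplied registry applies uniformly instead of only to the server image.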
B
Okay, great. So I'll write up some notes here and let Dave know, and then we'll think about next steps: creating a design doc for this and opening it up for more comments.
B
Awesome, okay. So the next discussion topic was something that came up during the office hours last week, regarding reusing particular parts of the Velero code base in other projects.
B
So we want to think about being very explicit about certain areas of the code base and grouping them into a toolkit that other projects could use, which would put a stricter kind of API maintenance on them, so that we're not changing things too much and impacting other projects which are importing them. I don't have a link to the particular section that was being used; let me find it. Scott, I don't know if this is something that I had mentioned to you, about reusing a particular section, a particular function within Velero, in another project.
C
Yeah, in general we were talking about the discovery side. In terms of looking at the resources that are in a particular namespace, or whatever, that we'd want to back up, and using the discovery part separately from actually running a Velero backup. That's the high-level idea we were talking about, and that's the area of the code he's interested in using.
C
Yeah, exactly. And even as it exists now, although it's not published as an API and it's not stable or any of that, in some of the prototyping we were doing he was able to make use of it. The one area where he was running into issues was around the feature flags, I think, because those wouldn't be available, since it's not a Velero install.
B
Yeah, so I think Dave had maybe some comments on the proposed refactoring, due to the way that we're handling some of the feature flags at the minute. But I think that's the direction we maybe want to go: if there are sections of the Velero code base which we anticipate other folks might be able to make use of, we should try and create something a bit more robust.
B
So rather than leaving it implicit, we'd make it explicit that these are packages that we've designed for consumption by other projects, right? But yeah, maybe this is something we'll start to put a design together for as well, just to make it a bit more explicit in terms of getting feedback on it. Yeah.
C
I imagine there might be two levels here. There's the high level of: what is the approach to identifying a section of the code that we want to be callable as a library, and how should that process work? And then there are the specifics of: okay, for this specific discovery helper function call, this is something that I want to call.
C
Let's codify what that looks like, make it something that's been defined, and agree to keep it stable. Those two levels may be the way to go here, because right now there's not even a process around doing this; it's the first example, I think, where we've wanted to do that kind of thing.
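To make the "explicit, stable entry point" idea concrete, here is a tiny Go sketch. The type and function names are hypothetical and do not reflect Velero's actual discovery package; the shape is what matters: callers depend only on a small, documented signature, while everything behind it stays free to change.

```go
package main

import "fmt"

// Resource is a minimal stand-in for a discovered API resource; the
// real discovery helpers work against the Kubernetes discovery client.
type Resource struct {
	Group string
	Kind  string
}

// SelectResources is the kind of small, stable entry point the
// "toolkit" idea implies: external projects import this signature,
// which the project agrees to keep compatible across releases.
func SelectResources(all []Resource, keep func(Resource) bool) []Resource {
	var out []Resource
	for _, r := range all {
		if keep(r) {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	all := []Resource{
		{Group: "apps", Kind: "Deployment"},
		{Group: "velero.io", Kind: "Backup"},
	}
	// A consumer filters discovered resources without running a backup.
	appsOnly := SelectResources(all, func(r Resource) bool { return r.Group == "apps" })
	fmt.Println(len(appsOnly), appsOnly[0].Kind)
}
```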
B
Yeah,
so
I
think
like
there
I
know
like
when
I
when
I
joined
the
project
there
was
something
that
carly
she
had
been
bringing
up.
It's
like
a
recurring
reminder
to
the
community
about
like
and
the
public
apis
within
the
project,
but
I
know
that,
like
that
particular
discovery
package
is,
is
not
one
of
them.
B
So
I
think
if
that
is
of
use,
then
guess
this
is
where
we
should
start
to
yeah,
like
you
say,
like
trying
to
define
a
process
around
consuming
parts
of
the
lower
code
base
and
other
projects,
so
I'll
just
send
a
I'm
just
going
to
put
a
link
to
that
particular
issue.
It
is
a
pinned
issue
on
the
on
the
repo,
as
well
for
for
other
folks
to
find
again
more
easily.
B
Yeah, of course. So thanks for opening up the page, Jonas. The first one is the fix I mentioned earlier from Ali, which is to skip backing up projected volumes when using restic. As Scott mentioned earlier, there's a change in Kubernetes 1.21 to use projected volumes more frequently, and this in turn, when restoring these types of volumes, triggers a long-standing bug in restic.
B
So
with
this
change,
if
you're
using
rasik
and
backing
up
all
volumes
by
default-
and
we
will
skip
anything
that
has
a
projected
source.
So
this
is
the
fix
it's
going
to
make
it
into
161.
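A minimal sketch of the shape of that fix. The types here are simplified stand-ins for the k8s.io/api/core/v1 volume types, and the function name is illustrative, not Velero's actual code:

```go
package main

import "fmt"

// ProjectedVolumeSource stands in for the Kubernetes projected volume
// source type; a non-nil value means the volume is projected.
type ProjectedVolumeSource struct{}

// Volume is a pared-down stand-in for a pod volume.
type Volume struct {
	Name      string
	Projected *ProjectedVolumeSource
}

// volumesToBackUp returns the names of the volumes restic should back
// up, skipping any volume whose source is projected, since restoring
// a projected volume triggers the long-standing restic bug discussed
// above.
func volumesToBackUp(vols []Volume) []string {
	var names []string
	for _, v := range vols {
		if v.Projected != nil {
			continue // skip projected volumes (e.g. service account tokens in 1.21)
		}
		names = append(names, v.Name)
	}
	return names
}

func main() {
	vols := []Volume{
		{Name: "data"},
		{Name: "kube-api-access", Projected: &ProjectedVolumeSource{}},
	}
	fmt.Println(volumesToBackUp(vols)) // only the non-projected volume survives
}
```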
B
Okay,
this
was
docs
improvement
from
abby
to
sorry.
I'm
trying
to
remember
this
get
to
update
the
minio
docks
to
use
the
correct,
aws
plugin
version,
I
think
previously
it
was
using.
It
was
all
of
our
docs-
were
referencing
an
old
and
incompatible
version
of
the
aws
plugin.
So
all
the
other
version
docs
have
been
updated
so
that
the
correct
aws
plug-in
version
is
being
referenced
in
the
appropriate
version
of
the
docs.
So
thanks
abby.
B
And
this
is
another
fix
from
abby
for
improving
the
docks
around
enabling
the
api
group
versions
feature
so
yeah.
Thank
you
very
much
for
that.
B
That's
this
one
from
norwin
schneider.
This
was
an
issue
that
was
in
one
of
the
helm,
charts
where
the
the
closing
tag
and
the
conditional
was
in
the
wrong
place
and
that
was
affecting
how
volumes
were
being
mounted,
so
it
was
causing
that
an
incorrect
volumite.
So
that
was
a
fix
in
the
health
health
chart.
B
Thank
you
and
then
the
last
one
is
from
marco
for
allowing
more
more
configuration
on
setting
security
contacts
within
the
blader
pod
in
the
helm
chart
so
there's
certain
security
context,
settings
which
need
to
be
set
at
the
container
level,
so
he
refactored
the
the
helm
chart
and
how
we
set
the
security
contacts
there
to
allow
pod
level
security
contacts,
but
also
container
security
contacts
as
well.
So
that's
a
great
change.
Thank
you.
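For context on that pod-level versus container-level distinction, a pod spec splits the two roughly like this. The field names are standard Kubernetes API fields, but the specific values are illustrative, not the chart's actual defaults:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: velero
spec:
  # Pod-level security context: applies to every container in the pod.
  # Some settings (e.g. fsGroup) exist only at this level.
  securityContext:
    runAsNonRoot: true
    fsGroup: 1000
  containers:
    - name: velero
      image: velero/velero:v1.6.1
      # Container-level security context: settings such as capabilities
      # and allowPrivilegeEscalation can only be expressed here, which
      # is what the Helm chart refactor makes configurable.
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```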
A
Awesome
yeah,
thank
you
all
who
contributes
to
the
valero
project
really
really
appreciate
it.
Thank
you.
Everyone
for
the
discussions
today
have
a
fantastic
rest
of
the
week
and
see
you
all
next
week.