From YouTube: Velero Community Meeting - Nov 24, 2021
C
Okay, let's start here. Hello, everyone, welcome to the Velero community meeting for November 24th. Let's first update status and then dive into the discussion topics.
D
Fixed a few minor issues and also merged the PRs for CSI support, CSI support for the AWS and GCP plugins.
D
So if there's any community user, I mean, I've tested in the lab, but if there are any community users who have a real-world workload that uses a CSI driver on AWS or GCP to provision volumes, please help us test them. And I'm working on the issue.
B
Yeah, so I'm just putting together PRs for some of the upload progress monitoring stuff, and I had an idea for how to do multi-tenancy on a per-namespace basis. So I spent some time writing that up, and I have a draft proposal as a PR; we can discuss it if we feel like it during the discussion.
D
Also, there was an issue opened earlier this year, and the user said that when he was restoring a v1beta1 MutatingWebhookConfiguration into a 1.19 cluster, it had a failure like this.
D
This is because in 1.19, when Velero does backup and restore, although the resource was created using the v1beta1 API version, the preferred version on the cluster is v1. So when Velero does the backup, the resource is converted to v1. However, after the conversion, it has an unsupported value. So I did some quick tests.
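For context, a minimal sketch of the kind of field that can break here. The transcript doesn't name the field, but in admissionregistration.k8s.io, v1beta1 allowed (and defaulted to) `sideEffects` values that the v1 API rejects, so an object created at v1beta1 and then served at the preferred v1 version can carry a value v1 itself would refuse on create. Assuming `sideEffects` is the unsupported value in question:

```go
package main

import "fmt"

// v1beta1 allowed sideEffects values "Unknown" and "Some" (and defaulted
// to "Unknown"); admissionregistration.k8s.io/v1 only accepts "None" and
// "NoneOnDryRun". A webhook configuration created at v1beta1 and then
// served at v1 can therefore carry a value that fails v1 validation.
func validV1SideEffects(v string) bool {
	return v == "None" || v == "NoneOnDryRun"
}

func main() {
	// Value carried over from a v1beta1-created webhook configuration.
	fmt.Println(validV1SideEffects("Unknown")) // false: rejected by the v1 API
	fmt.Println(validV1SideEffects("None"))    // true
}
```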
D
If I create a...
D
You can see that, as expected, the API server will serve the v1 version of the resource, because this is the preferred version, but it has this unsupported value.
D
I just want to confirm that. That's it, I mean.
D
If we try to fetch it as v1beta1...
D
So if we output this one to a YAML file, this will be... I mean, we can use this YAML to successfully create the resource. However, if we use the v1...
D
Yeah, yeah. I tried to talk to Nolan last night, but he's not available. So, considering that the v1beta1 version will be removed in newer versions of Kubernetes, I don't think it's worth it to chat with the Kubernetes community about this issue. It seems there's some error when the API server is doing the conversion.
D
Yeah, so I think the Kubernetes logic is that it will always serve... until you explicitly tell it to serve the version you want, like this, it will always serve the preferred version, which is the latest supported version, so in this case it will serve v1. This is expected. I think the problem is that when it's doing this conversion, it should not contain the invalid attribute. I mean, the...
D
...the value of the attribute. But, however, we still want to fix this issue. I mean, we already have this EnableAPIGroupVersions feature gate to mitigate the problem, but we still want it to work out of the box.
D
We should be able to back up and restore the v1beta1 webhook configuration, and there's a user who proposed we should back up the resource as-is. I mean, the question is...
B
Yeah, yeah. Is it like a special case, so we put that into the main Velero code as a compiled-in option there? Because the other option is... so, if you run it with the feature gate turned on, the API group versions feature turned on, does it work?
D
You have to, but you have to do that config map. Yeah, this guy mentioned that will work; he has it enabled using the v1beta1 config map. But I think the reason this one is prioritized is that TMC hit the issue, so I don't think, by default, we should ask TMC to always create such a config map. I mean, for a user in a particular case, he can use this as a workaround, but generally we should still fix it in Velero.
B
Yeah, I think... well, yeah. So one thing is the feature group, the... the feature flag: we should really be getting rid of that. This API group versioning should become a standard feature, not just a feature flag, so we should be looking to make that part of the mainline.
D
I think that's a separate issue, and I have a slightly different opinion, because if you enable this one, and I checked the documentation, it will force Velero to back up multiple versions of a resource, but this is not necessary in most cases.
B
But it might actually be good, because that means we'd be able to... so in that case, we could actually tell which one was available, and we could tell the, you know... I wonder if this would... so it might make sense: there's a preferred version, but also that configuration tells you which one was used to create it, right?
B
So when you got it, there was that... I figured with the string, exactly, but it showed how it was created and what version of the API was used to create it, and maybe the right thing to do is to create it the same way it was originally created.
D
So that's another... I don't think we need the other flag, because with this feature gate plus this config map, we have enough flexibility to, you know, maneuver the version.
D
I think first, let me fix the issue by adding a BackupItemAction plugin to back up the original version. I think we should double-check, because in other cases this may fail. For example, if we back up this resource and try to migrate to a newer version where the v1beta1 resource is removed...
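As a rough sketch of what such a plugin's remediation could look like on the restore side, operating on the unstructured object a Velero item action receives. The field (`sideEffects`) and the replacement value (`None`) are assumptions for illustration, not the actual fix that was eventually merged:

```go
package main

import "fmt"

// remediateSideEffects rewrites sideEffects values that v1beta1 allowed
// but admissionregistration.k8s.io/v1 rejects. The map layout mirrors the
// unstructured content Velero plugins operate on.
func remediateSideEffects(obj map[string]interface{}) {
	webhooks, ok := obj["webhooks"].([]interface{})
	if !ok {
		return
	}
	for _, w := range webhooks {
		hook, ok := w.(map[string]interface{})
		if !ok {
			continue
		}
		switch hook["sideEffects"] {
		case "Unknown", "Some": // legal in v1beta1, rejected by v1
			hook["sideEffects"] = "None"
		}
	}
}

func main() {
	obj := map[string]interface{}{
		"apiVersion": "admissionregistration.k8s.io/v1",
		"kind":       "MutatingWebhookConfiguration",
		"webhooks": []interface{}{
			map[string]interface{}{"name": "hook-a", "sideEffects": "Unknown"},
		},
	}
	remediateSideEffects(obj)
	fmt.Println(obj["webhooks"].([]interface{})[0].(map[string]interface{})["sideEffects"])
}
```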
B
Well, but the... so the feature group, the... I can't say it right, but the API group versions feature, that's supposed to handle the upgrade case.
B
You're saying Kubernetes will handle that? No, ours; that feature that's currently optional, but it should really be part of our mainstream features. So either it works and we put it in, or it doesn't work and we take it out, one or the other. I don't like this feature flag thing, because if it's working, it should be a feature, you know, it should be part of... it should be controllable. Maybe, if it needs to be turned off, it should be controllable on a per-backup basis, but not feature flags.
B
You know, it's been in there for a couple of releases now, but we're not really... you know, it's going to continue. Having this optional feature means that it doesn't always get tested. I mean, there is a test for it explicitly, but everything else doesn't get tested with it, right? Okay, so feature flags should be something that are there temporarily, and our plan is to remove them, and not to remove the feature but to enable the feature everywhere.
D
So do you think it's worth it to, you know, chat with the other guys about this bug in Kubernetes? I'm not sure. How often does this happen? I assume it should rarely happen, right? So...
B
It sounds like it's happening twice, though, in two different places. Then, yeah, we can try and push it back. Well, I suppose we probably need to go and take it to the Kubernetes core folks and just let them know, and maybe open a bug for them and just report what's happening and see what they say.
B
If they do fix it, you know, or happen to fix it in the future, that still requires a Kubernetes upgrade in order for it to work for us, right? Right, right. So I think fixing it in our code one way or the other, you know, like if you put together either a restore or a backup item action... I think the RestoreItemAction is actually the right thing, to handle it on the restore side. So then any backups that are there, that are broken now, would get better, right?
B
There's another... so TMC is reporting an issue with Azure. It's not this issue, though it's similar. It's exactly the same issue, I mean. But is it... but it is exactly the same resource, yeah? Okay. So then, if you fix that... so the fix for this will fix their issue as well. Right, okay, and that's easy. So at least there aren't two instances of it. Yep.
D
Okay, are there any other comments on this? If not, there's another discussion item I put in. I think I just want to point out, as Bruce is migrating the CRDs from using kubebuilder v2 to v3, that we will only be shipping v1 CRDs for Velero in the future. So that means, starting from version 1.8, Velero will only work on 1.16 and later versions of Kubernetes.
D
I think we're going to make sure this is mentioned in the release notes for 1.8.
B
So, the way we've been thinking about it, there are basically two ways to do multi-tenancy in Kubernetes. One is to do a Kubernetes cluster per group, and that we already handle, because you just install Velero in the cluster. But then there are other people who are building out large Kubernetes clusters, and they want them to be broken up for different groups, and they'll assign each group or user a namespace, and they only really get rights to run things and do things inside that namespace.
B
And so there's been an ask for, hey, how could you support this? So the idea that I had was that we should have this...
B
We could have this multi-tenancy on a per-namespace basis: get the same level of... so we want to have the same level of isolation that Kubernetes RBAC is currently providing, so we don't need to give more than is already provided, but we shouldn't provide less, and get as many of the features working as we can, and then also be able to support multiple Velero instances running in a cluster. So this approach has a Velero instance that is then able to be controlled by multiple users.
B
Okay, good, so here we can actually see the diagrams. So currently the Velero model for security is all about who can read and write resources in the Velero namespace. If you can write a resource into the Velero namespace, you get to control Velero: you can write a backup resource, you can write a restore resource, you can write a delete-backup resource, and you can, you know, read statuses and so forth from it, and that's all controlled by the Kubernetes permissions.
B
Whether or not you have permission to access that namespace. And in fact, you know, your permissions as a user might be that you can only access the namespace; Velero could actually be running with higher permissions, such as cluster-admin, and it will do things on your behalf even though you don't actually have cluster-admin. It's kind of an interesting thing there.
B
So that's kind of our current model, and that means that anybody who has this Velero admin credential can do any Velero operation: they can delete all of the backups, they can restore into any namespace, they can even create new namespaces and restore into them using Velero. They can overwrite the entire cluster, or at least restore into the cluster; we don't do overwrite right now, but anything that wasn't there would get recreated. CRDs can be recreated, and so forth.
B
So I was thinking about this, and we've talked about being able to do self-service backup and restore, which means that you can back up and restore your namespace, but not somebody else's, and not interfere with their stuff. And the thought I had was that we could do this by changing Velero a bit, so it has like a multi-tenancy mode, and in the multi-tenancy mode, Velero can watch for resources in other namespaces. And what you would do, as the user, to enable this is...
B
You would need to create a BackupStorageLocation, and you would also need to create a BackupStorageLocation credential. So the credential would be, like, your S3 bucket and, you know, username/password-type stuff, and that way, in order to access the bucket, you have to show that you have access to the credentials. And kind of the thinking that I'm going through here is that we don't want Velero to start implementing its own permission model and RBAC control; what we want to do is to continue to use the existing Kubernetes stuff.
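As a rough illustration, the existing BackupStorageLocation API already supports a per-location credential reference (`spec.credential`, a secret key selector), which is roughly the piece this proposal would place in each tenant's namespace. The names and the tenant namespace here are made up for the example:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: team-a-bsl
  namespace: team-a          # the tenant's own namespace, per the proposal
spec:
  provider: aws
  objectStorage:
    bucket: team-a-backups   # a bucket only this team can access
  credential:
    name: team-a-bsl-creds   # Secret in the same namespace
    key: cloud
```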
B
You know, it can't back up the cluster resources, and we could enforce the namespace that we're backing up, and then each user can have a separate BackupStorageLocation. So they can each have their own bucket to go to, and they wouldn't see each other's backups, and the only place they can restore is into their namespace.
B
So if they wanted... so they might have, like, a new namespace, an empty namespace that's created; they'd have to configure it with the BackupStorageLocation and the credential, and then the inventory gets filled in, and then they can restore into it.
D
...the bucket and user one somehow. So the only protection is the S3 credential? Like, if he knows user two's cloud credential, he will be able to restore on another cluster? Yes, and that's...
B
...currently the case, yeah. Yeah, so definitely the credential needs to be secret, but that's already the case. I mean, if they get your credential, they can access your S3 bucket any different way. And we could potentially have things like encryption and per-user keys, and then you need both the credential and the key, so that could be an option.
A
And another question: if there are multiple Velero instances in a single cluster, how does each Velero instance know which user namespaces it needs to monitor?
B
Yeah, so I was thinking that, one, there's a mode, so first off you'd have to enable this Velero to run in multi-tenant mode. And two, and I realized I need to add this to the document, I was writing it in the Slack channel, and two: there would be a control in each Velero install as to which namespaces it should monitor. So you could write, like, monitor everything, or you could write...
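The per-instance namespace control described above could be as simple as a watch list with a wildcard. The option name and semantics here are assumptions for illustration, not an existing Velero setting:

```go
package main

import "fmt"

// monitors reports whether a Velero instance configured with the given
// watch list should handle resources in namespace ns. "*" means monitor
// everything; otherwise only the listed namespaces are watched.
func monitors(watched []string, ns string) bool {
	for _, w := range watched {
		if w == "*" || w == ns {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(monitors([]string{"*"}, "team-a"))                // true
	fmt.Println(monitors([]string{"team-a", "team-b"}, "team-c")) // false
}
```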
D
Another issue I can think of right now is performance, because users would write backup resources in their own namespaces, you know, concurrently, but Velero is still a single-threaded model. Yes.
B
Yes, definitely, and it's actually listed down at the bottom of this: we probably need to implement parallelism, because it's not just performance, it's also, like, denial of service, right? So one user could basically lock the system up by running a long-running backup. Yeah, so we'll probably need some form of parallelism before we could implement this. So this is like a very early design proposal, and definitely, you know, there's room to add comments. It's not like, hey, this is something we're going to do, but we've been talking about this for a while.
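The single-threaded model means one long backup blocks everyone, while fully unconstrained parallelism raises the throttling concern mentioned later. A bounded worker pool is one way to get both: backups run concurrently, but never more than a fixed number at once. This is a generic sketch, not Velero's actual controller code, and the limit of 2 is arbitrary:

```go
package main

import (
	"fmt"
	"sync"
)

// runBackups processes every named backup concurrently, but the
// semaphore channel caps how many run at the same time.
func runBackups(names []string, limit int) []string {
	sem := make(chan struct{}, limit)
	var mu sync.Mutex
	var done []string
	var wg sync.WaitGroup
	for _, n := range names {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			sem <- struct{}{}        // blocks while `limit` backups are in flight
			defer func() { <-sem }() // release the slot when this backup finishes
			mu.Lock()
			done = append(done, n) // stand-in for the actual backup work
			mu.Unlock()
		}(n)
	}
	wg.Wait()
	return done
}

func main() {
	fmt.Println(len(runBackups([]string{"a", "b", "c", "d"}, 2)))
}
```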
A
Before... before we can, I mean, reach this multi-tenancy...
B
...mode, yup. And so that's something that, you know, came out of this discussion that we probably hadn't been thinking about before. So it's like, oh yeah, okay, so if we want multi-tenancy, we have to do, you know, parallel backups. Yeah, and there are some (what happened with my formatting here?) there are some weirdnesses around snapshots and credentials for the snapshots that would need to be thought through.
B
Well, so if we install it in each one... so right now, Velero requires cluster-admin permissions in order to do backups, so we could consider having a mode where it doesn't need those, but we don't have it at the moment. So that's another option, but then we also would not have, for example, like a central...
B
...we aren't sharing resources; each user has their own instance of Velero, and they'd have to have credentials for... well, actually, one of the issues with the snapshotting is it kind of cuts both ways. So I guess with CSI snapshotting, Velero doesn't need a credential for the storage system; with the legacy snapshotters, it does need a credential, and that means that each user would have to have permission to access the storage system and take snapshots, and so they could potentially interfere with each other that way.
B
And if we had multiple Veleros installed, then you get the parallelism, but you also lose control. I mean, if everybody starts a backup simultaneously, maybe that's too much for the system, and you don't have a way to throttle that. So those are things to consider.
B
So anyway, this is here; feel free to look it over and add comments, and we'll kind of work on it for a while. I don't think this is due in any, you know, particular release, so this PR can sit out for a while as just a design proposal.
C
Does anyone have any questions for this meeting?