Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Design Meeting - 28 April 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Sidhartha Mani (Minio)
A
Okay, here's the screen share. There you go. All right, so as of the last review, the comments were some formatting notes and some grammatical things, "move this there, move that here" kinds of things under different sections. We have, in effect, an LGTM.
A
It's not enough to merge the KEP yet, because we need an API review from Tim Hockin, or any API reviewer for that matter. But this is where we are. I think this is good progress; we have acceptance of the overall design.
A
We just need to work out the details of the API, but even there I think we should be good, because the majority of the review Michelle did was around the API. So I think we're in really good shape. We've just had to keep at it all these weeks, waiting for reviews. I think the biggest bottleneck we face right now is API review time.
A
I'm sure this is how it was when Guy was here regularly, about six months ago or longer. We're still gated on that one slow step, but it's progress each time they do get a chance; we keep moving forward. So that's where we are. We've reached out: I reached out to Tim Hockin yesterday and asked Shang as well, and she did the same, she reached out to Tim.
A
We just need to wait for him to take a look at it, and I'm going to keep pinging him. He has generally been very nice to us, in the sense that he takes time out of his busy schedule to review this and spends considerable time on it, so I'm confident he will take a look soon. I just wanted to give an update on where we are and open it up for people to ask questions.
A
I know, Guy, you haven't been here for a long time. I don't have a presentation or anything prepared to show you the current design, but if you have any questions from the KEP, or if you want me to go over it, we can do that.
A
Yeah, for sure. The biggest change we went through was that we got rid of the concept of BucketAccessRequests and BucketAccesses. So earlier we...
A
Right, okay, so let's go to the API reference. All right. The Bucket is as it always was, Guy; nothing has changed on that front. The way it's orchestrated, everything is the same. We just started calling it existingBucketID for the case where you're importing a bucket instead of provisioning it: if you're importing a bucket, one created outside of COSI, you just put its ID in there. So then we wanted to...
A
Yes, okay. So then we wanted to be more congruent with PersistentVolumeClaims and PersistentVolumes, because that's a concept and wording people in the Kubernetes community understand. So we renamed BucketRequest to BucketClaim, a claim for a bucket, but essentially it's the same thing; it is exactly the same as before, other than the name change. BucketClass remains the same. Okay, here's where the biggest change was, so one of the things we were doing...
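As a rough sketch of the renaming discussed above, a BucketClaim might look like the following; the apiVersion, group, and field names here are assumptions for illustration, so refer to the KEP's API reference for the actual schema:

```yaml
# Hypothetical sketch only; the authoritative schema is in the COSI KEP.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: my-bucket-claim
  namespace: my-app                       # namespace-scoped, like a PVC
spec:
  bucketClassName: example-bucket-class   # analogous to storageClassName
  protocols:
    - S3
```

As in the PVC/PV model, creating the claim results in a cluster-scoped Bucket being provisioned behind it.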
A
We
had
the
concept
of
a
bucket
access
request,
which
is
a
request
to
gain
access
to
a
particular
bucket
and
and
the
result
of
requesting
access
to
the
bucket
would
be
the
creation
of
a
an
actual
cluster
scope.
Bucket
access
object,
so
bucket
access
request
ended
up.
Creating
bucket
access,
just
like
bucket
request,
ended
up
creating
bucket
earlier.
We
had
that
symmetry
between
buckets
and
bucket
access
before,
but
there
was
really
no
need
to
have
that
clusterscope
resource.
A
Access is per pod, in the sense that, within a namespace, if you gave access to a particular pod, pretty much all the pods in that namespace can get access to the same secret. So it's already...
A
You
know
when
you,
when
you
give
when
you,
when
you,
when
you
give
access
to
a
or
when
you
give
access
to
a
part,
you're
really,
but
when
you
create
a
bucket
access,
the
the
resource,
you're,
really
opening
it
up
to
the
whole
name
space-
and
there
is-
I
mean
it's
a
security
flaw
to
have
it
be
accessible
from
anywhere
else.
So
so
this
is
more
resembles.
A
...this. And that way we also get rid of the reference from a cluster-scoped resource to a namespace-scoped resource, which was something we were dealing with earlier, where the BucketAccess pointed back to the BucketAccessRequest.
A
But by simplifying this, it's just more intuitive, it fits the model better, and the pod will...
A
Right now it is explicit, but eventually, when we do get to make changes in the pod spec, whenever that is, we will do it behind the scenes.
B
Okay, so right now I would write a pod spec and... what are the references from the pod spec? I mean, do I reference the BucketAccess as well, or just the secret?
A
Neither, actually. You would create a projected volume which refers to the secret. So you refer to the secret in the projected volume, the secret that is pointed to here by credentialSecretName.
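The pod wiring described here is plain Kubernetes: the pod mounts the credential secret through a projected volume and the application reads the mounted file. A sketch, where the image is a placeholder and the secret name is assumed to match the BucketAccess's credentialSecretName:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: object-store-client
  namespace: my-app
spec:
  containers:
    - name: app
      image: example.com/my-app:latest    # placeholder image
      volumeMounts:
        - name: cosi-credentials
          mountPath: /var/run/cosi        # app reads the credentials file here
          readOnly: true
  volumes:
    - name: cosi-credentials
      projected:
        sources:
          - secret:
              name: my-bucket-creds       # assumed: matches credentialSecretName
```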
A
Right, and the contents of the secret are in an API called BucketInfo; we'll go over that. We also introduced IAM-based (oh, we need to fix this spelling) authentication. For IAM-based access, what needs to be done is that a particular service account token is associated with a backend, as in a cloud service account.
A
So
any
part
that
that
is
mounted
with
that
service
account
token
will
will
will
be,
will
be
able
to
authenticate
itself,
as
that
cloud
account
whenever
it
talks
to
a
cloud
api.
A
So
if
you're
on
gk-
and
you
want
to
authenticate
yourself
as
a
user
with
access
to
a
particular
bucket,
all
you
have
to
do
is
you
know,
put
that
service
account
token
in
there
all
the
clients
for
all
the
major
cloud
providers
already
support
this
method
of
authenticating,
where,
if
you're
in
a
particular
cloud
running
in
gke,
it
looks
for
a
service
account
to
see
if
it
can
yes
to
see
if
it
is
authorized
to
do
more
stuff
than
just.
B
Okay, and so how does it work from an API perspective? I would create this BucketAccess and reference a service account name from my namespace, right? And what is the result? What does COSI hook me up with? It doesn't provide a secret then, right? Or does it?
A
So
it
so
so
yeah.
So
it's
you
still
have
to
specify
the
secret,
because
the
secret
is
here
in
the
credential
secret
name,
because
that's
how
we
we
that's
where
we
mount
the
bucket
info
object
and
the
bucket
info.
The
bucket
for
a
json
file,
has
information
about
the
bucket
name,
the
back-end
name,
authentication,
method
and
such,
and
it
has
information
for
iam
as
well
yeah.
B
Okay,
okay
and
that's
optional
for
cases,
okay,
yeah.
A
Yeah. I'll go over what we've also added. There's the BucketClass, which is the same as always, and BucketInfo; this is the BucketInfo we were talking about. We now have an authenticationType field, to represent what kind of authentication we're doing. So we have key-based authentication, which is access-key/secret-key based, and IAM, where the token is mounted. And yeah, that's about it.
B
Actually, let me see that I understand the new authentication scheme. Now a bucket is acquired by a BucketClaim, and the Bucket will handle the provisioning workflows, right? And for brownfield cases, I would import using that existingBucketID, right? That's how I would claim an existing bucket, basically. Okay, and then for every bucket that I actually want to refer to from Kubernetes...
B
...I have to either import a bucket with existingBucketID or claim a new bucket, and then I get a BucketClaim, right? And that's the starting point for anything I do in COSI. And then the BucketAccess will always refer to the claim, right? I always have to refer to a claim.
B
Yeah, and then the secret...
A
Yeah, eventually the plan is to make parameters a typed field, but for now this is what we have.
B
Okay, so let me finish up this thought. Then I create a BucketAccess, and either I just specify a secret name, which I want to use to mount the credentials into my pods, or I specify the service account name and the secret. I mean, it's a secret anyway, but the service account will allow me, instead of getting the authentication as an access key and secret key...
B
I
would
just
get
an
act
saying:
okay,
you
you
can
set
up
for
you,
so
that
this
service
account
can
use
this
bucket
right
right
and
what
is
the
and
everything
is
per
name
space
now
right.
So
I
cannot
yeah.
I
cannot
create
any
references
between
namespaces
between
these
things
between
accesses,
yes,
yeah
accessing
the
bucket
claim,
but
then
what
is
the
relation
between?
B
So
how
do
I
is
there
any
authentication,
another
thing
authorization
model
for
which
existing
buckets,
I
can
import
or
I
can
import
anything
by.
You
know
from
cozy's
perspective.
Is
there
any
yeah.
A
So
so
so
right
now
there
is,
there
is
no,
you
know
you
don't
prevent
who
can
refer
to
what
bucket,
but
but
but
there
are
two
different
efforts
that
are
going
on
for
for
this
kind
of
authorization.
Let
me
let
me
actually
pull
up
something.
So
if
we
had
written
down
in
our
talks,
you
know
I'm
also
trying
to
fully
remember
that.
A
Where is this...?
A
All
right
so
in
december,
or
something
we
did
this
in
january,
we
were
having
the
discussion
about
oh
yeah,
yeah
yeah.
So
this
is
the
discussion.
So
here
it
is
so
the
discussion
was
so
first
we
had
a
namespace
selector
based
approach,
yeah
and.
A
After that, we talked about it and said: at this point we don't need self-service bucket sharing without admin intervention; an admin can always be involved in bucket sharing. The idea behind that was that an admin would have to tell the user the name of the bucket they want to use. Without that, the user need not know what buckets exist, and there doesn't have to be a strong...
A
You
know
authorization
system
for
authorizing
which
bug
can
be
used
by
which
namespace,
but
eventually
there's
this.
You
know
two
different
options
we
can
use
to
to
do
the
authorization
one
is
regular,
kubernetes
cluster
roles
and
cluster
bindings
now
they're
granted
enough
to
say
which
service
account
can
access
which
resource
before
the
issue
we
saw
was
you
know
a
particular
service
account
could
be
restricted
only
to
a
particular
resource
type,
so
you
could
say
a
service
account
can
list
all
secrets.
A
...or all buckets, but now you can do it at the granularity of an individual bucket. Because of that, we can just rely on Kubernetes infrastructure to do it, and we've put it behind alpha, as in, we will get to it after alpha, simply to keep things simpler and get it through.
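The per-bucket granularity mentioned here is standard Kubernetes RBAC: a `resourceNames` entry in a rule restricts the listed verbs to specific named objects rather than a whole resource type. A sketch, with the COSI API group and resource name assumed for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: use-one-bucket
  namespace: my-app
rules:
  - apiGroups: ["objectstorage.k8s.io"]   # assumed COSI API group
    resources: ["bucketclaims"]
    resourceNames: ["team-a-bucket"]      # only this named object, not all claims
    verbs: ["get"]
```

Note that `resourceNames` only constrains verbs that act on a single object (`get`, `update`, `delete`, etc.); it cannot restrict `list` or `watch`.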
B
Oh, that makes sense. So what you're saying is, and maybe that's something I missed: the existingBucketID, is that the bucket name, like the cluster-scoped Bucket name, or is it a completely external identifier?
A
Yeah, it can be the bucket name. It's the ID of the bucket when the bucket is imported, when it was created outside of COSI, and this ID could be a full URL. This is what is used when making gRPC calls to the backend.
B
Right
so
the
driver
of
that
specific
I
mean
the
specific
driver
will
accept
this
identifier
and
will
resolve
it
right
right,
okay,
but
then
is
there
any
model
similar
to
how
pvcs
work
with
pv
that
you
can
connect
a
pvc
to
an
existing
pv
in
the
cluster.
A
So
we
don't
have
the
concept
of
the
strong
binding
right
like
between
pvc
and
pv
between
p,
because
pv
c's
are
mostly
read.
Write
ones
relate,
many
is
a
rare
case,
and
so
the
primary
model
in
which
pvc
pv
works
is
this
is
one
to
one.
You
don't
have
many
to
one,
even
though
it's
like
you
can't
bind
many
pv
pvcs
to
the
same
pv.
A
Right, and also, if you're creating a new bucket, that's where you would find the name.
B
Yeah,
of
course,
okay,
I
see
okay,
so
so,
basically,
this
is
how
I
it's
not
binding,
but
I
I
connect
to
an
existing
and
one
and
you're
saying
that
the
our
back
will
the
fine
grain.
Our
back
will
allow
to
to
put
some
restrictions
over
what
is
possible
for
some
namespace
to.
B
Yeah,
so
I
I
remember
with
our
back
there
are
options
of
use
right.
Is
that
the
case
that
you
wanted
to
you
know
when
arbek
you,
you
have
a.
I
forgot
how
this
is.
I
mean
how
it's
specified
exactly,
but
there's
some
use
calling
something
remember.
You
know
that
I
think
I.
B
Yeah, so there's no restriction as of now; by default there are no restrictions, but the administrator can add them. Okay, we might need to think about what the default is: no restrictions by default, or completely restricted by default with the admin then relaxing it.
A
Right, PVs by default have no restriction. And it's already restricted in the sense that it's a cluster-scoped resource, so people who shouldn't know about it won't know about it. The bucket names generated by COSI are entirely UUID-based, so they're not possible to guess.
B
I
I
don't
think
it's
a
it's
a
huge
concern,
given
that
there
is
some
mechanism
to
prevent
it.
If
you
know,
if
there's
a
some
cluster
that
is
very
concerned
about,
you
know,
multi-tenancy
right
this
way,
okay,
right
yeah,
so
I
so
first
of
all,
I
think
the
changes
are
make
a
lot
of
sense.
I
I
we.
We
had
a
lot
of
issues
with
the
with
all
these
structures,
and
I
think
it
does
make
doesn't
make
a
lot
of
sense
to
me.
B
I
I
still
need
to
digest
everything,
but
it
does
seem,
does
seem
to
fit
yeah.
B
Yeah
we
had,
if
you
remember,
we
had
the
similar
model.
We
called
the
obc
with
within.
B
...with Red Hat, and it's still being used, and to be honest, it's doing fine work for what it's worth; it's not that bad. It is modeled similarly, right, with a claim, and it was called OBC and OB, to match PVC and PV. But the access part was not there at all; it was all completely merged into the same thing.
B
So
the
the
bucket
claim
also
contained
like
the
secret
reference,
but
it
was
implicit
but
but
anyway
and.
B
Yeah, and I think that was one of the things we saw when we started talking about COSI: that access provisioning, like provisioning credentials, should be some other flow, with more attention to it. So yeah, okay, that makes sense to me.
A
Right, yeah, so that's where we are. It's actually good for the others, and for me too, to go over this once in a while, so I'm actually glad we went over it just now.
B
Were there any questions from Tim? You said that you're waiting for him, so...
A
Generally, you know, give it a day or two before pinging him again. And Fridays, I think... Shang, correct me if I'm wrong: is it fair to say Tim was mostly free on Fridays to look at this?
C
Tim, I don't know about his schedule, so... you could ping him like once a day, I think.
A
Okay,
okay,
michelle,
I
know,
is
going
to
look
at
it
on
on
friday
tomorrow
she
seems
to
be
free,
mostly
on
fridays,
so.
A
What do we want to do there? There are two things. One is to get the project up and running with the new design; the reason we haven't pushed too much on that is that the design kept changing, and we just wanted to wait until it solidified. The basics are already in there, so: get it working, and get the docs to a point where people can start writing drivers against it.
A
And once we do that, I also want to publish a blog after we go alpha. Once we do all this, I know MinIO will push behind it, and I'm sure the other companies that are participating will publish their own blogs, or we can all write one together. At the end of that we'll have more people participating, we'll get more user requests and feature requests, and we'll go from there.
A
Yeah, that's the long-term plan. So we're just waiting for this, and I think even Tim's review won't take very long this time. We have a sound API, we have one person saying "looks good to me", and all of that helps; it's much more than we had before. We also have internal consensus: we all agree on what this should be. So it's positive overall. It's further than we've ever gotten before, and we should be in a much better position this time.
B
I don't know if anybody from SIG Storage was involved, but from our teams at Red Hat there were a few discussions about it. I don't know if COSI can be relevant for their time frame, because they are working on their own timelines, but I think I mentioned it to them at some point.
A
Okay
makes
sense,
but
also
this
is
this
is
a
valid
comment.
Actually,
I
can
see
that
being
very
useful
yeah,
even
even
in
mania
we
we
do
have
multi-cluster
deployments
now
so
yeah
being
or
you
know,
in
your
customers,
locations
and,
and
something
like
this
would
would,
especially
if
you're
doing
a
cross-cluster
deployment
yeah
this
would.
This
would
be
very
useful.
A
Yeah
I
I
just
noticed
that
coming
anyway,
so
so
that's
you
know,
that's
where
we
are
and
yeah
thanks
for
coming
back
and
and
I'm.
A
Oh,
oh
he's
saying
that
so
right
now
allow
topologies
is
on
the
cluster
itself
in
the
storage
class
that
you
specify
a
lot
of
topologies.
A
It's
it's
to
say
this
storage
class
can
schedule
volumes
on
these
nodes
and
it's
it's
kind
of
a
label
for
a
node
guy
mentioned
that
it
can
be
labels
across
clusters.
A
So
so
you
can
use
it
to
say
this
pod
goes
or
you
know,
the
volumes
that
are
requested
goes
to
cluster
one
or
cluster
two
based
on
the
storage
class,
instead
of
instead
of
having
to
right
now.
The
only
way
you
can
do
it
is
just
by
having
two
different
kubernetes.
You
know.
B
They
asked
us
a
little
bit
about
it
about
things
that
can
happen
there,
and
they
also
said
that
there
is
a
lot
of
work
from
kcp
on
what
they
call
location
api,
or
something
like
that.
I
saw
like
a
very
big
discussion
about
about
that
and
it's
not
specific
to
cozy
at
all.
B
It's
related
to
any
resource
for
kcp,
so
it
seems
like
they
are
developing
right
now,
this
notion
of
location
for
kcp-
and
this
comment
was
before
before
they
even
referenced
it
for
us
just
looking
at
at
their
questions
about
what
can
be
done
in
storage
in
in
these
in
in
this
scheme
of
kcp.
So
I
just
mentioned
that
a
few
options
that
I
knew
back
then,
but
they
are
still
working
on
something
new
on
that
sense,
the
location,
api.
I
think
I
can
find
it
somewhere
in
the
kcp
prototype
channel.
A
So tell me again: kcp, is it like the Crossplane thing?
B
There
are
some
some
similarities,
of
course,
because
in
some
sense
they
they
require
you
to
expose
apis
under
like
a
new
crd
right.
So
if
you
want
to
expose
something
at
the
top
level
at
the
kcp
level,
you
need
to
define
it
a
new
api
for
it.
B
Basically,
so
let's
say
you
want
to
create
something
called
you
know
just
imagining
right,
but
a
bucket
right
from
from
kcp
level,
then
you
probably
have
some
you'd
have
a
crd
representing
a
bucket
in
kcp
level,
and
then
this
would
end
up
as
as
creating
things
on
the
specific
cluster
that
it
gets.
Provisioned
on
so
cozy
will
be,
will
be
needed
on
the
clusters
itself.
B
On
the
on
the
you
know,
on
the
leaf
clusters
right
on
the
on
the
actual
clusters
compute
clusters,
they
call
it
sometimes,
I
think,
but
there
is
still
some
extraction
of
an
api
that
you
need
to
go
through
if
you,
if
you
want
kcp,
to
be
able
to
provision
it,
and
the
reason
is
because
they
add
more
configurations
for
it
like
the
location
apis
that
they
wanted
to.
A
Yeah, good to know, yeah.
B
So
pretty
it's
pretty
much
in
the
works,
so
there
are
a
lot
of
things
happening.
I
don't
know
all
the
details.
I
I
I'm
pretty
new
to
that,
but
I
started
to
to
listen
to
their
content
from
recent
discussions.
A
Yeah
one
thing
I
keep
noticing
is,
you
know:
multi-cluster
deployments
are
becoming
more
common,
even
even
across
geographical
regions.
We
see
that
in
our
in
our
customer
locations
and
and
also
when
I
say
multi-cluster,
they
also
do
multi-cloud
multi-clusters,
so
different
infrastructures,
one
one
kubernetes
control
plane.
A
So
so
things
like
this
don't
exist
today,
but
customers
are
asking
for
it.
So
yeah
see
this
being
very
useful.
B
Yeah,
yeah
and-
and
I
think,
storage
for
them
is-
is
also
okay.
They
haven't
even
considered
it.
As
far
as
I
heard
in
the
last
two
three
weeks,
they
were
asking
a
lot
of
questions.
What
can
be
done?
What
should
be
done?
What
kind
of
things
happen
there?
I
don't
know
if
anybody,
I
didn't
see
anybody
specific
from
sig
storage
involved
in
the
kcp
prototype
work,
but
maybe
I
missed
it.
A
Well,
so
so
we
I
got
to
know
about
this
from
the
context
of
people
using
object,
storage
across
clusters.
I
think
they
should
be
focused
on
like
in
the
sense
that
I
think
in
in
such
scenarios.
A
The
reason
you
know
people
from
you
know
our
customers
are
running
into
this
issues
because,
with
with
data,
especially
if
it's
mission,
critical
data
people,
don't
put
it
in
one
cloud
at
the
risk
of
losing
it
or
having
downtime.
If
the
cloud
goes
down,
which
has
happened
many
times,
so
you
know
they,
they
do
constant
replication
like
it's
it's
asynchronous,
but
it's
constantly.
A
Say
mineo
gets
backed
up
to
aws
or
gks
or
azure,
so
so
they
want
a
single
control
plane
for
all
this
and
and
object
storage
fits
in
here
or
is
like
the
first
use
case
for
it,
mostly
because
of
the
nature
of
object,
storage
itself,
where
it's
always
over
the
network.
So
it
makes
sense
to
replicate
it
across.
A
You
know,
over
the
network
and
and
also
it's
not
tied
to
like
all
the
cloud
stock
s3
api
gke
talks,
s3
azure
talks,
s3,
so
everyone's
kind
of
standardized
on
the
s3
api,
so
multi-cluster
storage.
A
B
I
think
still
I
mean
they
will
require
file
system
block.
I
mean
pvs.
A
B
But,
and-
and
that
was
one
of
the
comments
that
you
saw
before
that
I
mentioned
also
the
volsync
option-
I
mean
any-
it
doesn't
have
to
be
the
specific
project,
but
that's
a
project
we
were
involved
in.
So
I
could
reference
it
that
that's
an
operator
like
some
controller
that
you
could
you
could
expose
as
an
api
which
provides
you
with
replication
from
a
pv
to
remote
s3,
for
example,
right.
So
things
like
that
could
could
be
involved
in
that
multi-cloud.
B
Like
you
said,
you
need
some
form
of
setting
it
up,
control,
plane,
apis
that
that
makes
sense
to
deploy
on
your
clusters.
So
you
need
the
operators
that
those
controls
to
be
available
on
your
clusters
on
your
compute
clusters,
then,
and
then
some
apis
that
that
the
control
plane
can
push
down
to
those
clusters
and
they
make
sense
right.
So
there's
a
lot
of
things
moving
on
here
in
order.
A
Yeah, and thanks for the introduction, because I had seen this project before but didn't know the direction it was heading. So this is good to know. Cool, yeah, that's about it. We have nine minutes left; unless there are any other questions, we'll meet again next week, and I'll keep following up with Tim and try to get an API review.
A
I'll also try to ask Michelle to take a look; she wanted minor changes to the wording and such, and I'll make sure she gives an LGTM too. That's all there is for now. The discussion on the design has not changed much; we're all in agreement with what we decided a few weeks back, we're going to keep it that way for now, and then go based on the comments from Tim.
A
Yeah, we'll have it then in that case. And Michelle doesn't have to join; if she joins, that's good, and we can record it; if not, we'll just take notes.
C
Okay, yeah. Then I think the only other thing is: you can't share your screen, but you can just give people the link, and then everyone can look at the document themselves while you talk. I think that should be... sure, yeah.