Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket API Review Meeting - 05 November 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
E
Oh, I made Shing the host, you know — let me reclaim host. Okay, there we go.
A
All right, so I want to quickly give an update on the progress and development that we've made, and then talk about topology.

So on Monday this week, we were talking about actually moving the project from our temporary repositories to the Kubernetes official repositories, and also about the progress of end-to-end testing. That's the progress we had at a big-picture level. Let's look at what we have today in terms of the actual components: we have, more or less, the features implemented that we wanted for the initial demo.
A
The thing that we're all focusing on right now is integrating the four components and making sure they all work together.

Srini here is working on end-to-end testing for the entire project. Maybe Srini can give an update on this pretty quickly.
F
So as part of that, the setup is the same as CSI's: it's going to install Ginkgo and run Ginkgo with the focus. The e2e run part is in place. I still only have basic skeleton code — I haven't implemented any tests yet, so there's not much in there.
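For context, a minimal sketch of the kind of Ginkgo-driven e2e skeleton being described — the package, suite, and spec names here are hypothetical, not the actual COSI test code:

```go
// e2e_suite_test.go — hypothetical skeleton, in the style of the CSI e2e suites.
package e2e_test

import (
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

// TestE2E wires Ginkgo into `go test`, so specs can be selected with
// the --ginkgo.focus flag mentioned above.
func TestE2E(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "COSI e2e suite")
}

// Placeholder spec; real tests would drive bucket provisioning end to end.
var _ = ginkgo.Describe("Bucket provisioning", func() {
	ginkgo.It("provisions a bucket for a valid request", func() {
		ginkgo.Skip("not implemented yet")
	})
})
```

Running `go test . -ginkgo.focus='Bucket provisioning'` would then exercise only the focused specs.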
A
Thanks,
so
that's
where
we
are
we're
still
on
track.
As
of
now,
the
the
bottleneck
right
now
is
moving
all
the
code
into
official
repos,
and
that
seems
to
be
the
step.
That's
taking
a
little
more
time
than
anticipated.
F
So yeah, I just wanted to say that we do not want to move the code until we have some kind of validation checks and verify scripts in place. That's the reason — if not, there is no reason for us to hold on to it. It becomes easier to review code that way. That's the reason.
A
And
also
it
becomes
easier
to
you
know
on
board,
more
people
to
start
contributing,
and
we
have
at
least
a
basic
testing
framework
in
place.
We
know
that
the
code
is
more
or
less
right.
On
that
front,
I
think
we,
one
of
the
things
we
wanted
to
talk
about
was
shiny.
There
was
something
we
wanted
to
bring
up
about.
The
spec
repo
is
that
correct,
yeah,
yeah.
F
Right
now,
currently,
I
put
out
two
pr's
in
there
one
pr
that
triggered
the
discussion
about
the
region
and
that
you're
going
to
cover
in
the
next
in
the
topology
part.
The
other
part
is
adding
approvers
for
the
spec
repo.
I
am
requesting
only
one
approver
that
would
be
sid,
and
then
I
changed
to
have
two
offers
as
reviewers,
so
that
as
we
go
along,
we
will
be.
F
We may end up making some changes to the spec, so we need a good turnaround on the repo. So please look at that PR and see if it can be merged, so that we'll have a bit of control over updating the spec. Right now I have checked in the first draft of spec.md, which is in line with the KEP, but as we go along — we talked about credentials — there might be future minor changes to the structures here and there.

So if we have a few more people — more reviewers and more approvers — then our turnaround time will come down a bit. Does that make sense?
A
If I understand it correctly — I haven't been closely following this, so I'm asking — I thought we were going to ask the people who are implementing this. We should be able to at least have someone who can approve and move this forward. Is that not it?
B
Oh, I have — so I have approved, like, all the other branches; this is the spec repo. I was going to check with Saad, because I thought the CSI spec repo has a very limited number of — yeah.

I was just wondering — what do you think, should we add more people to the spec repo as well?
A
Yeah, I think you're right, Shane, but let's go — yeah.
A
The
one
one
question
I
have,
though,
should
we
start
doing
that
after
maybe
alpha
beta
at
this
stage,
it's
kind
of
still
in
flux
and-
and
you
know
I
can
see
us
adding
and
changing
things
and
then
again
it's
not
even
alpha
so
maybe
which
you
know
it's
not
really
solidified
yet
and
I'm
sure
after
the.
B
So I reviewed — I think last time I reviewed the PR, I just had some comments because it's different from the merged KEP, right? If you have something that is exactly the same, I think that should have no problem getting merged. But if there are changes, I think we probably should still get more people to look at it.
A
I
see
I
see
what
you're
saying
yeah.
Of
course
I
mean
we.
Can
you
know
if,
if
it's
just
me
on
there,
I'm
sure
I'm
sure
I'll
I'll?
You
know
I'm
not
gonna
push
anything
in
without
without
getting
either
you
or
sorry
to
take
a
look
also.
A
Okay,
yeah,
okay,
we'll
we'll
we'll
update
the
pr.
Can
you
please
update
the
pr2.
F
And
then
please
review
that
and
we'll
take
it
from
there,
whatever
okay.
A
All
right,
okay,
so
topology,
while
looking
at
the
pr
shrink,
brought
up
a
really
good
question
about
last
thursday.
I
believe
we
had
two
fields
in
our
in
our
protocol
field
in
our
protocol
structure
inside
of
a
bucket,
and
they
were
region
and
zone
now,
region
and
zone
apply
to
all
the
cloud
providers.
However,
in
bare
metal
deployments,
region
and
zone
do
not
directly
translate,
for
instance,
in
inside
of
a
data
center
for
a
particular
organization.
A
They
are
less
likely
to
have
the
concept
of
a
region
and
so
on,
they're,
more
likely
to
have
the
concept
of
racks
and
and
people
could
come
up
with
different
ways
of
slicing
and
dicing
topologies
and
great
conversation
sued
from
from
that
from
that
discussion,
and
we
more
or
less
decided
that
we
should
implement
topologies,
just
like
csi
does
and
to
that
end
I've
just
put
together
something
and-
and
you
know,
I
want
everyone's
input
on
how
how
we
can
take
this
forward
from
this.
A
In order to facilitate this, there would be a change in the CreateBucket gRPC protocol: when we call CreateBucket, we would tell the provisioner that it needs to create a bucket that is accessible from this particular topology.
A
This
is
similar
to
what
csi
does
where,
in
case
of
a
volume
request,
there's
a
call
called
create
volume
to
the
csi
controller,
where
it
provides
a
list
of
topology
constraints
that
that
say
that
a
volume
should
be
accessible
from
something
that
satisfies
this
topology
constraint
and
as
the
provisional.
A
It's
the
provisioner's
job
to
provision
a
bucket,
that's
accessible
from
some
node
or
some
set
of
nodes
that
satisfy
the
constraint
and
then
respond
with
whatever
that
set
of
nodes
are
as
a
topology
segment
now
I'll
go
into
what
the
difference
between
a
constant
and
segment
r.
But
for
those
of
you
familiar
with
csi,
it's
exactly
the
same
difference
between
a
constant
and
segment
in
in
the
csi
spec.
A
The
topology
constraint
structure
again
modeled
after
the
csi
structure,
is
going
to
be
a
list
of
key
value
pairs
and
there's
going
to
be
two
fields
required
and
preferred
constraints
required
constraints
and
preferred
concerns
so
required
constraints
are
those
that
need
to
be
satisfied
by
the
provisioner
when
provisioning,
the
bucket
and
preferred,
which
has
to
be
a
subset
of
required,
is
preferably
satisfied
as
a
best
case
effort.
A
A segment, in contrast to the topology constraint, is just a single map of string to string that denotes the topology in which the bucket is available. So this is more or less how I'm thinking about it. One last piece to this is how COSI will know which node belongs to which topology.
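As a rough sketch of the two shapes being described — these type names are hypothetical and simply mirror the CSI-style required/preferred split; the actual COSI spec messages may differ:

```go
package cosisketch

// TopologySegment denotes one concrete topology a bucket is available in,
// e.g. {"region": "r1", "zone": "z1"}.
type TopologySegment map[string]string

// TopologyConstraint would be passed on CreateBucket to tell the provisioner
// where the bucket must (or should preferably) be accessible from.
type TopologyConstraint struct {
	// Required: segments the provisioner must satisfy when placing the bucket.
	Required []TopologySegment
	// Preferred: a subset of Required, honored on a best-effort basis.
	Preferred []TopologySegment
}
```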
A
So
we
want
to
rely
on.
We
want
to
rely
on
no
not
get
info
from
the
csi
driver
for
now,
so
the
node
agent
is
a
csi
driver
and
when
it
starts
up,
kubernetes
calls
a
function
called
node
get
info
which
responds
with
the
set
of
topology
fields
or
topology,
descriptions
that
apply
for
that
particular
node.
A
D
D
D
A
Rely
on
that
to
get
the
no
topology
for
now
again,
this
is
something
I'm
freshly
thinking
about.
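For reference, this is roughly what the CSI NodeGetInfo response being referenced carries — a sketch using the CSI Go bindings, with a made-up node ID and label keys:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func main() {
	// A CSI node plugin answers NodeGetInfo with its node ID plus the topology
	// segments (labels) that apply to that node; COSI would read the same data.
	resp := &csi.NodeGetInfoResponse{
		NodeId: "node-1", // hypothetical node name
		AccessibleTopology: &csi.Topology{
			// Keys and values are chosen by whoever deploys the driver.
			Segments: map[string]string{
				"example.com/region": "r1",
				"example.com/zone":   "z1",
			},
		},
	}
	fmt.Println(resp.GetAccessibleTopology().GetSegments())
}
```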
G
What is the exact information in those strings? Is it node names, or —?
A
No — it would look like, you know, a label. The keys are defined by the user or the admin who sets this up, and the values likewise by that admin.

So it would be something like, say, a list of entries like region: r1, then zone: z1, and then drive type: nvme, or something like that — and let's say that goes into the required constraints. So it's a list of key-values.
A
If
you
have
multiple
items
in
the
list
like
I
said,
if
it
is
a
region
r1
and
then
a
next
list
item
re
zone
z1
and
the
next
list
item
in
a
drive
type
nvme,
then
then
it's
boolean
and
operation
between
them.
That
is
all
three
should
be
satisfied
if
it
is
within
the
same
map.
A
Then
it
is
say,
region,
r1,
zone,
z1
and
drivetv.
Nvme
is
within
the
same
map.
Then
either
one
of
them
should
be
satisfied.
If
I
remember
correctly,.
A
No, I'm sure — yeah.
G
So in JSON terms it's an array of objects, yeah, and the objects are string-to-string: the string keys are property names identifying the topology type, and the value is a value. How do I — I mean, as a provisioner, how do I parse this information in my cluster, right? That's what you said, right — I need to go and maybe get the related regions somehow?
A
Yeah
so,
for
instance,
in
this
case.
A
Take
the
simpler
case:
let's
take
just
this
one,
so
in
this
case
a
volume
should
or
a
bucket
should
be
accessible
from
region,
r1
and
zone
z2.
I
don't
know
if
it's,
if
you
can
see
it
clearly,
that's
good
yeah,
now
yeah
region,
r1
and
zone
z2.
A
If
you
wanted
to
be
accessible
from
both
z2
and
z3,
then
you
would
add
another
item
to
the
list:
other
object
of
the
list
for
the
same
region
and
if
it
should
be
accessible
from
either
z2
or
z3.
I
believe.
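Put as data, the two cases just described might look like this — plain maps with hypothetical label keys, standing in for the required-constraints list:

```go
package main

import "fmt"

func main() {
	// One required segment: the bucket must be reachable from region r1, zone z2.
	single := []map[string]string{
		{"region": "r1", "zone": "z2"},
	}

	// Two segments in the required list: per the discussion above, the bucket
	// can be placed so that it is reachable from r1/z2 or from r1/z3.
	either := []map[string]string{
		{"region": "r1", "zone": "z2"},
		{"region": "r1", "zone": "z3"},
	}

	fmt.Println(single, either)
}
```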
E
To support volumes that replicate synchronously across two zones, so they're accessible from both zones. So effectively this is a way to say: I want something that is replicated and available from both at the same time.
E
Yeah, it's a two-way operation. The first part is when the node is discovered: the driver says, okay, for this node these are the labels that apply to it, and those labels can be whatever makes sense for that driver — it could be zone, region, rack, whatever. Then the second piece is when the scheduling of the volume happens — the provisioning of the volume — the Kubernetes side can say:
E
Okay,
I
want
this
volume
to
be
constrained
to
these
zones
only
or
to
this
zone
only
or
to
whatever
arbitrary
combination
of
it,
and
the
driver
must
respect
that
and
then
once
the
driver
has
provisioned
the
volume
in
the
response
to
the
create
volume
call,
it
says:
okay,
this
is
where
the
volume
is
actually
accessible
from
like
this
is
the
final
result.
You
have
a
volume
and
it
is
constrained
such
that
it's
only
accessible
from
you
know
such
and
such
zone,
or
maybe
even
a
specific
node.
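For reference, that CSI flow looks roughly like this with the CSI Go bindings — the zone key is the standard Kubernetes one, but the names and IDs are made up:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func main() {
	// Constraint sent to the driver on CreateVolume: the volume must be usable
	// from zone-a only.
	req := &csi.CreateVolumeRequest{
		Name: "example-volume", // hypothetical name
		AccessibilityRequirements: &csi.TopologyRequirement{
			Requisite: []*csi.Topology{
				{Segments: map[string]string{"topology.kubernetes.io/zone": "zone-a"}},
			},
		},
	}

	// What the driver reports back after provisioning: where the volume
	// actually ended up being accessible from.
	vol := &csi.Volume{
		VolumeId: "vol-123", // hypothetical ID
		AccessibleTopology: []*csi.Topology{
			{Segments: map[string]string{"topology.kubernetes.io/zone": "zone-a"}},
		},
	}

	fmt.Println(req.GetAccessibilityRequirements(), vol.GetAccessibleTopology())
}
```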
E
On the Kubernetes side, the scheduler makes — so if you use WaitForFirstConsumer on the PVC side, then the scheduler influences where the volume is going to be provisioned, and so as part of the scheduling decision it —
E
If
it's
a
zonal
volume,
it'll
say:
okay,
this
node
is
constrained
to
zone
a
only
I'm
going
to
pass
in
a
constraint
on
my
create
volume.
Call
that
says
I
need
a
volume
that
must
only
be
available
from
zone
a
not
any
other
zones
and
and
that's
how
it
works.
E
I
don't
think
so.
I
think
that
was
a
limitation
of
the
way
that
it's
currently
implemented
is
that
we
don't
do
a
preferred,
because
the
scheduler
only
has
a
way
to
say
this
is
what
I
require
not
anything
else,
but
I
think
preferred
is
able
to
be
set
manually
by
the
end
user.
I
forget
exactly
storage
class,
presumably
I
think
so.
Yeah.
E
It's
future
proofing
and-
and
I
think
there
was
some
like
ability
for
the
user
to
set
to
say.
Oh
you
know
I
I
want
the.
I
think
it
was
around
like
the
way
that
we
did
stateful
sets
where
you
know,
there's
a
preference
for
a
shard
to
land
somewhere,
but
not
necessarily
a
requirement
that
kind
of
thing.
A
How
do
we,
how
do
we
say,
either
or
here
in
the
requisite.
E
A
Yeah,
I
think
happens
with
any
code
that
w
right
or
any
doc.
A
Anyways
yeah
we
while
we
wait
for
the
response
there,
so
so
guy
does
this.
Does
this
kind
of
make
sense
the
explanation
for
topology.
G
Yeah
the
only
thing
I
so
I
imagine
there
will
be
the
same
process
where
cozy
identifies
the
the
node
properties.
Somehow
right,
yeah
yeah
we
can.
We
can
but
and
then
so.
Maybe
the
the
analogy
for
csi
is
that
the
administrator
tags,
all
nodes
in
that
sense
of
csi,
so
that
all
nodes
have
a
properties
related
to
the
csi
to
the
seaside,
plugins
that
the
admin
installs
is
that
how
it
works.
A
Yeah
yeah
there
is
a
standard
mechanism
that
csi
uses
for
that
and
and
we'll
be
relying
on
that
to
to
get
the
set
of
labels
for
that
node.
G
Okay,
and
so
for
cozy
will
will
we
will
we
have
some
standard
tags
for
region
and
zone
like
or
is
it
going
to
be
just
nothing
like
each
provisioner
will
have
to
so
I'm
kind
of
wondering
how
that
works.
When
you,
when
I'm
setting
up
the
a
cozy
provisioner
right,
will
I
then
have
to
tackle
nodes
or
provide
some
rules
how's
that
working.
A
Yeah,
so
we
yeah,
each
provisioner
will
have
to
understand
a
particular
set
of
constraints.
So
again,
a
bare
metal
provisioner
could
define
them
as
a
a
rack
and
either
like
a
power
distribution
unit,
or
you.
D
A
But
the
provisioner
needs
to
understand
what
the
deployment
looks
like
right.
G
Or
but
what
does
it
do
it
works
for
csi,
then
that
I'm
deploying
the
cluster,
I'm
tagging
my
nodes
being,
let's
say
racks,
for
example,
for
some
reason.
So
I
have
rack
and
I
have
like
a
to
z
or
something,
and
then
I
and
then
my
provisioners
have
to
look
up
the
the
string
rack
inside
because
they
assume
that
this
is
how
I've
tagged.
My
notes.
A
D
G
D
G
So
there's
some
relationship
in
the
deployment
of
the
cluster
versus
the
the
cozy
deployment
yeah,
the
provisioner
and
deployment
as
well
in
some
way.
Maybe
it's
configurable
or
anything
like
that,
but
yeah.
E
The driver knows whether, you know, this is single-zone or multi-zone, and so it knows how many zones it needs — whether to select one zone or two zones. From that, if you pass in a list of, you know, two zones and it needs two, it's going to use both of them, so it says it must be accessible from both. If you pass in a list with four zones but it only needs two, it is able to select any two from that list of four.
A
Yeah, that makes sense. I think in our case, since we're designing it from scratch, I would rather have all of the constraints in one place than have this outside mechanism — it's easy to get it wrong. So —
E
It
can
mean
either
or
I
mean
effectively
it
means
or
but
but
when.
E
Exactly
so,
it
basically
means,
or
so
the
the
list
of
requisite
topology
is.
These
are
all
the
zones
that
are
all
the
you
know.
Topology
segments
that
it
should
be
accessible
could
be
accessible
from
pick
one,
but
if
your
volume
requires
more
than
one,
for
example,
it's
replicated
across
two
or
replicated
across
three
and
the
caller
has
only
specified
two
or
three,
then
all
of
them
are
going
to
get
used
so
it
becomes.
This
is
the
this.
The
this
constraint
set
that
you
must
pick
from
pick
as
many.
A
— five zones, yeah. Now, when it responds, it responds with the segment — you're saying this segment. If the segment does not include all five zones, then Kubernetes will not put it in all five zones; it'll just put it in one. Now, given that a segment is just a map of string to string, we wouldn't be able to specify all five zones here.
E
You
sh
that
that's
correct,
but
I
believe
if
you
look
at
the
volume
object,
that's
returned
message
volume.
It
is
a
repeated
list
of
topology,
so
you
effectively,
you
can
send
return
more
than
one
topology
to
say.
Okay,
it's
part
of
zone,
one
and
zone
two
and
zone
three.
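That repeated field looks like this in the CSI Go bindings — a sketch with hypothetical zone names, returning a volume reported as accessible from three zones:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func main() {
	// The CSI Volume message carries a repeated accessible_topology field, so a
	// driver can report several segments for one volume (e.g. replicated zones).
	vol := &csi.Volume{
		VolumeId: "vol-multizone", // hypothetical ID
		AccessibleTopology: []*csi.Topology{
			{Segments: map[string]string{"topology.kubernetes.io/zone": "zone-1"}},
			{Segments: map[string]string{"topology.kubernetes.io/zone": "zone-2"}},
			{Segments: map[string]string{"topology.kubernetes.io/zone": "zone-3"}},
		},
	}
	fmt.Println(len(vol.GetAccessibleTopology()), "accessible topology segments")
}
```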
H
I have a question. This relates to figuring out where a bucket can be provisioned based on the node where the workload will run.
A
Yeah, I mean, the provisioner has the freedom to respond — okay, I see what you're saying. So —
A
— a system where the workload runs in Europe, and I always want to store my objects twice, once in the US and once in Europe, or whatever. Then I will have a bucket request — or I will have two bucket requests — as part of the workload deployment: in one I'll say I want it to be in the US, in the other I'll say I want it to be in Europe. How do we parse that?
G
Isn't
it
just
somehow
separate
set
of
topology
keys,
so
region
might
be
applicable
to
to
like
the
the
bucket
location,
whereas
a
site
or
something
might
be
related
to
the
node's
location?
G
And
then
maybe
I'm
not
sure,
what's
coming
back
then
from
from
the
create
bucket,
then,
but
I
can
say
well,
I
want
to
restrict
region,
but
I
don't
want
to
restrict
the
site
in
any
way.
A
When,
when
aws,
you
know
the
the
constraints
that
I've
said
are
only
on
where
the
bucket
should
be
provisioned,
I,
and
in
case
of
aws,
for
instance,
or
in
case
of
providers
where
buckets,
are
accessible
from
every
region.
Let's
say
I
think
we
can.
We
can
set
this
kind
of
requirement
that
the
provisioner
should
return
all
the
different
regions
from
where
the
bucket
is
accessible,
without
trying
to
restrict
by
preference
and
other
things.
Well,
then,.
A
— availability everywhere that this segment responds with. So if the buckets are available in just four regions, and that's the extent to which they can possibly be available, then you would respond with four regions in this segment, and it's up to the workload to choose to run in either region one or region two — and we can leave it that way.
G
And so what does the mapping between — so when we talked about it on Monday, like the "follow the pod" versus "follow the volume" — or the bucket — modes, right: when we tag these constraints like region, what does it mean for nodes? Because nodes —
A
Yeah,
I
think
that's
why
we
were
talking
about
getting
getting
the
topology
keys
from
you
know
when
the
driver
starts
on
the
node,
so
so
we
have
that
mapping
of
which
node
belongs
to
which
region-
and
you
know
whatever
other
constraints,.
G
All regions — so to which region should I assign my nodes in this case? If it's accessible everywhere, it doesn't matter, yeah. So, okay, what would the assignment be — is it an empty list? The node is assigned to an empty list of regions, and that means all regions — is that —
E
If the PV — unless the PVC is WaitForFirstConsumer —
E
It will, it will. So you could create a volume basically independent of, kind of, the scheduler, and it gets scheduled — or provisioned — wherever it does, and then the CSI driver will apply topology constraints on the PV object, saying: hey, this is only accessible from, for example, zone A. Okay — and then, when the scheduler has a pod using that volume, it says —
G
I would still look at — so with PVs, which are used like this, I'm not sure if the problem arises that we have different kinds of — maybe you answered it, but I'm not sure — so, different kinds of region zones. But when you have, like, NFS or any kind of network storage, where the target storage has its own kind of failure domains or whatever, yeah —
E
That's a good question, and we have exactly this. So on the volume side, what we said was: these are two separate problems. There's a reason we call this accessibility topology, and the reason is that it's effectively about the accessibility of that volume — whether it is actually accessible by a node or not.
E
The
second
kind
of
thing
is
storage
topology,
which
we
have
not
tackled
on
csi,
yet,
which
is
what
you're
talking
about
the
idea
that
the
volume
is
equally
accessible
from
all
nodes,
but
it
has
some
sort
of
internal
topology
where
it
makes
sense
to
influence
where
it
lands.
Maybe
an
internal
failure
domain.
That
kind
of
thing
we're
looking
at
that
now
as
part
of
kubernetes
designs,
but
it's
still
very
much
in
the
design
process.
E
Yeah — for CSI, the more immediate problem was accessibility topology, and that was tackled. The failure-domain, internal-storage-topology problem exists, but it's mostly a nice-to-have — it's like a micro-optimization kind of thing. So that's why we de-prioritized it, but we're starting to look at that now as well.
A
Accessible,
topology
and
storage.
Topology
storage
topology
would
be
sent
on
the
request,
create
I'm
thinking
this
through
as
we
go
along,
so
I'm
thinking,
maybe
storage
topology
could
be
sent
as
a
request
parameter
to
create
bucket
and
on
the
response
we
get
accessible.
Topology.
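To make the idea being floated concrete — purely a sketch of the suggestion above, not anything in the spec; every message and field name here is hypothetical:

```go
package cosisketch

// CreateBucketRequest sketch: storage topology travels as an input parameter.
type CreateBucketRequest struct {
	Name string
	// StorageTopology: where, inside the storage system, the bucket's data
	// should live (e.g. an internal failure domain). Opaque to Kubernetes.
	StorageTopology map[string]string
}

// CreateBucketResponse sketch: the provisioner reports accessible topology back.
type CreateBucketResponse struct {
	BucketID string
	// AccessibleTopology: the segments from which the provisioned bucket can
	// actually be reached; used so pods get scheduled appropriately.
	AccessibleTopology []map[string]string
}
```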
E
The
really
tricky
thing
about
storage
topology
is
that
it
could
be
completely
independent
of
the
kind
of
cluster
zone
topology
and
so
effectively.
What
we
need
is
some
mechanism
for
the
storage
system
to
be
able
to
say,
hey
kubernetes
here
is
my
internal
layout,
have
kubernetes
store
that
information
and
then
be
able
to
operate
on
it
in
the
future,
and
so,
when
you.
E
Classes
now
potentially,
but
when
you
start
getting
into
this
territory,
it's
it
comes
down
to
a
question
of.
Does
it
even
make
sense
for
kubernetes
to
be
making
these
decisions
and
start,
you
know
effectively
influencing
something
within
the
storage
system?
Where
does
the
line
draw
and
no
that's
why
I
like
that.
A
The
kubernetes
is
not
modeling
the
objects
of
the
system
itself.
The
topologies
will
be
a
set
of
keys
that
will
be
provided
by
or
understood
by
a
particular
provisioner.
As
far
as
kubernetes
is
concerned,
it's
completely
out
of
band
how
it
makes
that
happen
so
stuff.
Like
saying
I
want
a
bucket
in
region,
u.s
east
one.
A
All
that
kubernetes
will
model
in
our
case
is
accessible
topology,
which
is,
which
is
for
the
pods
to
get
scheduled
correctly.
A
Yeah, yeah — let me do this. I don't know how much time is left — yeah, we're almost done. So I'll try to put this together with the two concepts of storage and accessible topology, and I'll also try to see how I can handle the WaitForFirstConsumer kind of behavior — follow the volume versus follow the pod — and let's continue this discussion on Monday.
B
And for storage topology — we actually discussed that a little bit when we were working on the storage pool KEP, which is now the storage capacity KEP. I think, at least for that revision, we decided not to pursue it, but the volume group KEP that I'm working on is also trying to look at how to do spreading, and then we can see if that can be applied to this storage-topology-related concept. But yeah, we haven't figured it out.
A
Yeah
yeah,
I
see
okay
good
to
know
yeah,
let's
reconvene
on
monday
and
we'll
go
from
there.
Shin
will
we'll
make
we'll
update
the
full
requests
and
I
think
suny
has
already
updated.
We
will
send
the
email
and
yeah
we'll
follow
up
on
that.