Description
Kubernetes Storage Special-Interest-Group (SIG) Object Bucket Standup Meeting - 02 November 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
So today is our engineering sync-up. The plan for today is, first, to figure out where we are in terms of developing our demo. I want to reiterate that this demo is a development milestone for us: we want to show a demo once we have developed the product, to a certain extent, in a production-ready manner.
A
So
for
this
demo
we
want
to
show
a
simple
green
field,
provision
bucket
or
create
bucket
and
grant
access
for
that
bucket
and
put
that
bucket
into
a
pot.
These
are
the
three
use
cases
and
we
are
ending
the
we're
nearing
the
end
of
the
year.
So
one
of
the
things
I
want
to
I
want
to
figure
out
is,
if
it's
possible,
to
hit
this
milestone
before
say
the
second
week
of
december.
A
We are very close, but a few new things have come up, like migrating our project to the kubernetes-sigs repo.
A
So this is where we were in terms of development. Last week, on the 26th, the tasks in green had been completed.
B
Sorry, no, they're not finished; I still have to do those too. It's not a whole lot of work and I can do it, but right now I'm trying to do other things, so I've put it on the back burner.
A
Okay
yeah,
I
understand
you're
working
with
the
moving
moving
the
code
from
our
temporary
repo
to
the
actual
official
ones.
So
I
think
that's
a
higher
priority
yeah,
so
so
that's
fine
is
is
rob
on
the
call.
A
No,
it's
okay,
so
rob
made
another
pr
this
week
or
last
week,
and
that
was
about
fixing
some
some
updates
in
the
sidecar
controller,
where,
when
a
bucket
is
created,
he
added
code
to
make
sure
that
things
like
region
and
zone
were
passed
into
the
actual
back
end.
A
So
that
reminds
me
so
so
srini
you
made
a
pull
request
to
the
upstream
api
repo,
the
the
official
repo
where
it
looked
like.
We
had
added
a
few
fields
that
were
not
in
the
cap.
B
Right. There are a few things that are... you're talking about the API, right?
B
That
was
no
no
that's
back.
Spec
shing
has
commented
out,
commented
on
the
pr
and
she
said
that
yeah
we
made
the
change
to
use
name
instead
of
what
we
had
right
now
say
api.
We
fixed
the
api
in
the
in
the
cap,
but
we
did
not
update
the
specs.
So
I
need
to
update
that.
A
You
mean
in
the
cap,
so
the
cap
has
the
updated
api,
but
not
the
updated
spec.
I
see
okay,
okay,
yeah
yeah.
I
saw
the
comment
by
saying
I
added
a
few
comments
too,
but
that
that
seems
like
a
fair
explanation.
Do
you
remember
the
reasons
for
putting
the
region?
A
I
mean,
I
think
I
remember
the
reason
for
putting
region
and
zone
as
as
top
level
fields,
because
we
checked
the
three
cloud
providers
and
we
checked
a
few
implementations
of
the
s3
protocol
and
they
all
had
region
and
zone,
so
it
made
sense
to
put
it
as
a
top-level
field.
Correct.
B
Yeah, it kind of makes sense, but we haven't actually made that decision, so I'm confused.
A
Okay, let's quickly resolve this: are we saying that we should have region and zone as top-level fields in the spec?
C
Yeah, those two could have other names; it could be something like rack. I'd suggest you actually take a look at the...
A
Yeah, in object storage. What happened?
A
Xing just brought up a great use case. From MinIO's perspective, we have some deployments, especially with web-scale companies, where they have multiple clusters of MinIO, each on a separate rack, and some set of workloads is configured to talk to the MinIO on the closest rack, for instance. One of the ways the big customers do it is to have three different Kubernetes clusters, one for each of the racks, and they just route their applications to talk to the one MinIO service inside their Kubernetes cluster.
A
Now, with something like topology awareness, we can specifically tell the system to provision a bucket from a particular topology, and not have to deal with the overhead of managing multiple Kubernetes clusters as in this case. I'm sure there are different solutions for this problem, but I think topology would be one way of addressing it.
D
I think it's a good idea. Also, from the cloud's perspective, there's more than just regions.
E
Way more generic: I totally agree that region and zone are too specific, and you want just a generic thing like we have with CSI.
E
I think that's not controversial. But when you say top-level field: why wouldn't it be on the bucket class or the bucket access class? Why would it be on a specific object? That's the piece that's confusing me.
A
Oh, it's not. So what do you mean by specific object? Do you mean...
A
The spec? But from the API, it will come from the class.
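To make the "generic, like CSI" idea concrete, here is a minimal sketch of topology expressed as generic key/value segments carried on a class object, rather than as dedicated region/zone fields. All type names and label keys are hypothetical illustrations, not the actual COSI API:

```go
package main

import "fmt"

// TopologySegment is a generic key/value location constraint, modeled
// loosely on CSI topology segments. The label keys are hypothetical.
type TopologySegment map[string]string

// BucketClass sketches a class object that carries topology constraints,
// so region and zone are just ordinary segment keys, not top-level fields.
type BucketClass struct {
	Name              string
	Provisioner       string
	AllowedTopologies []TopologySegment // empty means "anywhere"
}

func main() {
	// A rack-based deployment (like the MinIO example above) can use its
	// own key instead of region/zone.
	class := BucketClass{
		Name:        "fast-local",
		Provisioner: "objectstorage.example.com",
		AllowedTopologies: []TopologySegment{
			{"topology.example.com/rack": "rack-1"},
		},
	}
	fmt.Printf("%+v\n", class)
}
```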
D
I think, if it makes sense that region and zone would be implemented as layers in these topologies, then we just want to make sure that this kind of use case works, right? But other than that, I'm for more flexible ways of defining these location constraints. In other words, I think they called it location constraints.
A
Oh, I see what you're saying. So in the case of CSI, there are some differences here. One is that this is over the network and not locally attached; the volume is not attached to a particular node, but...
A
Yeah, I think both kind of go together. So what do you mean by inverse? To me it seems like they go hand in hand.
A
So do you mean that when a workload requests... oh, okay, are you saying that when a workload requests a bucket, we respond saying these are all the nodes on which you can run?
A
Okay, do we want to design it like that? It's an honest question.

F
I think these are two distinct topics. One is about where you want to provision; the other is, once provisioned, greenfield or brownfield, where can you access it from, right?
A
So, okay, to start with, I'm for having this kind of generic topology constraint for the provisioning of the bucket itself. In terms of access, I'm just worried about the complexity of doing this, because on one hand it's possible to achieve this kind of constraint by setting affinity rules on the pod and saying that the bucket is accessible from anywhere.
A
So
so
like,
if
you
look
at,
if
you
look
at
policies
on
a
bucket
they're,
never
based
on
the
client,
that's
going
to
access
them.
D
So, in Kubernetes, are there generic networking abilities to define these, or...?
E
No, no, the CSI provisioner. If the provisioner says that it has the topology capability, then it must respect the requirements that are passed in, or fail if it cannot. Additionally, if you give it multiple choices, like say put it in A, B, or C, it has to tell you which one it actually chose.
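For reference, the CSI contract being described can be sketched as Go types, simplified from CSI's TopologyRequirement; a COSI equivalent would be hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// Topology is one placement option, expressed as key/value segments.
type Topology map[string]string

// TopologyRequirement is modeled on the CSI CreateVolume request: the
// result must live in at least one requisite topology, and preferred
// topologies should be tried first, in order.
type TopologyRequirement struct {
	Requisite []Topology
	Preferred []Topology
}

// provision simulates a plugin that advertises the topology capability:
// it picks a topology it can satisfy (trying preferred first), reports
// its choice, or fails if no requisite topology works.
func provision(req TopologyRequirement, canPlace func(Topology) bool) (Topology, error) {
	candidates := append(append([]Topology{}, req.Preferred...), req.Requisite...)
	for _, t := range candidates {
		if canPlace(t) {
			return t, nil // the response reports which topology was chosen
		}
	}
	return nil, errors.New("no requisite topology can be satisfied")
}

func main() {
	req := TopologyRequirement{
		Requisite: []Topology{{"rack": "a"}, {"rack": "b"}, {"rack": "c"}},
		Preferred: []Topology{{"rack": "b"}},
	}
	chosen, err := provision(req, func(t Topology) bool { return t["rack"] != "a" })
	fmt.Println(chosen, err) // map[rack:b] <nil>
}
```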
A
I'm talking about the provisioner, the CSI controller, which gets the CreateVolume call.
A
I think it does, yeah. So again, okay, let's say we want the exact same behavior. I'm thinking about this in terms of implementation and how we want to plan it. How does this sound (and I'm open to ideas): to begin with, we only focus on the topology constraints while provisioning.
A
Once we have that down, and once we have our agent in the kubelet, then we can get into scheduling based on topology constraints: scheduling of workloads that are going to use buckets of a particular type.
D
You first want to affect the bucket creation and pass on the information about topology, right? But you don't want to handle the bucket access piece, where you need to affect the pods and integrate the actual logic that carries the topology information from the bucket to the access, right?
D
They just need to match in the simple case, or maybe, you know, maybe they...
D
I'll tell you why: it would be a feedback loop that allows you not only to specify where the bucket should exist when you create it, but also to tell you where you should be spinning up your pods so that it will be accessible.
D
Yes, yes. That means information about how to parse the topology and apply it to pods, right? Because there might be some... well, maybe not, but we might have something like that in the class. But you're...
A
Yeah, I buy that argument. That is, we might have different ways to access the same bucket within the list of allowed machines it can run on, and that could be encoded in the bucket access class.
A
So, for instance: my bucket is available to the entire Kubernetes cluster, but the high-throughput storage applications get to go on rack one, while everything else can run on anything but rack one. That kind of logic.
E
Yeah, but play it through. You either have a bucket class that says put it in rack one, in which case the provisioner either can or cannot do that: if it can, it will say so, and if it can't, you'll get an error, so somehow it ends up on rack one. Or you have the bucket...
A
...accessible from both rack one and, say, rack two, and it's just that the rack-two applications treat it as data that doesn't have to be high-throughput. And, you know...
E
So what you do is arrange for the bucket class to say: hard requirement, it has to be in rack one... or sorry: hard requirement, it has to be accessible from everywhere; soft requirement, it should be in rack one.
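Encoded as data, that hard/soft split might look like the following sketch, with hypothetical field names: the hard requirement maps to requisite topologies, the soft one to preferred topologies.

```go
package main

import "fmt"

// Topology is a set of key/value location segments (hypothetical keys).
type Topology map[string]string

// PlacementSpec sketches a bucket class carrying a hard/soft split:
// Requisite must hold or provisioning fails; Preferred is best effort.
type PlacementSpec struct {
	Requisite []Topology
	Preferred []Topology
}

func main() {
	// Hard requirement: accessible from everywhere (an empty segment set,
	// i.e. no restriction). Soft requirement: it should be in rack one.
	spec := PlacementSpec{
		Requisite: []Topology{{}},
		Preferred: []Topology{{"topology.example.com/rack": "rack-1"}},
	}
	fmt.Printf("%+v\n", spec)
}
```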
D
I agree that this is how you do it as an admin, right? He would take it and start administering everything and making it right. But on the other hand, what we're saying when we allow COSI to apply these affinities is that suddenly we allow automation from COSI's side onto the workloads, where maybe the workloads don't care, right? Maybe the administrator does not want something automating access based on access requirements. So, yeah.
E
...one, and it's going to be slow for the other guys, and nothing you can do with access classes is going to change that fact. And if you want to put it in rack two, or have another bucket that's fast for the rack-two guys, then you can provision another bucket and ensure that it ends up there. The only thing that Kubernetes is going to care about is: can I attach to it or not? And then, if I want to be close to it, where is it?
D
Who does? I mean, like...
D
Yeah, so what I'm asking is: as an admin, I would have to apply these topologies or affinities on the workloads to make that happen, right? Yeah.
E
...the volume follows the pod, by setting the volume to wait for first consumer: it lets the pod get scheduled first and then tries to place the volume, yeah.
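For context, the CSI mechanism referenced here is the StorageClass volumeBindingMode: with WaitForFirstConsumer, volume placement is deferred until a pod using the claim is scheduled, so the volume follows the pod. A minimal construction using the real Kubernetes API types (the class and driver names are illustrative):

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Defer volume binding until a consuming pod is scheduled, letting
	// the pod's placement drive where the volume is provisioned.
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "topology-aware"},
		Provisioner:       "csi.example.com", // illustrative driver name
		VolumeBindingMode: &mode,
	}
	fmt.Println(sc.Name, *sc.VolumeBindingMode)
}
```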
E
...out of scope for what we're responsible for. We are not going to go change how the pod scheduler works as part of COSI, right? It would be nice if it were possible to do so, but that's not our job. Our job is just to provide the information, so that whoever wants to modify the pod scheduler has all of the information available to do that.
D
So, a few things. I don't mind; I mean, I think it's okay to leave this out. But I'm kind of questioning it: we are injecting information into the pods, right? We're getting into a close encounter with setting the pods' information so that things work as intended by the provider. I'm not saying it should be inside or not, but it's a gray area; that's where my thinking is.
E
The other thing to bear in mind is that schedulers are entirely pluggable in Kubernetes anyway, and if someone is using anything other than the standard scheduler, then it kind of doesn't matter what we do, because it's going to get ignored. I just really feel that pod scheduling is out of scope, and we should make sure the information is reflected onto the object, so that if someone wanted to write a scheduler that took that information into account, it would be available to them. But that's...
E
That makes sense, good. And the last thing I'll say is: that's the bucket, not the access, right? You know where the bucket is; you put the information on the bucket object, and then a scheduler has access to that information if it wants to make pod-scheduling decisions based on it.
A
Right, right. So the last point you made is what I wanted to reiterate: where you can access the bucket from lives on the bucket itself.
E
And maybe we can have preferred topologies on the bucket itself as well, so that you can express the fact that it is accessible from everywhere but faster in zone one, or rack one, whatever you call it. Then, if someone wanted to write a scheduler that says put the pod in the fast place, that information is available to do that. Yeah.
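Putting both the accessibility and the preference information on the bucket object itself might look like this sketch. Field names and label keys are illustrative, not from the COSI API:

```go
package main

import "fmt"

// Topology is a set of key/value location segments (hypothetical keys).
type Topology map[string]string

// BucketStatus sketches recording, on the bucket object itself, both
// where the bucket is reachable from and where access is fastest, so a
// custom scheduler could consume it when placing pods.
type BucketStatus struct {
	AccessibleTopologies []Topology // where the bucket can be reached
	PreferredTopologies  []Topology // where access is fastest
}

func main() {
	status := BucketStatus{
		AccessibleTopologies: []Topology{{}}, // empty segments: everywhere
		PreferredTopologies:  []Topology{{"topology.example.com/rack": "rack-1"}},
	}
	// A topology-aware scheduler plugin could score nodes higher when
	// their labels match one of the preferred topologies.
	fmt.Printf("%+v\n", status)
}
```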
A
Yeah, that makes total sense. Actually, I think we should flesh this out a little more. The first question I still have, though, that we don't have an answer for yet, is how much of this we really need, and I'm doing this exercise as a way of making sure.
A
So
so
I
just
want
to
do
this
exercise
once
of
how
much
of
this
do
we
really
need,
and
if
there
is
something
we
can
cut
down,
what
is
it
so
right
now,
we've
run
out
of
time.
So
let's
do
that
on
thursday,
but
but
that's
something
we
should
definitely
discuss
before
we
take
on
the
full
scope
of
this.
A
That's it from me on this topic. One thing I did want to do is give a conclusion on the discussion from the last few weeks about credential mounting. I think we reached a good conclusion, and I just want to make sure everyone is aware of it, but I'll do that on Thursday. Other than that, I think that's it from my side.
A
Okay, great, let's chat again on Thursday. And thanks, Shane, for the suggestion; I think we started a good discussion, and we'll take it forward as we go.