Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket Standup Meeting - 03 May 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: All right, good morning everyone, good afternoon for those of you on the east coast, and Nicholas, good evening. So, last week I brought up this license change that MinIO was making, and as part of that license change we were discussing whether it would still be okay to use the same repository, given the kind of libraries that it links to. Technically speaking, there are nothing but Apache v2 references in that code base.

A: However, if we were to consider running MinIO within the CI, I mean, given that we are redistributing the code, given that our code is open source...

A: Technically it's still okay, but that being said, it does get into kind of a gray area, in terms of us being one of the few projects running an AGPL code base inside. So we spoke with Saad and some people from CNCF; Xing was also involved in that conversation. It seemed like the best way to move forward is to find a different way to do the sample driver part, rather than trying to figure out how to retrofit the newly changed license, which is kind of a new scenario for pretty much all of us involved.

A: So in this case at least, CNCF took a look and gave us a clear response, and the clear response was that we cannot use the AGPL project inside of... how do I put this? We cannot.
A: It's a copyleft license, so it expects whoever uses it to also distribute their code as open source along with the license. So, given all this, what I want to bring up today is finding an alternate solution for our sample driver, and we've already made the request to archive the MinIO sample driver. If you're running it locally, that is totally fine; you won't be violating any licenses by running the sample driver. However, if you were to distribute it, especially commercially, you'd be violating the license. That's the one thing. So I want to kick-start the conversation today to talk about what the alternatives are and go from there. One thing I can say is we don't want to waste too much time on this; there's a lot of things to talk about, so let's try to converge on a solution as soon as possible.
A: Yeah, I do have alternatives, but I also wanted to open it up, because others should have a chance to bring up what ideas they might have before even listening to what I have. But if not, I can go ahead and talk about what I have. I spoke to Ben about this, this morning. I don't know if he's here right now.

A: He has a hackathon he's attending right now, internal to NetApp. So we wanted something that's really easy to set up, doesn't have any license questions coming up, and also serves the purposes that we have, which are: one, run something that is a sample that people can look at and use to develop their own COSI driver; and two, use it for internal CI purposes.
A: Now, the only support that we'll need for internal CI is support for the gRPC calls, not necessarily support for the data I/O calls. So, given these two, it seems like the best option that we have right now is actually to write a really simple sample driver with no backend, with no backing at all. All it does is act as a mock sample driver: it simulates creating a bucket, granting access, revoking access, and deleting it, and that's about it.
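The no-backend idea described here can be sketched in a few lines of Go. This is a minimal illustration, not the real COSI interface: the type and method names below are made up, and an actual driver would expose these operations through the COSI gRPC services rather than plain methods.

```go
package main

import (
	"errors"
	"fmt"
)

// mockDriver simulates bucket provisioning entirely in memory:
// no object storage backend, no data path, just the bookkeeping
// needed to answer provisioning-style calls.
type mockDriver struct {
	buckets map[string]map[string]bool // bucket name -> set of granted accounts
}

func newMockDriver() *mockDriver {
	return &mockDriver{buckets: map[string]map[string]bool{}}
}

// CreateBucket simulates provisioning a new bucket.
func (d *mockDriver) CreateBucket(name string) error {
	if _, ok := d.buckets[name]; ok {
		return errors.New("bucket already exists")
	}
	d.buckets[name] = map[string]bool{}
	return nil
}

// GrantAccess simulates handing an account credentials for a bucket.
func (d *mockDriver) GrantAccess(bucket, account string) error {
	accts, ok := d.buckets[bucket]
	if !ok {
		return errors.New("no such bucket")
	}
	accts[account] = true
	return nil
}

// RevokeAccess simulates revoking a previously granted access.
func (d *mockDriver) RevokeAccess(bucket, account string) error {
	accts, ok := d.buckets[bucket]
	if !ok {
		return errors.New("no such bucket")
	}
	delete(accts, account)
	return nil
}

// DeleteBucket simulates deprovisioning the bucket.
func (d *mockDriver) DeleteBucket(name string) error {
	if _, ok := d.buckets[name]; !ok {
		return errors.New("no such bucket")
	}
	delete(d.buckets, name)
	return nil
}

func main() {
	d := newMockDriver()
	fmt.Println(d.CreateBucket("demo"))       // <nil>
	fmt.Println(d.GrantAccess("demo", "a1"))  // <nil>
	fmt.Println(d.RevokeAccess("demo", "a1")) // <nil>
	fmt.Println(d.DeleteBucket("demo"))       // <nil>
}
```

Because all state lives in one map, such a driver can also be made to return errors on demand, which is what makes it useful for exercising the sidecar in CI.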
[crosstalk]

A: Yeah, so there are two options; or there are proprietary options, and there's open source as well. Someone's talking; go ahead, Nicholas.

B: There is open source as well, from my employer. We have an Apache 2 licensed, S3-compatible API implementation, with storage and with credential checking and whatnot. That's about it.
B: It's called CloudServer. We do use it as part of our commercial projects and as part of other open source projects, but the thing by itself is a full implementation of the S3 API. It is Apache 2 licensed, and you can run it as simply as a single Docker container. It's written using Node.js; it is not implemented in Go or something like that. But from a user point of view, that doesn't really change anything.
A: So I think Jeff also had an option. Jeff, did you have something in mind?

A: Okay, interesting; that would be interesting. All right, so we have three options. It looks like... well, we have four. One, we have the Scality one; we have the NooBaa one; we have Apache Swift, sorry, OpenStack Swift; and we have the other one, which is to write a very simple mock driver.
A: So let's talk about which would be the better option here. Swift: I don't know enough about it, but from what I've heard the setup is complex, right? You can tell me if I'm wrong.
B: Okay, so, if you want a fully running Swift installation, then it's quite hard, but we don't necessarily need that. There is the OpenStack all-in-one project, which you can use to deploy OpenStack, or only Swift, or only various other services, fairly easily. And there used to be someone who built a Docker container which, when you spawn it, brings up a single-node Swift cluster, but I'm not sure that's still maintained.

B: And then, my knowledge is somewhat dated, but the S3 implementation Swift used to have was rather limited. I'm not even sure you can do things like provisioning new key pairs.
A: I see, okay, good to know. So you're saying that if you were to talk to the Swift API and try to provision new key pairs, that wouldn't work?
B: Possibly. The Swift API itself is a different API; it's not even S3. Swift's native API is a different object storage API, and we would need to add it to the list of object storage protocols COSI supports. But they do have a sort of front end which translates S3 API calls into the Swift storage's internal API calls.
B: To be honest, I agree. I think the two parts you outlined before both have their merits. A stub driver which does everything in memory and so on is great for actual testing: flakiness is zero, and we can also use it to actually return errors and then figure out whether the sidecar and others properly handle those.

B: But it's not necessarily super useful for someone who wants to look at an implementation of a provisioner in order to write another provisioner for another storage system. So I think we may want to have kind of both in the end, where one is used purely internally for testing purposes, and the other one can serve as a reference.

B: It's not even a reference implementation. If, for example, the NooBaa people build a COSI provisioner for NooBaa, and we at Scality build a COSI provisioner for Scality storage, which is backed by CloudServer, and we both open source those, then those two projects are kind of similar in value.

B: So isn't it better to have a couple of those, rather than have one that's blessed?
A: Totally agree; I'm completely on board with this. So for testing alone we can have an in-memory thing like you were just saying, and we should probably create a very public and easily accessible repository, probably in Kubernetes itself.

A: One that just has links to all the COSI drivers; that's the way to discover what the COSI drivers are. We can have it on our website, that might be even better, like on the front page right here; that would be great. And as for what we need internally, we can build that super simple, testing-friendly COSI driver. One advantage I see of the in-memory, testing-friendly one is that it can be retrofitted...

A: ...for any API, because everything is smoke and mirrors inside there. It could pretend to work with GCS or Azure or S3; it doesn't matter.
C: So basically it does not really move data; it just makes sure the gRPC interface is correct.
A: Yeah, and you can test just with that. Jeff, were you saying something?
F: ...what the CI needs from a sample, and a good example of a driver. So I like that idea; I'm just thinking more about it. And what Xing said about having it in memory for the CI is good.

F: Basically, in a comment in an email thread or a PR, I said that one, I think we need to have a CI for this, that we can't forego that; and two, I think we want a driver that makes it easy for someone to kick the tires: someone sees COSI and says, hey, let me just fire this up on my laptop, minikube or whatever, and see how it works. So if there's a solution that will meet those needs, I'm happy about it.
A: Yeah, it seems like people are on board. Any other thoughts, anyone? If we can go forward with this, that would be great.
G: No specific thoughts; I'm just listening to see what the status is. We have an initial implementation of creating buckets in Rook using COSI, but that work is still very much ongoing.
A: Okay, got it. Do let us know if you have any thoughts or ideas coming from the Rook world; I'm sure they'll be valuable.
A: Sounds good. All right, so, any name suggestions for this in-memory sample driver? Like, in the case of CSI we have the local PV provisioner; something similar.

[crosstalk]

A: Okay, okay, I'm fine with either mock or stub.
A: This is good. Oh, you could say COSI driver stub; that would also work. COSI driver stub, this will work too. Okay, so talking about driver versus provisioner: what's the difference between those two words?

A: Yeah, it's kind of both a driver and a provisioner. It doesn't just help you communicate with the backend system, which is the driver part; it also helps you provision new resources in the backend. So I would think it's both, and like you said, I think we can call it either. If you want it to stand out, you could call it a provisioner.
H: Mock really means something in the testing world, right? For instance, if I wanted to mock Amazon, I'd write a fake S3 somewhere, for example.
[crosstalk]

C: Essentially, because it's like a file, you can actually take a zip as a snapshot, so you can actually do something like that. But this one, if it really doesn't do anything, then that's really just a sample, right? Whereas the CSI hostpath driver is actually a driver.
H: Because somebody writing a driver could, for instance, fork this one and start from it.
A: Yeah, I like that; a COSI provisioner example we can get started with, if everyone's okay with it.

A: Yeah, the container-object-storage-interface org is on GitHub, so please make a pull request directly, and your sample driver will get added there.
C: Are you talking about the sample driver? The example driver we're talking about... we're going to have a repo for that, right?
A: Yeah, and on the website also; we want to show it. Right now the website is not linking to any repo yet; ideally we want to.
C: I'm just saying that those are production drivers, not really sample drivers. What we're talking about is like the hostpath driver: we always said that's a sample driver, not a production-level driver. But those drivers from vendors are production-ready drivers.
A: I mean, the way I see it is that we can go ahead and create the repo, because the community decided on that. If Saad wants to give input, the naming is what he'd want to give input on.
C: So we want to agree on the name, right, and then we can do it formally, because this normally goes through a process: you first send an email to the SIG Storage mailing list, and then you submit a pull request to the org repo. Just like last time, when we had that MinIO one.
A: Okay, yeah, so should we maybe just send the email and...
C: Send an email just to the SIG Storage mailing list.
A: Okay, that's good, yeah. All right, I'm glad we resolved this quickly, so let's move on to the next question. This next question was something that Ben brought up this morning. Ben is working on a hackathon internal to NetApp, and they're building a COSI driver as part of the hackathon. One of the things he brought up concerns the container object storage interface spec. Let's start here: we use, in our spec, in our protobuf file, right here...

A: These extension numbers are the same as what they are for CSI. The thing is, whenever we have a single binary with multiple services, these services tend to collide in terms of the protobuf namespace, that is, in the process where it holds the protobufs, on the numbers 1059 and onward. Okay, let me show you something; let's see here.

A: Where were the CSI variables and the COSI variables colliding? Where is that? Over here.

A: So this is what we based off of when we created the cosi.proto file, so these numbers are exactly the same, and that leads to a conflict which prevents both COSI and CSI from being deployed, or from running, as part of the same process. So Ben has actually started a new pull request against upstream protobuf. CSI has already reserved ten different numbers, from 1059 to 1069, or 1070.

A: I believe it's either 10 or 11 numbers, and each number corresponds to one service that you register in the global namespace of all protobufs. Right now COSI is colliding with CSI, so they're not able to run CSI and COSI in the same process.

A: So this is something that we want to fix, and one question that both Ben and I had was: CSI ended up reserving 10 different numbers. Let me pull that up; let me show you where that pull request is.

A: All right, so if you take a look at this, somewhere around 1059 or so...
A: These are the extension numbers for each of the services that CSI exposes. CSI exposes only three services: the identity service, the node service, and the controller service. However, it has actually gone in and reserved ten different extension numbers. And our question is: should we do 10, just like CSI, or should we do...
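The collision being discussed comes from protobuf's single global extension-number space for descriptor options. A rough sketch of the mechanism (the option names are illustrative, not the real csi.proto/cosi.proto contents; 1059 is the start of CSI's reserved block, while 1109 is only a hypothetical COSI range):

```protobuf
// Sketch only: custom options are declared by extending the descriptor
// options messages, and every extension number lives in one global
// namespace shared by every binary that links both generated files.
syntax = "proto3";

import "google/protobuf/descriptor.proto";

extend google.protobuf.FieldOptions {
  // CSI holds a reserved block starting at 1059 in the upstream
  // protobuf registry; the option name here is illustrative.
  bool csi_secret = 1059;
}

extend google.protobuf.FieldOptions {
  // COSI needs its own non-overlapping reserved block; 1109 is the
  // hypothetical start of a 1109-1119 range.
  bool cosi_secret = 1109;
}
```

If both projects declare options at 1059, the generated code registers the same number twice and the runtime refuses to load both in one process, which is exactly the failure Ben hit.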
A: ...more than what we have right now? Any thoughts from anyone on this? I'm very new to this, so I'm speaking from whatever I've learned.

A: So we could add another entry here, which gives us, say, 1109 to 1119.
B: Why do we need so many? Currently we allocate two, and we may need more. So what is your guideline: should we pre-allocate more already, or do we come back later and ask the people that maintain the protocol buffers repository?
A: Yeah, that's a good point. By the way, CSI started with three, I just remembered, but snapshotting is a separate one, so that's four, and then the volume populator, I think that's five. Any time they add a new API, a new service, that needs to be...

A: Okay, so anything that has a CSI interface would occupy one of those spaces.

A: Yeah, there were 10 from the beginning, yes. Right, so in our case I can see something like performance metrics being like that, where you want a separate service that measures how many buckets are coming in, what the average bandwidth and throughput are, and all that.

A: Yeah, at the object level there'll be things like locking and lifecycle management, but I'm not sure what else.
C: Like adding object put and get, those types of things: are we actually planning to? Because right now we just provision the bucket, right, but are we going to actually do anything with the object itself, like a put object or a get object? Are we planning to add those interfaces as well in the future?
A
So
ideally,
we
don't
want
to
be
in
the
data
path,
that
is
in
the
data
path,
because,
because
you
know
it's
not
it's
not
easy
to
abstract
the
data
by
the
apis
and
those
apis
are
not
under
they're.
Not
even
how
do
you
put
this?
I
don't
know
if
all
of
them
are
standardized.
S3
is
a
standard,
but
I
don't
know
with
the
others.
B
And
those
of
those
api,
if
we
were
to
do
such
thing,
which
I
don't
think
we
should
will
not
be
over
protocol
or
protobuf,
because
objects
can
be
so
almost
arbitrary
and
arbitrary
in
size
and
protocols,
you
don't
want
to
have
a
data,
payload
of
say,
64
megabytes
in
a
protobuf
message.
C
Yeah
yeah,
I'm
just
I'm
just
wondering
like
in
the
future.
Could
that
be
something
that
can
be
added?
I'm
not
sure
I
was
just
you
know,
trying
we're
right
now
trying
to
think
what
are
the
other
apis
that
we
might
add.
A: Yeah, something like lifecycle.

A: Something like a key encryption service; I could see that requiring a new service extension. Anything else custom... I mean, when you're reserving a namespace, the idea is that we're going to be fine for the next few years, or almost the next 10 years.

A: Like bucket-level metrics or object-level metrics. I don't see anything coming in for the data path itself, but like I said, two more are already plausible. So should we just go and reserve, say, five or ten? Any thoughts?
A: Okay, so let's do this: let's actually go and figure out why people need so many spaces. And if you see this as a standard rather than a project (what we're building is a standard rather than a project), in our case it might make more sense to have a larger range.
H: It's about the future. I would say 20, because it will probably be bigger than five systems; twenty should be enough. I'm joking, but...

H: Object storage will probably at some point be the standard for cloud native storage, so it's at least as important as CSI. Probably there will be extensions.
A
Yeah,
this
is
more
than
about
importance.
It's
not
about
importance.
This
is
about
like
how
how
how
much
do
you
expect
it
to
be
extensible
like
and
also
like,
I
would
say,
10.
H
Because,
since
object,
storage
would
be
very
big
in
cognitive
environments,
there
will
be
plenty
of
extensions
at
some
point.
A: Yeah, I can see that, including extensions that we don't even anticipate at this point. Fair enough. Okay, so Ben is going to make the pull request here, and I'll let him know the number is 10, so we'll reserve 10 spaces. Once we have that reserved, we can go ahead and update the protobuf inside of our proto repo, the spec repo. In the meantime, if anyone else wants to do the same thing, if anyone else wants to use CSI and COSI inside the same binary, there is a way to do it: you change the protobuf registration conflict level to warn instead of panic (the default is panic). I'm going to send a link to that; if you set that, it's going to compile and go forward and work. Nicholas said plus one thousand.
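A sketch of that workaround, assuming the Go protobuf runtime (google.golang.org/protobuf), which reads this environment variable when the process starts; `combined-driver` is a hypothetical binary name:

```shell
# Downgrade duplicate protobuf registration from a panic to a warning,
# so a binary linking both the CSI and COSI generated code can still
# start despite the colliding extension numbers.
# "combined-driver" is a placeholder for your own binary.
GOLANG_PROTOBUF_REGISTRATION_CONFLICT=warn ./combined-driver
```

This only papers over the collision at startup; the real fix is the non-overlapping number reservation discussed above.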
A: This time we got lucky, because none of these colliding values is used anywhere it needs to be used. Like, this one is supposed to mark a secret, and this one alpha, and so on; the variables all start with either alpha or cosi_, and a quick word search will tell you they're not used anywhere except in the definition of the variable itself.

A: I believe this is like this for the service that you choose to... yeah, we need to understand this better. I just know that 10 numbers is the reservation for CSI. We should also take time, as was being said: why not? All right, so as of right now, this is what was on my mind. The final thing, one more API-related question: I brought this up with Krish today.
A
The
question
was
about:
where
is
this
our
api
inside
the
s3
protocol?
We
have
a
field
called
bucket
name.
Originally,
the
idea
behind
bucket
name
was,
you
know
before
we
had
the
idea
of
a
bucket
class.
The
bucket
name
was
the
field
that
was
going
to
be
filled
in
after
the
bucket
was
created
here.
So
s3
protocol
has
fields,
endpoint
bucket
name
regions,
signature
version,
the
original
intention
of
having
bucket
name
here
was
you
know
this?
A
After
the
the
bucket
is
provisioned-
and
this
is
the
field-
that's
going
to
be
sent
down
to
the
workload
when
the
workload
needs
to
use
the
bucket.
So
it's
going
to
use
the
s3
protocol
structure,
which
has
these
four
fields
to
you
know
to
utilize.
The
bucket.
A
However,
this
has
become
kind
of
a
weird
feel
now,
because
this
s3
protocol
definition
itself
goes
into
a
bucket
class
and
a
bucket
class
cannot
have
the
bucket
name
itself.
I
don't
know
if
that
makes
sense.
If
I'm
saying
that
correctly
types,
we
have
a
bucket
class
there
you
go,
and
it
has.
The
protocol
structure
and
protocol
structure
has
one
of
the
three
protocols
in
this.
I
I
don't.
I
don't
see
a
bucket
class
having
a
bucket
name
inside
of
it
like
like
having
that
field.
Even
you
know,
be.
A: ...part of a protocol like the S3 protocol is confusing. Maybe we can address it through documentation, where we say: even though the protocol field has bucket name in it, you don't fill it in.
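The awkwardness being described can be seen by sketching the types. The YAML below is illustrative only: field names loosely follow the alpha API under discussion, not a finalized spec.

```yaml
# Illustrative sketch, not the final API: a BucketClass embeds the
# S3 protocol struct, but bucketName cannot be meaningful before any
# bucket has actually been provisioned.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: sample-class
protocol:
  s3:
    endpoint: "https://s3.example.com"  # sensible at class level
    region: "us-east-1"                 # sensible at class level
    signatureVersion: "s3v4"            # sensible at class level
    bucketName: ""                      # only known after provisioning
```

The same struct is sensible once it describes a concrete Bucket handed to a workload, which is the tension raised here.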
A: Well, at least endpoint is a little clearer. Endpoint could be something like talking to AWS GovCloud versus AWS China cloud versus AWS general cloud, or talking to a particular region.

A: Yeah, I mean, the general AWS provisioner would have to be able to provision for multiple regions.

A: I mean, there are definitely other parameters. The idea was you could have... I see what you're saying: you can have a driver for each different kind of provisioner, and we already have region in there, so the region argument doesn't make sense. But you're saying that if you need GovCloud and general cloud support, you would just have two different provisioners running, one for GovCloud, one for general cloud, right? That's what you're saying.
B: Indeed. Region I kind of get, signature version sure, but endpoint and bucket name... it actually didn't trigger for me while I was reading this, but it's kind of the same, and I kind of struggle with why those are there in the context of a bucket class. Of course, in the context of a bucket it makes perfect sense.
A: Yeah, the other good reason for having something like this is inside the workload: that is, the file, the bucket.yaml, that we plan to put inside the pod. For that one we wanted a standard...

A: ...structure. So the bucket name would go in via the protocol structure. So if it is S3, the bucket.yaml would contain the contents of this structure, and that's why endpoint was meaningful back then: this is the endpoint the workload would have to talk to, and this is the bucket name the workload would have to use in order to access the bucket.

A: I mean, for alpha we can keep it as it is, but for the next step I think it makes more sense to create a new structure that the workload will read, and that will have to be versioned along with the rest of the object-storage-related structures. So that would be a lot like v1, or v1alpha2, and so on.
A: Okay, so we're almost out of time. I want to continue this conversation again on Thursday, and we'll get the ball rolling on creating a new repository for the sample. We need to get started working on the sample driver, and as you might already know, we are moving as fast as we can. We need help from all of you. You've already helped us with reviewing the KEP, which we really appreciate.

A: One more thing we really need help with is developing this sample driver. I need one of you to sign up to take ownership of it. It should be a simple project, and I'll be there to help if needed, but I'd like one of you to take it up and see it to completion. So who here would be interested in something like this?

A: ...deadlines, but we want to move forward, so someone who has an interest in moving this forward would be ideal.

A: Okay, so let me put it this way: as the first step, we need someone who will bring it to a working condition. After that, maintainer is the second step.

A: All right, let's meet again on Thursday, and hopefully by that time we'll have the sample repo and probably some code in there.