From YouTube: KEP Review: Object Storage API (11JUN2020)
A: All right, we are recording. Welcome, everybody. This is the June 11, 2020 Object Storage API KEP review. I want to talk through something that came out of the Monday stand-up meeting and really take the temperature of the group on where we should head with the controller architecture, and make sure that we're on the same page, or at least, in summary, get us on the same page before we move forward.
A: At the Monday meeting there was a proposition that, rather than following the bisected controller design, with a central controller and then disparate sidecar controllers, we instead go with a more library-like design that would be imported by provisioners and then used to operate on the APIs. Sid was the proponent of that, so I'll let him take it from here. Go ahead, Sid.

B: Hey, thanks, John.
B: So far, we've discussed the architecture for the COSI controller as having two disparate pieces. One would be the driver, which would be in charge of creating the bucket and reconciling bucket state with the real world, talking to AWS S3 or Google Cloud. The other component would run alongside each of the application pods that require the bucket. The current design proposal has these two communicating with each other over gRPC.
B: As I started implementing this controller, one of the things I noticed was that, in the case of object storage, we can actually get away with just one controller rather than two pieces that interact with each other. Let me start with what happens in the case of CSI.
B: In CSI, we end up needing to mount and attach volumes to a specific host, and to a specific pod on that host. In the case of object storage, all of the data access, all of the buckets, is over the network, and nothing is local. So there is no need for a local agent to do any setup-related things for the pod for it to consume object storage buckets.
B: Say a pod needs a bucket: they add an annotation, something like cosi.io/bucket-name: somebucket, and what the admission controller would do is go and check whether a corresponding Bucket object by that bucket name exists. I'm just throwing around an idea here. Once the bucket does exist, the controller would go and update the pod object to have the environment variables that we wanted it to have, and then the pod would go forward and get scheduled.
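A minimal sketch of the admission check Sid is describing. The annotation key, the in-memory bucket lookup, and the function name are all hypothetical; a real admission webhook would query the API server for Bucket objects rather than take a map:

```go
package main

import "fmt"

// Hypothetical annotation key; the exact key was only sketched verbally
// in the meeting (something like "cosi.io/bucket-name").
const bucketAnnotation = "cosi.io/bucket-name"

// admitPod models the proposed admission check: if the pod asks for a
// bucket via annotation, admit it only when a Bucket object by that name
// already exists. A real webhook would list Bucket objects from the API
// server instead of taking a map.
func admitPod(annotations map[string]string, buckets map[string]bool) error {
	name, ok := annotations[bucketAnnotation]
	if !ok {
		// Pod does not request a bucket; nothing to check.
		return nil
	}
	if !buckets[name] {
		return fmt.Errorf("bucket %q does not exist; rejecting pod", name)
	}
	return nil
}

func main() {
	buckets := map[string]bool{"somebucket": true}
	fmt.Println(admitPod(map[string]string{bucketAnnotation: "somebucket"}, buckets))
	fmt.Println(admitPod(map[string]string{bucketAnnotation: "missing"}, buckets))
}
```

As the discussion goes on to note, rejecting pods this way cuts against Kubernetes' create-everything-at-once model, which is part of why the admission-controller piece is reconsidered later in the meeting.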
C: Okay, that's odd, but let's go with it, because it could result in issues, right? If you create a pod programmatically and it gets rejected because another object hasn't been created, that's just not the way Kubernetes works; you're supposed to be able to create all the objects at the same time. But let's say this is for a prototype. Okay, so you're saying you're going to reject the bucket, or rather reject the pod, if the bucket's not ready.
F: So, guys, I have to say that, while this is certainly a worthwhile conversation, I don't think it's at all related to the heart of the architectural issue. What we're talking about is how to surface things to pods, but the issue was central controller plus sidecar plus driver, versus bundling all three of those into the same thing, and the architecture never assumed that we had something running on every node.
B: That's exactly how I was thinking of it: the second part, where there would be a single provisioner that would be provided as a bunch of stubs with an underlying library. Anyone out there who wants to provide an object storage controller would just have to implement the methods that are defined in the library and compile it to form a controller that would go and call the appropriate methods as bucket requests come in and different operations are performed on them.
F: There are also other considerations, too, like CRD management. When you put that in a central controller, your lifecycle and versioning of CRDs is all handled centrally, whereas if you push it onto every single provisioner, that means every single one of them has to install the CRDs completely independently, via I don't know what mechanism.
C: Yeah, the nice thing about the current model that John has here is that, when you have a separation of concerns, it allows different components to handle different things independently. The central controller that he has could be deployed by a Kubernetes distribution, for example, and ensure that logging happens such that, no matter what COSI driver gets installed, it's going to be logged appropriately.
C: Versus, if you try to condense everything down into a single binary that a third party controls, it makes it much more difficult to do that. And, like Andrew said, you may have one driver thinking it's going to use a specific version of a CRD and another driver that wants to use a different version of the CRD, and it just makes the system much, much more difficult to work together.
F: The problem is, if you make the CRDs a responsibility of the provisioner, provisioners are written to a particular version, et cetera, and so you've basically handed off a bunch of admin responsibilities to the individual provisioners, and then they can clash about their view of the world.
B: I see, okay. Understood, but what I'm saying is: when you create a new API object, even if it's a CRD, if you define this object at cluster creation time, it's not something that individual provisioners decide the definitions for; it's something that's defined with the cluster, maybe feature-gated.
C: Even if that's the case, why can't the owner just be Kubernetes, or some component in the distribution? It can be, that's true; the details of how that's actually going to work have not been figured out. Honestly, we've gone back and forth with SIG Architecture to try and figure out how CRDs that core components depend on are going to be installed. This was a challenge we ran into with CSI, and at the time there was no good answer.
C: So the CSI components ended up being core API objects. I don't know if they have a better answer now or not, but it's probably going to be a challenge we run into regardless of which of these two approaches we go with. That said, I think we need a stronger justification to go with a library model, and I'm not hearing that yet.
B: Essentially, they'd serve the same purpose, and maybe I got it wrong earlier, but if you're telling me the interface is going to be a single sidecar, that's really nothing different from a library: a single sidecar that would be deployed alongside the vendor-specific controller. I don't see the technical merit in going from that to just a library.
G: Why do you have to have a sidecar at all? I mean, this is a technical detail of the provisioner, right? It doesn't have to be this way. If the provisioner wants to watch the cluster-scoped Bucket entities and BucketAccess objects and reconcile them to its object storage without, you know, a sidecar, it can just have a single driver watching, like a controller.
E: Actually, let me add to that. I suggested the sidecar idea because it kind of separates the Kubernetes interfaces from the driver interface. One of the really nice things about the sidecars from CSI is that they allow the driver developer to concentrate only on the gRPC interface, and that alone, at a certain version. Kubernetes versions change quite rapidly, and the sidecar actually manages that bridge.
E: What I'm trying to say is that it is normally very difficult; you know, after working with CSI for so long, newcomers writing drivers for these interfaces find it actually quite difficult to ramp up. With the community providing the sidecar at a certain version, already tested, all they have to do is ingest that one CSI driver, that one implementation in the backend for the sidecar.
C: Absolutely right. I think you're both in violent agreement here, even though it sounds like you're not. We have to separate the spec from the components that we offer the Kubernetes community. The spec itself does not care how the driver is packaged; it does not care whether there is a sidecar or not. You're absolutely right about that, and the spec would and should never dictate that. That said, what does the Kubernetes community offer to developers to make their lives easier?
C: Every time Kubernetes is updated, you would need to go and update your driver, and that library would need to come in all the different language flavors that vendor authors might want, instead of us writing one Golang sidecar, compiling it, and giving people a container where they don't care what language it's written in.
G: Sid, you know, when we proposed breaking out the cloud providers, what we provided was a Go library that individual cloud providers could implement themselves. Vendors could do it themselves, and they built their own controllers, and all we did was maintain the core library. But that's strictly more...
E: Again, I'm discouraging that, from that point of view. It's not about embedding your server side into some code that you've already had in some control plane; we're just trying to make it possible, as a community, to use any language. So, for example, in CSI we have Ember, which is written in Python, as a model, and that communicates with Cinder, which then communicates with something else over REST. So we don't know what the backend is.
B: Yeah, that's much clearer, actually. Now that I'm a little bit clearer on what this distinction is, or where we're trying to break things apart: technically speaking, it doesn't matter if it's over gRPC or if it's a library, and, as everyone else is saying, I think gRPC makes sense because it doesn't add too many disadvantages compared to the library approach, and it does add flexibility.
C: I think the way that we should design this is to make it as accessible as possible and let the vendors choose. The way it's designed today, on the Kubernetes side we provide a sidecar container, but if there are vendors that don't want to use that for some reason, they completely have the option to go a different route. So we can do the same here: let's give the maximum number of options; let's not be, you know, opinionated here and say you have to use Go and you have to import a library.
E: The really nice thing about gRPC is that there is a contract. There is a contract, and you know exactly what you're supposed to get out, whatever version you're using. The thing about REST, which is great, is that you don't have a contract, so you have to bring the contract some other way. Thrift, gRPC, any of those models have a contract, and that's really beneficial, actually.
E: If you haven't done it yet, try writing a CSI driver in whatever language you want: just insert it and have it do nothing but log. I think that experience will show you how to write a driver for different versions of Kubernetes, and that experience may help here. Yeah.
A: Yeah, so the intent here is that the BucketAccess and Bucket API objects are the communication pathway between the COSI controller and the provisioner sidecar; that will be the method by which they talk. They won't be communicating over the network at all. So, to the question of whether they talk to each other directly: correct, they don't. It's not depicted here, but the flow would look something like this: I, as a user, create my BucketRequest.
A: I reference a BucketClass. The COSI controller detects the BucketRequest and creates the Bucket, a cluster-scoped API object. Asynchronously, the sidecar on the provisioner detects the new Bucket API object, which will contain all the information from the BucketClass and the relevant information from the BucketRequest, and communicates with the driver over gRPC, passing that information along to dictate how the provisioning is done. The driver, of course, reaches out to the cloud provider or the object store and creates the bucket, and the provisioner sidecar writes the relevant connection data back to the cluster-scoped Bucket object.
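The asynchronous flow John walks through could be sketched as follows. All type and field names here are placeholders, since the actual gRPC definitions were still in flux at this point:

```go
package main

import "fmt"

// Placeholder objects standing in for the cluster-scoped COSI API types.
type BucketRequest struct{ Name, ClassName string }
type Bucket struct {
	Name       string
	Parameters map[string]string
	Connection string // connection data written back by the sidecar
	Ready      bool
}

// provision walks the steps described in the meeting:
//  1. the COSI controller sees a BucketRequest and creates a Bucket object,
//  2. the provisioner sidecar sees the Bucket and calls the driver over gRPC,
//  3. the driver creates the bucket in the object store,
//  4. the sidecar writes connection data back to the Bucket,
//  5. the controller marks the request ready.
func provision(req BucketRequest, classParams map[string]string, driverCreate func(string, map[string]string) string) Bucket {
	b := Bucket{Name: req.Name, Parameters: classParams} // step 1
	b.Connection = driverCreate(b.Name, b.Parameters)    // steps 2-4
	b.Ready = true                                       // step 5
	return b
}

func main() {
	fakeDriver := func(name string, _ map[string]string) string {
		return "s3://endpoint/" + name // a real driver would call the object store
	}
	b := provision(BucketRequest{Name: "photos", ClassName: "standard"}, map[string]string{"region": "us-east-1"}, fakeDriver)
	fmt.Println(b.Connection, b.Ready)
}
```

In the real design each step is a separate, asynchronous reconciliation rather than a single function call; the linear function above only fixes the ordering of the steps.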
A: Now, this is broken up between storage management on the right-hand side and user and access management on the left-hand side; I'm just talking through the storage side right now. The COSI controller would then detect the update in the Bucket and update the BucketRequest to indicate that it is now ready to be used.
A: In this diagram, until we figure out how to integrate this with CSI, the controller would write the endpoint to a Kubernetes primitive; here it's the access Secret, which in this case would also contain connection and credential information. If we ignore that for a moment, that's how the data gets back to the pod. So you have this sort of asynchronous messaging through the API objects, the BucketAccess and the Bucket.
C: The binary that the storage vendor writes does not need to be Kubernetes-aware in the model that is proposed here with sidecars; that's kind of the whole purpose. I think a lot of us take it for granted that writing a Kubernetes controller is easy. For people who are not well versed in the Kubernetes ecosystem, it's a pretty daunting challenge, and we can minimize that by saying: hey, don't worry about any of this Kubernetes stuff, just write a gRPC interface, which is well documented.
A: Before we bring the gavel down and decide on this model, I just want to poll the room and see if anyone who hasn't spoken up yet would like to give some input before we move forward and completely commit to this. Is there anyone out there who would like to say something about it?
A: Something we've been, or maybe I've been, kicking the can on a little bit, and that we need to explore, or perhaps I need to explore, is how we can integrate this with CSI. Sid has mentioned it a few times, and I think it's a great idea; I just don't know how it works, so it's something I'm going to have to look at.
C: Yeah, the proposal there was basically: for the prototype we're talking about, we want to be able to hold off the pod from starting, and one mechanism to do that is to use a volume. CSI allows you to write an arbitrary volume extension, so you can imagine writing a CSI driver that acts as an adapter to a COSI driver. This adapter would basically allow you to pass a set of parameters that are specific to COSI in your pod definition.
C: You say: I want this special COSI-adapter CSI driver, and here are the parameters to pass to it. When the pod starts, the existing Kubernetes machinery already knows: oh, it's a CSI driver; I don't know anything about COSI, but since it's a CSI driver, I'm going to hold off on starting this pod. Then the adapter can figure out what it needs to do in order to interact with the different COSI components.
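A sketch of what the CSI-adapter idea might look like from the user's side. The driver name and attribute keys are invented, and the structs below are simplified stand-ins for the real k8s.io/api types:

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes pod/volume API types.
type CSIVolumeSource struct {
	Driver           string
	VolumeAttributes map[string]string
}
type Volume struct {
	Name string
	CSI  *CSIVolumeSource
}
type Pod struct {
	Name    string
	Volumes []Volume
}

// cosiAdapterVolume builds an inline CSI volume pointing at a hypothetical
// COSI-adapter driver. Kubelet's existing CSI machinery would hold the pod
// until this volume is staged, which is exactly the "hold off the pod"
// behavior the prototype needs, without any COSI-specific pod logic.
func cosiAdapterVolume(bucketName string) Volume {
	return Volume{
		Name: "bucket",
		CSI: &CSIVolumeSource{
			Driver:           "cosi-adapter.storage.k8s.io", // hypothetical driver name
			VolumeAttributes: map[string]string{"bucketName": bucketName},
		},
	}
}

func main() {
	pod := Pod{Name: "app", Volumes: []Volume{cosiAdapterVolume("photos")}}
	fmt.Println(pod.Volumes[0].CSI.Driver, pod.Volumes[0].CSI.VolumeAttributes["bucketName"])
}
```

This would replace the admission-controller check: instead of rejecting the pod, the CSI volume machinery simply delays it until the bucket is provisioned.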
B: So here's the amount of code that I've got right now: I've defined all the types, and I've got a controller; I've got an admission controller and a bucket-types controller. The admission controller would look for pod annotations, like I said earlier. I think I can take away the admission-controller piece and just document that a pod needs to be started with this CSI driver, or this CSI spec, asking for a bucket. That would be the user's responsibility, the way we've talked about it so far.
B: To get a bucket, the user would say, okay... so if that's the case, yeah: if we have a meeting on Monday, yes, I can probably show a quick demo of the CSI volume plus the bucket controller. It wouldn't be a full-fledged one; it would probably just be something as simple as a pod requesting a bucket.
C: I think this is important: this is not a project that should be put on one person's shoulders; that would be insane. What I recommend is this: we are approaching the end of the quarter right now, and we're probably going to be doing a SIG Storage planning session around the end of the month. If you guys can break down the different pieces that need to be worked on...
C: ...we can try to recruit people in that SIG Storage planning meeting and see if we can get folks to work on it. What that means is that, before then, we should have standalone tasks that folks can be assigned to. That will make it easier to recruit people: if you can say, hey, here's a standalone task, if you want to help, jump in, then they can start participating in the stand-up as well.
B: I've already got a whole bunch of things in, and I'm working on this full-time. So, ideally, obviously I can do the whole thing, but if we can go ahead and define an early version, or rather just the boilerplate for the various interacting components, then we can have individuals contributing to different parts of each of them separately.
A: Yeah, definitely. So, Sid, do you want to shoot for Thursday instead of Monday to do the demo, since that's the broader group? The Monday meetings are more of a stand-up: here's what I've got thus far. And we can use Monday to hammer out the separate components that we could ask people to work on. Okay.