From YouTube: KEP Review: Object Bucket API (25 June 2020)
A: There's also been a restructuring. I've pared out some of the paragraphs I thought may have been redundant, to jump straight into the APIs and push the architecture and diagrams further down. So you'll see now — I'm not going to go through these, but just know that they are up there, ready for review and for questions.
A: Further down, we have our bucket class, which remains unchanged, and then the access APIs that we've already discussed: the bucket access request, the bucket access, and then the bucket access class. I also added a diagram here which hopefully details the interactions a little bit better. I do want to talk to this for a second — the legend down here in the corner.
A: So the only thing I'm indicating at the moment — this is still trying to tread between the high-level view and the somewhat deeper relationships and interactions. Blue indicates references: if there's an arrow pointing from one object to another, the first object contains a reference to the second, so this hopefully clearly depicts the interrelationships, by actual field references, between the APIs. The black arrows indicate that a workload or pod is accessing an API.
C: The Go equivalents for this specific API spec — I've written them and generated the clients for them. They're in the old repository that we were coordinating the container object storage interface in, but I'll be moving them to the new repositories. I don't know if everyone knows, but the repository — or two repositories — for container object storage have been approved by the Kubernetes SIGs, I believe. I don't know where that progress stands, but yeah, we can start pushing to the official repos now.
A: Okay, great. Yeah, I want to point out too that previous diagrams did not include our COSI CSI adapter, and so that's included here. It's placed in the COSI system namespace just for simplicity — that's subject to change. And the green line just indicates that it will be communicating data read from the bucket access and bucket objects through the kubelet and injecting it into the pod via a CSI ephemeral volume.
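[For context, a CSI driver advertises support for inline ephemeral volumes through its CSIDriver registration object. A minimal sketch of how such an adapter might register — the driver name "cosi.sigs.k8s.io" is an assumption for illustration, not a name settled in the KEP:]

```yaml
# Sketch: registration object for a hypothetical COSI CSI adapter.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: cosi.sigs.k8s.io        # assumed driver name
spec:
  # Ephemeral mode lets pods reference the driver inline, with no PV/PVC.
  volumeLifecycleModes:
    - Ephemeral
  podInfoOnMount: true          # pass pod name/namespace to NodePublishVolume
```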
D: He was effectively uncomfortable with the secret because he felt like it was managing something that is not normally managed in an automated way — that's normally a user-provisioned thing — and we'd be managing its lifecycle and everything else. He felt like communicating the information through a dedicated approach — starting off with an ephemeral volume and then eventually ending up with some kind of first-class thing — would have all of the security benefits of a secret, but with additional manageability. That's sort of how I interpreted it. Yeah.
D: Again, my question — the one benefit of a secret is that the secret data is effectively stored in etcd, and so it is accessible to kubelets sort of inherently. I'm wondering how the data for an ephemeral volume gets there. I guess my question is: do we have to have an agent running on the node to write it? Yes?
F: Yeah — so all CSI drivers have a component that runs on every node, and that component is responsible for setting up the mount. In this case, what will happen is they'll carve out some scratch space from the host machine in a temp directory, and they'll then write in the contents of whatever needs to be written; they'll pull in the contents based on the objects that are referred to.
A: The daemon set will — ah, all right. So the way that I've drawn this diagram was under the assumption that the COSI CSI driver here — one, it's listed as a pod; that should be a daemon set, I can correct that. My understanding was that it would be the daemon set that was distributed across nodes. It would have a watch on bucket access and bucket objects, and—
A: I forget which field it is, but once they indicate that the provisioning has been completed, the CSI adapter will then use the client — the CRD client — to read those objects and then... oh no, I'm sorry, I've got this backwards; I'm still remembering this. Yes — the pod itself has to have a reference to the bucket access request, written as a CSI volume.
A: It would have to, at a minimum, be able to dereference the bucket access object to get access, and through that the bucket access will give it credentialing information, if it exists; otherwise it would go through the service account in that namespace. But the connection information, the way I have it sketched thus far, would be written into the bucket. So you're right — there's some amount of access that the CSI daemon set would have to have in application namespaces.
D: So what you're arguing is that the final approach, where we're going to go with the kubelet doing this, has some benefits — I think we all agree with that. I guess my question is: do you think there is a near-term way that avoids RBAC issues and doesn't involve having to modify the kubelet to understand this?
F: ...you know, a secret object to point your pod to. Instead, by using this approach of having a special CSI ephemeral driver, the user just says, "I am using, you know, a bucket resource via the CSI driver," and then any custom parameters they want, they can specify as part of the pod, inline, and the rest of it is handled automatically. So it leads to a better user experience. Does that make sense? Yeah.
F: The user has to manually create a PVC object, right? What is that PVC object going to look like? What are they going to put in it? How is that going to get interpreted correctly, and how—
A: Okay — so, to kind of illustrate what's being discussed: instead of specifying PVs and PVCs, you still define a volumes stanza; under it you name your volume, and as the volume type, rather than PVC, you specify csi. You name your driver, and then it has a node publish secret reference — I'm not certain how that's used quite yet — and then it has volume attributes, which is a string-to-string map.
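[A minimal sketch of the stanza being described here. The pod-level csi volume fields (driver, volumeAttributes, nodePublishSecretRef) are the real inline CSI API; the driver name and the attribute key are placeholders assumed for illustration, not the KEP's settled API:]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: object-consumer
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
      volumeMounts:
        - name: bucket-creds
          mountPath: /var/run/bucket        # credentials appear as files here
  volumes:
    - name: bucket-creds
      csi:
        driver: cosi.sigs.k8s.io            # assumed adapter name
        volumeAttributes:                   # free-form string-to-string map
          bucketAccessRequestName: my-bar   # hypothetical key
```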
A: So that's something that we would probably be asking users to have a little script inject into their environment, or inject into the workload — that's his argument — somehow. But that would be the flow: the kubelet would see this defined here and make a call out to our CSI adapter; the CSI adapter would go out and perform the operation specified here; this gets passed on to the CSI driver; get the credentials, return them. I think at that point — this is where my understanding gets fuzzy.
F: So that's all mostly correct. I think the only difference I would add is that you don't need the node publish secret or FS type there — you just need the driver and volume attributes — and, like you said, volume attributes can be custom; you can add whatever you want underneath there. And so once the pod is starting, the kubelet is going to say, "Oh, let me see if there's a local CSI driver called the COSI adapter," and it'll make sure that that's registered.
F: Then it will make a node publish request to that driver and pass these parameters to it, and that driver is responsible for spitting out a directory that the kubelet can use to mount. That's all the kubelet cares about — it doesn't care about the details of any of this; it just says, "Give me a directory that I'm going to mount." And so the driver is now responsible for creating— does that make sense?
F: There are two directories: there's a source directory and there's the target directory. The target directory is generated by the kubelet, which ensures, like we said, that there are no collisions there. So the kubelet gives you that target directory that you should mount something at; the source directory is up to your driver to figure out. Does that make sense, Andrew?
D: One spot here — it's a little bit tangential to what we're talking about, but it came up with the environment variable reference. One of the things that we ran into when we were thinking about this, and I think it's worthwhile considering, is: what if an application needs access to more than one bucket? One of the problems there is that mapping environment variables, or something like that, isn't straightforward when that's the case. So yeah — having a representation in the file system can solve that. But yeah.
F: In that case you would have potentially two volumes here — one you could call creds-one and the other creds-two — and they could point to two different volume attributes, different bucket access requests, and you would just need to make sure that the COSI CSI adapter was able to, you know, handle them uniquely and generate unique files for them.
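[A sketch of what that two-bucket case might look like in the pod spec, reusing the hypothetical driver and attribute names from the earlier example:]

```yaml
# Pod spec fragment: two inline CSI volumes pointing at two different
# (hypothetical) bucket access requests; the adapter must keep their
# generated credential files separate.
volumes:
  - name: creds-one
    csi:
      driver: cosi.sigs.k8s.io
      volumeAttributes:
        bucketAccessRequestName: bar-one
  - name: creds-two
    csi:
      driver: cosi.sigs.k8s.io
      volumeAttributes:
        bucketAccessRequestName: bar-two
```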
A: Okay — I would like to move along now; we're about halfway through the meeting and there are a few other things I'd like to cover. Again, the diagram is in the KEP — please feel free to take a look and leave notes. The next thing I wanted to take a look at was reference— sorry, I think I mixed my slides up.
A: That's okay — back references, in terms of managing the attached bucket requests to buckets, so that a bucket object has a list of the many bucket requests, which the central controller would manage as a way to enable delete operations. There are two cases for this that differ slightly, and there are a lot of questions around them, but if we can start talking about them now, then I think we'll have something to think about for next time.
A: The second question, or implication, is: how does the originator define which namespaces can connect back to his bucket? So, the first one again: if I delete my greenfield bucket and it's removed from the cluster, should consideration be given to other namespaces that may be accessing it — ignoring for a second whether other namespaces may or may not actually be accessing it?
A: My initial reaction is to say: well, if I'm the owner of a bucket on a platform and I delete my bucket, that's my right — I own the data. If other people are accessing it, you know, tough cookies. I don't know if that's something we want to represent in Kubernetes or not, and so that's my question: is that a behavior that we want to enable at the Kubernetes layer?
D: You've got three different considerations: you've got greenfield, you've got brownfield, and you have the desire to leverage a greenfield-generated bucket into a brownfield scenario, right? So I'm intentionally separating those three. In pure greenfield, I hope people would think it a normal thing to expect to be able to create a bucket and then delete a bucket, and expect that you could also delete the underlying bucket, right?
D: So the bucket request comes and goes, and the underlying bucket goes with it. And you would expect, just for convention's sake, that that could be controlled by a parameter on the bucket, defaulted from a bucket class, right — just like storage class does for PVs. I think it would also be reasonable to expect that a brownfield situation should never — in the case of a brownfield request — delete the underlying bucket. So greenfield by default could, and has the option not to; brownfield never does so.
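[By analogy with a StorageClass's reclaimPolicy, that class-level default might look something like the following — the group/version, provisioner, and field names are assumptions for illustration, not the KEP's settled API:]

```yaml
# Hypothetical sketch: a class-level default for what happens to the
# underlying bucket when a greenfield bucket request is deleted.
apiVersion: objectstorage.k8s.io/v1alpha1   # assumed group/version
kind: BucketClass
metadata:
  name: fast-buckets
provisioner: example.com/object-store       # assumed provisioner name
deletionPolicy: Delete                      # greenfield default; "Retain" keeps the data
```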
D: Then there is this interesting problem of what happens if what you have in the diagram here is done, which is: I do a greenfield provision and then I point a bunch of brownfields at it. My question is: is that a realistic scenario?
A: Say my workload's job is only to populate this bucket for a machine-learning AI test: it creates the bucket, fills it with petabytes of images or data or whatever, and then it goes away, and the namespace can go away. And I'm running other jobs in this cluster, in different namespaces, all of which would like to read from that single, you know, data point. So should we allow that? It seems like we should — the question is how we enable it.
A: So in terms of giving users the power to do this, obviously it means we have to have fields for it in the bucket request. So I would propose, at least as a starting point, adding a permitted namespaces field to the bucket request, which is a slice that could be represented by some predefined string values — so an empty string meaning "only me," it's private.
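[A sketch of that proposal as it was floated — the kind, group/version, field name, and sentinel semantics are all assumptions for illustration:]

```yaml
# Hypothetical: the requesting user declares which namespaces may attach
# brownfield requests to the resulting bucket. An empty list (or "")
# would mean private to the originator.
apiVersion: objectstorage.k8s.io/v1alpha1   # assumed group/version
kind: BucketRequest
metadata:
  name: ml-dataset
  namespace: team-a
spec:
  bucketClassName: fast-buckets
  permittedNamespaces:                      # hypothetical field
    - team-b
    - team-c
```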
D: I'm a little bit uncomfortable with this, and the reason is because I feel like you're now getting into a realm of interacting with other resources for which you, the administrator — or you, the person who's authoring the bucket request — may not have the administrative level. And so I don't have any problem at all with the non-namespaced resource being able to manage this, but it feels a little bit weird for the request side to manage it. I guess what I'm saying is I would expect that this—
D
This
is
a
green
field.
You
just
get
closed
by
default
and
if
it's
brownfield
you're
getting
whatever
it
already
on
the
bucket
and
then,
if
somebody
wants
to
open
it
to
other
things,
then
they
would
actually
go
edit.
The
bucket
now
I
realize
that
that
probably
closes
off
one
possibility
of
hey
I
happen
to
have
apps
in
two
different
namespaces
that
I
both
have
access
to.
Why
can't
I
allow
this
bucket
to
be
shared
between
them?
A: Right — so on that note, that's where I've been stumped in trying to get this implemented. My mind has mostly been toward this being an administrator-level action — and should we define an administrator-level object? If we were to allow users to do this, it would have to be on the user-side API, of course.
D: That would be one thing, but this gets to be a very specific kind of permissioning at the request level, and I just don't think there are really many other examples of this — I forget the word I'm looking for — but, you know, most of this kind of permissioning stuff is generally done by non-namespaced resources, either IAM or other kinds of policy statements. You just don't see those kinds of policy statements in the workload objects. — Yeah, I think you're right.
F: Yeah — I think when I originally saw this, John, I was like, oh yeah, that makes perfect sense. But I can buy Andrew's argument, especially around portability: if you start sticking allowed namespaces in there, what if you move this object into another namespace? It gets kind of funky. I want to go back to an idea that we had before, which was a bucket class. I assume we still have a bucket class, right? Yes.
D: Asking for parameters in a class — and let me just point out that exactly what John had started over there I think could be useful here. As an option, say public or private, right? That would allow the notion of a bucket class that's actually reusable. A bucket-specific permitted namespaces could also be something that we have — so it could be all, none, or listed — and that then gives you the flexibility to have a couple of well-known—
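[A sketch of that class-level variant — again, the kind, fields, and the all/none/listed sentinels are assumptions for illustration:]

```yaml
# Hypothetical: moving the sharing policy up to the cluster-scoped,
# admin-authored bucket class rather than the user's request.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: shared-buckets
provisioner: example.com/object-store
permittedNamespaces: listed     # one of: all | none | listed
namespaces:                     # consulted only when "listed"
  - team-a
  - team-b
```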
D: Saying, okay, sharing is brownfield, effectively, and so whatever control mechanisms we want to put on brownfield would apply to the case of greenfield generating a bucket that is then available for brownfield — that's all. And so to me it makes perfect sense that those constraints are privileged constraints, not specifiable by the user at the request level.
A: And so I had this slide here, similarly, to kind of represent how this operates in brownfield. Given our discussion that permitted namespaces on the bucket, plus the bucket class, would be defined by an administrator, this has already kind of worked itself out — so it's effectively the same model as before, just without defaulting to some originator namespace.
A: Yes — so let me clarify this. This would be a controller-managed, pseudo-synchronous list. I was going off what I remember from last week's discussion, and that was that there may be a need for buckets, both greenfield and brownfield, to manage a list — you know, something similar to a PV's claim ref — so that we know from the bucket API how many requests are attached to this bucket currently. Correct?
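[Sketched out, the back-reference list being described might sit on the cluster-scoped bucket object, analogous to a PV's claimRef — all kinds and field names here are assumptions, not settled API:]

```yaml
apiVersion: objectstorage.k8s.io/v1alpha1
kind: Bucket
metadata:
  name: ml-dataset-bucket
spec:
  bucketClassName: fast-buckets
status:
  # Hypothetical controller-managed back-references to every attached
  # request; deletion would be refused while this list is non-empty.
  bucketRequestRefs:
    - namespace: team-a
      name: ml-dataset
    - namespace: team-b
      name: reader-job
```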
G: Well, maybe — one thing I'm just a tiny bit uncomfortable with is that this list may grow, and we have some limitation on the size of a resource we can have in Kubernetes. So we might need to say beforehand that the maximum number of requests referring to the bucket is such-and-such, and so what—
G: Yeah, I thought maybe we would like to say, "this is the hard limit, you cannot go beyond it," instead of just relying on the size of the bucket object, which may vary. Sometimes you'd be able to connect many bucket requests, and sometimes fewer, depending on the size of the names and namespaces in use and other parameters. So — but I—
D: So if we manage that greenfield-to-brownfield transition model that we talked about well, then there becomes an interesting question of: do I really need to know who references me? Right? Because the only thing that I care about then is the edge — we're not going to clean up the requests; I'm just going to refuse to delete the bucket while I have outstanding requests, right? So I'm back to thinking maybe we could just do this with a reference count.
A: Just by the nature of the asynchronous requests — and I also worry about race conditions with, like, you know, a single count. If someone deletes the last bucket request accessing this, but another one comes up and the bucket vanishes in between — are there good use cases or user situations there?
F: We're going to have to keep thinking through this design — it's going to have a lot of race conditions; it's going to be a pain in the butt to code. Maybe let's set aside time to actually just walk through the design of exactly what this controller is going to look like, and whether you can have a back reference or not, and all of that. Yeah.
A: And unfortunately, we've hit the top of the hour here, and we're only about halfway through these slide decks. So feel free — if you want to take a look at them, drop comments; I can answer questions, you know, on the side. But I don't want to run over — I know people's time is valuable. So, is there anything anyone wanted to say regarding what we've gone over today that they'd like to get a chance to?