From YouTube: KEP Review: Object Bucket API (09JUL2020)
A: All right, welcome everybody. This is the Object Bucket Storage KEP review, 9th of July 2020. So last week we did not meet, as you know, due to the U.S. holiday, and I just wanted to go over a quick recap to refresh people on what we talked about in the meeting prior to that. There were some pretty big discussion points, and I want to make sure we're all still on the same page. So, first, the COSI CSI driver.
A: This is the adapter between COSI and the kubelet, a means of injecting credential and connection information directly into the pod's file system. You can see that here: in the lower left section we have the COSI adapter. It communicates information from the BucketAccess and Bucket API instances, through the kubelet, into the pod. That enables us to, one, not have to copy credential or connection information into ConfigMaps and secrets, which is a bit of an arbitrary design.
A: It doesn't give us a lot of control over how we're distributing this information. It also allows us to block workloads from starting until provisioning is complete, which is a really nice feature. Otherwise we would end up placing that burden onto the user, by forcing them to, you know, find a secret name, find a ConfigMap, and follow that kind of messy workflow.
A: So, to point out one thing: there were some security concerns about the COSI driver having to reach into application namespaces and being given RBAC access to BucketAccessRequests and BucketRequests. The need for this is so that pods can reference the BucketAccessRequests that will ultimately dereference to their connection and credential instances, and that dereferencing has to begin here.
A: The others are... let me see. It's going to be just a file. Unfortunately I don't have the example spec I wrote two weeks ago up, but the user would define their pod and specify (I'm kind of assuming here, because we haven't defined a process for it) two CSI ephemeral volumes: one for their connection data, one for their credential information. The reason I separated these out is that credential files are pretty common practice for the big providers, yeah.
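For illustration, here is a rough sketch of a pod carrying the two CSI ephemeral volumes described above, expressed with the Kubernetes Go types. The driver name, the volumeAttributes keys, and the object names are all placeholders; none of them are names the group has settled on.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePod sketches a workload that mounts two CSI ephemeral volumes:
// one carrying connection information, one carrying credentials. The
// driver name "cosi.example.com" and the attribute keys are hypothetical.
func examplePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "my-app", Namespace: "app-ns"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "example/app:latest",
				VolumeMounts: []corev1.VolumeMount{
					{Name: "bucket-connection", MountPath: "/etc/cosi/connection"},
					{Name: "bucket-credentials", MountPath: "/etc/cosi/credentials"},
				},
			}},
			Volumes: []corev1.Volume{
				{
					// Connection data: the user references the BucketRequest
					// they created themselves.
					Name: "bucket-connection",
					VolumeSource: corev1.VolumeSource{CSI: &corev1.CSIVolumeSource{
						Driver:           "cosi.example.com",
						VolumeAttributes: map[string]string{"bucketRequestName": "my-bucket-request"},
					}},
				},
				{
					// Credentials: likewise referenced by the user's own
					// BucketAccessRequest name.
					Name: "bucket-credentials",
					VolumeSource: corev1.VolumeSource{CSI: &corev1.CSIVolumeSource{
						Driver:           "cosi.example.com",
						VolumeAttributes: map[string]string{"bucketAccessRequestName": "my-bucket-access-request"},
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(examplePod().Name)
}
```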
B: Really, we're talking about a secret, right: the CSI driver is going to pull a secret and then it's going to mount it in the pod, correct? Yeah. So, in terms of the RBAC access issues or concerns that we had: if you're only pulling a secret and only waiting for the bucket to be created, we still have that issue.
A: The RBAC concern, as I understood it (I went back and had to re-listen to the meeting to make sure I had everything right), was mostly about giving get, list, and watch permissions to the driver, so it can get, list, and watch the BucketAccessRequests and BucketRequests in application namespaces. The access credential secret should only exist in the provisioner namespace.
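For concreteness, the permissions under discussion would amount to something like the ClusterRole sketched below. The API group and resource names are placeholders, since the COSI API group had not been finalized at this point.

```go
package example

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// driverClusterRole sketches the read-only access the COSI CSI driver
// would need on the namespaced request objects. The API group
// "cosi.example.com" and the resource names are hypothetical.
var driverClusterRole = rbacv1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{Name: "cosi-csi-driver"},
	Rules: []rbacv1.PolicyRule{{
		APIGroups: []string{"cosi.example.com"},
		Resources: []string{"bucketrequests", "bucketaccessrequests"},
		Verbs:     []string{"get", "list", "watch"},
	}},
}
```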
B: Yeah, so, where I'm coming from: it might be possible for the CSI driver to never have to get a Bucket or a BucketAccessRequest, in the sense that it doesn't have to listen in on events for those. The reason being, the CSI driver is going to be called after the pod is scheduled, and it needs a volume, and we would have requested this volume in the pod spec itself. So say the request comes in, and then...
B: I think the method is called PublishVolume, or maybe MountVolume. When that is called on the driver, can we then just list for the bucket that we were interested in, and if it is found, we can assume that it's available. Then we look for a secret where we have the namespace credentials, or our authorization, and, you know, just work with that.
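A minimal sketch of the check B is describing, done when the kubelet asks the driver to publish the volume (NodePublishVolume in CSI terms) rather than via watches. The types and lookup helpers here are hypothetical; the point is only the ordering: verify the bucket exists, fetch the credential secret from the provisioner namespace, then write the files into the volume. Returning an error simply makes the kubelet retry, which is what blocks the workload until provisioning completes.

```go
package example

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
)

// BucketLookup is a hypothetical helper around whatever client the driver
// uses (a lister, a direct API call, etc.).
type BucketLookup interface {
	// BucketExists reports whether the cluster-scoped Bucket backing the
	// named BucketRequest has been provisioned.
	BucketExists(ctx context.Context, bucketRequestName string) (bool, error)
	// CredentialsFor fetches the credential secret (from the provisioner
	// namespace) for the named BucketAccessRequest.
	CredentialsFor(ctx context.Context, bucketAccessRequestName string) (map[string][]byte, error)
}

// publishVolume sketches what the driver would do when the kubelet asks it
// to publish the ephemeral volume for a pod.
func publishVolume(ctx context.Context, lk BucketLookup, attrs map[string]string, targetPath string) error {
	ok, err := lk.BucketExists(ctx, attrs["bucketRequestName"])
	if err != nil {
		return err
	}
	if !ok {
		// Returning an error makes the kubelet retry, effectively blocking
		// the workload until provisioning is complete.
		return fmt.Errorf("bucket for %q not provisioned yet", attrs["bucketRequestName"])
	}
	creds, err := lk.CredentialsFor(ctx, attrs["bucketAccessRequestName"])
	if err != nil {
		return err
	}
	for name, data := range creds {
		if err := os.WriteFile(filepath.Join(targetPath, name), data, 0o600); err != nil {
			return err
		}
	}
	return nil
}
```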
A: Yes, I think that that's doable. The one thing, and let me put it like this: I don't really know how big of a concern it is, I'm open to hearing, but it's the reference path from the user's point of view. I'm thinking that when I'm defining my pod spec as a user, should I have to know the name of the Bucket object or the BucketAccess object, or should I just have to know the name of the...
D: Right, so if I have a workload definition that includes bucket requests, then that entire... I have a multi-stage thing that I have to step through. I first have to satisfy the BucketRequest and BucketAccessRequest provisioning to create those cluster-scoped objects, then I have to do the ephemeral volume, and then I can run the pod, right? So there's a multi-stage piece there. I don't think you can presume that the bucket exists beforehand; the CSI driver has to be able to get the credentials, and it's got to be able to block on those. Yeah.
A: So the way I've envisioned this, or rather what I'm proposing, is that they would just make reference to the objects that they themselves have created. I know the name of my BucketAccessRequest and my BucketRequest; I would stick those in my string-to-string map, with keys defined by, you know, the COSI authors, and then that would be passed to the CSI adapter, and it would use that to dereference the cluster-scoped objects.
D: Where do all the permissions... I mean, it needs the permissions to see the local namespaced access request, because that's what they're referencing, and then it needs the permissions to be able to see the cluster-scoped objects that derive from those. Those are exactly the set of permissions I think we've been talking about all along; I don't know how you whittle that down.
B: What I was saying was: if we had a standard convention, say for the secret name (because the bucket is cluster-scoped and uniquely named, and the secrets are going to be auto-generated in the provisioner namespace), we could have a secret name matching the bucket name. That way, if you just knew the bucket name, you'd be able to get the secret, and you wouldn't need access to a BucketRequest or a BucketAccessRequest, because you...
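As I understand the convention B is floating, the derivation would be purely mechanical, something like the sketch below. The prefix and the provisioner namespace name are placeholders.

```go
package example

import "fmt"

// provisionerNamespace is a placeholder for wherever the auto-generated
// credential secrets live.
const provisionerNamespace = "cosi-provisioner"

// credentialSecretName maps a cluster-scoped, uniquely named Bucket to a
// conventionally named secret in the provisioner namespace, so a consumer
// that knows only the bucket name can locate the credentials.
func credentialSecretName(bucketName string) (namespace, name string) {
	return provisionerNamespace, fmt.Sprintf("bucket-credentials-%s", bucketName)
}
```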
D: You do, because you're not putting all of your... Effectively, I think that's suggesting you copy all of the details from the Bucket or BucketAccess object into the secret, and I don't think that was the thought. The thought was that you would only put credentials in there; all of your endpoint information, all of your structural information, would still be in the cluster-scoped objects. I see.
A: Well, the point we ended on last meeting was that it's an acceptable model, given that the CSI adapter is kind of a stopgap, with the ultimate goal hopefully being that this code is actually written into, you know, the kubelet itself, so that the kubelet's trusted identity would have the RBAC rights to read these namespaced objects and write the information directly into the pod, right?
C: Today, the node authorizer kind of just looks at the first level: what are the secrets that a pod is referencing, okay, and I believe it also looks at PVCs and the PVs those reference as well. So if there's a secret that a PVC is pointing to, it can authorize that. So, similarly, you can imagine we update this logic to handle bucket requests.
D: I guess I'm not as worried about the RBAC concerns that we're talking about here, because this is not privileged in the provisioners, right? This is not third-party code. This is privileged in the main controllers for this, and so it is essentially a kind of Kubernetes component; it's just not, you know, mainline core. But yeah.
A: And we're saying pretty much what we said two weeks ago, too. So if I'm wrong on that, please speak up now; otherwise I think we can move forward. I think we all agree that the RBAC, right now, is not a huge concern: it's limited in scope and it applies to code that we're going to own to begin with. Okay.
A: So, moving on to the next point of the recap: we debated a bit about the process of turning a greenfield bucket into a shared, pseudo-brownfield bucket. The process we discussed to make this happen was to enable administrators to (a) edit a permitted-namespaces list on the Bucket cluster object after creation, or, if it's brownfield to begin with, at creation, and then (b) enable users to pick BucketClasses, which would have an additional permitted-namespaces slice that would also specify namespaces.
A: We had sort of three separate states that a Bucket API object could be in. If it's private, its permitted-namespaces list would just consist of the originating bucket request's namespace. If it was created from the BucketClass, it's a kind of pseudo-brownfield, shared, and then that list would be a concatenation of both the originating bucket request's namespace and the list of additional namespaces. And then a public mode, defined as, you know, an asterisk or some other character.
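A small sketch of how those three states could collapse into a single membership check. The field name and the wildcard choice are placeholders, not the KEP's final API.

```go
package example

// Bucket sketches just the field relevant here: the permitted-namespaces
// slice on the cluster-scoped Bucket object. The field name is hypothetical.
type Bucket struct {
	PermittedNamespaces []string // e.g. ["app-ns"], ["app-ns", "team-b"], or ["*"]
}

// namespacePermitted covers the three states described above:
//   - private: the slice holds only the originating request's namespace
//   - shared (pseudo-brownfield): originating namespace plus additions
//   - public: a wildcard entry permits every namespace
func namespacePermitted(b Bucket, ns string) bool {
	for _, p := range b.PermittedNamespaces {
		if p == "*" || p == ns {
			return true
		}
	}
	return false
}
```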
A: ...accuracy and race conditions, and the asynchronous way in which it would be handled. So the consensus at the end of the meeting was: for a delete operation on greenfield buckets, the COSI controller will query all bucket requests in the cluster and determine which ones have a reference to this bucket, and that also has to be checked against the permitted namespaces. Pending that, if there are no requests for this bucket, or that reference this bucket, the deletion operation will continue; otherwise it will block until all requests are gone.
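A sketch of the delete-time check that consensus describes: list every BucketRequest in the cluster, keep the ones that reference this bucket from a permitted namespace, and only let deletion proceed when none remain. Type and field names are placeholders.

```go
package example

// BucketRequest sketches only what the check needs; the field names are
// hypothetical, not the KEP's final schema.
type BucketRequest struct {
	Namespace  string
	BucketName string // cluster-scoped Bucket this request resolved to
}

// canDeleteBucket reports whether a greenfield Bucket may be deleted:
// deletion proceeds only if no permitted-namespace request still
// references it; otherwise the controller blocks (requeues) until the
// remaining requests are gone.
func canDeleteBucket(bucketName string, permitted func(ns string) bool, allRequests []BucketRequest) bool {
	for _, br := range allRequests {
		if br.BucketName == bucketName && permitted(br.Namespace) {
			return false // still referenced: block the delete
		}
	}
	return true
}
```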
A: Okay. If something comes up further on, just please feel free to bring us back to it. So now I have some actual questions... well, some of the updates. Let me enlarge this. All right, so: current state of code. We have two repositories in kubernetes-sigs now, the COSI CSI adapter and the COSI spec. I'll send these slides out after the meeting on the workgroup Google Group, so you have access to these links. Currently there's no code in there, and my question for Sid and/or Chang...
A: Okay, that answers that question pretty succinctly, then. So, on that note: we have an unofficial workspace right now. The organization is called container-object-storage-interface, and we have a number of repos there, some being developed in right now. I have a PR open in the COSI CSI adapter; the work for that is about 80% done. It was pending some work on the COSI API, and that's been ironed out sufficiently that I can continue work on that PR. We also have the gRPC spec that is being developed.
A: Yeah, no worries, we can figure that out.
C: And ideally, I mean, what we would do for these repos is make you admin on them, so you can iterate quickly, get things merged, and kind of move at your own pace in the official repos. That way you wouldn't need a side repo, and if you needed something on the side, you could use your own personal GitHub.
C: I think that should be fine. Or maybe you even put a notice on the unofficial one to say, you know, the suggested working space is the official repos, go there, or something. I don't know; it might be worth following up with someone from the CNCF on that. I can put you in touch with Chris: if you send me an email, then he can help you sort it out.
A: Okay, and then again, I'll send these slides out so you'll have access to these things. Feel free to review the code and chime in on PRs and things.
A: So next up, speaking of getting the KEP merged: we need to define our graduation criteria for this. So far we don't really have a section for that. Some things are obvious, of course: you know, a crystallized API that we agree we can move forward with, code against, and iterate on. But beyond that, there are things in the new sort of KEP format that have come out since this was written.
C: In the worst case, we could push it out from the 1.20 timeframe to 1.21. What's the timeline for 1.20? 1.20 should actually be end of year. Normally that's at the quarter boundary, but this year, because of the coronavirus, we're doing three releases, and so 1.19 got pushed into Q3, and so 1.20 will probably be closer to the end of Q4.
A: Deletion considerations. So, going back into implementation a little bit: there were a couple of open questions that we didn't get to in the previous meeting. Just to run down these use cases here: they're pretty common, but there are some things that don't quite have an answer yet. So take number one.
A: You know, the question falls down on two separate points. One: is it the responsibility of Kubernetes to, you know, handle this, or is this something we just expect drivers to be implemented to do cleanly, and allow Kubernetes to move forward? And if it is a Kubernetes responsibility, how can we signal to the driver that we want to do an asynchronous delete, a delete that we expect to take a while?
C: As I said, we had a big discussion on synchronous versus asynchronous for CSI and ultimately landed on synchronous. It was much easier to implement and design, and the thought also was that synchronous can capture the asynchronous use cases, in kind of an ugly way, by just polling, at least on the CSI side, for volumes.
G: For CreateSnapshot it's actually not really polling you back; basically, you just make the same call again and again, because it's idempotent, so it's supposed to give you the same result, the same snapshot, and it should not be creating a new one. We do also have ListSnapshots, which can carry the status, but because that is optional, for dynamic provisioning we're actually just calling CreateSnapshot.
C: And even going a step further, the expectation could be that it will block until the call is complete, but for these long-running operations it's going to hit the timeouts, you know, network timeouts, fail, and then be retried by the caller. So the logic on the caller side is kind of dumb and simple: it assumes synchronous, blocking operations, and it has the logic to retry if things fail, and that's basically it on the client side. On the server side, you just need... you're gonna be...
H: So that's fine from a correctness perspective, but the downside is that the amount of time things will actually take ends up depending on what your retry cycle is, if you just miss a timeout somewhere. Like, if the retry cycle is every one minute and it takes you 61 seconds to delete it, it's going to appear to take two minutes from the user's perspective, and that's, I think...
D: But even in his example, if you have an exponential back-off that tops out at 2 minutes, that means anything that is taking 2 or 3 minutes you're going to capture at finer granularity than that. It's only when things start to take multiple minutes that you're going to miss by an interval, and then, you know, your worst case is probably around the 5 to 6 minutes: you might over-count by a couple of minutes, but after that it starts to be a diminishing, like, small part of the overall time.
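A minimal sketch of the caller-side model being described: a dumb, synchronous call with a per-attempt timeout, retried on a capped exponential back-off. The durations and the deleteBucket function are placeholders; the point is that a 61-second server-side delete simply surfaces as one failed attempt plus one successful retry.

```go
package example

import (
	"context"
	"time"
)

// deleteWithRetry sketches the "dumb and simple" caller logic: issue a
// synchronous delete with a timeout, and on failure retry with an
// exponential back-off capped at maxBackoff. deleteBucket stands in for
// the real driver call (hypothetical).
func deleteWithRetry(ctx context.Context, deleteBucket func(ctx context.Context) error) error {
	const (
		attemptTimeout = 1 * time.Minute
		maxBackoff     = 2 * time.Minute
	)
	backoff := 5 * time.Second
	for {
		attemptCtx, cancel := context.WithTimeout(ctx, attemptTimeout)
		err := deleteBucket(attemptCtx)
		cancel()
		if err == nil {
			return nil // delete (eventually) succeeded
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
		}
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```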
D: I'm sorry, I want to dig a little deeper into the model of how you expect this to work; it's a little bit weird to me. Let me explain what I was expecting, and then what I think you're saying. What I was expecting is: you would issue a delete, and it would return right away and say, "Yep, I'm deleting it, and here's the state of the thing: it is being deleted." And then later, on a reconcile loop, I would say, "Oh, I need to delete this thing," and either I could say...
D: ...well, it says it's currently being deleted, therefore I've got no action to take; or, no, I do have an action to take, because I need to query it and see whether it has been deleted. But I could just model that as attempting to delete it again, and then rely on the idempotency constraint in the driver to line that up, whether it's capturing an object or whatever. But it sounds like what you're saying is, instead, I will issue a delete and I don't expect it to return right away.
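A sketch of the idempotent model D describes, under a hypothetical store interface: the reconcile loop does not track whether a delete is already in flight; it just calls delete again on every pass, and "already gone" is treated as success.

```go
package example

import (
	"context"
	"errors"
)

// ErrNotFound is returned by the hypothetical store when the bucket no
// longer exists.
var ErrNotFound = errors.New("bucket not found")

// BucketStore stands in for whatever backend the driver talks to.
type BucketStore interface {
	Delete(ctx context.Context, name string) error
}

// reconcileDelete sketches the idempotent model: issue the delete again on
// every reconcile and treat "already gone" as success; anything else means
// the controller requeues and tries again later.
func reconcileDelete(ctx context.Context, store BucketStore, name string) (bool, error) {
	err := store.Delete(ctx, name)
	switch {
	case err == nil:
		return true, nil // deleted on this pass
	case errors.Is(err, ErrNotFound):
		return true, nil // already gone: idempotent success
	default:
		return false, err // still in progress or failed: requeue and retry
	}
}
```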
G: I think for CreateSnapshot it's a little different, because there are actually two phases. The first phase is blocking until the snapshot is cut, and that really should not take that long. That comes back, and then we call CreateSnapshot again, and then that should tell you... you should check the status and see whether the upload has completed or not.
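A sketch of that two-phase pattern with hypothetical types: the first call blocks only until the snapshot is cut, and repeating the same idempotent call returns the same snapshot with a ready flag the caller polls. This mirrors the behavior G describes, not any exact CSI signatures.

```go
package example

import (
	"context"
	"time"
)

// Snapshot is a stand-in for the driver's snapshot result; ReadyToUse
// mirrors the "upload completed" status being polled for.
type Snapshot struct {
	ID         string
	ReadyToUse bool
}

// Snapshotter stands in for the driver: CreateSnapshot is idempotent, so
// calling it again with the same name returns the same snapshot and its
// current status rather than cutting a new one.
type Snapshotter interface {
	CreateSnapshot(ctx context.Context, name string) (Snapshot, error)
}

// waitForSnapshot cuts the snapshot (phase one, short and blocking) and
// then re-issues the same call until the status reports ready (phase two).
func waitForSnapshot(ctx context.Context, s Snapshotter, name string, poll time.Duration) (Snapshot, error) {
	for {
		snap, err := s.CreateSnapshot(ctx, name)
		if err != nil {
			return Snapshot{}, err
		}
		if snap.ReadyToUse {
			return snap, nil
		}
		select {
		case <-ctx.Done():
			return Snapshot{}, ctx.Err()
		case <-time.After(poll):
		}
	}
}
```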
A: Okay, so I'd like to leave that topic there for now, given that we're coming to the top of the hour. And the last thing, unfortunately: I am being moved, due to a little bit of a reorg inside Red Hat, so I'll be moving off of storage and taking over responsibilities on the... I've defined the roles here as administrative and technical (I don't know if "admin" is the best name for it), but Erin and Jeff from Red Hat, who have been attending all the meetings, are going to be taking over.
A: You know: KEP documentation updates, shepherding the KEP through the overall review process. They're going to be setting the agenda for the meetings, like I've been doing, and bringing up, you know, technical discussions and design discussions, and then collaborating with Sid. Sid, I've asked to take on a technical-leader kind of role here: getting the code implemented, making the kind of deep-down design decisions, and helping surface the unforeseen, you know, blockers, edge conditions, things that we couldn't have predicted, back up into the KEP review meetings.
A: He'll also be running weekly Monday stand-ups and soliciting help from community members, you know, to help with the technical needs here. So I'll be available in sort of an unofficial stance, in terms of consulting on, you know, design ideas. If there are things in the KEP that don't make sense and I'm needed for sort of some historical context, I'm happy to talk about that, but starting Monday I'm going to be shifting off pretty much entirely.
F: We certainly don't want to lose momentum, and I think the timing is reasonably good, in that we have some other people really stepping up, who have even run one of the meetings and have a good technical understanding. And we certainly want everyone that's been participating (we've had over 20 sometimes on these calls)... we would...
B: Yeah, yeah, thanks for bringing it this far. A lot of the tough questions have been answered and we've established a good cadence. As of now we have Rob working with us; I don't know if he's on the meeting today, but, just like Jeff was saying, if more people can contribute, that'll be good. We have a lot of interesting challenges to solve, and yeah, everyone's invited.