From YouTube: KEP Review: Object Bucket API (21May2020)
A: Okay, can everyone see that? You should be seeing a slide deck. Given last week's discussion around the need for a credentialing API and at least a basic mechanism for handling that, we spent some time brainstorming on it. I threw together a slide deck yesterday to illustrate the different models that we think are viable.
A: There are certainly more out there that we haven't considered yet, but this is where our heads are at. The approach we're trying to take is a minimalistic change to the API. Since we're not trying to write a full-fledged IAM operator, but we're still trying to enable credential minting specific to buckets, there's at least some amount of IAM operations that have to happen. I'm not going to get into those very much in the slide deck.
A: But hopefully the conversation will lead to that. So the first one I call discrete minting. It is essentially a standalone API that interacts with COSI and could be part of the COSI interface. Sorry, I'll put these slides up in the Google group. While the graph here looks pretty complicated, just note that the right-hand side APIs from the original bucket proposal are unchanged; they are renamed: BucketRequest replaces Bucket, and Bucket replaces BucketContent.

A: The mechanism used to pass them to the drivers means that they will have to be created; this is just a fact of the automation. Anyway, the driver will have to create the secret with the credentials in its own namespace. A central controller will have to clone that secret to a user's namespace, and this is part of the minimally privileged driver philosophy. Whoops, okay. So a user would create a BucketCredentialRequest, which would create a cluster-scoped BucketCredential, which is essentially just a dereferencing tool.
A: It has to exist because the driver can't see into user namespaces, so it has to look somewhere. Cluster-scoped objects are the method by which we communicate with the driver, so that to me just makes sense. Once it detects a new BucketCredential, it creates new IAM credentials, writes those into its own namespace, and updates the BucketCredential. The central controller would clone that back into the user namespace. Now the user is ready to provision a bucket or buckets, so they create their BucketRequests.

A: With a reference to the secret, a Bucket is created in the cluster scope. The driver, not illustrated here, will have to dereference this secret, that is, the secret within its own namespace, through some manner. This could be through an annotation on the cloned secret, or a direct reference, say, between the BucketRequest and the BucketCredentialRequest, but some way it has to know that this is the secret that should be used to create the bucket. It creates the bucket and binds that set of credentials to the bucket.
A: Our thought on that was that the BucketCredentialRequest should probably reference either a new API, like you said, a credential class, or the bucket class. I think a credential class is probably the better way to go, given that it would probably specify credential-specific parameters and would also name a provisioner. Sorry, this is all a bit hastily put together, which is on me.
B: It has historically been the only mechanism that we have in Kubernetes, but I have to believe that a lot of Kubernetes platforms are going to be moving to something like workload identity, which basically maps a Kubernetes service account to an IAM principal in the cloud service, so that the minting problem here becomes just the access credentials. There's still a provisioning problem: you still have to provision access to a particular bucket, but there is no need, necessarily, to communicate those credentials back to the workload.

B: What I would ask is that we figure out how, in this model, we could accommodate that, but also probably still have the fallback of being able to pass it via a secret. So it's just sort of a question of how you would express the two different problems: minting a principal, and minting just access for an existing principal.
E: Funny you mention that. I talked to Tim this week about this design and he had exactly that feedback. He said, make sure to allow for both secret and secretless access, which is exactly what you're saying: this design is for secrets, but also allow for workload identity, where you don't have to provide a secret. The identity of the pod is passed through, and either something within Kubernetes maps it before passing it through to the cloud service, or the cloud service does.
B: The provisioning of the principal, we're now saying, is optional, because you might want to do that through workload identity or the equivalent. But the thing that's always there, theoretically, is provisioning the access. So you could either bring provisioning of the principal itself out of band and make it basically a three-part problem (you provision a bucket, you provision a principal, and you provision access), or you can combine the principal and access provisioning together, with lots of optionality in the class.

B: But you know, we just did that with Bucket and decided to pull the two apart, and I suspect that wasn't just because of the security implications; it was because we were conjoining abstractions. So I'm wondering if we might want to treat some sort of principal identification as independent of the access configuration.
A: Right, yeah, and I get your point too. So the point here is that, like you said, it's essentially a three-part problem: you have an identity, you have a storage endpoint, and you have the binding that takes place that prescribes your access. The complication is that that binding can be written to the bucket or written to the IAM role, so it also exists in both sections.
B: The next problem is: what buckets can my application access? Application access to a bucket requires something in the cloud service to enable it, but whether or not I can know about a bucket is entirely a Kubernetes-level problem. So this would be the question of: am I allowed to put a BucketRequest against a Bucket?

B: No matter what we do, it has to be instantiated in the cloud service, whether that means it's instantiated in an IAM role that's independent of the bucket but refers to the bucket, or whether it's some kind of property in the bucket itself that refers to the principal. Either way, that is an abstraction that can be black-boxed.
B: That was the point that I was trying to make. In a different part of this problem, I do think we have whole fundamental questions about reuse of Bucket and BucketRequests, multiple binding versus single binding. We started to talk a little bit about that last week. I just wanted to make sure you weren't sneaking that in at this point, when we were talking about access.
C: Because I like the direction where this is going, my first question would be: it seems to me that the BucketRequest itself implies a lot of things, because it requires the class, which determines what type of credentials will be provided. So why can't we start with the BucketRequest? The bucket credentials coming first seems to me like it may be too much, because when you're requesting the bucket, I already know you want a GCS bucket and you're going to get that type of credentials.

C: You may actually be getting a file mounted as well, and so I think that the BucketRequest will be enough to imply both what bucket I want, along with the class, and then the credentials that I'm expecting. And then this object makes a lot of sense to me, because it kind of holds what was provided to you. But I feel like, if you have the driver here in your diagram, these bucket credential requests could be stored somehow by the driver, or referenced by the driver.
B: Can I address that? I don't think merely knowing the particular, say, data-path API gives you adequate information about the way you're going to handle credentials, because, for example, there's a whole bunch of S3-compliant providers that might describe credentials in the same way, but the way they're provisioned and everything else might be very different. And even when you've got the same thing, like GCS, for example, you might in some cases support workload identity and in other cases support secrets.

B: So there really is a whole class of problems that are just about how you represent credentials and things like that, which are kind of independent of whether or not the bucket gets provisioned. And then the other thing, which is what led to the split in the first place, is that we want to keep bucket provisioning about bucket provisioning, not about credentials. There's a strict ordering there: you can completely provision a bucket without adding access credentials, and then apply access credentials.
D: The reverse is not always true, which is interesting on this diagram. I have a question for John, or maybe one of you can answer: could you please clarify where exactly we grant access to credentials? From this diagram I don't really see where that happens. At which step do we grant access?
B: That flow you only have to execute once per actual bucket, correct? Right, yep. So then you need to support a flow where you reference an existing bucket, and that's where all these namespace-level access control kinds of things come in. But at the end of it, all you get is a handle: you get a Bucket and you know where the bucket is, but you can't actually get your application to access it yet, because it hasn't been granted access.
B: One of the ways to look at this is that a pod only ever has to specify a bucket access request. That bucket access request might refer to an existing bucket, or might ask for a bucket to be provisioned. Now, exactly how that works out, it may mean you always have to have a local handle to the bucket. But that's the way I was looking at it: if you have sort of two separate provisioning systems, they have very different personalities, and so you don't want to combine them together.
B: You know, they can be mounted or whatever; that's a problem of the interface definition with the particular expectation of the application. But tunneling through the same thing implies that you end up having two flows that have to update the same resource, which feels weird.
C: I feel like having that single resource defining how the access request on the producer side is going to happen simplifies a lot of the discussion, because then the bucket credential request and the BucketRequest itself concern just COSI behavior. It's no longer relevant to the developer or the administrator that's managing access to the bucket; they're only trying to satisfy what the pod is actually going to need, in one single place, and then everything else is for the automation.
G: The question is how bucket access can be reused when we're talking about a brownfield bucket versus a greenfield credential. So if they already exist, which would you refer to? Would you bypass the BucketRequest and just refer to the Bucket directly, and also, on the access side, bypass the request altogether and refer directly to something else, like a secret or even a workload identity, or whatever is there?
B: With that approach, once you have a request for both, that request can then either be satisfied via a pre-provisioned, you know, brownfield or driverless case, or can be dynamic. Separating them allows you to separate the dynamic part: you can dynamically mint the bucket but not dynamically mint the credentials, or you can dynamically mint the credentials but not the bucket.
G: I'm totally up for it, but what I don't follow is whether the access should support two ways of referring to a bucket. Because if it's meant to be deployed with BucketRequests, for example, then for the sake of simplicity I would want to refer to the BucketRequest I'm deploying, right? But if...
C: ...by name. So that's the case for brownfield buckets, and it's a great scenario, because bucket access could standardize it. If you are requesting a bucket which the system cannot provide because it doesn't understand it, let's say you're requesting a bucket called "logs", but with AWS that name is probably taken, then maybe we should make it portable.

C: So this will satisfy both. I mean, if you are configured in a way that a request for a bucket called "logs" can be provisioned, it would be provisioned, so effectively it will be created and then you will be granted access. Or, if you are in an environment that's working with a cloud provider, then this requested name is still enough to map to actual existing buckets, and it will just be some extra work for the administrator to come and satisfy your bucket access somehow, with the proper names for the buckets.
G: So I'm not sure if I'm completely understanding what you're suggesting. You're suggesting to keep provisioning about creating: I could request a bucket, and if I want to reuse a bucket, then the bucket access will always refer to it. So every usage for that case would require three objects: bucket, access request, and credentials. Because these would be able to detect whether I'm referring to something existing, or the bucket access would sort of refer to the pre-existing entity, I think.
C: We create them anyway, for both greenfield and brownfield, because you need to somehow know which buckets are in use, and the only way you can know whether anyone is actually using this bucket is by getting it. So even in that sense, we may be making it cluster-scoped; the bucket requests will mix them.
G: John also suggested that there would be two ways of referring to it. Either the Bucket will say, this is my owner, so once that BucketRequest is deleted, I want to be deleted as well, even if I have other bound requests, and they will sort of become unbound. Or the other option would be: I want to garbage-collect this bucket.
C: An application is going to become the owner, and it may be saying: I need a bucket called "temporary-logs" and I want it deleted after I'm gone. But what if that temporary-logs bucket already ended up being around? Then it gets provisioned access to an existing bucket, right? So I think it should be a best-effort satisfy: even though it requested to be deleted after it's done, it may actually be thinking it's going to be a greenfield case, but it ends up being a brownfield case.

C: The reference-counting case makes sense here, because when the bucket requests are getting destroyed because they're no longer needed, and we're evaluating whether we should be discarding the bucket or not, maybe we can reach the state where we say: okay, the bucket should be destroyed, attempt to destroy it. Maybe the driver would reject it, or the cloud source would reject it; that's fine.
C: We did our best, but then the next request comes into place, and maybe the driver tells you this bucket already exists, so your greenfield then becomes a brownfield again, and then everything is consistent again. So our bucket request, even though it's requesting a bucket that will be deleted, is granted an existing bucket.
G: I don't think a greenfield request should ever be translated into a brownfield request. If you request a bucket that doesn't exist, so basically, if you refer to a bucket class and not to a bucket, that's a greenfield request, and then the creation of the Bucket in Kubernetes in the cluster scope will be a signal that you created the bucket and are the owner of it. In the greenfield case, I don't think you would translate it to a brownfield.

G: ...but enough to be a managed bucket, right? It's just like a secret for the application, or any other resource that the application is deploying and whose lifecycle it is controlling. It must not necessarily even be the application; maybe the user, the operations manager of the application, deploying all the pieces needed for this application, would control their lifecycle, and would be able to control the bucket, to delete it.
B: The way you just described it, I think, is exactly right: if you know it's a greenfield case, you actually know that, maybe, at the time that you're doing your BucketRequest, and you are asserting that you want ownership of this, and probably that you don't actually want others to have access to it. Any time that you want to do bucket sharing beyond your workload, that is sort of by definition brownfield. Now the only question is: how do you get a brownfield provisioned?
C: What if you've got two clusters, one US-West and one US-East, and both try to deploy the same workflow, and the same workload says: I'm a greenfield request? But underneath you're using the same cloud provider. So at the end of the day, you should be getting the same bucket, right? No?
B: That's expected. I'm suggesting the following: if you can successfully create a BucketRequest to a Bucket, then you can configure access to that bucket. So that's the first problem, whether or not you can even have access to that bucket, which includes whether or not you can do greenfield, and whether or not you can do brownfield to an existing bucket. That's a whole problem that we have to solve, but once you've got the bucket...

B: The rest of this, then, is why the notion of a local reference to a bucket is a good idea: because you've already passed that first local access control problem, which is, am I even allowed to get access to this bucket? If I am, then I just configure the specific access that I need, and that then gets implemented on the backend. Am I missing something? No?
B: No, okay. So this is why I was suggesting that there might be three layers to this. Again, what I'm suggesting is: number one, provisioning of the bucket is a problem of the provisioner and of local Kubernetes access control. We agree on that. So we can get a bucket created, and/or we can share a bucket, without actually being able to access it yet, and all of that problem can be solved with whatever we decide for the management. So then there is: okay, now this application wants to access this bucket.
B: The bucket that I have been able to map into my application space, so I know I'm allowed to access it. Now I've got to make that data-path access work. That requires two things: it requires that I be mapped to a principal, and it requires that that principal be given access to that bucket. So you can imagine a scenario where you mint the principal and you mint the access as effectively one operation.

B: That was basically the way we were looking at this before. Or you could look at it as: minting the principal is one path, and minting access, assuming an existing principal, is another path. The advantage of that second mechanism is that it can leverage things like workload identity, so it doesn't actually require that you mint the principal as part of configuring the access. But if you scope the access piece down to "I already have a principal", then, yes, you have a bucket.
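The two access-minting paths just described can be sketched as hypothetical helpers: mint a principal together with its access in one operation, or grant access to an already existing principal (the workload identity case). All names are illustrative assumptions.

```go
package main

import "fmt"

// grant represents the binding between a principal and a bucket.
type grant struct {
	bucket    string
	principal string
}

// mintPrincipalAndAccess creates a fresh principal and binds it: one operation.
func mintPrincipalAndAccess(bucket string) grant {
	p := "minted-user-for-" + bucket
	return grant{bucket: bucket, principal: p}
}

// grantToExistingPrincipal assumes the principal already exists
// (e.g. mapped from a Kubernetes service account) and only binds it.
func grantToExistingPrincipal(bucket, principal string) grant {
	return grant{bucket: bucket, principal: principal}
}

func main() {
	fmt.Println(mintPrincipalAndAccess("logs").principal)           // minted-user-for-logs
	fmt.Println(grantToExistingPrincipal("logs", "sa-1").principal) // sa-1
}
```

The second path is what lets workload identity remove principal minting from the driver's responsibilities entirely.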
A: And so then, effectively, when I create my BucketRequest, that's essentially just saying: yes, you do have access to this bucket; here's the endpoint; you can't access it yet. You request access to that endpoint, and either credentials are minted or the service account is bound to the cloud service account, and now you have access to that bucket. Then the pod comes up. I guess this is a wrapping bucket-access API that would incorporate both of these. Yeah.
B: I mean, I think that you end up having to have the same driver, because it has to understand both bucket lifecycle and bucket-access lifecycle, and potentially IAM credential lifecycle; it has to have knowledge of all of them. So it makes sense to me that the flows have to be, to some degree, tied together through the driver. But again, like I said, the notion of workload identity, or the equivalent, solving one whole piece of that means that I only have to solve two-thirds of the problem.
C: At the risk of sounding like a broken record, that's why I say that a single object, a single resource, could imply both things: creating a single object that has everything we want in it, and then it either creates a request or the buckets, rather than having additional resources.
A: And then, lastly, the bucket access API was another key thing when I was considering the model we're talking about now, where I request access to a bucket, or rather request a binding to a bucket for my service account or credentials. It split the API between greenfield and brownfield at that point, because the BucketRequest, in my head, is essentially an endpoint representation, or rather a reference to an endpoint representation; the Bucket in the cluster scope is the endpoint.

A: So in a brownfield case, though, I don't really have a need for the BucketRequest. In my head, this is how I was thinking about it: if my bucket credential is for access to a bucket, then I would just reference that Bucket directly. It would make the BucketRequest kind of redundant. So the way I saw it was that greenfield would be referencing BucketRequests and brownfield would be referencing bucket credential requests, and this weird split in workflows was jarring my mind.
B: Yeah, I mean, I absolutely think that, not that it's the right way, but it is a correct way to model this: either you are talking about a request, which implies greenfield, or you are talking about a bucket, which implies brownfield. But that means you have to solve the access control problem for both the BucketRequest interface and the bucket access interface, which just felt weird to me, which is why I was thinking maybe you should always have a Bucket. Yeah.
C: I thought that's what we were going to try, right? Because a question just came to my mind: that means that if we had multiple BucketRequests for a single pod, meaning it wants two buckets, it will get two sets of credentials, right? Which would imply that the application needs to be prepared for that, and I...

C: ...don't know how that will fly, because I guess most applications expect to be getting one set of credentials to access two separate buckets. I thought that we were exploring the path of, you know, these credentials being permissioned to the pod, the pod having access to everything that the application would ever need, or were we always going in that separate direction? Yeah.
A: That's a good point; that's been on my mind too. So the use case is like: what if I'm in Google and I want to use, you know, big-data analytics and GCS at the same time, through one set of credentials? It's a single pod running a simple job that's going to read from here and pipe it into another service in the cloud service. How do we manage that? And then...
D: Right, yeah. I'm feeling like right now we're looking at this problem as two pillars: one pillar is provisioning the bucket, and the other pillar is granting access, credentials, to the bucket. We always try to stick it to one of the pillars, but doesn't it deserve its own pillar? And then, for brownfield and for greenfield, we see how we can combine these, but...
A: And I ask that because, if I'm creating a greenfield bucket and I want to be able to share it out, I should be able to define who I want to share it out to, who I should whitelist, right? If I'm creating a brownfield bucket, there's no owning namespace, which makes me think it should be a cluster-scoped object defined by an administrator.
B: If you imagine that you have buckets and you have bucket access: for buckets, there's clearly a case for sharing. It's an interesting question whether you actually want to share a bucket access. There's only really one case that I can think of where that's legit. Oh, I've got to go, sorry, but let me finish this thought. That is the driverless case, where you don't have anybody that can support minting that kind of least-privileged, revocable credentials for an application.

B: You can still do that. In other words, managing your credentials is a different problem than managing the underlying access. So you could have a credentialing component that is just providing a binding between you and a principal, and it has nothing to do with buckets, honestly; it doesn't necessarily have to. And then you can have something that specifically has a relationship with an equivalent, but black-box, component in the cloud service, which represents the relationship between a principal and a bucket, and that is what you want to tackle as the first-class problem.

B: You can support that with different variants of the access credential things. So you could say: mint me access credentials, and, oh, by the way, maybe the way this driver works is that it just also mints a user. Maybe a different driver doesn't mint the user; it expects there to be an existing user that you communicate to it, like an existing Kubernetes service account passed in as part of the request, and that's what informs it as to what it should be creating on the IAM side.
A: No, I think you make an important point, because in the real world the use cases for these are pretty much exactly what you said: I have to create my user, create a bucket, and define my relationship to that bucket, and if we're not representing that here, then we're just creating this kind of murky mess of a relationship between the two. I'll go ahead; we've got to drop off here, I'm sure. So thank you all for coming.

A: We'll see you all next week. I'll have the video posted to the working group here, along with these slides, and we can continue the conversation. Okay.