Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Design Meeting - 09 September 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: Yeah, he told me. I'll even show you what he's talking about and we'll go from there, but I'll bring up the issues that he brought up one by one, and we can go from there.
A: Sounds good, yeah. So he's also listed them in the PR, but I'll quickly start off and talk about what they are. The first thing is about cross-namespace sharing.
A: So he's okay with saying it's free-for-all right now, and I think that's how we should keep it, per the decision we made last week, where we said we don't do anything specific for cross-namespace bucket access: if someone creates a BAR pointing to a bucket created in another namespace, it just goes through. That free-for-all stance is okay for alpha, but not going forward, to beta or v1.
A: Yeah, so he had another proposal for how to do this cross-namespace transfer, or resource sharing. Have you heard of ReferencePolicy? You haven't? No? Oh, okay. So what it says is, it's just about who can refer to resources. Is it kind of RBAC?
A
It's
not
really
our
work,
but
it's
something
like
that
where
you
can
set
a
reference
policy
as
a
as
a
separate
crd,
and
you
can
say
from
what
name
space
can
you
refer
to
what
resources
and
other
namespaces?
A
So,
for
instance,
if
you
had,
if
you
set
a
reference
policy,
saying
that
bucket
access
requests
from
namespace
bob
can
refer
to
buckets
in
namespace
alice,
then
then
bucket
bucket
access
or
bucket
access
request
and
then
says
bob
can
do
that
reference
policies
is
not
yet
you
know
in
place,
but
the
current
proposal
is,
it
goes
to
an
even
higher
level
or
lower
level
of
granularity,
where
you
could
say,
namespace
or
bars
from
namespace
bob
can
refer
to
specific
buckets
in
in
namespace
alice,
and
you
can
have
a
separate
policy
for
each
of
those
buckets.
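For reference, here is a rough Go sketch of what such a policy object could look like, modeled loosely on the Gateway API's ReferencePolicy/ReferenceGrant idea. All type and field names below are illustrative assumptions, not actual COSI or Gateway API definitions.

```go
// Illustrative sketch of a ReferencePolicy-style CRD for the discussion
// above. Not an actual COSI or Gateway API type.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// BucketReferencePolicy would live in the target namespace (e.g. alice)
// and allow BARs from the listed namespaces to refer to specific Buckets.
type BucketReferencePolicy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	From []ReferenceFrom `json:"from"` // e.g. namespace bob
	To   []ReferenceTo   `json:"to"`   // specific Buckets in this namespace
}

type ReferenceFrom struct {
	Namespace string `json:"namespace"`
}

type ReferenceTo struct {
	BucketName string `json:"bucketName"` // a specific Bucket, per-bucket policy
}
```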
A: So it's not RBAC, because it's not about who can do what; it's more about what can be referred to. And that's the current direction it seems like Kubernetes is going to go. But does that mean that...
A: I was just thinking of it as the B, but yeah, you could stick with the existing approach and say a BAR always points to just a BR.
B: That addresses half of it. I'll just say this, because I don't want to go too deep down this rat hole, unless you want to: the whole discussion around transferring BRs and BARs is really two separate problems that we're addressing. One is the intent to actually share, where you want multiple users in multiple namespaces to all have references to the same bucket. But then there's another problem we're trying to solve, which is reassigning ownership.
A: Right, so the second half we still need to talk about, but I think the first half is a pretty clean approach. I didn't agree to it or anything, though, because, one, I think everyone needs to pitch in; we all need to agree on it before we go ahead with that. And number two, about this ownership issue: I think we can solve the problem in a simple manner. What I mean is we can keep the invariant that there's...
A
Only
one
owner
for
a
bucket,
that
is
the
pr
to
be
mapping,
is
one
to
one
and
if
the
br
goes
away
while
having
the
retention
policy
or
deletion
policy
retained,
so
the
bucket
is
left
orphaned,
then
we
should
allow
another
br
to
buy
into
it.
Just
like
pv
pvc,
binding.
A
There
is
a
race
condition
there.
If
two
people
are
trying
to,
you
know
sign
to
it,
you
don't
know
which
one's
gonna
win,
but
but
that's
you
know
in
practice
that
should
be
okay.
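A minimal sketch of the PV/PVC-style claim step being described, with stand-in types (the real COSI API objects are richer than this). The race between two BRs would be resolved by the API server's optimistic concurrency when the binding is written back.

```go
package main

import "fmt"

// Stand-ins for the COSI objects under discussion; illustrative only.
type Bucket struct {
	Name    string
	BoundBR string // namespace/name of the owning BucketRequest; "" = orphaned
}

type BucketRequest struct{ Namespace, Name string }

// tryBind sketches the claim step: a BR takes ownership of a Bucket only if
// no other BR owns it (one-to-one invariant). In a real controller the
// write-back is an update guarded by resourceVersion, so when two BRs race,
// the API server lets exactly one update succeed.
func tryBind(b *Bucket, br *BucketRequest) error {
	if b.BoundBR != "" {
		return fmt.Errorf("bucket %s already bound to %s", b.Name, b.BoundBR)
	}
	b.BoundBR = br.Namespace + "/" + br.Name
	return nil // real code: update status, treat a conflict as "lost the race"
}

func main() {
	b := &Bucket{Name: "orphaned-bucket"}
	fmt.Println(tryBind(b, &BucketRequest{"alice", "br-1"})) // <nil>: claimed
	fmt.Println(tryBind(b, &BucketRequest{"bob", "br-2"}))   // error: already bound
}
```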
E: That was my question: how do you identify this for brownfield? There is no BR, right, so there will be only a B. In this case we have a B, but we'd need a BAR to access that B. Sorry.
E: So do you want to distinguish that? Considering that, from the user's point of view, they are trying to use an existing B: the B was already there, and we removed the BR because of the retain policy, so the B is still there, right? So it's like a brownfield case, if you think of it that way.
A: Okay, yeah, that's true. So yeah, we're saying we can let a BR take ownership of that B by simply binding to it.
B: That's totally workable. We just have to say that that's the plan for brownfield: you have to do this legwork, and then one person, one namespace, will end up owning it, and if multiple namespaces need to have access to it, they'll have to do the cross-namespace sharing thing, whatever that looks like.
A: Makes sense. All right, so it seems like, so far, I mean, we haven't really discussed it enough, but so far it seems workable. Let's keep going. Okay, so we just talked about what we call static brownfield, and we know this works for greenfield, obviously, and we also know that, with ReferencePolicy, it's going to work for buckets created in other namespaces.
A: Is there any other condition we should be looking at? Any other case?
A: All right, seems like nothing else is left on this front. I mean, again, I still think it's a little too quick to go all in on this, but...
A
For
now,
I
think
it
looks
more
or
less
good
enough
if
any
other
questions
come
up
or
if,
if
you
feel
like
there
are
any
other
inconsistencies
or
issues
with
this
approach,
you
know
bring
it
up.
Please
we
want
to
make
sure
that
this
will
work
for
all
of
us,
yeah
all
right.
So
the
next
thing
that
he
mentioned
was,
let
me
just
share
my
screen
and
just
show
you
so
let
me
so
cross
ns
yeah.
A: Cross-NS is what we talked about, yeah. Okay, so the other thing was flattening BARs and BAs. I think he mentioned an issue as well; let me open it up here.
A
So
he
was
mentioning
that.
Why
do
we
need
both
bar
and
ba,
like
with
buckets
it's
more
clear,
but
why
do
we
need
a
bar
and
a
ba.
A
But
isn't
one
of
them
transferring
to
the
user
like?
Why
do
we
need
it
to
be
a
thing
like
a
resource.
B: I mean, part of it is you can put sensitive information in the BA and it's not necessarily readable by end users. Anything you put in the BAR is, of course, going to be visible to them.
C: Not to user namespaces, right; just to cluster-scoped resources. So that was a B and a BA. But over time we are weakening that stance, and we are saying, you and I talked about this, that the driver's namespace could actually get read access across the whole cluster. They could read secrets.
C: Secrets. We keep weakening this original concept that drivers were untrusted in the cluster and that we wanted to minimize the impact a rogue driver could have.
A: I mean, so we're...
B: I thought we split the responsibilities between some central controller and then the driver with its sidecars. And if the central controller was doing its job, or if it was structured like, for example, the snapshot controller, then the sidecar wouldn't need access to the namespaced resources. It would only need access to the non-namespaced resources.
C: That's what our original... that's what the KEP was a month ago, with that wall.
A: No, no, that's not why. We tore it down because the architecture changed. We were using a CSI node adapter, so this wasn't a month ago, it was the last review cycle: we were using the CSI node adapter to hold the pod back from starting and to mount the bucket info.json files.
A
But
after
the
review
we
all
talked
about
it,
and-
and
that
was
one
of
the
comments
as
as
to
why
we
needed
a
secret-
why
we
needed
a
csi
driver
when
we
could
just
do
with
the
secret,
and
so
when
we
went
to
the
simpler
secret
architecture,
the
csi
node
adapter
originally,
which
could
only
you
know.
We
said
the
node
adapter
would
be
able
to
get
get
secrets
from
only
one
namespace.
It
didn't
matter
anymore
because
it
doesn't
exist
anymore.
C: The namespace of the driver and the user's namespace. The node adapter is trusted; it's part of COSI, so we trust it. What we're not trusting is a driver written by a vendor or someone outside of COSI, right? And that driver is typically in the same namespace as the sidecar, so we didn't want that namespace to have privileges the driver could...
A: Yeah, I mean, talking about it that way... malicious is malicious. COSI will not go out of its way to make sure the user is not a bad actor. All we're saying is, if you're running COSI with a proper shipped image, you verify the SHA sum and everything, then you can expect good behavior.
B: Well, usually the concern, like with CSI, is that the vendor did their best to not do anything bad in their driver, but they had a security hole in it, and a malicious attacker exploits the security hole in the driver, gains control of the process in which the CSI driver is running, and from there launches an attack on the rest of the cluster, using the privileges that the driver has. Usually that's the threat model, right? Yeah.
F: Just a quick add-on to that concept, because I'm a little bit confused about where we're going.
A: You have the sidecar, and the kubelet, I mean, has all the privileges. It's super admin.
A: Thanks. No, that's true. So I don't think that's really an issue; the CSI provisioner is still pretty overpowered, and again, as long as a user is using... So are you saying the extra power that you're giving it is the ability to write secrets? Because the CSI provisioner also has the ability to read secrets, which I would think is far more dangerous.
A: But there's no point in that, because the information is going to be revealed to all the users in the namespace regardless, as soon as it's mounted, possibly. So let me explain, for those of you who may not be able to follow what Jeff just mentioned, because what he mentioned is based on a lot of context that we have. In the previous architecture, the way we had originally architected COSI, we had two symmetric resources.
A: The utility of having two resources was not clear at all, and so we're discussing why we originally had this design. One of the reasons Jeff brought up, and it's a good one, was that in the previous architecture we were separating which controllers had access to which resources and what the driver interacted with.
A: So we have an architecture where the driver runs alongside one of our components, called the COSI sidecar, and the sidecar only had access to cluster-scoped resources: the Buckets and BucketAccesses. But if you were to flatten Buckets, BucketAccesses, and BucketAccessRequests into one and just end up with a namespaced resource, say just BucketAccessRequest, then we would end up in a situation where the sidecar controller would need additional privileges to read a namespaced resource, which is BucketAccessRequests.
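Roughly, in RBAC terms (using the real rbac/v1 types, but with rule contents that are our reading of the discussion, not a published COSI manifest), the widening looks like this:

```go
// Sketch of the privilege widening being debated; rule contents are assumed.
package rbacsketch

import rbacv1 "k8s.io/api/rbac/v1"

// Today the sidecar only touches cluster-scoped COSI resources.
var clusterScopedOnly = rbacv1.PolicyRule{
	APIGroups: []string{"objectstorage.k8s.io"},
	Resources: []string{"buckets", "bucketaccesses"},
	Verbs:     []string{"get", "list", "watch", "update"},
}

// Flattening BA into BAR would add read access to a namespaced resource
// across all namespaces - the extra privilege under discussion.
var flattenedExtra = rbacv1.PolicyRule{
	APIGroups: []string{"objectstorage.k8s.io"},
	Resources: []string{"bucketaccessrequests"},
	Verbs:     []string{"get", "list", "watch"},
}
```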
A
So
we
we're
talking
about
whether
that's
a
security
concern
or
not,
and
so
far
I
mean
off
the
top
of
my
head.
I
I
don't
see
it
being
a
concern,
but
but
you
know
I
think
you
know
some
someone
might
bring
up
a
a
a
a
threat
that
maybe
I
haven't
thought
of
so
so
it's
open
for
discussion
right
now.
Jeff
did.
I
did
I
summarize
that
right,
yes,.
A
Okay,
so
just
so
everyone's
following
and
everyone
can
can
can
contribute
yeah.
I
don't
see
an
issue
with
the
sitecar
controller
or
or
the
part
where
driver
is
running,
which
need
not
be
the
case.
But
let's
say
the
part
where
the
driver
is
running.
I
don't
see
the
problem
with
it:
having
access
to
bars,
bucket
access
requests,
because
just
a
request
for
accessing
a
bucket
and
driver
already
has
privileges
to
access
the
bucket.
B: I'm just wondering if there are any idempotency concerns: when you're trying to do a grant-access call and you don't know if you succeeded or not and you're doing a retry, and you can't read the secret to know if it's valid or not, what are you going to do? You might just have to call...
B
I'm
just
I'm
just
thinking
for
the
sidecar
that
has
to
call
grant
access
if
it's,
if
it's
getting
a
call
or
if
it
sees
a
bar,
that's
in
a
transitional
state
and
it
needs
to.
And
you
don't
know
if,
if
you
previously
called
grant
access
this
cozy
driver
or
not
because
maybe
the
last
time
you
called
it,
you
were
killed
in
the
middle
of
the
call.
A: I mean, you can tell if the secret already exists or not, and for write idempotence, if you've always followed this model of passing it in... yeah.
A: Yeah, but on the conversation about idempotency, I don't see it being any worse than where it was. I don't think there's a difference between the previously accepted way and now, because we're never passing the existing secrets in to the driver, ever, to make sure that it doesn't give us something that's already there.
A: That's good. It's best not to re-implement or create whole new ways of doing things if something that works already exists. Thank you. Okay, so talking about that: so far, what it seems like we're saying is, if you just had a BAR and no BA, it should still work.
A: Yeah, the secret name was always in the spec, so that will remain as is; we'll just need a condition saying that accessGranted is set to true.
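A minimal sketch of such a condition using the standard metav1.Condition helpers. The condition type "AccessGranted" comes from the discussion; the reason and message strings are made up for illustration.

```go
package conditions

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markAccessGranted records on the BAR status that the credential secret
// has been minted; SetStatusCondition handles lastTransitionTime for us.
func markAccessGranted(conditions *[]metav1.Condition, generation int64) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:               "AccessGranted",
		Status:             metav1.ConditionTrue,
		Reason:             "CredentialSecretMinted",   // illustrative
		Message:            "credential secret created", // illustrative
		ObservedGeneration: generation,
	})
}
```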
A: Yeah, so the reason, we're calling it the credential secret, or minted credential secret, the reason we give the name up front is so that you can start a pod pointing to that secret. It won't start up until that secret is available. Otherwise you're making it imperative, where you're waiting for a secret with some generated name before you can start a pod, and it's not declarative anymore.
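For illustration, the pod-side view being described, using the real core/v1 types; the secret name "bar-creds" is a placeholder that would come from the BAR spec.

```go
package podspec

import corev1 "k8s.io/api/core/v1"

// Because the credential secret's name is declared up front in the BAR,
// the pod can reference it immediately; kubelet simply keeps the pod
// pending until the secret actually exists. Fully declarative.
var credVolume = corev1.Volume{
	Name: "cosi-credentials",
	VolumeSource: corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{
			SecretName: "bar-creds", // placeholder name from the BAR spec
		},
	},
}
```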
B: What about not having any support on the kubelet side to plumb things through automatically for you? Because I thought that, from an API perspective, the pod was supposed to refer to the BAR, and then there was a CSI driver as a stopgap mechanism; but the point was that the kubelet eventually was going to directly implement that API.
A: Yeah, that is still the plan.
C: Okay, and then that means COSI can't respond to the workload terminating. We had a design, I don't know if we had anything coded, but we did have a design for deletion where the termination of the workload also triggered things, and if the workload wasn't terminated, you couldn't actually remove the bucket, because it was being used. So do we lose that as well now?
A: We don't lose that. So it was Ben who was suggesting this. The way we were doing it was by having a finalizer per pod on the BA, and maybe you were just busy that day, I don't know, maybe you attended, but what happened was he was showing us...
A
How
updates
are
actually
really
expensive
on
the
api
and,
if
you're
going
to
add
a
finalizer
and
then
revoke
them
or
remove
them
one
by
one,
as
the
part
goes
away
and
comes
back
and
all
that
we're
going
to
be
really
bombarding
the
api
server
or
hcd?
A
And
then
we
discussed
different
ways
in
which
you
can
do.
You
can
do
the
patch
instead
or
all
these
merge
strategies
and
stuff,
but
in
the
end,
what
we
ended
up
deciding
was
we're
not
going
to
use
finalizers
that
way,
we're
not
going
to
have
a
billion
finalizers
one
for
each
part
on
the
ba
or
or
one
for
each
ba
on
the
b.
A: Instead, we're going to have one finalizer, and we maintain the state of the set of pods in memory and keep it updated using the regular reconcile loop in Kubernetes, and make sure that...
A: ...if the pod goes away, you get the event, so you would still be able to do it.
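A rough sketch of the single-finalizer approach as we understood the decision (not actual COSI code): pod membership is tracked in memory off the informer events, so no per-pod finalizer writes hit the API server, and only the final removal of the one finalizer is an API update.

```go
package tracker

import "sync"

const baFinalizer = "objectstorage.k8s.io/ba-protection" // illustrative name

type baTracker struct {
	mu       sync.Mutex
	podsByBA map[string]map[string]struct{} // BA name -> set of pod keys
}

// onPodEvent is driven by the pod informer: add on create, remove on
// terminate. No API writes happen here, so etcd is not bombarded.
func (t *baTracker) onPodEvent(ba, pod string, running bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.podsByBA[ba] == nil {
		t.podsByBA[ba] = map[string]struct{}{}
	}
	if running {
		t.podsByBA[ba][pod] = struct{}{}
	} else {
		delete(t.podsByBA[ba], pod)
	}
}

// canRemoveFinalizer: the single finalizer comes off only once no pod is
// still using the BA.
func (t *baTracker) canRemoveFinalizer(ba string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	return len(t.podsByBA[ba]) == 0
}
```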
C: But we now have to handle pods being relaunched, deployments, you know, there's...
C: But originally, and I know maybe this has changed a little bit, but originally we weren't having to watch pods, right? But when...
A: Maybe, if we really need to figure this out, we could think about sharding on the pod name somehow, so that we have active-active type setups, where one of the active workers deals with some of the pods and another one deals with another set of pods, so that we load-balance this thing. But we don't even need to go that far. We could just say there's one controller that just listens to pod events and then knows, based on whatever the current status is...
A: ...whether a B or a BA can be...
A: Oh, in the sense that, just from the controller that's listening on pods, we can tell if a B or a BAR can be deleted. But, like you said, if it's the sidecar controller that's doing this, and I think it will be the one doing it, it will also need read access on secrets.
A: And that, I think, introduces more vulnerabilities than anything else. I see that as a terrible design, actually, because anyone could pretend to be a COSI secret and, you know, keep credentials from going away.
C: Yeah, sure, I don't like it either. Initially, at least, it seems like a lot of work. It's tempting to add an annotation to the pod, but I'm not sure we would know how to... then we're only looking at it, I mean...
C: Maybe. So, Sid, is the endpoint information also in a secret or a config map, or is there some file? It's in the secret? It's all in the secret, okay. And we didn't get an argument about having non-sensitive data in the secret; that was okay? Having what...
C: Right, I understand that, and I'm not trying to raise a big flag here; I'm just sharing a comment from the past. I was just acknowledging that Tim didn't see that as an issue. That's all. I wanted to get it in the recording that having a mix of sensitive and non-sensitive data in the same secret wasn't an issue.
A: Okay, so the other thing was... all right, I think it kind of looks okay, but just to reiterate the one weird design choice we're making, and we're saying it's okay because it's temporary: it's how we determine whether a secret is a COSI secret.
A: I would avoid creating new CRDs, but what do you mean by different types of secret? Is there a Kubernetes concept for different types of secret? Yes? Okay, I don't know it.
A: The struct, okay... so this immutable, this stringData... okay, the secret type field is what you're talking about.
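What's being referred to is the Secret's built-in type field. For example, a COSI-specific type string (the value below is illustrative, not one COSI defines) would let controllers identify COSI-minted secrets without inventing a new CRD:

```go
package secrets

import corev1 "k8s.io/api/core/v1"

// Illustrative custom secret type; not a value defined by COSI.
const cosiSecretType corev1.SecretType = "objectstorage.k8s.io/credential"

// isCOSISecret relies on the built-in type field rather than a naming
// convention, so nothing can pretend to be a COSI secret by name alone.
func isCOSISecret(s *corev1.Secret) bool {
	return s.Type == cosiSecretType
}
```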
A: That would be good. All right, so yeah, just saying that design choice out loud, so that we know what compromises we're making and why we're making them. Again, the reason we're saying it's okay is because it's a stopgap measure; this is not the long-term plan.
G: I think someone shared it, okay.
A: All right, so Mauricio is here, and Tim as well. It's a really important requirement for Google that we have service-account-based authentication.
A: So I remember we had a bunch of conversations about it, and then I think you helped make this decision, but correct me if I'm wrong: we said we will punt on service accounts for now, right? What was the reason for it?
B: I don't think the intention was to punt on it. The intention was to get the basic one standardized. The issue, for me, has always been the downward-facing APIs: what does the pod consume, and what can it expect to be there? I think we do need to figure out what service account token-based authentication looks like, but we need to start from the pod's experience of it.
G: Okay, yeah. So as a user there are two use cases, depending on where my app is. It could already be using our workload identity server, but if it's not, then it's probably using some env vars that might be mounted from the secret that COSI is going to create.
G: Those are the use cases now for service accounts. Typically that is owned by my team, or by some admin; I would ask that person to create a service account for me.
G: I would then tell them the permissions that I need, so it could be bucket creation, bucket read, and so on. And under the hood, once my pod is set up to use a service account, Kubernetes will talk with, in Google Cloud, a workload identity server, so every request that usually goes to the metadata server...
G: ...goes through this other thing instead, and this other thing understands the mapping that is already set up in the service account: it understands that the pod is tied to one service account, and that the service account has some permissions, for example bucket read, and that's how the application can get to a bucket. So yeah, those are the use cases.
A: Thanks. So I think Ben's question, along with that, was also about how someone would specify the service account name in the pod spec. Is that right, Ben?
B: In particular, how would you write a workload that could deal with either, right? Because the whole point is a bucket should be a bucket, and I should be able to write a workload that says, give it a COSI bucket and it will run, and then not care about how you supply the COSI bucket.
A: So, for that, I thought about it, and pretty much every SDK has multiple credential provider mechanisms, and I think, at least for the cloud ones, we would have to add COSI as a credential provider. But we should, and they already have an order of precedence. So, for instance, IAM is considered higher order, or more preferred, compared to environment-variable-based credential discovery.
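As a sketch of that precedence idea, here is how it could look with the AWS SDK for Go v1 credential chain. The cosiProvider is hypothetical: no SDK registers COSI as a provider today, and the chain order below just encodes the precedence described in the discussion.

```go
package cosiauth

import "github.com/aws/aws-sdk-go/aws/credentials"

// cosiProvider would read the COSI-minted credential secret mounted into
// the pod. Hypothetical; shown only to illustrate the provider-chain idea.
type cosiProvider struct{}

func (p *cosiProvider) Retrieve() (credentials.Value, error) {
	// Hypothetical: parse the mounted COSI secret and return its keys.
	return credentials.Value{ProviderName: "COSIProvider"}, nil
}

func (p *cosiProvider) IsExpired() bool { return false }

// The chain is tried in order: IAM / workload identity first, then
// env-var discovery, then the COSI credential secret.
var creds = credentials.NewChainCredentials([]credentials.Provider{
	// &ec2rolecreds.EC2RoleProvider{...} would sit here for IAM first.
	&credentials.EnvProvider{}, // env-var based discovery
	&cosiProvider{},            // COSI-minted secret last
})
```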
A: Okay, yeah, so I think that's what we should do. And instead of even calling it service account, Tim was calling it workload-identity-based authentication, and yeah, that's clearly what we're trying to do here.
A: Yeah, makes sense. All right, so I feel like we're in agreement about the majority of things. So can I go ahead and update the KEP to say we can get rid of the BAR, or sorry, the BA, and just have the BAR, and then flesh out that design?
A: Okay, all right. So first I'll write it down, and then, once we're all in agreement, let's go. All right, that's it for today. There are a few other questions that weren't answered; feel free to look at the KEP and provide answers to his questions directly, if you want to. If not, we will go the route that we were just talking about.
G: Oh, okay. I have been trying to follow what has been going on in COSI; I started joining the meeting last week. But it's kind of unfortunate that we don't keep notes, so again today I tried to keep some notes about what we discussed.
G
At
least
I
got
a
couple
of
points,
so
I
think
it
would
be
good
to
rely
on
notes
in
addition
to
the
recording
because
it
for
me
it
was
it's
kind
of
hard
to
look
at
the
recordings
and
and
see
everything
that
was
discussed
because
for
some
points
like
we
started
a
discussion
and
the
in
the
in
the
end,
there
is
no
conclusion,
so
it
would
be
great
to
write
the
conclusion
at
the
end.
A: So could you maybe share a link to where the notes are being maintained? We can put that in our Slack channel.
A: All right, we're out of time. Is there anything else?
A: All right, okay, let's talk next week. Before that, I'll update the KEP. Thank you all. Thank you, Sid. Thank you, everyone.