From YouTube: sig-auth bi-weekly meeting 20210217
A: Right, so to kick things off, just an announcement about the pod security policy breakout session. For now we're going to keep the same time as our recurring meeting; that's Wednesdays, 1 to 2 p.m., on the SIG Auth off week. So not today, but next week we'll have the next one of those sessions, and for now we're going to try and limit pod security policy discussion to that breakout session, just to be fair to the folks who are able to make that meeting.
A: All right, so first on the list of discussion topics: this was carried over from last week; sorry we didn't have time to get to it. Hopefully I'm pronouncing that right. Do you want to take the...
E: Sure. We've got an issue that we've been discussing, and we also sent it out, but the short of it is: an HTTP HEAD gets turned into a GET, and that's due to some of the auditing mechanisms in place within the API. Another one is that OPTIONS gets completely dropped, and that makes it very difficult to understand, via our auditing mechanisms, which operation IDs correspond to these particular ones.
E: When we hit HEAD and OPTIONS, it causes overlaps for us, because with the OpenAPI definition, in order to map it, we need to look at a path and we need to look at the HTTP verb. Our audit event has a field for the Kubernetes verb, but again, for these two particular methods it's not available there: it gets changed and dropped within the audit mechanisms, and those are the auth mechanisms.
E: So, the conformance group underneath SIG Architecture: we really depend on these audit logs to be accurate, and to be able to map them back to the endpoints that are being hit. I'm under the impression that this is due to transformations governed by SIG Auth; I just want to confirm that and get some guidance there.
A: So the parsing of request info from an incoming request is half SIG Auth and half SIG API Machinery; both of those groups use that information. SIG Auth uses it for authorization and for audit; SIG API Machinery uses it for routing and things like that. I had commented on the issue: mapping audit events back to the OpenAPI spec is not really an explicit goal of the audit log.
A: There are other dimensions that are also surfaced in the OpenAPI spec that are not included in audit, like all of the Accept parameters for the encoding that is returned to the user, right? So the user could request JSON or YAML or protobuf encodings, and that's listed in the OpenAPI spec, saying we can produce these things, but I don't think that gets included in the audit log. So, before we go too much further down the road, I think settling...
A: ...that question is a good one to start with. Where are the gaps around this verb? Is it just for the proxy subresource endpoints? Because those are exceptional in a lot of ways, and so I don't want to deep dive too far if it's really just one special-case subresource.
E: Stephen and Rhianna, back me up on my assumptions here if you can, but I do think that these are outliers and do behave differently, and if we want to try to make some workarounds on our side for these, we can take a look at that. Have you seen anything to the contrary of that, Stephen?
D: Oh, I haven't tried testing any of those scenarios, because they haven't been a specific option that I've seen in the spec. Okay.
E: If we search through the OpenAPI spec, HEAD and OPTIONS are only exported for these endpoints, so I wouldn't expect the API server to respond with valid routing based on that, but that may not be true. I don't know how the routing occurs, but based on its exported spec, I'd say it should get a 404 or something.
E: Our current workaround for these specific tests is: we look for the e2e HTTP user agent to match the test that's excluded, and then we also look at the form of the path, to include the specific verb, which should help us match those. I think if we have an empty Kubernetes verb, then we map that to OPTIONS, because that doesn't flow through. So those are our two exceptions, for HEAD.
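(For reference, a rough Python sketch of the matching logic just described: mapping a Kubernetes audit event back to the HTTP method the test actually sent. The `verb` and `userAgent` field names follow the audit Event schema, but the specific matching rules, including the `"head-test"` user-agent marker, are illustrative assumptions, not the real conformance-test code.)

```python
# Sketch of the workaround: recover the original HTTP method from an audit
# event, handling the two exceptions mentioned (HEAD collapsed into GET,
# and OPTIONS dropped so that the Kubernetes verb is empty).

def recover_http_method(event: dict) -> str:
    verb = event.get("verb", "")
    # Exception 1: an empty Kubernetes verb means the OPTIONS request was
    # dropped by the request-info parser, so map it back to OPTIONS.
    if verb == "":
        return "OPTIONS"
    # Exception 2: HEAD is collapsed into GET, so a "get" verb coming from a
    # test known (via a hypothetical user-agent marker) to send HEAD is HEAD.
    if verb == "get" and "head-test" in event.get("userAgent", ""):
        return "HEAD"
    # Otherwise the Kubernetes verb maps directly onto an HTTP method.
    direct = {"get": "GET", "list": "GET", "watch": "GET",
              "create": "POST", "update": "PUT", "patch": "PATCH",
              "delete": "DELETE", "deletecollection": "DELETE"}
    return direct.get(verb, verb.upper())

print(recover_http_method({"verb": ""}))                                   # OPTIONS
print(recover_http_method({"verb": "get", "userAgent": "e2e head-test"}))  # HEAD
print(recover_http_method({"verb": "patch"}))                              # PATCH
```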
A: Yeah, I think looking at the path, putting something in the path that you can recognize on the other end in the audit log, is a reasonable workaround. If that unblocks you, I think that's a good approach for you. For the auth folks...
A: I think it's weird for something with an empty verb to show up in the audit log, so that doesn't seem right, and I wouldn't...
A: I wouldn't try to disentangle the HEAD-and-GET collapsing to GET; that seems okay. Does anyone have thoughts on what we should do with other verbs, other HTTP methods that show up, so that we don't stick empty data into the audit log?
A: So I would probably decouple the issues. The HEAD-to-GET collapsing, I think, is working as intended. The other unknown HTTP methods showing up as empty in authorization and audit is probably not working as expected, and we can discuss what to do to fix that, probably in the issue.
A: Yeah, I'll summarize and circle back to the issue on that.
A: All right, next up on the agenda, we have ephemeral containers. This is Lee's.
A: So, for context for everyone else: this came up while we were discussing some options around the pod security policy replacement, and asking what the expectations are for handling ephemeral containers, especially in the context of policy exceptions. The general expectation for ephemeral debug containers, especially with the change to add security context in 1.21, is that...
A: ...it may be desirable to run a pod with least privilege for production use, and then be able to have elevated privileges in a debug container. So, for example, you might forbid CAP_NET_RAW in a production container that doesn't need it, to limit attack surface, but CAP_NET_RAW is a common requirement for certain types of network debugging, such as ping and traceroute.
A: Now, when it comes to admission and security-policy-type constraints on these, I believe the way it currently works (maybe someone can correct me if I'm wrong) is that ephemeral containers are edited through the ephemeralcontainers subresource on the pod. That generates an admission request, which I think is a list of ephemeral containers; it doesn't have the full pod spec on it, which is a problem for enforcing, for instance, pod security policy.
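(To make the gap concrete, here is a simplified sketch of the shape of such an admission request. The `subResource` and `object` field names follow the AdmissionReview API, but the payload is a deliberately stripped-down, illustrative Python structure, not a real serialized request.)

```python
# For the ephemeralcontainers subresource, the webhook sees only the list of
# ephemeral containers, not the pod's spec, so a policy engine cannot tell
# whether pod-level restrictions (securityContext, runtimeClassName) apply.

subresource_review = {
    "kind": "Pod",
    "subResource": "ephemeralcontainers",
    "object": {  # an EphemeralContainers-style list object, not the full Pod
        "ephemeralContainers": [
            {"name": "debugger", "image": "busybox",
             "securityContext": {"capabilities": {"add": ["NET_RAW"]}}},
        ],
    },
}

def pod_security_context(review: dict):
    # A policy plugin looking for the pod-level securityContext finds nothing,
    # because the request object carries no pod spec at all.
    return review["object"].get("spec", {}).get("securityContext")

print(pod_security_context(subresource_review))  # None: pod context is absent
```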
H: That was all correct. Maybe just to add some context: back when we designed this, we were planning on using pod security policy to provide some knobs for tuning here. That's clearly not the way forward anymore, and I'm not sure of the right thing to do in its place. And then, yeah, we were planning on going to beta this release, and I think anything as big as changing how admission control works means that we should not go to beta.
A: Yeah, I actually kind of ran across this by happenstance, as I was playing around with different admission stuff, and realized that if all you have at admission time is the container, which is the current API of the ephemeral container subresource addition, you don't actually have enough information to know if the container is locked down. Because if the top-level pod security context was locked down, then it inherits that restriction; if it's not, then you don't know. I mean, one...
A: So maybe it would be okay to say that admission should expect the ephemeral containers to explicitly be setting their runAsUser or their allowPrivilegeEscalation or whatever those things are. But something that is trying to make the ephemeral containers match something about the pod is going to have a really hard time of it; they're going to have to do this out-of-band lookup of the existing pod, given the current API.
A: A good example might be runtime class, which is only specified on the pod. So if I want to say that ephemeral containers are totally unrestricted in a kata containers pod... that is an excellent example, Tim.
G: Right, because our subresources, for the most part, are just controlled views into... I mean, I guess proxy and stuff is magical, but a lot of them are controlled views into the whole object, with particular validation or other stuff, to give you a constrained API. But even with, like, status being the canonical one, I get the whole object, right? I would expect the whole object in this one.
A: Where we actually have dedicated APIs, the examples I can think of are scale and pod binding. Those are the ones where we're basically saying you can only touch one field on the object, and there we don't make you echo the whole object and then ignore everything but that one field; we let you have a dedicated scale or binding object. So that's why we went this way: it's like, oh, it's an append-only API, well then, just give us the containers you want to append. Awesome. But the enforcement context...
G: I mean, sorry, I was just going to say: does the input from the user have to perfectly match what admission is then presented with? In the sense that, even if the user is adding just items to this list, append-only, must we then turn around to admission... because we could tell admission: here's a pod update; look, here's the whole pod, here's the thing we just added; are you good with that?
A: So, funny story: there was actually an admission bug around the feature. Admission really, really expects the type that it's handed to match the type of the request, like if I register an admission plugin for the pod's ephemeralcontainers subresource...
A: Wild idea, but what if we added a sort of additional context to the admission API? So, you know, the main type in the request is still the same, but I also get, in the additional list of context things, the full pod.
H: So, one additional thing that's different about ephemeral containers is that we do have additional fields available, above what's in Container. For example, we added a field to allow namespace targeting, and if it helped us here, we could add a new field to EphemeralContainer itself. We don't have a requirement to...
A: Yeah, but even those are exactly the type of thing that some of the policy levels we have have opinions about, right? So, like, adding non-default capabilities is explicitly part of the baseline and restricted pod security standards, and, like Tim pointed out, that might be okay depending on the runtime, or it might not, depending on the runtime.
A: So I would hesitate to special-case pod-level fields like runtime or security context, and sort of shoehorn them into fields that we put into ephemeral containers just to communicate down to admission.
A: I'm trying to remember, Lee: long ago, did you look at sending the whole pod to the subresource, or did we discuss it and decide on this approach before you started prototyping stuff?
A: Another possibility: there's actually an existing issue around writes to subresources not being seen by admission plugins that only have opinions about the whole pod, that only have opinions about the whole resource, and the example was scaling.
A: Like, if I want to scale something from one to two, an admission plugin that is only intercepting the scale subresource might not have the context it needs to make its decision, and so it's actually a very similar issue to this. Maybe it would say, you can't scale this to two, because the resources you're requesting exceed your resource allocation, or whatever; but the scale subresource doesn't have all of the other stuff in the main object.
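(A small sketch of the scale example just given: the Scale object carries only the replica count, so a quota-style decision needs an out-of-band lookup of the parent object. The `Scale` shape follows autoscaling/v1; the quota check itself and its field names are hypothetical.)

```python
# The per-replica resource request is NOT in the Scale object; it has to be
# fetched from the parent workload (the side-channel lookup being discussed).

scale_update = {
    "kind": "Scale",
    "metadata": {"name": "web", "namespace": "prod"},
    "spec": {"replicas": 2},
}

def cpu_request_after_scale(scale: dict, deployments: dict) -> str:
    key = (scale["metadata"]["namespace"], scale["metadata"]["name"])
    parent = deployments[key]  # hypothetical out-of-band cache of workloads
    per_replica = parent["cpu_request_millicores"]
    return f"{scale['spec']['replicas'] * per_replica}m"

deployments = {("prod", "web"): {"cpu_request_millicores": 500}}
print(cpu_request_after_scale(scale_update, deployments))  # 1000m
```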
A: So it can't; it would have to do this side-channel lookup. One of the proposed solutions there is to let admission on the top-level resource be sent the update that would result as part of a subresource write. That is very early in the proposal stage, nowhere near complete, but it seems like the same category of...
A: ...I'm setting ephemeral containers, and it passes through the ephemeral containers admission plugins and they all say thumbs up. Well, under the covers, the pod itself is actually being updated, and so it conceptually could then give the old and new pod to admission plugins that wanted to see all updates to pods, no matter the source, and say: an update to a pod is going to be made, and here's the old one, here's the new one.
A: Like I said, it's identifying a gap, and it's an early proposal, and there are weird ordering details to work out, and it would have to be opt-in; it would have to be something where admission plugins would have to say, I want to see all updates to this, no matter the source, instead of routing subresource updates to existing admission. I hesitate to come up with a specific solution to this issue...
A: ...if there's a more general problem being considered. I think the two options we have here are: one, change the API to pass the whole pod, so we sidestep it and say, well, the ephemeral container API is "here's the whole pod, and we ignore everything but this list." That's one possibility.
B: That would work; like, today you could cache all the namespaces.
H: I tend to think that just using the entire pod is both the most realistic and, I don't know, a cleaner API, considering the amount of changes that ephemeral containers make.
A: You can't... can you append? You can't set or append in a patch? That's really unfortunate. I was going to say, if you could submit something that says, add this, I want to run this container, like, add this container to the end of the ephemeral containers list: that would be the ideal scenario as a client, because you could get what you have today, where it's like...
A: ...I don't actually care what's there; I just want to run this thing, like, tack this onto the end and run it. But patch can't express "set this to a single-item list if the list isn't present" or "append this item to the end of the list."
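(To illustrate the limitation just described, here is a minimal implementation of JSON merge patch, RFC 7386, the simplest of the patch formats the API server accepts. A merge patch replaces lists wholesale, so there is no way to say "append this container" without echoing the whole existing list; this is a sketch of the semantics, not Kubernetes code.)

```python
# Minimal JSON merge patch (RFC 7386): objects are merged key by key,
# everything else (including lists) replaces the target outright.

def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch  # non-objects, including lists, replace the target
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

pod = {"spec": {"ephemeralContainers": [{"name": "debug-1"}]}}
patch = {"spec": {"ephemeralContainers": [{"name": "debug-2"}]}}

# The existing "debug-1" entry is lost: the patched list is exactly the
# patch's list, not an append.
print(merge_patch(pod, patch)["spec"]["ephemeralContainers"])
# [{'name': 'debug-2'}]
```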
A: So, by default, clusters don't have this feature enabled; the endpoint isn't enabled, and you cannot GET from it. So the only clients making use of it are using it on alpha clusters, and they are using an alpha feature. If we change the type that it returns, then those clients would break, which would mean you would have to upgrade them to a level that matched, like, the next alpha; or, if it became beta, you'd have to upgrade those clients to match.
A: I think that's in bounds for what we claim for alpha. It's not actually that much worse than just switching to a different endpoint: those clients wouldn't get a deserialization error; they would just get a 404, not-found error if we switched to a different endpoint. So I don't know that we actually make life better for those clients.
A: I think we should fill out the notes from the discussion and talk about the options that we saw: lean on API machinery to figure out how to make an admission plugin see all updates, even via subresources, which doesn't actually seem likely to happen anytime soon; or change the API; or make admission fetch pod context out of band, with examples of what the implications would be for things like Gatekeeper. Then, in terms of investigation, maybe see what it would look like to switch it to the full pod, and verify we haven't missed any implications for old clients.
H: Okay, I'll look into those things. One final question: are there any objections to the current API if there is no security context?
A: Locking ourselves to an API that we know has difficulty in letting you flip the bits that make debug containers most useful seems like something we'll regret.
A: Take it away.

I: Yeah, so I wanted to bring back to life, a little bit, the discussion on private claims in projected tokens.
I: This is an older issue that Mike had originally opened, and there was discussion about, like, how do we do this. My understanding is basically that, okay, yes, we technically could add some sort of private claims in an API, but the issues are around, you know, validation of those claims, the structure of those claims, and... I guess it's a very open question what that looks like, just because the API server is minting these tokens as sort of authoritative for external parties, and those external parties need to rely on and trust that those claims are accurate and correct.
I: The reason why I'm very interested in this is that there are certain AWS features that would tie really well into this. AWS supports basically adding what we call session tags. So when you assume a role (you can use a Kubernetes token to assume an AWS role), those tags can get propagated as context into your role session. So I can write AWS policy to say, if this was supported: this role session can only write to this S3 bucket where the path prefix is the pod name, or the path prefix is the namespace name, or whatever we put in the session context.
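(To make the idea concrete, a sketch of what such a token payload might look like: the standard Kubernetes claims plus a separately namespaced block of provider-specific tags. The `https://aws.amazon.com/tags` / `principal_tags` keys mirror the AWS OIDC session-tags convention; the exact claim names and structure here are hypothetical, not an agreed Kubernetes API.)

```python
# Illustrative claims for a projected token carrying private claims: the
# provider-namespaced block just restates data the API server already asserts.

claims = {
    "iss": "https://kubernetes.default.svc",
    "sub": "system:serviceaccount:prod:uploader",
    "kubernetes.io": {
        "namespace": "prod",
        "pod": {"name": "uploader-7d4f9"},
    },
    # Hypothetical private-claims block, shaped like AWS session tags.
    "https://aws.amazon.com/tags": {
        "principal_tags": {
            "namespace": ["prod"],
            "pod_name": ["uploader-7d4f9"],
        },
    },
}

# A bucket policy could then scope writes to a prefix derived from the tags,
# e.g. allowing only "s3://bucket/prod/uploader-7d4f9/*".
tags = claims["https://aws.amazon.com/tags"]["principal_tags"]
prefix = "{}/{}".format(tags["namespace"][0], tags["pod_name"][0])
print(prefix)  # prod/uploader-7d4f9
```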
I: One other solution that I thought about, kind of in addition to the discussion here, was: could this be folded into the proposed external signing API? That does limit the sort of usability of private claims, but it does let an external signer determine whatever structure or claims they want, and it's the external signer, at the end of the day, creating this token, so they are sort of the authoritative place to say what claims are in the token. It gets it out of the Kubernetes API, which is less flexible for Kubernetes API consumers; but if you want to run your own external signer, you can add whatever claims you want there. Yeah, I guess I'd kind of like the people who were part of this previous discussion to... I'm curious about your thoughts.
F: Yeah, I think this was one of the possible features that motivated going with a generic token signing... or, going with an API structure specific to token signing, versus doing, like, a generic signing API, which I know is discussed in the KEP. Yeah, I think...
F: I think that there are valuable use cases for this. I also think that just delegating to an external signer is kind of the most flexible escape hatch. I'm not super worried about the barrier, because I suspect that the people who have use cases for this are pretty deep in the weeds already.
F: I think, assuming we can get the external signing KEP in, this seems like a very doable addition.
G: You know, I can attach my service account tokens to the pods, in secrets, as anyone would expect, and, you know, that stuff is funneled through to the Kubernetes APIs. This bar seems incredibly high, right? I don't know; I don't think it's necessarily that strange to want to inject some extra data.
G: The other bit there is also... because, as you mentioned, Micah, this is trusted, right? Like, anything you get from the kube API server, you're going to assume it's valid, because that's, like, the point, right? And I would feel kind of strange... imagine, like, a sub-map within the object that was literally, like, untrusted_user_data or something like that, right? Like, why is there untrusted data in my signed JWT? Isn't the point that all the data is trusted?
G: It reminds me of the email_verified claim in OIDC; to me it's, like, nonsensical. I trust my OIDC provider, so either give me a verified email or don't give me any data. Like, you go verify it, please; that's why I have you as my ID provider.
G: So it doesn't feel right to me to do it sort of in disregard of that. Like, if I think about the closest extension point we have to, like, the external signer today, that's already there, it's the KMS code, right? And for the most part, the KMS code can't do anything that fancy, right? It just encrypts a data encryption key and hands it back to you; it can't do much else. It doesn't even control the encryption itself: the API server actually does the final encryption.
G: It's just that the API server is handing you the key that it used and asking you to give it an encrypted copy back, so that way it can get it back, right? But it's, like, a purposely limited API, to both limit the surface area of the external entity that's attached to the API server, and also to retain all the control within the API server.
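(A toy sketch of the envelope pattern just described: the API server generates and uses the data-encryption key itself, and the external KMS plugin only wraps that key, so its surface area stays small. XOR stands in for real encryption purely to keep the example self-contained; the real plugin speaks a gRPC API and uses real ciphers.)

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real cipher; XOR with a repeating key is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyKMSPlugin:
    """Only ever sees the DEK, never the secret payload or its ciphertext."""
    def __init__(self):
        self._kek = os.urandom(32)  # key-encryption key, held by the KMS
    def wrap(self, dek: bytes) -> bytes:
        return xor(dek, self._kek)
    def unwrap(self, wrapped: bytes) -> bytes:
        return xor(wrapped, self._kek)

kms = ToyKMSPlugin()
secret = b"top-secret"
dek = os.urandom(32)                  # the API server generates the DEK
ciphertext = xor(secret, dek)         # the API server does the encryption
stored = (kms.wrap(dek), ciphertext)  # only the wrapped DEK goes to the KMS

# Decryption path: unwrap the DEK via the plugin, then decrypt locally.
wrapped_dek, ct = stored
assert xor(ct, kms.unwrap(wrapped_dek)) == secret
print("round trip ok")
```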
F: Yeah, if I recall correctly, I think Clayton had a crazy idea about using an admission review for this. But I'd have to dig it up; I can't remember. No...
J: ...plugin policy. Maybe we'd allow some... maybe we'd do the, like, "enclose them in an unknown, not necessarily fully trusted, block." It was a straw man to start discussion.
I: The difficulty there, too, is in structure, right? Like, this is going to be serialized into the token, but it could be arbitrary structure for the claims.
G: So I still think it's at the same level as how much you trust KMS, right? You're not telling KMS, "here's some data, encrypt it for me and give it back to me"; you're handing it the key that you used for encryption. In this regard, you're saying, "here are some claims, I would really like these signed, thank you," but I control the data as sent to you.
F: Right; I think the difference is that the KMS plugin does not know what the ciphertext is. The thing that is supporting the signing API can feel free to sign random service account tokens in advance; they're consistent in structure.
F: So, yeah, I see the thing that can do the signing as very, very root, if we're considering whether this is, like...
A: If we open it up to something under user control, or API surface, then I think it is increasing the threat model, because you're assuming that the right things are in place to make sure that bad values don't get fed to the signer to stamp out.
F: Yeah, I agree. I agree with that. I don't know, Micah: what were you going to put in there, and what is the source of what you're going to put in there? Yeah.
I: The information that I really, minimally, care about and would want in there is basically what's already in the Kubernetes claims, but just in a different structure, under a different namespace. Like, the AWS docs have, like, the aws.amazon.com tags, and it's...
G: Okay, so as a thought exercise: if you step back and say you control the signer, you have the ability to sign JWTs that look exactly like service account tokens, with basically arbitrary claims, like, out of band. You can do this, right?
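(A stdlib-only sketch of that point: anyone holding the signing key can mint a structurally valid token with arbitrary claims, out of band. Real service account tokens are RS256/ES256 JWTs issued by the API server; HS256 is used here only to keep the example self-contained, and the claim values are made up.)

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT-style base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

key = b"signer-private-key"  # whoever holds this controls what gets signed
token = sign_jwt({"iss": "https://kubernetes.default.svc",
                  "sub": "system:serviceaccount:dev:debug",
                  "made-up-claim": "anything-at-all"}, key)

# A structurally valid three-part JWT carrying a completely arbitrary claim.
print(token.count("."))  # 2
```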
G: I mean, yes; I guess you would need, like, a custom CSI plugin to do the mounting aspect and all that stuff. I was mostly just thinking about... the idea is, if you're saying that, hey, I control the signer, I just, you know, want to, and I can go sign stuff if I want. It's true, and thus you could also sign it in the format that you need it; you just need to present an API to the actors that care.
I: Yeah, the other benefit of, like, using TokenRequest is that, I think, with the node authorizer, you get that sorted for free, right? I guess maybe CSI might give you that restriction too, but, yeah, you don't have to.
A: I would not expect that to be sufficient to inject arbitrary claims into tokens. Like, if I'm cluster admin on my toy dev cluster, the idea that I can request a custom claim that gets, like, privileges in some other system: that seems like just asking for, like, scope...
J: ...escalations, yeah. Can we divide it into two cases? Like, one is: the provider that owns the signer gets to do some subset that they're comfortable with. The other use case is: something wants to request some claims, and we're not going to fully trust it. Could we make that work with: they go in a different block, and they can't be arbitrary? It's, like, well communicated that they're not quite trusted all the way.
F: But, yeah, maybe we should bump this and mull it over offline, or in the next SIG meeting. I'm...