From YouTube: Sig-Auth Bi-Weekly Meeting for 20230621
A
All right, hello everyone, welcome to the June 21st, 2023 meeting of SIG Auth. The recording is started. Let's take a look at the agenda. All right.
B
There, I can... So the gist of these numbers is that I collected them at various levels.
B
Basically, the baseline is just a single API server process with resources not being encrypted, and then making a bunch of requests that successfully create Kubernetes secrets of larger and larger size, up to approximately the maximum size that we allow for a Kubernetes secret. And I conducted them at two levels. One was...
A
B
...just regular REST semantics, so basically the equivalent of kubectl calls, and the other one was from the semantics of the internal registry. So basically I skipped a lot of the layers around authentication, authorization, and network stuff.
B
So it's basically just hitting etcd through the Kubernetes layers, the low-level storage implementation. And then I compared them all between not having encryption, having encryption using KMS v2 as it stands today in master (so 1.27-style), and then the proposed change with KMS v2 with the KDF-based approach. And basically, if I remember correctly, the CPU time doesn't meaningfully change at all in any of these. And, granted...
B
...this was all done on my M1 Mac, so it might be washed out by the fact that my machine is very fast; maybe in a cloud VM you might see something a little slower. But otherwise, if you look at, I think, that first one right there, you can kind of see the difference between having the KDF versus not, and I think...
B
...basically, if you zoom out far enough, it's on the order of a one percent kind of deal in terms of memory allocations, but otherwise it's not an overall big impact. However, if you zoom in closely enough and are comparing somewhat synthetic benchmarks, KMS is relatively expensive overall in comparison to not having encryption at all. I'm not exactly sure why, but it is an interesting number, to be fair.
B
So I wanted to put these out there and see if anyone had any concerns from a purely performance-characteristics standpoint or any of that stuff. I think, Mike, last time you said we were being a little too synthetic in our benchmarks by looking at them at a unit-test level. I think this sort of shows that probably no one's going to care; at least that's my gut feeling.
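For reference, here is a minimal, self-contained Go sketch of the kind of comparison being discussed: AES-GCM sealing of a secret-sized payload with a static key versus deriving a fresh per-operation key from a seed via HKDF. This is only an illustration of the shape of the benchmark; it is not the actual Kubernetes storage-layer benchmark or the real KMS v2 transformer code.

    // bench_test.go - illustrative micro-benchmark only.
    package kdfbench

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "crypto/sha256"
        "io"
        "testing"

        "golang.org/x/crypto/hkdf"
    )

    var seed = make([]byte, 32) // stand-in for the KMS-protected seed

    // sealWithKey encrypts plaintext with AES-256-GCM under the given key.
    func sealWithKey(key, plaintext []byte) {
        block, _ := aes.NewCipher(key)
        aead, _ := cipher.NewGCM(block)
        nonce := make([]byte, aead.NonceSize())
        io.ReadFull(rand.Reader, nonce)
        aead.Seal(nil, nonce, plaintext, nil)
    }

    func BenchmarkGCMStaticKey(b *testing.B) {
        plaintext := make([]byte, 1<<20) // ~1 MiB, near the secret size limit
        for i := 0; i < b.N; i++ {
            sealWithKey(seed, plaintext)
        }
    }

    func BenchmarkGCMWithHKDF(b *testing.B) {
        plaintext := make([]byte, 1<<20)
        for i := 0; i < b.N; i++ {
            // Derive a fresh per-object key from the seed plus random info,
            // roughly the shape of a KDF-based design.
            info := make([]byte, 32)
            io.ReadFull(rand.Reader, info)
            key := make([]byte, 32)
            io.ReadFull(hkdf.New(sha256.New, seed, nil, info), key)
            sealWithKey(key, plaintext)
        }
    }

Running something like go test -bench . -benchmem shows the relative CPU and allocation cost of the extra derivation.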
C
A
The one thing is putting the encrypted data in the object, which I think is separate from the KDF decision. But did you have any further thoughts on that, or whether you want to change it? Because that's a storage format change, I believe.
B
Yeah, so I didn't have any plans on not using the proto. We did have an open TODO to try to reuse some of the memory buffers. Today we just used the easiest way of doing proto marshaling, which will create new buffers and stuff, and we don't necessarily need to do that. There is a... I forget what it's called.
B
It's like a shared buffer construct that lets you amortize the cost of the proto decodes, so I think we plan on investigating that, just as a nice-to-have. But, for example, just implementing the KDF-based approach was much easier because I had a proper struct to deal with and not crazy bytes, like just a flat byte array that I'm trying to turn into a struct, effectively.
B
A
Okay, yeah, I did not know that Go proto had that capability. I thought it always copied byte fields.
B
I think it will always do the copy, but the place where it stores it doesn't have to be constantly reallocated; I think that's the general idea. So it's kind of like a sync.Pool. So I think you do have to do the copy, but today you have to do the copy and a new heap allocation to put the copy somewhere.
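The construct being half-remembered here may be something along these lines; as a hedged illustration only (not the code path Kubernetes actually uses), google.golang.org/protobuf's MarshalAppend combined with a sync.Pool lets repeated marshals reuse one backing array instead of allocating a new output slice every time:

    package main

    import (
        "fmt"
        "sync"

        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/types/known/wrapperspb"
    )

    // A pool of reusable scratch buffers, so repeated marshals amortize
    // allocations instead of creating a fresh slice on every call.
    var scratch = sync.Pool{
        New: func() any { b := make([]byte, 0, 4096); return &b },
    }

    func main() {
        msg := wrapperspb.Bytes(make([]byte, 1024)) // stand-in payload

        bp := scratch.Get().(*[]byte)
        // MarshalAppend writes into the existing capacity when possible,
        // rather than always allocating a new output slice.
        out, err := proto.MarshalOptions{}.MarshalAppend((*bp)[:0], msg)
        if err != nil {
            panic(err)
        }
        fmt.Println("marshaled", len(out), "bytes")

        *bp = out[:0] // keep the (possibly grown) backing array for reuse
        scratch.Put(bp)
    }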
B
A
Maybe the proto library doesn't own them, but I think we know that we own those byte slices in our code. So I guess that's the difference, because what we could do, if we really cared, is not copy...
C
A
It wouldn't have to be careful. I would say we read the proto struct from the prefix, the...
E
A
...information. I don't know, I would say the ship has sailed as soon as we went beta with v2. So, right.
A
E
A
For the Bud freeze.
A
B
You know, I think, I mean, I'm happy to make whatever sort of internal changes, and we can always roll them out over many releases. And if we felt the need to do work over a few releases and just delay GA, I think that's fine; I don't think that's a big issue. I think if we had to change the gRPC API between the API server and the plugin, that ship has royally sailed.
F
G
B
The operation is doing a bunch of creates and deletes of secrets.
B
Yes, it's so high because I probably wrote like 10 gigabytes of data to etcd and then deleted it all, right, yeah. So it is very much... I had to go update the integration test etcd parameters, because I was running out of space in etcd, and I was like, I'm going to cheat and give myself eight gigabytes, which is apparently the max that's in use. Of course.
G
B
So basically, if you look at a very narrow picture, and you ignore admission and authentication and authorization and all of those layers, and you're purely looking at the low-level storage of a Kubernetes secret of a significant size that's going to get put into etcd, versus a Kubernetes secret that's going to be put into etcd but with KMS encryption, then yeah, there is a place where you can say you have a 30% overhead.
G
B
You're still doing all the gRPC of the KMS API and all of those semantics, right; all of that's present. But only if you zoom in far enough. If you step back far enough and you're just looking at it from the outside, with all the layers working, you're down to barely perceivable, mostly depending on what bit you're looking at. Okay, and again, this was only ever looking at secrets, right; so, for example, there were no pods on this API server.
B
Okay, so the next one. Mike, I know you had some of the Google crypto folks review our proposed changes for the KDF approach for KMS.
B
The same thing will happen with the Microsoft crypto folks sometime next week, and David Eads said that he would also ask the Red Hat crypto folks to look at our proposed stuff, to make sure everybody's happy with what we're doing. I don't foresee issues from that, but one thing that I did think of since our last conversation was...
B
B
...whether there was actually a reason for us to generate a new seed on every API server start. Basically, today, in either design for KMS v2, the more you restart your API server, the more DEKs or seeds or whatever get into the system, so the more network calls you have to do on startup to refill your cache. Now, it's nothing like KMS v1, where you would be in the thousands; you'd probably realistically stay within the hundreds. But for the current design of KMS v2...
B
...we definitely have to generate a new DEK on startup, because there's no way for us to persist the counter state in a safe and sane way. That problem, I don't think, exists at all with the KDF design. So I was curious if folks thought it would be a good idea to just store the seed in its encrypted form on the API server's disk, basically in the exact same way that we do it in etcd, and all the same rotation semantics would still apply. So we would always be asking the plugin, hey...
B
B
It would only be when the plugin says, "hey, my key ID has changed," that we'd be like, cool, I have to make a new seed, encrypt it with your new key ID, and go from there. But it would mean that unless you did a key ID change on your plugin, if you had three API servers, then basically for the entire lifetime of those servers you would only have three seeds, meaning you'd make three network calls on startup of the servers. Or if you had five API servers, you'd do five. But either...
D
B
A
Yeah, it seems probably fine. The thing to recall is that the AES-GCM limitations are, I think, dictated by the NIST FIPS guidance, which says that the probability of nonce reuse is supposed to be 1 over 2 to the 32, maybe; I can't remember. So that we could...
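For context, the guidance being recalled here is likely NIST SP 800-38D, which requires the probability of repeating a nonce under a given key to stay below $2^{-32}$. A rough birthday-bound check, assuming random 96-bit nonces and $n$ encryptions under one key: $P_{\text{collision}} \approx \frac{n(n-1)}{2 \cdot 2^{96}} \le 2^{-32}$, which gives $n \lesssim 2^{32.5} \approx 6 \times 10^{9}$ encryptions per key.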
A
Yeah, we are almost certainly fine with the current nonce size. I guess I would maybe defer that to... I don't know. Is there any reason to do that now versus later?
B
So the thought process I had there was: the sooner we do it, maybe the sooner we can either require or decide on the semantics of whatever storage space we need co-located with the API server.
B
Like, you know, a file path or whatever on disk. But I was trying to think to myself: if you have an API server that is stable and not restarting, and we implement the KDF-based approach, then that internal seed is never going to rotate unless the remote plugin says the key ID has changed, right.
B
A
Convenient. It's like where we can share state across subsequent encryption operations, I think.
A
Further constraints that we are unwittingly now beholden to. But I think, from a purely technical perspective, it's probably fine, although maybe not necessary if we get the speedup that we're looking for, with tens of live keys instead of thousands.
B
Yeah, I mean, David pointed out that if you put it in etcd, that's a particular disk, which is a different disk than the API server's disk, conceptually at least. It might literally be the same disk depending on your deployment strategy, but conceptually they're different disks, which presumably means they could have different backup strategies and whatever else. I mean, they're both very critical; they both have access to data on disk in a way that's very important.
C
B
B
Like, it can never really hurt you as long as you write the code correctly, because the code would not fail in the cases where it doesn't have it; on first start it will never have it, so it has to tolerate that condition no matter what. So recovery is basically just: generate a new seed and encrypt it with the remote plugin, or...
B
...the plugin, which would encrypt it with the remote KMS. Yeah, that makes sense.
B
Yeah, I also don't want to implement this... like, once this thing is GA, I don't want to go around and change a bunch of things about a GA feature. Maybe I'm just being too cautious there; I don't know what people think about, or what exactly people have in mind, when they think of something as GA. Yeah.
A
I mean, you could introduce it with a feature flag. I think it would be fine to have the implementation of a GA API change for the better.
A
Yeah, I guess etcd would probably be the best spot to put it, either through an annotation, like we do in the control plane controllers, or like the kubernetes service endpoints controller in the API server, something like that.
B
A
Well, we have a system for maintaining the kubernetes service Endpoints object.
A
Calling just over loopback, I think that would be good. It also means that the encrypted keys are in the same spot as all the other encrypted keys, because the encrypted seed is going to be at the prefix of every one of these encrypted objects. So if we have somebody who has control over how their backups are done or where their backups go, they don't have to think about a new file or a new set of files.
B
B
Yeah, that's fair. I guess I'm more thinking about, like, if it is available through the REST API in the same way as a custom resource, then you have to make sure that people can't write to it or mess with it, right, because that doesn't make sense; it should not be updated by non-API-server clients, because it's like...
B
Maybe. I mean, yeah, maybe, right; you're like, yes, it is an authenticated encryption scheme, so we would detect it if you screwed with it. But I don't know if I want to do that. I don't know if you would want to make that observable through the API; I guess that's the question. But overall, I think your point is that if we put it in etcd somewhere, somehow, then we don't have to answer the question of "oh, now there are two disks."
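To illustrate the authenticated-encryption point in a minimal, self-contained way (this is not the Kubernetes transformer code): with AES-GCM, flipping any bit of the stored ciphertext makes decryption fail, so tampering is detected rather than silently accepted.

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
        "io"
    )

    func main() {
        key := make([]byte, 32)
        io.ReadFull(rand.Reader, key)

        block, _ := aes.NewCipher(key)
        aead, _ := cipher.NewGCM(block)

        nonce := make([]byte, aead.NonceSize())
        io.ReadFull(rand.Reader, nonce)
        ct := aead.Seal(nil, nonce, []byte("encrypted seed"), nil)

        // Simulate someone messing with the stored ciphertext.
        ct[0] ^= 0x01

        if _, err := aead.Open(nil, nonce, ct, nil); err != nil {
            // GCM authentication fails, so the modification is detected.
            fmt.Println("tamper detected:", err)
        }
    }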
A
A
B
Yeah, so in the encrypt-all test there is a specific if statement that says that when you encrypt everything, make sure you don't encrypt the master leases and a few other things, because if you do, you have no way to rotate them: there's no way to do a storage migration on those, because they don't have a REST API.
A
Okay, interesting. All right, yeah, maybe that is an option then, to use the master lease model.
B
B
Yeah, okay. So yeah, I think my gut says, regardless of whether it's part of this KEP or a different KEP, probably a separate feature gate that controls the functionality.
A
Yeah, that sounds good to me.
A
G
D
Okay, yeah. So I don't recall exactly when, but I think it was maybe a month or two ago we briefly talked about this, so for a lot of folks now: within the Policy working group, we've created an API for policy reports as one of the projects we're taking on, and it's fairly widely adopted and used within different, you know, scanners and policy engines, and we'll cover a list of some of the projects later.
D
But what we wanted to revisit and discuss is: what do we do next with this API? How do we promote it outside of the working group? Because somebody had asked the question, even at one of our KubeCon presentations, that the working group repo says it's for prototypes.
D
D
So, just a few details on the API, and then we can discuss any inputs into this. The links are in this deck; I won't go into all of the details. But very briefly, the API introduces two top-level objects, a cluster policy report and a policy report, and both of them have a summary and then results. Results are more like findings, so...
D
The idea is that any tool can use these to publish the things it finds in a consistent manner. It could be through admission controllers like Gatekeeper and Kyverno, or it could even be scanners and any other tool that wishes to report something back to the operators or admins or security teams. We'll talk a little bit about some of the concerns and lessons learned and challenges with this, but the structure is fairly simple, and this shows that it's been used.
D
Kyverno uses it, kube-bench uses it, Trivy also has an adapter. Now, Jaya is also on the call; she can jump in and talk a little bit about the use in Open Cluster Management and also within ACM, I think, from Red Hat, right, Jaya? That's correct. So yeah, I think the benefit here, which is fairly simple but powerful, is that any of these tools can now produce reports, and then external tools can start consuming them in a standard manner. And there are...
D
There are a few examples of that: both Red Hat ACM and the Nirmata policy manager, and there are other tools which have mapped these reports back into OSCAL and compliance formats, so they're providing higher-level continuous-compliance functionality on top. So that's how this API gets typically consumed.
D
Some of the challenges that we're seeing here are really with scaling, and the fact that, of course, depending on how many policies and resources you have, you could end up with a lot of reports, and keeping reports updated in real time also causes load on etcd.
D
D
So that is an open issue, and one of the things we have been talking about is: is there a way to offload these API objects from etcd, maybe either through aggregation, or I...
D
...guess we can't use the same mechanism as events, because that's reserved for built-in objects at the moment, but one option would potentially be API aggregation. And then the reports do get produced in two different styles across these tools. One is using the report more as a log, and it could be, like, the last N results within the cluster; and the other style is more as a status. So, adding some information to the report so the consumer knows: is this like the last thousand...
D
...events, or so on. eBPF tools, things like Falco, will produce events, right; it's not a status of all of the findings within the cluster, it's more of an event log. But other tools like Kyverno will produce something which is more of a status for every resource or namespace which the policies are applicable to. So having some information in the report for that, and then just some minor tidying and things like that, needs to be done. But those were, yeah...
D
Those are some of the issues that have come up, either from users or in the community. The major one is the scaling, right. So I'm interested in getting any thoughts and feedback on that. I think events were obviously very similar, where there are tons of events that get produced, and that does take a toll on etcd.
A
B
D
Yes, so two concerns. One is a lot of objects that may get created and deleted, you know, just for the policy results themselves. So one potential way of using a report like this, and Kyverno did this in its very early releases, is it would create one instance per violation, and of course, if you have several of these, that could quickly create a lot of objects for that particular type. But the other concern is the sizing of this data, right; it can take up...
D
...a chunk of space in etcd, and because of the 8GB limitation, that is also a concern.
F
This is Jaya. So I think what I would say is, typically the way we deal with such scale issues is we have an offline process that processes this and stores it off in more external storage for longer durations. So that's kind of the thought process; that's what I've seen done.
B
Okay. The other question I had here is: do you happen to have just some YAMLs of these APIs handy, so we could just see them, so I can remember what they look like?
D
Not right away, but yeah, we could definitely produce some.
D
Yeah, let me... I can share a link with some samples. But there is, here, one of these links that goes to... I don't think we have a YAML sample here, but it's the preview of the API itself, which shows what the object structures and things look like.
E
D
So that way, you know, since the working group is not supposed to own code or publish code... and I think what's been done for some of the others, it's not exactly the same, but other projects from working groups eventually get promoted somewhere, right. So the question is: what's the best path for this API?
B
B
So I assume something like that is what's warranted here, right, for...
A
Yeah, we would... I don't know if this is the case with the working group's policy API, but one attribute of the secret store was that there was motivation to standardize it because we had multiple independent groups building integrations that consumed it. Is that the case with these APIs?
D
Yeah, so here it's more different tools producing the same report in the same format, and then end users consuming this, the users being, of course, Kubernetes operators and admins, as well as other higher-level tools on the management side.
A
Yeah, so it started with a KEP. I think we had a KEP for the API, and then we decided to standardize it, and then we had API reviews, and then that's when we promoted it. So that was the process. I guess this would have to go through the same process, and yeah, a similar style of review.
D
B
D
A
Like, I guess, what is your goal? What is your goal: you want to have an API group namespace under sigs.k8s.io?
D
Yes, and to publish this somewhere where we can continue to maintain it in a more, sort of... again, for both consumers and producers of this, it becomes more of a supported standard than today, where it's in this prototypes repo.
A
Right, yeah. So I guess another important part of the KEP, other than just the API review, is the motivations for standardizing, and it would also be useful to demonstrate that this has broad adoption currently.
B
No, I... you know, I'm trusting you, Jim, that it seems like it's well on its way, if not already there, so I don't think that's a concern; it's more a matter of writing it down, right, in a sort of... yeah, exactly. The other thing, though, I do want to just remind you: once it does make it there...
E
B
...the reverse side of stabilization is that changing it later is generally very hard, yeah.
D
It's been fairly stable. We haven't made too many changes other than some cleanup, etc., and we will most likely add in these other items that I talked about, like the configuration and whether it's a log- or event-based report.
D
We can add those in prior to the KEP. But the main other kind of guidance I wanted, and I don't know all the right answers here, is for the scale concerns: with API aggregation, I think it's possible to then store in another etcd instance or any other database, right?
D
D
B
The sort of issue is, if you're going to go down the API aggregation route...
B
B
I forget the ordering; I'm pretty sure the aggregated API wins, I think. So that would be very confusing to a user, because the CRD could be there, but it's just no longer served by the kube API server, because it sees that there's a conflicting aggregated API service, probably at a higher priority, and then it says, cool, that thing owns it. So, from the perspective of the user, it would look like all their data disappeared, so they would have to remove the aggregated API to get back to the CRD data. So, like... so.
B
D
So it's not possible for the same API to be, based on the deployment, like different deployments, either aggregated or built in or, I guess, a CRD, yeah?
B
As far as I understand, the APIService object is limited at the group, or group-version, level; I might be wrong there. But if my memory is correct, that would mean the only way for the same API to be served by two different entities would be for them to be in different versions, but then they would also not be linked in any way, and it'd be really confusing, right, because the resource versions wouldn't match.
B
B
B
That would almost certainly give you a non-trivial performance benefit over the current JSON encoding that custom resources have, and obviously that would be way less work in your particular implementation, right, versus a ton of work in the kube API server to implement the proto support.
B
But, you know, if you have time and want to spend your efforts in a particular place, we could all sort of work together on that other shared goal too.
D
Okay, all right, yeah. So we probably need to think a little bit more about that and see how we want to propose it, or which direction we want to go in, because yeah, that's the main issue. That is, in some ways, what at least Gatekeeper is on the fence about adopting, because of some of the scaling concerns there. All right, so yeah. Mo, you had asked, for example... I remember we could produce one through a CLI.
D
So that's what this is showing: this is applying a single policy to a single resource with the CLI, but obviously other tools can produce similar policy reports too. It's fairly simple, right. The main thing is the results, and results have data like this, which tells you which resource passes, fails, skips, warns, etc., for which policy and who created it; and then there's a summary at the end, which is just showing the summarized result across all the rules in this report.
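For readers who want the rough shape without the slides, here is a hedged Go sketch of the report structure described above. Field names approximate the wgpolicyk8s.io/v1alpha2 types and are illustrative only; the wg-policy-prototypes repo has the authoritative definitions.

    // Approximate shape of the PolicyReport API discussed above; not authoritative.
    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // PolicyReport is the namespaced object; ClusterPolicyReport is its
    // cluster-scoped twin.
    type PolicyReport struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        // Results are the individual findings: which policy and rule were
        // evaluated, against which resources, and the outcome.
        Results []ReportResult `json:"results,omitempty"`

        // Summary rolls the results up into per-outcome counts.
        Summary ReportSummary `json:"summary,omitempty"`
    }

    type ReportResult struct {
        Source    string                   `json:"source,omitempty"` // producing tool
        Policy    string                   `json:"policy"`
        Rule      string                   `json:"rule,omitempty"`
        Result    string                   `json:"result,omitempty"` // pass|fail|warn|error|skip
        Message   string                   `json:"message,omitempty"`
        Resources []corev1.ObjectReference `json:"resources,omitempty"`
    }

    type ReportSummary struct {
        Pass  int `json:"pass"`
        Fail  int `json:"fail"`
        Warn  int `json:"warn"`
        Error int `json:"error"`
        Skip  int `json:"skip"`
    }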
A
D
Possible, right. And especially, what we've seen sometimes is... some folks have used this for vulnerability scans, which are noisy, and if you otherwise have jobs behaving badly, then you see a lot of these produced very quickly. So things like that we've encountered.
B
Yeah, I guess, as part of the KEP, there is the production readiness section that deals with a lot of the concerns around upgrades, but also scale issues: basically, how much data are you creating if you're adding new API objects, how big are they, and all that. So I think those questions will come up, and hopefully David and others might have ideas on how we could move forward.
E
D
So we'll take that on as a next step then, and at least draft up a KEP and put both these topics up for more feedback or discussion: whether, if the feedback is, yeah, maybe you go the aggregation route, we'll just then create that, before we proceed further with formalizing the API.
B
If you could just ping me whenever you have that open, I'll find the right people to add it to the API review project board, so that we can get someone assigned to it. That way we can start that process for you guys.
H
Yeah, this is mostly just kind of a brief discussion; I just wanted to get some feedback on an idea I think I brought up before at one point, about putting the node name in the projected service account tokens. I think an alternate route that could be really useful, and I can kind of describe the use case, would be optionally adding the requestor info in the service account JWT.
H
H
When you have a JWT that you want to present to an external service, say, in our use case, thinking about using it against AWS...
F
H
More specifically, we do get information like, obviously, the service account identity of the...
H
...cluster via the identity provider, the pod, and uniqueness, like the pod UID.
H
But it would be really great to have additional metadata about that pod, and specifically where it came from, so like what node it was assigned to; and the requestor information would be a way to add that kind of information, plus additional user extra info from whatever authenticator the node used.
H
...for us to basically have that information embedded in the token. The reason why is, when doing, in our case, a credential exchange with this JWT, I would love to be able to do an additional verification that validates that the service account coming in was assigned to a particular node, and, when that request comes in, validate that that pod is assigned to the correct node.
H
H
...assigned to, rather than some... you know, if the token gets stolen and some other identity tries to call a credential exchange API to get credentials.
G
C
H
Obviously, in any kind of credential exchange situation like this, you could do this other lookup of caching pods to nodes, or look up the pod UID and service account to the node for that cluster. But at scale that can be an expensive lookup, and it just has to be synchronous, right, or you have to have some cache that you're synchronously updating to get accurate information.
H
A
It would be similar to the secret and pod bindings that we have today, which might solve your problem. Another way to do it is to model it like a credential origin, where we have a list field and every time we create tokens we append to the list.
H
I haven't thought too much about it. I mean... again, this is even pre-KEP and pre-design, just thinking about the requirements. It'd be really nice to have a little bit of extra info in addition to the node name; potentially, in our case, say, if you have the user info extras from the authenticator, your authentication module could add additional key-value data that you might trust a little bit more than just the username.
H
It's just a little bit more metadata you can add, and a little more flexibility about adding it then.
E
D
B
So you have to look at multiple entries to sort of figure out the whole picture, whereas with impersonation it's all... yeah. So one way I thought something like this could be implemented is: if it's not just purely the user extra of the requester that's showing up, but also, if there's impersonation happening, that would also technically need to be present, right, because you could be an actor impersonating a different actor and then performing token requests. So, technically, there could be three identities involved in the whole system.
B
This is assuming we don't ever implement nested impersonation, in which case we would have arbitrary levels of any of this insanity. But that was just something I was thinking of. You are correct, though, Mike, that it's probably groups that are the most concerning, right: if you have 10,000 groups and we try to shove all 10,000 groups into the service account token, now your service account token payload might not even fit into a header anymore. Yeah.
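Rough arithmetic behind that worry, with assumed numbers: 10,000 group names at, say, 30 bytes each is about 300 KB of claims before signing, and base64url encoding inflates the JWT payload by roughly a factor of 4/3, so the token alone would run to several hundred kilobytes, while typical proxy and server header limits sit in the 8-64 KB range.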
B
B
A
Yeah, but I guess it's really big; it can get really big.
G
B
So, like, the payload sent to, like, a signer does.
G
B
Yeah, I mean, I guess it depends on what you're trying to protect against, right. Like, today, we don't protect against any of this at all; we just say: you make a token request for that service account token? That's awesome, here you go. Yeah, fun. I hope you have audit logs enabled for all token requests and an ability to track them really well. Yeah.
E
B
A
Well, so a requester can bind a token to API objects today, right: secrets and pods. All right, we could make kubelets request a node binding, where they bind the token to themselves.
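For reference on the existing binding mechanism mentioned here, a hedged client-go sketch of requesting a token bound to a Pod through the TokenRequest API (the audience, names, and UID are placeholders; a node binding as discussed would presumably look similar with a Node reference, but that part is hypothetical):

    package main

    import (
        "context"
        "fmt"

        authenticationv1 "k8s.io/api/authentication/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        expiry := int64(3600)
        tr := &authenticationv1.TokenRequest{
            Spec: authenticationv1.TokenRequestSpec{
                // Audience of the external service the token will be presented to.
                Audiences:         []string{"https://example.com/external-service"},
                ExpirationSeconds: &expiry,
                // Bind the token's validity to a specific object; secrets and
                // pods are what the API supports today.
                BoundObjectRef: &authenticationv1.BoundObjectReference{
                    Kind:       "Pod",
                    APIVersion: "v1",
                    Name:       "my-pod",
                    UID:        "00000000-0000-0000-0000-000000000000",
                },
            },
        }

        out, err := client.CoreV1().ServiceAccounts("default").
            CreateToken(context.TODO(), "my-serviceaccount", tr, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("token expires at:", out.Status.ExpirationTimestamp)
    }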
B
Okay, so that makes sense conceptually, in the idea of: okay, they would request that binding in the token request, and they would get it, and on validation, for authentication to the Kubernetes API server, we would validate...
B
...that that was the case and, I guess, that the node still exists, and I guess the node authorizer would check, but...
H
B
Yeah, I suppose that's true, at least within the semantics of how Kubernetes works; it's usually true, yeah.
H
B
It would be very strange if those didn't hold together. But I guess, at that point, does the binding really mean anything other than to allow someone who's doing the token exchange to...
G
Yeah, I think... like, our use case, while we were discussing this in GCP, was kind of similar to what she was saying: when we exchange that JWT for another token, we can map the node name, the Kubernetes node name, back to a VM name, and then we want to make sure that that VM is on the up and up.
B
So what's the information, though, here that you guys would need to make this useful?
G
H
G
What is the provider ID in this case, whatever the cloud provider...
G
A
A
A
That's where I think it shows up commonly for GKE, and the challenge with those today... are they, like, everywhere where we have that web needs a... what is it... podcast.
B
Yeah, okay. So I guess, if we step back for a second, regardless of how we would implement such a feature, is the general idea that we would want to make available, at the token exchange point, enough metadata in the token itself to allow, like, an infrastructure provider to make a decision without needing Kubernetes API access? Is that...
A
B
A
I think we have decided that this is useful. Let's figure out how it looks in a KEP, if that's something that you want to drive.