From YouTube: Kubernetes SIG Auth 2022-01-05
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 2022-01-05
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/preview
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A: All right, hello. Everyone, welcome to the January 5th, 2022 meeting of SIG Auth. Let's get started. It looks like we have a few items on the agenda. Let's start with announcements.
A: This is a reminder that KEPs targeting 1.24 should be ready for review by mid-January. I know there are a few KEPs being worked on right now. Feature freeze is expected by the end of the month, so for anything you want to get in, please be sure to make progress towards approval in the next two weeks. All right, let's get started. Any other announcements before I move on?
A: All right, going into our discussion topics. The first one is KMS observability related questions.
A: ...the lifespan of the request. So if I submit a request to the API server, that audit ID is going to be used for the lifetime of that request, but there can be multiple invocations of admission during the handling of that request. For example, if it's a patch request and there's a conflict going to etcd, it'll actually reapply the patch and retry admission internally multiple times. So it's not a one-to-one request-to-admission relationship.
B: Okay, that makes sense. I guess what I would ask is: has there been any request to help correlate Kubernetes audit logs with invocations of, like, admission webhooks?
C: I'm not sure. The admission webhook can return information that gets included in the audit logs: admission webhooks can return audit annotations, and those get included. Are you asking about correlating the audit log with webhook invocations, or requests with webhook invocations?
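(For reference, the mechanism C describes is the `auditAnnotations` field of the `admission.k8s.io/v1` AdmissionReview response, which the API server copies into the audit event for the request. A minimal sketch of such a response body, built as a plain dict in Python; the annotation key and value here are invented for illustration:)

```python
import json

def admission_review_response(request_uid: str) -> dict:
    """Sketch of an admission webhook response whose auditAnnotations
    the kube-apiserver records in the audit event for the request.
    The annotation key and value below are made up for illustration."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request_uid,  # must echo the uid from the request
            "allowed": True,
            # The apiserver prefixes each key with the webhook's name
            # before it lands in the audit event's annotations.
            "auditAnnotations": {
                "decision-reason": "allowed by example policy",
            },
        },
    }

body = admission_review_response("705ab4f5-6393-11e8-b7cc-42010a800002")
print(json.dumps(body, indent=2))
```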
B: I guess really the audit logs. The reason this question came up at all is that we were trying to figure out exactly what the correct, or currently preferred, way is of including some kind of metadata through API flows to make it clear what caused something to happen. In this particular case with KMS: why is the KMS plugin being invoked?
B: ...why the KMS plugin was invoked. And it's okay if the KMS plugin got invoked three times or whatever because of internal retries, but I want to be able to say: this is the reason. In our conversations around this, we also saw that there's some brand-new tracing code with the OpenTelemetry stuff, and that looked interesting.
B: What should we do? And if it's the Kubernetes audit ID, then how can we make it more reliable and not under user control?
E: The tracing is something I also didn't look deeply into; Daniel did the review of that giant PR that merged for the alpha.
C: I think it hooks into the context, and anywhere the context gets passed, I think it will propagate.
E: I mean, yes, you are technically right, Jordan. For the purposes of this, though, you can't reuse the same audit ID twice, or you no longer know whether it's responding to the first attempt or the second. And when you do something like a mutating admission webhook, where you are intentionally calling it twice, it's important to be sure you got the correct one the second time.
C: I'm not sure the tracing stuff is exactly what you want, since tracing, my understanding is, tends to operate on a subset, a sample, like one out of every X percent of requests. It sounds like what you're wanting is a much more deterministic, ironclad "this happened because of that" chain that you can follow from one thing to another.
B: I guess what I would ask there is: say I am a KMS plugin maintainer, author, or the production person, and I do see invocations of my KMS plugin that have some IDs associated with them; say we did exactly what admission does and just generated a unique UUID per invocation of the KMS plugin or something.
B: Would we want, for example, to try to somehow plumb that back into the audit event using audit annotations? That way, if you wanted to correlate those things, you could, in a way that can't be spoofed by the user.
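(A minimal sketch of what B is proposing here; note this is purely hypothetical, since nothing in the KMS plugin interface works this way today. The idea is that the API server would mint a UUID per plugin invocation, hand it to the plugin for its own logs, and record the same UUID as an audit annotation so the two logs can be joined without trusting user-supplied data:)

```python
import uuid

def invoke_kms_plugin(plugin_log: list, audit_annotations: dict, payload: bytes) -> bytes:
    """Hypothetical scheme: one server-generated UUID per KMS plugin
    invocation, written both to the plugin's log and to the request's
    audit annotations, so the two records can be correlated later."""
    invocation_id = str(uuid.uuid4())  # server-side; not settable by the user
    plugin_log.append((invocation_id, "encrypt"))
    audit_annotations["kms.invocation/" + invocation_id] = "encrypt"
    return payload[::-1]  # stand-in for the real encrypt round-trip

plugin_log, audit_annotations = [], {}
invoke_kms_plugin(plugin_log, audit_annotations, b"secret-bytes")
invoke_kms_plugin(plugin_log, audit_annotations, b"secret-bytes")  # internal retry
# Two invocations yield two distinct IDs, each visible in both logs.
```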
B: Yeah, so I guess then what I would ask is: if we're saying tracing doesn't have the right semantics, I saw the audit-ID chain issue that was opened, and the crux of it is that the audit ID can be controlled by an end user just by setting the header.
B: I guess what I wanted to ask is: why do we allow them to do that, and can we not allow them to do that, so that we can trust it? How would you...
B: Yeah, the audit-ID chain. But if it's related to that, that seems also suspect, because the connections to aggregated API servers have strong mTLS associated with them. So you should just piggyback off of that and use it to create some kind of trust anchor, not just arbitrarily trust it.
C: Okay, so it sounds like the questions are around nailing down whether we want the audit ID to be able to be dictated the way it is today. That's one question. If that becomes more trustworthy, the next question would be: would we want to plumb that down to the storage layer and make use of it there for something like this? And if not, then what would we want to use as an alternative to allow correlating stuff at the storage layer with stuff in the audit log?
G: I would also say that we do expect a lot of what the KMS plugin is expected, or asked, to do by the API server to be done by system operations and the loopback client. Definitely most read operations to the kube-apiserver are not going to be associatable with anything that we would put in an audit log.
G: So for something that is served from a cache in the kube-apiserver, we would not expect every read request to, say, a secret to actually initiate a KMS plugin operation. Primarily, the initial load would be done by the loopback client in the kube-apiserver, and then subsequent reads would be done from memory and not require interaction with the KMS plugin. So there will be things in the audit log, reads of secrets, that do not require interaction with the KMS plugin, and vice versa: there will be things in the KMS plugin's log that did not produce an audit event.
C: A live get goes to etcd. You can opt into a cached get that will hit the watch cache if it is populated. But even apart from hitting the watch cache, if you do a live get from etcd and the server loads the encrypted data, and the data-encrypting key is still in the cache, it will use that without talking to the KMS again. So the server has the ability to cache data-encrypting keys.
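(C's point about the data-encrypting-key cache can be sketched as follows. This is a toy model, not the actual kube-apiserver code, but it shows why repeated reads of the same encrypted object need not reach the KMS plugin:)

```python
class DEKCache:
    """Toy model of the apiserver's data-encrypting-key (DEK) cache in
    front of a KMS plugin: only the first decrypt of a given encrypted
    DEK calls out to KMS; later reads reuse the cached plaintext DEK."""

    def __init__(self, kms_decrypt):
        self.kms_decrypt = kms_decrypt  # the (remote) KMS plugin call
        self.cache = {}
        self.kms_calls = 0

    def get_dek(self, encrypted_dek: bytes) -> bytes:
        if encrypted_dek not in self.cache:
            self.kms_calls += 1  # only cache misses reach the plugin
            self.cache[encrypted_dek] = self.kms_decrypt(encrypted_dek)
        return self.cache[encrypted_dek]

# Stand-in "KMS": reverses the blob instead of doing real decryption.
cache = DEKCache(kms_decrypt=lambda blob: blob[::-1])
cache.get_dek(b"enc-dek-1")  # first read of this secret: hits "KMS"
cache.get_dek(b"enc-dek-1")  # served from the cache, no KMS call
```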
B: I think that one is okay. I think what we were looking for is: when the KMS plugin is invoked, can we figure out why? It does seem like for live operations using kubectl, like a normal end user would, that's fine. It's only the internal stuff within the API server, which the loopback client is doing to fill caches or whatever, that might hit KMS, but...
C: When the server starts, if it fills its watch cache with all secrets, which it does, then at that point it's going to request all of the data-encryption keys. So the first, and most likely, requests to KMS are not going to be associated with particular requests; when the server starts up, it'll load all of those. On subsequent writes, when it's hitting KMS to encrypt, those will be correlated with the right requests anyway. And we're 20 minutes in, and we probably want to wrap this up.
B: Did you guys have anything else you want to ask? I do see we have two other items on the agenda.
D: Now, with it reading from the cache, I think that is where the audit-ID chain might help, where we can correlate from the request all the way to the end.
D: But otherwise, I think just generating the UID right at the API server, before we make the request to the KMS plugin, seems like the right fit for now, since that is what we initially want to correlate. Then maybe we can expand from there and see if we also want to plumb it into the audit log.
B: Maybe; I don't know. Just thinking through the stuff that Mike and Jordan have brought up, I'm questioning how useful it would be. I want to make sure that before building a bunch of plumbing, it actually is valuable plumbing. That's fine; we can keep thinking through this. Yeah, I think we're ready to move to the next item.
A: Okay, James, do you want to talk about the 1.24 release team stuff?
H: Yeah, hello. Sorry, can people hear me? Cool. So my name is James Laverack, and I'm the release lead for Kubernetes 1.24. I just wanted to take the time to come say hello and introduce myself to you.
H: The release as a whole starts on Monday, so we should start accepting KEPs for tracking then. Our current draft of the full schedule, which we hope to finalize by the end of the week, has enhancements freeze in week four, which is on Thursday, the 3rd of February, with the production readiness soft freeze the week before, on Thursday the 27th of January.
H: Nothing has changed for 1.24. Everyone hates the tracking spreadsheet; don't worry, we all want to get rid of it. It's just that what we replace it with is a larger question. There is an effort within SIG Release to get that changed, but nothing's changing for 1.24; it's the same as it was before.
C: We have, like, a small single-digit number of things we need to add to it, so it's not terrible. I just wanted to know where we should be putting things in the next week or two.
A: Great, thank you, James. Adam, would you like to talk about the CSI ephemeral volume pod security admission control doc?
I: Sure. So, speaking of KEPs, this is a draft KEP that was discussed, I think in SIG Auth, before the holiday break. Just as a quick refresher, the general idea, and this is something that would most likely be sponsored by SIG Storage, is that we would be adding an admission plugin which allows CSI drivers that provide the ephemeral volume capability to sort of declare themselves safe for a given pod security profile, using either an annotation or a label.
E: To answer the question, he very quickly provided examples; I believe it was Azure, for instance, as one that was not safe. I'm not sure whether Jordan ended up creating the alternative list of all the safe ones.
C: So I think my posture is still that I don't want to encourage people to create unsafe ephemeral CSI drivers, and I don't like the idea of not being able to look at a pod and know, outside the context of a particular cluster, whether it fits in a given Pod Security Standards level. Both of those seem not great.
C: That, and this is another built-in admission thing that gloms onto the same level labels.
E: All right, well, I'm still interested in an alternative list. Jan, actually, storage Jan, did come back and answer the question, but if there is a different list that says no, this isn't really a problem, that would be interesting.
C: For the Azure one, I'm also curious. I remember there being a variety of things with Azure where what I would have considered important boundaries to safely isolate namespaces were not really considered problematic by the Azure team, and so I'm wondering if that specific one is more a mismatch in expectations of cluster isolation, one that was surprising to me, for someone wanting to isolate namespace permissions, and pretty particular to Azure, the built-in Azure driver.
A: Do you remember which driver that was? Because I think...
E: The one that Jan linked was kubernetes-sigs' Azure CSI driver, azure-file.
C: Yeah, the CSI one I hadn't looked at, but the Azure File one... I forget which one was creating...
E: I'm interested in seeing the list of CSI drivers, whether there are others that Jan didn't even have in his list that were also vulnerable, and then in the decision about whether to fix them, or to ban them, or to leave them broken. I think I'd like to look at that list before trying to make a choice.
C: Yeah, I mean, the current posture came about when we were looking at the Pod Security Standards stuff and trying to decide what to do with ephemeral CSI drivers. We went and talked to the storage folks, and the feedback from them was that there were a few known ones that had similar characteristics to the built-in volume plugins that you could specify, like...
E
I
guess
I'd
be
interested
in
hearing
michelle's
arbitrary
connection
yeah.
I
remembered
different
feedback
from
them
more
more
in
line
with
what
jan
wrote
in
the
doc
where
he
says.
Yes,
this
is
a
real
problem
to
address
so
yeah,
okay,.
C: So I'm fine setting up something, or sending something out, and starting a discussion. It sounds...
C: The documentation for driver authors that was released, which we previously discussed, didn't happen to clarify that when you make an ephemeral driver, pod authors can directly access all the parameters you expose, and currently, by default, that is not safe to do. So current drivers that are doing that are not safe and have an issue, and I'm not convinced that it is a Kubernetes issue to fix by continuing to let them expose unsafe stuff.
A: All right, any other items on the agenda that y'all want to talk about that aren't here yet?
C: It does. I had asked Michael to look at it, and he dug into it to the point where it hit networking issues. I think it was a networking visibility config thing, and I lost track of where it went from there. I think he was going to ask the networking folks what the deal was with the config in that job, but I don't know if that ever happened.
C: Okay, let me ping him real quick and see if he can pick that up or hand it off, and get an acknowledgement.
B: That's okay. There are other issues there, at least.
G: What's the difference between the flaky OIDC discovery number two and the one that looked completely broken, which is this top one?
C: That one looks like a configuration issue with the job, and it got routed to the cloud provider folks because, I don't know why, that's where the configuration seemed to lead.
C: It had to do with the detection of the self-addressable network address for the API server, so it was...
F: On a slightly more meta note, it seems like we rarely cover flakes in the meeting. Is there more process around that, or do we just kind of let them slide if we don't get to them?
G: I would say I think they fill the shoulder where we have time. Generally, the tests that SIG Auth owns are a lot less flaky than other SIGs', I would like to think, because the majority of the test coverage for our stuff in Kubernetes core is unit tests and integration tests. So I don't really have a problem with kind of leaving it lower on the agenda and not having dedicated time there.
G: There is a triage meeting on Monday mornings that covers bugs and maybe more urgent stuff, but that does not go over test flakes.
G: Yeah, and maybe Jordan can tell me if this is wrong, but I think that if we were causing more problems or breaking more merges and stuff, it would be a higher priority to focus on, but the current state is pretty reasonable.
C: ...get covered; the intermittent ones aren't usually bad enough to get reported, and it also isn't necessarily clear that it's our issue if one of our tests fails, you know, once a week. Sometimes we'll dive into that, and, like, a bunch of jobs... and something overloaded the server or something. So what we tend to look for is stuff that bumps up into the top flakes list, or that we happen to just notice ad hoc when we're doing PRs and stuff: oh, I seem to hit this one a few times, and we'll open an issue.
C: It's assigned to Walter, but I'm not sure if that's correct, so I'm going to ping him and see if he can triage it.
A: So this should really be "in progress" or something.
A: Okay, moving on. I don't think there was anything else, right? So, did you guys see anything else that jumped out?