From YouTube: Kubernetes SIG Auth 20180613
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20180613
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A: All right, welcome to the SIG Auth meeting for June 13th, 2018. Looks like a smaller crowd today; I know we have stuff wrapping up for 1.11 and some folks weren't able to make it, but we'll run through the agenda and see if there's time for anything else, or end early. So, just to recap: for 1.11 we're closing out, and code freeze was last week.
A: Everything on our side seems pretty quiet. The things that we had targeted either made it in, or we've already decided they'll slip to 1.12. I know there was a call in the Slack channel for folks to look at themes: the docs team is putting together release notes, and so they're looking to gather themes from SIG Auth by Friday. So if you want to jump in and look at that, I'll put a link in the agenda once we get off this call; just take a look at that if you have feedback or suggestions or think something was missed. I want to make sure that people know what we did. Other than that, there haven't been any test issues: no flaky tests, no test failures that we're responsible for, so that's great. Are there any other announcements, or things for the release closing out, that people want to bring up?
A: All right, we don't have any demos today, and nobody's put any new proposals in the agenda. I know there are some existing ones that slipped 1.11 that we're hoping to get in pretty early in 1.12, but nothing new that anyone called out, so with that we can move on to discussion.
A
So
that's
funny
I
think
Tim
wasn't
actually
able
to
make
it
today.
So
it
might
just
be
me
so
I
get
to
recap
this
for
everyone.
So
if
you
saw
Eric's
message
to
the
group,
he's
actually
moving
on
to
a
different
project,
and
so
he's
going
to
be
stepping
down
from
sig
offs
chair
and
actually
play
handing
over
maintenance
a
lot
of
the
sub
projects
that
he
has
been
involved
in
for
kubernetes.
A: As it says in our proposed charter, we prefer to spread involvement out between companies, between different groups; but if the people who are interested end up being from the same company and there don't seem to be other options, we may end up with more than one person from the same company. So again, if you're interested or know someone who would be interested, you can jump into that topic and send us an email with details.
C: We have this dynamic audit KEP sort of under discussion right now, and one of the issues that came up in that KEP is this: we want to dynamically configure API servers with a webhook, and we already have this kind of thing with mutating webhooks right now. We would like to find a way to let the API server (and it might be multiple API servers) authenticate to the webhook server, and that's sort of hard to do right now with the dynamic webhook configurations.
C: So one option I thought about there, and I wanted to kind of throw this out to the group, is that this could be a use case for bound service account tokens. So you could, in your dynamic webhook configuration, basically tell the API server: hey, go out and get a service account token bound to this audience and use that to authenticate to this webhook server.
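
To make that concrete, here is a minimal sketch of the shape being described, assuming a hypothetical `Audience` field on the webhook client config; the type and field names are illustrative and were not part of any merged API at the time of this meeting.

```go
// Hypothetical registration shape for a dynamically configured
// audit webhook. The Audience field would tell the API server to
// mint a service account token bound to that audience (via the
// TokenRequest API) and present it as a bearer token when calling
// the webhook. Illustrative only; not a merged Kubernetes API.
type WebhookClientConfig struct {
	URL      string // e.g. "https://audit.example.com/events"
	CABundle []byte // CA for verifying the webhook's serving cert
	Audience string // audience the bound token should carry
}
```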
C: Where the control plane is not sort of self-hosted like that, where the API server may be running in a hosted solution, you still want to be able to dynamically tell the API server how to authenticate back to your webhook receiver, but you can't, like, inject files into the API server's container.
A: Yeah, for the kind of many-to-many types of interactions: assuming that all of the API servers that are going to look at this config, and be responsible for calling this webhook, have uniform access to credentials seemed like an assumption we didn't want to bake in. (I agree, yeah.) So, like, I'm not sure about reaching out and getting a token; I think it could be interesting to identify an audience.
D: So I think it's interesting. I'd be a little concerned about how we lock down audiences, because at that point anybody who can create an API extension or... yeah.
A: If we were going to go down the route of using service account tokens for this, right: typically the audience corresponds to the network name, that's not uncommon, and that kind of is a protection. If you control this DNS name, then I guess you're allowed to get stuff for that audience. We'd need to think about it more, but it's definitely interesting.
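
For reference, the bound-token flow under discussion looks roughly like this with the TokenRequest API; the namespace, service account name, and audience below are placeholders, and the client-go signature has shifted across releases, so treat it as a sketch rather than the exact mechanism the KEP settled on.

```go
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask for a token bound to the webhook's audience. A receiver that
	// validates tokens can reject any whose audience does not match its
	// own name, which is the DNS-name-as-audience protection mentioned
	// above.
	tr, err := client.CoreV1().ServiceAccounts("kube-system").CreateToken(
		context.TODO(),
		"audit-webhook-client", // hypothetical service account
		&authenticationv1.TokenRequest{
			Spec: authenticationv1.TokenRequestSpec{
				Audiences: []string{"https://audit.example.com"},
			},
		},
		metav1.CreateOptions{},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("token bound to audience:", tr.Status.Token)
}
```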
B: What we've come to now is essentially two objects: one would be, like, a cluster-level audit configuration, and one would be a namespaced audit configuration. The idea for getting past privilege escalation would be that you could still configure it on the master node and just point a flag at, you know, one of these object files on the node. That would always run, so you wouldn't have to worry about privilege escalation, somebody getting up to cluster admin and tampering with that to cover their tracks; but then you could still have cluster-level configurations happening independently of that file that's configured on the master node, and likewise you would have configurations that could happen independently within the namespaces. And then there was also an idea floated that we may need a root policy that could override any of these, and I think internally our thinking is: if you have, you know, admin access to whatever domain, whether it be cluster admin or just admin access to a namespace, you should be able to freely configure that auditing entirely, because you already have access to all the secrets. You should be able to tell it if you want to log secrets or whatnot, to be able to do that; but having that static file provisioned on the node still allows you to, you know, ensure that at least that set of webhooks or file logs won't be tampered with and is guaranteed. I don't know if that makes sense. I just pushed up a commit for the KEP.
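
A rough sketch of the two object shapes being described follows; the names and fields are hypothetical, reflecting the KEP discussion rather than a merged API, and `WebhookClientConfig` is the illustrative type sketched earlier.

```go
// Hypothetical shapes from the dynamic audit discussion; neither
// type existed as written at the time of this meeting.

// AuditPolicy is a minimal stand-in for "what to record".
type AuditPolicy struct {
	Level string // None | Metadata | Request | RequestResponse
}

// ClusterAuditConfiguration is cluster-scoped: it may select events
// from any namespace and sends them to its own backends. It operates
// independently of the static file provisioned on the master node.
type ClusterAuditConfiguration struct {
	Name    string
	Policy  AuditPolicy
	Webhook WebhookClientConfig
}

// AuditConfiguration is namespaced: it may only record events about
// objects in its own namespace, again to its own backends.
type AuditConfiguration struct {
	Namespace string
	Name      string
	Policy    AuditPolicy
	Webhook   WebhookClientConfig
}
```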
B: With the current method I'm putting out there today, they would just be independent of one another. So you'd have a cluster-level audit configuration; you can log everything everywhere if you wanted to, and it would send out to whatever webhooks are, you know, defined in my configuration. The namespace-level one would just operate independently, but it would only have rights over, you know, the namespaced actions, and out to whatever it defines, and you can log at whatever level you'd like, but...
B: You would still be able to; they'd actually end up being independent processes. So, in fact, the cluster-level one could define a policy that says: log all namespaces and all actions, request and response. And then say, you know, in my namespace I have a separate configuration that says to only log ConfigMap changes (why you'd want that, I don't know) and send them to these webhooks. Those would be independent processes; it's not that the namespace one would override the cluster one, they would just operate independently.
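
As a concrete illustration of those two independent policies, here they are expressed with the audit policy schema from k8s.io/apiserver (a real API, though at the time of this meeting it was still at v1beta1). In the proposal each policy would live in its own object, one cluster-scoped and one namespaced, so the wiring is illustrative.

```go
import auditv1 "k8s.io/apiserver/pkg/apis/audit/v1"

// Cluster-level policy: record request and response bodies for all
// actions in all namespaces.
var clusterPolicy = auditv1.Policy{
	Rules: []auditv1.PolicyRule{
		{Level: auditv1.LevelRequestResponse},
	},
}

// Namespace-level policy (scoped to its namespace by the hypothetical
// object that carries it): record only ConfigMap changes, nothing else.
var namespacePolicy = auditv1.Policy{
	Rules: []auditv1.PolicyRule{
		{
			Level: auditv1.LevelMetadata,
			Resources: []auditv1.GroupResources{
				{Group: "", Resources: []string{"configmaps"}},
			},
		},
		{Level: auditv1.LevelNone}, // drop everything else
	},
}
```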
A: ...out some noisy actors. Yeah, so it sounds like what you're describing is like what we currently have, where you have a static, kind of non-overridable config that logs to a destination, either a file or a webhook; that can be put in place by a cluster administrator, and it can do what it does today, and nothing overrides it, nothing interferes with it. And then, separately, these dynamic configs, which are optionally subject to constraints that the cluster admin puts in place.
B: If you have the rights to create an audit configuration, you would just have the right to do that; there would be no way of saying, hey, you can't log anything. It would just be assumed that if you're given that right, say you're cluster admin, then you already have access to all that sensitive information and you can send it wherever you want; you'd even have rights to enable this process as well. Even...
A: I'm not sure if avoiding that is good or bad, but that's been brought up in the scheduler proposals, so we probably want to be consistent with whatever we did there. I think the comment there, I think it was Brian Grant, was saying: if we want to have something that rules the cluster, let's make a namespace for it and give that namespace access to have cluster-scoped policies. That becomes more of an ACL check; like, at the point when you define the policy, the decision has to be made.
D: So I'm here on behalf of Vish, who is working on, basically, graceful node termination; it's a little GCP-specific, but essentially what we want to be able to do is this: we have GPU-type machines that are not migratable, and we can send notifications of an imminent migration or imminent rescheduling to the VM, and how that manifests is a VM restart.
D: The proposed solutions for this type of integration do add complexity to components that need to do this, so I'm curious about how general an issue this is. Do we see more DaemonSet components potentially wanting to add taints to the node object, or are these so few that we should solve them in integration-specific controllers? And then the devil's-advocate opinion is: are conditions unnecessary indirection?
D: So this issue discusses removing phases from the API and also simplifying taints and conditions. One comment by Brian was that conditions aren't annotations that can be added by just any entity; they should be curated. And conditions are for programmatic consumption; for instance, the Ready condition is used to remove pods from endpoints, I think.
A: So it seems like we're trying to use one mechanism, which is taints, for at least two purposes, maybe three. One is a way to fence off nodes that are bad for some reason, right: like, this node is compromised, or this node doesn't meet XYZ security standard, or we want to keep our pods off this node for some reason; and we definitely don't want the node being able to untaint itself, or opt back in, or say, no, actually, I totally...
D: I think it depends; I would like to hear Brian's take on that. If conditions are acceptable used in this way, then I would probably go with option three; otherwise I think I would go with option four. I think option four is more painful from an implementation perspective, because diffing two lists of taints is actually not really trivial, and we have to do it on every update. (Oh right, yep.) Whereas if we did it in option three, I think it could be potentially easier and less expensive.
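
To illustrate why that diff is called non-trivial: taints are keyed by key and effect, carry values, and the reconciliation has to run on every node update. A minimal sketch of such a diff follows, with no claim that it matches the actual controller implementation:

```go
import v1 "k8s.io/api/core/v1"

// diffTaints computes the taints to add (new, or changed value) and
// remove so that current converges to desired, comparing by key and
// effect. Under option four this would run on every node update,
// which is what makes it costly.
func diffTaints(current, desired []v1.Taint) (add, remove []v1.Taint) {
	cur := make(map[string]v1.Taint, len(current))
	for _, t := range current {
		cur[t.Key+"/"+string(t.Effect)] = t
	}
	seen := make(map[string]bool, len(desired))
	for _, t := range desired {
		k := t.Key + "/" + string(t.Effect)
		seen[k] = true
		if old, ok := cur[k]; !ok || old.Value != t.Value {
			add = append(add, t)
		}
	}
	for k, t := range cur {
		if !seen[k] {
			remove = append(remove, t)
		}
	}
	return add, remove
}
```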
A: Well, yeah, diffing taints is really painful, and I don't think we would get super fine-grained. I think it would just be like: this taint is allowed to be managed by the node; not: this taint with this value but not that value, or this effect but not that effect. That becomes insane.
A: If, for some of the existing taints, like the not-ready or network-unavailable ones, we were just looking for additional conditions that would inform those taints, instead of it being one-to-one, and those additional conditions could be set by some extra thing running on the node, some DaemonSet running alongside the kubelet, then figuring out a way to describe that would keep you from having to write a custom condition-to-taint controller; it would just expand the power of the current one. We would give kubelets the power to say: any one of these five conditions means I should have a not-ready taint on me. That might be interesting. I don't think it would solve everything, especially if you want something really granular, like: don't schedule GPU workloads on me, but you can still schedule other stuff. That wouldn't work, right?
D: Yeah, I'll forward that too. So, this is me again: I am having difficulty tracking specific subprojects in the main repo. I talked to Paris about some suggestions; the suggestions that were presented were to start playing around with projects and see if they're the right fit, or to use area labels. And I wanted to see if anybody else had the same issues that I was having, or if they had any experience with tracking issues related to specific efforts.
A: I know we've done umbrella issues for a few things. Those are really useful for kind of getting a picture of what we're wanting to do in a particular release, and then, like, what got done and what got moved out, and then kind of having a chain from release to release. Audit has done a pretty good job of that; Pod Security Policy has done a pretty good job over the last couple of releases.
A: As far as subprojects and efforts: since 1.11 is wrapping up, I think probably our next meeting would be a good time to walk through what we're planning to do for 1.12, and specifically who is kind of taking the lead on the different efforts; and so maybe getting some of those umbrella issues set up between now and then would help clarify what the next steps are and what we're hoping to accomplish.
A: Yeah, the TokenRequest stuff is probably the thing that is most ready to progress; like, a lot of the designs are done and understood, and a lot of the implementation is, like, three-quarters of the way done. So that's a big one I'm looking to see happen, and then I'm hoping to drive to closure some of the node restriction stuff.
A: ...about what policy looks like are going to be made. Is that in the Policy Workgroup? Is that in SIG Architecture? I saw the meeting time for SIG Policy, or the Policy Workgroup, just changed to one that is during business hours, at least business hours in the US, so I was happy to see that; I'm hoping more people can make it to that.
D: Do we technically own the admission control framework and the admission controllers, or is that just shared?
A: So, different SIGs own different admission plug-ins. We own, like, NodeRestriction and ServiceAccount, and probably, yeah, probably the ones that are used for, like, limiting labels and tolerations and selectors and things; Storage owns a few. So API Machinery has the mechanism, and then individual SIGs own specific ones.
A: And it's even fuzzier than that, because, well, it's really like: here are five proposals, and we want to be coherent in how we express policy across them, so let's agree; and then Scheduling is going to go off and write their policy, and we're going to manage our auditing policy, and so it's about reaching agreement on, like, an externally facing approach.
D: ...Flex Volume and CSI, which are specifically motivated by using Flex Volume and CSI to inject overlay identity systems onto Kubernetes. So this was a brand-new issue; I thought it would be interesting to get some people's eyes on it, so I just wanted to make you all aware of it. You can go read it offline, yeah.
D: That was one issue. The other issue was that currently all plugins go through the attach/detach controller, which is unnecessary and actually hampers availability of these things for volumes that really should just be node-local volumes, like the emptyDir volume, with some interaction with the DaemonSet; I think that was the major issue they faced. So: figuring out how to cut the attach/detach controller out of the loop for some of these node-local volumes.