From YouTube: sig-auth bi-weekly meeting 20200527
A
All right, let's get started. So, a PR of note: I shared this some months ago, the proposal around the Pod Security Standards, and the PR has merged. It's now part of the Kubernetes documentation, laying out the three recommended policy levels. I noticed yesterday that there are a couple of restrictions missing, so I'll go ahead and add those soon.
B
I vaguely remember discussing a test-suite-type approach to this: for each of the levels, saying that if you have the restricted policy in place, these things should be able to be created and these things should not be able to be created. Then, having a way to run that set of tests against an arbitrary cluster with an arbitrary policy enforcement mechanism.
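That test-suite idea can be sketched as a table of fixtures per level. Everything below is illustrative: the level names match the documentation, but `PodSpec`, `Allowed`, and the specific checks are toy stand-ins, not the real Pod Security Standards evaluator or a real admission check against a cluster.

```go
package main

import "fmt"

// PodSpec is a pared-down stand-in for the real Kubernetes pod spec;
// these three fields are illustrative, not the upstream API.
type PodSpec struct {
	Privileged   bool
	HostNetwork  bool
	RunAsNonRoot bool
}

// Level names the three documented policy levels.
type Level string

const (
	Privileged Level = "privileged"
	Baseline   Level = "baseline"
	Restricted Level = "restricted"
)

// Allowed is a toy evaluator standing in for an arbitrary policy
// enforcement mechanism; the real standards check many more controls.
func Allowed(level Level, p PodSpec) bool {
	switch level {
	case Privileged:
		return true
	case Baseline:
		return !p.Privileged && !p.HostNetwork
	case Restricted:
		return !p.Privileged && !p.HostNetwork && p.RunAsNonRoot
	}
	return false
}

func main() {
	// Each fixture pairs a pod with a level and a "should be creatable"
	// expectation, which is the shape the test suite would take.
	fixtures := []struct {
		name   string
		level  Level
		pod    PodSpec
		wantOK bool
	}{
		{"privileged pod under restricted", Restricted, PodSpec{Privileged: true}, false},
		{"plain pod under baseline", Baseline, PodSpec{}, true},
		{"non-root pod under restricted", Restricted, PodSpec{RunAsNonRoot: true}, true},
	}
	for _, f := range fixtures {
		fmt.Printf("%s: allowed=%v want=%v\n", f.name, Allowed(f.level, f.pod), f.wantOK)
	}
}
```

Running the same fixture table against a live cluster would then exercise whatever enforcement mechanism that cluster has installed.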
D
It was a member of his team that discovered this. Basically, I think it was intended to work this way; it's just that the ramifications at scale are that we end up having a lot of useless work being done in the server. We could effectively use connections as the caching layer: as long as you have the connection open, you can trust the requests going over that connection, because you've done the verification up front. But that's not how it works today.
D
The way it works today is that you establish the connection without doing any verification, and then every request streamed through that single connection does the expensive CPU work. It was super expensive for us because we have really deep cert chains, and this basically becomes a bottleneck for the API server at really large scale.
D
So right now we have to carry an internal patch. What we'd love is to just have an option to turn on this alternative way of doing the verification, with the caching tied to the connection, instead of doing it on a per-request basis. But there's been a lot of debate about maybe just having a different Go option. The problem, I think, is that a Go option that verifies the cert only if one is presented means you can still bypass auth by giving an unverified cert.
D
So I don't know how that handles the edge case of, say, I give an invalid cert claiming who I am; if that's not checked at connection time, or if I don't give a... oh, I guess, yeah, I haven't thought through that option too much. But the upstream golang change is just going to take a really long time and might not be accepted by the community.
D
I mean, we can try to do it that way, but I think the easiest one for us at least, and the one that made a lot of sense for us to directly contribute, is the separate implementation. We're part of the division that's doing Apple's centralized infrastructure, so this would be a pretty well-trodden code path that's vetted by our security team, and it's pretty important to Apple security. So it's not like it's a small team.
B
The concern with switching to the standard library's verify-on-connection behavior was that it breaks clients that present client certs that are not intended for authentication to the API server. Going back four or five years, those were the clients that prevented us from using the standard library method in the first place. Otherwise...
B
...certs that I use for something like corporate proxy authentication, or that assert my identity and that I need for some percentage of the network requests I make, and so I spray them out to any connection I make, indiscriminately. The Go clients don't do that, but some clients do, and some browsers do. And yeah, so I think the...
B
It
we
try
to
avoid
adding
sort
of
alternate
configuration
paths
that
you
know
to
reasonably
be
enabled
by
default
or
broadly,
and
so
it's
this
is
the
sort
of
thing
like
do.
You
have
clients
making
connections
like
this,
it's
hard
to
know
until
you,
oh
yeah,
I
turns
out.
I
did
and
I
just
broke,
like
X
percent
on
my
clients,
so
I.
My
ideal
scenario
would
be
if
the
standard
library
supported
doing
this,
like
by
all
means
verify
the
the
TLS
connection
at
connection
time
right.
D
But we'd break the existing clients, I mean. That's why, when I originally proposed the idea, I was pretty sure it breaks assumptions that have been made around the server behavior, so we can't do it that way, I don't think. I mean, maybe; I know that there's a window between certain major versions...
D
Do you think the community would be against eventually switching clients over to a per-connection model, doing the TLS up front and either choosing to do verification or... I mean, I think you were talking about the fan-out. Bob, I think you said there's a way to address the fan-out issue for when you're blindly sending certs to multiple... Or, Jordan, I think you also brought up something about browsers forwarding certs blindly, or something like that, right? Mm-hmm.
B
I'm not quite sure I follow. If we reimplemented enough of the standard library, we could do this, but I would really like to avoid that. I think what Moe said kind of matches my thinking: I would like to see this option supported by the standard library, so I would encourage requesting that, and if that lands in the standard library, that's great. Until it does, it seems like the caching amounts to saying: we saw this leaf certificate, and it was verified in the handshake.
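A minimal sketch of that caching idea, assuming the expensive chain verification can be memoized by the raw bytes of the leaf certificate. The types and field names here are invented for illustration; the real apiserver authenticator is more involved:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// verifiedCache memoizes "this exact leaf certificate already passed
// chain verification" so that repeated requests carrying the same leaf
// skip the expensive chain building. Keyed by a hash of the raw DER.
type verifiedCache struct {
	mu     sync.Mutex
	seen   map[[32]byte]bool
	verify func(rawDER []byte) bool // the expensive chain check
	calls  int                      // how many times verify actually ran
}

func (c *verifiedCache) ok(rawDER []byte) bool {
	key := sha256.Sum256(rawDER)
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, hit := c.seen[key]; hit {
		return v
	}
	c.calls++
	v := c.verify(rawDER)
	c.seen[key] = v
	return v
}

func main() {
	c := &verifiedCache{
		seen:   map[[32]byte]bool{},
		verify: func([]byte) bool { return true }, // stand-in for chain building
	}
	leaf := []byte("same-der-bytes")
	for i := 0; i < 1000; i++ {
		c.ok(leaf) // a thousand requests, one real verification
	}
	fmt.Println(c.calls)
}
```

A real cache would also bound its size and expire entries so revocation or CA rotation is picked up.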
I
That is kind of how most people do authentication against the Kubernetes API nowadays, with a soft token, and some would rather prefer this to be delegated to a hardware security module via a standard such as PKCS#11. Now, we prepared a small presentation to follow up on this enhancement, and we also have a small demo. I'm not really sure how to make the best use of this community's time; we also got two new pieces of feedback shortly before this meeting, so I'm not sure what you would prefer.
I
And to this end we have used a hardware security module, the YubiKey, which is pretty popular. You can either generate an RSA key on the YubiKey or just import a key onto it, so I'm going to do the latter right now. Then, basically, I can just wipe my client key and my client certificate. Now, of course, I have lost access to my minikube cluster.
I
So let me now reconfigure it. This line is, let's see, the core of the contribution: we basically added a new authentication provider, which we call "external signer", which gets as a parameter a UNIX socket over which it talks to another process called the external signer. That process is running right now in the lowermost terminal, and all the other parameters are basically just passed directly through to the external signer.
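The kubeconfig wiring being described might look roughly like the following. All key names below are hypothetical reconstructions from the discussion, not the exact schema of the proposed provider:

```yaml
users:
- name: minikube
  user:
    auth-provider:
      name: external-signer            # hypothetical provider name
      config:
        # UNIX socket of the external signer process
        socket-path: /run/external-signer.sock
        # remaining keys are passed through to the external signer,
        # e.g. which PKCS#11 slot and object hold the key
        slot-id: "0"
        object-id: "2"
```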
I
We chose to do it this way so that the user can configure the external signer from their kubeconfig without having to spawn a separate executable for each kind of configuration. And so the idea is now, for example, yeah: this is the final result of how the minikube user is now configured. If I now do "kubectl get pods", then (we still left a little bit of verbose messaging in here) the external signer actually has the ability to send a prompt through kubectl to basically tell the user:
I
"Hey, I'm waiting for you to touch the YubiKey", or "I'm waiting for you to type the password on some kind of external keyboard". That would be very much dependent on the external plugin. Then I type my super-secure PIN, one two three four five six, and it's after that that the TLS handshake can complete and I can actually look at what's running in the minikube cluster.
I
So, how many minutes do I still have? Let me maybe skip ahead to this particular slide. The thing is that we presented the first version of this contribution at the March 4th meeting. At that time we had a rather monolithic implementation where everything was in kubectl, and this was very problematic because it basically forced kubectl to be compiled with cgo. That is required in order to use the PKCS#11 standard, since it's basically a protocol in which a dynamic library (a .so file) is called over a C-like interface.
I
So then we worked on that a little bit and proposed a second version, which is the one currently contained in the enhancement proposal. There we have two components: kubectl now has basically no extra dependencies and just talks over a UNIX socket to an external binary, and that binary contains all the code compiled with cgo.
I
So this can take advantage of the regular protection mechanisms offered by the usual operating systems, and our understanding is that the latest version of Go also supports UNIX-like sockets on Windows. It also allows the external signer to send prompts back to kubectl, so the PIN no longer lives in the kubeconfig and kubectl no longer handles that intermediate step. There are also some hardware security modules that have their own keyboard, for example, and don't use the keyboard of the host. So, in any case, we feel this is a pretty flexible solution that allows all kinds of future use cases for keeping private keys out of kubectl. This latest suggestion also has the advantage that it passes the cluster name over the RPC, so you could potentially have no configuration in the kubeconfig at all and do the multiplexing, the matching between keys and clusters, in the external signer.
I
I think the critical point to discuss right now, at least as it feels to us, is that the enhancement proposal is suggesting an API for communication between kubectl and the external signer, and we want to know whether that is adequate or whether it still needs some work. We already received some feedback just two hours before this call, so I guess we will need to start by addressing that, and afterwards we would like to discuss, yes: we have received strong requirements to make this contribution upstreamable.
H
Are there NIST recommendations we can look at to cross-check these design choices against? That would be helpful when reviewing the design. I'm trying to understand why your client is requesting this specific solution. The other thing is, I'm also trying to figure out why this looks so different from FIDO, which seems like the prevailing standard for hardware-backed user login.
I
So, right now, since there is a strong separation between kubectl and the external signer, this is really the responsibility of the external signer. In the world of YubiKeys, for example, or PKCS#11, you can say that every single sign operation needs to be authenticated, which means every single sign operation needs a PIN or a touch or something like that. Or, alternatively, you can say that you authenticate once and then, for a certain amount of time, you can do sign operations as you wish.
B
...the certificate callbacks and things like that. So, talking about adding authentication plugins to client-go, we have to consider what that does to the surface area: lots of clients, including the kubelet, the API server, the scheduler, the controller manager, any client-go consumer, are going to be exposed to this complexity.
I
So, the way we implemented it is that the external signer is a separate authentication plugin. There was already existing infrastructure in client-go to hook the get-certificate method, and we're basically just hooking into that one; instead of returning a real Go signer, we return a proxy to a signer, with the signing proxied over the UNIX socket. So, from our point of view, as long as you're not using the external-signer authentication plugin...
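The shape of that proxying can be sketched with the standard library's `crypto.Signer` interface, which is what TLS client authentication in Go ultimately calls into. The transport here is injectable and the socket protocol is omitted; this is a sketch of the idea, not the actual contribution:

```go
package main

import (
	"crypto"
	"fmt"
	"io"
)

// remoteSigner implements crypto.Signer without ever holding the
// private key: only the digest crosses the process boundary. In the
// real contribution the sign function would marshal the request over
// the UNIX socket named in the kubeconfig.
type remoteSigner struct {
	pub  crypto.PublicKey
	sign func(digest []byte) ([]byte, error)
}

func (r *remoteSigner) Public() crypto.PublicKey { return r.pub }

// Sign satisfies crypto.Signer; the external signer (and ultimately the
// hardware token) produces the signature.
func (r *remoteSigner) Sign(_ io.Reader, digest []byte, _ crypto.SignerOpts) ([]byte, error) {
	return r.sign(digest)
}

// compile-time check that the proxy really satisfies crypto.Signer
var _ crypto.Signer = (*remoteSigner)(nil)

func main() {
	s := &remoteSigner{sign: func(d []byte) ([]byte, error) {
		// stand-in for the round-trip to the external signer process
		return append([]byte("sig:"), d...), nil
	}}
	out, _ := s.Sign(nil, []byte("digest"), nil)
	fmt.Println(string(out))
}
```

Because `tls.Certificate.PrivateKey` accepts any `crypto.Signer`, a proxy like this can plug into the existing client certificate machinery unchanged.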
I
So I'm not really sure that they cover the same area, because my understanding is that, in the case of a hardware security module, the module is free, for example, to decide to log each of the signature requests, free to count the number of signature requests, and free to light an LED when a signature request is being performed.
I
I'm trying to find the code in client-go that actually changes... yeah. So, just for reference, the changes we made to client-go per se are really just what's captured on this screen: we're creating an AuthProviderTLS interface that extends the auth provider with the ability to update the transport config, and then, if we notice that an auth provider actually implements this interface, we allow it to update the transport config. So, to most users of client-go, this should be a pretty transparent change.
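A sketch of that type-assertion pattern, with stand-in types; the real client-go types and the proposed interface's exact signature may differ:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// TransportConfig stands in for client-go's transport configuration.
type TransportConfig struct {
	GetClientCertificate func(*tls.CertificateRequestInfo) (*tls.Certificate, error)
}

// AuthProvider mirrors the shape of the existing plugin interface.
type AuthProvider interface {
	Name() string
}

// AuthProviderTLS is the proposed extension: providers that also
// implement it may hook the TLS client-certificate callback.
type AuthProviderTLS interface {
	AuthProvider
	UpdateTransportConfig(*TransportConfig) error
}

type externalSignerProvider struct{}

func (externalSignerProvider) Name() string { return "external-signer" }

func (externalSignerProvider) UpdateTransportConfig(c *TransportConfig) error {
	c.GetClientCertificate = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
		// here the real plugin would return a certificate whose
		// private key is the proxied crypto.Signer
		return &tls.Certificate{}, nil
	}
	return nil
}

// applyProvider is the type-assertion dance described in the meeting:
// only providers implementing the extended interface touch the config,
// so every other auth provider is unaffected.
func applyProvider(p AuthProvider, c *TransportConfig) {
	if tp, ok := p.(AuthProviderTLS); ok {
		_ = tp.UpdateTransportConfig(c)
	}
}

func main() {
	var cfg TransportConfig
	applyProvider(externalSignerProvider{}, &cfg)
	fmt.Println(cfg.GetClientCertificate != nil)
}
```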
I
So this is a specific integration for kubectl. That being said, there is a similar effort, the KMS work, which also wants to delegate encryption and decryption to some outside party, and that could of course be a hardware security module. I think that as the discussion grows larger, we will understand this better. I mean, there's no reason why this interface couldn't be used by other parts, the kubelet or the kube-controller-manager, as long as they need to retrieve a signature or to get a certificate.
B
Queueing this up: I had not looked at it in detail yet, since it wasn't targeting 1.19, but if we want to set up a time, it might be helpful to spend some time on this together. I agree with David; I'm not as familiar with some of the externalized TLS stuff, so having Mike there to explain things to me would probably be helpful.
B
Right. Mostly I want to be able to give crisp feedback so that the contributors can use the time between now and the 1.20 deadline to respond, update, and make this a proposal that is better understood or more acceptable, or so we can just reach a conclusion. So I think having some feedback for them early, yeah, would be helpful. Yeah.
E
So I could take some time to brief this. Hi, this is Jackie from Google. We kind of announced this effort in the last sig-auth meeting: bringing up a mechanism for distributing cluster-level trust information. I wrote a small doc to summarize the different opinions and options in the original issue; there was an original issue discussing why we need this and how we plan to do it. So, in the doc, we summarize some of the motivations and some options for doing that.
E
So, in terms of how we inject this CA bundle, which is the primary design choice: we could use a projected volume to inject the certificates into the pod, or we could use a cluster-scoped ConfigMap. There are pros and cons to each option, but I couldn't tell which option is better or worse, so I would like to see more comments here.
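The two injection options under discussion might look roughly like this in a pod spec; the ConfigMap and volume names are illustrative, not part of any agreed design:

```yaml
# Option A: a projected volume, which could later combine the trust
# bundle with other sources in a single mount
volumes:
- name: ca-trust
  projected:
    sources:
    - configMap:
        name: cluster-trust-bundle      # illustrative name
        items:
        - key: ca-bundle.crt
          path: ca-bundle.crt

# Option B: mount the ConfigMap directly
volumes:
- name: ca-trust
  configMap:
    name: cluster-trust-bundle          # illustrative name
```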
C
A question about this: as I saw it, it was initially focused on CA bundles and injecting them. It was less obvious to me how you were going to handle some of the mechanics around that, and what you intended. Is this going to be a way for the hosts running kubelets to have their system trust bundles changed? Or is this going to be something that unconditionally changes one of the base layers in an image? Are we going to force...
E
There are links to those issues; people developing those have expressed some interest. If there is some mechanism to provide a root CA bundle in the pod, it will be easy for those webhook extensions to consume. Currently they use some template, so they need to copy the certificate into each pod's ConfigMap, which is kind of wasteful. If we do have this mechanism, then they could...
C
About building a way to have the idea of a global ConfigMap that a cluster admin opts into: we were exploring that idea in OpenShift. It seems like a good idea, though we were going to go down the CSI driver path for it. This isn't "hey, I want this design instead of what you have", but if you're looking for what other thinking in this area looks like, this is what those thoughts...
C
This is where we went and some of the reasons we did so. We did try to separate out the concerns about trust for something like an admission webhook versus how I generally provide this piece of information to any pod in my cluster. It wasn't specific to CA bundles, and it was also explicit in what it did, right; there was...
A
And just to address the original agenda item of what the next steps are on this: yeah, I think we should iterate on some comments on the doc to get some kind of basic common understanding, and then the next step will certainly be writing up a formal KEP. But I think we shouldn't take it to a KEP until we've worked through the open questions on the doc first.
C
Are you interested enough in doing it that we just stay on it? It's been difficult to find time, and we could make the choice that, you know what, you knew this was coming: it happened back in 1.15, it's been deprecated for many releases, and now it's just gone. But I'm not sure what else you would have API Machinery do that would be different, right? What additional choice do we have?
B
Adding descriptions of those options to this issue, laying them out really clearly, would help: here's what you can do today without doing any work yourself; if you don't want to do that, here are other options you can implement with some work, at the network level if you want. Yes, and it might even be worth actually documenting it: how do you monitor your server? Here are your options: turn on anonymous requests and you can hit the healthz endpoint, or set up something like this, for example.
C
I'm not super excited about that. I mean, I'm not super excited about having another binary as we have it now, either. Why not? It adds complexity in the code path to actually start it, and it gives you an endpoint that is not secured and protected by credentials. Now that those binaries do have authentication and authorization built in that matches the in-cluster versions, I don't see a reason to have it. Basically, we have it because it existed long before, and if we had had security on those ports initially, we would never have had separate ports.
D
So, just a separate healthz port that, you know, isn't necessarily exposed to the Internet, but that we could use for health checking without having to worry about enforcing authentication and authorization around it, would be really nice. I actually just had this issue come up in a document I was looking at earlier.
D
But, like, today we aren't very opinionated about what is inside the cluster, in terms of RBAC roles or whatever, because for the most part we don't prevent you from doing anything, except for this one RBAC role, system:public-info-viewer, because we're relying on that for health checks. And so having to maintain an RBAC role for something to access or proxy healthz is just more work than saying: okay, here's this additional port we're listening on; the Internet is not going to be able to access it, it's just for load balancer health checks or whatever. That would be a lot simpler for us.
A
I'm going to pause you there, because I realize we're right over time; sorry about that. I will file an issue to track the removal of the insecure port, and then we can follow up on that with some of the options. As a strawman, I propose that we set a deadline of removing it in 1.20, which gives us a release to figure out the other options and also gives people a heads-up that this is coming, I guess.