From YouTube: SIG Auth Subproject BI-Weekly Meeting for 20200916
A: Hi everyone, welcome to the September 16th meeting of SIG Auth. Thank you for joining us today. The first item on the agenda is from Andrew, on the Docker credential provider work.
B: Hey, yeah, some extra context on this: in 1.18, myself, Walter Fender, and Nick Turner proposed a KEP introducing an exec-based plugin for fetching credentials for container registries, the goal being to build a pluggable replacement for the cloud-provider-based keyring providers that are built into the kubelet today.

B: I think in 1.18 we had sign-off in this SIG on the initial alpha implementation in the KEP, but the implementation slipped in both 1.18 and 1.19, so we're trying to get this in early in 1.20. I think so far the most contentious thing about this PR is what to name the feature.
B: I think the biggest diff is the addition of extra arguments that you can define in the plugin config, plus the naming. In the KEP we said we would name it the registry credential provider, or registry credential plugin.
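For reference, a minimal sketch of what one entry in the kubelet's plugin config could look like, expressed as Go structs. The field names here (including the extra args being discussed) are illustrative assumptions and may not match what the KEP finally lands:

```go
// Hypothetical shape of a single plugin entry in the kubelet's credential
// provider configuration; field names are illustrative assumptions only.
package config

import "time"

type PluginEntry struct {
	// Name of the plugin binary in the kubelet's plugin directory.
	Name string
	// Registry or image patterns this plugin should be invoked for.
	MatchImages []string
	// Extra arguments passed to the plugin on each invocation; this is the
	// addition discussed in the PR.
	Args []string
	// Extra environment variables set for the plugin process.
	Env map[string]string
	// How long the kubelet may cache the credentials the plugin returns.
	DefaultCacheDuration time.Duration
}
```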
B
I
thought
that
was
a
little
too
generic
and
not
specific
enough,
so
I
renamed
everything
docker
auth
plugin.
But,
aside
from
that,
it's
the
same
idea.
It's
an
exact
plugin
where
you
pass
the
request
and
response
via
standard
and.
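As a rough sketch of that exec flow, the plugin below reads a JSON request from stdin and writes a JSON response to stdout. The request and response type names and the API group string are assumptions modeled on the KEP discussion (the naming was still contested at this point), not the final API:

```go
// Hypothetical exec credential provider plugin: the kubelet invokes the
// binary, writes a request to stdin, and reads a response from stdout.
package main

import (
	"encoding/json"
	"os"
)

// Assumed request/response shapes; the real alpha API may differ.
type CredentialProviderRequest struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Image      string `json:"image"` // image the kubelet is about to pull
}

type AuthConfig struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

type CredentialProviderResponse struct {
	APIVersion string                `json:"apiVersion"`
	Kind       string                `json:"kind"`
	Auth       map[string]AuthConfig `json:"auth"` // keyed by image match pattern
}

func main() {
	var req CredentialProviderRequest
	if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
		os.Exit(1)
	}

	// A real plugin would exchange node or cloud identity for registry
	// credentials here; this sketch just returns placeholders.
	resp := CredentialProviderResponse{
		APIVersion: "credentialprovider.kubelet.k8s.io/v1alpha1",
		Kind:       "CredentialProviderResponse",
		Auth: map[string]AuthConfig{
			req.Image: {Username: "<registry-user>", Password: "<registry-token>"},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(resp)
}
```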
A: Do you want to go and update the KEP? Has that happened yet? I guess my first question is: is this the same thing that is described in the KEP, with some evolution since we last looked at it?
B: I haven't updated the KEP, but it's basically the same thing. I mainly changed the naming of things; it's the exact same design, not much has changed.

B: No, I think what we want, once the initial API is in, is to move all of package credentialprovider to staging, because the expectation would be that cloud providers would build their auth plugins out of tree and then import the plugin framework from there. There's actually a comment thread about this somewhere in the PR, but yeah, I think we'd want to move package credentialprovider somewhere into staging at some point.
A: Yeah, I would definitely agree with that. I think myself and Tim reviewed the early one and it seemed very reasonable, as long as somebody was willing to push it over the finish line. So thank you for reinvigorating that effort.
C: The CRI bits, not anything out of tree, and there were a couple of sort of strange things around how caching was being done. The way that credentials were being cached didn't seem to align: credentials could be returned per image, but were then cached without respect to the image, which seems really strange. So before we start externalizing the interface, the inputs and outputs, we want to make sure that those inputs and outputs actually make sense.
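To make the caching concern concrete, here is an illustrative sketch (not the actual kubelet code) of a cache that keys entries by the image the credentials were returned for, rather than caching one response globally:

```go
// Illustrative per-image credential cache: if a plugin returns credentials
// for a specific image, they are cached and looked up under that image only.
package credcache

import (
	"sync"
	"time"
)

type DockerConfigEntry struct {
	Username string
	Password string
}

type cacheEntry struct {
	creds     DockerConfigEntry
	expiresAt time.Time
}

type PerImageCache struct {
	mu      sync.Mutex
	entries map[string]cacheEntry // keyed by image
}

func NewPerImageCache() *PerImageCache {
	return &PerImageCache{entries: map[string]cacheEntry{}}
}

// Get returns cached credentials for an image if present and not expired.
func (c *PerImageCache) Get(image string, now time.Time) (DockerConfigEntry, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[image]
	if !ok || now.After(e.expiresAt) {
		return DockerConfigEntry{}, false
	}
	return e.creds, true
}

// Put stores credentials for an image with a time-to-live.
func (c *PerImageCache) Put(image string, creds DockerConfigEntry, ttl time.Duration, now time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[image] = cacheEntry{creds: creds, expiresAt: now.Add(ttl)}
}
```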
C: Getting someone from SIG Node involved would probably help resolve both of those issues. I am happy to be on that, and then I can look at it from the auth perspective, but I don't know who knows the node CRI bits.
B: Yeah, that sounds good. I think I queued it up for the SIG Node agenda, but it might be good to just allocate some time to do the review together. Okay, I'll follow up.
D: Yup. In our last meeting I presented a demo of the multi-tenancy benchmarks tool and we had some good discussions and feedback around that. One of the items was the standardized pod security profiles that Tim is working on, so we went back and thought about options there. One option we are proposing is to remove some of the multi-tenancy-specific scaffolding, like perhaps the categories and things we had assumed right now, and maybe replace them with something more flexible, like labels.
D: So the question is: if we do that, and it seems like there is a fair amount of overlap between the checks we're doing for multi-tenancy and the standardized pod security profiles, would that be a reasonable way to proceed, just using labels to identify which test suites we want to run? There's an example here for how you would potentially be able to run the command, and of course we can rename the tool to something more meaningful at the right time.
D: So if folks want to take a look at the checks we have currently implemented, and if there are suggestions for more, and if the feedback is that folks think this is a good direction, we can generalize this tool and move to using labels to identify these validation or conformance suites.
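As a rough sketch of the label-based direction (illustrative only; the benchmark tool's real check types and label keys are not defined here), selecting which checks to run could be a label-selector match over labels attached to each check:

```go
// Illustrative selection of benchmark checks by label. The Check type and
// the label values below are hypothetical placeholders.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

type Check struct {
	Name   string
	Labels map[string]string
}

// selectChecks returns the checks whose labels satisfy the selector.
func selectChecks(all []Check, sel labels.Selector) []Check {
	var out []Check
	for _, c := range all {
		if sel.Matches(labels.Set(c.Labels)) {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	checks := []Check{
		{Name: "block-privileged-containers", Labels: map[string]string{"profile": "pod-security-baseline"}},
		{Name: "block-host-network", Labels: map[string]string{"profile": "pod-security-baseline"}},
		{Name: "tenant-resource-quotas", Labels: map[string]string{"profile": "multi-tenancy"}},
	}
	sel, err := labels.Parse("profile=pod-security-baseline")
	if err != nil {
		panic(err)
	}
	for _, c := range selectChecks(checks, sel) {
		fmt.Println(c.Name)
	}
}
```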
D: I think we could move them, and potentially also decouple building the tool itself from the benchmark suites that are being executed, but the code would have to be picked up from somewhere, of course, to compile it in.
A: Yeah, I think it would be nice to figure out how you could compose this without the need for changes to the MTB tool, but in general it sounds like a good direction. This is step one, I think. Awesome, thanks Jim for the update.

D: Sure.
A: Cool, this is a design doc of note. Do you want to introduce it, and then...?
E: Yeah, sure. It might be easier to walk a little bit through the KEP as well, because they're going to point to the same proposal; yeah, we're in the process. I can, yeah, sure, give me one second.
E: Cool, okay, can people see my screen? Yep, cool. Okay, so hi everyone, I'm Amber. I'm here with a couple of other people from our team working on privileged container support for Windows containers.
E: Basically, we've been shopping this KEP for privileged containers around to a couple of different SIGs to get feedback, and there are a couple of changes here that are relevant to the auth community, which we thought would be great to get some feedback on as well. At a high level, privileged containers, which I'm sure many people are familiar with on the Linux side, are used for a lot of different scenarios in Kubernetes, especially for things like kube-proxy, etc.
E
So
we
have
been
working
to
figure
out
a
way
to
get
privilege
container
support
for
windows
containers
at
a
high
level.
Their
approach
that
we've
taken
has
mainly
been
quite
a
divergence
from
previous
implementations
for
different
windows.
Container
types
most
windows
containers
are
are
created
via
you
know,
server
silos,
however,
in
this
implementation
we're
going
to
take
the
use
of
job
objects
instead.
E: As a result, we have access to the host that the server silos don't make available for the other Windows container types, which allows for privileged-container-like behavior on Windows. The other thing that this KEP focuses on is the idea of host network mode. This is something that also has to be enabled for privileged networking scenarios to be workable, even with the job object model.
E: A non-goal is that we don't want to provide a privileged mode for certain types of Windows container scenarios, such as Hyper-V containers. We've gotten a lot of questions about that, especially as different services are moving to use Hyper-V containers for Kubernetes: if they're using Hyper-V containers, what exactly is going to happen with privileged containers in the Hyper-V scenario?
E: As of now, we're not trying to pursue any sort of nesting scenario where we put a privileged container inside of a Hyper-V container that would then do something privileged on the host. They would have to run side by side if a user is trying to use a privileged container with either process-isolated or Hyper-V containers.
E: Additionally, we are focusing support on containerd, so it's a non-goal to provide this support in Docker. The two use cases for privileged containers on Linux that are commonly required are daemon sets and also node plugins; we're targeting the same use cases for the Windows container model as well, and there are some caveats. The host network mode that we are mentioning is only targeted at privileged containers and pods.
E: We're not trying to expose host network mode for process-isolated containers or Hyper-V containers currently. Additionally, privileged pods can only consist of privileged containers: we can't currently mix process-isolated containers with the job object implementation of privileged containers, due to the way that the IP is shared across the pod. For privileged containers with host network mode, the container IP will be the host IP, so mixing them would just not work. So, a couple of the things that are required for us to actually get this implemented:
E
There
are
going
to
be
changes
required
in
oci
cry
and
kublet,
in
addition
to
some
changes
that
were
needed
in
container
d
to
enable
post
network
mode
and
privilege
containers.
This
work
has
actually
kind
of
already
been
done
internally.
We've
done
through
a
couple
of
tests
also
for
different
kubernetes
scenarios
to
make
sure
that
that
component
works.
There
is
a
working
prototype
demo
that
is
available
in
this
cat,
so
I
do
recommend
people
check
that
out
if
you're
curious
as
to
how
this
works
currently
moving
kind
of
up.
E
This
stack
into
a
lot
of
the
open
source
layers
we'll
have
to
make
some
changes
to
the
oci
spec,
the
cry
api
and
the
kubelet
to
to
get
this
kind
of
pass.
All
the
way
through.
This
is
kind
of
a
lot
of
the
areas
with
the
psp
changes.
Are
the
ones
specifically
that
we
want
to
bring
to
this
sig
to
see
if
we
can
get
some
feedback
on
as
well
so
kind
of
going
into
the
psp?
E: Specifically, we've done a rundown, with the help of James, who's on the call right now as well, of the different PSP fields that we think might have some application to this scenario and how they translate to Windows privileged containers, plus some further analysis for non-privileged containers in general. The ones listed here are the ones that we think might have some application or relevance to privileged containers on Windows.
E: There are actually very few of them that have a high priority for us to get enabled for privileged containers on Windows. For the alpha scenario, of course, we do want to get the privileged field in. The next ones that might be relevant are the different host network ones, which we would be targeting in beta.
E: For now, the support is only set to host network by default when we're looking at the alpha stage. The next one that comes into relevance would probably be GMSA, and that is something that would have more involvement, so it's something that we'd be targeting for GA. It would be great to get some feedback or thoughts on this listing here.
E: What that means is, essentially, that we have access to a lot of the host resources. As mentioned before, things like resource limits would be available, since that's something that's available through the job object. Some other cases we're looking at are secret mounting, and we're still investigating how exactly we're going to work with the container image: working with a job object, the privileged container images don't exactly require a full Server Core image, perhaps, and we're looking at how we can ship a different slim-based image to satisfy the requirements and reduce the footprint that these containers might have. But that's also under discussion and open to feedback. We dive into further details on the CRI implementation and the kubelet implementation as well, so I recommend people check this out for details beyond what I've mentioned here and provide any feedback from the auth perspective.
F: One of the proposals was to add a privileged field to the Windows security context API objects, okay.
G: ...object as well. So the proposal is to reuse the privileged flag that's at the security context level, and then pass that through to the Windows sandbox configuration.
A: Okay, but from the Kubernetes API, it will reuse the same field? Yeah, correct. Awesome.
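To illustrate what is being proposed (reusing the existing privileged flag rather than adding a new field), here is a hedged sketch of a privileged Windows pod built with client-go types. The pod name, image, and node selector are placeholders, and the final field layout may differ from what the KEP ultimately lands:

```go
// Illustrative privileged Windows pod per the proposal discussed: reuse
// SecurityContext.Privileged and run with the host network.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func privilegedWindowsPod() *corev1.Pod {
	privileged := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-windows-daemon"},
		Spec: corev1.PodSpec{
			// Privileged Windows pods use the host network per the KEP discussion.
			HostNetwork:  true,
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
			Containers: []corev1.Container{{
				Name:  "agent",
				Image: "example.com/windows-node-agent:latest", // placeholder image
				SecurityContext: &corev1.SecurityContext{
					Privileged: &privileged,
				},
			}},
		},
	}
}

func main() {
	_ = privilegedWindowsPod()
}
```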
A: Yeah, I think, independent of SIG Auth, this is a really big gap that we have today, and I'm excited for this.
A: My one comment was that PSP is, if you're not aware, deprecated. I think what we need to do is reconcile with the pod security profiles.
A: Some of the additional features, I don't think that's going to be very tricky. But one thing: since you're not modifying the public API version of the pod SecurityContext, I don't think we need changes to the PSP infrastructure, and we don't have any standard PSP policies anyway. And I think reconciling with the pod security standards, which it looks like are linked from the doc, might be trivial.
E: I can link it here as well.
A: Awesome. I would just say double-check those and make sure that they are still accurate with the changes that you intend to make, but they might require no change, since you aren't adding any fields to the Kubernetes API security context.
E: Yep, yeah, we brought it to them yesterday and they suggested following up here for a couple of things, but we're still getting feedback from them as well.
F: And again, a couple of folks have added some comments to the doc raising questions, and we're going to look into incorporating all of that back into this before an actual markdown version of this KEP goes up for review.
A: Yeah, so my take is that this feels pretty independent, as long as the pod security profiles are accurate with respect to the changes that you make. Other than that, I can't think of anything else immediately that is either concerning or that needs to be done.
C: Yeah, if the API isn't changing, then I wouldn't expect PSP to change. I think coordinating on the pod security profiles and standard profiles is probably the best place to do this, and having proof-of-concept stuff for Gatekeeper or whatever is great, but we really want to make sure that that central place is the point that all external policy providers can coordinate on and notice: "oh okay, this new thing is coming, here's how it's going to plug into our existing layered recommendations."
G: Yeah, that sounds good. And so there was an open issue that I commented on and went through for existing containers, and then linked to this KEP for the future. So I think the next steps there are to open up a doc and provide the Windows guidance for the various security features that we need.
H: Okay, so we'll definitely take that feedback, Amber, on the concern, but the rest looks like it's okay and on board. From a KEP standpoint, do you want us to tag you as well? Right now SIG Windows is the only one tagged in the enhancement issue, so do you want to keep an eye on it, or are you okay without that?
A: I think we're okay; I don't think so until the security context changes. I think we can just observe, unless we need to be explicitly called out. Cool, awesome.
H: So I think, with that, we can probably go back to SIG Node and present your feedback and then see what else is needed. Thank you.
A: Cool, so we have made it through the agenda. The rest of the items are the standing items, CI flakes and Testgrid. Jordan, do you want to give an update on the status of the 1.20 branch?
C: Yeah. All of the sort of phased approach to getting approved pull requests in, all of the ones that we were tracking, have been merged. So at this point SIGs are free to review, tag, and milestone their features and fixes, and 1.20 development is basically open for SIGs to manage their components now. So that's great. The unit test change that makes us stricter on unit test flakes has merged, and that has revealed some flaky unit tests; I'm not aware of any that belong to SIG Auth, so yay. So that is most of the updates that I have.
C: I think I called out last time a gap around upgrade tests, and that was specifically impacting SIG Auth, because we want to make progress on the types of service account tokens we inject into containers, and we need to have good confidence that that's not going to break clusters on upgrade. So someone driving that feature actually jumped into trying to get the upgrade tests that we have working again, such as they are, while we are waiting for SIG Testing to deliver the one true beautiful upgrade framework. That got opened today, so that's good to see progress; I'll be glad to have signal on that and unblock some features like that. Yeah, that's all the updates I have.
A: Awesome, thanks Jordan for the update and all the work you did to be able to make that a good-news update. Looks like we have one flaky, very flaky, test.

A: Awesome, then I'll ignore it. Yeah, oliver mirrors.
C: Maybe we'll get to the point where that is a standard we can expect, but it especially depends on the job. If we see those in master-blocking jobs, then I would pay attention to them, because those are the ones that have reserved signal, reserved resources, and are really blocking. If they are in jobs that we control, then I would pay attention to them. If they are in jobs that aren't release-blocking or release-informing, controlled by other SIGs, and there's one flake in two weeks...
A: Awesome. Let's get through these bugs real quick.
C: I think the right owners are on it; the scalability team is the one noticing this the most, because they're starting big clusters and doing stuff to them. I think Wojtek and I can either drive that or rope in the right people.
C: I would say this is important soon. Constructing a client set exercises this like 40 times, maybe, and so if you construct a client set inside the controller manager for every controller loop, you've got like 40 times; I don't know, 30, 40, something like that.
C: There were a couple of things that got put into the implementation after the design that we're kind of going back and forth on, so yeah, it's not quite closed again now.
C: This was the client credential one, right, externalizing the client credentials? Yes.
C: I think this got closed as unactionable, as something that should be a KEP, and then it got reopened, but I don't think it's actionable and I still think it requires a concrete design.
A: Presumably this is to use Wireshark for debugging the API server?
C: It'd be helpful to know what the intended audience is. If it's someone who is developing stuff inside the server, then that's probably reasonable, and it's what I would recommend. If it's someone who is running the server and wants to inspect stuff, then I don't like giving levers that I would never actually want to see used in anything I cared about, right?
C: So we're missing a field in the auth proxy and impersonation headers. I think we just...
A: But the implementation PR is not linked from the KEP. I'll close that offline and then triage that stuff.
A: Yep, the rest of them look standard.
A: Yeah, I think, selfishly, I'm looking forward to the GA of TokenRequest and the token volume projection. Is there anything else? We have GA of the certificates API; you marked that implementable?