From YouTube: Kubernetes SIG Auth 20170517
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20170517
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
Calling out a few PRs that are in flight or recently merged since the last meeting. The first two are ones that I'm working on, related to subdivision of kubelet access to the API. The first one is tightening who can create mirror pods, and ensuring that once you have a mirror pod, it has to stay a mirror pod. That was just a validation change that already merged. The second one is an admission plugin (optional, like all admission plugins) that will limit nodes to only being able to create, update, and delete themselves and pods scheduled on themselves. Currently, nodes can update the status of any node and the status of any pod, and this would let you limit them to only modifying themselves and their own pods. That's open for review.
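For context, the optional admission plugin described here is the approach that later shipped upstream as the NodeRestriction admission plugin. A minimal sketch of enabling such a plugin, shown as a fragment of a hypothetical API server static-pod manifest (image name and plugin list are illustrative, and the flag spelling varies across Kubernetes versions):

```yaml
# Illustrative sketch only: enabling an optional admission plugin that
# restricts nodes to modifying only themselves and their own pods.
# NodeRestriction is the name this work eventually shipped under.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.example.com/kube-apiserver:v1.x   # hypothetical image
    command:
    - kube-apiserver
    - --admission-control=NamespaceLifecycle,ServiceAccount,NodeRestriction
```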
D
Sure. Basically, we have an approver for kubelet TLS inside the controller manager, and the PR here makes it pluggable. When a kubelet requests a certificate, there are a few steps. It first requests a client cert for itself, based off of probably a bootstrapping token or something like that, and then once it has that client cert, with the identifiable username of the node, it can then request a serving cert.
D
It can renew a serving cert, or request a client cert as those two expire. So rather than having a general policy of "yes, we're going to allow any kind of request," this switches the current group approver to use subject access reviews, and so an administrator can say this kubelet may request a client cert, or request a serving cert, or request renewals or initial client certs, based off of some rules.
D
So the idea is that you would use RBAC or some other authorization policy to actually say what types of requests a kubelet can make. I apologize if I'm not explaining that super well, but there are some examples that I'll drop into the issue that it was closing.
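The subject-access-review approach described above can be illustrated with RBAC: the approver translates each kind of CSR into an authorization check against a virtual subresource, so ordinary policy decides which identities may have which certificates approved. A hedged sketch (the selfnodeclient subresource matches what later landed upstream; the role name is invented):

```yaml
# Sketch: allow renewals of a node's own client certificate to be
# auto-approved. The approver checks whether the requester is
# authorized to "create" this virtual subresource.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: approve-self-node-client-csrs   # hypothetical name
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeclient"]
  verbs: ["create"]
```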
A
Yeah, I'll point out that this is not required. If you have more robust ways of verifying certificate requests, either by verifying that the public key associated with the request is paired to the node in some other system that a cloud provider can attest to, or by having ways of running processes on the node itself to verify the CSR, you're certainly welcome to approve certificate requests that way.
D
So this was something that was requested by some of the kubeadm people. It was requested that SIG Auth speak on whether we feel it's okay for these certificates used by the control plane, the serving certs and stuff like that, to be put in secrets in kube-system. For self-hosted control planes, you have the API server running as a pod within the control plane, and this PR would start injecting the actual certificates that the control plane uses.
D
There is no bottom level, actually. At least the way bootkube works (I'm not entirely familiar with the way kubeadm works) is that there is some initial bootstrap process by which an initial API server and an initial control plane run and create near-identical pods. Not mirror pods, but actual real pods in kube-system, and then the initial bootstrapping control plane goes away. We personally do checkpointing: we have a process that will take the description of the pod and write it to disk.
D
So if all the masters reboot at the same time, then the kubelet, via the checkpoint, will sort of restore it from known good state. But part of doing that is that it is convenient to have the TLS assets, or whatever of their configuration the self-hosted control plane would need, in Kubernetes itself. So within that theme, this PR attempts to keep the assets that the API server, for instance, will need in the API. This includes serving certificates and such.
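As a concrete illustration of what "keeping the assets in the API" means, the self-hosted control plane's TLS material would live in ordinary Secret objects in kube-system. A hedged sketch with invented names and placeholder data:

```yaml
# Hypothetical example: an API server serving certificate stored as a
# Secret so a self-hosted control plane can mount it, as discussed above.
apiVersion: v1
kind: Secret
metadata:
  name: kube-apiserver-serving-cert   # invented name
  namespace: kube-system
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...   # base64-encoded certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi...   # base64-encoded key (placeholder)
```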
D
Yeah, I definitely agree that this is a less secure way of running it than just having it on the host or something like that. I guess the real question is, you know, with the improvements to secrets sort of coming in the future...
F
A couple of things, David, I think, to consider here. The first one is that, generally, these things are sitting on disk. If somebody compromises a controller in some crazy way, they can launch pods that run on the master nodes, that let you map in host directories, and then you can go ahead and, so, like...
G
We haven't really, so, this is kind of the broader question: right now there are about five channels whereby you can compromise a cluster pretty much instantly. Once you get to the point of taking over a controller or seeing all secrets, there are four or five things that aren't on the short-term roadmap to block that.
G
Everybody's ingresses today ask for all secrets by default. What I was going to say is there are four or five avenues, and there's a real challenge that we need to go block all of those off. Until those are blocked off, doing this doesn't actually make the cluster less secure. But to David's point, and I think this is the tension, right: we want to get to the point where we go do those security things.
G
How do we balance now with being able to separate and tease it apart? Getting access to kube-system secrets will always be very, very bad as long as there are any controllers in that namespace that have service account tokens, so this doesn't make the cluster less secure. It just means that when we go to close the hole in the future, it's one more thing we'll have to take into account. But, you know, that's not the...
G
If we think that being able to use secrets or create pods in kube-system (when you create a pod, you can run a process that gets access to the secret) is something we want to tease apart in the future, doing this now might complicate that. If we think that getting access to secrets in kube-system is always going to be game over, then this isn't really any worse than what we have today with the controller service accounts.
I
This feels to me like we're going in a different direction to sort of where I'd like to go, where we're using kind of external secret stores that are much better at delivering the enterprise features that we want for the most sensitive secrets, and we're talking here about putting a very sensitive secret inside the existing secret system. Well...
I
The root that we use now to mint those identities, I don't think, should be inside the secret system. I think we should aim for non-exportable if we can, like some sort of interface that lets you make that key so the attacker can't reach it.
G
I think we had this discussion at the beginning of kubeadm, which was: there are really two kinds of security models. One where you build up the rings, so you isolate etcd, then you give credentials for etcd to the masters, and then you isolate the masters from the nodes and you don't let the master schedule workloads on the masters. That generally fits with a lot of classic security models, where you can reason about who has access to etcd.
G
Yeah, and if this is a clearly defined trade-off, I mean, you know, David, to your concern: if we clearly delineate two models, self-hosting, where the goal is to get up as easily as possible, and static hosting, where you trade some convenience for better isolation, do you still view this as hampering our ability to impose those protections in the future?
E
It might seem a little strange. I would feel better about it if it were in a separate namespace, just because I think partitioning the problem would make it a little easier at that point, right? We do have pods that run inside of kube-system, for instance. A lot of people start pods in there for things like DNS, and there was at least something else that I saw when I was looking...
A
One of them could have grabbed an identity for some other node, and it's very hard to go back in after the fact and lock that down. But that's a lot easier, right? So if you're setting up something small and you want to spin up 10 nodes, maybe that's acceptable. But we also support the model where you can tell the node to join, given a specific credential. So we support both of those approaches.
D
There are also just a lot of sensitive things stored in secrets, and this is something we keep getting back to. I think the long-term goal is going to have to be to improve our ability to store secrets. If we can't use our own primitives to store things securely, that seems like a problem in our mental model here.
F
I agree. I think identity is definitely the thing we need that's missing. But if we're talking about what's good in a secret store, it's all those things I laid out in the doc, right? And so if we're going to go down this path of building a really solid secret store inside Kubernetes, that comes with, you know: people will be asking for HSM support, people will want very, very detailed logging.
I
Yeah, I understand. I agree with that, and that's what I was trying to get at in the secrets talk: giving us these identities so we have the ability to use those external stores and take advantage of those features, which are already mature. You also get the security advantage of having a completely separate system, so with a compromise inside your Kubernetes cluster, you still have some guarantees about...
G
I think, even going past the secrets thing, the challenge here is that what we really need to do is define our threat model and our layers, and put the framework in place so people can reason about what those layers can do. We have tools that can be used to construct a secure system. We committed to this for 1.7, and I am just as guilty as anyone of not helping drive this to a conclusion. We probably need to double down on getting to the point where we can put the framework in place, so people can say: how do I think about this? How do we make recommendations, and how do we prioritize the features that make deep security possible and simple security understandable?
A
And, Jared, to your point, I do think putting them in secrets makes it less secure than as files. It's objectively harder to get host write permissions than it is to get secret read permissions. Lots of things have the ability to read secrets that can't go create pods in kube-system.
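One way to narrow "secret read permissions" is RBAC's resourceNames, which scopes a grant to specific named secrets instead of every secret in a namespace. A minimal sketch with invented names:

```yaml
# Sketch: a Role that can read exactly one named secret. Note that
# resourceNames restricts get/update/delete, but cannot restrict
# list/watch, so broad watchers would still see everything.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-controller-cert    # invented name
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["my-controller-cert"]   # invented secret name
  verbs: ["get"]
```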
G
Yeah, I was going to say: with all the discussions we've had around identity and access to external secrets, there's no reason that service account tokens have to be in the API, other than that it's convenient for people to use. A service account token is an assertion by the node that this container is this container, and if you don't trust that node, then the API server can leverage whatever it is to make that decision. Getting to a point where we don't have service account tokens, or they can be turned off...
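"Turned off" already had a concrete knob at the time of this discussion: automountServiceAccountToken (added in Kubernetes 1.6) suppresses the token volume on a pod or service account. A minimal sketch with invented names:

```yaml
# Sketch: opt a pod out of receiving a service account token, so a
# compromise of the pod does not yield API credentials.
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod            # invented name
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
```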
G
And I don't know if this is a discussion for the bootstrapping group or for kubeadm or not, and it probably doesn't belong here, but the best way to subdivide anything, including self-hosting, is to actually subdivide it, as opposed to keeping it all together. So down that path: if the secrets were in a file versus in the API server, it's also relevant to say maybe there should be a bootstrap API server that is totally separate, that is only accessible on one machine.
G
That's the thing you go back to when you want to bring the cluster back up. I know it's been discussed, and I don't think we have to discuss it here, but that is another way to subdivide secrets that works today, that doesn't require all of these features in order to continue to be secure.
F
I feel like the reality is that if somebody can run a pod in kube-system, they own it, right? If they can read secrets, they own it, and I think that's the fundamental problem. I mean, thinking about this, it would almost be safer to actually put this in a third-party resource that nobody else is watching, in a separate namespace, than it would be to put it in a secret.
C
I think when you say kube-system, I'm hearing two things: there are certificates for nodes, and there are service account tokens for important service accounts. I think, yeah, we need to get that stuff out of kube-system. There's no need for it to be flexible that way; it just needs to appear in the right place.
A
So that's where you get into the chain of trust, right? Rather than a node taking every credential and putting it into the API, where anyone who can read the API can see it, at the point where the service account credential is needed, something requests it and it's created at that point. So when a node runs a container, the node says "give me a credential for this," and you don't have things persisted and readable by anyone. That's the auditability problem that we have with secrets today.
A
I don't know how we get from the shape that the secrets API has to something that doesn't have that problem, and that's why I question "we'll just put more stuff in secrets, and we'll improve secrets, and eventually we won't have this auditability problem." I think the way the secrets API works, where everything stays readable, is problematic.
C
Nodes need to come with identities. If you're on a cloud provider, cloud providers are in a position to have nodes pop out of the ether with identities pre-provisioned that we can trust to have them join the cluster. While, yes, our CSR flow is cool as a self-hosting thing, I don't think it's...
C
...how you'd want to do it on a cloud provider, because it just unnecessarily exposes the secrets. So I think, then, you have to have some kind of per-machine installation procedure that happens prior to the machine joining the cluster, and I think that is the place where you hook in the node identity. So node identity, I believe, we can handle outside of Kubernetes for people who are sensitive, who want serious levels of security.
B
I agree with Eric. I do not believe that Kubernetes should prescribe "this is the way that you establish the identity of the kubelet." Different infrastructures will have different ways of establishing node identity, whether it's with flashing a hardware security module, or a cloud provider making a VM with a vTPM on it, or anything of that nature.
F
Node identity, and establishing node identity, is a separable problem from Kubernetes. Having components then leverage that node identity to stitch themselves together as sort of the root control plane is another problem. Having workload identity be able to delegate and base off of that root of trust is a separable problem, and I think Kubernetes can integrate with that, but I think there are places where that exists outside of Kubernetes. So I think we're going in circles here.
F
At the end of the day, if we had a strong identity system, then all these problems would go away. We don't have that, and so that's why we're trying to sort of bootstrap this stuff up from ground zero. Stuff like the certificate API itself would go away; we wouldn't need that if we actually had a strong identity system, I think.
F
We could have a self-hosted, sort of kick-the-tires type of thing. That's great, but I would view that as not an official Kubernetes API, but rather a sort of stubbed-out implementation to get you up and running with a single-binary type of thing.
C
That's for node identity. For pod identity, I think we also want the pod credentials just showing up from behind the scenes, rather than flowing through the Kubernetes API, and I believe we can also get there. The important thing is the labels and the names and the serialization, all that stuff. It's not about what path the bits take to get to the kubelet or the node; if they pop up out of a virtualized path or something, that should also work.
G
kubeadm's goal is to make it easy to stand up a cluster so that people can use Kubernetes in a reasonably secure way, and the kubeadm team feels that "reasonably secure" is not made significantly worse by this. Is the kubeadm team okay with the possible implication that doing this may be convenient now but might cause more problems down the road? Because I think we've established this isn't going to make anything much worse, as long as it's communicated clearly to the user what they get.
F
How about this (and this is piggybacking on some of the kubelet stuff): what if we had a certain type of secret? Well, it doesn't freakin' show up in the URL, but okay. So let's say we had a namespace called kube-super-secret, and we had some sort of special authorizer that you call...
A
This is getting into the secret subdivision stuff, and the cross-namespace watch and list. When you try to combine that with filtering of specific namespaces or specific types, we've gone around in circles trying to figure out how to match the current API with something that lets you subdivide, and I just don't see it, actually.
C
What this gives us is letting you rapidly build things and trade off speed for security, and then we come back later, after the pattern is proved out, and do it in a more secure way. I think it's fine if we have this continuous process where new ideas get implemented using secrets and then get pushed down as they become established.
C
I think the underlying problem is: we have namespaces for tenancy of users, but controllers need to access all namespaces. They need another type of tenancy that limits them only to their controller's secrets and their controller's pods, because right now you don't want to install a third-party controller, because it might accidentally delete pods that you don't want it to delete, or get secrets you don't want it to get. So I think we need a discussion where we figure out how we're going to limit...
G
There are integrations that target a Kubernetes cluster that don't need global visibility on secrets, just because you can still make quick-and-dirty, useful things that don't need global access. We'll just put that into that bucket, and we need to make sure we don't repeat that. I think the best part of this discussion is that it does highlight that we need the threat model and the layering to be really clear. That would actually be good.
I
So the short-term stuff is kind of the database encryption proposal that I've also been working on with Clayton, and that he's been writing code for, and Jordan's got the scope reduction of the visibility of secrets, the problem of all nodes needing access to all secrets, trying to reduce that using an authorization controller. And the kind of medium to longer-term stuff is, yeah...
I
I don't want to put them in my containers; I don't want them in my source code. I need some place to store them. I've already got Vault or something similar that's storing this stuff for all my other systems; I just want to use that, and to get those kinds of Vault secrets into my application.
G
I don't think it's any surprise that a bunch of folks, both on this call and in other areas, are asking the question. You know, we're reinventing everything in Kubernetes; the next thing we're going to go reinvent is Kerberos. So we're like, "hey, we need to come up with a way to give processes identity," and then, you know, start handing that identity out to people. So I figured...
G
It would be good to try getting a set of people who are interested and involved together, to kind of work through the implications, to get to either a proposal or a set of use cases, where we can kind of define what the end goal is, but something just short of implementation, to at least ensure that all the issues have been raised and that there are folks involved. It seems like there was a reasonable amount of support and sign-off.
G
The model suggested was to do a working group, which we have tried to do with resource management. It worked okay; there's been some follow-up about how you communicate anything cross-SIG, and we said working groups can kind of work for that when we want to get something done. So is there anyone on this call who is concerned about doing something like that, or who wants to object to or alter the arc of what that might be? It's intentionally very vague, other than container identity.
G
...spend all of our time trying to switch from Zoom to something else. So, yes, and I think we may need to break it down into smaller problems, even, which would be a good thing. I guess we can ask whether there's any dissent amongst the three people, or some kind of... no, I mean, yeah, but we could probably have lurkers. Well, a lot of people are going to look at resource management.
G
One of the concerns, I think, to Eric's point, is that there are a lot of topics that will interlock and overlap. I think a good goal for the working group should be to figure out what is explicitly left out of scope, and we should try as much as possible to carve it down to the simplest possible things that we all agree on, that...
G
...allow someone to make forward progress, like getting to the point where a pod or a container has something inside of it that lets it say who it is, and being, I guess, as flexible as possible for experimentation, even if there's something that we may all want to agree on. I don't know that everybody in the world is going to sign up for Kubernetes Identity 1.0; all those Kerberos users will probably just laugh at us. So I do think there's some level of, you know, we may want to be pretty...