From YouTube: Kubernetes SIG Auth 20170712
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20170712
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
B: Everybody, this is the SIG Auth meeting for July 12th. So we have some demos — we have a demo today, we have a little bit of discussion, and then the big item is 1.8 planning. First off, just congratulations, everybody, on the 1.7 release. I particularly enjoyed the fact that this SIG was highlighted as one of the big feature shippers — or at least feature hardening.
C: Exactly, yes. So for anybody who isn't already familiar with it: the Center for Internet Security publishes benchmarks of best practices for keeping your hosts secure. They recently released this Kubernetes 1.6 benchmark, and there's work in progress on a 1.7 version. The benchmark includes a whole bunch of tests, kind of like this, where it shows you what commands you should run to check whether or not you're complying with these recommended best practices.
C: So what we've done with kube-bench is basically automate this. kube-bench is open source, sitting there on GitHub under aquasecurity, and it's essentially a Go implementation that will run through all these different tests for you. You can install it with a Docker image. So this is actually a Kubernetes master that I've got here.
C: So the idea is that we have those tests documented in a kind of general format, kind of like this, so it's easy to update the config files as the benchmarks get updated or as new versions get released. We just have to add different configuration files that differ on what needs to happen for each of these different tests.
C: You can also get this output in JSON format, so it's easy to integrate this kind of testing with automation across your whole cluster. And that's kind of it, really. It's there on GitHub; we'd love comments, feedback, contributions, anything like that. I thought I'd mention a couple of coming-soon items and known issues. I mentioned earlier that the 1.7 benchmark is going to be released soon, so we'll obviously be updating the tests to comply with that.
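Since the JSON output is meant for feeding into automation, here is a sketch of how a script might consume it — assuming a simplified, hypothetical result structure (the real kube-bench output schema may differ):

```python
import json

# Hypothetical, simplified kube-bench-style JSON result; the real
# output schema may differ -- this only sketches how automation
# could aggregate pass/fail counts per benchmark section.
raw = """
{
  "tests": [
    {"section": "1.1", "pass": 10, "fail": 2, "warn": 1},
    {"section": "1.2", "pass": 7,  "fail": 0, "warn": 3}
  ]
}
"""

def summarize(report_json):
    """Return total pass/fail/warn counts across all sections."""
    report = json.loads(report_json)
    totals = {"pass": 0, "fail": 0, "warn": 0}
    for section in report["tests"]:
        for key in totals:
            totals[key] += section[key]
    return totals

print(summarize(raw))  # {'pass': 17, 'fail': 2, 'warn': 4}
```

A fleet-wide job could run this per node and alert when the fail count is non-zero.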
C: One thing that we've noticed is that, depending on the installation tool — you know, whether it's been installed with kops or kubeadm or whatever — you get different executables or different config file names, and that doesn't really correspond to what's in the benchmark. So we're making some changes to make it easy to say: actually, this install was done with kops, and this is the set of config file locations that you should use — and likewise for other tools.
B: Awesome. Are there any other questions on this tool? Cool. Just as a general note, the CIS is updating their benchmark for 1.7. I've dropped a link in the agenda items — in the notes for this meeting — talking about how you can get involved with that. It's basically signing up on the CIS website and then starting to review some of the items there.
A: I've been reviewing them — this is Jordan — I've been reviewing them, giving feedback, and writing some topics. I've not put as much time into it as I would hope, but if other people want to jump in and just look at a couple, I think that would be really helpful, just to kind of spread that out. If you are familiar with one of the sections — kubelet, or API server auth, or one of those — even just looking at two or three in depth would be really helpful.
G: In the Catalog SIG, one thing that happened during the six to seven weeks that we've been discussing this is that there was guidance from SIG Auth to stop using references to secret keys from other API resources when the use of that key did not target a pod. And we talked through a couple of different permutations of how we could achieve that in SIG Auth in the past.
G: I spent some time talking to Jordan earlier today, and with Clayton Coleman, and we were able to talk ourselves out of using any of those mechanisms and back into using a reference to a secret key from these resources — resources that would hold the set of parameters to pass, so that they could contain secret information and piggyback on top of all of the current and future protections for secret data that we have and will have.
G: What I wanted to discuss today is to get some closure on that: basically, describe what I think we should do, and make sure people feel it is sufficiently in line with how you would treat the same problem in other API resources and other parts of the system, so that we can implement something with confidence. It's very urgent for us to get closure on this; it's needed for a number of different use cases across different vendors.
B: There were a lot of references to secrets of secrets — just to clarify what the actual problem is, what was the solution that was proposed?
G: So the original solution is something topologically equivalent to this: from the service instance or service binding API resource, have a reference to a secret and one of the keys of that secret, and have that secret key hold the parameters. The Service Catalog controller would dereference — would read — that secret and use it to form the payload that is sent to the service broker.
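The shape being described — a binding resource pointing at a secret and one of its keys, with the controller doing a targeted get to build the broker payload — can be sketched like this. All field and resource names here are illustrative, not the actual Service Catalog schema:

```python
import base64
import json

# Hypothetical in-memory stand-ins for the API server's view.
# Kubernetes stores secret values base64-encoded.
secrets = {
    ("test-ns", "binding-params"): {
        "params": base64.b64encode(b'{"password": "s3cret"}').decode(),
    }
}

binding = {
    "metadata": {"namespace": "test-ns", "name": "db-binding"},
    "spec": {
        "parametersFrom": {
            "secretKeyRef": {"name": "binding-params", "key": "params"}
        }
    },
}

def resolve_parameters(binding, secrets):
    """Dereference the secret key reference with a single targeted GET
    (no bulk watch on secrets) and decode the payload the controller
    would send to the broker."""
    ns = binding["metadata"]["namespace"]
    ref = binding["spec"]["parametersFrom"]["secretKeyRef"]
    secret = secrets[(ns, ref["name"])]  # one GET, by name
    return json.loads(base64.b64decode(secret[ref["key"]]))

print(resolve_parameters(binding, secrets))  # {'password': 's3cret'}
```

Because the secret value never appears inline in the binding, it inherits whatever at-rest and access protections secrets get, which is the point of the proposal.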
G: One alternative example would be to not return this information as part of a get or a list or a watch, and instead give you a special subresource on the resource that you would have to do a get on if you needed the secret information. That's problematic for a number of different reasons. I think that was really the crux of what we had discussed as a possibility.
A: It's a similar access pattern to ingress — the ingress controller, right? You have a controller that has a specific job to do things with a specific type of resource, and as part of doing its job it needs some confidential information that we want to be able to be referenced. So in the ingress case it's the cert and key for the route, right? In the service controller case it's confidential parameters that are input to satisfying the service binding — like a database password or something like that. So it's very similar to what the ingress controller has.
G: In the meantime, we don't want to build such a controller. So what that means is that if we have API fields that are basically a reference to a secret and a key within that secret, then when the content of that secret changes we will either need to rely on the controller resyncing the resource that references that secret and doing the state reconciliation of "is what I would send now what I've already sent — if it's not, I need to go ahead and send it."
G: So we'll either need to rely on resync, or have users make some other change that results in a watch event on the resource that references the secret — and that's not the greatest user experience in the interim. But if we can, in the future, use a resource-by-resource or secret-by-secret watch to avoid having a controller that has the wrong usage pattern...
G: I think it's acceptable in the short term, in lieu of making resources that have new, unusual semantics around secret fields. So, since I talked in circles just a little bit, I'll restate: what I'm proposing is that in the Service Catalog resources we reference a secret and one of its keys, and that the controller that performs actions to fulfill this API not watch all secrets — that it do gets on a case-by-case basis right now, and eventually move to use a bulk-watch-type facility.
A: I was gonna say, it really depends on the usability aspect, which is: can we make it easy to grant this? Like, I create a secret and I create an ingress — who's making the decision, on a cluster-by-cluster or ingress-controller-by-ingress-controller basis, whether you really want this particular secret to be accessed by this particular ingress controller? There could be a set of options someone could go through.
I: Maintenance of it could be an issue. We've never really — like, OpenShift has gone pretty far down having UIs and CLI for policy expression, and we've learned a lot of things from it. Only a few of those exist in Kube today, and we can probably take those lessons learned and say, well, this is a new use case — what's the right thing? It does take work, though, to make that happen.
I: Does this controller have this permission? You do an RBAC check, and therefore grant access — to this controller, or to this other thing — to these secrets automatically. It's kind of an automated denormalization of RBAC, but for small-cardinality things it might actually be a very reasonable generic solution.
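The "automated denormalization of RBAC" idea might look roughly like this: a hypothetical grant-management controller scans resources of a known, trusted type for secret references and materializes a narrow per-secret grant for the consuming controller — one secret at a time, never "all secrets." All names here are made up for illustration:

```python
# Sketch only: compute the narrow grants a hypothetical controller
# would write out as RBAC rules, given resources that reference secrets.

def grants_for(ingresses, controller_user):
    """Return (user, verb, namespace, secret-name) tuples, one per
    secret actually referenced by a trusted resource type."""
    grants = set()
    for ing in ingresses:
        ns = ing["namespace"]
        for ref in ing.get("tls_secret_refs", []):
            grants.add((controller_user, "get", ns, ref))
    return grants

ingresses = [
    {"namespace": "team-a", "tls_secret_refs": ["tls-cert"]},
    {"namespace": "team-b", "tls_secret_refs": ["tls-cert", "extra"]},
]

print(grants_for(ingresses, "system:ingress-controller"))
```

The small-cardinality point shows up directly: the grant set grows only with the number of referenced secrets, not with all secrets in the cluster.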
I: The definite worry would be whenever someone adds a new custom extension API or CRD — we wouldn't want this controller to go "oh yeah, I found an object reference in this thing, grant access." There needs to be an expression of intent to use the secret that a user makes, in a fairly generic or specific way, depending on whether they trust the extension or not.
A: Both cases exist. There's the cluster-wide one, where the admin sets up the ingress controller and doesn't want his users to have to know the subject of the ingress controller — so he wants to say any secret referenced by an ingress should be accessible to the ingress controller. But, like you said, there's also the user-by-user one — and you get into this more when you're running sort of partitioned controllers, where there are controllers that have access to a subset of namespaces, and you kind of, as a user, get to opt in and say:
A: "Oh, I want to use this feature, but in order to do that, as the user I need to allow access to these secrets to this thing." Just the usability of that — the feedback loop where I create the object that drives this controller and then it's stuck because I haven't given it the access it needs — needs thinking through: how that gets surfaced, and how the user takes action to close the loop on giving that access.
I: The challenge — so, in all the various musing around this, the problem was doing get, post, update, and delete, because list and watch are easy: we can check those at request time. For every other verb we have to introspect the object, and that means the authorizer becomes — instead of the authorizer being request-based, it becomes request-plus-body-based, and it also means... okay.
F: Okay, so I'm sensing from a lot of this discussion that we've been talking about this stuff for a long time, and there's a bunch of coming at it from the edges — like this Service Catalog stuff. Is it time to have a really focused, deep discussion and try to really come back with an answer here? I think so.
B: The sort of thread on the ingress talk that we opened — sort of our concerns about resources, or controllers, doing bulk watches on secrets — expresses a lot of the concerns that we have, and maybe we should formalize that. So, just to give an answer for the Service Catalog stuff: we definitely don't want Service Catalog to have to watch all the secrets in a namespace. Is that fair enough? Yes.
A: That seems like a reasonable thing to do now. It lets us move in a good direction going forward and still gives us a limited surface area to grant permissions around, however we do it — manually, or automatically, or even broadly in a cluster where they don't care. You know, if the controller is nicely behaved and you're on a cluster where they actually don't care about subdividing, for whatever reason, they don't have to fiddle with any of that stuff; they can just say the controller can get any secret.
F: My worry, Jordan, is that if this thing becomes so fiddly and so hard to use, people will do it wrong, or they'll just, you know, give access to everything. So I think without tracing the usability and the user experience around this, I'd have a hard time imagining where it's all going to end up.
D: One issue that came up recently was comparing what kubeadm does to Docker Swarm — specifically the bootstrap discovery protocol, which is one way of using kubeadm, but it's the way that's documented as the example flow when you go to kubernetes.io and you want to spin up a test cluster.
D: The way it works right now is with a symmetric token you can generate, and that token is just a random byte string that's used as an HMAC key. So you can generate it ahead of time, out of band, spin up your API server, spin up your workers, and they can use that shared secret to sort of find each other.
D: So one of the issues is that the tokens, by default, don't expire right now — if you just run kubeadm init, you get a token that has an infinite expiration. The other issue is that with just two parties sharing a secret, they can authenticate each other; but as soon as you're talking about a server and multiple clients that all share that one secret, any client can impersonate the server to another client.
D: So the proposal is: there's no change server-side to the discovery API — it stays served as-is. kubeadm on the client side gets support for two new pieces of code. One piece of code in the join command can accept a TLS pin — it's a public key pin on the root CA.
D
So
if
you
know
your
root,
CA
cert,
ahead
of
time,
you
pass
that
into
good
about
him
join
and
it
only
trusts
servers
that
are
signed
by
that
root
and
then
also
I'm,
also
adding
some
named
name
validation.
That
goes
with
that.
The
second
piece
of
code
is
in
kuku
bottom.
An
it
at
the
end
of
about
a
minute
generates
an
example
command
for
you
to
copy
paste.
That
example
command
now
can
have
the
key
fingerprint
and
intuitive
right
so
sort
of
like
the
two
cases
that
I
called
out
in
the
proposal.
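The pin being described is a hash over the root CA's public key, rendered as "sha256:&lt;hex&gt;". A minimal sketch, using placeholder bytes in place of a real DER-encoded SubjectPublicKeyInfo extracted from the CA cert:

```python
import hashlib

def ca_cert_hash(spki_der: bytes) -> str:
    """Render a kubeadm-style public key pin: SHA-256 over the CA's
    DER-encoded public key, prefixed with the hash algorithm."""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes -- in reality you would hash the
# SubjectPublicKeyInfo pulled out of the root CA certificate.
fake_spki = b"\x30\x82\x01\x22placeholder-not-a-real-key"

pin = ca_cert_hash(fake_spki)
print(pin)
# A joining node is then told something like:
#   kubeadm join <api-server> --token <token> \
#       --discovery-token-ca-cert-hash sha256:<hex>
```

Pinning the public key (rather than the whole certificate) means the server can reissue the CA certificate without breaking the pin, as long as the key pair is unchanged.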
D
One
Joe
called
the
self
stitching
cluster,
where
you
kind
of
have
a
very
simple
orchestration
system
that
just
wants
to
generate
a
token.
It's
been
a
bunch
of
nodes
up
and
I'll,
just
sort
of
stitch
them
cell.
Together
that
case,
like
it's
hard
to
have
a
public/private
key
pair
injected
into
that
without
passing
permanent
secrets
over
metal
channel.
D
D
D
D
D: This lends itself to being able to — you can pin in that case too. There's also another sidebar here, which we talked about in Slack yesterday: this whole discovery mechanism is still — I think, correct me if I'm wrong — still kind of targeted at the tutorial use case, and if you're building your own sort of installer, you can still generate a whole kubeconfig, yeah.
A: It feels like we're kind of making the copy-and-pasteable command less and less copy-and-pasteable, and closer and closer to "just take this kubeconfig as your bootstrap" — at which point you have a bundle and you don't need the symmetric key. I mean, yeah — I thought we had talked about some of the fingerprinting stuff before, and there was...
A: There was resistance to that, because it means you can't rotate your CA server-side. And, like, I don't know — the copy-and-pasteable shared-secret bootstrap fills a niche, but I don't know what that niche is. Like you're saying: one node with that secret can intercept traffic and act like the server to another node, but it can also just request an identity and be the other node, given the way that it's set up today. So you're right.
F: I'm gonna say no — not the node authorizer, but the node certificate approver. There's also, you know, ongoing talk and work about being able to use essentially a plug-in model for underlying extra data, to actually add more data to that, so it's going to be even more difficult for that node to impersonate anybody. In terms of copy-and-pasteable, I think, you know, our limit here is we want to keep this under, like, 80 characters. I think we're still there — or close; it may be well past it.
F: But at the end of the day, I mean, we can make this thing optional. You know — hey, I think I actually suggested that.
F: So if you look at the corresponding, like, Terraform for the self-stitching scenario versus the Swizzle scenario, it's a hell of a lot easier, and you're not passing any long-term secrets over less-than-ideal channels. And so, if we had gone through with only the certificate fingerprint from the get-go, we still would have needed a token for the initial authentication — for getting the certificate, right?
D: I mean, the actual implementation here is really simple. Basically, right now, the kubeadm join command, when it starts up, makes a totally unauthenticated, unsafe sort of TLS connection — with no validation — to get the cluster-info kubeconfig from the server. So that connection right now is just kind of over unvalidated TLS.
D
Back
and
you
check
an
age
Mac
with
the
token
that
you
have
the
the
change
that
I've
made
basically
is
during
the
initial
connection.
You
save
off
the
certificate
chain
that
the
server
gives
you
and
then,
after
the
fact,
so
you
still
do
the
same
token
validation
with
the
H
Mac
as
before.
That
still
is
there.
D
That's
that's
the
actual
sort
of
code,
change,
I.
Think
there's
an
opportunity,
like
so
touched
on,
like
a
sort
of
like
plug-in
model
for
node
attestation
and
like
being
able
to
plug
into
something
like
ec2
identity
and
use
that
to
authenticate
and
I.
Think
that
that
could
be
also
a
pattern
for
bootstrapping
the
clients,
trust
of
the
server
to.
F: I think the final parting thought I want to leave here is that this is in some ways reactionary, just because we want to make sure that we have an option that is as secure as Swarm's. To be honest, I don't think this is a super serious attack vector — it's not something that I'm losing a lot of sleep over — but it is going to be something that could be sort of a checkbox or a talking point if we don't address it. Yeah.
A: Sorry, who's muted? Sure — in the few seconds we have left: this wasn't really intended to be a full planning meeting. This was more a chance for people to indicate what they are working on, so that we don't duplicate work, and so that people are aware if they are interested and want to help, or, you know...
A: I mean, the stuff we talked about with Paul — the secret access controller and that stuff — I'm not sure if that falls into the secrets roadmap or not. It might partially, but it feels distinct: a lot of the secrets roadmap stuff is about encryption and external integrations, not necessarily about access patterns by controllers. So it's certainly relevant.
I: I'd opened an issue about storing secrets in etcd — actually, at a certain size and scale of cluster it becomes very bad. I don't think it affects most people; anyone with fewer than a few hundred namespaces is probably not affected. But at very large scales, putting the CA into the secret rapidly leads to very, very large amounts of memory being used for data that's absolutely the same across the entire cluster. So I opened an issue; I'll link it in here at some point.
A: And that was some of the stuff we were looking at: experimenting with modeling it as on-demand injection via, like, a volume plug-in — a FlexVolume plug-in or something — that something like Vault could tie into, or something that requested, like, a service account token on demand, so that it actually never lived in etcd; it was just generated in place, requested in place, and then injected. Yeah.
I: There's a little difference between the controller identity and the service account token, if you think of the service account token as being a non-pod-specific or non-container-specific token. What we really want is for the container's identity to be: this container, in this pod, on this node, in this namespace, using this service account — and so that may actually be it, then.
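The distinction being drawn — a legacy service account token versus an identity bound to a specific container, pod, and node — could be sketched as two sets of token claims. Claim names and nesting here are illustrative, not an actual token format:

```python
# Sketch: a legacy token identifies only (namespace, service account),
# while the desired identity binds the token to a concrete pod/node.
legacy_claims = {
    "sub": "system:serviceaccount:team-a:builder",
}

bound_claims = {
    "sub": "system:serviceaccount:team-a:builder",
    "kubernetes.io": {  # illustrative nesting, not a real schema
        "namespace": "team-a",
        "serviceaccount": "builder",
        "pod": "builder-7d9f",
        "node": "node-3",
    },
}

def is_pod_bound(claims):
    """A verifier could require the pod/node binding before trusting
    the token for pod-scoped operations."""
    info = claims.get("kubernetes.io", {})
    return ("pod" in info) and ("node" in info)

assert not is_pod_bound(legacy_claims)
assert is_pod_bound(bound_claims)
```

The practical upside is that a stolen bound token is only useful while that exact pod exists on that exact node, instead of being a long-lived bearer credential for the whole service account.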
A: As a setup for, like, cross-SIG work: some of the SIG Storage folks are looking at making FlexVolumes able to be dynamically added to a node, so that you could deploy a new FlexVolume plug-in — like with a DaemonSet — and have the kubelet pick it up. So if we wanted to prototype some of these things with volume plug-ins, that might be getting a lot easier in the next month or so.
A: Low feedback — the things that we think we want to do all seem to be bigger-scope changes that would be additive rather than fundamental changes to the current structure. So from my perspective this is largely a mechanical change, so we can get a release out, and the next release starts storing v1 and probably deprecates v1alpha1.
F: You know, when RBAC was at 1.6, kube-up had RBAC turned on and then turned off again, and there's been some bouncing back and forth there. I worry that not enough people are using RBAC at this point for us to really get the feedback and the sort of runtime that we need. Is there something that we can do to essentially get it used more widely?
A: If someone is just copying and pasting from a blog post and they want to run stuff, well, they probably want to just make everyone cluster-admin, because they don't care. But the person developing a component that's intended for people broadly to consume — they really do need to care about...
A: ...what APIs they use, or providing a role, or saying "you need to use this built-in role." So, you know, I want to help the people running play clusters, and that might be as easy as: if you're running a play cluster, here's how you just turn all this stuff off — and we have that, right? There's one command you can run that turns it all off and opens it up.
F: I do take a little bit of — like, we should be careful about saying things like "well, it's a play cluster, so this stuff doesn't matter." That line is just not bright, and a lot of clusters that start as play clusters find their way into production. Exactly — and, you know, that's one of the reasons why we wanted to make kubeadm as secure as we could while maintaining usability, from the get-go. Yeah, I mean — for me, I would feel a lot better...
A: ...I mean, I think: taking the examples and things that we have in the repo and making sure those work under RBAC. All the ones in our e2es do, right, because e2e runs with RBAC without any other policy. So we can try to widen the umbrella of what uses RBAC and works well out of the box, but once you get beyond, like, our repo, I'm not sure how far we can go.
F: So if it's GA — and maybe this is just me being a little crazy about this — if it's GA, a lot more people will be using it, and if a lot more people are using it and they have a bad experience, you know, what does that actually say about the product as a whole? And I don't think there's an easy way out there. So I think it may look like...
F
Ga
has
to
come
first
and
then
like
the
rest
of
the
world
adjust
but
like
there
will
be
some
time
where
people
will
be
turning
on
our
back
more
and
more
often
because
of
GA.
That's
a
good
thing,
but
then
they'll
also
be
taking.
You
know
old
advice
on
how
to
do
stuff
and
then
be
confused
and
it'll
appear.
Broken
and
they'll
have
a
shitty
experience,
yeah
and
I
think
that
we're
just
going
to
take
the
lumps
they're,
probably
I,
mean.
A: I certainly am at the point where — maybe this is like the kind of checklist that's not required to go to API v1, but that every SIG should be able to answer about a feature: the moment it becomes GA, the expectations rise. Have we, as a SIG, done a great job of communicating RBAC, and examples, and all that? And I think that's very reasonable to spend time on.
F: And I think, as we look at things like — you know, as we define conformance and what a secure cluster is — the more we tighten that stuff down, at some point we will be saying: if you're not running Kubernetes with RBAC, you're probably doing it wrong. And that's a whole other thing, past v1, and as we do that we want to make sure that the rest of the community is lined up around that. Yeah — oh yeah, I think that's decoupled from calling the thing GA.
A: It's good work, yeah. Alright — sorry to take all the time on that, yeah. If you do have other things that you're planning to work on, add them here or email the list; we just want to make you aware of what's going on. Do update the features repo — like, retag or indicate work that is actually happening in 1.8. I know we need to do that at the beginning of each release, just to kind of get a sense for which things are moving forward.