From YouTube: Kubernetes SIG Cluster Lifecycle 20170801
Description
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.xy0lixlihmgr
Highlights:
- kubeadm adoption working group has created a list of blockers
- Certificates and certificate authorities (how many do we need?)
- Cluster API: new breakout meeting will be scheduled
- Should we promote the kubeadm API to v1beta1
- Feature freeze today!
- kops tests failing?
E: I mean, I can summarize briefly. Basically, we went around the room and everybody introduced themselves and talked about how they relate to Kubernetes and kubeadm, and then we started collecting a list of blockers for kubeadm adoption across various production and non-production deployment tools for Kubernetes.
G: In terms of blockers, I'd say HA is certainly up there. The list that we put together: a document with the things that came out last time. We almost ran out of time last time with everyone's wishlist blockers, but I think it has since been annotated. I think there are some other installers we want to hear from; the list isn't complete.
G: The list should be linked in the minutes, and it is very interesting to peruse, partially because it's sort of surprising how different it is from what we are, I guess, focusing on. So it is definitely worth perusing, but yes, HA is on there, like you mentioned.
F: Yeah, so just to answer the HA thing: HA works pretty well, I think, with kubeadm clusters, but you have to set up your own load balancer and copy files around. I mean, I wrote up a proposal for how we could do it within init and join, and Justin had a counterproposal that included moving away from init and join for, like, real HA with some cloud things, but that's a side discussion to init and join anyway.
F: The difference is, the first, I mean, would use the Kubernetes API to store certificates when moving from one master to the other, and that also includes the CA key, right. So for some users, like kops, where we can do CloudFormation or Terraform or whatever, the other will be preferable, and those things are already available today if you have that kind of infrastructure.
C: Seems to me like one of the good things that we could do here is actually identify common paths of actually using kubeadm. Like: oh, if you have this level of support, then you can use kubeadm in this way, and make sure that we can actually pull that thread through. I think there's definitely a ton of overlap, and there are different scenarios or different places where other tools fill in pieces, and I think...
C: We know that the tool chest is there and you can put that stuff together, but I think if we actually had a kind of concrete set of examples of here's how you actually put the tools together, that would be incredibly helpful. So some of this seems like more documentation, and making sure that we dot all the i's and cross all the t's. Yeah.
G: I also put this on the list as an architecture issue, because of where we want to store our secrets, but it actually raised a... We got some valid feedback from the CoreOS people, which is the idea that you treat the CA key as a snowflake, essentially, and then you don't persist the keys at all.
G: So in other words, each API server has on disk a key which it generates. It is signed by the CA key, but they don't share the same key, and if that machine goes away or loses its disks for whatever reason, a new key would be generated, which I thought was actually a very clever idea.
C: Each API server would be an intermediate CA; that makes sense. Yes.
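As a rough sketch of the scheme just described, assuming hypothetical names (this is not kubeadm's actual code): each API server generates its own key locally and has it signed once by the root CA, so the root key never has to be persisted on the server's disk.

```go
// Sketch only: per-API-server intermediate CA signed by the root CA.
// Function and variable names are illustrative, not kubeadm's code.
package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

// newIntermediateCA generates a fresh key on this machine and returns an
// intermediate CA certificate for it, signed by the root. The root key is
// used once for signing and never has to live on the API server's disk;
// if the machine loses its disk, a new key is simply generated and signed.
func newIntermediateCA(rootCert *x509.Certificate, rootKey *rsa.PrivateKey, host string) (*x509.Certificate, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(time.Now().UnixNano()),
		Subject:               pkix.Name{CommonName: "intermediate-ca-" + host},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		MaxPathLenZero:        true, // this CA may only sign leaf certificates
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, rootCert, &key.PublicKey, rootKey)
	if err != nil {
		return nil, nil, err
	}
	cert, err := x509.ParseCertificate(der)
	return cert, key, err
}
```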
F: Yeah, but as I told Justin when we talked about this, it's unfortunately much more complicated than that. We once used just one CA; now there's the real CA and also one for API aggregation. I mean, I know this was talked about: the implications of requirements from SIG API Machinery, about API aggregation, for example. That changes the way we set up our clusters.
C: I'll just say it: I think it's insane that we actually require multiple CAs anyplace. I'd call that a bug. You know, a place where that happens is the API server calling into the kubelet, right? That's the wrong way for this to actually work, and that's... yeah, I mean.
C: ...implicit, versus just saying: if it's signed by the CA, then it must be good, right? Because if we follow that pattern, then that means that every integration is going to require a new CA, and that's just going to become total crazytown as we move towards more and more components in the system, yeah.
F: Absolutely, but still, I mean, we now have two CAs inside of the cluster for our communication, which is pretty... The problem, or the thing, is we have to upload secrets to the kube API: if we want to do kubeadm HA within init and join, this cycle or next cycle or whatever, we have to upload them and store them somewhere, to be able to download them then from master two, because all API servers share certificates.
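A minimal sketch, assuming client-go of this era and hypothetical secret and file names, of the "store shared certificates in the Kubernetes API" approach just described:

```go
// Sketch only: upload the shared master certificates to the cluster so a
// second master can fetch them on join, instead of copying files by hand.
// Secret name, namespace, and file list are assumptions; the Create
// signature shown is the context-free client-go API of this era.
package hacerts

import (
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// UploadSharedCerts reads the shared PKI material (including the CA key,
// as discussed above) and stores it in kube-system.
func UploadSharedCerts(cs kubernetes.Interface) error {
	data := map[string][]byte{}
	for _, f := range []string{"ca.crt", "ca.key", "sa.key", "sa.pub"} {
		b, err := ioutil.ReadFile("/etc/kubernetes/pki/" + f)
		if err != nil {
			return err
		}
		data[f] = b
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-master-certs", Namespace: "kube-system"},
		Data:       data,
	}
	_, err := cs.CoreV1().Secrets("kube-system").Create(secret)
	return err
}
```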
C: I'm a broken record here; I'm just going to state this for completeness: this [inaudible] stuff is making progress. There's actually going to be a lot of stuff; there's a community meeting tomorrow for a bunch of folks involved in that, and so there's going to be a lot more stuff shared, and I'm hopeful that we'll get to the point where that's real enough that we can actually talk about it seriously, but I'm not going to try and distract things between now and then.
D: I have a question moving forward: as we continue to mature our needs, or, as came up earlier, our designs to rewrite kubeadm, what would be the best way for us to interface back in with the SIG and kubeadm? Should we keep fleshing out the GitHub issue log, or what do we do?
C: In my mind, one of the most useful things here is the document around what the blockers are, and actually breaking that down by feature; I think that's useful. But in terms of making sure that everything adds up to something that actually hits the mark, being user-story or scenario driven, around: the tool...
C: ...does this, and then this, and then this; that actually shows us how it all kind of fits together, to the point where each blocker maps to a scenario. So in terms of interacting, I think that would be the most useful for getting everybody on the same page of: oh, I can see how all this stuff adds up, and I can see how this works with a particular tool. And then finding commonalities, or common scenarios across multiple tools, about how they would be used.
G: One thing I missed: we talked about the rewrite thing, but, in many ways, tactically, one thing we could do, I was thinking, is as we extract functionality... like, you know, kops would want to consume it as a library, and that library could actually live in the kubeadm repo. So in other words, we extract out the shared functionality; then kubeadm in the main Kubernetes repo becomes a shim onto it, kops calls into the same one, kubicorn calls into the same one.
C: I am supportive of actually being library-first on this stuff; I think that makes a ton of sense. If we do that, though, you know, I think we've got to be responsible not to break people, and I think there's a certain amount of: we've got to have our crap together before we can actually go ahead and do that, and I think we've had...
C: You know, the interface to kubeadm has been narrow enough, around the command line, that we've been able to sort of play fast and loose in some ways, yet still maintain a high degree of fidelity in terms of user experience. I worry that if we lock it down to an API and we have to keep the API compatible, that's going to be a higher burden. Not saying it's not worth it; I think we just have to change our thinking around that.
F: It rolls up pretty nicely at the CLI level. Oh, like Go level, or interfaces? Oh, I think it's maybe 70% done, or something, 60-70%. I think it will fit into this cycle to get it to 100%, like the phases that we want to have, at least, and then we can really poke through them and have them stabilized, with the Go-level interface being done, or whatever, stable, in 1.9. That's kind of my goal.
F: At least the CLI level is also improving pretty quickly. I hope to have something viable for users where all the extension points that kubeadm currently runs are available at the CLI level, but still in alpha, under the alpha subcommand; and then we can evolve that one as well, see if it meets the beta criteria, and graduate it the next cycle. And the same kind of goes for the API, like the configuration API, but that's a later topic in the meeting today.
D: What this is, is an attempt to define an infrastructure API object for Kubernetes. One of the interesting bits to keep in mind as I continue to explain this: think of the three layers of Kubernetes. There's the underlying infrastructure: VMs, subnets, networking infrastructure, etc. There's the middle layer, which is the control plane and kubelet, so underlying the application. And then there's the third layer, which is above that: the application itself. This is defining layer one and possibly layer two.
D: Other important caveats: Justin has actually done a little bit of this, as have I. We have spent some time building out the kops API, which does this, and this is a great starting point. I have one together [inaudible], and I know Google has one. I think there are a handful of other tools that do it; Terraform effectively has one in the Terraform-specific files with a Kubernetes provisioner. So there's a lot of different ways of representing infrastructure, and we're hoping to bring that together. So that's the general idea.
F: Sorry, okay, Lucas. So, yeah, I guess, I mean, kubeadm is layer two in this case: from where we have the VM or machine, to getting the control plane up and running. So what I'm going to talk about next is the API for talking to kubeadm and saying: here is how to run my control plane, or here is how to set up the cluster.
D: So exactly how that relationship is going to work, if at all, is still up for proposal. We have a repo on my GitHub (there's a link in the agenda for today) where I had submitted two proposals, if you guys want to pull it up, or I can screen-share. In one of them, there is a control plane resource nested within a larger object, and the control plane resource is a one-to-one match between kubeadm flags and API directives.
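A rough illustration of what such a nested control plane resource could look like; all type and field names here are hypothetical, not the actual proposal, though the flags in the comments are real kubeadm init flags:

```go
// Sketch only: a control plane resource nested inside a larger cluster
// object, whose fields map one-to-one onto kubeadm flags.
package clusterapi

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Cluster is the larger top-level object the proposal describes.
type Cluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ClusterSpec `json:"spec"`
}

type ClusterSpec struct {
	// ControlPlane is nested within the larger object.
	ControlPlane ControlPlaneSpec `json:"controlPlane"`
}

// ControlPlaneSpec mirrors kubeadm init flags one-to-one.
type ControlPlaneSpec struct {
	KubernetesVersion string `json:"kubernetesVersion"` // --kubernetes-version
	PodNetworkCIDR    string `json:"podNetworkCidr"`    // --pod-network-cidr
	ServiceCIDR       string `json:"serviceCidr"`       // --service-cidr
	APIServerBindPort int32  `json:"apiServerBindPort"` // --apiserver-bind-port
}
```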
D: So that's one approach, and again, I think the whole intent here is for us to get all of our proposals out on paper where we can put them side by side, do a diff on them, and understand the pros and cons of each, and then hopefully come up with some solution that works for everyone, or at least makes everybody not angry, and go ahead with it. So.
D: I think that's the point of why we wanted to look at having this hybrid object that crosses from layer 1 to layer 2: because in order to get Kubernetes up and running, it goes both ways. Layer one has to know data about layer two, and vice versa; so having it in one place and making it easily accessible is powerful, I think.
G: I'd just say: I think this idea is great; I'd love to see this happen. I think, you know, for whatever reason we chose not to do this in core, and the outcome was not that we then had one incubator project which succeeded; now we have ten, and I think that is a worse situation than where we should be. And I think having a spec that we can agree on, even if we have different implementations, is a good compromise.
D: I think that's the general idea. As far as implementation of the spec itself, I think we're leaning towards defining it in Go and using the Kubernetes API machinery, which will hopefully, in the future, open the door to building things like infrastructure operators, and using kubectl and the API to actually manage infrastructure as well. But I think part of the whole proposal and design process here is that we're asking for help and ideas from people, and what they think would be a successful implementation.
J: I feel like Terraform already does a really good job of this, in terms of having, you know, provider-specific plug-ins for different clouds, and then you have your definitions of what you want on the cloud. I'm wondering, again: why create something new when we have something like Terraform? Are we going to just adopt that as the standard way to deploy?
G: In my experience, Terraform is like a declarative way of driving scripting. It doesn't offer an abstraction over the various clouds, and I think what we want here is the concept of, for example: I want to run instances in my cluster. I don't want to say: actually, I want to run, on AWS, an auto-scaling group with, you know, these particular AMIs and dedicated-tenancy things. I want to say, in the Kubernetes context, what I want: I want instances. And then the autoscaler will interface with that.
G: So it is, I think, a higher level of abstraction than Terraform gives us. But, I mean, kops can output to Terraform now, and certainly it would be attractive for kops, for some of the other clouds, to use a Terraform implementation. So, you know, if we decide Terraform is the best implementation available to us, then we should consider using their code. So I know, I know.
J: You know, shipping to customers who want to run AWS and Kubernetes together... My criticism, or just my insight, is that we spent, you know, quarters just on AWS integrations, with customers who have very specific requirements around exactly how the IAM roles are going to work, how they can set the instance profile, exactly how they're going to set up their networking.
G: I mean, it certainly is a concern. I think I've been surprised in kops as to how well it has gone so far, and we haven't yet really got real production support, and we don't have a lot of users using GCE yet. But the next dimension of that same argument is: even if you could do it for AWS, can you also include GCE in the same abstraction? And it seems to be going okay so far, so I am optimistic. The other thing is, I think, you know, we can lead to an extent.
D: Something that's also very, very important to point out here is that we're defining the abstraction, and we're intentionally not caring about the implementation. So if a user can cook up some wild and crazy Kubernetes in AWS that has all of these specific edge cases that they want to run, well, they can write their own controller and their own implementation and do that. But there's no reason they still can't speak the same abstracted representation of what the cluster is.
E: Well, I mean, I think this is kind of the crux of the cluster API: can we build an API that is both generic and also specific, right? It needs to be accessible enough that people who want very specific control over what their nodes look like in their cluster are able to get that level of control, and that level of control is going to look different on different clouds, and we need to figure out a way to expose that. And if we can't get that right, this is not going to succeed.
J: But it might not actually be correct. So I think there's a lot of room for, like, a cluster-wide configuration that we can all agree on: things like, what is your service CIDR, and what is, you know, your cloud provider, and those kinds of things; and I think that we can find a good abstraction there. But you're going to bring providers into this, and how they abstract nodes and load balancers and routing rules; I feel like that's just going to be a never-ending problem, but...
E: Some of the things we mentioned are not install-time: load balancers are already abstracted by Ingress, and storage is already abstracted by the storage interface. And the other thing we can do here is learn lessons from how they built those and the mistakes they made. I was talking to Tim Hockin last week about how we might build a new community API to abstract... you know, taking those lessons that they've learned, right. So I think we can.
G: And also, I mean, it's not like I'm entirely optimistic; I do share a lot of those concerns, and that is why I think that we are going to have great success on the nodes themselves, and less immediate success on the control plane and the mechanics of the control plane. I feel like a lot of the control plane stuff is, in large part, component config.
G: Component config is improving; there are other improvements that are going to happen. And then, you know, the actual configuration of a kubelet, and then the instances that run nodes, is comparatively simple. There are fewer designs, fewer decisions that have to be made, because you also assume that the infrastructure exists, right: in terms of your IAM roles, your network, all of those things are assumed to already be there.
G: You target a group, right, and you might target a network, or you might target an IAM... a set of decisions that you've made separately. And that's where I think we'll make progress: by eliminating some of those decisions. But we also give a lot of value to many users, because then, for example, the autoscaler will integrate with the node... the node policy thing, so, you know, you can do all... yeah.
F: If I understand this correctly, it's like: the node group, or that kind of thing, is more like a persistent volume claim, and then we have something like a storage class for the integration with the cloud, and then the actual auto-scaling group in AWS is the persistent volume, at least, sort of?
B: I thought I'd cite at least a lesson in history: to be mindful of the exact use cases that we want to hit first. I mean, all the cloud providers before GKE and AWS were even around... there were all the folks at Eucalyptus and HP Cloud, you know, trying to do this exact same thing, but they did it at a little bit lower of a level, right? They all failed, and then everybody adopted Amazon's API.
C: I want to add to what Tim said: I think staying scenario-focused will really help here. In my opinion, I don't want to be a naysayer; I do think that this is a hard problem. Finding the right mix between something that is generic, and not lowest-common-denominator, and useful... I mean, it's a very difficult problem. That doesn't mean we shouldn't try, but it also means that it's likely we're not going to be able to scratch everybody's itch.
C: You know, and so I think it's okay to recognize that if we can get 80% of the use cases here, that's still a huge improvement. But we should recognize that there's always going to be folks that have a different philosophy or different needs that aren't going to fit into this model. I mean, you know, we've seen that with Kubernetes itself, right? A lot of people are like: that's great. There's a lot of people who are like: okay, not my thing. And that's okay, right? We don't have to.
H: I don't think it would be useful to have to recompile and release a new version of our APIs in order to pick up a recently released cloud API, like new GPU support or something. I think we need to expose all the knobs and figure out how we can pass through these options (Terraform does a really good job of this), or it won't be useful and it won't be extensible. So, to be clear, I...
G: The main object... the analogy I would use is actually Ingress, where we have a spec and we have very different controllers. We've seen some of the flaws of that approach, but it does seem to work, or it seems to be the model that I think we'd want to follow. And the use case that I'm most excited about is that it turns the act of installing into setting up your control plane, and someone sets up the first node set.
K: So, for example, you know, in Amazon you have the notion of a region and a zone, whereas in Azure you just have a region; so, ergo, you have region as a primitive, and zone is an abstraction of that primitive. You know, there are going to be things like that that we can identify and move forward on. So maybe not eat the whole elephant at once, but try and carve out those top-level primitives that pretty much apply to any cloud, and then break it out from there.
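A tiny sketch of that "top-level primitives" idea, with hypothetical names: region as the primitive every cloud shares, zone as an optional refinement for providers that have it.

```go
// Sketch only: carve out the location primitives that apply to any cloud.
package clusterapi

// Placement captures cross-cloud location primitives.
type Placement struct {
	// Region is a primitive on every provider (AWS, Azure, GCP, ...).
	Region string `json:"region"`
	// Zone is optional; providers without zones, as described for
	// Azure above, simply leave it empty.
	Zone string `json:"zone,omitempty"`
}
```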
C: So, specifically: the idea is to create the node object in Kubernetes, and then the node VM gets created after that, instead of reversing it. As we start doing that, now we can start thinking about writing controllers on top of it. So I think the scenario for me that actually starts to really, really sing is: how do we actually start to get to a generic view of cluster auto-scaling, or sort of clusters with node pools, there?
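A minimal sketch of that "object first, VM second" flow: a controller reconciles a declared machine object by creating the backing VM afterwards. Machine, CloudProvider, and Reconcile are hypothetical stand-ins, not a real API.

```go
// Sketch only: the declared machine object exists first, and a controller
// creates the backing VM to match it.
package clusterapi

import "log"

// Machine is the declarative record created in the cluster first.
type Machine struct {
	Name       string
	ProviderID string // empty until a VM actually backs this machine
}

// CloudProvider is implemented per cloud (or by a user's own controller,
// as discussed above for provider-specific edge cases).
type CloudProvider interface {
	CreateVM(name string) (providerID string, err error)
}

// Reconcile drives the infrastructure toward the declared state: if the
// machine has no backing VM yet, create one and record its ID.
func Reconcile(m *Machine, cloud CloudProvider) error {
	if m.ProviderID != "" {
		return nil // VM already exists; nothing to do
	}
	id, err := cloud.CreateVM(m.Name)
	if err != nil {
		return err
	}
	m.ProviderID = id
	log.Printf("machine %s now backed by VM %s", m.Name, id)
	return nil
}
```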
D: I think it's important that we start to introduce this idea of the nodes, which is a set of sets, right; and then the control plane being completely separate; and a valid definition of a cluster being either, or, or both. So: control plane, nodes, either, or, or all of the above, and that changing over time, is pretty important. But I agree; when you said that phrasing, it really resonated with me. Defining those first, and having that be declarative, is a big win for us.
E: And then, I think, we do want to keep the conversation going in this meeting; also not just completely take it away, because not everyone's going to be able to make another meeting. But when it comes to diving into details, I also don't want this meeting to just turn into the cluster API. We need to have this meeting roll up all the different sub-projects that are going on; maybe spending ten minutes of each meeting discussing each one briefly.
A: Cool, so that sounds good, I think. The next item on the agenda is from Lucas: the kubeadm v1alpha1-to-v1beta1 API.
F: So, yeah, we talked about this a little just now. Our API, like the configuration API, is alpha, and it has been alpha for a year. I'd very much like to get some feedback on it: there is an RFC, a Google Doc, with a pull request attached with just the basic types of what it could look like. Basically, at some point we have to promise something about the configuration file. Right now we document everywhere that, well, if you use...
F: ...if you use this, it's experimental; do not use this, kind of. But a lot of users have to use it, because they want to specify something in the API server arguments, or they have to do something a little bit custom, or they want to enable the in-tree cloud provider integrations, or things like that. So I think, as part of pushing kubeadm to GA eventually, it's time to at least discuss this abstraction, about...
F: ...how do we set up the control plane? I mean, this kind of goes hand in hand with what we discussed in the other infra meetings as well, the cluster API things, but I think kubeadm should do as little as possible: kind of the minimum amount of effort that is useful for a lot of users, while still not doing everything or handling all corner cases. And I think what I have there is kind of an in-between design.
F: So, I mean, this is a stretch goal for 1.8, I guess. It would help a lot with upgrading, because right now, from 1.7.4, kubeadm is going to upload the config file you use. Like, if you do kubeadm init with a pod network CIDR or something because you want to use flannel, then, I mean, internally it will use a config like this API object and upload it to a ConfigMap in the cluster; and then, when we upgrade, it's going to check out this...
F: ...this ConfigMap, parse the API types, and, based on that, regenerate manifests; it will then notice that: oh, I should pass this argument to the controller manager so that flannel still works after the upgrade, for that one example. And yeah; while our API group is alpha, we don't know whether future versions will support reading our old configuration. So I think it would help a lot if we could promise something about our API group.
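A rough sketch of the read-back side of that upgrade flow, assuming the ConfigMap and key names kubeadm used around this era (kubeadm-config / MasterConfiguration in kube-system), a trimmed-down config type, and the context-free client-go API of the time:

```go
// Sketch only: read back the configuration that `kubeadm init` uploaded,
// before regenerating manifests on upgrade.
package upgrade

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/yaml"
)

// MasterConfiguration stands in for kubeadm's alpha config type; only a
// couple of fields are shown.
type MasterConfiguration struct {
	KubernetesVersion          string            `json:"kubernetesVersion"`
	ControllerManagerExtraArgs map[string]string `json:"controllerManagerExtraArgs,omitempty"`
}

// LoadUploadedConfig fetches the stored config back out of the cluster.
func LoadUploadedConfig(cs kubernetes.Interface) (*MasterConfiguration, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get("kubeadm-config", metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	raw, ok := cm.Data["MasterConfiguration"]
	if !ok {
		return nil, fmt.Errorf("no MasterConfiguration key in kubeadm-config")
	}
	cfg := &MasterConfiguration{}
	if err := yaml.Unmarshal([]byte(raw), cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}
```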
E: So the reason it's alpha isn't that we won't know if future versions can't read it; it gives us an out if we want to change the structure, because we aren't making a promise to other people that it will stay consistent going forward. So the question really comes down to: are we ready to lock in what we've got in the API and promise backwards compatibility?
G: I commented that kops isn't in a position to adopt this, or that it doesn't make sense for kops. I think that there's a... like, it's not clear to me what the model is, what we're trying to model here: whether we're trying to model something abstract, or the concrete, like: these are the commands. So, for example, there's a map of arguments, right, but what do those arguments mean? What should happen if those arguments change underneath us? There's a map of host-path volumes, or a list of host-path volumes, right: what is that?
G: What does that do? There's a path to a key: does kubeadm enforce that the key exists? Which key am I using? What if we invent a new crypto system that doesn't use keys, right? You know, all those sorts of: what are the guarantees we're making? What is this model saying? That's sort of what I don't really... It reads very much like a Terraform abstraction, right, where it's declarative, but what it's saying is what we want to put on disk.
G: The problem with that is, as the thing we're manipulating changes... like, you know, for Terraform, if I target two different clouds, we effectively end up having to rewrite the Terraform manifest. In the same way, we'll have to rewrite, in our case, this model, I think, when a particular component config comes online and the flags don't even mean anything anymore.
G: None of the flags will be supported in the next version of Kubernetes, for whichever version that is, right: the future version of Kubernetes doesn't support any flags. What do we do? So that's sort of where I... like, this doesn't really make sense for kops. Kops, you know, from a philosophical point of view, is just as bad, right: we embed...
G: ...what we think component config is going to look like, and we currently render the flags from that, and the idea is that, when component config comes along, we will render it into component config. So that is, in other words: get rid of it and just say, just use component config, right. So that is what we're saying our roadmap is: we're representing the actual intent, not the flags themselves, and that then gets resolved.
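A small sketch of that "represent the intent, render the flags" idea, with hypothetical type and function names; the two flags shown are real kube-controller-manager flags, and the same struct could later be rendered into component config instead.

```go
// Sketch only: store the user's intent in a higher-level struct and derive
// command-line flags from it.
package render

import "fmt"

// ControllerManagerIntent captures what the user means, not raw flags.
type ControllerManagerIntent struct {
	ClusterCIDR       string
	AllocateNodeCIDRs bool
}

// RenderFlags turns the intent into today's flags; a future renderer could
// emit component config from the same struct and drop this function.
func RenderFlags(c ControllerManagerIntent) []string {
	flags := []string{
		fmt.Sprintf("--allocate-node-cidrs=%t", c.AllocateNodeCIDRs),
	}
	if c.ClusterCIDR != "" {
		flags = append(flags, "--cluster-cidr="+c.ClusterCIDR)
	}
	return flags
}
```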
F: Yeah, we'd move to something, again, something at that level. I mean, the current component config API group is also alpha, so, yeah; I'd happily use that if it existed. But my thought is: this is the interface for how to bring up the cluster, right, and what options are needed. But still, I mean, it's definitely a hard problem. We could... yeah, I don't know. Maybe the...
G: The other thing is that component config is actually... perhaps, at least, we actually have a model on top of that. Users don't typically even set the options which set the flags; they set options which set the options which set the flags, right? So it's like a whole nother level. There's, like, a networking top-level object, and if you say networking: weave, that triggers CNI; it triggers... I don't remember all the other things it triggers, but, you know, it disables allocating pod CIDRs on the controller manager. So we encourage...
F: Yes, this is low-level, yeah. This is pretty low level, and, well, you should have something above it, right, whether it's kops or, like, Tectonic or GKE or whatever. It's definitely not meant for... it's meant to be used by, like, admins, or your higher-level kops-like solution, or whatever. But this is, I mean... I'd love to check out a better API config for kubeadm, like, how...
F: ...how can we tell kubeadm to do something, to spin up the cluster, and then have some guarantees? And yeah, the way I did it now was to use conversion: so we'd still support v1alpha1, we'd own the v1alpha1 API for a cycle or something like that, and it could be easily converted to the beta.
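A minimal sketch of that conversion approach, with simplified stand-in types rather than the real kubeadm configuration:

```go
// Sketch only: keep the old v1alpha1 shape decodable for a cycle and
// convert it into the new version.
package conversion

// V1alpha1MasterConfiguration is the old, still-supported shape.
type V1alpha1MasterConfiguration struct {
	KubernetesVersion string
	PodSubnet         string
}

// V1beta1MasterConfiguration is the shape we'd promise compatibility for.
type V1beta1MasterConfiguration struct {
	KubernetesVersion string
	Networking        struct{ PodSubnet string }
}

// ConvertV1alpha1ToV1beta1 lets callers keep feeding old config files to
// kubeadm during the deprecation cycle.
func ConvertV1alpha1ToV1beta1(in *V1alpha1MasterConfiguration) *V1beta1MasterConfiguration {
	out := &V1beta1MasterConfiguration{KubernetesVersion: in.KubernetesVersion}
	out.Networking.PodSubnet = in.PodSubnet
	return out
}
```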
F: Well, in addition... well, we could just add component config when it comes, right; it won't change that much, I mean. Then we would just say that you can't set both the string map and component config; you have to choose one. And then we'll just get rid of the string map when we can.
K: Great, yeah. So, feature freeze is today. Make sure that if you haven't put your features in, please do that. If you need help working out what a feature is or isn't, just look at the features repo; the readme has that definition. And there were some reports of some tests failing, like [inaudible] and the kops tests; so if anybody feels magnanimous and wants to look at those, that would be great.
A: Cool, good shout, definitely. I was muted, obviously.
F: The out-of-tree cloud providers will hopefully get to beta. It's not touching us that much, but, well, since we have no SIG Cloud, it has the SIG Cluster Lifecycle label at least; there's some work going on there. The beta is in good shape and progressing well; upgrades to self-hosting, yes. And what we talked about some minutes ago was extensible configuration and invocation of kubeadm, and the ability to create dynamic HA clusters with kubeadm; we have a kind of work-in-progress design doc there, but no implementation. I...