From YouTube: Kubernetes SIG Cluster Lifecycle 20180109
Description
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.jr94l93nzpz8
Highlights:
- Followups on action items from last meeting
- Cloud provider working group overview
- kubeadm GA list
- 1.10 planning
A: Hello, and welcome to the January 9th edition of the SIG Cluster Lifecycle weekly meeting. Today we're going to start with some follow-up: we had a number of action items from last week's meeting, and I wanted to go over those and see where we're at.

The first couple were assigned to me. The first one was to add more people to our GitHub teams. We agreed during the last meeting that everybody who is in a code OWNERS file should also be in the GitHub teams, so I went through and added everyone in the kubeadm OWNERS file to the GitHub teams. There was also a list of folks that Lucas had proposed to join the teams on an opt-in basis, which was Ilya, Lee, Sergey, and Ryan, and I did not automatically add those people. But if your name is on that list and you'd like to be on the GitHub teams, you can let me know and I will add you, or you can ask to be added to the teams and I will approve your membership request.

I also put a note in the doc that I removed Mike Danese as a reviewer/approver from the kubeadm OWNERS file. Tim had a question about why: it's because Mike has not been active on the project for a while, and I try to keep that file up to date with the people who are currently working on the project, doing reviews, and contributing.

The second one was that Lucas pointed to a markdown file we have that was previously a page on our wiki. It was out of date, and he basically asked if there was anything on there that we thought we should save. I read through it, and there are actually some interesting bits on there. It was written quite a while back, and some of the stuff is still applicable.
A: Excellent. I don't think I've seen Lucas on the call, but Lucas had volunteered to start the process for culling the getting started guides, and I think he might have actually volunteered Ilya. I did see Ilya on the call, so maybe he can give an update here. Just to reiterate, the plan is to announce to the people who have getting started guides on the website that, right after the 1.10 release, we are going to delete any getting started guides that don't have the Kubernetes certification mark. That basically gives people about three months to get the certification mark if they want to keep their guide up to date, and we thought that was pretty fair warning for folks. Then, right after 1.10 ships, we'll go through and delete all the rest of them. This should give us a pretty good first pass at deleting a bunch of stale getting started guides. In the meantime, I've also been poking at one more thing.
A: We have this URL that's been around for a long time, get.k8s.io, which basically downloads the kube-up bash script and then runs it for you. The intent is, you know, it's a curl pipe to bash to install Kubernetes, which was really, really cool four or five years ago, and it is terribly insecure. We shouldn't be telling people to run curl piped to bash. We have lots of other tools to install Kubernetes, and this doesn't really work on many platforms anymore.
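(To make the security point concrete: curl piped to bash executes whatever the server, or anyone in the middle, returns, with no chance to inspect it. A minimal Go sketch of the safer pattern, download, check a pinned SHA-256 digest, and only then save the script, is below; the URL and digest are placeholders, not real artifacts.)

```go
// Sketch: verify a downloaded install script against a pinned digest
// before trusting it. scriptURL and expectedSHA256 are hypothetical.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

const (
	scriptURL      = "https://example.com/install.sh" // placeholder
	expectedSHA256 = "0000000000000000000000000000000000000000000000000000000000000000"
)

func main() {
	resp, err := http.Get(scriptURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// Refuse to use the script unless it matches the digest we pinned.
	sum := sha256.Sum256(body)
	if hex.EncodeToString(sum[:]) != expectedSHA256 {
		fmt.Fprintln(os.Stderr, "digest mismatch: not running downloaded script")
		os.Exit(1)
	}
	if err := os.WriteFile("install.sh", body, 0o700); err != nil {
		panic(err)
	}
	fmt.Println("verified; saved as install.sh")
}
```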
A: So I've been slowly trying to figure out where this is used, getting rid of it, and cleaning up our documentation around it. Right now the only remaining place, unless people out there are actually using it, is in some of our test infrastructure, so I'm working with the EngProd team here at Google to try and rip out the references to it there, at which point I'll send Tim Hockin a PR to delete the link on k8s.io.
A: I think the problem with that URL is, back when we were adding more platform support to kube-up, you could tell, like, oh, you know, people are launching on AWS, people are trying it on Rackspace or Photon or wherever else we were adding support. Over the last couple of releases we've been ripping out all those different pieces of support, so AWS got pulled out.
B: There are two parts: one is the kubeadm repo, and the second is the main repository. The main repository is where I'm asking for help, because anyone in the main repository can label issues. I can supply a set of queries; go through those queries and try to assign relative priority to some of them, or just poke me if you have questions, and try to either close them out, get rid of them, or get them on the radar for the next release.
B: I'll go through these as we do our planning; there is a fairly substantial list of them in the main repository. I will go through the kubeadm repo, because I have rights there, so I can just go update them and add the labels appropriately. But ideally, what I'd like to do is have a set of, like, P1 and P0 items that we agree on and federate out to the SIG, including folks who are new to the SIG. As we have, what, how many people on this, 33 people on this call? That's a lot of people, and they can...
B: They can take any of the help-wanted items, and, as long as there's a reviewer for it, we can help get that, you know, burned down, because there are a lot of fiddly things that require addressing, and we would love community involvement there. We'd also love it if you want to, you know, build up to committer rights or sort of maintainer rights; we'd love to help guide those folks through the process and help grow the SIG, not just as a group of people who are interested in it.
A: William commented in the doc that he's happy to help, so I don't think other people should take that as a sign that they shouldn't help as well. I think, you know, Tim is looking for at least one partner here, and not exactly one partner; I think we can certainly shard the work across multiple people. So thank you, William, for volunteering, but other people, don't let that stop you from volunteering as well.
A: So I think that was all of the follow-ups from last week. A couple of them I still want to follow up on in the weeks going forward; I want to make sure we don't lose track of the things that people had signed up for. So, next, we had an action from our agenda last week for Walter to give us an update from the cloud provider working group, which we didn't get to because Walter wasn't here. But Walter, I see you on the line today, and I think, Walter...
E: The good news is Lucas is actually up to date on most of this; he's part of the working group, so he knows everything I'm about to say, but I think he wanted it shared with the SIG. So, one: there's an effort to get all of the cloud providers out of the main repo. There are seven cloud providers in the repo today, and by the time the working group is done with its work, we'd like to get all of the cloud providers out.
E: The thought process is sort of the Linux model: having a Kubernetes kernel and then having cloud provider distributions of the Kubernetes core. Toward getting that done, there is a KEP. It's a rough draft right now, but it's been approved by several people, so we're just sort of waiting for it to become an official KEP. But you can see the rough draft; it's under the cloud provider breakout working group's area until it becomes an official KEP.
E: Not me, I apologize; no worries. So, we're trying to break the kube-controller-manager into two pieces: the kube-controller-manager and the cloud-controller-manager. Hopefully, starting in 1.10, that'll be something that you can do on all cloud providers, but it'll still be in-tree. Within a few releases, probably three or four, we'd like to get to the point where the cloud-controller-manager is completely broken out of the Kubernetes core and is being run only in the distros.
E: There are several things involved in that. One is that certain things, like IP management, need to run in the kube-controller-manager unless you're running them in the distro, and so we're looking at things like the config system to be able to make sure that a particular controller is running in exactly one controller manager. That's one of the motivations I have for going to the new component config: I'm hoping that it'll make it easier to solve that particular problem.
E: It does mean that which controllers are running may vary depending on the cloud distro. It also means that we need to make sure that anything we want to run in the kube-controller-manager does not have calls to the cloud provider, so there are some interesting things going on right now having to do with that. One is that we've taken the node controller and broken it into, currently, two pieces: the IP management and the node lifecycle.
E: So that's sort of a high-level view. The other thing is that I'm trying to unify the infrastructure behind the controller managers. API machinery today has this concept of a generic API server; I would like to have the concept of a generic controller manager, where every cloud provider has the ability to make use of that to build their cloud controller manager, and the same core controller manager framework code is what runs the kube-controller-manager. So there's a lot of work to be done.
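(A minimal sketch of the "generic controller manager" idea described above: shared framework code that only knows how to run controller loops, with each manager choosing which loops it runs. All names here are illustrative, not the real kube-controller-manager code.)

```go
// Sketch: a generic controller manager that runs whatever loops it is
// given. A cloud provider's manager and the kube-controller-manager
// would share this framework but supply different controllers.
package main

import (
	"context"
	"fmt"
	"time"
)

// Controller is one reconcile loop.
type Controller interface {
	Name() string
	Run(ctx context.Context)
}

// GenericControllerManager only knows how to start controllers.
type GenericControllerManager struct {
	controllers []Controller
}

func (m *GenericControllerManager) Run(ctx context.Context) {
	for _, c := range m.controllers {
		fmt.Println("starting", c.Name())
		go c.Run(ctx)
	}
	<-ctx.Done()
}

// tickController stands in for a real loop like node-lifecycle or IPAM.
type tickController struct{ name string }

func (c tickController) Name() string { return c.name }

func (c tickController) Run(ctx context.Context) {
	t := time.NewTicker(200 * time.Millisecond)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			fmt.Println(c.name, "reconciling")
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	// Each distro decides which loops run here, which is how a given
	// controller (say, IPAM) ends up in exactly one manager.
	m := &GenericControllerManager{controllers: []Controller{
		tickController{"node-lifecycle"},
		tickController{"cloud-ipam"},
	}}
	m.Run(ctx)
}
```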
E: So, the thought process is that, eventually, all of that sort of infrastructure for bringing up a cluster is going to get moved out of the core repo and into the individual cloud providers, and probably we'd end up with the only thing that actually needs to be in-tree being something that allows you to run Kubernetes on your local machine. So, happy to answer any questions about any of that, and if people have feedback, we'd love to hear it.
B: There are some constraints inside of the scheduler that I'm aware of with regards to EBS volumes and the number of attachments and stuff, which is very cloud provider specific. It's kind of glue logic as it exists today. Ideally, I would like that stuff to also be ripped out, if possible. I don't know if anyone has thought about that.
F: I think another example might be ingress. Folks have a lot of opinions about ingress; it's a cloud provider specific thing in some cases, but I may want to use my own ingress system instead of using the cloud provider's integrated ingress system. So are you looking at these things as a mix-and-match type of thing, or is it an all-or-nothing type of thing?
E: More of a mix-and-match. And to be specific, what we'd really like to do is minimize the amount of stuff we move out. For instance, the reason the node lifecycle isn't being moved out, even though it has a cloud call in it, is that we feel like the majority of the node lifecycle is something we'd like to make consistent across Kubernetes; we'd like to keep that as a Kubernetes core concept.
E: And so, you know, obviously, when someone like Red Hat or Google or Azure takes Kubernetes out, since it's a distro, they can write wrappers around it and modify it somewhat. But we'd like to make sure that we have core concepts that are part of this. This is also why we're working with things like the conformance team, to make sure that we have the right set of conformance tests, so that the right set of concepts about how one runs Kubernetes remains, so that you have some...
F: So, one comment you made there concerns me a little bit: you said that, oh, there are distributions, and so they can do what they want with this stuff. The idea of the distribution is not something that's, like, official with Kubernetes, and personally I'd like to see us minimize the idea of a distribution and make upstream work in and of itself.
F: I think one of the things that we've seen with, say, the AWS cloud provider is that it was sort of co-developed with kops, and so it's difficult to use that AWS cloud provider outside of kops, just because of the documentation and the testing; it wasn't viewed as a separable thing. I think some of the Google integrations are in a similar position where, because of, you know, the sort of co-development around GCE, there's not a lot of great documentation about how you use those things.
E: One of the problems that all of the in-tree systems have today is that when they pull in the new release and then try to push it out, they discover it's broken, and suddenly they're trying to force a faster-than-normal OSS point release. We'd love to be able to fix that. At the same time, to your point, we'd like to make sure that we're not fragmenting what Kubernetes means, and toward that there are a couple of things, though more are welcome. One, as I mentioned, is the conformance tests.
B: There's a piece of the puzzle that has not been fully vetted, with regards to what William Denniss has been calling conformance profiles, and that relates to cloud provider layers of support with certification, which I kind of think of as a graduation criterion. You know, as we start to unify these pieces out of the tree, that should probably be a requirement, so that, if people slap together these pieces, they have a way of guaranteeing that this cloud-based storage is, you know, tested adequately within their environment, because otherwise you get drift of behavior.
E: So there's definitely a lot of work, and from our side there's also a lot of work being put in, not only to get rid of a lot of those sorts of problems where it says, hey, this test only runs on GCE, but also, you know: how do we push code changes out to the various cloud providers, and then how do we gather the results in? What are the expectations? And, you know, if changes happen, how...
B: That's part of the conformance working group. I think the only requirement I'm stipulating is that, as we start to graduate these pieces, there is an adequate set of tests that's cloud provider agnostic and that can be enabled according to the spec, which is not fully formed yet but has been articulated many times, about having a provider layer. Yes? No?
E: Agreed. And, I mean, one of the things going on when I talk to the conformance test folks, because we're trying to work with them to make this happen right now: the last time I talked with them, there's sort of an, at least, unofficial view that the primary set of tests they're concerned with are GA features, which is a little awkward when I think most SIGs get things to beta and stop.
F: My concern, and we're winding a little bit here, is I don't want us to use the idea of a distribution as a crutch to make up for the fact that we don't have documentation, that we don't have testing. We shouldn't let the tail wag the dog, right? The distributions shouldn't be the ones driving what these integrations look like; they should be something that's layered strictly on top of the integrations. And so we should look to avoid sort of that...
F: ...that sort of, you know, unhealthy symbiotic relationship that can form between these install mechanisms, distributions, whatever you want to call them, and the cloud providers. And I also think it's somewhat unrealistic for us to assume that every mechanism for installing Kubernetes is going to be owned by the CNCF. I think the reality is just that there's going to be a huge amount of diversity there.
F: Our goal with cluster lifecycle is not to try and bring every way of installing Kubernetes into one sort of, you know, framework, but instead to provide a toolbox, so that we can, you know, organically create as much commonality as possible across these things. So I think the cloud provider should be a similar type of thing: let's provide a toolbox that starts to rein in the amount of drift in a more natural, organic way.
E: Agreed; let me just be clear, I completely agree. And two points on that. One: we already have cloud providers outside the CNCF, Rancher and a few others, so that's the truth today. The other thing is, this is why I worry about things like the generic controller manager: I'd like the controller managers working in a consistent way, and, from my perspective, it's part of that toolbox.
F: So, I mean, you know, again, the conformance stuff is definitely going to be a big part of this, and I think identifying the different types of integrations and really keeping those separate. So you can say: OK, you have a persistent volume integration; if you have the persistent volume integration, that means that you're going to be able to pass these conformance tests when everything is said and done. That's separate from having, you know, a service-type load balancer integration.
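(To make the "integration implies a test set" idea concrete, here is a hypothetical Go test sketch: the provider declares which integrations it implements, and each profile's tests skip unless that integration is declared. The PROVIDER_INTEGRATIONS variable and the test bodies are invented for illustration.)

```go
// Sketch: conformance "profiles" keyed off declared integrations, e.g.
//   PROVIDER_INTEGRATIONS=persistent-volume,load-balancer go test ./...
package conformance

import (
	"os"
	"strings"
	"testing"
)

// declared parses the made-up PROVIDER_INTEGRATIONS env var.
func declared() map[string]bool {
	out := map[string]bool{}
	for _, s := range strings.Split(os.Getenv("PROVIDER_INTEGRATIONS"), ",") {
		if s = strings.TrimSpace(s); s != "" {
			out[s] = true
		}
	}
	return out
}

func skipUnless(t *testing.T, integration string) {
	t.Helper()
	if !declared()[integration] {
		t.Skipf("provider does not declare %q; skipping profile", integration)
	}
}

func TestPersistentVolumeProfile(t *testing.T) {
	skipUnless(t, "persistent-volume")
	// A real test would create a PVC here and assert that it binds.
}

func TestLoadBalancerProfile(t *testing.T) {
	skipUnless(t, "load-balancer")
	// A real test would create a Service of type LoadBalancer and poll it.
}
```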
F: Another thing is that we say "cloud providers," but in reality, you know, you may have a Hitachi storage array which is providing your persistent volume stuff, and sort of, you know, an F5 load balancer that's providing your load balancing, and you'd have different integrations for those. In that scenario you're definitely much more in a mix-and-match type of thing, and it doesn't really fit the more traditional sort of "I'm running on Amazon" or "I'm running on GCE" or what-have-you type of thing. Yep.
F: Yeah, and I think that's going to be critical for things like managing rate limits with Amazon, right? Because if you have all these separate binaries and they're not coordinated in terms of, you know, how hard they're hammering the AWS API, you're going to get throttled pretty quickly. So, yeah.
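(On the throttling point: the usual client-side mitigation is to funnel all of a binary's cloud API calls through one shared token-bucket limiter; coordinating across separate binaries is the harder part the speaker is getting at. A minimal Go sketch with made-up limits, using golang.org/x/time/rate:)

```go
// Sketch: one shared limiter in front of every cloud API call a binary
// makes, so a burst of controllers cannot hammer the provider's API.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// 10 requests/second with a burst of 5; the numbers are illustrative.
var limiter = rate.NewLimiter(rate.Limit(10), 5)

func callCloudAPI(ctx context.Context, op string) error {
	// Wait blocks until the token bucket allows another request.
	if err := limiter.Wait(ctx); err != nil {
		return err
	}
	fmt.Printf("%s calling %s\n", time.Now().Format("15:04:05.000"), op)
	return nil
}

func main() {
	ctx := context.Background()
	for i := 0; i < 20; i++ {
		if err := callCloudAPI(ctx, "ec2:DescribeInstances"); err != nil {
			panic(err)
		}
	}
}
```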
B: This discussion kind of leads into a DAG of execution. I know we're going to talk about 1.10 planning next, and I know one of the items is with regards to component config for the controller manager, so I just wanted to make sure that we touched on that a little bit and kind of realize that this is going to cross many, many areas.
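(For readers new to "component config": the idea is that a component's configuration becomes a versioned, serializable API object rather than a pile of command-line flags. A toy Go sketch follows; the type name, group/version, and fields are invented for illustration.)

```go
// Sketch: a component's flags expressed as a versioned config object.
// The group/version and fields here are hypothetical.
package main

import (
	"encoding/json"
	"fmt"
)

type ControllerManagerConfiguration struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	// Controllers selects which loops this manager runs, one way to
	// express "this controller runs in exactly one manager".
	Controllers       []string `json:"controllers"`
	SyncPeriodSeconds int      `json:"syncPeriodSeconds"`
}

func main() {
	raw := []byte(`{
	  "apiVersion": "example.config.k8s.io/v1alpha1",
	  "kind": "ControllerManagerConfiguration",
	  "controllers": ["node-ipam", "node-lifecycle"],
	  "syncPeriodSeconds": 30
	}`)
	var cfg ControllerManagerConfiguration
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("would run controllers %v every %ds\n",
		cfg.Controllers, cfg.SyncPeriodSeconds)
}
```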
G: I think our goal for the Kubernetes project should be to get Kubernetes adopted and create a great experience for our users. I don't think we should express it only in terms of inside baseball, right? It should not be about, you know, the structure of the project; it should be about how we get everyone in the world using Kubernetes and loving using Kubernetes. We might say that our strategy is to do things like kubeadm, but we shouldn't...
A: All right, thank you so much for joining, Walter. You are free to hang around for the next part, although it may not be of extreme interest to you, but that was very helpful; thanks so much. So, next on the list I have 1.10 planning, and the kubeadm GA list is on there below that. I think we touched on the GA for kubeadm last week, and it might be worthwhile doing that first, before we do 1.10 planning. So let me find the link for that.
B: Yeah, and this is probably one of the larger groups post-holiday that we've had, you know, sitting on the call. So I think soliciting feedback from this broader group about what things they consider to be requirements for kubeadm to be GA is a worthwhile topic for discussion, and we've...
A: I think Lucas was hoping that it would be beta so that we could use it to configure kubelets, because it would make our support and operating story easier going forward. I responded to the comment there, because I think from our side we were thinking that it just barely missed 1.9; a small piece of it barely missed 1.9. But he says the overall feature, he thinks, will land roughly mid-February, which to me means risky for 1.10, right?
D: I think some enterprises won't. Some enterprises have policies that they will not adopt beta software; they just won't touch it, and that's a set policy. Outside of enterprises that have policies like that, I don't think it would really stop anything, because kubeadm has been around for so long. But, yeah...
B: I think we've been erring on the side of caution, which isn't a bad thing, on our promotions for sub-elements, as well as the promotion of kubeadm itself. We just want to make sure that we don't prematurely promote kubeadm, or the configuration of it, and that we're actually getting good feedback and doing the right thing by our users. I don't know if Google has any, like, survey apparatus, but I think this is kind of one of those spaces where a community survey would be a useful thing.
A: Yeah, I mean, Brendan used to send out surveys, you know, just using Google Forms. The question is how to reach the right audience, right? Like, if we just send it out to kubernetes-dev or kubernetes-users, is that going to touch the people that we think are blocked? Because, honestly, if you're not working at a large enterprise, you know, a lot of people are already using kubeadm.
B: That's true; I don't think it hurts. But I don't think we're going to hit GA in the next cycle, so I think slow-rolling this and getting a survey out would be a worthwhile effort. That way we can at least make sure that we have the bits that people would like in a GA release when we say we're GA and we're supporting them.
B: We also mentioned bootstrap tokens to GA; that, I think, is pretty uncontroversial. I did see that Lee (is that how you pronounce her name?) put up the PR for TLS on the kubelet's local endpoint, and for making sure that not just any master could write to etcd, only root. That PR was put up; I have a window of PRs to go through and review, and I also talked more about that one this morning, too.
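(For reference, the bootstrap token under discussion has a documented shape: a six-character public token ID and a sixteen-character secret, matching [a-z0-9]{6}.[a-z0-9]{16}. A small Go sketch that generates and checks that format:)

```go
// Sketch: generate and validate the bootstrap token format
// "abcdef.0123456789abcdef" (id.secret).
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
	"regexp"
)

const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

var tokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

// randString draws n characters from the token alphabet using crypto/rand.
func randString(n int) string {
	b := make([]byte, n)
	for i := range b {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			panic(err)
		}
		b[i] = alphabet[idx.Int64()]
	}
	return string(b)
}

func main() {
	token := fmt.Sprintf("%s.%s", randString(6), randString(16))
	fmt.Println(token, "valid:", tokenRe.MatchString(token))
}
```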
B: Well, yep. Given the feedback we have so far, I think we are not likely to hit GA in the 1.10 cycle, and I know that Robert wants to get to the 1.10 backlog, or the priorities for this cycle. I think what makes a lot of sense is to send out the survey, or have the folks who are on the call review that GA list to ensure sanity, and we can ping it periodically to make sure we're hitting the mark.
A: For the folks on the call, you know, even before we send out the survey: are there things that are not on this list that you think should be? The one thing we kind of skimmed over was at the bottom of this list; it says non-goals, and in particular, kubeadm doing HA seamlessly out of the box, we've said, is at this point a non-goal.
F: I think it seems reasonable, because, you know, addressing the masters when you have a set of them is not a solved problem, and figuring out how to distribute secrets and swizzle those around is not a sort of one-size-fits-all problem. I don't think that we can achieve some of the simplicity that we want without making some less-than-optimal decisions in terms of how we actually get this stuff done.
C: The previous point is quite similar to that. It's like, you know, upgrades being slightly complicated but still GA: if you resolve the kubeadm flag and configuration issue, that's great, but if that sort of lags behind, then it's not a big deal; it's just a little complicated to upgrade. It can still be GA as long as it works, right?
A: The tricky part around that, Joe, comes in when we say we support HA, and then we say, here's the caveat: I have an upgrade command, and if you've done some extra steps to make HA work, then that may not just work with the next version of Kubernetes; you're going to have to do something a little bit extra each time you upgrade, probably, as well. I think that might be a confusing user experience.
F: I think, to some degree, you know, if we have the right documentation, and the toolset works within that documentation, then it's supported and it can be GA. We really need to be viewing the documentation, those upgrade guides, as part of the product, not as sort of a separate thing.

A: Yeah, I definitely agree with that.
A: Yeah, there were a number of things that Lucas said we should probably just call beta, even if the tool itself is GA. So there will be a number of ways to use the tool where you're only using the GA parts of the tool, and if you use the tool with certain flags or certain feature gates turned on, then you're using, you know, alpha or beta pieces of functionality within that tool. That's pretty consistent with other, you know, parts of Kubernetes and other, you know, CLI tools and so forth. Right, I think.
B: If it's all self-hosted, it actually works pretty seamlessly and cleanly, except for the etcd portion, which will require explicit configuration based upon how you've set up and installed etcd, because we support multiple ways of doing that. With the documented one that Jamie has created, we would have to go across and do an explicit, you know, modification to the manifests to roll in the new versions. And we'd also want to make a couple of documented things beforehand clean and clear, like: back up your cluster before you do this; warning, warning.
A: So, next on the doc: the cluster API work that we're driving. We have a lot of people at Google working on this, and we have some somewhat aggressive goals here. We have a breakout meeting on Wednesdays, and we have a lot of community interest and a lot of folks that are starting to contribute code. I keep getting pinged by new people that, you know, see the talk that Chris and I gave at KubeCon, or stumble across some of our documentation, and say, hey, we've been working on a very similar project...
A: ..."How do we work together?", which is great. So I think, as Joe was saying earlier with some of the cloud provider code, there's a lot of code out there that does sort of this infrastructure provisioning stuff, and the more that we can create some shared libraries and shared APIs, I think, the better that'll be for the community. And again, you know, we don't want to mandate it, but as people are sort of voluntarily coming together and saying, we think it'd be great to sort of work on something common...
A: ...we're going to keep pushing forward on that. So what I've written down there is: we want to have, you know, a solid alpha API. We sort of had an alpha API as of the end of last year, but we want to sort of solidify that, with multiple implementations underneath, to vet that the API is working, and we want to add MachineSets on top of the Machines that we have so far.
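(To illustrate the "MachineSets on top of Machines" idea: a Machine declares one desired node, and a MachineSet stamps out N Machines from a template, much as a ReplicaSet does for Pods. The structs below are illustrative stand-ins, not the real alpha cluster API types.)

```go
// Sketch: Machines and a MachineSet that expands a template into N
// Machines. Field names are invented for illustration.
package main

import "fmt"

// MachineSpec describes one desired node.
type MachineSpec struct {
	ProviderConfig string // opaque provider-specific detail, e.g. instance type
	KubeletVersion string
}

// Machine pairs a name with a spec.
type Machine struct {
	Name string
	Spec MachineSpec
}

// MachineSet asks for Replicas copies of a template Machine.
type MachineSet struct {
	Replicas int
	Template MachineSpec
}

// Expand materializes the desired Machines, the way a controller would.
func (ms MachineSet) Expand(prefix string) []Machine {
	out := make([]Machine, 0, ms.Replicas)
	for i := 0; i < ms.Replicas; i++ {
		out = append(out, Machine{
			Name: fmt.Sprintf("%s-%d", prefix, i),
			Spec: ms.Template,
		})
	}
	return out
}

func main() {
	ms := MachineSet{
		Replicas: 3,
		Template: MachineSpec{ProviderConfig: "n1-standard-2", KubeletVersion: "1.9.1"},
	}
	for _, m := range ms.Expand("worker") {
		fmt.Println(m.Name, m.Spec.KubeletVersion)
	}
}
```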
A: Both, you know, in kubeadm and in the cluster API, in terms of creating sort of good ways for users to configure their cluster. I know that Justin put in our SIG's mission statement that we should be driving this across the project, and I think I've narrowed down the list of other SIGs and the list of components that I think we should really be pushing on here, to just sort of the control...
A: ...the sort of core control pieces of a cluster, basically the things that kubeadm installs. And, you know, we shouldn't be spending our time going and talking, you know, to the folks that run Heapster and trying to convince them to switch over until we've gotten the core pieces of the system switched over first. So, the kubelet is in flight; you know, Michael Taufen is pushing on that already, so we don't need to do too much there to sort of encourage progress.
B: Just to be clear, the controller manager isn't done; there's no one signed up to execute on that bit. So, for folks who can work on that, I can help review it and guide you through the examples that exist for the proxy and the scheduler. I know Andy had blazed the path there, but I know he's busy with other things.
B: If there are folks who can step up in the controller manager space, great; the API server area, I think, would be much thornier, because there are all kinds of security constraints and everything else that goes along with it. So I would recommend any volunteers have a good footing in sig-auth before they start to go into component config for the API server, right.
A: ...but it's not, you know, at the top of their list. So I'm hoping that we have sort of a carrot there, in the sense that it makes it easier for him to actually split the code apart, and that's an incentive to actually build it during this early cycle. But I do think, as you're saying, if somebody wants to step in and help, that would be a great place where I'm sure Walter would love to have other people help with that code and help with that migration process.
A: So, third on the list, I put kubeadm to GA. I had this as a P0, but after our last discussion I downgraded it to a P1 and marked some of the sub-items as P0, because I think we talked about bootstrap tokens to GA, which we expect to land, and the work that Lee is doing on securing the etcd endpoint with TLS.
A: There's also sending out a survey, to figure out from our users exactly what they're expecting. If we can get all that stuff done, then, like, we have, you know, a small chance of getting to GA in 1.10; otherwise, I think we're set up pretty well, depending on the results of the survey, to get there in 1.11, which, if I have that right, would be around the middle of this year. I also put documentation on there; I think, as Joe mentioned earlier...
A: ...we need to view documentation as part of that GA process, and for a lot of the things that we expect people to do with kubeadm, we just need to make really clear documentation of how you do those tasks, right? So we shouldn't rely on, you know, the built-in help, when you run "kubeadm help upgrade", to tell users how to run upgrades, for instance; we should actually have some good documentation around that. Everything else in the doc I've downgraded to P3, I think, pretty much.
A: I don't think it's worth trying to make that the default as we're trying to push for GA; we should make it work, and we should make sure it's sort of beta quality, but we shouldn't make it the default. Next is HA support. I think we should maybe redefine this as documentation of HA support, as opposed to trying to build HA into the tool itself. If we call out documentation of HA as a separate line item, I think that would be a P1 or P0, but actually building the functionality...
First,
seamless,
H,
a
in
Q
Batman
itself,
I
think,
is
a
pretty
low
priority
at
this
point.
Next
one
is
the
push
forgetting
api
is
defined.
This
is
obviously
copied
from
last
time.
It
didn't
didn't
make
him
to
1:9.
I
think
we're
at
this
point
waiting
on
component
config
to
get
a
little
bit
further
so
that
we
can
use
component
config
to
define
the
cluster
configuration
itself.
A
Next.
We
have
better
test
coverage.
We
got
partway
through
this
last
quarter.
I,
don't
I
sort
of
carry
this
over
to
see
if
there
were
any
other
things
that
people
wanted
to
add
in
the
space
this
corner
or
if
this
is
something
we
should
drop
because
we
feel
like
we
have
better
test
coverage
for
the
features
that
already
exist.
B: I need to talk with Jeff Grafton about what the state of the Bazel builds is. It would make life a lot more seamless if we could just build from the root and have all of the artifacts tested as part of every single test run; that would be super-dee-duper. But that was not the state of things for a long time, so let me poke him and follow up to see where we're at. I know the code is there, the bits are there, but we were still waiting on the cross-platform build capabilities.
B: I don't think that should really hold us up, because, are the tests even running on, like, other architectures? Does kubeadm build for macOS, or is it just Linux?
A: At the contributor summit, I think our goal for 2018 needs to be to figure out what add-on management means, and that's something we should probably get started on now, in terms of, sort of like the kubeadm GA, maybe sort of trying to, you know, send out surveys or pull together the primary stakeholders, and just start getting some consensus on what add-ons are, before we start trying to build a solution to manage them. And finally, Lucas had one in here about kubeadm phases and a new API group. I think this...
A: I think this week was pretty busy, but next week might be a really good time to do a demo for that, or in two weeks. Yes, or two weeks, sir, whenever you'd like. Thank you.