From YouTube: Kubernetes SIG Cluster Lifecycle 20180612
Description
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.k6sq8orkf4wg
Highlights:
- etcd issues
- Documentation for 1.11
- Kubeadm upgrade tests
- Cluster API alpha exit criteria
- Cloud provider documentation
- Themes for 1.11 release notes
C
Yeah, so yesterday, when I was doing just this kind of firefighting and bug fixing, I noticed a couple of etcd problems and warnings in the logs. So Jason, myself, and Chuck have been working on those and some others as well. So, Jason, do you want to summarize the fix that you came up with in the end here, after we had done some slacking? Yeah.
D
So there were a few different issues that we uncovered. The first one was that we were not actually properly securing the peer port for etcd. We had configured the client auth and assigned certificates to it, but because we didn't set the actual listen URLs for the peers to HTTPS, etcd was basically discarding the TLS configuration that we had in place, so it was essentially an unsecured peer port. That's the first PR linked in the notes.
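The flag combination being described might look roughly like the sketch below. The transcript does not give the exact flags kubeadm generates, so the paths and addresses here are illustrative placeholders; the point is that peer certificate flags alone are not enough.

```shell
# Illustrative etcd invocation: if --listen-peer-urls uses the http
# scheme, etcd serves the peer port without TLS even when the peer
# certificate flags below are set. Paths and addresses are placeholders.
etcd \
  --listen-client-urls=https://127.0.0.1:2379 \
  --advertise-client-urls=https://127.0.0.1:2379 \
  --listen-peer-urls=https://127.0.0.1:2380 \
  --initial-advertise-peer-urls=https://127.0.0.1:2380 \
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
  --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --peer-client-cert-auth=true
```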
D
Yeah. The other fix in that first PR was also for updating the SANs that we assign for the peer port, because they weren't including localhost by default. That's just to be able to bootstrap the cluster for a single node without any overrides: we don't really want to expose that port out beyond localhost, so we just use localhost for that.
C
I would hope etcd in the future would be smart enough that if you set the peer flags, it would default to HTTPS, instead of you having to set both for it to be valid. But that is not the state now, so we have to set both, which is done at master; the PR just merged. And increasing the timeout will also hopefully fix our upgrade tests that are currently failing. Ruben had some other ideas that we might want to look into with regards to the upgrade tests.
A
Yeah, I'd love a bug upstream against etcd. It sounds like they are sort of misusing that cert, or it's at least misnamed. I know that internally at Google, when I talked to our security folks about certificates, they've said that it's better not to use the same cert for server and client, and it sounds like that's what etcd is doing here. Yeah.
D
There's an existing issue out there. I don't know the awareness of it, but I'll go ahead and dig that up and add it to the meeting notes as well. I also don't like the idea; the peer certificates drive me crazy as well. There should be dedicated certs for each usage, I think. Yeah.
C
Yeah, I also think we can open an issue for better error detection: if you specify peer certs, the default peer arguments should be HTTPS by default, instead of just a kind of silent warning in the logs saying that we won't use your peer certs because you didn't set both. I think that's a fair enough thing to implement for future releases, at least.
A
Yeah, that sounds right to me. I think this is the right way to fix it in the short term, but we should work with the etcd folks, as Jason said, not just for this certificate but maybe for all certificates, to figure out if there's a way to split the peer certs into actual client and server certs, to separate those roles better. Yeah.
C
So yeah, I think that's it regarding the etcd stuff. I'm really satisfied with the speed of this, like, generally: 24 hours ago we hadn't even started with this, and now we already have, like, two or three PRs fixing stuff. So just a huge thanks to everyone involved. Then we have other outstanding bug fixes; there's the PR there. Yeah.
B
So I just want to... I know everybody's debugging right now and there's a lot of noise in Slack about all the issues that are going on. So I wanted to see if other folks are aware of any other bugs, besides the etcd one and the blanket PR that Lucas put up for a bunch of minor bug fixes.
B
We already have a release note against this. If people are modifying the existing unit file that's there, that's kind of... you shouldn't be doing that anyways; there are ways to extend it. So I'm okay with the release note with the current update, because you're not going to be able to affect it unless you put a pre rule on upgrade inside of your spec file on your debian, so you can.
C
I wasn't thinking about that. I was thinking about the kubelet configuration: after you've upgraded your cluster to, let's say, 1.12, you want to get that kubelet configuration for 1.12. It's in a ConfigMap, and you download that on every node, and when you run that command, the downloaded thing is just going to override the 1.11 config that is there. Do we want to, like, somehow... I think it's...
B
I commented on that individual issue, and I was surprised, because I'm not a debian expert for deb packaging, but I do have lots of history with rpm packaging, and usually spec files can maintain and control your directory structures and blast them out at will. So typically that should be defined inside the packaging rule rather than shelled out as a separate script to create it. So I think he closed the issue and he talked with XD about doing some other thing; I think the reason he closed...
C
Ok,
so
there's
some
basil
stuff,
yeah
Jeff
just
said
you
should
use
some
other
basil
tools
rule
or
something
yeah
okay.
Well,
we
should
really
check
that
because,
as
Ruben
pointed
out
just
like,
when
I
will
go
or
something
there's
some
kind
of
weirdness
when
installing
the
community
and
I
dab
in
our
EDD
tests
upgrade
list.
If.
E
Speaking about that, we are okay in terms of the features and the new commands we have, so that's now okay for the code freeze; we don't have anything else like it. The remaining surface, like refactoring of some of the pages, that's overdue in so many ways, and I mean, I was speaking with Jennifer about this. We should possibly, like, extend this into the following weeks; I don't think we can fix everything for today, because today is the doc freeze.
C
We have "running e2e tests against kubeadm clusters", but that's done. I'm doing the packaging in kubeadm and, like, upgrading kubelets. I think we can punt "securing your kubeadm cluster" further; that is already there, but it's in the general, like, monolithic kubeadm reference guide, so splitting that up into another doc can come during the next cycle. That is not critical.
C
I'm just chiming in: we have automatic conversions, and we have the kubeadm config migrate command that can translate it for you client side. But yeah, definitely, that is something I'm trying to write, but it's taking time, and I got interrupted by the etcd stuff yesterday. Yeah, so.
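The client-side translation mentioned here can be sketched roughly as follows; the file names are placeholders, and the exact API versions handled depend on the kubeadm release in use:

```shell
# Sketch: read a kubeadm config written against an older config API
# version and write it out converted to the version this kubeadm
# binary supports. File names are placeholders.
kubeadm config migrate --old-config old-config.yaml --new-config new-config.yaml
```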
C
I think that configuring your control plane is a really important one. Yes, so if you can write up common scenarios, like: this is how you do the API server logs, this is how you do this and this and this, using, like, concrete examples of the config file, and saying that, for example, apiServerExtraArgs is going to change, because eventually, someday, we're going to have component config, which is structured; this is just a string-to-string flag map. Yeah.
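As a rough illustration of the string-to-string flag map being described, a 1.11-era kubeadm config might look like the sketch below. The API version and field names here are from memory and may differ between releases, and the flag values are placeholders:

```yaml
# Sketch: passing extra flags to the control plane via kubeadm.
# apiServerExtraArgs is an unstructured map of flag name to value,
# which is why it is expected to change once structured component
# configs land.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
apiServerExtraArgs:
  audit-log-path: /var/log/kubernetes/audit.log   # placeholder value
  v: "2"
controllerManagerExtraArgs:
  node-monitor-grace-period: 40s                  # placeholder value
```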
H
Yeah, sorry, I've been digging into that, and right now my main obstacle is that I'm not super familiar with the testing framework and how things are set up, so I'm first trying to figure out how that part works before I keep digging into the logs and find out where it is. I see something weird going on, but I first need to see how things are actually happening on the test side. So once, you know, I have more familiarity with that area, I'll be able to provide more useful information.
C
In general, I hope that the upgrade tests will be greener with Jason's latest patch; that should unblock the etcd upgrade. I'm going to check manually on my machine whether I can get an existing upgrade working, and if I can't, or if I can, there's probably something wrong with the packaging or setup in testing. But yesterday it was broken for real, as I couldn't get it working locally either, and that PR is now in, so now the testing flow technically should be working.
I
Just a query from the point of view of sig-release: what do we think is going to cause us to move the date, right? That's the kind of thing that we need to look at. It doesn't seem, from the conversation so far, that there is anything that's going to block us, but is there something that is likely to block us, and if so, you know, how?
C
And also, getting the etcd fix in that just merged was really critical, because otherwise, if we hadn't noticed that in time, we would have had a problem. But with that in mind, and Jason saying he had a successful upgrade, and with me and others going to verify from latest master now, I think we have a way better signal. So that was the crucial part, and we hope we have that under control.
C
I'll open the issue. We often have an issue in kubernetes that is, like, a joint kubeadm team and release team issue, so we can coordinate: okay, we have five open issues in the kubeadm repo; how critical are they all, can we move them out, do we need to get fixes like this in today, or whatever. I'll open that issue today. We have...
A
I mean, right now we are asynchronous with kubernetes releases. There is an outstanding issue about how we want to cut builds and so forth, that Kris Nova brought up on the Cluster API call last week, about whether we want to be tied to the main kubernetes release or not. I think there are pros and cons. Obviously, kubeadm currently is tied to the main release, which has the upside of sort of giving you the forcing function of getting everything buttoned up for the release, and of being able to be sort of blocking for kubernetes releases.
A
It also has the downside of being tied to the kubernetes release, and I know that kops is divorced from the kubernetes release, and that gives some more flexibility. So we're trying to look at sort of the different models of what we can do, and talking to other folks in the community about what their plans are. So Phil Wittrock, who is working a lot on client tooling, has some interesting thoughts about potentially breaking some of the kubernetes client tooling away from the main release as well.
A
So we certainly don't want to sort of skate to where the puck is today; we want to see where the puck is going, and think about how it fits in with other tools in the ecosystem and what their release cadence and cycle is going to be like. So there's an open issue for that; I can link that here if people are interested in putting down thoughts or opinions on what they think a good release cadence or release cycle would be.
A
That is a good question. We've been trying to figure out (a) how to exit alpha, and then (b) how to start putting e2e tests on top of the cluster API. We've been reluctant to put tests on top of it until it's at least somewhat stable, so I think...
A
If you look at the issue Rob linked, once we burn down those outstanding issues, we'll feel like we have at least a more stable core that we can start looking at porting tests on top of.
A
One
thing
is
to
be
a
little
more
stable
before
it's
the
foundation
for
other
people's
stuff,
where,
if
we
make
a
breaking
change
that
we
have
a
rippling
effect
and
and
there's
a
lot
of
overhead
to
going
and
fixing
all
of
those
ripple
effects
at
this
phase
of
the
project,
so
you
know
Robo
saying
maybe
July
we'll
have
alpha.
If
that's
the
case,
then
we'd
look
at
starting
to
figure
out
how
to
rebase
them
in
dentistry.
After
that,
we.
C
So we can really spec out the details without having to go into, like... there's some binary, like kubetest or whatever it's called, I don't remember anymore, but with that we don't have to build our stuff into kubernetes itself just for getting e2e tests run, because that is currently the case. That is another, more general problem as well.
I
Rob, the question was: right now the alpha criteria seem to point to one provider. Are we going to have more than one provider for beta that we will list as a criterion, or what is your thought process? I mean, right now it's vSphere and GCP, right, and I don't know how frequently people are testing the vSphere one.
A
So, assuming we put those infrastructure pieces in place, if people are willing to contribute implementations on different environments, then we can put those in the dashboard. If nobody contributes those implementations, do we say we aren't going to exit beta until we have those implementations? So I guess that is, I think, the question in my mind. Yeah.
C
What I meant was... yeah, so I was just thinking more generally: it's really good that the other three cloud providers are going beta now in 1.11. I haven't tried one out recently; I tried it like a year ago or something, when we started prototyping, but not now. I've just been reviewing some of the docs they have. So currently we have two modes.
C
We
have
like
entry
you
can
set,
and
then
you
do
cloud
provider
whatever
set
that
are
using
API
sub
X
drugs
and
controller
manager,
X
drugs
and
in
the
cubelet
extracts
all
these
three
exist
in
the
Kuban
in
config
file.
So
it's
now
like
possible.
Tsatsis
use
for
the
first
time
to
use
the
entry
cloud
provider
directly.
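A rough sketch of the in-tree mode being described, assuming a 1.11-era kubeadm config; the field names are from memory and the provider value is just an example:

```yaml
# Sketch: enabling an in-tree cloud provider by passing the same
# cloud-provider flag in all three places kubeadm can set it.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
apiServerExtraArgs:
  cloud-provider: aws        # example provider
controllerManagerExtraArgs:
  cloud-provider: aws
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
```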
C
Then
now
have
like
the
conformance
tests
and
stuff
uploaded
and
run
like
Morris
cube,
see,
they'll
apply
and
an
add-on
on
top
of
the
cluster
and
Chuck
said
the
other
day
here
that
you
need
to
specify
a
cloud
provider
external
to
cubelets,
and
to
do
that,
you
basically
have
to
give
cubed
M
Joe
in
a
config
file
with
cubelet
ex-drug
setting
cloud
provider
to
external,
and
that
is
something
we
should
should
like
document
as
well.
That's
this
is
standard.
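The join-side configuration described here might look roughly like this, again assuming the 1.11-era config API, with field names from memory:

```yaml
# Sketch: a kubeadm join config that starts the kubelet with
# cloud-provider=external, leaving cloud initialization to an
# external cloud controller manager.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: NodeConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
```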
B
The in-tree... yeah, so the documentation is slightly maddening, and by slightly I mean very. It's not very consistent, and there are some alpha features that are called out in pieces that are kind of disconcerting for things that we are recommending folks to use. So do we know if there's a single owner, like, do we have a single throat that we can choke, to understand?
C
So, like, sig-cloud-provider has at least Jago Macleod or something, as well as Andrew, Andrew Sy Kim from DigitalOcean, as sig leads, I think, and then we have Sidharth Mani as well. So these are three people you can ping in sig-cloud-provider, as they are leads, if you want to get some clarity into stuff. But that's for the more generic questions; then for every single cloud provider there are a lot of differences, so then, like, if you...
C
So we could definitely do something there. But yes, as Chuck said, the recommended way to do stuff with the existing cloud providers is to, like, run kubeadm init, initialize all your kubelets with cloud-provider=external, pass the flag to the kubelet, start up your kubernetes cluster, and then deploy, with kubectl apply for example, the cloud controller manager. That is a binary that is released by kubernetes, and there's an image, there's everything, like the normal controller manager, and this is likely the stopgap.
C
So
you
can
cube
C
to
apply
this
binary
for
the
existing
cloud
providers.
Just
setting
cloud
provider
equals
AWS
or
whatever
and
deploy
that,
and
it
will
do
its
stuff
and
untain
the
cube,
let's
initialize
them
and
similar.
So
that
is
like
the
idea
that
you
can
do
this
new
flow
using
the
existing
cloud
providers
and
when
the
existing
cloud
providers
have
broken
out
all
the
dependencies
and
all
their
stuff
into
a
new
repo,
have
a
good
release,
process,
etc,
etc,
etc.
C
You
can
just
switch
the
image
like
a
rolling
upgrade
like
any
controller
any
operator
in
the
cluster
you
can
just
like
do
a
rolling
upgrade
or
deployment,
and
now
you
have
a
fully
functional
out
of
three
cloud
provider,
so
that
is
the
like
migration
plan.
Either
you
go
in
tree,
set
the
cloud
provider
flag
in
every
place
and
and
similar
or
you
go,
are
the
three
sets
cloud
provider
external
and
all
the
cubelets
and
run
the
cloud
controller
manager
or
your
specific
binary
like
digitalocean,
has
something
run
your
binary
on
top
of
the
cluster.
C
We should do the general one, but the problem is we get all the issues of people doing, like, "I do kubeadm init with my AWS cloud provider and it doesn't work", because, say, I have another hostname than AWS expects me to. That is, like, the second most common problem if you use this with kubeadm. So we kind of get a lot of that, and collaborating with sig-cloud-provider and doing, at least, this kind of centralized place: here are the cloud providers, here...
C
Yeah, yeah, both of those, but CoreDNS could be a good thing to note there. So "kubelet security"; actually, we could say "kubelet configuration", and yes, because what we do now is, like, we have a breaking change in the sense that we disable the read-only port, for example, to be more secure, and stuff like that.
C
There are also all the new commands. It's going to be, like, multiple levels: we're going to have major themes first, major themes for all the SIGs, then a lot of action-required stuff with PRs and similar, and then, after that, we have, like, the individual SIGs' also-relevant "what you need to know" about PRs. So, like, the PR that adds new commands will be listed there, what changes behavior, etc. So there's going to be multiple levels of granularity there, but also the config file controlling the kubelet going in as default.