From YouTube: Kubernetes SIG Cluster Lifecycle 20180124 - kubeadm
Description
Meeting Notes: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#heading=h.ngmmgzhwdlbf
Highlights:
- Issue triage for the 1.10 milestone
Alright, let's go back to the 1.10 milestone. I figured we'd start with the 1.10 milestone so that at least we have a subsection we all agree we're trying to execute on for this cycle, and then we can always triage some of the other backlog items over time. But I figured if we get this piece done right, then at least we have some semblance of reality, and we can burn through the rest of the backlog over time.
Does that seem like a reasonable approach? Sounds good. So this is one that I know we will probably address and that I would probably help drive through: adding pod priority and preemption to the control plane components. We had a previous annotation that was originally applied to the control plane components. We never actually deployed the quote-unquote rescheduler that prevented things from interrupting them, but the kubelet has code for this.
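For reference, the older mechanism was the critical-pod annotation on the static pod manifests, while priority and preemption works through a priorityClassName field instead. A quick way to see which mechanism a given control plane carries, assuming the default kubeadm manifest directory:

    # Inspect the static pod manifests on the master for either mechanism;
    # which field appears depends on the kubeadm version that wrote them.
    grep -H 'scheduler.alpha.kubernetes.io/critical-pod' /etc/kubernetes/manifests/*.yaml
    grep -H 'priorityClassName' /etc/kubernetes/manifests/*.yaml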
On the one about using the port or proxy, I figured this one was easy because I already know details about it. Alright, so that one's done, triaged, yay, only 31 left to go. Next: remotely call kubeadm upgrade plan. I am not intimately familiar with this one. The idea is to tell users the potential options they have via the kubeadm upgrade plan command and a webpage. Currently the command runs from the master machine and does local checks, which means the API server backing the webpage would need to SSH into the master.
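For reference, the command under discussion runs directly on the control plane node today; nothing about it is remotely callable yet:

    # Run on the master; performs its checks locally.
    kubeadm upgrade plan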
Yeah, I think we should make kubeadm-specific tests. Right now we use kubeadm and run the conformance tests against that cluster, but we're not actually testing the kubeadm features themselves that are not covered by conformance. Things that are mentioned here, like the fact that we label or taint the master node differently than the worker nodes, are never going to be checked by the conformance tests. So we should make automated tests that verify that that behavior is not changing over time.
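As a sketch of the kind of behavioral check conformance never exercises, one could assert the master's label and taint directly; the names below match the 1.9-era kubeadm defaults:

    # Show which nodes carry the master role label.
    kubectl get nodes -L node-role.kubernetes.io/master
    # Confirm the master's NoSchedule taint is still in place.
    kubectl describe node <master-node-name> | grep -A1 'Taints'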
We could verify by checking the ConfigMap; that's the most likely suspect. If the kubeadm ConfigMap exists, we know it's a kubeadm deployment. That's the ConfigMap that kubeadm now uploads by default, and from there you can probably do behavioral-level checks to make sure that these are sane.
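A minimal sketch of that detection, assuming the ConfigMap name and namespace kubeadm used by default in this era:

    # Presence of this ConfigMap signals a kubeadm-created cluster.
    kubectl -n kube-system get configmap kubeadm-config -o yaml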
You could write end-to-end tests that check each of these things, mark them with a feature tag called kubeadm, and only run them on e2e suites that you know were created a priori with kubeadm, right? You could also make it dynamically check whether this cluster should support these things, but you could just set it up so that you were only running these tests on clusters that you knew were using kubeadm and that you knew should support these things.
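One way to wire that up, assuming the standard upstream e2e.test binary: tag the specs with a feature label and run with a matching focus regex, so they only execute against clusters known to be kubeadm-built:

    # Only specs whose descriptions contain [Feature:kubeadm] will run.
    ./e2e.test --ginkgo.focus='\[Feature:kubeadm\]'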
The question is, I don't like the location of test/e2e/kubeadm as the place for this to live, because if we fork the repo, this is very specific to this environment, right? And you could absolutely do this; we do it on our side: we extend the upstream e2e test framework, building it into our own binary, to be able to create separate end-to-end tests. I don't necessarily think the regular test framework makes a lot of sense, because there's a lot of jiggery-pokery in the suppositions about how everything is stood up, and I don't think it would ever even be exercised. So having the ability to crib these tests into something is useful, but I don't think they should live underneath the main repo's e2e tests, if that makes sense. Do people agree or disagree? Because as we fork the repo, we'd have to rip them out.
I totally agree, and actually one of the folks working on the Cluster API code is asking the same question: how do I write integration tests for the Cluster API, and where should those live? I told him yesterday: outside of the main repo. So we're going to have that same problem there, and we can maybe let him try to figure out how to make that work and then copy the pattern for kubeadm.
How do we want to handle this one? Do you want to call it 'review' or 'eval'? I hate the 'next milestone' naming, yeah.
There you go. I think what we could do is use 'help wanted' as a filter at the end of the release, and we can punt all the things in that milestone that never got triaged, because there's no assignee. All right.
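A possible issue filter for that end-of-release pass; the milestone name here is illustrative:

    # GitHub issue search, pasteable into the issues page:
    is:open is:issue milestone:v1.10 label:"help wanted" no:assignee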
Use component config for kube-scheduler: they're moving stuff around, and we've got most of the things in place, but I don't know if it's all going to land in time to make it clean and tidy. I think it might be a little premature. I would almost want to let this settle for one cycle, because right now they're shuffling both the controller manager and the scheduler to move the API versioning for the component config stuff into its own location, and that's very similar to how the kubelet worked.
Yeah, and then I don't know if we have a phase command that we can run to rotate, but this seems like pretty simple logic. We want to parameterize it low enough so that people can get the desired behavior. Also, something I'm just personally uneducated about: I don't know where the options or knobs are for things like CA expiry and cert expiry. So maybe somebody knows about that.
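Short of finding the knobs, one way to inspect the expiry kubeadm actually wrote is to read the generated certificates themselves, assuming the default PKI path:

    # Print the notAfter date of the CA and the API server certificate.
    openssl x509 -noout -enddate -in /etc/kubernetes/pki/ca.crt
    openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt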
Yeah, there are a couple of people at Google that I was recently talking to about certificate rotation. Ping Mike Danese on Slack and ask him to connect you with the other folks; I don't know what their Slack or GitHub IDs are. I'm not sure if they're going to do anything actively on it right now, but they can at least talk to you about some of the thoughts they have going forward. They've definitely been thinking about this. Okay.
Yeah, I mean, is this something where we would expect that, in the normal static pod use case, we would rotate the certs individually per node? That seems like the desired experience to me: I would go onto master one, run something like a kubeadm phase certs rotate for that node, then go onto worker one and rotate the certs, then go onto worker two. Ideally it would work that way.
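The per-node flow being described might look like the following; note that no such rotate subcommand existed at the time, so the invocation is purely illustrative:

    # Hypothetical subcommand name; rotate certs one node at a time.
    ssh master-1 'kubeadm alpha phase certs rotate'
    ssh worker-1 'kubeadm alpha phase certs rotate'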
We want to make sure that that code path is pretty non-destructive. I know Peter mentioned that they use the backup directory; we'd only ever want to rotate a cert forward. Cool, I'll look at it. Okay.
Everything that's been done to date, I believe, has had its structure broken down so that it's completely importable. That was always the plan, so you could import any subsection of kubeadm, and the command line has been factored to use reusable libraries, which were then broken down into phases, yeah.
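Those phases are visible under the alpha namespace in the CLI of this era; exact output varies by version:

    # List the individual phases the kubeadm command line is factored into.
    kubeadm alpha phase --help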
Yeah, I think in an ideal world we factor into a set of composable libraries, and we're actually doing this downstream for some stuff that we're working on. It's not kubeadm-specific, but the Sonobuoy project is factored that way: its command line has tiers, which then use a library, and those libraries are broken down into sub-dependencies. That way, if people want to use build automation, they can just leverage the interface library instead of having to shell out, yeah.
Provide consistent UX for install docs: we always have a docs issue like this every cycle.
So, I sent you that link, and that requires its own TLC. I did a slog through all the PRs that are against kubeadm, and I made sure I went through Lucas's backlog too, but there needs to be a separate cleaning of the main repo; there's a lot of cruft in there, right. So what happens on that query is you will see old, old issues that are unrelated, and you'll see a lot of bug reports filed that won't necessarily still be valid.
Some of them are dispersed, right. So Lucas did a lot of things, probably too many things, and one of those things was to clean up and move things out of the mainline repo into the kubeadm repo. Users do typically file bugs in the mainline repo, but the curated ones have been pushed over here. So a lot of curation from the past has been pushed into this repo, and it does have a lot of history of the major features that have been requested over time.
But I did send out that link to all the people who volunteered and wanted to go through that list. There's about a hundred separate issues to plow through on the mainline repo. Not all of them are related to kubeadm; sometimes they just reference it in the title even though the issue is totally unrelated. You just have to see what's what, and I highly recommend applying the sig/cluster-lifecycle label to those issues; that makes it way easier to triage.
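For anyone doing that pass, the label can be applied with a Prow bot command in a comment on each relevant issue:

    /sig cluster-lifecycle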
Some of the issues would have been closed by now if we'd had the stale bot around a while ago. Yes, that's what I was trying to get at: if an issue has sat there for a year since the bot's message, it just gets closed, right? Yep. Because if we'd had the bot a year ago, it would have been closed by now.
Seems reasonable. As for the rest of this list, I don't know if people want to continue now, but I do think we need to go through the rest of the backlog at some other time. There's just a tremendous amount of history here, and without somebody who has all that history in their brain when we're driving through some of this stuff, it's difficult for people. My goal in this cycle was to clean things up and get it into a state where we can have more people come online and it's easy for them to suss things out.
That's the plumbing for the CIDR address. I was going to leave it to the Intel guy, who I think wrote the original change, and he had comments on it. I wasn't going to chime in, because I haven't been deeply involved in the IPv4/IPv6 coup, as I like to call it, and I know they have been. So, any other networking folks who really want to get into this one?
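For context, that CIDR plumbing surfaces in the kubeadm UX through flags like the following; the value shown is illustrative and depends on the chosen pod network add-on:

    kubeadm init --pod-network-cidr=10.244.0.0/16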