From YouTube: Kubernetes SIG Cluster Lifecycle 20180213
Description
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.rv6m9fojzdg3
Highlights:
- Open issues for 1.10
- kubeadm flags
- kubeadm HA
- CRDs vs api aggregation
- sub projects
- status of k8s anywhere
B
Feature freeze is not this coming Monday, but the Monday after that. I always think it's a weird date, but whatever; it allows you to work over the weekend and try to get your thing in if you need to. On the list we have for 1.10, which we triaged from previous lists and tried to pull together, there's still a large number of outstanding open items. If you can pick things up, the bug fixes would probably be the most appropriate ones.
B
I don't think you're going to be able to get features in in this time frame, but there is a fairly large number of bugs there. So if folks are looking for work, or looking for areas to improve, that would be highly beneficial. What I'll probably do after this cycle is punt anything that's not a bug.
B
So that's kind of the PSA there. The other PSA is that the audit log work is in as a feature gate, so for people that need to use it or want that capability, it's there. The reason we chose a feature gate ties into the next topic: we want to do something like a freeze on all the flags being added. It looks like Lee is not here, so I'll punt on the etcd work.
B
That has been kind of occurring: we've put a moratorium on adding flags, and where possible, if something lumps together into a feature, we try to put in a feature gate, which can be wrapped around a single flag to enable and disable it. Then we can put that in the configuration parameters, or put it straight into the configuration parameters of a single config file that we can evolve over time.
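For illustration, here is a minimal sketch of what that looks like in a kubeadm config file of that era: a feature gate wrapping a capability, plus a pass-through knob. The specific gate name and extra-arg value below are assumptions for illustration, not a confirmed list from the meeting.

```yaml
# Hypothetical kubeadm MasterConfiguration sketch (v1alpha1-era API).
# The featureGates key and the extra arg shown are illustrative assumptions.
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.0
featureGates:
  # One gate can wrap what would otherwise be several new CLI flags.
  Auditing: true
apiServerExtraArgs:
  # Knobs with no structured home yet can pass straight through to the component.
  audit-log-maxage: "2"
```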
A
This flag overview is awesome; I'd like to thank Fabrizio for putting it together. I think it really illustrates the recent explosion of flags being added. As has been mentioned on the call a couple of times, it's pretty easy to tell, given the PR links on a bunch of these, how many we are adding all of a sudden. You mentioned a config file, and I think sort of the obvious solution to this is to switch to a config file.
A
The concern with that is: ideally, we would let each of the component owners define their own component configuration, and I know a lot of these things are for the API server in particular, which doesn't have a component configuration. So we can sort of guess what we think it's going to look like and start creating a config for the API server, but then we may end up in a state where it's not possible to seamlessly convert that to whatever they eventually define.
A
I've talked to Daniel Smith and he said it's nowhere near the top of their priority list, so I think if we want to get SIG API Machinery to work on that, we're probably going to have to ask SIG Architecture to instruct them to do so, as that's sort of the only escalation path we have. If people aren't working cooperatively, that's the only way to get them to do something somewhat forcefully, right?
A
No, I think his position seemed to apply just as much to review cycles as to doing the work itself. And my other concern is: if we go in and say, here's your component configuration, then, assuming we do get review cycles, maybe they're willing to live with that permanently. Ideally they would define it themselves, because it's something they're going to have to maintain in a backwards-compatible fashion, right?
A
So yeah, we've talked about this a little bit internally at Google to try and figure out what a reasonable path forward here is, and we haven't come up with anything great. I know that kops has taken the approach that they're not willing to wait, so they've just defined a configuration format for these things. I've talked to Justin about it.
B
Yeah, we can do that, but then we have to own and maintain it; that's the whole problem. The whole problem is that we'd have to take this on, own it, and maintain it over a period of time, and the longer we drift away from the solution we really want, the longer we have to maintain this glue.
A
Yeah, I think the only upshot is that a lot of the things we are adding as flags are things we think will be configurable for the long run, right? So it's likely those will need to be in the API server component configuration anyway. Like you said, there are lots of knobs in the API server, and I think the ones we're looking at exposing as flags are things that people really do need to be able to configure, which is why we're exposing them as flags.
B
How does this sound as a reasonable approach? Somebody is going to have to do an audit of the whole thing; there are something like 160 parameters, last I checked. So somebody has to audit the API server, along with the current configuration flags that pass through from the main kubeadm facility and what exists in the configuration file, and try to map out the minimal set of knobs that we want to expose over the course of a reasonable timeframe.
A
Yeah, I think that would be great. Again, ideally someone from API Machinery would be driving this process, but if we don't think that's going to happen, we can put up a straw man and say, here's what we think it should look like. Then maybe the escalation is easier, because we're not asking them to do a whole bunch of work; we're asking them to review and sort of take over work that we've done.
A
Yeah, that sounds reasonable to me. What do we want to do short term, tactically, for the next release cycle? Like you said, we have almost two weeks, and I guess we have a number of outstanding PRs. Do we want to let those PRs go in and give ourselves a little bit of flag bloat in the short term, and then spend the next release cycle, which I think is 1.11, mutating that into configuration language, hopefully working with a couple of the SIGs to try and create reasonable configs?
B
All those PRs that were on the list have been modified: after the last meeting we poked the PRs that were in flight and pushed them to be either feature gates or configuration knobs in the kubeadm config file, so we don't have a proliferation of flags into kubeadm proper at this point, which is good. I'm glad we put a moratorium in place, but I do think we need to resolve this in the 1.11 cycle to have a story here that makes sense.
B
The only other PSA I could give is that last week, during the breakout session, we had a good conversation about high availability, and one of the central themes, which I think is worthwhile to surface to this group, is that I don't think self-hosting needs to be a prerequisite. We could potentially use kubeadm upgrade as a way of actually running jobs on a cluster in a controlled fashion.
B
That allows us to do a DAG-style upgrade of master components, and that way we could get past the circular feature problem we keep running into. We could just say we can still have a highly available control plane in an upgradable fashion, but sans the self-hosting, and make it clean, right? And then we don't have to go into these weird secret-checkpointing conversations over and over again.
B
It would basically be a blocking flow from the main kubeadm UI, where you'd say kubeadm upgrade and it would first query the cluster for its configuration and figure out that you have N masters, whether that's 1, 3, or 5; 7 or 9 is crazy, but whatever is present. Then it would basically use kubeadm to submit itself as a pod on the cluster, with host privileges.
B
It would then go to that machine and do the upgrade just like we do today, right on that host machine, then back off and wait, come back when it's done, and then go on to the next master. So kubeadm would be a blocking, sort of DAG-style flow and, for lack of better words, imperative, right: it would be A, then B, then C, then D. But the one gotcha that we've talked about is: what happens if one of them fails in the process?
B
So if you've got a master quorum of 5 and you're three in and it fails, then you'd have to do a graceful unroll. This is basically an easier way to deal with it without having to think about self-hosting or about checkpointing secrets; it's just using static manifests. There's a little bit of complexity there, but I don't think it's that complicated, and it has a very consistent feel with using kubeadm as a job to do its own upgrading, which it already does today.
A
OK, so it sounds like what you'd do is upgrade machine one, then machine two, then machine three. One thing that folks have pointed out in the past is that that's not actually the right way to do HA upgrades; the right way is to upgrade the components horizontally across the replicated masters. So you upgrade all the API servers.
B
I still think you can self-host controller managers and schedulers in an HA way; I think that's totally legit and works fine. It's the API servers that are the thorny problem, right? As long as the API servers and etcd can come back online, that's the whole thing: the rest of the bits will come back online, right?
A
What you could do then, assuming you had that level of self-hosting, is orchestrate the component upgrade horizontally, again via sort of the kubeadm upgrade command, right? It could look at the cluster, discover that it's replicated, do a rolling update of each of the components that are self-hosted, and then schedule a pod that does the coordinated upgrade of etcd and the API server.
B
Yes, kubeadm would have to do the right tainting, or have the right tolerations, in order for it to land on the master nodes, and it also needs host privileges. So it's all predicated on the fact that you have admin ACLs, which you'd have to have anyway to do this, right.
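To make those scheduling mechanics concrete, here is a minimal sketch of what such a self-submitted upgrade pod could look like. The image, pod name, node name, and command are invented for illustration; the actual kubeadm implementation is not specified in this discussion.

```yaml
# Hypothetical sketch of a kubeadm-submitted upgrade pod.
# Image, names, and command are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: kubeadm-upgrade-master-1
  namespace: kube-system
spec:
  nodeName: master-1            # pin this step of the DAG to one master
  hostNetwork: true             # host privileges, as discussed
  restartPolicy: Never          # run once, like a job step
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule          # the toleration needed to land on a master
  containers:
  - name: upgrade
    image: example.com/kubeadm-upgrade:v1.10.0   # hypothetical image
    command: ["kubeadm", "upgrade", "apply", "v1.10.0"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: manifests
      mountPath: /etc/kubernetes
  volumes:
  - name: manifests
    hostPath:
      path: /etc/kubernetes     # static manifests live on the host
```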
A
So
anyway,
we
were
talking
before
about
about
getting
G
K
each
Oh,
dr.
Q,
Batman
and
I.
Think
this
would
be
a
place
that
would
be
hard
for
us
to
adopt
to
that
particular
workflow
for
upgrading
H,
a
masters,
even
if
we
could
upgrade
single
masters
using
cube
Evan.
So
we
should
think
a
little
bit
more
on
our
end
about
what
that
might
look
like,
because
we
should
be
able
to
reuse
some
of
the
features
just
maybe
in
a
slightly
different
way.
Yeah.
B
I don't think there's anything that prevents us from having this be a feature-gated item and having it available. I think all of the logic and code is already there, and this would enable folks to unblock, much like Fabrizio's work: Fabrizio isolated a piece of the puzzle, which is just the master, or just the API server; they call it master join. I don't know how we're going to change that name, but that's what it's called now.
B
But
yeah
that
was
one
piece
of
the
puzzle.
If
we
just
focus
on
just
the
upgrades
as
a
sliver
of
the
pie
and
block
it
by
a
feature
gate,
we
could
potentially
work
out
the
idiosyncrasies
between
the
different
deployments,
because
I
do
think
that
like
is
it
as
long
as
it's
hidden
behind
the
idea
of
upgrade
that
it
could
be
passenger
pigeons
and,
like
you
know,
snails,
passing
little
messages
and
it
wouldn't
matter
right.
A
You
can
separate
failure,
domains
right,
so
you
can
put
your
instead
of
putting
all
your
resources
in
the
same
sed
and
sharing
storage,
you
can
put
them
in
a
separate
database.
It
can
be
on
a
separate
host
or
in
a
separate
sed
cluster.
That's
got
a
different
failure
domain
all
right,
so
your
main
API
server
going
down
doesn't
take
everything
else
down.
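For a concrete flavor of that separation, the main kube-apiserver already lets you point individual resources at a different etcd via its --etcd-servers-overrides flag. Below is a hedged static-pod fragment; the endpoints and image tag are made up for illustration.

```yaml
# Fragment of a kube-apiserver static pod spec; endpoints are illustrative.
containers:
- name: kube-apiserver
  image: k8s.gcr.io/kube-apiserver:v1.9.3
  command:
  - kube-apiserver
  - --etcd-servers=https://10.0.0.10:2379          # main etcd cluster
  # Keep events in a separate etcd so its failure domain is isolated:
  - --etcd-servers-overrides=/events#https://10.0.0.20:2379
```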
A
Let's see, I think there were a couple of other things that we liked about them, but a lot of those things are actually on the future roadmap for CRDs; they just don't exist today. So I think that was his point: the things you care about should be coming to CRDs, so you should be using CRDs in the future.
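For readers who haven't used them, a CRD is just another API object you post to the cluster. Here is a minimal sketch using the apiextensions v1beta1 API that was current at the time; the group, kind, and plural are hypothetical examples, not anything from this meeting.

```yaml
# Minimal CustomResourceDefinition sketch (apiextensions v1beta1, current in 2018).
# Group, kind, and plural are hypothetical.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match <plural>.<group>
  name: clusterconfigs.example.k8s.io
spec:
  group: example.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: clusterconfigs
    singular: clusterconfig
    kind: ClusterConfig
```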
A
Last week it was the first chunk, so it's easy to start at the beginning and watch it, but my video editing skills are probably not good enough to upload chunks. I don't know; if I upload the whole thing to YouTube and that makes it easy for someone else to cut out a clip or link to a sub-segment of the video, maybe that would be useful.
D
I think the thing is that we want to get to the point where every piece of code that is in the Kubernetes organization rolls up to a SIG. So the question was something like: kops, is that a SIG AWS thing? Is that a Cluster Lifecycle thing? An argument can be made either way; I think it's a matter of the folks leading the subproject and the SIG sort of coming to an agreement in terms of who's going to take ownership.
A
Both kops and kubespray had SIG Cluster Lifecycle be their sponsor during the incubation process, back when we had the incubation process. And I know that kops in particular is trying to be more than just AWS, right? They've got code in for GCP now, and I think they're working on code for other environments too. So while they're very active in the AWS SIG and have a large user base on AWS, Justin's a frequent contributor to this SIG also, yeah.
A
Yeah
I
also
had
Justin
put
the
cop's
office
hours
on
our
sig
calendar
too
yeah
I'm
trying
there
are
probably
other
other
things
that
would
potentially
fall
under
us
along
those
similar
lines
that
in
the
past
we
haven't
really
owned.
But
you
know
sort
of
technically
would
roll
up
under
our
purview,
yeah.
A
Yeah, that's an interesting point, because a lot of the subprojects you mentioned just have their own meetings, right? Kops has its own meeting, Cluster API has its own meetings, kubeadm has its own meetings. So in those cases we would probably appoint people to those meetings rather than to the main SIG Cluster Lifecycle one, yeah.
D
But
I
think
some
of
that,
though,
is
that,
like
you
know,
those
meetings
are
charter.
You
know
it's
either
like
a
working
group
like
the
the
cluster
API
stuff,
or
you
know
you
know
I
think
there's
this
question
of
like
do.
We
want
to
have
those
actually
listed
in
a
place
where
people
can
find
them
and
get
involved,
so
you.
A
So I updated the SIG page in the community repo, and it lists all the breakout meetings, so those are all there. If you go to our SIG meeting notes, they're all listed at the top of the doc; in addition, the cloud provider refactoring stuff is listed atop the doc, because we had a number of folks that were involved in that too.
D
Yeah, and I'm working with Contributor Experience on this: we're likely going to take the community repo and turn it into its own website, so that it's more discoverable and you can search for it; it's prettier than just a pile of markdown. So hopefully we'll turn that into something that's more discoverable.
G
Yeah, sorry for pushing out the last note. There have been a couple of issues specifically in kubernetes-anywhere that are causing some tests that have bled into the mainline Kubernetes codebase to fail. While we can update them, there was a PR that was closed that was supposed to increase the kind of CI we run against kubernetes-anywhere, because at the moment you have to make changes over there, take someone's word for it that they tested them on some suitable cluster, and then bring the SHA over into Kubernetes.
G
So, to start those tests: the CI PR that was supposed to run performance tests for every PR against kubernetes-anywhere has been closed in favor of moving towards the Cluster API stuff. But when I showed up to the Cluster API meeting and brought this up, it sounded like kubernetes-anywhere would really be a consumer of the Cluster API, not necessarily replaced by it full-stop.
A
Could you link the PR, either in the notes or in the chat? Yes, so to answer a couple of your questions: the plan for kubernetes-anywhere is that it's sort of deprecated and on life support, because, as you mentioned, it's running our tests at the moment, in particular the kubeadm tests. That's all around the topic you raise: kubernetes-anywhere exists because we needed a way to bootstrap the actual virtual machines to run the tests on.
A
On
top
of
his
cube
admin
punts
on
machine
provisioning,
I,
don't
think
currents
anywhere
will
be
a
consumer
of
the
cluster
API
I
think
it
will
just
disappear.
I,
don't
think
anyone
is
particularly
excited
about
maintaining
that
code
going
forward
and
we'll,
hopefully,
just
get
rid
of
it
once
we
can
run
the
tests
on
top
of
something
else.
A
Closer
API
stops
not
quite
ready
to
replace
to
raise
anywhere
for
tests.
So
I
think
that
it's
perfectly
reasonable
to
make
the
testing
story
a
little
bit
better
to
decrease
our
sort
of
maintenance
burden,
so
we
can
actually
focus
on
building
the
replacement
instead
of
focusing
on
firefighting
existing
tests.
So
thank
you
for
bringing
that
up.
I'd
be
happy
to
escalate
the
PR
and
try
to
get
it
reopened
and
merged.
C
Generally, we have something that uses kubeadm in GCP, and we obviously test on it, but perhaps we could fit into that. That thing uses Ansible; it's basically Ansible, although it could actually provision the VMs via Terraform or whatever you like. But I don't know what the actual technical requirements are.