From YouTube: Kubernetes SIG Cluster Lifecycle 20180717
Description
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.9i0mvlglyeh5
Highlights:
- kubeadm / package support for other versions of Ubuntu
- Update on kube-deploy repository
- Transfer of ownership to kubernetes-sigs for https://github.com/detiber/cluster-api-provider-aws
- sig-charter
- State of testing and the cluster directory
- SIG sessions at Kubecon
- kubeadm-dind-cluster update
B
Yeah, so someone was asking about when the release was going to support more than just Xenial. It looks like in the release repo we only have debs defined for one particular version of Ubuntu, and they were asking if we had planned support for other versions in the future. I'm not really sure what the state is; I don't remember exactly how the deb packaging works for Ubuntu, so anybody who knows would probably have better ideas.
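For context, a rough sketch of why only Xenial shows up today: the upstream apt repository publishes a single kubernetes-xenial channel, and installs on other Ubuntu releases typically reuse that same entry. The repository URL and channel name below are the commonly documented ones from around this time, given as an assumption rather than something stated in the meeting.

  # Assumed apt setup for the upstream Kubernetes debs: a single
  # "kubernetes-xenial" channel is published, regardless of which
  # Ubuntu release the host actually runs.
  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl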
C
A request was made for the upstream containers as well; from my understanding, they wanted to update the versions that are published inside the repository. I don't know who actually owns that, but I don't think there's a problem with pushing the change. I don't know whether we should own some of that release, or whether it should be pushed over to SIG Release.
C
I don't know if there's an owner, though. The problem with a lot of the SIG Release stuff is that you toss the ball up in the air and then you watch it hit the ground. I don't know if there's anyone there to own that change, and I don't know how we get into the priority queue for that SIG to manage it.
A
Cool, so next I put a little FYI on here about the kube-deploy repository. A long time ago, when we first created this repository, the idea was to make it sort of akin to the contrib repository for Kubernetes, but for cluster deployer code, and that doesn't seem to be working out any better than contrib did. It isn't something the SIG wants to support as a dumping ground for random bits of cluster deployment automation either.
A
So last night I removed the Cluster API code, since it had been moved to its own repository and there was no reason to keep it in two places; people were sending cleanup PRs for it in the old location, which nobody was looking at. That left only one thing in kube-deploy, which is the image builder, which is used by kops.
A
Yeah, and for a long time it had a lot of code that was not being maintained, which we ripped out when we started putting the Cluster API code there, and basically the only thing in there that was actively being used was the image builder. So I think it became sort of a place for people to stick stuff and then never come back to, which is also something the SIG doesn't want to support.
A
I definitely think we should rename it to be something that actually describes what it does. The PR I sent you this morning, Justin, puts the readme at the top level, just describing the image builder project, moving it up one level. So if it makes more sense to have, say, AWS in the name, that would be great, although I did see some GCE files in there too, so I'm not sure how AWS-specific the bulk of the code is.
E
What it's building is a generic image that is suitable for use on AWS. It's used by kops by default, but it could be used by any AWS installation, and it can also build for GCE. It actually uses the official Debian image builder, so it could build for anything. So maybe we should just leave it where it is and just rename it.
E
I think Wardroom, unless I'm wrong, uses Packer, so that's a no-go. Does it use Packer? Yeah, so we even rejected that for security reasons; it's just not appropriate for images that we redistribute. It's fine for people using it internally, but it's not really secure enough for this use case. Okay.
F
Oh yes, that's a real person. The only changes that were made were specifically to the cluster-api-provider-aws maintainers: I made sure that David Watson from Samsung was on there, Nishi from Amazon, and a couple of Red Hatters were added as well, since as far as I can tell those are all of the existing provider implementers so far. If there's anybody else, I'm happy to add them to the list as well.
C
Yes, now that we've rallied on the template and have examples of canonical charters that we consider to be really good, with SIG Node and SIG Auth as the examples, we should start to craft a SIG Cluster Lifecycle charter, and I'm looking for folks who would want to help with that effort. I do think that, as a lead, I should probably help. My time is limited, though, so if there are other folks who would like to help craft it, I'm happy to work on it together with anybody else.
A
From the last couple of KubeCons, I think we have a reasonable idea of the scope and so forth to stick in there once the template is in place, so I think getting the first pass done won't be too bad, and then it will be a matter of iterating on the details and the edges where we define the boundaries of our scope, which is probably worth discussing next week. So I think what Tim proposed is reasonable: try to get that first pass done before the meeting next week.
C
So yes, this has resulted in many, many sort of interrupt-driven issues across several release cycles, and I'm sick of it. For many cycles we have talked about the ideal world we would want to get to: everything is built with Bazel and from the main repository, we no longer have a separate release repository, and we're using Cluster API to provision, because kubernetes-anywhere is a dead end, right?
C
We want to have well-defined jobs, maybe even a PR-blocking job for kubeadm deployments. We have put off these issues, and we have kept maintaining this large corpus of technical debt, and it becomes an albatross for anyone new to the SIG, because it is very difficult to understand why and how it's structured and why some of these issues occur and exist: because of the split, because of the history, and all the other stuff that's there.
C
To put it bluntly, we want to have a throat to choke, so that when issues arise we can find those people, because right now ownership of some of the issues is not clear. The second part is release artifacts and build artifacts, and who owns and maintains those. We need to have canonical ownership of those artifacts if we are going to be responsible for some of that detail. So I guess what I'm trying to say is: we've always punted on these things, and at some point in time, which is now, I'm declaring bankruptcy, right.
C
We need to start to address them, but not just within one organization; we need to address them broadly, as a SIG. I know that in the past Google took a lot of ownership stake in this, but in order to fix this problem it needs to be federated amongst many stakeholders, to make sure that we actually solve the problem and address the concerns of the broader ecosystem. So I'll pause there for a moment and see if other folks concur, and what thoughts they have on it.
G
I was talking to someone today about this problem to get some ideas. I don't understand why we should maintain cloud-provider-specific tests. To my mind, with the Cluster API we can move the ownership of those tests to the cloud providers, and if all the cloud providers fail, then it's a kubeadm problem. That's how I see it, and maybe that's not how it's going to work.
D
I think, for the baseline you were talking about, the one that gives the thumbs-up or thumbs-down, we need pre-provisioned things, not Cluster API things: something where there is a guarantee from the test infrastructure that these VMs, or whatever containers within containers, are provisioned and ready for you to use, and then things don't change underneath, as much as possible. That is what we would test kubeadm with, for a go/no-go signal on the PRs themselves.
D
Then there would be the second level, where we would use Cluster API and then do GCE, OpenStack, vSphere or whatever. That would be the second level, and that is what I think we should aim for. I don't think we should aim for going through Cluster API first, because there are no guarantees there either, like I was talking to Chuck about.
D
There are also scripts in Cluster API where, when the VM starts up, a script runs a bunch of things, and if we have to maintain that on our side, I don't think that's fair. So I think we need two levels: the first level is something where we know things are up and running and we test kubeadm on top of it, and the second level would be doing what we are doing now, but with Cluster API instead of kubernetes-anywhere.
B
Just adding a suggestion over here: I think the Cluster API stuff looks really cool, and I like the idea of having more federated cloud providers, but for testing I do think we should wait and see how that pans out. There are a lot more moving parts, particularly with the whole pivoting from a local bootstrap cluster, which I think is going to cause some interesting issues in CI, yeah.
A
Yeah, on that point: it's been on my list for a while to start deleting the cluster directory from kubernetes, and we were making slow progress towards doing that. Mike Danese and I spent some time a couple of release cycles ago ripping out all of the old unmaintained code and a bunch of providers that were no longer being maintained, sort of pulling it down to just the stuff that we actually needed to run the e2e tests, and the next step after that, like you said, is to start replacing that with the Cluster API.
A
So it's on my list for this release cycle to try and start that process, like getting the first one or two e2e jobs running on top of the Cluster API. I think it might be a little ambitious to try and get everything moved over in the next release cycle, but I think we can start moving things off.
A
I don't know, I haven't tried yet. I think Ben had a couple of interesting points. The way clusterctl is implemented right now, with minikube, I don't know that that's going to work at all in CI. So (a) we need to get some miles on it, just running it over and over and over, and make sure it's as reliable as the kube-up code is right now, and (b) we need to make sure it can actually run in a CI environment.
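As a rough sketch of the flow under discussion, assuming the cluster-api tooling of that era: clusterctl first stands up a local minikube bootstrap cluster, creates the target cluster through the provider components, then pivots the management components into the new cluster. The flags below are recalled from the project README of the time and are illustrative only.

  # Illustrative clusterctl invocation (flag names assumed; verify against
  # the cluster-api docs). The local minikube bootstrap step is what makes
  # this awkward to run reliably in CI.
  clusterctl create cluster \
    --provider aws \
    -c cluster.yaml \
    -m machines.yaml \
    -p provider-components.yaml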
C
I think there might be some hoops to jump through there, but I think it'd be possible, and it would give us a better signal before we get into these situations where the periodic jobs only bite us later on. And if we had single-node deployments, it doesn't need to be multi-node provisioning, it might be beneficial for us to hijack the machine the job is already running on and make it introspectively look like the node it's running on.
D
So basically it need not be virtual machines themselves. Like Robert was saying, right now we have a few options in test-infra where we are running containers within containers, so the single node could be part of that. We do have e2e running in a DinD environment right now, yeah.
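For reference, the DinD option mentioned here is kubeadm-dind-cluster, which runs a kubeadm cluster inside Docker containers. A minimal usage sketch follows; the repository location, script names, and subcommands are assumptions based on that project's README, not details given in the meeting.

  # Assumed kubeadm-dind-cluster usage: bring up a containers-in-containers
  # cluster from a pre-built image, then run the e2e suite against it.
  git clone https://github.com/Mirantis/kubeadm-dind-cluster
  cd kubeadm-dind-cluster
  ./fixed/dind-cluster-v1.10.sh up   # pick the script matching the desired release
  ./fixed/dind-cluster-v1.10.sh e2e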
B
I'm actually probably going to be coming back to the SIG later this quarter to discuss that some more. I am exploring what we have there for the local solutions, because it's something that we can manage really well and keep flake-free really well; it avoids more things going out over the network, and everything that goes out over the network flakes.
C
Part of the Testing Commons effort has been to provide that abstraction layer, and we've been talking about it for a long time. We have an initial version; it's very, very rough and in very early days, but the idea was to provide an abstraction boundary for us to write the right tests against, one that would do the pseudo-provisioning, for lack of a better word, of a test cluster that was kubeadm-based.
C
The whole long-term arc of this was that people want to create controllers or their integration pieces, and they want to leverage a localized environment that pre-provisions something that looks like a kubeadm-built cluster, without having to go through and actually do the provisioning, because that can be very expensive if you're doing it in CI. So that's the long-term arc of what we've been doing there, but we are nowhere near ready for primetime.
B
You know, from a testing point of view, I am looking at the existing solutions; I'm iterating a little bit on something Quinton built previously. We're very interested in having the local option just as some coverage, and that might help, but longer term I think we still need actual cloud providers; there are a lot of other things that happen there that don't happen in a local solution.
F
Well, I was just going to say: the biggest pain from my perspective right now is just trying to reproduce test failures. Not being able to actually reproduce them without waiting for the next triggered run is quite painful right now, and having to create a local environment that can even run the tests in some cases, on a kubeadm-deployed cluster, is very painful right now. We hear that.
C
There's one other major problem I want to address here, and it's at least part of why we have been raising this and talking about it: the signal gets raised from release, and everyone gets priority-level interrupted right when it's the most critical, urgent time. Could we at least send an email to the SIG whenever things have been consistently failing over some time interval? That way we can act on it. Yes.
A
Yeah, if we do want to send that to the SIG mailing list, we should definitely send it to the mailing list plus something-or-other, so people can easily filter it. It also might start a conversation about whether the mailing list is for humans or for automation; I know inside Google we have that conversation a lot. But mainly, let's start to get busy.
B
I've been discussing this some with Aaron Crickenberger (spiffxp) about potentially reusing or creating per-SIG mailing lists just for this, so that smaller subsets of members who are actually paying attention to these failures can subscribe and unsubscribe as they want. In the meantime, you can also set multiple emails, so you can just add your own email if you want this, and you can add it plus-something and filter it.
B
I had looked at reusing that. Currently there's this GitHub alias, something like kubernetes-sig-testing-test-failures; it's a team on GitHub, and mentions of that team get relayed to that mailing list, but they're not in much use. There's just some discussion of whether we should reuse that mailing list, or get rid of those and create a new one just for this test-failure stuff, one that's actually just for automation and not for humans pinging people, yeah.
A
This was supposed to be a consistent pattern across all SIGs so it would be easy to pull people from other SIGs into conversations. So if SIG Testing saw a test failure, they could pull in just that group, and not all of SIG Cluster Lifecycle; similarly there's one for PR reviews and API reviews, etc., for each thing. But I think there's been some conversation recently about how these groups are not really being used very well, so I'm not sure we should reuse that group.
B
I mean, I've been starting to reach out and see if anyone's interested in that, but in the short term, if you want something now, you can subscribe yourself and just have a mailing list entry in the testgrid config. Longer term I'd really like this to be a standing thing: each SIG has one place where it gets automation alerts, and people can subscribe and unsubscribe as they want.
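As an illustration of the kind of entry being described, here is a hypothetical testgrid dashboard tab with an alert address attached; the field names are recalled from the kubernetes/test-infra testgrid configuration of that period and should be treated as assumptions.

  # Hypothetical testgrid config snippet (field names assumed):
  dashboard_tab:
  - name: kubeadm-gce
    test_group_name: ci-kubernetes-e2e-kubeadm-gce
    alert_options:
      # any subscribable or plus-addressed, filterable list works here
      alert_mail_to_addresses: "kubernetes-sig-cluster-lifecycle@googlegroups.com"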
A
Tim, what do you think about sending it to the SIG mailing list for now, and we'll see what the volume is and whether people complain? I think that's going to hit the widest audience and also maybe be the most encouragement for people to fix things, and then if that volume is too high, or if we see some patterns emerge across SIGs, we can put it somewhere else. Sounds good.
A
Sorry, but I was going to say that I think the goal here is also to get a signal within, say, a week. We don't want to wait until the release comes out and then get notified two months later. So it doesn't have to notify on the first test failure; it could be a little bit lazier than that and still get us the information we need much faster than it does today.
A
Awesome. Anything else for that one, Tim, or should we move on? I'm good. Awesome, so next: this is just a follow-up to last week, where we talked about having some SIG sessions at the coming KubeCons. I signed our SIG up for an intro and a Cluster API deep dive in Seattle, and Tim signed up for a kubeadm deep dive in Seattle. So far we don't have anybody who's volunteered to run an intro or a deep dive session in China, so I also wanted to call that out.
H
I think it may be interesting to say something about the kubeadm-dind-cluster update. The important news is that kubeadm-dind-cluster now supports Kubernetes 1.11; there is a pre-built image, so it is not just for building Kubernetes from source. Also, there was a problem with the current Kubernetes master, noticed by Lucas: it was using an older kubeadm config version. It's now fixed, so you can use kubeadm-dind-cluster against Kubernetes master again. And an important feature that was attempted at the start of the year is now in the works again.
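To make the update concrete, a short sketch of the two modes mentioned, with script names and environment variables assumed from the kubeadm-dind-cluster README rather than quoted from the meeting:

  # Pre-built images: use the fixed script matching the release.
  ./fixed/dind-cluster-v1.11.sh up

  # Building Kubernetes from source: run the generic script from inside a
  # kubernetes checkout and ask it to build the components locally
  # (BUILD_* variable names are assumptions).
  cd $GOPATH/src/k8s.io/kubernetes
  BUILD_KUBEADM=y BUILD_HYPERKUBE=y ~/kubeadm-dind-cluster/dind-cluster.sh up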