From YouTube: Kubernetes SIG Cluster Lifecycle 20171128
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.o52g9rnenj3v
Highlights:
- Reminders about upcoming KubeCon sessions
- Next week's meeting is cancelled due to overlap with the contributor summit
- 1.9 release status
- 1.10 planning session using the new Kubernetes Enhancement Proposal (KEP) process
A
Hello and welcome to the SIG Cluster Lifecycle meeting for Tuesday, November 28th, 2017. Today most of the meeting is gonna be dedicated to planning for 1.10, led by Jaice, but we had a couple of announcements first related to KubeCon, which is coming up next week.

So the first thing, and I think this was covered last week, but just a reminder: Lucas and I are giving a SIG update for SIG Cluster Lifecycle, which is half an hour, I believe, next Tuesday afternoon. And then I'm giving a talk with Kris Nova about the Cluster API, and Lucas is giving a talk that's a deep dive into kubeadm. So if you guys are interested in those... you know, most people on this call probably know enough about all those topics, but certainly, if you want to come to the SIG update, where we hope to have some interaction with other people who maybe aren't regular attendees, or if you want to come support us at the other talks, that'd be great; we'd appreciate it.

I think that's about when the talks end, or slightly earlier. If that works for everybody, I'll set that up; if not, please ping me in chat or in Slack and suggest a different time. I think, you know, at that time most people are probably still at the conference and haven't gone home yet, and even if people show up late they'll be there, so hopefully that'll work for people.

Yes, the other thing: I was going to propose cancelling next week's meeting, which is right smack in the middle of the contributor summit, which I know a number of people here are going to be at, including, I think, all of the leads. So unless somebody else wants to run that meeting, I propose canceling next week. And then I want to do a very, very quick check-in on the 1.9 release, because the 1.9 release is scheduled for the week right after KubeCon, and especially if we're not gonna have a meeting next week, this is sort of our last chance to talk about hot issues coming up for the 1.9 release before it's too late to fix them. So I think we did a brief check-in last week, as we were in code slush, right before code freeze. But if there's anything you know about for 1.9, please mention it quickly now and add something to the meeting notes.
B
So the plan the first time was making self-hosting beta-level support in 1.9, but we're not gonna enable it by default due to lack of testing cycles, as this coding cycle was basically too short. So it'll be beta, but not enabled by default like we had thought initially. And yeah, on the checkpointing-secrets thing, well, that's up for debate still, I guess. Yeah, Justin, that's a good point.

Yeah, I mean, we'll enable it now as an alpha feature, the checkpointing thing, but self-hosting with kubeadm is beta overall in 1.9, and hopefully, well, hopefully self-hosting will be the default in 1.10 and the bootstrap checkpointing thing will be beta by then. Anything else?
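(For readers following along: a minimal sketch of how these gates could be switched on explicitly while they stay off by default. The gate names `SelfHosting` and `StoreCertsInSecrets` are from the kubeadm feature gates of this era and are an assumption here; check `kubeadm init --help` on your version for the exact names.)

```sh
# Opt in to self-hosting at init time; beta in 1.9, but off by default.
kubeadm init --feature-gates=SelfHosting=true

# Additionally store control plane certs in Secrets so they can be
# checkpointed too (alpha, and still under debate per the discussion above).
kubeadm init --feature-gates=SelfHosting=true,StoreCertsInSecrets=true
```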
D
A quick question about that: is there anything in the inherent structure of the way the self-hosting works where not having checkpointing enabled can lead to failure scenarios that are unrecoverable in terms of the control plane state and stuff like that? Because I know there was some concern about that before. So from a cluster operator perspective, that's something I would be interested in knowing.
C
If you are self-hosting a single-node cluster, you do not have checkpointing that includes secrets. We have checkpointing that does not include secrets, because we didn't update the profiles for everything, right, and we kind of went around in a circle debating whether or not it's a hard requirement. So if you don't have checkpointing enabled and you're running a self-hosted single-node environment, if, for whatever reason, the node restarts, it will not recover, right.
B
Basically a lot of manual testing, of course, but also automated testing, so we'll probably set up the automated testing suite tomorrow or something. I don't know when; Tim will send a small PR with, like, the enabling thing, and, well, then we have it in CI, but it's still just a week or so before the cut at that point. So it's too little, in my opinion, to make it the default, but yeah, probably in 1.10, so we have like a full decent cycle for full testing, I mean.
C
I really don't want to document it unless we turn on self-hosting, because I don't even want people to know it's there, right. Because if you turn it on in this bass-ackwards fashion, if you tinker with it, you'll probably get yourself into more trouble than what it's worth, because the whole purpose of us doing this has nothing to do with that feature itself, no; the whole purpose of doing this is all about self-hosting.
A
I agree with that; that's fine. I don't think we want other people testing it out, right. The point was for us to use it, not to make a general-purpose feature, which is why I renamed it from checkpointing to bootstrap checkpointing. So I'm happy to not document it, and if somebody really digs and finds it, that's okay, but we aren't gonna put it out there as something we expect people to be using. Sounds...
B
Good. We'll update the features issues, and we still need documentation PRs for a couple more of the features in the features repo. Teresa has made the great man-page reference PR, somewhere that I can't seem to find right now, but anyway, that is to be merged today, maybe, and after that we can update... there are dependencies on that, so we can update the general kubeadm docs after that, because the file structure basically changes in a, well, substantial way. But yeah, I think, well.

And a tracking issue for all the issues in the kubeadm repo, that is. I could probably create it, if nobody else wants to do that, basically to keep the communication channels with the release team the same way, in a way that the release team can find easily, with the 1.9 milestone in the core repo. Then we have, yeah, we have one large issue still, and that is rolling back etcd, which I thought worked between minor versions, but seemingly not, which I, I don't...
B
Just, like, what should we do with it? So the user upgraded from 1.8 to 1.9 and they got an upgraded etcd which, for some reason, didn't work. We took a snapshot before, and we now want to roll back. Should we just leave the user as is, like: well, we tried to make this thing work, we tried to upgrade it to 3.1.10, but seemingly it didn't come out cleanly, so fix the thing, and exit 1? I mean, that's probably what we have to do.
C
Just doing the naive thing seems reasonable, and document a more sustainable path for production-grade environments, like: do a snapshot, plan to restore in case you're doing this, here's what things have changed. Because that's a generally good practice anyway for people to do before they do an upgrade, and I don't think we actually spell that out as part of our documents yet, I...
B
I think there's something like that as a general good practice, something like "back up your cluster", but we don't spell out how to back up the clusters. We could just leave some etcdctl commands there, sure. So kubeadm will take care of the snapshotting before upgrade, though, but it won't restore everything.
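(A minimal sketch of the kind of etcdctl commands that could be left in the docs, assuming the etcd v3 API; the endpoint, TLS flags, and paths below are illustrative placeholders, not kubeadm defaults.)

```sh
# Before upgrading: take an etcd snapshot.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  snapshot save /var/backups/etcd-pre-upgrade.db

# If the upgrade goes wrong: restore the snapshot into a fresh data
# directory and point etcd at it.
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-pre-upgrade.db \
  --data-dir=/var/lib/etcd-restored
```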
A
Right. I was hoping to keep the 1.9 update short. I think we could talk about it for the rest of our hour, but I do want to give as much time to Jaice as we can, so I'm gonna call this conversation now. If people want to keep talking, we can extend our meeting past 10:00, or, if folks are interested, you can dial into a separate meeting. But I think we should turn it over to Jaice to talk about 1.10 now and do some forward-looking planning.
D
Sounds good. So the process moving forward, past 1.10, is gonna be that essentially the things that get moved into Kubernetes, the ones that might have been called features or proposals or all those things, are going to be called KEPs, and it's done through the Kubernetes Enhancement Proposal process. Essentially there's an architectural initiative around how we manage those, and really it's just a fancy way of saying: there's a markdown document that says, here's a big chunk of something we want to accomplish.

It's going to take more than one release cadence to do it, because it's vague and the releases are short and all that, and the KEP basically allows us to always sort of have a North Star that we're pointing to when we do work. So when I'm implementing a feature to do checkpointing, I know that it ties into the greater operability story for kubeadm, and self-hosting as a viable alternative for people, and that ties into, you know, the greater operational story of Kubernetes.

So we want to ferret those out before we start doing the work, so that we're not wasting time and effort on things that aren't going to get over the line. So hopefully this stuff is pretty simple and straightforward. The process relies on somebody in a SIG sort of being the champion of the product planning, and my intent is that SIG PM or somebody should be supplying a pool of people to help you do that, hopefully somebody that attends the SIG meetings on a regular basis.

If somebody in this SIG has a passion for product management and project management, I would love to have them sign up to be the person that's sort of keeping their eye on the ball, to help the SIG facilitate these meetings and do updates wherever they need to be done in terms of Kubernetes. Yeah, I'm doing it right now because I'm passionate about this, and I have a deep love of cluster lifecycle work and also the mission of cluster ops in general. So, in the future...

So this process is essentially trying to identify major initiatives that are ready to be worked and committed to. So we're gonna go through a document, and essentially we're gonna say: what are the proposals that we know are being worked, and then what are the things from those proposals that we want to try and get done in one cycle? Again, not too complicated. I've described the KEP as an iceberg, and the issues that we want to work each milestone, each release, are sort of the ice cubes that we chip off and deliver.

That, hopefully, is useful. And essentially what we want to do, and some of this will have to be populated later or maybe offline, is know what the proposal is, if there is one, and there may be multiple proposals in this thing. So I want to sort of outline at the top level: what are the big proposals? Are those the working-group things you've organized, or is it mainly kubeadm work, or whatever? What does that look like from the portfolio of things you're working on?
A
Are KEPs nested within each other? So, like you said, they sort of roll up. So we have: the mission of cluster lifecycle is to, you know, basically make the installation experience and the upgrade experience better. Toward that mission we're building kubeadm, which is a tool. So is kubeadm a KEP underneath the KEP of the mission to create a better experience, and then underneath kubeadm do we have sub-KEPs of "make the install go cleanly", "build HA", "do self-hosting", "build upgrades"? Like, where do we sort of...?
D
So the SIG mission doesn't really correspond to a KEP. The idea is that you could pull out your deck of cards, your KEP cards, the things you're working on, and if you read them those should all be like: oh, that's definitely empowering this primary mission, that's definitely driving that part.

So it's not really... that's sort of the portfolio level of: we want to accomplish this grand mission, and here's the KEPs that will do that. Now, because cluster lifecycle has this really interesting thing where you're managing essentially kubeadm as a product, Kubernetes itself will have KEPs too. So there will be things like, you know, enhanced RBAC or the auth system or whatever that is, you know, the certificate authorities and all that, but those are going to be specific to Kubernetes.

There may be cases where you're doing work specifically in Kubernetes that doesn't really have anything to do with kubeadm-specific work; that would be a KEP that lives under the Kubernetes org. So in this case, what we want to do is identify what the sort of products are that you're working on. So kubeadm is obviously one of them, plus the things in Kubernetes you're working on that would be served in this realm, and then we just outline the work associated with each of those things.
C
The KEPs themselves actually have, you know, pointers to parents inside of them, okay, so we should fill out the KEPs to point to each other. The problem is, we don't have... you need to transform a lot of the current proposals into the KEP-style format in order for the whole tree to make sense, right. Otherwise it's kind of like you have this partial tree that links to these old documents that are not all maintained over time. And the purpose of the KEP is not to be something you create once and then magically never deal with again; the KEP itself has sections for updates for each individual cycle, so that way it's a live, living document over time that actually reflects reality. Because if you actually go through the community repo right now, it's like a wasteland of stale ideas that either no longer apply or that have totally changed since the original proposal was written. Okay.
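(To make the shape of this concrete: a KEP at this point was roughly a markdown file along the following lines. This is an illustrative sketch only; the real template lives in the kubernetes/community repo and has evolved since.)

```markdown
# KEP: Self-hosted control plane in kubeadm

- Status: provisional
- Owning SIG: sig-cluster-lifecycle
- Parent: Dramatically Simplify Kubernetes Bootstrapping

## Summary
The big chunk of work we want to accomplish, spanning multiple releases.

## Per-release updates
### 1.9
Shipped as beta behind a feature gate, off by default.
### 1.10
Flip the default early in the cycle; bootstrap checkpointing to beta.
```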
A
And what about something like kops, which is already out of core but is part of the Kubernetes ecosystem? Do they track their KEPs in the kubernetes/community repo with everybody else's that are part of the core, or would they track their own externally, and only when they have dependencies that impact the core does something move out there? This...
C
That is a meta conversation that kind of tracks across both SIG Architecture and the steering committee. I don't think there's an answer for that question yet, and I don't think there's gonna be one until we resolve the major question, which is: what should live inside of the Kubernetes org, what should be pushed out, and where does it get pushed out to. Okay.
A
I
guess
I'm
wondering
technically
short
term,
like
we've
put
some
of
our
design
proposals
into
the
cube
admin,
repo
and
markdown,
to
sort
of
streamline
the
process
of
getting
them
merged
and
then
pushed
do.
We
want
to
do
the
same
thing
with
cube
admin,
specific
EPS,
to
sort
of
keep
our
ownership.
Or
do
we
not
worry
about
that
as
much
with
caps
and
just
stick
them
all
in
a
Thank,
You
rappel.
B
Right now we basically have just the design document in the kubeadm repo; I think the rest of them are mostly in the community repo, I mean, yeah. It doesn't really matter; I think that, in the interim, while we back this out, it's fine to have everything in the community repo under sig-cluster-lifecycle and then break out as we move out kubeadm and things like that. Also...
A
Actually, there's the parallel that Justin had started creating features issues for kops, with the goal of getting sort of release notes with the releases, so kops could be surfaced there for visibility. And so, if the release notes are gonna be built off the KEPs, then I think, you know, Justin and the kops folks are gonna have the same incentive to make KEPs work for kops, so that...
F
We're gonna follow, no matter what the state, which repo we live in or which org we live in, we're gonna follow the community's process. We're gonna have the CLA, we're gonna have the open thing, we're gonna have the code of conduct; we're gonna be a Kubernetes project in terms of standards. Whatever the steering committee decides, we're gonna do that.
D
I think you nailed it, Justin; that's exactly right. I mean, the idea is really simple. We want to define some value that we want to convey to the community, so they know what's going on. We want to provide enough detail that somebody who's a new contributor, or somebody who's a contributor that's interested, could look at one of these issues, one of those ice cubes we chip off the iceberg, and actually start working it, and have some visibility that it's being worked, as opposed to it just being off in the ether or not being touched.

So we're just basically trying to provide a way to track this work and give SIGs ownership, because having a centralized features-management process is just a huge anti-pattern that becomes a bottleneck. SIGs are the best equipped to know what should and shouldn't be a part of what they're working on; they're also the most equipped to detangle dependencies with other SIGs. We shouldn't introduce roadblocks and areas where people have to go through one another to get stuff done; we want to have as many effective parallel conversations as humanly possible. So I think that this is a step in that direction. Regardless of whether it's a subproject, or it's synchronized with Kubernetes, or it's a totally separate project, we still want to have those same trappings, just as Justin was saying.
B
Yes, I think so. What we have in the features repo today is basically one parent, "Dramatically Simplify Kubernetes Bootstrapping" or whatever, that Joe created one and a half years ago. That basically covers the kubeadm state as a whole, and we have these sub-KEPs, or proposals slash features issues, that cover things like self-hosting, the self-hosting feature of kubeadm.

We have the extensibility parts, config API things, and the HA work, and then we also have the bootstrap tokens, which are kind of not kubeadm-specific but a dependency for kubeadm, and also the bootstrapping checkpointing thing. So I think it makes sense to have the top-level "state of kubeadm" KEP, right, which is right now beta and will not be graduated to GA before we have all the sub-dependencies at beta or GA or higher, right, which...
D
For the purposes of this list, let's just stick with self-hosting for the moment, because that has different sub-issues that you want to work. I mean, to me, the bootstrap stuff is an issue that would be worked for that. The idea behind the section here that we're outlining for 1.10 is: what exactly is going to be delivered in 1.10?

If it's bigger than one release cadence, we'll just have to have issues that basically span releases, so you want to define the issues that can be delivered per release. So if kubeadm self-hosting has three things that you have to accomplish, and you know you're gonna get two of those three things done in 1.10, just do issues for those two, and then the third will follow.

So if we're gonna break down the self-hosting work that's gonna get done in 1.10, what are the chunks of work, what are those ice cubes that we can look at that would actually be delivered? What are the things that you want to accomplish in 1.10, specifically related to that?
A
For secret checkpointing, we need to convince SIG Node, and SIG Auth, that we should be checkpointing secrets; it's been pretty contentious in the past, yeah. And I guess, depending upon the investigation on point number one, it's either work with a couple of other SIGs to make that work, or we build a workaround, which is what CoreOS has done. Yeah.
B
I mean, this would put a dependency on a sub-KEP, right, of general checkpointing in Kubernetes. Do you want to create such a KEP? Or is that not, like, kubeadm-specific? Because what Tim built doesn't have kubeadm hardwired in there somewhere, so it's essentially the checkpointing primitive, yeah.

Right now, one thing I want to improve someday is that every time we make a change to kubernetes-anywhere, and we're gonna move off of it soon when the Cluster API comes along, but right now we have to basically ask a Googler in sig-testing to push the new images so we can update the testgrid. So every time... like, I did this yesterday, and the Googlers were really kind and answered, like, in a minute.
A
I think someone is working on trying to shuffle some of those images around so we can increase the number of people who can push to them. I don't think that's strictly a dependency for this task; I think it's annoying that we have to have a subset of people, which I don't think includes me, push those images. But I think there is a longer-term fix for that, which hopefully is tracked by somebody else's KEP, and which will just sort of streamline the process for us once it's done. Yeah.
D
When it comes time to do testing, whether or not you have those images could be a blocker, right; I mean, if you don't have the right images, that's gonna be a blocker at some point, yeah. Okay, what else, throw it at me here. What else... asteroids falling and hitting? I mean, what are the things that could put this work off track?
A
I mean, the other thing here is that this is sort of solving a known problem, because Bootkube and CoreOS have sort of paved the way for us here, in the sense that they have already built this. So we kind of know, at least, that there is a path to a solution, and we've been trying to build on what they've done and stream some of that work into Kubernetes core, like Tim was doing with the checkpointing. And I think we have some potential workarounds that we can build if we need to, or we can reuse their external checkpointer, yeah, exactly. I mean, it's the same thing we have with the DaemonSet surge updates, right; like, we have workarounds we can do if SIG Apps is not willing to add that to core.
D
Okay, so essentially what would happen is: this issue is gonna be an issue describing what needs to be figured out. There's gonna be some discussion, and the end of that discussion is going to be either: yes, we have buy-in from these SIGs, we're gonna move forward and execute; or it's gonna be a closed issue, this isn't gonna work, and we need to start a new issue that's basically "implement it in kubeadm" or whatever it is to make that work. In a nutshell. Okay, cool.

This is starting to feel like we're getting on track here. So what other dependencies are we aware of in this regard?
C
Just as a conversation topic: a couple of people have mentioned the fact that we have too many release cycles, and I think that is part of one of the conversations at the developer summit, which I think plays into this point that we keep on running into, that releasing quarterly has a number of problems.
D
The release cadence is too fast to actually accomplish anything, yeah. It feels like... I mean, just from my perspective, one of the things I'm trying to fix is that there's all this minutia and administrivia around these things, where you're spending, you know, one third of the release on administrivia. That is not cool. All right, anything...
C
We have 20 participants on the call, and I want to call out, like, you know: a couple of times we've talked about mentoring and getting people involved from the community, and we have 20 people on the call here, and that's a lot of people, but you have two people carrying the primary load for this. So, you know, if you're looking to get involved in the project, here is a prime opportunity.
D
All right, cool. So there's names here; that's better than we had 20 minutes ago. So, user-facing documentation: I assume that this is going to be, like, how do you actually leverage these capabilities and all that, so either a blog or documentation on kubernetes.io, or whatever that looks like.
B
Yeah, and we already have these things in the kubeadm reference doc, the reference guide and things like that, outlining that it's experimental. And this is one of the things that needs to change still, like, for the 1.9 docs: flip things that were alpha in 1.8 to be beta in 1.9, things like that. Cool. Yes, it...
F
It might be good to write the docs for the failure scenarios very early, because, like, I know the biggest cloud operators write their user-facing docs first, write the document first. The failure scenarios are the real tricky thing here, and, well, I think they feed into the design, yeah.
D
I'll let y'all think about that. We definitely need somebody's name there, because that's how we actually get stuff done. So, do you think this work is going to need any new blocking tests, release-blocking tests, or... yes, merge-blocking? Okay, cool.

We need at least one point person on that, because the docs person for the release is going to want to have somebody to talk to, or, you know, to get that info from. And by the way, the SIG Docs people are super happy to help partner on that writing; you can literally give them just some chunk of semi-crappy documentation and they'll make it pretty and do all that work for you. So, given...

All right, we're in the homestretch, and I've got a hard stop at a minute or two past the hour. So, do we expect this work to get merged and done by any particular time? Are we gonna push it right up to where code freeze happens? I mean, this is sort of an expectation-setting question about the level of effort required to get this done in time.

Cool. The reason I ask this question, or want to know, is because if somebody's depending on you for this, they're gonna want to know when this stuff is going to drop, so they can start testing and integration work. Any other important notes? I think we kind of nailed it all, but if there's anything else anybody wants to add... just, I think...
B
Sorry, go ahead. So, why we didn't enable this, like, self-hosting by default in 1.9 is because we didn't flip the switch early in the cycle, right, so we then didn't have enough of a cycle. Tim's PR, I think, was LGTM'd by SIG Node on code freeze day for 1.9, which basically doesn't cut it for, like, enough testing. So instead we plan to do this really early in the 1.10 cycle, so we have lots of... a great validation period over the holidays.
D
So, if we can answer these questions rapidly and get some buy-in, one way or the other, from these other SIGs, that's great, and I can help facilitate those conversations, or act as an advocate for SIG Cluster Lifecycle in those, or whatever is needed, because I see part of the role of this person who helps facilitate this as being also your advocate to get the work done. So I think those conversations are really important. So... I have to go.
A
One question on updates: in the past we've done this, at maybe a little bit less level of detail in terms of each of these items, with just sort of a quarterly planning tracking doc. Should we try to sort of prioritize, like, these are our P0s, these are our P1s? We can write up a whole bunch of KEPs, some of which will get staffed, some of which won't, so it'd help to tell which are more important than others, you know, since they're staffed by the same people.

Jaice, I will, in closing, say: I think what we should do is take the rest of the things we talked about doing for 1.9, especially the ones that didn't get finished, and turn those into KEP format. People can feel free to keep hacking on the doc that Jaice was modifying during the call, and I think in two weeks, at the next meeting after KubeCon, we should circle back and go through sort of the rest of them, and hopefully it'll be much quicker on each entry.

We're gonna start with the doc we had for 1.9 planning, because a lot of that stuff didn't actually get finished, and a lot of it is sort of multi-release-cycle work, right. So we had kubeadm HA, right; that's still gonna be something we're working on in 1.10, so it just kind of rolls over, so we'll still have kubeadm HA. We can start breaking down what the tasks are and how far we expect to get in the 1.10 release cycle. We had, you know, better documentation; bootstrap tokens to GA, right, did that get done, or is there more to do; we had add-on management. Like, there's a lot of things here that we should write up as KEPs and try to figure out, you know, if they're gonna get staffed, and if not... and see. There's also new work coming in, so I think it'll be pretty easy to backfill the ones we had from the 1.9 planning.

And then we should go through those, figure out who's gonna work on them, and then also see if there are things that are missing that we want to write down as new KEPs as well. So I propose that we try to do the backfilling and proposing of new options offline, and then review those during our meeting in two weeks. Sounds good to me. And we are now two minutes over.