From YouTube: 2017-06-09 18.00.00 SIG-cluster-lifecycle 166836624
B: Was that a poser? Yes — so, when we talked at the summit out in San Jose, there were three major features for kubeadm that kind of all wrap in a circle: self-hosting, upgrades, and HA. And what we didn't have resolution on was — each one of those items has a checklist, right? HA has its own checklist, and self-hosting has its own checklist, and upgrades has its own checklist, and what we kind of want to figure out how to do is rationalize them.
D: Well, for GKE — I know that world. For GCE I think we only have this script that is doing upgrades, so we basically don't do HA, more or less — that's the kind of thing we are working on at Google. But yeah, since we don't really do it... so the implementation. So after —
B: Alright — what are you doing with certificates, then? Are you re-issuing? It just gets listed on the master, which is good, but they're not tied — you're using, like, DNS munging to have your new machine come up like your old machines? — Yes, we use DNS to address the masters. — Yes, okay! So then your new machines are just, like — are they m1 and m2, or are they just api.*?
B: But did you plan on, like — there are logistical questions: do you want to do a one-shot upgrade, or do you want to do one-shot scheduling, or do you want to have, like, a pre-seeded scheduler and then kill it? Because that's the bootkube mentality — or, like, actually implementing the one-shot scheduling, which is still unimplemented.
A: You would just do it for all three, because they'd be standbys — it wouldn't matter. Yeah, but, like, on the same node it's —
B: There — you could submit it, but it would just not do anything. You would say, like, the DaemonSet should be supporting master nodes, because we have the master flag, right? Yeah — and so long as it — because it supports affinity for DaemonSets? Yes, it does. So you should be able to deploy to the masters, and you would basically just spread schedulers across all those masters, and it's not a bad thing, but it's not like — ideally we would want to have, like, two or three of these.
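The idea above — a DaemonSet that lands only on masters — can be sketched roughly like this (a minimal sketch, not kubeadm's actual manifest; the object name, image tag, and the 1.6/1.7-era master taint key are assumptions):

```yaml
# Hypothetical sketch: run the scheduler on every master via a DaemonSet.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: self-hosted-kube-scheduler
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: self-hosted-kube-scheduler
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""    # land only on master nodes
      tolerations:
      - key: node-role.kubernetes.io/master   # tolerate the master taint
        effect: NoSchedule
      containers:
      - name: kube-scheduler
        image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.0  # placeholder tag
        command:
        - kube-scheduler
        - --leader-elect=true   # only one of the spread schedulers is active
```

With leader election on, spreading one scheduler per master is safe: the extras sit as standbys, which is the behavior discussed above.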
A: But I was thinking, like: if we have a one-node case, would it be possible, when doing a rolling upgrade of a deployment, to first add one and then remove one — or is it, like, the other way around? Is it possible to have the new one added before the scheduler...? Okay — or are we gonna — like, we're setting the scope now, so are we even thinking about one node, plus one more, for self-hosting in 1.8? Or do we, like —
A: I think that would — at the same time, I was thinking just, well: the user does — wait a minute — we have self-hosting on by default, and he doesn't ever add any more nodes, and then 1.9 comes out and — "oh wow, I want to upgrade my cluster" — and so there we go. I don't know, it was just, like: if we — that's the criteria right now. I mean, we don't require the user to have a node at all right now with kubeadm, and it's like: do one node —
B: Yeah, I would like to, but there's, like — this gets back to the root of why we're having the conversation. It's like: we have these two checklists, and in order for you to enable HA in the way you want, and self-hosting the way we want it to be — because these two features overlap, right — yeah, we're going to have to execute pieces of the puzzle at the same time, right? Like — hopefully that makes sense.
A: Yes, but — anyone can jump in. Are you familiar with checkpointing and all this? Right, so —
A: Cool — then I'll just give them a couple of words on it. Basically it's like — what we tell folks right now is: we write the manifests to disk — API server and controller manager — and we have those, a functioning control plane, up and running; we're injecting the same manifests into the API server. The API server starts crash-looping because it can't bind to the right address, and then we kill the static pod manifest.
A
But
then
we
have
a
problem
that
if
you
restart
it's
a
power
cycle
or
something
on
the
node,
when
it
comes
up
the
cube,
let's
expect
the
master
to
be
available,
but
the
master
count
the
master
isn't
running
and
for
it
to
run
it
has
to
be
run
by
the
cubelet.
So
it's
like
this.
This
circular
dependency,
and
so
the
kind
of
semi
hacky
solution
is
to
the
checkpoint.
The
last
manifest
so
on
the
master.
A: — a checkpointing agent is always running, and so after the power cycle the only static pod manifest that exists is the checkpoint. It starts up, it detects that this pod isn't running self-hosted but it should be — as I remember, it has put some state in a checkpointing directory somewhere, and I —
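For context, the recovery path being described hinges on the kubelet's static pod directory: the checkpointing agent keeps a copy of the last-known control-plane manifest and writes it back there after a reboot. A rough, illustrative shape of such a checkpointed file (the path, image tag, and flags are assumptions, not bootkube's exact output):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.checkpoint.yaml (illustrative)
# The kubelet runs anything in this directory as a static pod, so after a
# power cycle the API server comes back without needing a working master.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true   # bind directly on the node's address
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.0  # placeholder tag
    command:
    - kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379
    - --secure-port=6443
```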
A: Because I can see advantages of both. So yeah — basically, that works. At some point in time, I think, there was a proposal like "here's how we could do checkpointing generically in the kubelet, and here's how we could do something specific to this use case", but I think it's stranded at some point, and nobody actually contributed that, yeah.
B: You burned it to the ground, yeah — and then when it comes back up, it says: "oh, I see you spewed some YAML", and then, you know, it starts those back up again — and then having this sort of workflow on startup, which isn't bad. But the notion that we call it "checkpointing" kind of makes my head spin, because back in the day, checkpointing and restarting literally meant, like, you know, storing register data and all the other goo that comes along with it — an actual full checkpoint. Yeah. We —
A: We could definitely, like, rename it for this cycle — so, to just say: this is the scope, for the disaster recovery we'd like, and so forth. We could limit the scope for it as well, to say the only thing that needs to be done — the only outcome we want there — is being able to reboot, right? I think.
B: That actually helps, because what seems to happen every time people see "checkpointing" is they think of the full checkpoint — and that's what I usually thought of originally. But then I looked at bootkube and saw some of the pieces of what it's doing, and understood what we need to do from a startup routine — and it's not really checkpointing. It's basically, like — it's storing some local state, yeah.
A: So — I don't know if we have quorum for this right now, but anyway — we could say that, for the 1.8 cycle, we will only focus on disaster recovery when self-hosting, and then, like, somebody else could work on checkpointing as such, more generally, and so forth. But we will do this for this cycle, and it will be the only thing required for keeping state between reboots.
A: I think so. And also, I think if we want to build something into the kubelet for real, it should be the more generic solution — like snapshotting/checkpointing, with all the effort that requires and all the implications: writing proposals and thinking about every possible scenario that could ever happen with checkpointing. But by having it out of core — which, generally, I really endorse — it's like —
B
So
where
should
it
live,
then
I
guess
there's
another
logistical
questions,
because
this
has
got
to
be
like
the.
If
we're
prioritizing
execution
of
items,
this
is
definitely
one
of
the
highest
priority
wants
to
do.
It
has
to
live
someplace
that
can
get
versions.
What
you
know,
logistics
matter
so
I.
E: — been — we did this — think of something like the DNS controller: eventually there are some other DNS controllers, and they're all, like, merging together into a top-level incubator project when it becomes big enough. In this case, you'd pre-merge into the kubelet when it becomes big enough, I would guess — but yeah.
E: I have a question now: do we want to scope this down to the things we actually want to solve — like the API server, etcd, and maybe KCM and the scheduler? Or do we want to say that we're — because I'm worried that there are actually edge cases that will mean it's not a generic checkpointer. It is, like, when —
B
I
was
I'd
like
to
talk
with
okay.
If
you
it's
not
even
checkpointing,
it's
like
that's.
The
thing
is
like
we
should.
We
should
probably
have
a
run
ocular
that
makes
sense
here
because
with
which
all
you're
doing
is
like
fluffing
manifests
around
right.
That's
that's
all
we're
doing
it's
it's
it's
not.
We
should
not
call
it
check
pointing.
We
should
call
like
yeah
mo
back
up
yeah
mo
manifest
back
up
or
something
like
that.
Scrapping.
B: We just basically need, for the local node, whatever it's running on — it needs to know what the manifests are for that machine. — What about the kubelet? We're not going to checkpoint — I mean, like, the kubelet upgrade? Good question. I mean — oh, it's the — the kubelet upgrade will still be, you know, "yum update" your machines, or apt-get update.
B: So if you're going to update the local machine following the stock upgrade instructions that exist today — if you want to upgrade your local kubelet, you still need to "yum update" your whole cluster and restart, or it's just a systemctl restart. But everything should still be running, right? The workloads still continue to run, even though they've been updated.
E: Yeah, I'm just wondering whether that means — sort of what we did — this is why we punted on this in kops, right? We were like: well, you have to update your OS anyway. "Oh, you want to update your kubelet? You reboot the machine anyway." And on AWS, every reboot is a new instance — so let's not even worry about it; we're just going to always reboot, and we're going to do, like, hot rebooting later. Well —
E: Well — what was your question? Well, I guess the question is this: what if we're doing self-hosting, but we're saying you still have to use a different mechanism to update? But then we're not — like, what does self-hosting mean? To me, self-hosting means you can upgrade Kubernetes by changing a version, and it rolls out nicely, right? Yeah, I —
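In other words, with a self-hosted control plane the upgrade surface is just the pod template of a Deployment: bump the image tag and the Deployment controller rolls it out. A hedged sketch (object and image names are illustrative, not kubeadm's actual objects):

```yaml
# Illustrative self-hosted scheduler Deployment: changing the image tag
# below is the upgrade; the Deployment controller performs the rollout.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: self-hosted-kube-scheduler
  namespace: kube-system
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxUnavailable: 1   # keep one scheduler alive during the roll
  template:
    metadata:
      labels:
        k8s-app: self-hosted-kube-scheduler
    spec:
      containers:
      - name: kube-scheduler
        image: gcr.io/google_containers/kube-scheduler-amd64:v1.8.0  # was v1.7.x
        command: ["kube-scheduler", "--leader-elect=true"]
```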
B: Yeah — I call it "self-hosted control plane". We're doing a self-hosted control plane, but not a self-hosted kubelet, because that is literally impossible with some of the logistics that exist inside of Kubernetes still. There are literally mount-namespace propagation problems. So you need to have — you know — yeah, I don't wanna —
E: Another battle scar: kops did that. They added an optional volume mount to the API — the kubelets don't speak that; they don't have the volume — this was for DNS — and so you upgrade, it gets the new manifest, puts a new manifest on there, and now the kubelets can't mount — can't schedule the DNS pods anymore, because they don't have the optional volume mount. So there were, like, all these horrific skew issues. Now that's — that was rough!
A: From a holistic point of view, it's like: you should be able to run kubelets that are two versions older than the control plane — so, I mean, 1.4 and 1.6 maybe, but preferably 1.5 and 1.6, or 1.6 and 1.7 — and I think that's the most common — before you upgrade your kubelet. But that's — yes, you mentioned that there was horrific, not-good stuff to work from, but I think, at the same time, it didn't affect that at all, yeah.
B: The problem is, like: we have so many tests that we don't manage them anymore, because we're just trying to keep up with the status quo. And it's not necessarily that we have that many tests — it's that we have so many suites that rerun the same tests in, you know, weird configurations.
A: One huge upgrade issue, though, was the CA-key issue from Mike: the CA key was thrown away early — like, after initial generation — so when the CA key was needed for the first kubelets, for, like, TLS bootstrapping things, they had to come up with some clever, clever logistics there. So, I mean, this is also, I think — well, lesson learned: we shouldn't throw away stuff we might need in the future. But it's hard to plan for these things as well — to know that up front.
B: This is, like, one of the pieces for self-hosting that will be required as part of self-hosting. You know, in order to do self-hosting the way we wanted to, there's got to be high availability to roll things forward, right? Then you get the rolling update of your control plane. Otherwise, if you have single-node self-hosting, you get into this weird edge case of upgrades, right — you literally force checkpointing to work, that stuff.
B: Sorry — okay. So, I mean, literally, other systems do this. Like, you know — we actually did these things in Condor, like, years ago, when namespace sharing originally came out — but we had a third party. So there's the node, which is the kubelet, but there was a third party that did the unshare and managed that, and because there was a third intermediary, you could do infinite recursion, because all the daemons that were being managed had to talk to the intermediary, yeah.
A: So, cool — I think now we've defined the scope: "self-hosted" in this context means self-hosted control plane, and the control plane will be upgraded before the nodes, and we need — at least, we might have a test, but we need more, or better quality, or whatever — and kubelets will be upgraded the way they were installed. So right now it's from packages, so it will be an —
B: My plan, tentatively, has been: the packages that exist from the release process are just binaries. I have already put in PRs, and I'm going to put in wrappers on the release repositories so they're actually legit packages, and I'm also talking with the Debian folks to do it on the other side. So I still have all my packager approvals even though I don't work for Red Hat anymore, so I can build and produce packages for EPEL and Fedora, so — and —
A: Then we have the issues deploying the scheduler — and should we say that, if we don't solve this, we'll just do it the same way as before? I mean, we lost a lot of time in this cycle because of this. Wojtek, have you been involved in this? I think you are familiar with these scheduling discussions, and —
A: So, on a high level, it's like: we have these static pods, the control plane is up and running, everything's fine. We delete the API server; it comes up self-hosted. But still, on our node we don't have any networking — we're using the CNI network plugin, and so we don't have any network setup for, like, the pods; we can only use host networking, right? But that's not normally an issue; we're working around it.
A: I mean, the API server is on the host network, and the scheduler and the controller manager are on the host network too — everything's fine. But the problem is that CRI — and its predecessor — when the CNI networking isn't there, starts to report "node not ready" status, and then the scheduler omits all those nodes from scheduling.
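The host-networking workaround mentioned here is a single field on each control-plane pod spec — the pod shares the node's network namespace and never touches CNI. Sketch (image tag and flags are illustrative):

```yaml
# Illustrative control-plane pod that sidesteps CNI via host networking.
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true   # use the node's network namespace; no CNI needed
  containers:
  - name: kube-controller-manager
    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.0  # placeholder
    command:
    - kube-controller-manager
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
```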
A: So the problem is: when we inject the scheduler deployment from kubeadm as a real deployment, the static-pod scheduler will look for a node on which to place the self-hosted scheduler — but it filters out all nodes that are not ready, and the only one we have is not ready, and then we fail. We're dead in the water; there are no eligible nodes.
B: To me, there are two parts to this, right? There's the "just get it done" part of me, which says yes. Then there's the good-design-principles, well-thought-out-state-machines part of me that says "oh my god", because the state machine for the kubelet is not well defined, and is implicit versus explicit, right? And it makes no sense to me why this wouldn't be an explicit state, because you shouldn't be — in a network — I mean, there's a couple of conditions that you can get into for how it occurred.
B: But it seems like an actual state, right — of the node, of the kubelet itself. So why wouldn't you just have a pod or node status being "network not ready", versus "node not ready"? And if you had "network not ready", there's nothing that prevents you from doing scheduling against "network not ready", right? Yeah.
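For reference, node health is already reported as a list of typed conditions, so the split being asked for would look something like this in node status (values illustrative; the point of the discussion is that a missing CNI config flips `Ready` itself rather than only a network condition):

```yaml
# Illustrative node status: a dedicated network condition alongside Ready,
# which a scheduler could tolerate for host-network control-plane pods.
status:
  conditions:
  - type: Ready
    status: "False"
    reason: KubeletNotReady
    message: 'runtime network not ready: cni config uninitialized'
  - type: NetworkUnavailable
    status: "True"
    reason: NoRouteCreated
```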
E: We're running a deployment where the deployment, like, might not be on every master — then, after you've introduced something there, it suggests: what if we have more nodes than we have the replica count? And you have the scheduler on one node, KCM on another, and an API server on a third — and then that's a whole bunch of test scenarios we'd open up.
A: Yeah, I see your point. I think one of the motivations for using a deployment was, like: we can always be sure that there are two, even if we have one node — and one of them will be active, one will be passive. But if we have five masters and we have, like, four passive and one active — I don't know exactly whether that's the problem. But it is a problem in the one-node scenario, where we want to upgrade but we have only one scheduler running, yeah.
B: I don't know — it seems to me like we're starting to get into HA questions now, yeah. This is exactly what I said: these three features are in a circle, right? Like, they all depend upon — I don't know if it's, like, a cross diagram, or everything depends upon everything else.
A: Can we draw a conclusion from that?
E: It's — yeah, to use an example: yeah, we're getting there. The other advantage of a DaemonSet is that scheduling is easier. So if we had to reproduce — if we had to, like, hand the scheduling off to a manual scheduler, we could do it. We wouldn't have any issues about, like, "are we actually scheduled on this node?", because we know that if we are a master, we can schedule a DaemonSet pod locally.
E
I'm
not
saying
we
should.
We
should
rewrite
the
scheduler,
but
I'm
saying
like
it
might
make
it
might
make
that
not
check
pointer
a
lot
easier
because
we
don't
have
to
deal
with.
Like
should
I
be
running
this
if
there's
a
daemon
set
and
where
we
match
the
note
criteria,
we
should
be
running
that
odd
yeah.
A: Very good, yeah.
B: This is a bit orthogonal, but I wanted to mention it while I thought about it, because we're adding more manifests to the massive manifest blob that is inside the code. Do you have any opinions about, like, using bindata inside kubeadm, so that we can just break the manifests out into a directory structure of manifests and just smash it in with bindata? I have —
E
An
opinion
on
this,
which
is
we
did
that
in
cops,
and
it
was
a
huge
mistake.
Let's
not
do
that.
Vic
I
was
go,
is
actually
just
so
much
nicer.
I
think
actually
Chris
gave
a
talk
about
this
about,
like
our
move
from
manifest,
so
the
the
it
turns
out
users
well
as
its
in
bin
data
anyway,
and
users
can't
really
edit
it.
They
have
to
build
it.
B: I think the difficulty, though, is for someone coming into the codebase who, like, just knows the pieces of the puzzle but doesn't necessarily know kubeadm — they're like, "where the hell is my thing that I need to tweak for just this one bit?", right? So they're walking through the code to figure out where all the manifests are, and once they understand how it's structured, it makes sense — but that's just — it's a learning curve, versus just editing a file which they know is right there. — I agree.
E
In
theory,
I
think
I
think
we
can
achieve.
This
is
just
like
my
cost
experience,
which
is
we
can
achieve
some
of
that
by
you
know,
having
a
single
function
which
generates
each
manifest
that
then
pulls
in
data,
so
like
templating
in
code,
but
once
you
start
having
more
than
a
couple
of
little
substitutions
in
terms
like
the
helm
chart
sort
of
feeds,
that
is
like
people
there's
nothing
in
the
helm,
charts
anymore
right.
It.
A: Yeah — I think for now we could just document this. I mean, it's an ongoing, never-ending effort to make things more user-friendly. But I also like the idea that we'll have one phase that generates the control plane — I mean the API server, scheduler, and controller-manager manifests — that's just one phase, and you can skip that phase if you want to generate them yourself, or whatever. How —
B: How does this sound — because this plays into my point, which is: as we start to turn on these ideas and get them in place, every major turn needs to have, like, a docs PR with it, so we don't end up at the end of the cycle, you know, trying to justify and explain things to other people. Because if it's all in the docs as we update the phases of what kubeadm will do, it will become apparent, like: this is where the code is.
A: I mean — do we agree that we should have it along with the other design documents, which we will check in as Markdown to GitHub? We'll have the self-hosting proposal written down before we write any code. I think that will end up being much clearer, and it will be easier to justify to other people, and at the same time get comments from others chiming in on the document, ahead of code. Yes —
B: It's going to be harder — I mean, we have to have discipline here. Alright — please, good, 'cause, I mean, who's going to execute on this besides you and me? Like, we need to be able to divvy up pieces of this along the way, ideally, because right now we have, like — what astonishes me about this SIG versus other SIGs is that people show up to other SIGs and they're like, "I'm ready to work."
B: "Give me work", right? Like, scheduling, as an example: IBM shows up and they're like, "yes, give me all the work I can handle", right? But here we have, like, 30 people on a call, and, you know, Lucas is a one-man show burning down the house — go ahead. You know, now I'm explicitly tasked with working on this, so that's fine, I'll help you out. But, you know, I'd like to be able to get a little more granularity, and that's part of the reason we're talking: to be able to divvy up some of the tasks.
B
Okay,
so
so
discipline
is
part
of
the
process
for
the
phases
that
but
I
think
as
we
start
to
you,
have
your
checklist
there
and
I
think
we
should
modify
and
update
the
checklist
for
self-hosting.
So
that
way
it
can
like
we
can
break
up
pieces,
be
like
phase
1
and
phase
2
I
know.
Jordan
is
really
good
at
that,
actually
so
like
whenever
he
does
a
large
checklist,
he
breaks
it
into
phases.
A: Initially, I hope it's just a documentation effort. I mean, what we're going to have time for in 1.8 is, I guess, mostly documentation — getting the code, like, somehow structured. I mean, in 1.4 and 1.5 we had this kind of code, and right now, in 1.6, we've gotten better — but, as you mentioned, some issues have surfaced, Justin.
E: That's your own criticism — you should, you know — it feels like, because it's primarily an initiative for your own purposes, you should do whatever makes sense for you, and that is coding, right? It's going to slow down the things which are probably the real deliverables that end users will care about — which are self-hosting, upgrades, and HA — and, you know, maybe focus on those, yeah.
B: Do we want to — here's a time check: we're an hour in. Do we want to power through this, or do we want to, like, you know, take one step of the puzzle — really tackle one of them — and then reconvene at another time? Because we can continue this sort of, like, interim meeting until we get, like, an execution plan, and we can write down the task items and even start executing on pieces of it. Then, as we reconvene, we can be like: okay.
E: To me that makes sense — reconvene — partially because of what you just said, but also because I would suggest the next one might be upgrades, and I think I'd love to have someone from bootkube here, because to me, how you recover from a failed upgrade is the problem, and that's where it gets really interesting. And I don't know if bootkube prepares people for that — absolutely, wow.
E: Yeah — when you say that — but then we have our "not-checkpointer", right? So maybe we do have that, right? That was sort of one of the points of the checkpointer not being generic: oh, if it turns out we need to do special cases, we can do special cases for those four components, right? And so we say: etcd is a huge special case, and we're going to ask you, the operator, to handle that — and then, like, maybe we have special cases for the other guys as well, yeah.
E: I mean, I totally agree with you, and that's what kops is, right? I mean, kops is exactly that — controllers. But, you know, it turns out at the end of the day, like, the kops view is: you need to upgrade your OS, and so that's what kops does, right — at the same time that it upgrades the Kubernetes thing. But, you know, like, that's sort of how it works, yeah.
B: But I don't want to go there yet. Maybe — I think getting the pieces in place, and then, you know, continually re-evaluating as we get the pieces of the puzzle in place, makes sense — because there's this: we have known knowns and we have unknown unknowns. So let's do the things we can right now, and then eventually, over time, you know, the grand city on the hill — that's possible.
A: It's not like it's a package, yeah, yeah. So basically: do we want to build it into kubeadm, the CLI way — like, kubeadm orchestrates everything from the client — or do we want to make it a server-side, in-cluster-running thing that's generally usable, and that consumes, like, a resource — a TPR or whatever — that's getting the information, like this basic spec and state? So I think we want the second — the state thing — to be able to evolve. I mean, it could be a partial upgrade as well.
B
To
well,
let's,
let's
table
this
one,
because
we
want
to
get
more
people
involved
in
this
conversation,
I'm
sure
other
people
will
be
interested
in
like
no
Chester
mentioned,
because
we're
already
an
hour
in
and
and
we
have
enough
stuff
to
do-
pieces
and
execute
and
reevaluate.
B
So
why
don't
we?
Why
don't
we
table
this
one?
And
if
you
want
to
set
up
a
time
for
next
week,
you
know
we
can
we
can
do
it
after
cluster
lifecycle
meeting.
If
we
can,
we
do,
you
know,
do
biweekly
things
until
we
have
these
pieces
in
place.
That's
fine
with
me,
though,
like
Friday
works
for
me,
this
time
works
for
me.
B: I'm not going to get that far yet. I'm going to dig into bootkube's details to figure out what exactly needs to get done in this stage, right? So I'm going to dig deep into bootkube — because I already started digging in there — and figure out what exactly we need for this, if we need even that much, right? And also validate how we want to deploy. So that alone will keep me busy for a little bit, yeah.