From YouTube: 2017-04-04 17.02.48 SIG-cluster-lifecycle 166836624
B
So I added that there. I just want to collect the stuff that we need to do and maybe get some names on it. I do think the first two items I put there are, you know, the docs on networking. We haven't updated the networking docs to call out which solutions and which providers are ready for 1.6 versus not, so my guess is that users are going to have a rough time once they start using 1.6.1. So getting that done in the next couple of days should probably be the high priority.
B
I'm going to have a hard time doing that without testing some of this stuff out. So it's probably going to take a while to be able to test it and make sure that the instructions make sense. And, you know, it'll probably mean a bit of putting providers in the penalty box until their instructions are clear, and then we can sort of move them up into a list that says: yes, these are ready for 1.6.
B
Around the 1.6 packaging, folks were trying to just launch the 1.5 version of kubeadm, and the fact that when we pushed the new debs and RPMs the old ones got deleted created a whole bunch of issues for folks. The state that we're in now is that we have old versions of the kubelet and kubectl, and you can pin to those if you want to, at least on the deb side.
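For reference, pinning on the deb side can be sketched roughly like this; the exact version strings are assumptions and will vary by release:

```shell
# Hypothetical sketch: install a specific older kubelet/kubectl version
# and hold it so a later apt upgrade does not pull in the newer packages.
# The version string below is illustrative, not an exact package version.
sudo apt-get install -y kubelet=1.5.6-00 kubectl=1.5.6-00

# Prevent the pinned packages from being upgraded automatically:
sudo apt-mark hold kubelet kubectl
```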
C
Issues: one is reset. If you run reset while the Docker service is disabled, the behavior is very crappy; in general, the reset code should be improved for 1.7. Basically, we should be able to tell the user how to clean up everything on this node, and we should have some kind of privilege check or something, because it still assumes root access to the Docker socket.
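A rough sketch of the manual cleanup that a more complete reset would automate might look like this; the paths are typical for installs of this era and are assumptions, not an official procedure:

```shell
# Hypothetical manual node cleanup (the kind of thing "kubeadm reset"
# should cover). Note this still assumes root access to the Docker
# socket, which is exactly the limitation discussed above.
sudo systemctl stop kubelet
sudo docker ps -aq | xargs -r sudo docker rm -f
sudo rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd
```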
B
But I can imagine a world where you could have a kubelet join to multiple masters at some point; something to think about. Yeah, has anybody talked about that? But I think I'm just scared of, like, having multiple kubelets doing this. It seems scary to me to just have it go and nuke everything at the wrong time. Right now it has a one-to-one relationship.
C
B
That is not reasonable. Okay, I'll dig up the issues for the documentation things and put those in there. I could use some help on the docs; otherwise I'll try and get to it this week. I'm digging out from post-KubeCon, but it's definitely on my list to do those documents. And then, folks, if you could make sure that we have the right pointer to point folks to, 244, I haven't looked at it lately. That would be useful.
A
B
Okay, so I'll go through and look at the rest of them, and I'll have to do a little bit of a triage pass to make sure that we're within spitting distance. Cool, okay. Any others? Besides the reset, the networking, and the old debs and RPMs, is there any other sort of 1.6-cleanup-type stuff that we've got?
C
C
Yeah, yeah, exactly, but still, I'm going to check if it's affected for your release, 1.6. Basically, since we removed the safety checks before, I think we might hit some race conditions, but we'll see; that may be unstable. At least it's broken on master, but I think they are up for fixing that already.
G
C
C
You know, I'll put it on. Oh, and then basically we should try to move away from component statuses, because I think that's kind of... okay, I'm not a hundred percent sure, but I think the original reason was that we hit some race conditions using component statuses. Well, it's reliable-ish, but I'm not official on that. It would be nice for 1.7 to be able to use the healthz endpoints for checking instead.
B
Why don't we go through and... you know, I just wanted to make sure that there's nothing overhanging that will lead users to a crappy experience right now with 1.6, right? And so I think the reset thing kind of meets the bar there, and the deb and RPM thing and the networking stuff, I think, meet the bar. Are there any other stumbling blocks that we think will hit 1.6.1 users right now that we need to prioritize fixing? Anything else?
C
C
E
B
So the only other thing is that if you go through that sort of, you know, tire fire of a bug that everybody was complaining about, there were a lot of folks that were seeing kubeadm hang in a different place, and there were some suggestions that there were some issues around, you know, requirements and Docker settings when running on CentOS with RPMs. I haven't dug into that issue enough to understand it. Is there actually something needed there? Does somebody really understand what's going on there?
H
Is this the issue with... the deadlock bug? Yeah, yeah. If you look in the deadlock bug, people were posting some stuff that was basically saying: if you disable kubelet networking, things are fine. And strictly speaking, that was true; I mean, the network wouldn't be not-ready if there's no network to prepare. So that was fine, and then they were like, 'and everything works fine now', but it didn't really stop there.
G
Exactly. Even since we did our last release, the Docker version that we were using with CentOS was changed in a way that uses a different cgroup driver. They switched from the cgroupfs driver to the systemd driver, and the kubelet by default still uses the cgroupfs driver, so the kubelet and the system's Docker do not interact together, and I don't think the kubelet starts. Yeah.
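The mismatch can be checked and fixed along these lines; the flag and the drop-in file path are typical for a 1.6-era RPM install and may differ on other systems:

```shell
# See which cgroup driver Docker is using (e.g. "Cgroup Driver: systemd"):
docker info | grep -i 'cgroup driver'

# Point the kubelet at the same driver as Docker, then restart it.
# The file path below is the usual kubeadm drop-in location of this era.
sudo sed -i 's/--cgroup-driver=cgroupfs/--cgroup-driver=systemd/' \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```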
C
G
B
Okay, so can we dig up that other issue and the release notes and point people towards it, or is what they need already there? Okay, I guess... I'll keep an eye on this bug and sort of try and redirect people to opening new issues or seeking help elsewhere. There were people like...
B
And so what would happen is that the CNI providers would error out when they were asked to delete a context for something that they were never called to create, and this was a change in behavior from 1.5. And so what that meant is that the kube-dns pod would never actually come up, because what would happen is that you would bring up the cluster, and you would schedule kube-dns.
B
It would make its way onto a node, but it wouldn't start because CNI wasn't configured yet. You could configure CNI, and then in the process of trying to restart and actually bring up kube-dns with CNI, it would get into a fail loop, because the CNI driver said, 'I don't know how to delete that thing.' Okay, and we were in this position where we could either update all the CNI implementations, or we could actually change the behavior of the kubelet so that it was more forgiving of these errors.
B
The solution that we ended up with was to actually have the kubelet recognize when CNI is not installed and not try and start any pods to begin with. As a side effect of that, the kubelet now started reporting that it was not ready before CNI was available, right? So before, it would say it was ready, but CNI was broken, and we could sort of sneak some stuff in there and make stuff kind of work well enough to actually fix things up.
B
Now it was actually being more upfront about the fact that the node was kind of in a hosed state because CNI was misconfigured or not configured. This stuff landed very late, and so, as a result, we didn't have a lot of testing on the RC. This landed between the last alpha and the RC, and we didn't get a lot of test coverage on the RC, I think because most everybody was busy preparing for KubeCon.
C
B
It was like over a weekend; it just did not get a lot of eyes. So I think one of the things in the post-mortem is, fundamentally, I think we rushed the release of 1.6.0, because we didn't have enough bake time on the RC, because we wanted to be able to announce at KubeCon. So what happened is that, because the node would now mark itself not ready...
B
What would happen is that kubeadm would deadlock, right? Because it would get to the point where we would wait for nodes to actually join and mark themselves ready. That would never happen, because CNI is not installed, right? And so that means that kubeadm would actually wait there forever. Now, say we went through and actually made kubeadm say, 'I don't care if the node is ready or not, I just want to make sure that the node is actually there.'
B
We still have this end-run where, if you schedule a DaemonSet with host networking turned on, it sort of bypasses the scheduler: it gets scheduled regardless of the fact that the node is not ready, which is good enough for you to get CNI installed. But it runs with a lot of awful races, right? At some point I wouldn't be surprised if DaemonSets start actually recognizing node-not-ready also. So we need to start testing this crap really well. A lot of discussion.
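The host-networking end-run described here looks roughly like the following; the names and image are placeholders, not a real add-on:

```shell
# A DaemonSet with hostNetwork: true can land on NotReady (pre-CNI)
# nodes, which is how network add-ons bootstrap themselves.
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1   # DaemonSet API group in the 1.6 era
kind: DaemonSet
metadata:
  name: example-network-addon    # placeholder name
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: example-network-addon
    spec:
      hostNetwork: true          # the scheduling bypass discussed above
      containers:
      - name: addon
        image: example.invalid/network-addon:latest  # placeholder image
EOF
```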
B
B
So, thank you. And I think a lot of it is that, you know, a lot of this stuff just landed hard; we didn't get enough testing on it as early as we should have. I think it was difficult to test kubeadm early on, just because the process of building the release artifacts similar to the way that we're going to deliver them to users is very difficult in the middle of a release.
B
Right, there's no way for me to go through and build a private version of everything that users are going to install so that I can actually sort of test that along the way. So all the testing that had been done with kubeadm up until this point was very narrow; it just cobbled a bunch of stuff together. And I think, as an implication of that, we didn't notice the ordering issues that were hitting us with respect to CNI as early as we probably could have.
B
A
B
That is a very fair assessment. I think the CNI change probably wouldn't have gone in if we had actually had sort of testing of this stuff early on in the first place, so we never would have gotten into that situation. And if we had release testing, we would have noticed the deadlock between the RC and the actual release.
I
C
B
I think at this point there is a mismatch in terms of, you know, the Kubernetes release cycle and how that works, and a recognition that so many users are using kubeadm at this point, whether they like it or not. So I don't think the kubeadm stuff was necessarily getting as many eyes at that level as perhaps it should have. I think some of this is evidenced by, you know...
B
Usually we do the .1 release like two weeks out, and there was a question of whether we wanted to hurry up and do a 1.6.1 to fix kubeadm, or whether we wanted to wait those two weeks. Based on the sort of reaction from the community and the amount of noise that we heard, it's pretty clear that it was important to get a release out ASAP and get that fix out there.
A
Makes sense. I just wanted to say thank you to everyone who reacted to it and put time into fixing it, so I'll just say that. And the other question I have is: Robert, you mentioned that the tests were in the wrong category. Has that been rectified now, or are they still in their own category?
I
Well, it's not really a category necessarily, right? I think the release owners were looking at a particular tab in testgrid, and there are a zillion different tabs, right? And so we can put our tests into that tab, which hopefully brings them to their attention, which I think is what Jacob was just saying yes about. But there's a larger issue of trying to figure out which tests should be release-blocking and how we aggregate those results, and I think a single tab may be insufficient as the number of tests goes up. Makes sense.
B
G
I guess I just want to bring this up to make note that Brian had asked for one, which I think is the right thing to do, in the same way that the storage SIG wrote a post-mortem after 1.3, although they called it a retrospective instead of a post-mortem, which makes it really hard to find in history. And Jacob volunteered to start working on this. I think it's probably going to want lots of help from the community, and so, if that resonates with you, please, please help him.
I
We want to get this out sooner rather than later, to show that we're being very responsive in terms of root-causing and working on solutions. Mine was more of a call to action to write the post-mortem than trying to say what should go in it, because I think we need to do that online. Yeah, sounds good.
I
C
Basically, we have about 20, maybe more, 25-30 PRs pending for kubeadm targeted at master. So just a heads up that the reviewers of the individual PRs should kindly take a look at those, or reassign if they don't have time, and then we can probably, as soon as possible, get a stable version on master. And now that we also have the testing available, we should have a quick link to that as well, so people can find it quickly.
C
Cool, that would be super helpful. I mean, most of them are like 10, 20, 30 lines, so they are not that hard, but a lot of them are from external contributors who have made these contributions, and they have already waited for like 3-4 weeks. So we should try to take a look at them as soon as possible, so it doesn't seem too bad from their perspective. I know from my own experience.
C
I
Great question. If people want to open up the doc, I'm just going to sort of walk down the doc. These are in no particular order, so we're going to try to get all the way to the end somewhat quickly, so people can then kind of go back and think about what they want to work on or commit to, and/or how we should prioritize this group.
I
So let's start. The first one is component configuration. This is something that came up at a SIG meeting, maybe three or four ago now, when Brian was on, and we were talking about how various components in the system (the kubelet, the scheduler, the API server) are all moving toward getting rid of command-line flags and starting to put their configuration in files.
I
This is somewhat related to add-on management, because the add-on manager is also going to need to deal with configuration of add-ons, and since lots of different things were doing this, and we don't really have a great framework for cross-SIG efforts, it was sort of decided that the cluster lifecycle SIG would own this effort going forward. And from what I can tell now, this is going to land in 1.7 in one or more places, right?
I
So it's already in the scheduler; it's likely to end up in the API server or the controller manager soon, as that's actually being worked on; and it will probably land in the kubelet. And, you know, other components, like the DNS component and so forth, and the cluster autoscaler, are likely to move in that direction too, I presume. So Manish from Google has volunteered from our side to drive this. I'm sure he'd be happy to have other people from outside of Google help grab this effort, but basically what we're looking for here... so, go ahead.
I
The goal here is to drive consensus on a sort of default pattern for how component config should be done. From my point of view, I look at the success criteria as having at least two or more components in the system with a consistent component configuration story, and having some docs about how everyone else can adopt it without necessarily having to go through us. So it's going to require, you know, inviting people from the SIGs to work on this.
I
You know, I put in here 'use kubeadm in kube-up.sh', because I think that if we can start using kubeadm in kube-up, it will vastly increase the amount of testing, right? Like, we won't cut a release if kubeadm is broken, if every single test that runs in CI is built on top of kubeadm, stuff like that. We can get lots of miles on actually using it.
I
You know, Justin noted elsewhere in the doc that they're starting to use it in kops, right? So again, starting to use it in more production deployments and figuring out sort of what that means. We're also, this quarter, going to start looking at how to use kubeadm inside of GKE. I'm not sure we'll get there this quarter, but we're going to start trying to figure out what the gaps are, and that's also going to feed into trying to make it more production-ready. There are a couple of subtasks there that are kind of loosely grouped into this bucket.
I
But those are all sort of independent tasks. The next one was taking kubeadm and making sure we have a solid list of the phases it's broken into. Lucas has a doc where the phases are proposed, which hopefully we can link in here, to sort of get some consensus around that. I think this goes back to production-readiness, which is, as we start getting used in more production deployments...
I
...we can make sure that the things are correct and that the phases actually satisfy the needs of the various production deployments, and try to sort of nail those phases down, because at some point those phases become an API that we need to maintain and that needs to be backwards-compatible across releases.
I
That's kind of what I mean by 'it's an API', right? This is something that people are going to start coding to, and to code to it there needs to be documentation, so they understand what they're doing. You know, I think Justin had mentioned that he's, like, linking in the kubeadm source code to get the parts he needs, which is just not where we want to be, right?
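Since the phases were still only a proposal at this point, a hypothetical phase-based invocation (subcommand names invented for illustration, not a real CLI at the time) might look like:

```shell
# Hypothetical sketch: each init step becomes separately invokable, so
# tools like kops can call into the parts they need instead of linking
# in the kubeadm source code. These subcommand names are illustrative.
kubeadm phase certs
kubeadm phase kubeconfig
kubeadm phase controlplane
kubeadm phase addons
```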
C
Yeah, I think we got quite far in the 1.6 timeframe towards production readiness with all the security features, or whether they are 'features' may be discussed. At least it got improved a lot. An example from 1.6, and an undocumented thing, is that you can bring your own CA cert and your own other certs; we haven't documented that anywhere, and, I mean, some of these things...
C
I had time to implement, like, the two first phases, so it's the kubeconfig one and so on, but it should be vastly expanded and a lot more documented. I'll take a look; I'll revisit the kubeadm phases doc soon and try to think about what I thought one month ago or something when I wrote it, and add some new experience.
I
Deployments like those are the two that I can pick off the top of my head, because we talk about them a lot, but there are likely other people using it in production. You know, if we could get folks to use it for their production... you know, folks that are running on other on-prem installations. It would be really useful to know what parts they need to break out to make it work in their automation. Yeah, exactly.
C
I think we should aim for a beta-like API for this, based on the proposal I have, but we'll see when we get closer to the 1.7 code freeze: initially alpha, and then try to make all known cloud providers and all large deployments use it, at least in a testing branch or something, and then eventually move to beta and GA later this year. That sounds...
I
...really good to me, thanks. So next on the list is upgrades and downgrades for kubeadm. This is something that we've sort of been punting on a little bit. Now that kubeadm is beta, you know, I think when 1.7 ships, people are going to expect to be able to upgrade the installations that they made with kubeadm. I know Lucas has started working on a proposal for this. We also need people to think about how they can...
C
What do you think about reusing at least something of the CoreOS work here? I mean, what we discussed in the in-person meeting last week was that, basically, we would have 'kubeadm upgrade' create a deployment of some kind, or a job, that looked at a TPR that specifies these things:
C
...from what and to what the upgrade should work, and so forth, to avoid building this code into kubeadm itself, because that would lead to a lot of compatibility issues, for example whether 1.7.0 can upgrade to 1.7.1, and all kinds of other issues. So, yeah.
A
For those who weren't in Berlin, or who weren't at the SIG cluster lifecycle meetup in Berlin: we started doing that verbally to begin with, and I made copious notes on the different pieces, and those notes are already in the in-person Berlin notes. They're a bit stream-of-consciousness at the moment, but it's all there already, and I will make a commitment now to have a first draft ready by next Tuesday.
C
Yeah, and I think, in the same manner as the phases thing, we should have a design doc for this, maybe in the kubernetes/kubeadm repo, maybe in kubernetes/community; I don't know which one is better.
I
I know Justin suggested some days ago, it was last week, that the kubeadm interfaces should be discussed in a dedicated issue. So let's get more eyes on that.
C
I think that's it. Well, so then we have three kubeadm-related things: we have the simplified cluster creation, which is, I don't know, a year old now, and which is beta; we have the bootstrap tokens, which are alpha now but should really be beta in 1.7, because that's one of the huge building blocks we're using; and then finally we have the phases, which will probably be alpha in 1.7.
B
I'm going to try to communicate more widely, and the features process is really about communicating more widely about what's going on. I just don't want, you know, folks to be surprised when stuff happens: 'hey, I didn't know you were going down that path', right? So, you know, getting that written down and finding ways to get it in front of people's eyes if they want it. I think that seems like a good plan. Yep.
I
Cool, I'm going to keep moving so we can get through the list and maybe have a couple minutes at the end. The next one, quickly, is to pull master images from GCR instead of sort of side-loading them, using whatever distribution-related hackery we have, out of the Kubernetes release bundle. This is partly about release process: when we build releases, we should be building official images and pushing them up to GCR and Docker Hub.
I
So we can pull them; it doesn't mean everybody has to. I know Justin would like offline builds, and you can definitely still do that. I think the problem with what we do today is that you have to sideload. The nice thing, if we actually push official builds, is that you can still sideload if you want to warm your Docker cache, but you can also just pull, which will also help improve the upgrade experience. I don't know that we have...
I
If you look at the Salt or the cluster configuration shell scripts, what they do is they side-load a Docker image by hash; that Docker image is a tarball inside the kubernetes.tar.gz, and they basically docker-run that hash, which is really terrible, right? Like, we have a release version; it should be pointing at the release version, I think.
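The contrast being drawn is roughly the following; image names, paths, and tags are illustrative, not exact artifact names:

```shell
# Today: side-load an image tarball out of the release bundle and run
# it by content hash (fragile, and tied to the bundle layout):
docker load -i kubernetes/server/bin/kube-apiserver.tar

# Proposed: pull an official, version-tagged image from a registry:
docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.6.1
```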
I
So next is add-on management. This is something that we've talked about many times. Justin has volunteered to drive this in the 1.7 timeframe. It doesn't look like he's here today, but he did say he will try to prioritize this early in the cycle; I don't know if he's committing to getting this finished. I know Brian has sort of a strawman doc of what he would like it to look like, and that's a great starting point. I don't know that we should necessarily be forced to follow that to the letter.
I
I think we need to look at the requirements to make sure we're building the right thing, but the end goal here is really: you want to have add-on management that's consistent across all Kubernetes installations, and not something that only works if you run this weird hacky shell script that's in Salt. It should make Kubernetes more portable. It should make kube-up and kubeadm installations both sort of more similar and more powerful across the board. That's cool, yeah. So next is HA clusters from kubeadm.
A
We discussed this in Berlin as well, and the agreement in Berlin was that we needed to do enough design for HA to ensure that we weren't going to go in the wrong direction when we did self-hosted upgrades. So the goal for the design doc for the proposal is to include enough design on HA that people can get comfortable with that working.
I
I think that almost everyone would agree that upgrades are more important than HA; it's more important to nail upgrades than to get partway through HA, right? I think it's also good to be explicit about what we're not going to deliver, because we don't want to be telling people 'hey, we're working on HA' and have them expect it in the next release because we're working on it. If we can say explicitly, 'we are not trying to get there for 1.7; we're trying to lay the groundwork to get there past 1.7', that's good to communicate.
I
C
Yeah, on a high level, we've said that we have two options for HA on bare metal. One is the smart client, which tries to refresh 'what CAs do I trust and what API servers are there' and somehow cache it locally; this puts the burden on the client. The other possible option would be to use DNS of some kind; for example, I mean, we've talked about DNS or a similar solution.
I
This is just sort of on the radar, because it's somewhat related to the point, but I really think this is a networking SIG problem to solve, although it again should help sort of make deployments more consistent, as we have an add-on manager and add-ons that you need to run, like kube-proxy. The tricky thing with kube-proxy is that it could be a bootstrapping problem, right, because of the interdependent services. So that's why it's on the list. Yeah.
C
We've hit it for one or two things, which is wonderful, but the kind of problem we face is that currently we store the master's API address, basically the kubeconfig file, in a ConfigMap, and there we have a problem if we want to update those endpoints somehow. I mean, okay, we could do rolling upgrades of those, but it's kind of hard, and it's probably going to affect services in the cluster and everything. So we haven't really thought about a non-destructive way to update things here.
I
But some of it is more a kube-up issue than it is a kubeadm issue, and as we rectify kube-up to start using kubeadm, we will pick up some of these pieces on our way. Yes, excellent. The next one: this was one coming from the list I had from the last planning cycle, which was cluster-wide config, but I can't remember what that meant; someone said that he thought it was storing stuff in, like, a global ConfigMap.
C
I think it's pretty similar to component config, but I don't remember exactly what it is either. Yeah, and the cloud provider thing.
C
I
We need to do a lot of docs writing and cleanup, and I think we often ignore the cleanup part, but there's a lot of sort of ancient docs about how to create clusters on 18 different platforms and so forth. We talked about sort of going through and trying to audit, you know, which ones are broken and which ones don't have an upgrade story, and either make that clear or start ripping them out. And I think we should, you know, maybe try to keep working on that this quarter to make more progress.
C
I maintained... well, Brendan Burns initially made the multi-node guide back in its early days, like 0.x or 1.0, and I worked on that until 1.3, and then obviously moved on to kubeadm, since Go code is preferable over bash. But I am going to test if that one's broken; I think it is for 1.6, because I haven't really seen anyone stepping up to maintain it after me. So I expect I'm going to...
C
I
Yeah, it'd be great if the owners of the other ones would do the same thing, but I'm not sure we can trust them to do that, and I don't have a way to test a lot of them. So I think it's just going to be sort of: send a PR to the owner trying to remove it, and see if they push back.
C
I
And then the last one on the list is something that Jacob added, which is to invest in velocity and pay down some technical debt, which are both important things for us to do to keep the wheels turning smoothly. We'd like to move kubeadm into its own repository; we've brought that up multiple times, and Jacob and I started writing sort of a proposal for how to do that. The only thing I'm concerned about is that we don't want to be the first one to move out, because there are lots of larger issues there that aren't really our problem to solve.
I
If we're the first thing to move out... we saw with 1.6 that if we don't really stand up a process, you know, we weren't being tracked by the release team; all of our issues are likely to start getting ignored from the release point of view, and our cuts are going to start getting ignored if we move out first. kubectl was supposed to move out...
I
B
Glad to hear you say that, Robert, because that's why I fought for kubeadm to be in the main repo, even though it's kind of a pain in the ass. When can it go? You know, we need to be able to sort of stay front of mind. So yeah, let's, you know, let all the penguins push kubectl off the ice floe first, right?
B
I want to spend some time personally trying to get the release stuff up and running so that everybody can run it; we can spread some of that load, and it can actually be part of our sort of day-to-day testing process. You really want to test things like they're going to be when you release them. Oh, I think that'll be good. Yeah.
D
C
At least in my... I'm not sure if I should write it here, but I envision something like a 'kubeadm validate' or something; I know we talked about this in Seattle. Basically, a single kubeadm command that would deploy an nginx deployment and just curl for it, or somehow test that DNS is working, test that kube-proxy is working, some really basic smoke tests, so that we can say: well, a user can run this command, and we know that at least the basics we trust are up and running for sure.
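A 'kubeadm validate' did not exist at this point; the envisioned smoke test might be sketched like this, with placeholder resource names:

```shell
# Deploy nginx, expose it, and fetch it from inside the cluster; this
# exercises scheduling, DNS, and kube-proxy in one pass.
kubectl run smoke-nginx --image=nginx
kubectl expose deployment smoke-nginx --port=80
kubectl run smoke-curl --rm -i --restart=Never --image=busybox -- \
  wget -qO- http://smoke-nginx
# Clean up:
kubectl delete deployment,service smoke-nginx
```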