Description
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.6bl6aiqaw2a
Highlights:
- Plans for self-hosting
- GA for kubeadm
A
Hello, and welcome to the Wednesday, December 13th special edition of SIG Cluster Lifecycle on kubeadm, HA, and upgrades. Let's see... I'm not sure, I checked the agenda and there was nothing in there, so hopefully the folks that have shown up have something to talk about; otherwise it will be a very short meeting. I wanted...
B
...to chat about the plans for self-hosting, and to start to enumerate the issues around secrets: whether or not we are going to default to using host volume mounts, or whether or not we should checkpoint the secrets. And then we have to check with them, because this will affect work efforts in the next cycle.
C
For example, we could use the deny authorizer feature that was added in 1.9, but that requires us to have a custom webhook inside of the cluster to deny access to the secrets by anyone but root, which is the system:masters group. It also has to include support for bootstrap checkpointing for secrets, which is the other primary blocker for using it, or the thing to discuss. I don't...
B
...think it's necessarily a blocker, other than that it's a totally opt-in feature. I think just getting people to okay it is the hardest part, right? The code itself to actually make this happen is not hard; I think it's the political and security ramifications that people kind of hem and haw on that make it difficult.
B
I'm not... I think this is the right way to go: to force the checkpointing of secrets. I don't think we want to checkpoint anything else, though; that kind of gets into hairy territory that we don't want to own...
A
...or maintain. I know that there was a third thing that the CoreOS folks checkpoint, which is ConfigMaps, but that gets into another weird space which I don't want to get into, because then that's almost generic checkpointing; you're at that stage once you start doing that.
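The checkpointing idea discussed above can be sketched as a toy: persist a secret's data to a mode-0600 file on the host so it can be recovered without a live API server, and read it back after an uncontrolled restart. This is a hedged illustration in Python, not the actual kubelet or checkpointer code; the directory layout and helper names are invented for the example.

```python
import json
import os
import tempfile

def checkpoint_secret(checkpoint_dir, name, data):
    """Write a secret's key/value data to a mode-0600 file on local disk,
    so it can be recovered without a live API server. Illustrative only."""
    path = os.path.join(checkpoint_dir, name + ".json")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(data, f)
    return path

def restore_secret(checkpoint_dir, name):
    """Read a previously checkpointed secret back, e.g. on uncontrolled restart."""
    with open(os.path.join(checkpoint_dir, name + ".json")) as f:
        return json.load(f)

# Round-trip demo; a temporary directory stands in for a root-owned host path.
ckpt_dir = tempfile.mkdtemp()
checkpoint_secret(ckpt_dir, "bootstrap-token", {"token": "abcdef"})
restored = restore_secret(ckpt_dir, "bootstrap-token")
```

The 0600 file mode mirrors the point about restricting the on-disk secrets to root; the concern in the discussion is exactly who can read these files once they leave etcd.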
B
That's the single-master case, right; for the multi-master case this is not a problem. I mean, we can always just say we will not do self... we will do a very limited user story for single-node self-hosted, because otherwise we add all these complexities into the system. If we don't care, we can just say that for the larger HA configuration we will provide, you know, config maps or component config throughout the entire chain. I know we want to unify this one, so we can go either of two ways; it depends.
B
You need it for the other components in order for you to regain the cluster after restart, because the API server will replicate any pods that were self-hosted, right? So if you had self-hosted, there would be no re-creation, as the whole process of re-creation is gone in the uncontrolled restart scenario.
B
The controller manager and the scheduler would have to be checkpointed, which means that if you're gonna use component config, you've got to checkpoint the com... actually, no, you don't need to checkpoint the component config, because as long as it can get to the API server it's fine, and the API server will come back up. So you don't need component config to be checkpointed.
B
Yeah, there's other stuff we need to do along this road. Like, I want to enable... we need to enable it in the test suite, right? Once I get a PR into kubeadm that does the restart, I want to enable it in the test suite and have a test case that basically checks for recovery and restart, just so...
B
...we have some coverage there, because right now we don't. And then the next thing that I'd like to put in there too, which is slightly orthogonal but still related, is to feature-gate enable priority and preemption, just so we have some coverage there, because right now we don't actually deploy the... There are many terms that overlap, but one of the terms was called the rescheduler, and it's not really a rescheduler; the current rescheduler doesn't reschedule, right? I...
B
Well, there's whatever is critical: the pod add-ons, right? The critical-pod add-on stuff that was done; we don't actually support that, but we should have. So I think that should be a requirement for GA, by the way: that we have priority and preemption enabled, because without that the control plane components could get killed and then not restarted. Right, yeah.
C
So it's this chicken-and-egg problem: they won't go to beta before we use it, and we necessarily wouldn't like to use it before it's beta. I mean, it's of course hard to say, but if they are confident in the solution, and they really think it's as close to beta as we can get and expect no breaking changes or whatever, then we might do it. I would...
B
...you know, enable it as part of the test suite and just give cycles on it, to get experience with it and see if we don't hate it and whether there are many issues with it. And then over time, if we say, okay, we went through the entire 1.10 release process and the tests didn't totally flake out on it, we should try it.
B
We would have to probably create one. There are some that actually exist for priority and preemption; I think we can force the issue. I do believe I reviewed those changes... I do believe we can, but I don't know whether or not they're end-to-end tests. I think we could probably crib from the tests that are there and modify them to do the user story that we care about, which is that we want the API server, or the control plane in general, to not get blown away, broken, or killed.
A
And I think Tim might have answered my question, but if there are already tests that do this, does flipping the feature gate in kubeadm and running more e2e tests actually give us any additional coverage? I think the answer is no, unless we write specific tests for the scenario we care about, yeah.
C
Last time I talked to him, at least, they were hoping that it would make beta in 1.10, but yeah, I mean, it seems like they would. At least we had to start running the tests so we could... We don't have it in the admission chain right now. We had it for a short period in the 1.9 cycle, where we enabled it, as the plan was for it to be beta in 1.9, but then we removed it before actually starting to release 1.9 itself.
B
That's an okay statement to make, because to me there are probably a number of other things that we want to do. One of the things I want to do that's acceptable is triaging the backlog, because I know I've said that for many cycles but never had time to; I actually want to do that and am gonna try to commit to it this cycle. So it's okay for us to say that; we can just punt on that. But, threading back up to the original topic...
A
I guess, to bubble up the stack a little bit: we're talking about our plans for self-hosting. Right now self-hosting is not the default, and we are sort of dancing around the discussion of GA. I don't know if we should do that in this meeting or in the wider meeting on Tuesday, but one question is: should self-hosting as the default be a blocker for GA, or are we okay sort of going to GA as we are now, with self-hosting as an option with maybe some known shortfalls? I'd...
A
Not on single node. On single node we have a pretty decent upgrade story right now, and single node is where self-hosting is having a lot of problems design-wise. So if you have a single-node master that's self-hosted, it's actually maybe worse, from a keeping-your-cluster-running point of view, than not self-hosted, but...
A
I think that would be okay, because from the end-user experience it's still a functioning cluster, and we would just introduce it slowly, right? Like you'd say, you know, for this release we are switching the default for new clusters, but you can still opt out, and we're not gonna switch the default for existing clusters during upgrade. I mean, it's the same thing we were talking about doing when kubeadm was beta, right? I think we can walk that same path once we have higher confidence in self-hosting for single-node clusters, yeah.
C
So what I'm thinking about is: what does self-hosting gain us when not considering HA? And that's not much, to be honest. And as we spec out how, and if, kubeadm should do HA by itself... I think we should stay with what we have: go to GA with what we have, spec out an HA story as we go, and the day we're doing kubeadm HA out of the box, we should prepare to switch to self-hosting by default, but...
B
For a lot of folks, HA is a hard requirement, right? Whether by hook or by crook, HA is just the thing that we have to support for a lot of folks who are using kubeadm. I know that we have instructions for doing it outside of it, and that might be sufficient, I think, too. So yeah, if you don't have self-hosting, then it gets weird, right? The management gets weird, yeah.
C
Absolutely. So self-hosting is a definite requirement for HA for kubeadm, but HA for kubeadm is just gonna take time, as we discussed in Austin. And I'd rather go to GA with what we have, so that normal people that are fine with one master actually can use it for their boss. Right now we hear things like: well...
C
..."I would love to use kubeadm, but it's beta, so I can't; I'm fine with the features you have." So I think we're absolutely not far from there. I mean, we basically have all the things needed, all the bits needed, before going to GA; just a few minor things to fix up and to start incorporating, but otherwise...
A
We can also... I mean, like Kubernetes itself, you've got some features that are GA, some features that are alpha, and some features that are beta, and we can break kubeadm's feature set down and say these are the things that are GA and fully supported, and these are the things that aren't, and at least get enough of it to GA where it unblocks, you know, the set of people Lucas is talking about that are unable to use it until it's labeled as GA.
D
As a user, this is what I would expect. I would expect that at some point self-hosting would make it into the tool as an option that I could experiment with. I would also kind of presume that, as people start to experiment with the option, they'll surface other problems in the Kubernetes channel that could refine the development options.
D
So I wouldn't expect anything to flip over for the use cases that are currently working, that people are depending on in production. So, for instance, Kubicorn: all of its functionality is based around creating single-master kubeadm clusters, and we've been seeing increased usage.
C
Which is really cool, yeah. I think that it's an OK plan. And, I mean, at the very end of this cycle we flipped the switch of self-hosting back to alpha, not beta as we had planned, primarily because the checkpointing PR was so late in the game; I think it was on code freeze day that we got the LGTM, and then it didn't make sense, as we wouldn't ship any features that depended on self-hosting being beta anyway.
B
Maybe we should solicit feedback too, because we're kind of a microcosm of the universe, right? There's a much broader and bigger audience. Maybe getting feedback from either the list, or maybe even kubernetes-dev, maybe both, like the SIG Cluster Lifecycle list and kubernetes-dev, and be like: hey, we're thinking about going to GA here; here's our list of things that we will block GA for, and here's the list of things that we will still be working towards over time, yeah.
A
I mean, we solicited some of that feedback at the contributor summit, and people have an enormous laundry list of things they want, but I don't think they're being precise about whether those should block GA or if they're just things they want in the future, right? Because if we wait for everything that people want, you know, we're never gonna get there. Yep, well...
A
I think that's what we should do: we should try to enumerate what we think is sort of the minimal set, and then we should write that up and publish it and say this is our plan, here's all we think is gonna block, here's our expected timeline; like, maybe we can make it in 1.10, or maybe we think it'll slip to 1.11. And then we should publicize that in our SIG and also on the dev list and see if anybody has any pushback, yeah.
C
So that's basically what I'm trying to do with my roadmap doc. I mean, it's super rough right now; I literally just typed things while flying back to Europe, typing on the airplane until my laptop ran out of battery. And yeah, I think the feedback that you gave, that graduating kubeadm phases to top-level, for example, stays beta and doesn't block, is a reasonable thing to do. I mean, we've had it for two, three releases by then, sure.
A
If you're saying it's gonna be beta, then that should not block our GA, right? I guess that's what I'm trying... I'm trying to be precise about really trimming the list down and making sure that if we have to drop things, we drop things that don't block GA. Yeah, I pasted your list into the meeting notes for this group, if people have that up; I'll read through them. The first one is self-hosting with certificates mounted using hostPath, to beta. That comes back to whether self-hosting should be a blocker. I...
C
It's... I would really like to be able to say that here is kubeadm, a GA tool that has two options: one of which is static pod hosting, which is fully GA and is the default; and one is self-hosting, which is also proven to work and has test cycles behind it, but is beta level, and we expect to graduate it to GA later.
A
Yeah, I guess that's... so I split the early meeting notes into two sections, so we've got "blocking GA" and "nice to have in 1.10". So self-hosting with certificates mounted using hostPath, to beta, I put as a nice-to-have; graduating kubeadm phases and kubeadm config by default I put as nice-to-haves also. So for the things that are left: we have dynamic kubelet configuration to beta. Do we think that should block GA? Yes...
C
Yeah. So, basically, right now the kubelet starts up and takes the bootstrap kubeconfig that kubeadm has generated for it. It generates a private key locally, sends a CSR with the public key to the server, gets that certificate signed and returned, and now has a unique identity to use. But it totally... with the right cluster CA... but it totally self-signs everything that it uses for the kubelet API server, which matters when you do kubectl logs and kubectl exec...
C
Virtually everyone else seems to be fine with that, as everybody's doing it, but I talked to Jordan and Mike during the dev summit, and we came up with a plan where the kubelet would post its node object, and the API server would recognize the IPs and hostnames it reports as something like "unverified" or "self-reported" or whatever. Then, depending on whether you're running...
C
That would happen in any case. Depending on whether you're running in a cloud, or with a central list of what nodes to trust, or in a bare-metal environment where you might not have such a table, you would have different policies. The default for kubeadm would be to trust the self-reported IPs, and then for cloud environments the cloud controller manager would kick in and say: okay, this node's IP address is verified.
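The different trust policies described here can be sketched as a small decision function. The policy names and the helper itself are hypothetical; the transcript only outlines the idea (trust self-reported addresses by default, or check them against a central allowlist that a cloud controller manager maintains).

```python
def node_addresses_trusted(reported_addresses, policy, allowlist=None):
    """Toy model of the node address verification policies discussed above.

    'self-reported': trust whatever the kubelet posted (the sketched
    kubeadm default); 'allowlist': require every reported address to
    appear in a central table of known nodes (the cloud case).
    """
    if policy == "self-reported":
        return True
    if policy == "allowlist":
        return all(a in (allowlist or set()) for a in reported_addresses)
    raise ValueError("unknown policy: %s" % policy)

# A cloud controller manager would consult its inventory before marking
# the node's addresses as verified.
decision = node_addresses_trusted(["10.0.0.7"], "allowlist", {"10.0.0.7", "10.0.0.8"})
```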
C
So if you're okay with the kubelet reporting its own addresses, the CSR signer will go ahead and sign the serving certs for the kubelets, and the kubelet serving its API will have the same cluster CA as everything else. So then we have one uniform CA all around and can, from the master side, trust this kubelet when talking...
C
Another thing is that when a person calls the kubelet API, the default in 1.5 was allow-all. So if I had a 1.5 kubelet, I could just curl and exec into any pod on that kubelet, basically, or get logs from any pod on the kubelet, with no authorization or anything. What the kubelet is doing now is saying: okay, I received a call; I'm posting a SubjectAccessReview request to the API server, which then returns whether this identity is authorized to access the kubelet.
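The flow just described posts a SubjectAccessReview to the API server. A minimal sketch of building such a request body in the `authorization.k8s.io/v1` shape; the user, groups, and subresource values are illustrative, not taken from the discussion:

```python
import json

def kubelet_subject_access_review(user, groups, verb, subresource):
    """Build the body the kubelet would POST to the API server to ask
    whether `user` may perform `verb` on this node's `subresource`
    (e.g. "proxy" or "log"). Field values here are illustrative."""
    return {
        "apiVersion": "authorization.k8s.io/v1",
        "kind": "SubjectAccessReview",
        "spec": {
            "user": user,
            "groups": groups,
            "resourceAttributes": {
                "verb": verb,
                "resource": "nodes",
                "subresource": subresource,
            },
        },
    }

review = kubelet_subject_access_review("jane", ["system:authenticated"], "get", "log")
body = json.dumps(review)
```

The API server evaluates the review against its configured authorizers and returns an allowed/denied status, which is what lets the kubelet delegate its authorization decisions instead of allowing everything.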
A
Okay, so that's about client authentication and not about server authentication? I assume that issue would cover both when they said we'd fix it, because if you only have one half of the authentication story, it doesn't really secure the communications, right? You can still man-in-the-middle the connection easily, yeah.
C
Well, it's basically pretty trivial to man-in-the-middle the network traffic between the API server and the kubelet; you can pretend to be a pod or whatever, and then the user will type in whatever, kubectl exec, and you can do what you like, or something like that. So yeah, it would be an extra security feature, for sure, and it's basically... I hope... this thing, I mean, the code is not trivial.
C
Of course it's security, but it's not that hard, code-wise, to implement, as we've now specced out a plan to do it. From what I understand, the person that has been responsible for this at Google left, or something like that, so we have to find a new owner, which could be Jordan, from what I... well, that was my interpretation when we talked, but officially nobody has signed up yet. I would like to list it as a blocker for GA preliminarily and then see; I don't wanna...
C
That means that if you're a hacker, you could easily go ahead and issue tens of thousands of requests for this config map, and it will essentially drain that per-second limit. So your cluster might only get ten requests per second through, because the API server is always busy just serving the config maps to the hacker. So it's not a security exposure or whatever, but it's basically a DoS vector, yeah.
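The effect being described is a shared rate limit being drained by an unauthenticated flood. A toy token-bucket limiter makes the starvation visible; this is a self-contained sketch, not the API server's actual limiter.

```python
class TokenBucket:
    """Minimal token bucket: `rate` tokens refill per second, up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, then try to spend one token.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=10, capacity=10)
# An attacker burst at t=0 drains the bucket...
attacker = [bucket.allow(0.0) for _ in range(10)]
# ...so a legitimate request at the same instant is refused.
legit_refused = bucket.allow(0.0)
```

Because the bucket is shared, whoever issues requests fastest consumes the budget, which is exactly why an anonymous caller hammering a config-map endpoint can starve everyone else.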
A
The limiter... this is a DoS vector for someone who doesn't have client creds, though, right? Yeah, if you just have the IP or name of the cluster endpoint, you can DoS it, which is worse than somebody who's authorized to launch a job being able to DoS the cluster. I mean, you can mitigate it by putting your cluster IP behind a firewall; that's the other answer, and it's what everyone should do.
C
You know, what Joe said spontaneously was something like: let's restrict this to a certain IP range that is considered internal. I don't know if he was thinking about all the reserved subnets for internal network communication, but I think he was, and then do some rate limiting on everything else. But I don't...
C
I could just hit the etcd endpoints and schedule my pod. So we do want to scope this down by generating... like, setting etcd up with some straightforward certificates, maybe even just signed for localhost, maybe the public IP as well, and then storing these certs where we store all the other certs, which are owned by root. That would mitigate the issue and would also preserve some forward compatibility; it would make it easier to interact with a potential HA cluster in the future.
C
I mean, it's straightforward to implement in the sense that you have to generate two or three cert sets, maybe four, I don't know yet, and then just add these arguments to the etcd static pod. And in the upgrade case, when upgrading from 1.9 to 1.10, just generate the certs and you'll be fine. So for 1.10 I think I have people, or folks, that could do that. Lee, do you wanna help? I...
C
That is an interesting discussion to have, I guess, but let's start with using the kubeadm CA for simplicity, and then maybe change it during this cycle. So it basically just generates serving certs for etcd and peer certs, adds the flags and the hostPath mounts, and etcd would still listen on localhost as we do right now.
B
I think a lot of people won't care for smaller clusters, but a lot of people absolutely care for larger environments, because they will want to enable audit logging and have all the jiggery there. So I think optionally enabling it and having it there would be a nice-to-have, but I don't think it's strictly required for the primary user story for kubeadm.
C
So when we have another kubelet configuration, we can stop putting all the kubelet's arguments in the kubelet drop-in file in the kubeadm deb package, which right now is not optimal. And that means we don't have to do some of the ugly hacks when upgrading that we have to do right now, but...
C
...redirect the configuration to use the 1.11 one, or whatever. So the current node upgrade story is basically a script: apt-get upgrade, which will stop the kubelet service, download the new binary, and restart it. But then, as we have no running daemon inside of kubeadm clusters, we have no way to actually... so on a node object there's a config map reference, basically pointing to the desired configuration, and we have no way to actually repoint this to the...
C
...1.11 config map, which we recently generated. So we'd probably have to say: either run this apt-get upgrade and then run this kubeadm command, or run this apt-get upgrade and do a kubeadm upgrade of the kubelet to repoint the configuration, or whatever wrapper command we might add. But there's one extra step we have to do there. If we want to get fancy sometime in the future, we can always do some kind of job that schedules on one node, goes and execs out into the host name...
C
...space, goes and runs whatever package manager thing you have, then updates this reference, goes to the next node, etcetera, as we discussed early in the summer as well. But that is not a high-priority thing. I think most people are fine with running two commands, like in the getting-started docs: you, Kubernetes user, I upgraded the cluster; I ask you to run two commands per node to upgrade them as well when you're ready. And in the automated case that won't be a problem either, so yeah, I think it should be okay.
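The "two commands per node" story above can be summarized as a tiny helper that returns the per-node steps: upgrade the kubelet package, then repoint the node's config-map reference. The second command is a hypothetical placeholder, since the wrapper command had not been named yet in this discussion.

```python
def node_upgrade_commands(target_version):
    """Return the two per-node upgrade steps sketched in the discussion.
    The kubeadm subcommand and flag below are placeholders for whatever
    wrapper command kubeadm ends up adding."""
    return [
        "apt-get update && apt-get upgrade -y kubelet",
        "kubeadm upgrade node --kubelet-version %s" % target_version,  # hypothetical
    ]

cmds = node_upgrade_commands("v1.11.0")
```

A cluster-wide automated upgrade would simply run these two steps node by node, which is the "fancy job" variant mentioned above.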
A
Okay, so that was sort of Lucas's list of things he thought should block GA. Do other people have things we want to add to the list before we write this up in a more formal form? I mean, we can always consider adding things later, but it would be nice to be able to discuss them in this forum first, if people know offhand what they'd like to put in, I think.
C
...is gonna be off for a long time; I've given up. We're basically waiting for component config. Now, in 1.9, we have started to use the kubelet's component config and kube-proxy's component config mode in alpha, but anyway, that is reasonable, I guess, and we'll just wait for all these component configs to get to beta. We'll wait for the API server and controller manager to actually start creating a group, moving it out into a separate API group, and then render everything as beta.
A
Yeah, I think with a lot of the stuff that we would put in there, we want to let other component owners drive their component config. There's still gonna be, you know, a decent amount of kubeadm-specific stuff, like certificates and how you manage those inside of your cluster, that we'll still want to have in there, but I'm hoping that the list shrinks rather drastically via the component config effort.
A
...seems nice to have, I think. It'd be nice to have just a specific doc for kubeadm GA, and we can link to that from the roadmap and say one of the things in 2018 is kubeadm to GA, and just link to the other doc, because then we can circulate just the shorter thing around. Maybe people are...
B
...happy. I'm going to be out till next... you know, starting Saturday I'm gonna be out through the end of the year, so I don't want to commit to writing it up, because I have to do some close-out stuff on my side this week and make sure everything else can free-run for a while. So I won't... I don't have cycles, really, right now, if...
A
So I propose we cancel this meeting both next week and the following week, the 20th and the 27th, and we'll reconvene on the 3rd. I had something in the agenda yesterday we didn't get to, which is whether we should change this to office hours, and I think we should talk about that in the new year also, yeah.
A
Right. Well, we'll have to do a yeoman's job to cover what you've been working on, but we are a couple minutes over, so I'm gonna call it here. Thanks, everyone, for coming. We'll talk to you next week at the regular SIG meeting, or in an hour at the Cluster API breakout, or in three weeks at the next one of these. Have a good day. Cool, bye.