From YouTube: 20180411 SIG Cluster Lifecycle kubeadm office hours
A
Hello folks, today is April 11th, 2018. This is the kubeadm office hours, and we have a couple of items to discuss. The first one is to try and give a TL;DR update on the state of the upgrade issues. I know there are two parts to this: one is the configuration file part, and the other is the ordering of restarts because of the cert modifications that were done in 1.10. Jason, do you want to give a quick update of the state? And Liz, feel free to chime in on any questions, comments, or concerns.
B
Yeah, so the PR to fix the race condition on checking pod status is ready to go. It's just pending an additional PR that unbreaks the upgrade. Speaking of which, I've started work on a PR for using the kubelet API for checking the pod status, and then further using the etcd client to check the etcd pod status.
B
Specifically because one thing that I found when using the kubelet API versus checking on the pod is that the kubelet API shows the pod changed as soon as it picks up the new static manifest, and it does not update pod status on the kubelet's pods endpoint. So we have no way of testing to make sure that the new pod is actually up and running unless we query containers, and that gets into some of the issues around querying the CRI or mucking with the stats endpoint on the kubelet.
B
Like a health probe, basically. But what I'm doing with the other static pods is waiting for the update on the static pod with the kubelet API, and then I'm also waiting for the mirror pod update to come through to verify that the pod has been deployed. It's not ideal, but it seems to get the job done, I think.
A
So we need to talk with the SIG Node folks about that too, and the reason why I say that is because it's subject to change, and it has changed and drifted and been moved. So how we're going to do the upgrade testing is going to be super important, right, as we get into the next release cycle. You know, I've mentioned setting up a test apparatus, and the CNCF folks came around as well, but there's no good way of testing this without it being a true end-to-end type of test.
A
It's a conundrum, though, because no one wants to maintain it. Somebody's gonna have to pay the tech debt to at least patch how the kubernetes-anywhere upgrade path works. I know the problem, I know why it's doing what it's doing, and I know why it's out of date. I suppose we could probably fall on the sword there, but it's kind of like we're just gonna go in there and try to band-aid it so we can get past this point.
A
It was a conflation of concerns for sure, but it forced checking of multiple things within one suite, right. So it basically validated that we are actually publishing the cross-build products as well as consuming the cross-build products as part of the upgrade test, and we can eliminate the cross-build product issue because it was not actually working. We weren't actually publishing them.
A
So if you actually look at it, there are multiple pieces of output that are put there, and one of the build artifacts to push some of the cross-build products no longer was even working. But I guess I'm gonna be full-time on this stuff, so I'll fall on the sword there. I feel like if we don't do this we'll be in this problem again, and I ideally wanted to have Cluster API in place.
C
It's a little rigid right now about what is in scope. So, for instance, you do not have the kubeadm config in a lot of the upgrade functions, and it gets a little tricky. Basically, some code needs to be restructured, but as far as adding an ensure-TLS phase for these static pods, we should be able to just add a phase for upgrade, or under upgrade, in the CLI and then call it directly from the upgrade path as well, so from the main function.
C
So it should be callable independently, and then it should be idempotent, so you should also be able to call it from the upgrade path. Does that make sense? I think so. So from the main upgrade function you can import the phases and then call it. The problem with this, though — I was chatting with Tim a little bit; if you scroll up in the sig-cluster-lifecycle channel, we have an issue to think about a little bit, because ensure TLS is going to have a — since it needs to be...
C
This code only needs to work for upgrades from 1.x to 1.10, because everything else will be TLS by default in the future. But we have to think about the lifespan of this ensure-TLS phase, because it needs to be able to look at the old pod manifests, say "oh, there's no TLS implemented in these flags," and then add the flags if it thinks that it should.
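The mutator phase being discussed could look something like the sketch below: inspect an old static pod's command line and append the TLS flags only when they are absent, which also makes the phase idempotent. The flag names mirror etcd's real options, but the function itself and its paths are illustrative assumptions, not kubeadm's actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// ensureTLSFlags appends the etcd TLS flags to a static pod's command
// if they are not already present. Running it twice is a no-op, which
// is what lets the same phase be called from the CLI and from the
// upgrade path.
func ensureTLSFlags(command []string) []string {
	required := map[string]string{
		"--cert-file":        "/etc/kubernetes/pki/etcd/server.crt",
		"--key-file":         "/etc/kubernetes/pki/etcd/server.key",
		"--client-cert-auth": "true",
	}
	present := map[string]bool{}
	for _, arg := range command {
		// Flags may appear as "--name=value"; compare on the name only.
		name := strings.SplitN(arg, "=", 2)[0]
		present[name] = true
	}
	out := append([]string{}, command...)
	for flag, value := range required {
		if !present[flag] {
			out = append(out, flag+"="+value)
		}
	}
	return out
}

func main() {
	old := []string{"etcd", "--data-dir=/var/lib/etcd"}
	upgraded := ensureTLSFlags(old)
	fmt.Println(len(upgraded), len(ensureTLSFlags(upgraded)))
}
```

Per the later point in the discussion, a real version of this would also want to fail loudly if conflicting flags are already set by the user (e.g. via extraArgs) rather than silently appending.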
B
In general, I think this is a problem that we're gonna encounter in the future anyway, because one of the things that we're gonna have to look out for as kubeadm gets more real-world use is that users are going to want to modify the running config as we go along. So the assumption that we can just drop a static manifest for some of these things is going to go away over time, I think. So at some point I think we're gonna have to look at being able to mutate the static pod configs.
A
So I think I got the gist, if my McDonald's drive-thru history has proved of any value. I think what he said was whether or not we are going to rely on the config map when doing the upgrade process, which I think we should. I mean, honestly, I think anybody doing things out of band is subject to — well...
A
You have made your choices, you have made your bed, and everything requires you to go through the command-line options directly, or to edit the config file manifest and do some type of reapply. This gets into the other bit that we were talking about, which was reconfig. We don't actually have that as a phase — we have upgrade and init — but we might eventually want a reconfig, where you modify the config itself, and then reconfig would have to check state and rectify anything.
A
Modifications should be a rarity, right, for most use cases. The whole idea is that it should be so easy that we consider clusters to be ephemeral, because in the ideal world we don't want to get into the notion of even having clusters as pets. We just want to make it so stupid easy that anybody can do it.
A
You know, it's like Oprah for clusters, right — just make it so easy that you can stand up a cluster, and make it easy to snapshot and restore across clusters. The idea of trying to maintain cohesiveness across many, many versions is a recipe for disaster in cluster management. History has taught me that this is a very bad thing: people treat their clusters like pets, and it becomes this Bondo-and-duct-tape ball of soup that no one understands over a period of time, right. So what happens is you go into a place...
A
Some initial person stood it up. Two years later, some new person came on board; they had their own ideas, but then they left halfway through implementing them. Then the third person comes on and they're like, "what the hell is going on? Burn it all to the ground," right. And this happens all the time in configuration management, yeah.
C
On that line, kind of on the same concern as what Jason just mentioned: I think, for a specific mutator function like ensuring TLS flags on the etcd server and then on the API server, we would want to have it probably fail critically and warn the user if the flags in question to be applied are actually present in the extraArgs map or something, yeah.
C
Yeah, so in this case, to support this upgrade, the mutator function needs to happen before the static pod declarations are made. So basically what the mutator function is for is to resolve an incompatibility between the old declaration and the new declaration. We will need this kind of state resolution if we want to make a minimal change to the code, is I guess what I'm getting at. So we would keep the declarative model, but then we would have these mutator functions.
A
I would agree with that, I think. But I think not having consistency across the upgrade path, to just make it as simple as possible to unblock folks, is totally fine. And again, I want to get to the notion that part of the workflow for doing any type of upgrade should be: snapshot your cluster, right, before you even do the upgrade, and then, if you want, sandbox it and create a blue-green deployment for you to do your upgrade.
C
Yeah, just to kind of highlight some of what Jason's laying out: basically the whole reason that we have this issue with the etcd TLS upgrade is because we are depending on the API server in order to determine if a component that we are upgrading restarts and is running, which is really kind of gross when you consider that you're updating two components that are necessary for the API server to function — one being etcd, without which the API server doesn't work, and the other being the API server itself.
C
This is where the biggest red flag was when Jason and I sat down to talk: we're using the wrong abstraction layer, the API, to determine if our processes are even running. We should use something else that's not dependent on that part of the stack, I think.
A
We can also push back on the SIG Node folks, too, if there are pieces that are missing from the kubelet API that we think should be there. We have SIG Node and we can help drive that forward if we need to, because, basically, I don't think there's any owner, and I don't think there's anyone who'd be reticent about enabling some features for the kubelet API, yeah.
A
I don't think it would be a problem if the kubelet API was fully formed, right. If you had something you could rely on as a generic static pod manifest starter — "new XYZ," right — that gives me all the same checks and guarantees that I get with the API server, which is probably too much, right, then it would be totally sufficient for what we needed to do here.
C
It would really simplify kubeadm as well, because there's a lot of code where we've basically written a formal API client for interacting with static pods on the file system, when really that's something that I feel should be vendored from something that's supported and versioned in the kubelet.
A
For the purposes of time, because we have other things to go through: let's open up an issue in the kubeadm repo to discuss in detail the kind of ideal state of where we would like to go. In the meantime, we still need to Bondo our release out, and I like the idea of the upgrade certs in the upgrade path — just get it out there, get it capable and working, and then in 1.11 we'll have to address the upgrade scenario cleanly, right. Also in 1.11 we can...
A
We can push on the kubelet API and figure out how far we can get it, right. I have all the right people to talk to; I just need to let them know and have a well-defined problem statement, so that I can say, "hey, we're gonna try and address this one thing," so they have PR bandwidth to review it and we can gain acks, right. Does that seem like a reasonable approach?
C
It makes me feel icky because we're splitting up the feature, but so this doesn't affect our current timeline for kubeadm GA?
A
We do not have a formal timeline for kubeadm GA; we're dependent upon other features of the system being promoted in order for us to actually get to GA status. The one work item that was on our plate specifically was, you know, security, so as long as we check those boxes — and when we check those boxes, I think we can get most of it done in 1.11, right. This part is what we were just talking about, so that seems reasonable. The other ones are gonna be totally dependent on...
C
Then, just to be clear: to reduce the amount of work that we need to do in response to this issue, and to prevent any further complexity in the upgrade process, I think it sounds like we're going to veto an ensure-TLS pod mutator phase for 1.10 — yes, for 1.10, yeah — and whether or not we do that in 1.11 is a different design decision, but it doesn't sound like it would be required.
E
It was being a little weird, so I just want to say hey real quick: I'm gonna start attending the SIG Cluster Lifecycle meetings and these community meetings. I got a heads up that self-hosting was changing a bit. I'm one of the maintainers on bootkube, so I mentioned how we self-host and how that contrasts with how kubeadm does it, and I'm gonna be taking two angles on trying to push this through.
E
One is prototyping bootkube where, instead of using self-hosted secrets, we actually bind-mount the secrets from the host and try to use the built-in kubelet checkpointing. And then I'm also going to be taking a look at restarting our efforts to possibly add some sort of pod-spec checkpointing into the kubelet — whether or not kubeadm would ever use that is potentially a separate question. But anyway, that's kind of the context for why I'm showing up, and I'll be paying very close attention to your comments.
A
Liz, do you want to go? We'll talk about self-hosting a little bit too. I just wanna make sure we get through — I want to give pause for other folks to chat about stuff too, 'cause it's meant to be a cohesive thing, not just me monologuing about issues. Liz, do you want to give the TL;DR for other folks who may not have been caught up on the config file problem and kind of the exit strategy, as well as the possibility that we're gonna have to create a KEP for long-term beta support, yeah?
F
Can you all hear me okay? So the basic issue is that in the kubeadm MasterConfiguration we rely on a bunch of other structs, and those structs are also alpha, and one of them changed from underneath us in a way that was not backwards compatible. So what that means is that the configuration serialized by kubeadm init on 1.9 does not deserialize in 1.10. The stopgap solution that we came up with is we take the YAML...
F
We deserialize it into just a map, mutate it such that it is compatible with the new schema for the configuration struct, and then we serialize it back to JSON and have it decoded in the normal API machinery fashion. There's a PR in progress for it; I've gotten a lot of really good comments on it. Hopefully that can get merged in the next couple of days. It will need to be backported to...
F
Fabrizio made a very good point about the strategy that we're using right now: just sort of blindly applying this update is not going to work forever. We need to come up with a more formal, rigorous way of applying these updates. I suspect that this sort of thing is going to happen again.
F
I think we didn't notice it this time because of some of the upgrade test bugs that Jason was talking about, but even if we had noticed it, it's probable it's going to happen again. So far the best option that we have right now is properly versioning the API, and using the conversion logic between the internal API representation and the external API representation that we're doing as part of the upgrade configuration check phase. So I think that's the long-term solution.
A
Could you create a KEP for this one? This is definitely KEP-worthy, to have a long-term strategy. Depending upon what the structs are and where they are, and knowing the entanglements — because I have not dug in to know the details of which pieces of the puzzle are alpha and which are not — it depends upon whether or not we want to have an extra layer of indirection to be able to support these things. I think this all goes into a proposal; I guess there's no way for me to really hot-take on it.
F
I mean, you can see how people arrive at this. They see this giant 20-, 30-plus-field struct and they're like, "I really want to copy all of this wholesale." And even if we had done so, even if we'd made our own internal representation, this migration logic would still have to live somewhere for us, interfacing with the system that needed the kube-proxy config dropped in.
F
So this is just one of those things. I've been thinking about it in terms of database migrations — like a series of transforms that you apply one by one — but that may not be the most sensible way to think about this. This is all going to go into the KEP that I'm going to write, as this is going to happen again.
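The database-migration framing can be made concrete with a short sketch: each transform upgrades the untyped config one schema version at a time, and upgrading from any old version just applies the remaining transforms in order. The version numbers and field names here are invented for illustration; this is one possible shape for the KEP's idea, not an agreed design.

```go
package main

import "fmt"

// A transform upgrades a config map by exactly one schema version.
type transform func(map[string]interface{}) map[string]interface{}

// migrations is an ordered chain; migrations[0] upgrades v1 to v2, etc.
var migrations = []transform{
	// v1 -> v2: rename a field.
	func(m map[string]interface{}) map[string]interface{} {
		if v, ok := m["cloudProvider"]; ok {
			m["cloud"] = v
			delete(m, "cloudProvider")
		}
		m["version"] = 2
		return m
	},
	// v2 -> v3: introduce a default for a new field.
	func(m map[string]interface{}) map[string]interface{} {
		if _, ok := m["etcdTLS"]; !ok {
			m["etcdTLS"] = true
		}
		m["version"] = 3
		return m
	},
}

// migrate applies every transform from the config's current version up
// to the latest, one by one.
func migrate(m map[string]interface{}, fromVersion int) map[string]interface{} {
	for _, t := range migrations[fromVersion-1:] {
		m = t(m)
	}
	return m
}

func main() {
	cfg := map[string]interface{}{"version": 1, "cloudProvider": "aws"}
	fmt.Println(migrate(cfg, 1))
}
```

The advantage over a single blind mutation is that each version gap gets its own small, testable step, so the next incompatible change only adds one more transform to the chain.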
D
I have a concern about the fix that is going on, because now it could mean we support, even using the 1.10 release, upgrading a cluster from 1.9.x to 1.9.y, and what I'm concerned about is the following part: if we read the config map and we fix the format of the feature-gates flag, then what happens when we save back the proxy config?
A
Part of the bootstrapping process itself is: you have a static manifest and then you pivot, right. If you had a pod that just wraps that behavior on startup and basically says, "check via the kubelet API: am I running, yes or no" — if I'm already running, then don't start anything; if I'm not running, then start something — it just continually checks in a loop, right, which basically will seed a static-manifest API server.
A
Its only purpose in life is to basically allow everything else to come back up, and then, once it recognizes that this host has another API server trying to execute on it, it just kills that static manifest. So you basically have a pod that is a sentinel, living as a static manifest, determining from the kubelet API whether or not it should spawn itself. And by doing that, and just by using host mounts for secrets and simplifying the problem, it eliminates the whole class of issues that we've had, right.
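The sentinel's decision loop reduces to a small state check, which can be sketched as below. The action names and the two boolean inputs are illustrative; in practice both would come from querying the kubelet API for running pods on the host.

```go
package main

import "fmt"

// sentinelAction decides what the sentinel static pod should do, given
// whether another (self-hosted) API server is running on this host and
// whether the fallback static-manifest API server is currently up.
func sentinelAction(otherAPIServerRunning, fallbackRunning bool) string {
	switch {
	case otherAPIServerRunning && fallbackRunning:
		// The self-hosted API server recovered; get out of the way.
		return "stop-fallback"
	case !otherAPIServerRunning && !fallbackRunning:
		// Nothing is serving; bring up the static-manifest copy so the
		// rest of the self-hosted control plane can come back.
		return "start-fallback"
	default:
		// Steady state either way; keep polling.
		return "wait"
	}
}

func main() {
	fmt.Println(sentinelAction(false, false)) // start-fallback
	fmt.Println(sentinelAction(true, true))   // stop-fallback
}
```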
C
Quick question, then. One of the two things about the self-hosted control plane is that you can update the pod specs and have the update propagate out with, you know, no downtime — yep — so I'm curious if you would need something to synchronize those updates with the sentinel pod so that it would be able to do that, yeah.
A
I mean, there's a couple of clever ways I've thought about doing this; we could deal with it. What I plan on doing is opening an issue, creating a KEP, and talking about it in the KEP, which is the proper way to do this, instead of grinding through all the logistics in our heads. But you could either do the transformation or the inverse transformation, being able to say this version of it...
A
Like, where does the scheduler get brought back up? The scheduler gets brought back up because in a self-hosted environment all you need is the API server and the kubelets; because the components are self-hosted by the kubelet, the kubelet can rectify the state. This is one of the fun conversations we had a long time ago: the kubelet, as long as it can talk to the API server, can get the bound pod manifests that are for that machine, and the masters are pre-configured.
A
I don't know where you go with that, but usually, when you set it up the way kubeadm works, you're gonna have to do a join for the master. Then — I don't know, Fabrizio — I don't even really want to entertain the idea of changing a node from a worker to a master. The idea of immutability would say: you have dedicated these nodes for these things, and I think sticking to the immutability constraint allows us to put blinders on and not care, yeah.
A
They shouldn't matter, so long as the pods are still bound to the host. What will happen is the scheduler and controller manager will be brought back online, and eventual consistency will be reached. Those nodes should come back online so long as the kubelet can talk to the API server, because the pods are bound and they've already been scheduled, right. The daemon set doesn't need to reschedule itself; they've already been scheduled the first time, when the control plane is laid down.
A
It would start as a bound pod. It still was originally a daemon set; when it's deployed, it's been bound to that host already, right, so it doesn't need to be rescheduled. It's bound. The only thing it needs is for the API server to come online first, so the kubelet, when it checks in with the API server, says, "what are my bound pods? Oh, I have a scheduler and controller manager; I start those bound pods." Oh yeah.
D
What I'm doing is breaking down the prototype into a set of PRs. What will be implemented will be a little bit different from the original, because now there is the control-plane address flag, and we are using this flag to give a stable IP address, that is, the load-balancer address. So the result will be a little bit different, but in the end it opens up some interesting scenarios, like for instance a cluster where we have different advertise addresses but one load-balancer address.
A
A lot more thought needs to go in — I need to think about phases a lot more, and how we're gonna promote that stuff, and I think other folks do too. I want to be careful in this release cycle: we have a lot of stuff going on, which is good, but at the same time I want to make sure we bound the scope of all of the changes to a set that is consumable and doesn't cause too much churn within a given release cycle, if that's a fair statement.