From YouTube: Kubernetes SIG Cluster Lifecycle 20171024
D
I have a question about how we can release the man pages; Robbie, you commented on the doc about this. Maybe it's interesting, so: basically we're not generating man pages and releasing them for the other Kubernetes components, but that's not really a deliberate choice, it's just that nobody has done it so far. So my question is: what do we need to do? Do we need to sync the release? Do we need to create tooling to generate the man docs automatically, and what does that look like, basically?
E
Okay, there's already tooling in the main repository to build man pages, and it's done; they're all built, and we released that, like, eons ago. The only thing that would need to be changed would be the spec updates to pull in the man pages from the build artifacts, and whatever the deb packaging needs as well.
E
So it's picked up that way, and then in the 1.10 timeframe the release transition should occur so that the Bazel builds are online. Then every build that we build from mainline will be a Bazel build, and all the artifacts that are published as part of that build should be pushed through the mainline. So I'm not planning to adapt the debs and the specs and everything else that's over there anytime soon; my plan is to only invest in Bazel, because that's the new world, and the build artifacts that are there.
A
Yeah, so at least, well, 9:00 p.m., I mean 9:00 a.m. PT, and on what platform is this meeting? Also tomorrow? What we said was that we now have a unified proposal, thanks to Fabrizio; myself and Tim had written earlier documents, and so had other people in the community, and now we hopefully have one grand unified thing. We could check it into the kubeadm repo; I think that's the plan, right?
A
It is a proposal in core, like, in the community repo, instead of having it as a kubeadm-only design doc, and then, well, Fabrizio has written out the UX here. So basically: kubeadm init as usual; kubeadm join with a master flag, giving it a master token, for adding a second master; and then kubeadm join for nodes as usual. As usual it will use a self-hosted control plane.
A
So that's one thing. Also, Jordan is working on the node authorizer, together with Mike, and basically it should be improved to not allow a node to add arbitrary taints and labels to itself. Yeah, thanks Matt, please add the link to the meeting notes as well. So a node couldn't just say, somehow, "I want to be a master", label and taint itself, and automatically get access to the shared keys in the cluster.
A
So, such things. But still, after discussing it for a long time, we agreed that the user experience is going to be more of a benefit; like, having this nice user experience for the users is better than having a fully locked-down, secure cluster anyway. I mean, with the current kubeadm flow, we all know that having this token for adding a new node isn't the most secure possible way we could do it, nor is registering the master in the cluster.
A
The most secure way... well, anyone that can schedule onto the master basically has access, if it can get at the host's file system, for example. So instead we'll go this route: it's only going to be as secure as we can make it, but at least it's going to be a really nice user experience where you can dynamically add masters as you like. Comment if there's something I missed.
A
Yeah, so actually I don't know if it's outlined in the proposal, but what's in my head is that we could run kubeadm init concurrently for all three of these masters; we'd have to use external etcd. Well, obviously, that's the coordinating point, right. And also, well, if you distribute the certificates so they are the same before you do this, everything's fine, and I think for this use case that's the first trade-off, right. You generate, like in the case of kops or whatever such solution... You could generate the certificates and put them somewhere, like in a bucket or wherever you feel is secure, then distribute them to all the masters with whatever technique, and then boot kubeadm with external etcd. Well, I haven't tested it, but it should work, and if it doesn't work we'll have to fix it.
E
I think, to be clear: just because we have an operator in place doesn't mean that we will prevent any other means by which you can connect to your etcd cluster. This is just to make it an ease-of-use condition for the user story of an out-of-the-box experience, of not having to set up anything else externally, right.
F
The same forever, indefinitely, right? Because then you can never add new features as defaults. I think the question is: how do you inform users that things are changing in the new version they're running, or across upgrades, in a reasonable way? And I don't know if release notes might not be enough; we might need to build something into the program to say, like, "by the way, defaults are different", or that they differ across upgrades.
F
I think we do need to build that into the program, but there's a camp that says never change default behavior, because we want to build upgrades right. Like, we could say, if the default was to install etcd2, say, and we can never change default behavior, then you're stuck with etcd v2 forever, even like...
G
Your example: we really did mess up in the API server, right, where we changed the default and it was a disaster, and we agreed never to do that again. And what we've said is that the flag becomes mandatory if you expect people to change it, right, and eventually it can be deprecated. We put warnings on etcd2, and we eventually deprecated it.
G
We remove it at that point. But you can make the flag mandatory if you want to change the behavior, and what we did with the etcd behavior change in the API server was agreed to be an error and a violation of our principles. But we can certainly make flags mandatory to allow discovery, and we can put warnings on them to allow discovery, then delete them. Well, if kubeadm is going to have the same guarantees as Kubernetes' other CLI components, then it should not change defaults, well...
E
This... your statement doesn't jibe with the rest of the program; there are changes in defaults that occur all across the system components, not just etcd. Like, if you look at the kubelet, the kubelet changes defaults all the time, and this is no different for any release, but it happens in a well-defined process.
E
What's an example of that? Because, like, let's see if they didn't follow the rules. We can go with the swap check: the swap check killed everybody. It was on by default in 1.8, and that killed every single person who was running the kubelet, right? Like, how many issues are open for that thing? I'd say at least six. I don't know how many issues we got in kubeadm, but there are a hundred times as many. Exactly.
E
This is a meta-problem that kind of goes beyond the scope. I think what we can do is, I can take this forward, I think, to both SIG Architecture and possibly the steering committee, to evaluate processes that we might want to refine, right, or make them more explicit. But I think that's kind of beyond the scope of this working group, at least at this point in time, other than that we know we need to do a better job.
E
I have a question: what's the time horizon for maturity of the cluster API, and are your folks going to commit to it and deliver on it? Because I'm interested, and I'd love to test it, but at the same time the cycles don't exist; I'd rather test something that's been bedded in a little more, yeah.
F
So if you want to test something bedded in, you should just wait a little bit. I think if you're interested in helping sort of shape what the API looks like, you might take a look over the types.go files that are getting checked in for the different parts of the API, because even if you aren't ready to test it out or look at the implementation, like, defining a good API is really important, all right. So if you have any time to dedicate, I would do it there, and make sure we're capturing the right requirements.
F
Yeah, so Chris has a link to where we're tracking code; there are a couple of open PRs. Jacob has checked in the types for the node parts; he also has an open PR where people are leaving comments. I know Justin left a bunch of comments that he's addressing, so he has an open PR that you can comment on, but also some types are checked in, to start prototyping code against.
F
Chris also sent a PR just yesterday that was sort of the first cut at the control plane part of the configuration. So again, if you're interested in that half of the cluster API, please take a look at that PR, and I'll link both of those. In terms of level of commitment from our side, I think we're very committed to sort of seeing this through and making this happen, in terms of time frame.
F
So the initial implementation of the cluster API is explicitly not part of core Kubernetes. We may reassess that later and decide it should become part of core, but as part of sort of the layering principles, if you look at Brian's architecture doc, the cluster API is not inside it. For now, the intent is not to ever put it inside the main Kubernetes repo. Oh yeah. Well, my...
A
That sounds good to me. Well, I just was unaware that something had been merged already, yeah. Sounds good to me. Oh, I'm definitely interested in the cluster control plane API, as you know, so I'll take a look at it. Well, are there any updates to the doc, like the main proposal doc, since I left my comments there? Yes...
J
I don't know if you said that in, like, a scary way, like, "oh my god, things are finally getting committed and I didn't even have a chance to comment". But we deliberately left the PRs open so that people can leave inline comments. We checked the code in, in slightly different locations, just to start prototyping the reconciling code, to get even better feedback about, like, the shape of the API and what all the edge cases are that we have to cover.
A
Cool. Well, better to take a look and see if, well, my earlier comments have been addressed, and otherwise have one more review round, yeah. Well, so in the larger group we're aiming to, like, consolidate or unify the kubeadm API, somewhat at least, with the cluster API, the control plane portion. So basically we'll define... I think it's going to be that where we're defining a kubeadm API, that will be the control plane spec, right, Robert?
F
Yeah, so I think, ideally, we would have very close parity, if not identical API definitions, for the control plane between the cluster API and kubeadm. I'm not sure how close we are to that today. I know you've got sort of a straw-man API for kubeadm.
F
Chris has a first PR for the cluster API; I don't think they're quite the same yet. So I think that's where we'd really like some of your feedback, on sort of the direction of the API that Chris is working on, to start pushing it towards kubeadm. And I think there are likely some sort of requirements that you know about that we don't know about yet, or vice versa, right. So I think that's where we'd really like to collaborate.
A
But still, we have to keep in mind, like, when saying that the kubeadm API is alpha, we're meaning that we will change it backwards-incompatibly, but still no users will be stuck; we will handle the upgrade and transition seamlessly. So it's like a beta API: if you start using it today, you can use it as a beta API, but it's labeled alpha due to the strict, like, rules on how you can name your API, yeah.
F
I guess just one more thing: if you guys, if anyone is interested in sort of diving deeper into this, there is a breakout meeting tomorrow at 11 a.m. Pacific time to specifically talk about the cluster API. So I think we'll sort of continue to give sort of brief updates during this meeting, but for more details, please join it.
A
Okay? Well, just a heads up that the CoreDNS proposal was merged into the community repo yesterday, or some days ago, which is great; good to see progress. We have the PR up as well; I'll take a look at it in the coming days. Well, it would be great to have even more reviewers on it, in case you have something we definitely should get right with the DNS this time. Worth mentioning is that we have three kind of conflicting things that touch DNS code in kubeadm right now: we have this CoreDNS thing...
A
We have a PR that lets you specify the DNS IP explicitly, from, like, a field in the kubeadm API, to the DNS spec as a service. And so, the way we calculate the DNS IP as it is now is basically to take the master's, well, the API server's internal service IP, which is often, like, 10.96.0.1 in all the happy cases, and just append a zero, so that it becomes 10.96.0.10. Which is not great.
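As a rough sketch of the scheme described above: take the cluster's service CIDR and pick a fixed offset into it (the tenth address) as the DNS service IP. The function name, the offset parameter, and the sample CIDR below are illustrative assumptions, not kubeadm's actual code.

```python
import ipaddress

def dns_service_ip(service_cidr: str, offset: int = 10) -> str:
    # Take the Nth address of the cluster's service CIDR; with the default
    # offset of 10 this turns 10.96.0.0/12 into 10.96.0.10, matching the
    # "take 10.96.0.1 and append a zero" behavior described above.
    net = ipaddress.ip_network(service_cidr)
    return str(net.network_address + offset)

print(dns_service_ip("10.96.0.0/12"))  # 10.96.0.10
```

Doing the arithmetic on the CIDR (rather than string-appending a zero to the API server's IP) also keeps working when the service subnet is not one of the "happy cases".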
A
Well, yeah, I'll bring it up: there's an IP allocator package that is used elsewhere in Kubernetes as well for doing this. Yeah. Oh, and any other comments, like, what's the best route forward? I mean, as we said last time, I think component config would be, well, the ideal goal, but I don't know if SIG Network is working on, well, component config for kube-dns.
D
Just a small update with DNS. So with 1.8 kubeadm we can't deploy the DNS deployment before CNI is created, right, but now that we have that new feature gate in 1.9, we can actually deploy the DNS server before CNI, I think. So in order to get that to work, I think we need to deploy on the host network, with, like, the DNS policy set to ClusterFirstWithHostNet or something like that. So my question is: is it okay to deploy the DNS server on the host network?
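For reference, the host-network variant being discussed would be a pod spec fragment along these lines; ClusterFirstWithHostNet is the real dnsPolicy value, while the container name and image are placeholders, not the actual manifest from the meeting.

```yaml
# Illustrative fragment only: run the DNS server on the host network
# so it can start before CNI is up.
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: kube-dns
    image: registry.example/kube-dns:tag  # placeholder image reference
```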
A
Yeah, I'm not sure I'm a fan of running it on the masters on the host network, personally, due to, like, scaling and things later, but anyway, yep, let's take that async.
A
Anything yet? So, Robert, if you thought that... it's not in the works right now. So one of the issues with CoreDNS, the migration here, is that because kube-dns doesn't follow component config, there is no easy way to migrate to CoreDNS, and the proposal was merged in an alpha state, which means that there are no upgrade guidelines for existing users that are using the kube-dns config, whatever it's for: federation, or, like, if you want to use it to delegate to other external DNS services, and things like that.
A
So, but anyway, there is such a feature, and it's, well, here we go: private DNS zones and upstream nameservers. Yeah, it was in April, so in 1.6, I think, this thing was made. So, well, one of the things it does is it follows some kind of scheme that nobody knows really well; it's just something they came up with, and this is now, like, beta or higher.
A
So
so,
in
order
to
upgrade
the
queue
for
upgrade
for
uses
the
core
DNS,
we
have
to
do
this
kind
of
migration.
I
think
we
definitely
should,
as
we
have
to
do
some
kind
of
breaking
change,
we
definitely
should
opt
in
for
component
configuration
at
the
same
time.
When
doing
this
thing
just
want
to
make
sure
signet
work
is
like
aware
of
it
at
least
or
something.
F
It
sounds
like
maybe
we
should
help
you
and
maybe
somebody
else
jump
on
the
Signet
working
call
and
ask
them
a
couple
of
questions.
I,
don't
know
how
much
Sid
the
Signet
working
folks
are
interested
in
supporting
cross
grades
between
14s
and
cube
units
and
getting
complete
feature
parity
between
the
two
they
might
just
say.
If
you're
using
QB
NS,
then
you
get
some
of
these
features
and
if
using
14s
becomes
and
we
aren't
trying
to
get
direct
compatibility
but
like
feature
parity
between
the
two
yea.
A
I think CoreDNS has feature parity and better performance and things like that, but the proposal... it was just that the proposal was merged in an alpha state, so it didn't consider upgrades, and if you're using these features in kube-dns and want to upgrade, you have to, like... there is no automated way, right? You're betting that this is actually not a problem, because our users don't use the config map, so we can just go with the default CoreDNS files as well. That's...
A
Yeah, of course, if they do it by themselves. But yeah, I mean, we should definitely check with SIG Network on component config there. My last points here: our own component configs, well, like the kubelet, kube-proxy, kube-scheduler; I'm keeping an eye on these ones. And SIG Network wants to add IPv6 functionality to kubeadm in 1.9; that requires kube-proxy component config support in kubeadm, and I'm actively looking into this area. This is something we definitely should get in, also as discussed earlier.
A
And also, there was some discussion, like in the Slack channels, that, well, we have some old functionality here, things like that, yeah. Well, just a heads up, because this is something we should get in for 1.9, and at least SIG Network is committed to helping with it, yeah. And Tim, do you know of any final call for etcd, 3.1 or 3.2?
E
There's
no
formalized
testing
to
go
from
3,
o
2,
3
and
I.
Don't
want
to
be
the
guinea
pig,
so
I
think
if
we
do
a
stage
release,
3,
1
and
then
next
release
CO
2,
3,
there's
no
issues
with
3
1,
but
they're
kind
of
edge
cases.
One
of
the
edge
cases
that
we
know
about
is,
if
you
start
rolling
down.
If
you
quickly
roll
down
members
from
a
high
availability,
cluster
you'll
run
into
an
issue.
That's
one
of
the
issues.
E
There's also a fix that Jordan had posted, for a read deadlock condition that only occurs on high-scale clusters. So I think we're kind of okay for the most part for these two conditions, but ideally we'd want to get past this point quickly. You know, there might be a potential way to say, like, fresh clusters get 3.2 and then, you know, upgraded clusters get 3.1, but that means we'd have to manage it.
E
I don't want to be the guinea pig. The testing cycles that exist inside the mainline repository are the way we qualify the exact versions of etcd, because usually you get a ton of testing cycles in; the scale tests alone produce a number of interesting issues, right. And I don't want to be in the boat of having to validate a foreign version that has not gone through the rigors, right.
F
Yeah
I
agree
I'm
just
saying
like.
If,
if
we
do
Mellie
at
City
three
to
number
three,
oh,
we
should
figure
out
a
way
to
upgrade
people
from
three
out
of
three
to
move,
even
if
it
requires
two
overt
steps
right
and
not
strictly
tie
that
to
congratulate
versions
and
say
we
have
to
wait
three
months
to
get
this
32
and
I
mean
gke.
We
at
one
point
at
some
point
the
vast
divorce
that
CD
versions
from
Cabrini's
versions.
A
Yeah. So, Robert, now... on a plane home from London I wrote, well, a totally fresh, actually, kubeadm upgrade proposal, one that had been sitting for too many months without, like, me having time to rewrite it, yeah. And in that I've stated that, well, before going to GA with kubeadm upgrade, we have to somehow take etcd upgrades into account.
E
Now, with the way we're upgrading upstream, going through the minor release process, it's just in-place modifications, so there are no changes. So if you go from 3.0.10 to 3.0-whatever, I think it was 3.0.17, or 3.1.10, there's no upgrade path other than switching the component. Internally there will be some minor changes, but you won't see them, right; they'll be transparent to you. It's only when they break data models, and they've sworn not to do that again, that it will.
A
Cool
yeah
I
mean
eventually
we
starts
running
the
scale
tests
ourself
right.
So
we
talked
about
this
in
an
issue
as
well.
Roberts
also
chimed
in
I
think.
Basically,
we
should
run.
We
should
move
over
cube
mark
and
the
density
tests
of
cube
up
right
now
to
use
cube
am
probably
the
cluster
API
some
implementations
there
and
then
it
well
with.
F
I think there's a little bit of concern that kubernetes-anywhere is sort of turning into kube-up, with the number of configuration parameters that you have to set to make it go; it's not necessarily sort of clearly better. And so we sort of halted the path of moving stuff over, to work on the cluster API, and the intent is to move stuff off onto the cluster API once we have an implementation that we are using for testing.
A
Yeah, that's a grand plan, which is exciting, because we can start removing things. Well, what I did, some days or weeks before the 1.8 release, is I started going through the different kinds of guides we have. I mean, in the Kubernetes website repo we have a lot of docs that haven't been updated in, like, a year.
A
We
should
well
it's
a
mess
right
to
go
through
these
things,
ping,
the
maintainer
and
remove
I
mean
maybe
even
deprecated
things
first
and
then
remove
the
lusts.
The
next
cycle,
but
yeah
I
mean
it's
currently
leading
our
uses
to
bad
well
to
bad
solutions
that
doesn't
necessarily
even
work
anymore.
I,
don't
know
if
there's
any
action
like
if
there's
someone
that
would
like
to
work
on
that
here
in
this
room
right
now,.
E
Just
a
PSA
that
the
dock
for
the
docks
has
been
pushed
to
the
community
repo
everything
has
been
changed
from
checkpoint
in
to
bootstrap.
Checkpointing
I
will
need
to
reread
some
of
the
names
that
are
inside
there,
because
I
still
kept
them
as
checkpoint,
but
upon
further
discussion,
everything
will
be
renamed
to
bootstrap
check
winning
to
remove
any
ambiguity
whatsoever.
Yeah
and
I
should
have
an
update
for
the
PRC
--is--
I
went
through
you,
Jews
proposed
changes
and
I
addressed
all
the
comments
that
were
in
the
existing
PR.
A
Yeah, thank you, James, for adding things into the agenda. Obviously all CI tests are broken at master, and we should fix that. We know the root cause: it's, well, not surprisingly, CNI. We should, well... we lost a test day or something. We merged the PR to move to CNI 0.6.0, and yeah, it made the kubeadm tests go completely red. Oh yeah, I actually tested locally and it seemed to work, so it might be a packaging issue more than actually a CNI issue.
A
However,
things
are
read
use
that
at
PR,
so
whether
it
is
a
CNI
issue
like
something
with
networking
or
if
it's
well,
just
the
the
basil
Deb
was
wrongly
produced
with
the
new
CNI
version,
or
something
like
that.
We
need
someone
to
well
fix
it.
I'd,
say
I.
Think
Feng
was
starting
to
look
into
it.
Robert
right.
A
I'm still holding off on a full fix; I suspect it's a packaging issue, but I don't have any fix yet. But could we just spin up, like, a VM anywhere and do the thing that a cluster, well, that the job would do, and see what's wrong there? I mean, it said something like loopback isn't available on the node, didn't it? Yeah.