From YouTube: 2020-01-08 - Cluster API Office Hours
B
So I don't know how much of a dent... I mean, I should have practiced before I started this. So, everyone knows about the machine load balancer CAEP that people have been working on, I assume, right? Or that Moshe has been working on: the idea that it would be great to have a generic machine load balancer for deploying load balancers of different types, whether it's, you know, NSX on VMware, F5, NetScaler, whatever. And I'm sure in the issues in CAPV you all have seen me crop up several times relating to HA. So recently, a little before Christmas, I started working on this, and we actually merged an initial implementation of it into CAPV last week — or, I guess, yeah, Sunday or Monday. So CAPV now has HA, and I thought I'd kind of review how it was implemented in CAPV, to hopefully show the way this load balancer model could work at a broader scale, in CAPV and for other providers as well.
B
So I guess the easiest thing to do is just to go over the CRDs real quick. All I did — all we did — was introduce a new field into our infrastructure cluster called loadBalancerRef. That loadBalancerRef points, at this time, to an HAProxyLoadBalancer, the HAProxyLoadBalancer CRD.
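The shape of that new field can be sketched roughly like this. These are simplified stand-ins, not the exact CAPV API types: the real field is a corev1.ObjectReference on the VSphereCluster spec, and field names here are illustrative.

```go
package main

// ObjectReference is a trimmed stand-in for corev1.ObjectReference.
type ObjectReference struct {
	APIVersion string
	Kind       string
	Name       string
	Namespace  string
}

// VSphereClusterSpec sketches the new optional field: the only link
// between the infrastructure cluster and the load balancer CRD.
type VSphereClusterSpec struct {
	// LoadBalancerRef, when set, points at a load balancer resource
	// such as an HAProxyLoadBalancer. Nil means no load balancer.
	LoadBalancerRef *ObjectReference
}

// HasLoadBalancer reports whether a load balancer was requested.
func (s VSphereClusterSpec) HasLoadBalancer() bool {
	return s.LoadBalancerRef != nil
}
```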
B
The vSphere infrastructure CRDs — that's VSphereCluster and VSphereMachine — have no relationship to the load balancer other than this loadBalancerRef. And I'm about to make myself a liar, just so you're aware: if you do go look at the code, the VSphereCluster controller does currently watch HAProxyLoadBalancer, mainly because I didn't feel like implementing the generic external watchers that CAPI did using the infrastructure ref. But there's a single type right now, and we're planning to introduce NSX soon, and then we'll introduce the external watchers.
B
The way CAPI does it, that is. But other than that, there's no relationship. So how does that actually work? Well, everyone knows that in CAPI, when you do a cluster reconcile from an infrastructure provider, you don't mark yourself as ready until your infrastructure is ready. I've always thought that's a little confusing, because from an infrastructure provider's point of view, ready just means you're ready; it doesn't necessarily mean the infrastructure is ready.
B
It's not called infrastructureReady on our side, but in upstream CAPI the field they copy it into is called infrastructureReady. Up until I introduced this, it was always true: we added the finalizer and set ready to true, because we were always ready — we had no infrastructure. However, now, if we have a loadBalancerRef, then using an unstructured reader we do a couple of things. The first thing we do — and this will be something a generic load balancer controller will do in the future — is check the load balancer.
B
The load balancer needs to have a status.ready that's a bool and a status.address that's a string. That status address is used as our control plane endpoint, and if somebody has set a port on our spec, we will use it; otherwise we have a default API endpoint port of 6443. And we log: OK, the control plane endpoint was discovered to be a load balancer. So again, we're no longer always ready.
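The contract just described — a ready bool, an address string, and an optional spec port falling back to 6443 — can be sketched as below. This is a minimal illustration of the logic, not the actual CAPV code; the function and field names are assumptions.

```go
package main

import "fmt"

// LoadBalancerStatus models the two fields the contract requires:
// a ready bool and an address string. (Names are illustrative.)
type LoadBalancerStatus struct {
	Ready   bool
	Address string
}

// controlPlaneEndpoint derives the cluster's control plane endpoint
// from the load balancer status plus an optional spec port.
// A zero port falls back to the default API server port, 6443.
func controlPlaneEndpoint(st LoadBalancerStatus, specPort int32) (string, bool) {
	if !st.Ready || st.Address == "" {
		return "", false // not ready: the cluster must keep waiting
	}
	port := specPort
	if port == 0 {
		port = 6443
	}
	return fmt.Sprintf("%s:%d", st.Address, port), true
}
```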
B
If there's a loadBalancerRef set, we're no longer ready until the load balancer has provided us an address. And so how does that work with something like HAProxyLoadBalancer — how does it actually do the load balancing? Well, it's a pretty basic type. Now, I said there's no relationship — that was the first time I said I was going to make a liar of myself, and here you'd think I'm going to make a liar of myself again, but I'm not. In the process of creating this I also created a CRD in CAPV called VSphereVM.
B
I didn't feel like refactoring all of that, or copying all of that — doing a copypasta — so I was like, okay, well, I'll just turn that into a generic provisioner. So it has a clone spec, because the HAProxy load balancer is deployed using an OVA on your vCenter. And this — Chuck, this should look familiar to you — this is kind of like the cloud-init data, the bootstrap data, and the kubeadm config.
B
This is an optional SSH user, if you want to SSH into the load balancer; you can, otherwise, whatever. And again, our status — you can see that I mark this field... both of these fields are required as part of the portable load balancer model, and it's annotated so people know that it's optional in the schema but really required at runtime. Okay, so what actually happens? How does this work? Well, all of this should look very familiar: you know, we reconcile it.
B
So what does it watch? It watches VSphereVM, because that's how it gets deployed — again, not related to CAPI. And it watches the CAPI Machine, using this controlPlaneMachineToHAProxyLoadBalancer function, and I documented what it does here. Basically it says: hey, any time a CAPI Machine is reconciled, if it's a control plane machine and it has an address, and I can get the cluster, and that cluster has an infrastructureRef, and that infrastructure has a loadBalancerRef, and that loadBalancerRef points to me or something of my type.
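The mapping function just described can be sketched roughly like this. It is a simplified stand-in — the real function uses controller-runtime's handler.ToRequestsFunc and the CAPI client; the types and names here are assumptions for illustration.

```go
package main

// Machine and Cluster are trimmed stand-ins for the CAPI types.
type Machine struct {
	ClusterName  string
	ControlPlane bool
	Addresses    []string
}

type Cluster struct {
	Name            string
	LoadBalancerRef string // name of the referenced load balancer, "" if none
}

// machineToLoadBalancer mirrors the mapping described above: a
// reconciled Machine yields a load balancer reconcile request only if
// it is a control plane machine with an address, its cluster can be
// found, and that cluster's infrastructure carries a loadBalancerRef.
func machineToLoadBalancer(m Machine, clusters map[string]Cluster) (string, bool) {
	if !m.ControlPlane || len(m.Addresses) == 0 {
		return "", false
	}
	c, ok := clusters[m.ClusterName]
	if !ok || c.LoadBalancerRef == "" {
		return "", false
	}
	return c.LoadBalancerRef, true
}
```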
B
This just stands up the OVA. And then finally — I mean, this is all just HAProxy-specific stuff, right: create the config secret, but it's all load balancer config. Here's the thing that the other load balancers might also need to do: reconcile back-end servers. Get all the CAPI Machine resources for this cluster — this is the part that may be able to be made generic — then get the control plane machines, and then this part is HAProxy-specific.
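The piece called out as potentially generic — take all the cluster's machines and keep the control plane ones that already have an address — could look something like this. A minimal sketch with made-up type names, not the CAPV implementation.

```go
package main

// BackendServer pairs a machine name with the address HAProxy (or any
// other load balancer) should forward API traffic to. Illustrative only.
type BackendServer struct {
	Name    string
	Address string
}

// ClusterMachine is a trimmed stand-in for a CAPI Machine.
type ClusterMachine struct {
	Name         string
	ControlPlane bool
	Address      string
}

// controlPlaneBackends filters all CAPI Machines for a cluster down to
// the control plane machines that already have an address.
func controlPlaneBackends(machines []ClusterMachine) []BackendServer {
	var out []BackendServer
	for _, m := range machines {
		if m.ControlPlane && m.Address != "" {
			out = append(out, BackendServer{Name: m.Name, Address: m.Address})
		}
	}
	return out
}
```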
B
You know, add them to the back end of the load balancer. And so finally, when all that's done, it marks itself as ready and it has an address, and it's like, cool. Okay, now the VSphereCluster can say: all right, I've got a load balancer, I've got the address, and I'm going to set that as my control plane endpoint, and guess what, hey everybody, I'm ready. So CAPI can go do its thing. And this gives us the ability to deploy HA clusters with CAPV in a way that doesn't tie the HAProxy implementation to CAPV.
B
It ties it to vSphere in a way, because we're deploying this as an OVA, right. But you could easily see how the NetScaler or F5 implementation doesn't actually need to deploy a VM to get a load balancer — those would already exist — so their own controller would be drastically simpler. It would just be: well, where's my config, so I know how to talk to the NetScaler; and then it would be reconciling my back-end servers. But what's true, what's generic?
B
It's that reconcile-back-end-servers piece, as well as: I need to watch all of the control plane machines. And I mean, I could run through what it looks like, but it's just going to take a couple of minutes. I think it's kind of more important to review how it was implemented here, because, again, aside from that watch, everything is done with an unstructured reader, and the HAProxyLoadBalancer CRD has zero relationship to VSphereCluster or VSphereMachine, right?
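The unstructured-reader trick mentioned here is what lets the cluster controller read any load balancer's status.ready and status.address without importing that CRD's Go package. A simplified stand-in for apimachinery's unstructured.NestedString helper (the real one lives in k8s.io/apimachinery) might look like this:

```go
package main

// nestedString walks a decoded-JSON object (what an unstructured
// client hands back) without needing the typed CRD structs. It is a
// stdlib-only sketch of unstructured.NestedString, not the real API.
func nestedString(obj map[string]interface{}, fields ...string) (string, bool) {
	cur := interface{}(obj)
	for _, f := range fields {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return "", false
		}
		cur, ok = m[f]
		if !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}
```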
B
If there are any questions — I guess I'll show this real quick, what it actually looks like in a test. The infrastructure cluster spec has the loadBalancerRef; I mean, it's literally just like it would be on the CAPI Cluster with an infrastructureRef. And again, this would be like that load balancer controller, or maybe a generic load balancer type — you could see how that might sit inside of the CAPI contract.
B
Okay, thanks, Tim. But really, I mean, Moshe has done the lion's share of the design here — I pretty much stole everything from him, and honestly from the great design of CAPI. The whole notion of just the ref, and the control-plane-machine-to-load-balancer function, was fun. Anyway, but yeah.
B
But of course that happens after we delete the cluster. As part of this, you saw the VSphereVM CRD; one of the other things I did was refactor the deletion logic to be more like CAPI. So we don't do controller owner references anymore, we just do owner references, and then, when we delete anything — I said I would show the delete — it's going to look very familiar to you, because we do something very similar.
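The ownership gate described above — the load balancer won't proceed (and will only be garbage collected) once a plain owner reference back to the owning Cluster is set — can be sketched like this. Trimmed stand-in for metav1.OwnerReference; the helper name is made up.

```go
package main

// OwnerReference is a trimmed stand-in for metav1.OwnerReference.
type OwnerReference struct {
	Kind       string
	Name       string
	Controller bool // true only for controller owner references
}

// hasClusterOwner mirrors the gate: reconcile does not go any further
// until an owner reference pointing back to the named Cluster exists.
// Note it accepts plain owner refs, not just controller refs, matching
// the switch away from controller owner references.
func hasClusterOwner(refs []OwnerReference, clusterName string) bool {
	for _, r := range refs {
		if r.Kind == "Cluster" && r.Name == clusterName {
			return true
		}
	}
	return false
}
```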
B
We say: okay, can we get it, does it have a deletion timestamp, delete. But again, after the cluster is deleted, our tests still take like another 20 seconds, because now we have to wait for the load balancer to be garbage collected. If the loadBalancerRef, or the notion of it, existed in upstream CAPI, it could be part of the last thing that it does, and we wouldn't have to wait for garbage collection. Anyway, that's it. Any questions?
B
So the HAProxy load balancer doesn't actually know about the VSphereMachine; when it does anything, it works with CAPI Machines. But remember, I didn't modify CAPI — we do have a local fork of CAPI, but I didn't want to go modify the CAPI Cluster. We actually needed this, so I couldn't necessarily wait for CAPI to have something like this upstream. So instead I put the ref on the infrastructure cluster, but yeah, you could have a reference like this on the CAPI Cluster.
B
So the HAProxy load balancer — anyway, it gets deployed every time you deploy a new vSphere cluster, a new cluster with CAPV. It's a 200-ish meg OVA, and it uses linked clones, so it deploys in seconds. So for every VSphereCluster you would have a very small VM out there providing a load balancer. But regardless, the actual implementation — to the question you just asked — it looks and says: okay, well, I know the CAPI Cluster owns me.
B
In fact, if you look up here, under its reconcile it won't go any further until an owner ref has been set on it that points back to a CAPI Cluster. Okay, so it's owned by a CAPI Cluster, and when it wants to discover what it should assign to its virtual server — its load balancer, right — it says: give me all of the CAPI Machine resources for this cluster, and then filter those down to the control plane machines.
C
That's kind of what my takeaway was. I was hoping for something more generic, where you could just prescribe: I want some kind of load balancer, and then I want to add whatever machines to the load balancer. And this seems to be very much more tightly coupled into various layers of the stack, rather than being a portable thing. Well...
C
I would be concerned about the tight coupling, yeah. The control plane we're talking about — you know, if this load balancer exists, the thing's not going ready till there's a load balancer — and building all this other logic into these other pieces. It just seems, I'm not sure, you know, kind of intertwined, and I think... it just seems that way.
D
So thanks — a really good start, and I think it's very well done. I do have some comments, but I think we should rather move those up front. So the question is: should we create an issue in CAPV for some of these comments, or put them in a separate CAEP? That's the one part, and then there's the second part.
B
At VMware we have product needs around the load balancer, and so this will be what it is for now for CAPV. But I think everything should be driven through the CAEP, and then we can update this as we go along. And maybe this stays a POC for the CAEP, maybe something else becomes the POC, but we can chat offline. I mean, fair warning: I didn't check with Moshe before I implemented this. I just called it a POC of his CAEP because it was based on it, but he didn't agree to that.
A
Yeah, I mean, that's not on me. Yeah, I was going to say I'll talk to Tim, but I see Tim saying let's evaluate for alpha 4. So I would say, you know, for alpha 3, over the next couple of months, whatever code y'all are working on can either live in CAPV, or we can create a contrib directory in Cluster API and you can go wild in there, and then we can evaluate the proposal and an official implementation, at least of the machine load balancer generic controller, for alpha 4.
B
Just FYI, I did create a contrib directory inside of CAPV, if we want to put anything there, because I had to create OpenAPI bindings for HAProxy's Data Plane API. I actually didn't want to use HAProxy as of a month ago — I thought it was jank, because I thought I'd have to do everything via SSH — but then I found out they have this brand new Data Plane API server, and I've been working with their engineers on improving it. We just finished a PR for mutual cert auth that doesn't require basic auth as well, so they're doing a lot of work to help too — give them credit.
E
Right, hello there. I think my audio is trying to come out through the wrong microphone, so let me just fix that... Hey, hello, can you hear me now? Yes? Can you still hear me? I guess so — sorry about that, folks. Hey, so this is Maya from Tilt, the dev tool that at least some of you have been using to develop Cluster API, which is super cool, and I've got Nick and Dan here.
E
So you can configure everything in one place, you can run everything with a single command, and you get all of your logs in one place, so you can tell when things are going wrong. And any changes you make to your files get really quickly propagated into your running containers — you don't need to wait a million years for things to build, that sort of thing. And I've seen that Cluster API already has a Tiltfile, which is super cool. So, who...?
A
I'd say a good number of us do. So, Chuck did the first bit of work a few months ago to get us started with Tilt, and then I recently got the Tiltfile put in place and did the support so that we can handle a couple of federated repositories. We have a core Cluster API GitHub repository, and then we have individual provider repositories, and I wanted to have a way so that we could just have a single Tiltfile that spun everything up together.
E
Yeah, so I noticed your Tiltfile is pretty kind-specific, and I'm wondering what that thought process was like — how opinionated you folks are about everyone who uses Tilt to develop CAPI using kind, or whether there were folks interested in using other clusters to develop, just because there's a lot of kind-related overhead in your Tiltfile, and one solution to some of that would be to just use a cluster that's less of a pain. So I'm curious about your thoughts there.
A
I wouldn't say there's actually all that much that's kind-specific — there's some stuff about getting the cert-manager pre-loaded. And Naadir, I'm going to let you speak, because I know you're not using kind, so yeah, I'd like to hear your thoughts.
G
I actually have a local cluster I use, maybe because I have some other bits and pieces in there, like Loki for logs, so I can search on them in Grafana. So we've actually made the kind-specific stuff additive: you're opted into kind by default, but you can opt out of it — you can say, I don't want the image pre-loading stuff — and that more or less works with my cluster; I haven't had any issues with it.
E
I mean, normally we often recommend Docker for Desktop, but I just tried to build CAPI on Docker for Desktop and I felt like my laptop was going to blast off. But maybe with a remote cluster, like GKE, the initial pushes will be a bit of overhead, because you need to push stuff to a Docker registry; but after that the live updating is pretty fast, and so it shouldn't be too much of a big deal if your cluster is remote.
E
Right, right, that makes sense. So I think our team's next bit of homework, certainly, is to figure out how to speed up the kind load process, or, at the very least, give you some visibility into: yes, we're still working, Tilt hasn't just frozen and abandoned you. I also wanted to know, as far as speed goes: I noticed y'all have live updates enabled, which is awesome. How do you feel about the speed of those? Is it appropriate? Is it still too slow?
A
Once I got through making sure that I had all the paths set correctly and the live update was getting triggered, I would say it's great. We were accidentally using the -a flag for go build, to rebuild all packages, so live update was taking about a minute; then Naadir got rid of that, so now it's like eight to ten seconds on my laptop if I need to change something. So I would say it's great.
B
Thank you. I just wanted to say that it kind of goes back to what Chuck was saying, so I think it's probably going to be more of a reiteration, but I'm part of a workstream team at VMware where I've got a couple of engineers, and some of them are new to CAPV and other aspects of Kubernetes, and I find that kind is useful because it builds a habit, right?
B
They have to learn how to deploy it over and over again, and that actually is useful for making sure they know how to work on the project. Versus: oh, here's this long-lived thing, and then what they're really learning is, okay, well, how do I interact with this long-lived thing — and maybe not the end-to-end workflow. So, I mean, maybe it's a limitation of resources that makes people not like to use kind, but the other nice thing that we recently did with CAPV kind of relates back to this.
B
It's a workflow that's repeatable, reproducible, and behaves the same everywhere, but also doesn't really need a bunch of remote resources to work — that's actually really useful. So that's why I like kind. I mean, to me, kind is like the single most useful thing that's come out of Kubernetes for developers, ever. Tilt is maybe a close second; I'm still getting used to it.
E
Cool. I noticed that y'all are running a non-Docker container runtime and are using a start.sh solution to restart your processes — is that working okay? Just wanted to check in, because it's been a point of confusion for some folks in the past.
B
When you say non-Docker — it's containerd? Yep. Okay. I wanted to add that maybe the reason for that is that the machine images we deploy with CAPI are all based on containerd, so I think it's probably good that it aligns with what we're actually encouraging people to do when they build their machine images.
E
Yeah, yeah — am I unmuted now? Yeah? Great, cool. Let me just get back to my notes.
E
Yeah, I'm not sure I had anything else — but Nick, is there anything you want to add? Cool. Unless anyone has anything else to tell us, or requests about Tilt, while you've got us on the line.
E
We're super jazzed that you're liking developing with Tilt, and we're going to be working in the next couple of weeks to speed up developing Cluster API with Tilt, because we think that addressing some of your pain points will be a good way to decide on things to focus on — things that will bubble out and help other folks using Tilt — without us getting totally bogged down in the multitude of things we could be fixing about the product. So, I guess, does anyone have anything else for us? Andrew?
B
I saw that — and thank you — and I think what I meant was, well, asciinema also has the here's-how-you-upload-it and here's-the-shareable-link feature, so I could take those logs and then share them. I don't know, I'm being lazy — it's not that it wasn't easy, but to me Tilt is an easy button in a lot of ways. So the more easiness you can add to that button — the bigger you can make that button — the better it is, right? Yeah.
A
Okay, let me switch over. As usual, I'm going to start at the bottom and work my way up, so we're going to look at the oldest ones first. I did add a comment to this one about not using load balancers in alpha 3 — I said I was going to add a comment, I don't know, three weeks ago, and it slipped my mind. So my recommendation here, if you're okay with it, Andrew, is just to close this, because I don't think there's anything that's going to stop us from continuing to support single machines.
B
That's fine. And in fact — I put this in the notes of the doc, but just to give another shout-out to Chuck — CAPV, as implemented, I think we were among the first ones, aside from CAPD, to use the e2e framework, which Chuck developed along with others. It's part of every presubmit, and we are using HA for that, but we are doing a single node without a load balancer. So yeah, we'll know real quick if this stops working, because our e2e will break.
B
Are we supporting Docker on the machine? We're not building the machine images with Docker, but I'm curious: are we supporting Docker with the machine images if people replace containerd with Docker? And is that a question for CAPI, or something else? CAPI has kind of been the upstream discussion forum for the image builder anyway.
A
I would think that we, in theory — I think "support" is a slippery word, because we're an open-source set of projects, and so I think it's more like: does it work with Docker versus containerd versus something else? It could work with Docker; it probably should work with Docker, given that Kubernetes works with Docker. So I think it's hard to say — if you can produce an image that can run Kubernetes correctly and it works with Docker, then I don't see why it wouldn't work in a Cluster API setup.
A
All right, is that cool with you? Yes? Yeah. Gocyclo lint errors in main.go: I'm pretty sure, with the split that Chuck recently did, where we now have multiple management processes, that this generally went away. So basically, this is something where, if you have enough webhooks and enough controllers, and you're using Kubebuilder to generate your main.go, and it keeps adding all the SetupWithManager calls, you basically end up exceeding the gocyclo limitations for linting.
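One common way around that generated-main.go growth is to collect the setup calls into a slice and loop over it, so each manager binary registers only its own list and no single function accumulates cyclomatic complexity. A minimal sketch of that idea — not the actual CAPI code:

```go
package main

// setupFunc stands in for a controller's or webhook's
// SetupWithManager call.
type setupFunc func() error

// runSetups replaces one giant generated function that strings every
// webhook and controller together (and trips gocyclo): each manager
// binary passes in only its own registration list.
func runSetups(setups []setupFunc) error {
	for _, s := range setups {
		if err := s(); err != nil {
			return err // fail fast, as manager startup would
		}
	}
	return nil
}
```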
F
Yeah, so we split apart the main Cluster API manager into three different managers: there's the kubeadm bootstrap manager, the kubeadm control plane manager, and then the core manager, which is the same as what you have now. Before, it was all bundled into one manager, and now it's split into three managers, and we're able to ship three different images with three different sets of YAML, allowing you to not use the kubeadm components, if you so choose, and use your own bootstrap provider.
B
That's true. Different versions of Kubebuilder I've used seem to generate this weird code that shows up now and then, and I could tell it was just generated code at some point — either that, or somebody copied a template from another project and put it there.
B
So, you say you can assign it to me if Chuck's busy — I literally just did this for CAPV, but, I don't know, it should be a standard thing. I'm using the port — like the VSphereCluster control plane endpoint port — as a possible default, so you could define that, but not the host, and that would specify the port.
A
We need to move the MachineDeployment annotations to the right API group. We have "help wanted" on it, and cleanup, and we need conversion logic. I think — Vince, did you move them at all, or are they still the old ones? You didn't — okay, so I'm going to mark this long-term, and we have "help wanted" on it, so that's good.
A
All righty: define constraints for upgrades crossing Cluster API versions. I know you and I talked about this already. Without getting into too many details here, in terms of priority and milestone, is this something that needs to be solved for alpha 3?
H
This is a follow-up to a comment on one PR. Basically, in one PR I'm implementing a test framework that generates a Cluster object and Machines, with all the relations wired up. This is now implemented mainly for testing clusterctl move, but something like this might be useful in other places too. So this issue is tracking that idea. Okay.
A
Yeah, and I have some previous experience with other projects doing basically a builder for individual things. Like in Velero, we had a builder for backups and a builder for restores, and you could just say: I'd like a new builder with this namespace, this name, these characteristics — and at the very end you just say build, and then it would give you back the object you wanted. So maybe look at something like that.
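The builder style described here can be sketched as follows. This is a toy illustration of the fluent pattern used in Velero's test helpers, not Velero's actual API — the type and method names are made up.

```go
package main

// Backup is a toy object standing in for a test fixture type.
type Backup struct {
	Namespace string
	Name      string
	TTLHours  int
}

// BackupBuilder sketches the fluent builder style: chain the
// characteristics you care about, then call Build to get the
// finished object back.
type BackupBuilder struct{ b Backup }

// NewBackupBuilder starts a builder with the required identity fields.
func NewBackupBuilder(namespace, name string) *BackupBuilder {
	return &BackupBuilder{b: Backup{Namespace: namespace, Name: name}}
}

// TTLHours sets an optional characteristic and returns the builder
// so calls can be chained.
func (bb *BackupBuilder) TTLHours(h int) *BackupBuilder {
	bb.b.TTLHours = h
	return bb
}

// Build returns the assembled object.
func (bb *BackupBuilder) Build() Backup { return bb.b }
```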