From YouTube: Kubernetes SIG Cluster Lifecycle 20171108 - Cluster API
Description
Kubernetes
SIG Cluster Lifecycle
Cluster API Breakout Session
2017/11/08
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
B
Hi, so just as a quick update: I have updated the PR with all the feedback. I received a lot of feedback along the lines of "why is this setting here and that setting not?" Some of that was acted on. For example, the cluster object had an overall version, and at a quick glance that seemed fine, but thinking more and more about it, it was better to move that to the individual master machines, so you can do per-master upgrades and effect them.
B
Other questions were like: why isn't this setting here, or that setting there? I just want to note that the approach to this is purposefully minimal for now. The initial use cases are basically create, upgrade, and delete a cluster, and stretch goals are things like expanding to HA from a single master. I do want the API to be able to express that, but maybe no implementation addresses it, for this quarter at least. And I believe Justin Santa Barbara also brought up a very good point:
B
Why can't we just reuse component config? That is a very good point and approach, and something I want to discuss. I would very much like this not to duplicate anything in component config unless absolutely necessary. I mean, I think it would be a great use case if you could just do a deploy of a cluster, and any of the dynamic knobs and tweaks that are in component config you could just apply later, as an example.
B
One thing that is in component config that I did pull out were the service network CIDRs and the pod CIDRs, because those often require cloud-provider-specific setup, with either GCP advanced networking, or on AWS (I think you could use flannel there, or some sort of network provider), and those are often not dynamic settings; those require a cloud provider to provision. But with that, all the current feedback has been addressed. If you are interested and want to debate whether something should be there or not, please go look at it and respond.
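To make the cluster-level networking fields concrete, here is a hypothetical sketch in Python (the actual proposal defines Go types; these field names are assumptions for illustration, not the real API):

```python
# Illustration only: a stand-in for the cluster-level networking settings
# discussed above. Field names are made up for this sketch.
from dataclasses import dataclass

@dataclass
class ClusterNetworking:
    service_cidr: str  # range that Services get cluster IPs from
    pod_cidr: str      # range that pods get IPs from

@dataclass
class ClusterSpec:
    name: str
    # Cloud providers often need special, non-dynamic setup for these ranges.
    networking: ClusterNetworking

spec = ClusterSpec(
    name="demo",
    networking=ClusterNetworking(service_cidr="10.96.0.0/12",
                                 pod_cidr="192.168.0.0/16"),
)
```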
B
Both the cluster and machine objects have a providerConfig field, which we are suggesting, at this point, be a serialized string of some structured data that is provider specific. So AWS could have an AWS config struct that had individual settings in there; GCE could have one. There's some debate on whether just a raw string is the right format, but there is going to be some sort of hook inside of each one, both cluster and machine, to provide cloud-provider-specific overrides.
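A small sketch of the opaque providerConfig idea (in Python, purely for illustration; the real types are Go, and the JSON keys here are made-up examples):

```python
# The generic API carries providerConfig as an opaque serialized string;
# each provider deserializes its own structure from it. Keys are invented.
import json
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    provider_config: str  # opaque to the generic API

def parse_provider_config(machine: Machine) -> dict:
    # A GCE- or AWS-specific controller would parse and validate this.
    return json.loads(machine.provider_config)

m = Machine(
    name="master-0",
    provider_config=json.dumps({"machineType": "n1-standard-2",
                                "zone": "us-central1-a"}),
)
```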
C
One follow-up question, sort of related to the cloud-provider-specific stuff: as part of the machines API, I get that we're gonna be able to stamp out nodes, but what about the extra cloud resources that I need, like load balancers or firewall rules or things like that? Is that going to be part of the machines API or something separate?
A
So the pattern that we expect for all of these is: anything that we don't think we can generalize right out of the gate should go somewhere into the provider config. Then, if we find that everyone who is supporting the cluster API keeps using the same kind of configuration in their provider config, and it's generalizable enough, it can go through kind of a graduation process to be part of the proper API type. But, so, let's say:
A
If you wanted to use this to create a GCE cluster, we could just say your masters should be behind a load balancer or something, instead of just hitting one directly. But that's an implementation detail, and it's up to the installer to figure out the best way to support the rest of the configuration that you've defined. If it decides that you're in a multi-master situation and you should use a load balancer, it can do that, and it should be transparent to you.
A
But if we also want the ability of allowing you to specify "I definitely want a load balancer" or "I definitely don't want one", then, since that would be a cloud-specific thing, we would put that as some key in the provider config right now. Then again, if every provider config has the same exact configuration, I'm going to bubble it up to the API type.
A
Yeah, one of the other kind of guiding principles. I don't know if we've explicitly said this before, but since we're in Cluster Lifecycle, a lot of us have been thinking about it: for what goes in the API, is it something that I really need to know at installation time, or something that we want to change over time as part of cluster operations?
A
Yeah, I don't know if that's completely agreed upon, but that's how we've been thinking about things: what actually needs to change over time, and what kind of tooling do we want to build on top of the API, versus what is all of the power that everyone's going to want to use to configure every little bit of the cluster?
A
Okay, the next agenda item was mine. This is something that's come up in my PR for the machines API. When I first started with the machines API a while ago, in the status of the machine I used a phase, because a lot of our statuses had phases, and then found out those were completely deprecated, because people tend to use them as a state machine, and that's very brittle for API evolution.
A
So I changed to a list of conditions, and then found out that those are being deprecated. They're not fully deprecated now, but it's considered to be an anti-pattern that's not generally useful. So the recommendation from Eric Tune and Brian Grant was: in the status, just have top-level fields, not a list of state changes, but top-level fields that indicate meaningful status. And so the simplification we have right now in the status is: we have an object reference to the node, if a node exists for this, and we have an enumerated error.
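An illustrative sketch of that simplified status shape (Python stand-in; the real fields are Go types, and all names here are assumptions):

```python
# Top-level status fields instead of a phase or a list of conditions:
# a node reference plus an enumerated error. Names are invented.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class MachineError(Enum):
    NONE = "None"
    INVALID_CONFIGURATION = "InvalidConfiguration"
    CREATE_FAILED = "CreateFailed"

@dataclass
class MachineStatus:
    node_ref: Optional[str]  # name of the Node object, if one exists yet
    error: MachineError

status = MachineStatus(node_ref="node-abc123", error=MachineError.NONE)
```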
A
It's
just
am
I
ready
or
not,
and
if
you
need
more
power,
you
can
always
look
at
the
object,
reference
and
Traverse
to
the
node
and
look
at
its
status
and
actually
compare
fields
from
the
machines
back
to
the
node.
If
you
actually
care
like
do
them,
do
the
versions
match
right
now,
otherwise,
the
expected
workflow
for
any
tooling
that
wants
to
create
or
modify
machine.
A
Good question. My definition of success comes from the idea that the API is so simple right now that in our prototype we have a single controller that's responsible for everything. So far. Maybe it makes sense, once we have multiple controllers that are reconciling different parts of the spec, to call out their statuses separately, but right now, if realistically the entire spec, you know, matches reality, the controller would set ready to true, and on any single update to the spec,
A
As soon as the controller notices, it would set the status back to false until it rectifies. It could swap out the VM for a brand new one, or something else could trigger an in-place upgrade or something, but the very next time it looks at the real world, compares it against the aspirational spec, and it matches fully, then it just sets ready back to true.
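A minimal sketch of that ready-flag behavior (Python, purely for illustration; the real controller is Go, and all names here are assumptions, not the actual API):

```python
# Ready reflects whether the observed world matches the aspirational spec;
# it flips false on any spec change until the controller reconciles.
def observe(spec: dict, actual: dict, status: dict) -> None:
    # Ready is simply "does reality match the spec?"
    status["ready"] = (actual == spec)

def actuate(spec: dict, actual: dict) -> None:
    # Stand-in for e.g. swapping the VM out or an in-place upgrade.
    actual.update(spec)

spec = {"kubelet_version": "1.9.0"}    # what the user asked for
actual = {"kubelet_version": "1.8.0"}  # what is really running
status = {}

observe(spec, actual, status)  # mismatch noticed: ready goes false
was_ready = status["ready"]
actuate(spec, actual)          # controller rectifies the difference
observe(spec, actual, status)  # world matches spec again: ready goes true
```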
A
Potentially, the simplification comes from the fact that we just have a single controller right now, and I don't know at what point we'll want different controllers to be responsible for different things. So another option is, like I said: if you just updated, let's say, the kubelet version (you're writing an upgrade tool and you upgrade the version of the kubelets), you can watch not only the ready field, but you can watch the node object and actually verify that it's reporting versions that match what you expect.
B
But I erred on the side of: let's just remove this flat boolean, keep all the error messages and error statuses, and also, if you need to inspect the API, as API endpoints become available you can connect to them. It populates the API endpoint status, and then you can go inspect the component statuses to see what's there. Because I honestly did not know the best way to reconcile a top-level ready state, and I would rather not have it until we're sure what it means. Yeah.
A
Maybe we should say that we don't have it. It's perfectly fine to keep the errors, because that's reasonable to interpret, and I don't think it's terribly bad if one controller overwrites the error of another controller or something like that. But yeah, maybe we don't want the pattern of "I just watch this bool". The pattern should be: for the thing you care about, actually check the status and make sure that it matches the spec, and that's when you determine that you're done. I think that's a good, simple condition.
D
So one thing, at this point: we tried the API internally, and from our experience, every controller, or provider actually, can create an arbitrary amount of, for example, resources in the cloud provider. And, like, Curtis didn't find anything in the API status where we can put those things, so I think we should have something there.
A
I think I saw your PR comments, and sorry, I haven't gotten through addressing everyone's comments on that yet. So I have kind of two answers to this. One is: in the prototype that we're working on, so far I've started just using annotations, because you can just slap key/value pairs on any object and represent whatever arbitrary state you want, without, you know, type safety or anything. Because if it's going to be provider specific, then it doesn't seem like it should be in the API proper.
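The annotation stopgap can be sketched like this (Python illustration; the annotation key is hypothetical):

```python
# Arbitrary provider-specific state as key/value pairs on the object's
# metadata, with no type safety. The key name is invented for this sketch.
machine = {"metadata": {"name": "worker-1", "annotations": {}}}

def set_provider_state(obj: dict, key: str, value: str) -> None:
    obj["metadata"].setdefault("annotations", {})[key] = value

set_provider_state(machine, "gce.example.com/instance-id", "1234567890")
```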
A
But I wonder, if you're asking, or I'm reading this a little bit as: do you want something similar to the provider config, but in the status field? So you could have some more structured data there and say, "here's this, for someone who understands my provider; here's all of the information you might want to know about it."
A
What's up with that? Yeah, so, I mean, it doesn't really feel like a status then, I guess. I don't know; other people can object. It seems like: I update the spec as a client, and the status exists so I can follow along the progress toward the spec being reconciled.
A
I don't know. It seems like you could get by with annotations and other things, or you could create an entirely different API object, just like a config map or something, that's just for your provider, and it doesn't have to be attached to that machine. There could be a link between the two if you wanted; you could have an object ref back to the machine. I don't know. So are you leaning toward: you'd like to have more structured data, and it should be somewhat expressible within the machine objects?
A
Okay, yeah. So I just want to do a check-in: we don't have many agenda items, so if anyone has more things, feel free to add to the agenda or interject right now. But our team at Google has been prototyping an installer and reconciler to actually exercise the API types and make sure that we've covered edge cases and things. I don't have a demo ready, but we can talk about its current status, and I think the person who wrote most of the installer is on the call and has made recent changes to it, so, over to you.
F
Yeah, sure. So currently the installer works only for the Google cloud provider; I have not added implementations for others at the moment. I believe people can add their own implementation, and it should not be difficult to do, the way it has been done at the moment. So it takes your cluster definition and the machines: you put in, like, a master machine definition and the other nodes
F
you want to run in the cluster, and based on that it creates a cluster on GCP, creates the master machines per the cluster definition, and deploys the machine controller onto the master machine itself. Jacob wrote most of the machine controller pieces. The machine controller watches all the Machine CRD definitions, and whenever it sees an add or update, it looks and reconciles. So if you have to add more nodes to the cluster, you can add a machine.
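A rough sketch of that watch/reconcile behavior (Python, illustration only; the actual controller is Go, and the event shape and in-memory "cloud" here are assumptions):

```python
# React to add/update events on machine objects and reconcile each one
# against the cloud provider.
def reconcile_machine(machine: dict, cloud: dict) -> None:
    # Create the backing instance if it does not exist yet (idempotent).
    name = machine["name"]
    if name not in cloud:
        cloud[name] = {"instance_for": name}

def handle_event(event: dict, cloud: dict) -> None:
    if event["type"] in ("ADDED", "MODIFIED"):
        reconcile_machine(event["object"], cloud)

cloud = {}
handle_event({"type": "ADDED", "object": {"name": "worker-1"}}, cloud)
handle_event({"type": "ADDED", "object": {"name": "worker-1"}}, cloud)  # no-op
```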
A
Right now, as a complete simplification that I don't expect to last terribly long, every update is just actuated with a replacement of the VM in GCE, so we don't do in-place upgrades of the kubelets, which would be, I think, pretty easy to bang out. Someone is actually working on (I believe; he can correct me if I'm wrong) actual upgrades of the control plane, based off of the control plane definition.
A
I don't think that anything has been found breaking in them, but the functionality is limited: we can create a cluster, and you can upgrade nodes (upgrade the machines and the kubelet versions) and upgrade the control plane, but not a whole lot else, unless you start drilling into component config and versioning that. I don't know how many components are actually using component config right now; that's kind of forward-looking in our API, but I don't think the implementation is there now for all the components. Cool. Am I missing anything? Oh, we also have a client-side implementation of machine sets, for people who really want machine sets. I really do want machine sets, but I really wanted the API to just start with machine before we reason about sets and make sure that we fully understand them.
A
But if you want to play around with machine sets: our installer right now, if you want to specify multiple machines, is kind of not ideal, because we don't just have a count that you can change from one to three or something like that; we have to enumerate every machine that you want to create. But you can just create one node and then use the client-side machine sets, which is just a CLI that allows you to specify a label selector that should identify the set, so whatever arbitrary label you want to match on, and then you give it a number of replicas and it will scale them up or down, just creating machine objects, and the machine controller does actuate them correctly. So you can use the installer to just create a simple master node and then scale up your node set pretty easily.
D
So I have been working on generating, like, clients and everything, informers and so on, and I got it running, because we're going to have, like, several clients, and we need the code generation for that in order to get up and running. One issue that I'll make a pull request about: the current version is cluster.api.k8s.io, and this is not working with the current implementation of the code generator, so I suggest we just change it to cluster.
A
Okay, interesting. Yeah, I have no objections; I don't know what it should be called. But awesome: maybe it was handwritten, maybe it was generated, but we do have a client checked in under cluster-api/client that a few (I should say fewer) components are using right now, and if we had proper generation, that would be amazing. That would be really cool. So thanks for taking that on.
B
But just as an FYI: hopefully later today, both the machine and cluster types will be in the same package and API group.
G
My argument against would be that the machines API is fairly uncontroversial and very likely to make it into a release. The cluster API seems much more controversial, and something we're likely gonna have to iterate a lot more on. I feel like we could get the machines API into 1.10, but the cluster API is, like, 1.20.
G
I mean, there's definitely a "we have to figure out how we do things outside of the main kubernetes repo that are still, like, integrated into the releases, as a community project," yes. But I still think that it is likely that the machines API will go to beta much faster than the other objects.
G
Right, I missed the first bit. I think my main concern is that the component config API is in this weird state, like it's been around forever and isn't moving particularly fast; I don't think anyone will object to that characterization. I think some of the stuff in the cluster API is likely to overlap with the scope of the component config APIs. I like what you did now, which is: there are these handful of fields which do span both.
A
I don't know what happens, but quickly: can I ask a completely naive question about the API? I think I understand some of the ramifications, but I'm hoping someone can summarize, like, all of the ramifications of having machines and cluster either be together or separate. Because I see it like: we have different versioning, which is totally fine, to graduate independently of each other to beta or GA or whatever, and we have the ability to enable or disable them in the API server at runtime.
B
As far as being together: I think the intent is at least to use them in conjunction, and as far as their design, they should be designed with consideration for each other so that they work well together. Currently, as it stands, they're technically registered under the same API group; they're just structurally, like code-wise, in different packages, which is what I was trying to correct.
B
As far as keeping them separate: you could, as Justin mentioned, develop them separately and have different beta graduations for them. Conceptually they move a little further apart, but as far as technical implementation, there's no real limitation on how they're used; you can do cross-reference API object references if need be. I think it becomes a little tricky if you start wanting to mix versions: like, does v1 work with v2 and all that? Or do I need v2 of both?
A
Well, I think Chris mentioned something that did resonate with me, which is: if you start revving the versions completely independently, then you have all these problems, because they are kind of related. We actually have a pointer from one to the other (or maybe it's not a pointer at this point), but you have expectations that when you create a machine that is a master, it will use the control plane definition for installation and reconciliation.
G
It's
okay,
I,
think
you'll
be
confusing,
but
I
think
we
could
we'll
survive.
It
I
think
that
the
I
was
just
thinking
like
the
one
technical
thing
that
I'm
aware
of
is:
if
we
do
an
aggregated,
API
server,
I,
don't
think
you
can
split
so
each
API
server
that
you
plug
in
has
to
handle
a
name
of
version
with
a
name,
so
you
can
have
a
different
one
for
v1,
an
alpha
1
and
B
1
alpha
2,
but
you
can't
have
one
for
machines
and
one
for
cluster
I.
Think.
B
The reason the control plane is so minimal is because it is so controversial, and you made very good points about not duplicating component config. As a strawman, the only settings I have at the cluster level right now are the CIDRs, which I think almost every cloud provider needs to do something special for, so it makes sense to put the general config there, and then the rest at the machine level.
G
Wading in here just to, I don't know... I don't want to, like, express my frustration, but the component config API has been going on for a long time, and it feels like we should make it happen or give it up, right? And I feel like we should make it happen, but it'd be good to know who's working on it.
A
Alright. So we already talked about client generation and API linting; that's awesome, can't wait to see that, and the merged types. We're out of agenda items. I was hoping that Chris Nova would make it; supposedly (I don't want to put words in her mouth) she's working on the Azure implementation for this, so that we'd have, side by side, GCE and Azure as a pretty good test of the types right now. But does anyone want to interject before we end early?
D
We also currently have an AWS implementation, because we run our clusters mainly on AWS, and we've been working internally on an API which is almost exactly the same as the machine API, and even the actuator that we currently use is somewhat the same as what we have here, so hopefully we'll be able to contribute a working implementation to it. Awesome.
B
Just a quick note on using kube-deploy: as long as everyone's aware, this is meant as a temporary location and not a permanent one, and hopefully the cloud-provider-specific stuff will be split out into separate repos later on. But just note that it's not a long-term solution, hopefully. Yeah.
A
And I hope this was addressed at some point, or people have realized, but the code that we're putting in there is so prototype-y. We've been doing a lot of self-merging; hopefully that doesn't scare anyone. It's really just an exercise in testing the types, and those are the real deliverable, I think: making sure that we have a solid understanding of what the API should look like, and having a functioning prototype.
G
That makes sense. So we deliberately don't specify where the bootstrap thing happens, so that you can plug in GKE or, like, a management Kubernetes cluster, or... kops has an S3 hack, which I'm not terribly proud of, but you know, it works. Okay, that makes a lot of sense. I understand: we basically bring up just the master and then go from there. I like it, and also that'll work well with component config, if we ever get there. Okay.
A
Cool, yeah. We brainstormed a lot of things, like maybe starting a local API server and then even reconciling the master from that, completely bootstrapping from a very minimal set of things, kind of like bootkube does. But this seemed very simple for the prototype right now, and it actually works so far. We can always revisit.