From YouTube: Kubernetes SIG Cluster Lifecycle 20181017 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.lpxy92305xqu
Highlights:
- When to apply addons to clusters
- Upgrades of cluster API clusters
- Proposal for upstreaming provisioning scripts
- Assumptions in machine actuator
A: Hello, and welcome to the Wednesday, October 17th edition of the Cluster API / SIG Cluster Lifecycle subproject meeting. Today it looks like we have a couple of action items we're following up on at the meeting. Chris, do you want to talk about what those are? Since I missed the last meeting, I'm not clear on what number 534 or 525 are, frankly.
B: Once I get context on those, yeah, I do. Can you hear me? Yes? Okay, let me pull them up really quickly to remind myself which one is which, but they were basically, I think, clerical items.
A: We just basically say, like, once everything's up and running, we kubectl apply -f whatever's in the add-ons manifest, and so we would basically just do that sooner in the process, right? Which, I think, if you're creating DaemonSets and Deployments and so forth, means you might just end up with a bunch of pending pods until your nodes are up and healthy. Yeah, it shouldn't have any bad side effects, I don't think.
C: The main thing from last week was that we didn't know if there was any particular reason why the current implementation applies the add-ons at the end; whether there was actually something that drove it, other than just the random order, or just the order that was chosen.
D: Good. So I think one more thing that we probably discussed last week was: right now we are treating all our add-ons as just a single category, where you can just apply them anytime, agreeing that even if there's an add-on that gets applied and then stays pending, it's okay. However, my understanding is the belief was that we could potentially categorize the add-ons into two separate categories, so that each gets applied in the more appropriate place in the workflow, and we don't get into a situation where we are waiting for an add-on to come up and get ready enough to be active, so to speak, in order to move forward, while there is something else that's waiting. I think one more discussion was that we want to, for example, care about, when you apply an add-on, waiting for its status to be active before we say it's done.
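(An illustration of the two-category idea above: a minimal Go sketch with a hypothetical Addon type and readiness check; none of this is actual Cluster API code. Critical add-ons gate the workflow until active, optional ones are applied fire-and-forget.)

```go
// Sketch only: hypothetical types, not the real Cluster API add-on code.
package main

import (
	"errors"
	"fmt"
	"time"
)

// Addon is a hypothetical add-on descriptor.
type Addon struct {
	Name     string
	Critical bool        // critical add-ons gate the rest of the workflow
	IsActive func() bool // e.g. "all pods of the DaemonSet are Ready"
}

// applyAddons applies every add-on, but only waits for the critical ones.
func applyAddons(addons []Addon, timeout time.Duration) error {
	for _, a := range addons {
		fmt.Printf("applying add-on %q\n", a.Name) // stand-in for `kubectl apply -f`
		if !a.Critical {
			continue // optional add-ons (dashboard, monitoring) may stay Pending
		}
		deadline := time.Now().Add(timeout)
		for !a.IsActive() {
			if time.Now().After(deadline) {
				return errors.New("critical add-on " + a.Name + " never became active")
			}
			time.Sleep(time.Second)
		}
	}
	return nil
}

func main() {
	addons := []Addon{
		{Name: "cni", Critical: true, IsActive: func() bool { return true }},
		{Name: "dashboard", Critical: false},
	}
	if err := applyAddons(addons, time.Minute); err != nil {
		fmt.Println(err)
	}
}
```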
A: Right. Like, you can still run pods, you can still run things; you're just missing logs, or your monitoring, as another sort of similar example. And then there are things like the dashboard, where, if that add-on isn't working, it itself is the only thing that is broken, right, and it doesn't really affect whether everything else in the cluster is working.
E: I would even go so far as to say that things like the dashboard, which are non-critical components, are totally outside of the scope of Cluster API. I don't think those are things that we need to touch. If somebody wants to add those, fine, but they aren't critical to deploying a cluster on AWS, etc. If it isn't critical to running the cluster itself, then we should consider it completely out of scope.
B: I mean, I think it's out of scope for what the project concerns itself with. As far as support is concerned, we're never going to support installing the dashboard, but I think if people want to opt into using tooling from the project and baking something like a dashboard into, like, an AMI, or doing it in the provisioning script or whatever, that's allowable, but outside of our support scope.
F: I was just wondering, having said what you said, whether in the future we might manage to extend the notion of add-ons as such into a sort of first-class object within Cluster API. Right now there seem to be fairly ad hoc implementations of what add-ons mean, on a provider-by-provider basis, but it seems like...
A: Yeah, that makes a lot of sense. I think, if you look at the way we're sort of shoehorning things in right now, it's not particularly great, because on creating a cluster we apply a couple of manifests, but then, if you go later and, you know, update the version of the control plane of that cluster, that has no effect; there's no notion of "now we need a different version of an ingress controller, because our control plane got updated," or, if we go and update the versions of nodes, there's no notion of "now..."
G: I think that'd be great. I think also that there's, I think we shared it before, the Google project which is sort of just defining a bundle, which is supposed to address the specification of these things. So we could at least have a shared specification of "this version of Kubernetes needs this version of, or works well with, this version of the Weave CNI provider," for example. Yeah.
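(For reference, the kind of shared version specification being discussed, in the spirit of that bundle project, might look roughly like this Go sketch; the type and field names here are hypothetical, not the actual Bundle API.)

```go
// Sketch of a version-compatibility "bundle"; names are hypothetical.
package main

import "fmt"

// ComponentRef pins one component to a version known to work with the
// bundle's Kubernetes version.
type ComponentRef struct {
	Name    string
	Version string
}

// Bundle records which component versions go with a Kubernetes version.
type Bundle struct {
	KubernetesVersion string
	Components        []ComponentRef
}

func main() {
	b := Bundle{
		KubernetesVersion: "1.11.3",
		Components: []ComponentRef{
			{Name: "weave-net", Version: "2.4.1"}, // CNI provider
			{Name: "coredns", Version: "1.1.3"},
		},
	}
	fmt.Printf("%+v\n", b)
}
```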
A: Right, yeah, we probably ought to come back around to that at some point, because if you look at the way we're specifying all the different pieces that go together to make a cluster right now, it's spread across a whole bunch of YAML files. It's really ugly. Like, I was watching Kris's video from last Friday, from TGIK, about the Cluster API, and at one point she's like, "oh, I forgot to add this flag, with this other file that we also need." It's just, like...
C: I think it's a little more than that; there's another aspect of the convolution, too, in that right now the clusterctl workflow kind of conflates some of the topics of: what does it take to bootstrap Cluster API itself? What does it take to actually create a cluster using Cluster API? And then, what does it take to pivot that cluster to a more stable cluster? And that has come up as we've tried to start describing Cluster API, and how to use it, and the workflows.
G: I guess there are also other pieces that we haven't even covered in clusterctl, like upgrades, right? So it would be nice, and I see some people smiling, it would be nice if the bundle certainly allows us to describe a new set of target versions; and I'm doing some work on how you apply a bundle, but it would be great to maybe put that into clusterctl, or into some other mechanism that clusterctl then utilizes, because it is a hard problem. Yeah.
A: I think that comes back a little bit to what Jason was just saying, because one question is: is clusterctl just a bootstrapping tool, sort of fire-and-forget, so that after that we just use kubectl, you know, directly? Or do we need, like, a clusterctl upgrade command that wraps the kubectl commands you could run yourself and makes them more user-friendly? Right? Because I think, once you have the cluster up and running, you can just use kubectl to manage all the bits that have been provisioned, right?
A: That's the idea: you can kubectl edit the version on your machine deployments and get new versions of nodes. You can kubectl edit the things that run your control plane, right? Though, I guess those are deployed as static pods today, so maybe that doesn't quite work. But the idea was that you should be able to kubectl edit the definition of your control plane and have that updated as well, so that, after that initial bootstrapping phase is finished, everything after that is declarative through the API. And I don't think we're...
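(A sketch of that declarative flow, using simplified stand-ins for the Machine types of the day rather than the real v1alpha1 API: the user only edits desired state, as kubectl edit would, and a controller converges the nodes.)

```go
// Sketch only: simplified stand-ins for the cluster-api machine types.
package main

import "fmt"

type MachineVersionInfo struct {
	Kubelet      string
	ControlPlane string
}

type MachineSpec struct {
	Versions MachineVersionInfo
}

type MachineDeployment struct {
	Name     string
	Template MachineSpec // template for the Machines the controller creates
}

func main() {
	md := MachineDeployment{
		Name:     "workers",
		Template: MachineSpec{Versions: MachineVersionInfo{Kubelet: "1.11.3"}},
	}
	// The user edits only desired state (what `kubectl edit machinedeployment
	// workers` would do); a controller notices the change and rolls the nodes.
	md.Template.Versions.Kubelet = "1.12.1"
	fmt.Printf("desired kubelet for %s is now %s\n", md.Name, md.Template.Versions.Kubelet)
}
```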
B: I was gonna say: I have real strong opinions about the workflow, just based on lessons from Kubicorn and lessons from kops, the way upgrades were managed there, and the fact that upgrades are virtually non-existent anywhere outside of kops. I would like to take a stab at writing a proposal for a workflow, based off of what Robbie was just describing, of what it could look like to have a very declarative, kubectl-driven approach, and potentially where the responsibilities of operators and controllers would be along the way.
B: That wouldn't be clusterctl; it would be some sort of orchestration controller that would manage mutating CRDs, waiting until the CRD reaches a certain state, then updating a record, possibly, somewhere, and then moving on to the next one; real similar to how StatefulSets work, a little more involved. And on that note, that opens up this whole Pandora's box of: what is our upgrade strategy? Do we do an immutable upgrade? Do we try to upgrade in place?
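(A rough sketch of the orchestration loop being described, with hypothetical types and a faked readiness check, nothing like the real controller: mutate one machine at a time, wait for it to converge, then move on, much as a StatefulSet does an ordered rolling update.)

```go
// Sketch only: hypothetical machine-upgrade orchestration, not real code.
package main

import (
	"fmt"
	"time"
)

type Machine struct {
	Name           string
	DesiredKubelet string
	RunningKubelet string
}

// ready reports whether the machine has converged to the desired version.
func ready(m *Machine) bool { return m.RunningKubelet == m.DesiredKubelet }

// rollingUpgrade updates machines one at a time, StatefulSet-style:
// mutate the record (here just a struct), wait for convergence, move on.
func rollingUpgrade(machines []*Machine, target string) {
	for _, m := range machines {
		m.DesiredKubelet = target // stand-in for updating the Machine CRD
		for !ready(m) {
			// A real controller would requeue instead of sleeping; the
			// provider's actuator does the actual work in the meantime.
			time.Sleep(100 * time.Millisecond)
			m.RunningKubelet = target // fake the actuator finishing
		}
		fmt.Printf("machine %s now on %s, moving to the next one\n", m.Name, target)
	}
}

func main() {
	ms := []*Machine{
		{Name: "node-1", RunningKubelet: "1.11.3"},
		{Name: "node-2", RunningKubelet: "1.11.3"},
	}
	rollingUpgrade(ms, "1.12.1")
}
```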
H: This is Cindy from Samsung. I just wanted to say, we made a stab at upgrades, and I would like to see how the group wants to do it, but we did in-place upgrades, and we do have an external tool that manages the upgrade, like you suggested. It's our RESTful interface; there's a thing that waits for the control plane to be done and then does each worker, one by one.
H: Yeah, both the control plane and the nodes. So they don't change IP addresses, but they reinstall everything, and they keep trying; the controller just keeps trying to match the desired version, and will keep trying even if there's an error, every time it comes around. There are a few, yeah, there are a lot of little gotchas on upgrading your package to a specific version.
H: Well, if you're talking to me: yeah, there are thirty-odd use cases we probably haven't, you know, explored yet, but we're assuming we have an API that is used by a UI, and so everything's done through that. But if someone went behind your back and did an upgrade, you would know what version it is. We use the get nodes command to know what kubelet version is currently running, and we always match the desired version with that version.
H: I don't keep any mapping outside of what's stored in etcd. So, you know, the spec has the desired version, in etcd, and then we always check, in the upgrade processing in the controller, what the actual kubelet version is that's running right now. So we do the get nodes command; the kubectl get nodes command tells you the version that's currently running.
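(The running kubelet version described here is exposed in each node's status; a small client-go sketch, assuming a standard kubeconfig, and not taken from Samsung's tool, would be:)

```go
// Sketch: list nodes and print their running kubelet versions via client-go,
// the programmatic equivalent of `kubectl get nodes`.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// This is the value an upgrade controller would compare against the
		// desired version stored in the spec (in etcd).
		fmt.Printf("%s\t%s\n", n.Name, n.Status.NodeInfo.KubeletVersion)
	}
}
```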
I: Yeah, because, you know, a lot of enterprise customers tend to hold back on upgrading; they'll hold on to a version for several releases, but then they may want to upgrade, like, one or two versions. This is actually a real case that came up recently; I can't remember the specifics of it, but it had to do with CSI.
I: They didn't want to upgrade, yes, because, I think, that might require some significant changes to the infrastructure that they hadn't tested yet, and the latest version of Kubernetes required, I think, a CSI version bump. So I think, you know, to serve enterprise customers, we need some way to warn users: you don't want to upgrade this component, because it's incompatible with this other component. Yeah.
A: There are two things all throughout there. One is that there was a recent proposal, I think coming out of SIG Release, for creating an LTS version of Kubernetes, which I've just read; it's largely focused on enterprises that don't want to upgrade nearly as often. And the other thing is that there was an email thread, I think yesterday, or maybe this morning, where somebody asked about the different supported versions, and how you can tell which versions of components go with which versions of Kubernetes.
A: If you say, you know, "when you run this version of Kubernetes, you can only run this version of the storage plug-in," then I think there is space in the ecosystem for vendors to say, "we'll support different pieces of this test matrix," or "we'll test these different pieces," to help.
A: You know, for their particular customers, if they need that. But I think, as a community, we will probably continue to try and restrict which things are tested, the things that we test sort of as a whole, and that we qualify, just to keep ourselves sane, right? Otherwise the test matrix explodes to the point where we can't keep working.
A: I think, once you start diverging from the set of things that we, as a community, say, like, "for version 1.11 of Kubernetes, we're going to run this container runtime and these pieces": if you look at, like, the kubeadm tests, they have a particular set of container runtimes and, you know, storage plugins and so forth that they test, and we can certainly look at adding to those. But every time we add a new piece to that matrix, it increases our burden of keeping everything running, right?
A: So we're incentivized to keep that relatively small, and, again, if you want to go outside of that, then, you know, vendors are perfectly able to expand the test matrix themselves, or to have a slightly different set of coverage that they want to test and vend for their customers.
G: And, I actually missed the threads that happened, but I think there is actually a recommended set of versions for everything, and the plus-or-minus-one type versions are more needed for upgrades. So, you know, it would be bad if, during a rolling upgrade, when your kubelet moves versions, everything suddenly broke because you're running an incompatible set of versions. So I think there is actually, for each Kubernetes version, a recommended set.
B: Good to move forward, if that works for folks. [inaudible] the first action item here. Okay. Next item: remove the hold on 525. I was able to do that; I think that's done, it merged. Marco, if there's anything else, let me know. Next, it looks like Marco also has a proposal for upstreaming provisioning scripts, or bootstrap scripts, as we've called them in the past in Kubicorn.
J: So, well, this is not coming to a finish today; for now, I think I will just skim it, and you can go through it and we can discuss it at some next meeting. I have talked a little bit about this at some past implementers' office hours, but now we have the proposal ready, and we will gather some feedback; I already see some comments here so far.
J: Looking at it from a high level, the motivation behind this proposal is that we don't have any bootstrap scripts, or any mechanism to handle the provisioning process, in other words scripts, so everybody has to write them themselves and work out how everything will be executed, all that. But if you compare the provisioning scripts, or, as we call them, bootstrap scripts, you can see that most of them are the same or very similar, so providing some scripts in the Cluster API could help a lot to get started, especially in this case.
J: We could support many setups and many operating systems. What we are doing right now is putting the whole bootstrap script, for example, in a directory in a custom clusterctl, or something like that, usually, and that can be hard to maintain, hard to find, and hard for end users to work with. So maybe having something like this in Cluster API, able to be used by implementers, could be nice for both implementers and end users. Okay.
B: So we would potentially host sort of blessed bootstrap scripts as a community; everyone hosts them in one of our Cluster API repositories. I mean, I think this touches a little bit on what we were talking about with the compatibility matrices and package dependencies: it would give us a way to at least say, "we're going to install these versions of these packages, and these are configurable bits, and these are not configurable bits."
A: Marko, I haven't had a chance to look through your doc yet, but when you compare, like, bootstrap scripts across providers, you say that they're largely the same. Are you anticipating, for the parts that are not the same, having a way for the providers to inject a little bit of their own sort of custom script at the end? So we'd have, like, the sort of shared blob and the customized blob?
J: Yes, something like that. So in the proposal I have mentioned that this may be realized using Go templates, so some stuff can be templated out, and the user can then inject into the script what is to be populated and what is to be used. But it is also to be decided where we want to stop. For example, do we want to just install Kubernetes, configure the kubelet and all that, or also set up Kubernetes? For example, what we're usually doing right now is executing kubeadm...
J: ...kubeadm init, I mean, and kubeadm join. Do you want to do that from the controller, or to do that from the script? That is also to be decided. Also, I think I mentioned how CNI should be handled; this is also a problem, how to do that, and this is a customization after the cluster is set up. So this is still to be defined more by the proposal, but we're looking for any comments and any feedback, any ideas on how these can be handled.
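(To make the Go-templates idea concrete, a trimmed-down sketch; the template text and variable names are invented for illustration, not taken from the actual proposal.)

```go
// Sketch: render a provider-customizable bootstrap script with text/template.
package main

import (
	"os"
	"text/template"
)

// bootstrapTemplate is an illustrative shared script; real scripts would be
// much longer. {{.ProviderExtra}} is where a provider injects its own blob.
const bootstrapTemplate = `#!/bin/bash
set -euo pipefail
apt-get install -y kubelet={{.KubernetesVersion}}-00 kubeadm={{.KubernetesVersion}}-00
kubeadm join {{.APIEndpoint}} --token {{.Token}}
{{.ProviderExtra}}
`

type bootstrapParams struct {
	KubernetesVersion string
	APIEndpoint       string
	Token             string
	ProviderExtra     string // provider-specific customization blob
}

func main() {
	t := template.Must(template.New("bootstrap").Parse(bootstrapTemplate))
	err := t.Execute(os.Stdout, bootstrapParams{
		KubernetesVersion: "1.11.3",
		APIEndpoint:       "10.0.0.1:6443",
		Token:             "abcdef.0123456789abcdef",
		ProviderExtra:     "echo provider-specific steps go here",
	})
	if err != nil {
		panic(err)
	}
}
```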
G: I would say, what kops did is: we did as little as possible in bash, and we put a lot into Go, and I believe we're pretty happy about that decision. Despite having this, like, Go thing that you have to, say, download, it means that you can, like, find out what OS you're actually running and perform the correct operations based on that, like whether you're a yum, or an apt, or an "untar a tarball" kind of system.
A: Yeah, I guess I would arrow towards the first thing you said, Justin, which is to try to do as little as possible in the initialization script, and I agree that having a lot of bash is not a great long-term solution. So, if we're going to do more than as little as possible, doing something other than bash sounds like a much better plan. I don't know how much of what you have in nodeup you think would be essentially reusable, or how much we want to just say, like...
G: It'd be nice to try starting from scratch. We can certainly look at what's in nodeup. I do worry that we're going to quickly decide that we want most of the things that are in nodeup, but that's okay; there's certainly more in there than we need. And I think we probably want the task-based thing, and I think we probably want the task-based retries, and I think we want...
B: One of the huge wins we would get from having a Go-style library like nodeup, in my mind: if we had nodeup, but one that is also able to report status along the way to the rest of the system, I think we would be in really, really good shape. And whether that comes in the form of forking nodeup, or starting from scratch, or whatever, I do think we have a really good opportunity to provide a lot of visibility into our initialization stuff, and we should probably take advantage of the statuses on our other objects as we're initializing.
G: We actually had something interesting in kops as well, with the kubelet bootstrapping token: someone submitted a PR, actually a whole component, which does the kubelet bootstrap token in a differently secure way from how it's done in OSS Kubernetes, which is sort of like another component that runs there. It would be interesting to integrate that functionality in there as well. To me, that belongs in the bootstrapping, and something that occurred to me as well on that front is: what if that bootstrap component actually chatted with, communicated with, the machine controller?
F: Yeah, I was just thinking about this; well, I mean, I've been thinking about this a fair bit, right. I kind of tend to personally favor anything that has basically the least amount of logic outside of, like, the kubelet, or the process manager, whatever that is, systemd or not, and, you know, whatever the native bootstrap thing there is, like cloud-init: having the least amount of logic outside of that, and having all the configuration that needs to be generated on a sort of node-by-node basis, or whatnot, pre-generated.
F: There are other ways to break this model, right; so a CNI config could potentially be upgraded by other means, through a potential CNI controller of some sort. But the other thing I did think about is that, perhaps, in some cases, even if we do this sort of thing, it may still be possible to let users upgrade things in place whichever way they want, and they could use their existing tools, like Ansible.
F: We could provide an immutable way of managing nodes and their upgrades, and, while a user may still wish to implement some way to mutate the state of each individual node in their own way, that could be a thing. So, like, we would treat nodes immutably, but if somebody decides to, like, mutate whatever part of the system while it's running, anything like that, that would be up to the user: what happens there, and what they choose to use.
K: Okay, so, to be exact, on the upgrading and patching: wouldn't it make sense to write the script, or the Go, or whatever, in a way that is, like, declarative? Like, if stuff is already there, because it's a pre-installed AMI, just don't do anything; if it's not there, install it. Then we would maybe just maintain a limited set of AMIs, and not one for every possible combination.
G: I mean, that's exactly what kops and nodeup do, and that's sort of some of the other logic that I was hoping we wouldn't need, sort of a find-and-compare logic, but we can probably aim for a simplified version of that. You know, I think it's like what Ansible and stuff do as well, right? Like, it's not like... yeah.
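(The declarative, check-before-acting style being suggested, as a tiny Go sketch; the path and install command are illustrative only.)

```go
// Sketch: an idempotent install step, in the spirit of what kops/nodeup and
// Ansible do: only act when observed state differs from desired state.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureInstalled is a no-op when the binary is already present (e.g. baked
// into a pre-built image); otherwise it installs it. The path and command
// here are illustrative, not from any real bootstrap script.
func ensureInstalled(path string, install *exec.Cmd) error {
	if _, err := os.Stat(path); err == nil {
		fmt.Printf("%s already present, nothing to do\n", path)
		return nil
	}
	fmt.Printf("%s missing, installing\n", path)
	return install.Run()
}

func main() {
	cmd := exec.Command("apt-get", "install", "-y", "kubelet")
	if err := ensureInstalled("/usr/bin/kubelet", cmd); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```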
A: We have at least some of that in bash, in kube-up, as well at this point; it's just pretty ugly, and most of the reason we have it is for CI, right? Like I was saying earlier, I think when you start looking at CI, and wanting to be able to dynamically replace a whole bunch of things to expand your test matrix, it becomes a lot more complicated; and we want to have this one sort of golden configuration that works.
G: There are definitely differences between the different OSes that are substantial. Sorry, I agree with that idea; like, the most notable ones are how you install software, if you're going to install software at all, and the paths into which you can install, right, from which you can execute. So, at least with kops, we have some logic that allows us to point things at different places.
F: Right, or even with any other part, on a per-provider basis or anything: the idea is that you could sort of have slightly different variations of how the specific OS is managed, how the specific implementation is done. Because, right now, it seems like for each cloud provider, for each Cluster API provider, there would have to be some kind of node bootstrap implementation, right? And that node bootstrap implementation is potentially looking at handling all sorts of different OSes.
E: I think it's really important for us: you know, we're talking about AMIs, and that's great for GCP, and, to be fair, even OpenStack and AWS; but once we start talking about SSH, once we start talking about doing bare metal, I mean, having pre-baked images isn't a thing. It's not something that we can work with.
E: It's not something that we can rely on and throw around, and we would then be very significantly limiting Cluster API to implementations just for specific cloud providers that have the ability to offer that, or where you have the ability to select a set of images or upload custom images. I know in Germany you've got Hetzner: you run with what they've got; they give you their pre-baked images, you run those, or you don't run on Hetzner.
E: So, you know, it's great in theory to think and talk about these pre-baked images, but I don't think that's something that we can consider as a long-term goal, if we want to make Cluster API available outside of just the major cloud providers and things like VMware, OpenStack, etc. Sure.
F: But you could also, you know, imagine a use case where, instead of running the kubelet directly, you'd start a container, or run a VM via KVM or something of that sort, and your whole node process would be, like, a VM inside of a bare metal machine; and that would be a potential flavor that some folks may prefer.
F: So it seems like there may be room for that and, you know, for different sorts of node bootstrap providers, essentially. Because right now it seems like we're kind of looking at the problem from a very wide angle, and maybe we could narrow it down by thinking about something like this. I don't know, just an idea.
A: I think the other thing to keep in mind is that part of the reason we have the different providers, and different provider implementations, is to allow flexibility. So it's possible that, in environments that are similar enough, like, you would imagine GCP and AWS are similarly sized, or maybe OpenStack and VMware, those are all similar enough that we'd want to reuse something like pre-baked images; and that doesn't preclude us from having a different solution for SSH or bare metal.
F: Yeah, I was just thinking, yeah, I don't know. I mean, it's not something we'll have to discuss right now, or decide right now, but just, basically, whether we should keep discussing this as a sort of general problem of Cluster API, or maybe handle it in each individual provider, or maybe we define a community bootstrap, node bootstrap, provider of some sort. Well, so, as kind of a...
B: I think that, as someone trying to debug a system with a pre-baked image, you do not have visibility; there's an entire exercise of figuring out how the image was generated, and what could possibly be going wrong with the image generation process that could be affecting your downstream problem. If you're kind of doing everything at runtime, it's riskier, but at least all your logs and all your debugging are in the same place.
D
So
I
have
one
additional
question
to
Marco
so
mark
on
your
proposal.
I
mean
I,
understand
the
benefit
of
having
maybe
a
common
stock
up
scripts
and
you
know
definitely
abstracting
off,
but
does
it
also
mean,
for
example,
in
cases
where
the
when
you
are
when
a
provider
is
trying
to
install
Kuban?
It
is
right
now
we're
assuming
that
this
is
all
just
your
vanilla
upstream
Cuban
ad
that
gets
installed
and
set
up
for
cases
like
how
about
if
I
want
to
have.
D: If a provider has its own, let's say slightly customized, distribution of Kubernetes, and they want to bring that to the table, I mean, what would be the way? One thing, I guess, is we can probably override the images, but I'm just curious if you guys have thought about that. And that kind of goes very similarly to the CI part that was brought up earlier; specifically, for the CI case, we probably want to actually be able to do that, and work out which way is okay to go forward.
J: I'm not sure; we've not thought a lot about that. Maybe having some options to customize those parts, or even to limit it to supporting only the basic part of the bootstrap scripts, and to say that users need to provide their own, or something like that. But this is still a part that we have not talked about; I think this is a good comment, and maybe I'll be looking into that. So, but for now...
B: So, to follow up on your original proposal here, Marko: I think there are a lot of good points brought up in this, and I think we probably, at some point, would really benefit as a community from having an idea of what an initialization, or provisioning, or bootstrap script looks like, and what those steps are along the way. I think what this discussion kind of showed us was that we aren't sure how we want to solve a lot of these problems yet, and you have a lot of good ideas.
A: And then we should turn it into Markdown, so it has a sort of permanent home, where people can get to it more easily. But since it sounds like most people on the call hadn't seen the doc before the meeting, let's not turn it into a PR quite yet, because I'm guessing that on the first pass people do, they'll have a lot more comments than, you know, on the second or third pass.
D: Yes, actually, a very quick question. So, recently, with the vSphere implementation, when we were trying to implement the create, at least I kind of modeled that as a state machine itself, but then that kind of brought up an interesting question that I wasn't sure the answer to, which is why I want to bring it up here: what is the expectation in the actuator interface for create?
D: Is that, like, a synchronous call? That, you know, when the controller calls the create, the expectation is that when it comes back, the machine should be provisioned, with the IP, up and running? Is that what the expectation is? Or, you know, what was the behind-the-scenes intention when we created that?
D: So, essentially, the actuator interface method set's create: my question is, is the expectation, then, when the controller actually calls the provider's create method, that when control comes back to the controller, after the create method is called, at that point in time the underlying infrastructure will have provisioned the VM, booted it up, and gotten the IP ready, and only after that will the underlying controller move on from that create? I mean, is it, like, more like a synchronous call? Yeah.
B: Okay, so I think, to reword the question here: when the higher-level controller calls create for a specific provider, what is the assumption in the software about when the create method should return? Is it an asynchronous or a synchronous call, and what are the boundaries of a create being complete?
B: I don't know if we've explicitly defined it as a group, but I definitely have found that it's easier, and much more advantageous, to have the create method hang until the instance comes to whatever a ready state would be, with an IP address being a huge part of that, personally. I don't know if we actually called that out as a group, though.
I: Should we? We should call it out, because, you know, a lot of the contract for the actuators is not actually clearly defined. We've actually had to look at the clusterctl code and the controller code to see exactly what it's doing; like, for instance, at the end of create, there is an expectation that there are certain annotations.
B: There's definitely a case here to provide some verbosity to the interface and what the expectations are, and, at any point, if we can guarantee these expectations with software, I think that's also important.
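(For reference, the machine actuator interface under discussion had roughly this shape at the time; the stand-in types are simplified, and the comments record the expectations being debated rather than a settled contract.)

```go
// Sketch: roughly the shape of the machine actuator interface of this era,
// with simplified stand-in types instead of the real clusterv1 types.
package actuator

// Cluster and Machine stand in for the cluster-api objects.
type Cluster struct{ Name string }
type Machine struct{ Name string }

type Actuator interface {
	// Create provisions the instance backing the Machine. The open question
	// here: is it synchronous, i.e. does it only return once the VM is booted
	// and has an IP, or may it return early and let the controller requeue?
	Create(cluster *Cluster, machine *Machine) error

	// Delete tears the backing instance down.
	Delete(cluster *Cluster, machine *Machine) error

	// Update reconciles the instance with the Machine spec. Like the cloud
	// provider's EnsureLoadBalancer, this needs to be idempotent.
	Update(cluster *Cluster, machine *Machine) error

	// Exists reports whether the backing instance is there at all.
	Exists(cluster *Cluster, machine *Machine) (bool, error)
}
```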
G: I suspect this is going to be like the early days when, like, I did a lot of work filling out the cloud provider implementation for AWS, and at the time the cloud provider interface was, I mean, it still is, a little woolly. But certainly we had to make some assumptions clear, and some things had to be changed, because, like, load balancers are different: IP-based on GCE and name-based on AWS, those kinds of things. So I think this is very much not a knock on the implementation, but we almost definitely need to accept that we're...
G: ...going to have to change that interface and make it better documented, because we don't even know what the assumptions are. Like, people probably assumed that it's synchronous, because it's fast on GCE, and they've assumed that it comes back with an IP, because it's pretty fast, most times, on GCE to do that.
G: Oh, and I think I renamed one from, like, CreateLoadBalancer to EnsureLoadBalancer once, to make clear that it had to be an idempotent method. So I imagine this is going to be the same sort of thing: when you see these things, it would be great to, like, put great documentation on the interface and rename methods to match.
E: Just to kind of shortcut some of this: there's already issue 505, which is open, and there is a Google Doc that is already in place to discuss, as a community, what expectations we have for the interfaces and what our dependencies are for that. So I pasted a link to the issue and the doc into the chat; that's also, we have issue 505 on the Cluster API repo, and if we can take discussions there, that would be fantastic, because I'm already working on it, and I hope...