From YouTube: 20190510 cluster api node lifecycle
A
One was on the image building utilities, and the other was on hook mechanics for lifecycle management. I think the hook one is uncontroversial. We need to talk with the extension folks to verify how they want to do that, but other than that I think it's pretty much a no-brainer: you're basically going to have lifecycle events, and you might want to be able to call out to external facilities to do things.
A
That's independent, and I haven't spent a lot of time on that, but I have spent a lot of time on the proposals in our proposal document. I appreciate everyone stepping in and filling in a bunch of details there, and I've also gotten asynchronous proposals from multiple people, because apparently a lot of people have done their own thing here. So one of the things that struck me in these proposals, and I'm using it as a topic for today to lead into our document, was that there are a lot of technologies that people have developed, like a ton of them, and I really don't want to reinvent the wheel for much of this stuff. So I wanted to pivot the conversation back to what we are trying to accomplish: is it possible that we're not actually trying to create a tool, but instead we are trying to provide the glue between the tools? So let me share my screen here — I'm looking for feedback on what other people think.
A
So if you look at the CAEP, we talked about proposals in terms of technologies, right: somebody creating their own custom technology, versus using Packer and Ansible, and we talked about the trade-offs of A versus B. But I want to take a step back and think from first principles about what we're trying to accomplish. We're basically trying to pin down the ability to create an immutable image, and if I ignore technology entirely — the individual implementation choices — I can see it as a series of steps.
A
It's not even really a DAG; it's just a straight line, phase after phase, right. These are the kinds of phases that came to mind when I was thinking about it, and I'd love for other folks to speak up. So what I'll do is talk through what I wrote down here, and then I'm looking for feedback. The first phase is where you lay down the base layer of whatever it may be. This could be an OS image.
A
Some other people might have their own custom imagery; we don't want to own this space. We want to have the facilities to have a starting point. Then the second phase is a customization of that base layer, tailoring it specifically to your environment or needs — maybe you're doing something specific to who you are or what you're doing.
A
The next phase is installing or provisioning the software tools that are needed for Kubernetes, and this might also include some opinionated software tools required as part of some compliance requirement or whatever for your environment. Phase three is to produce artifacts. In this case it could be the different output formats that we've specified inside of the CAEP — which I'm trying to find; our CAEP is growing pretty long here, by the way.
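The phases just described — base layer, customization, Kubernetes install, artifact production — could be sketched as a simple linear pipeline. This is an illustrative sketch only, not any real tool; every name here is hypothetical.

```python
# Hypothetical sketch of the phases as a straight line (not a DAG):
# each phase takes the accumulated build state and returns a new one.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Phase:
    name: str
    run: Callable[[Dict], Dict]


def run_pipeline(phases: List[Phase], state: Dict) -> Dict:
    # Phases execute strictly in order, phase after phase.
    for phase in phases:
        state = phase.run(state)
    return state


pipeline = [
    Phase("0-base-layer", lambda s: {**s, "base": "os-image"}),
    Phase("1-customize", lambda s: {**s, "customized": True}),
    Phase("2-install-kubernetes", lambda s: {**s, "kubernetes": "installed"}),
    Phase("3-produce-artifacts", lambda s: {**s, "outputs": ["ami", "ova"]}),
]

result = run_pipeline(pipeline, {})
```

The point of the sketch is only that the glue is the ordering and the handoff of state, while each phase's internals stay swappable.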
A
I'm specifically avoiding that detail, to basically lay out the phases. These could be either/or — in particular, phase zero or phase one could be either/or — because the feedback I got from multiple folks was that some people did not like the idea of doing this on a live image, so they might want to provide this custom OS tailoring here instead.
A
It could be either/or. As long as I abstract the layers here, there's nothing that prevents phase two from working if I was doing a chroot-mounted image and I wanted to install applications: so long as I provide everything properly, my package installation mechanism should allow me to do that. Or if I'm doing it to a live image, it should equally be able to do that. So by separating out the details and having the individual phases, it allows glue to do this at each individual phase.
C
So what does the phase two implementation look like? Is that something that we're simply carrying a reference implementation for, or are we saying: this is the tool that we know works, and if you want to do this, run this tool? Either way, that's basically going to have certain implications about what phase zero and phase one look like.
A
I disagree. I've also stated several times that it could be a custom script you use to install. You have some implied context, right: if you're using a given OS, of course the script that you use to install will be dependent upon that. You could use a script, or otherwise you could use Chef or Ansible or Puppet or whatever you want at this layer. It's up to you. I think we as a group would probably provide some default, right.
D
Could be, but I think where this gets weird is that the question kind of comes down to: are we defining the process, or the interfaces, or both? If we're defining the interfaces, then we might want to have some intermediate artifacts between each phase — and if that first artifact is a binary disk image, that limits the utility of some of the future phases.
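One way to read the "interfaces, not process" question is that each phase declares the intermediate artifact it consumes and produces, and two phases compose only if those match. A hypothetical sketch (none of these names come from a real tool):

```python
# Hypothetical sketch: phases declare the intermediate artifact they consume
# and produce, so the interface between phases is explicit and checkable.
from dataclasses import dataclass
from typing import List


@dataclass
class PhaseSpec:
    name: str
    consumes: str  # artifact type expected as input, e.g. "directory-tree"
    produces: str  # artifact type emitted, e.g. "disk-image"


def validate_chain(phases: List[PhaseSpec]) -> bool:
    """Check that each phase's output artifact matches the next one's input."""
    for prev, nxt in zip(phases, phases[1:]):
        if prev.produces != nxt.consumes:
            raise ValueError(
                f"{prev.name} produces {prev.produces!r} "
                f"but {nxt.name} consumes {nxt.consumes!r}"
            )
    return True


# If an early phase emitted a binary disk image, a later phase that wants a
# plain directory tree would no longer compose -- the concern raised above.
ok = [
    PhaseSpec("base", "nothing", "directory-tree"),
    PhaseSpec("customize", "directory-tree", "directory-tree"),
    PhaseSpec("artifacts", "directory-tree", "disk-image"),
]
```

Pushing the binary-image conversion to the last phase keeps the middle of the chain usable by tools that only understand directories.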
A
That's a fair statement. I think if you provided the interface, it shouldn't matter — it's just an endpoint. We could wrap the behavior so that it's not visible to the next phase. You just have a point where you can inject and run; whether a system is live or not can be abstracted so that the next phase can't see it or know the details. They could try to get the hostname, they could try to do some other things and infer it, you know.
D
But when we say SSH — we could be running this locally in a chroot, run some stuff, and then later image-ize it; or some people might want the tool to just be a process that's operating on a directory on our local machine. Sure, that could be true too, right. Okay, as long as we keep that as an option. Yeah.
A
I think that works as long as you have an endpoint in a directory structure as the potential output, or the input to the next layer. So if you were looking here, it would be like: you lay this down, you're starting at slash or whatever, right, and that could be a live system or a not-live system.
D
Prepared and everything — and we have tooling that will mount one of those disk images, chroot in there, do stuff, and then exit. But we would also like to operate on these images as if they were just directories, in the context of a system that's just messing with the directory, not with a binary disk image.
C
I mean, everybody has a different suggestion, because that is the nature of image building: every organization has their own pet process. I'm of the opinion that we should be highly opinionated in this space. I don't really have an opinion on what that opinion looks like, but I feel like we should have a clearly defined path rather than a loosely defined one.
C
One of the things we ran into in OpenShift trying to do image building as a generic process is: you say, start this instance in a VPC that has these permissions and these accounts and all this kind of stuff, and there's your base image and all that — and it's just never enough. It's never simple enough to just run it. If you're trying to do it online or something, there's always some level of step-negative-one, step-zero stuff — accounts that have to be created.
A
From a first-principles perspective, I would start with the standard use case that will hit for 80% of folks, and then allow the capabilities for the folks who want to differentiate — always be open to them, and they can have an option, but that's not the default, right. That's the way most of Kubernetes actually works. If you look at a lot of the core solutions, we do that, and it's the same for all of our tooling in Cluster Lifecycle: we have opinionated tooling for doing a specific thing.
A
We have the exit strategy for all these other options, and that's the same approach I'm trying to take here. So for the base case, I think we already have examples for Packer and Ansible that fit in here today, and we can get started. But then we also want to make sure that the inputs and outputs for phase zero and phase one and phase two, etc., could get us to that next stage of enabling other formats and image types for all these different scenarios — like the different scenario mentioned earlier. I got, I think, three different proposals from different people. One was from CERN, who has their own custom image building apparatus, but there will be aspects of some of this where they won't care.
C
Yeah, I get what you're saying, but by virtue of having a phase zero and a phase one, you're building assumptions into those later layers either way, right. Say phase two requires that a certain user exists on the file system. Well, you can't specify that in phase two, and everything can't be in one giant phase. And so then what you're creating is these different undocumented interface layers, and the only way to know concretely what the interface is between each layer is to either consume or reverse-engineer the layer preceding it. That's why I think that, trying to make it really generalized, there's always going to be something that somebody missed.
A
The way we usually do things is we support the 80% use case, right. So we test and vet that, and then, as people want to use it, we give it a little more time because of their needs — or maybe they just want to swap out phase zero with whatever they're doing, and that seems like a legitimate use case. They have the ability to come before the group, ask for patches, and start to accommodate to make that possible for them.
A
But we always optimize for the 80% use case, and there will be some opinions. That's a fair statement to make — I get what you're trying to say. I guess what I'm trying to say is that the approach we took here, the way we're talking about it: I think taking a step back and talking about the phases allows us to create the glue, and then we pick a default set of tools.
E
As far as the process that we're looking at — yes, the sequence diagram. The phases do make sense to me. I agree that we want to be opinionated but allow an escape hatch. For me in particular, we have a use case where customers bring their own machines, already provisioned with an operating system, and so we want to provision software at runtime.
E
So if we can plug that in — or rather, if we can just pick phase two and use that — yeah, that would work well. And there may be caveats, right: okay, phase two will assume certain things about the distro. Maybe there are typical assumptions that we all agree to make about a distro that will run Kubernetes, and yeah, we can work with that.
A
A concrete use case for swappable pieces — for install, for provisioning, for creating the images — would be: by default, the upstream repo would be supportive of kubeadm, right, but it's totally swappable on the downstream side if people wanted to modify a phase. This is almost like a phase runner, right — if they wanted to modify that phase runner to run some OpenShift installation, or to run some other distribution's installation, or to modify upstream in some particular way.
E
Yeah, and one thing when I'm looking at this: I know we're not talking about tools, but the proposals that I see in the CAEP are sort of end-to-end tools. Well, maybe not kubeadm — I'm not sure if it goes through all the phases, but I think it would.
E
It
would
be
nice
to
consider
you
know
having
maybe
just
two
tools
that
we
composed
it
or
you
know
more
if
we
think
that
makes
sense.
But
at
this
point,
I
think
that
you
know
phases
0,
1,
3
&
4
are,
you
know,
are
specific
to
building
an
image,
whereas
phase
2
is
something
that
you
know.
That
would
be
reasonable
to
run
at
runtime
and
and
not.
E
You
know
not
produce
an
artifact
after
that
yeah
that
maybe
maybe
that
would
be
a
separate
tool,
and
maybe
that's
you
know,
and
maybe
that's
a
tool
that
that
already
exists
like
like
there
are
already
tools
like
Packer
or
for
for
doing
image
building.
Maybe
we
can
just
come
up
with
a
sensible
way
of
composing
those
two.
G
So
I
mean
in
terms
of
the
composability
I've,
been
thinking
about
these
hooks
and
just
reading
through
this
I
mean
it
might
make
sense
to
have
something
kind
of
like
like
we're
talking
about
new
life
cycle
rate,
so
maybe
the
the
hooks.
Maybe
it's
just
a
plug-in
model
where
we
have
hooks
that
are
you
know
like
at
different
points
in
the
node
life
cycle,
you
can
have
a
plugin
that
just
does
whatever
you
need
to
do
and
and
that
provides
the
flexibility
for
the
providers
to
implement
whatever
they
need
to
implement.
G
And
maybe
you
know
if
your
phase
zero
is
you're,
starting
from
you
know
an
ami
or
something
that's
fine.
If
your
phase
gr
as
you're,
starting
from
running
your
if
you're
not
going
to
start
from
that,
if
you
just
have
a
running
instance,
you
just
skip
that
phase.
You
just
use
the
hooks
that
you
need
to
use
and
you
don't
use
the
hooks
that
you
don't
need
use
yeah.
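That plug-in idea could look something like the following sketch: a registry of hooks keyed by named lifecycle points, where points with nothing registered are simply skipped. All names here (`HookRegistry`, the point names) are hypothetical illustrations, not a proposed API.

```python
# Hypothetical sketch of a plug-in hook model: providers register callbacks
# at named points in the node lifecycle; points with no plugin are skipped.
from collections import defaultdict

LIFECYCLE_POINTS = ["base-image", "customize", "install", "bootstrap"]


class HookRegistry:
    def __init__(self):
        self._hooks = defaultdict(list)

    def register(self, point, fn):
        assert point in LIFECYCLE_POINTS, f"unknown lifecycle point: {point}"
        self._hooks[point].append(fn)

    def run(self, node):
        executed = []
        for point in LIFECYCLE_POINTS:
            for fn in self._hooks[point]:  # empty list => phase is skipped
                fn(node)
                executed.append(point)
        return executed


# A provider starting from an already-running instance registers nothing for
# "base-image" or "customize", so those phases are skipped entirely.
registry = HookRegistry()
registry.register("install", lambda node: node.update(kubelet=True))
registry.register("bootstrap", lambda node: node.update(joined=True))

node = {}
ran = registry.run(node)
```

Skipping is the default behavior rather than something each provider has to opt into, which matches the "use only the hooks you need" framing above.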
D
That's a challenge, but very valuable, obviously. We have this one document for the runtime hooks and this document for the image creation, and I do want to think of those as two separate stages — so we have phases and stages, or we could call all of these stages and those phases, you know what I mean.
A
That's
that's
fair
I
think
we
I
think
what
I
was
trying
to
do
here
is
to
about
have
a
conversation
about
changing
the
way,
we're
talking
about
the
cat,
and
we
could
talk
about
the
kept
in
a
different
form,
but
if
we,
if
we
rallied
on
the
output
for
the
phases
or
stages
whatever
we
want
to
call
it,
that
we
can
start
to
define
a
a
more
a
more
granular
way
to
approach
this
problem.
That
will
allow
us
to
serve
multiple
use
case
scenarios
but
obviously
having
a
default
4kk
right.
D
Yeah,
that
sounds
great.
My
main
point
was
that
we
didn't
get
to
the
other
document
last
time
and
I
kind
of
want.
A
That's fair. I have not spent a lot of time on this document. Before we do that, though — we've been at it 28 minutes, and I think it's fair for us to cut over. What do folks think about this change in approach: good, bad, different?
D
Yes, I completely agree, and I'm glad you guys came up with this. In this tool there's a way to tag things — this is confusing to me, because these are different entities in a sequence, right, so you could just be going back and forth with tags of phase one, phase two — but that's neither here nor there anyway.
A
True. Let's move on to node lifecycle. Andrew, did you want to? We haven't really gotten a lot into it — for me, it's just about hooks, but yeah.
E
Sure, yeah. I put down a couple of user stories for pre-provisioned machines. Imagine an environment where there is no infrastructure API, and I want to be able to use the Cluster API to bring up a Kubernetes cluster. So I'm going to implement a provider that can work with existing machines. There's going to be some set transport mechanism like SSH, and I'm going to have to make certain assumptions about what these machines are.
E
Let's say, what distros they're running, what kernel, etc. I want to be able to use Cluster API to go from, you can imagine, a bare Linux machine to a Kubernetes node. The first part of that is going to be installing basically everything that kubeadm needs to run: the container runtime, configuring images and the container runtime, storage, various dependencies, the Kubernetes binaries, and then finally invoking kubeadm. And likewise, I want to be able to decommission the node, and sort of remove that software, remove artifacts that I put on there.
E
You
know
the
provider
will
be
responsible
for
for
cleaning
up
various
things
that
you
know
are
inevitably
left
around
and
yeah,
and
that
that
that
part
is
you
know
that
that
part
is
not
really
in
scope
for
us
and
it's
it's
complicated
in
its
own
way.
Like
host,
you
know,
host
path,
storage,
etc,
but
yeah
I'm,
actually
not
not
sure
where,
like
these,
these
lifecycle
hooks.
What
what
is
like,
where
we,
where
are
these
being
defined
with
who-who,
is
what
are
we.
A
So
right
now
that
it's,
we
need
to
define
a
state
machine,
that's
very
similar
to
or
akin
to
life
cycle
home
to
mechanism
right
and
we
have
a
pretty.
You
have
a
pre
and
post
typically
for
every
state
that
you're
going
to
go
through
in
the
state
machine.
So
it's
basically
defining
its
explicit
set
of
states
in
a
pre
and
post
hook
for
each
state
and
I.
C
I
I
think
we
started
to
get
into
this
exact
set
of
weeds
quite
a
bit.
The
last
few
days,
I'm
drafting
a
proposal
to
kind
of
make
what
I
was
talking
about
more
concrete
I
think
this
mechanism
belongs
at
the
Machine
set
layer
or
whatever
is
doing
the
action
of
creating
machines,
and
then
there
should
be
a
bootstrap
controller.
D
Yeah,
so
that
might
imply
that
the
controller
has
shell
access
to
the
machines,
which
is
a
security
concern
for
us.
So
my
proposal
here
is
an
alternative
to
that.
So
it
sounds
like
we
actually
have
three
alternatives
on
the
table.
So
Michael,
if
you
want
to
add
a
user
story
for
your
use
case
to
this
document,
I
think
that
would
be
very
valuable.
C
The problem I'm having here — and I mentioned this in other meetings — is that it's hard to have conversations about this one thing when it has knock-on implications all over the other things. It all goes back to the data model, in my opinion. But I'm happy to outline some of what I have; like I said, I'm working on a full proposal document to illustrate exactly all the steps I think are necessary.
C
That's the purview of the machine controller, and it does that today. It's really the post-boot actions that I think people are concerned about here, which would also fit this use case of pre-provisioned machines: they're already booted, they're humming along, and now we need something to configure them.
E
Images
and
you
have
everything
baked
into
the
image
everything's
provision,
and
you
just
want
to
invoke
something,
and
you
don't
want
to
use,
for
example,
transport
like
SSH,
like
Andrew
said,
then
you
want,
you
know
you
want
to
inject
something
something
to
be
run.
You
know
something
like
cloud
in
it
that
kind
of
mechanism
yeah.
C
This
is
exactly
what
Open
stiffed
us
and
we've
built
a
lot
of
automation
around
that
exact
use
case
about
not
assisting
in
two
instances.
I
mean
in
that
story
is
the
the
machine
you
boot
needs
to
have
some
kind
of
agent
that
joins
some
kind
of
server
model
and
pulls
configuration
from
there.
You
know
similar
to
puppet,
we
call
it
the
machine,
config
daemon,
but
basically
the
cloud
and
it
payload
is
how
to
join
the
cluster
of
the
machine.
Config
daemon
and
get
your
config
from
there.
A
Provided that there's a hook mechanism for the pre and post of a given state, you can do whatever you need to do, and that would allow the capability to provide your model. It doesn't necessarily imply that the rest of the data model would require it. It would provide the extension facilities for you to inject your specific implementation, and be generic enough to meet all the other user stories.
C
That's
that's
what
I'm
proposing
I'm
not
proposing
any
specific
implementation?
It's
just
that
the
bootstrap
controller
operates
on
a
machine
whether
that's
a
machine.
It's
somebody
provisioned
and
created
a
machine
record
manually
for
or
that
a
machine
set
creator
and
the
machines
that
creates
the
bootstrap
record.
Whatever
that
bootstrap
controller
is,
is
whatever
decided
to
be.
That
fits
your
workflow
yeah.
D
Yeah, that sounds great — you've defined this system with OpenShift where you don't need shell access and you have cloud-init. But having a bootstrap controller and a system that pulls scripts and user data is kind of an imperative model, a pull model. And we have a cluster already, so why not use it? That cluster is Kubernetes, and we're defining an API where fields in that API can be these hooks — that's how I intend to use it.
D
Yeah, I missed some of the meetings this week. Okay, so, bootstrapper — I just want to suggest that we keep it declarative and, as part of the API specification, just have these hooks, and they can either be parameters for scripts that already exist in the image or on the machine, or they can be scripts themselves.
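A declarative shape for that suggestion might look like the sketch below. This is purely illustrative — the field names are not from any real Cluster API type: each hook is data in the spec, either a reference to a script already on the machine (plus parameters) or an inline script body.

```python
# Hypothetical sketch: hooks expressed declaratively as API fields rather
# than imperatively pushed over SSH. Each hook is either a reference to a
# script that already exists in the image/on the machine (with parameters)
# or an inline script body -- exactly one of the two.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Hook:
    script_path: Optional[str] = None   # script already present on the machine
    inline: Optional[str] = None        # or an inline script body
    params: Dict[str, str] = field(default_factory=dict)

    def is_valid(self) -> bool:
        # Exactly one of script_path / inline must be set.
        return (self.script_path is None) != (self.inline is None)


@dataclass
class MachineBootstrapSpec:
    pre_kubernetes: List[Hook] = field(default_factory=list)
    post_kubernetes: List[Hook] = field(default_factory=list)


spec = MachineBootstrapSpec(
    pre_kubernetes=[Hook(script_path="/opt/setup-vlan.sh",
                         params={"vlan": "42"})],
    post_kubernetes=[Hook(inline="install-extra-binaries --as-root")],
)
```

Because the hooks are just fields, a controller inside the cluster can reconcile them (the pull model described above) without any shell access to the machine.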
H
Michaels
got
a
different
use
case
with
hope
and
shift,
and
the
bare
metal
stuff
that
I've
been
working
on
is
very
different
as
well,
so
maybe,
rather
than
like.
Maybe
we're
not
at
a
point
where
we
can
actually
make
decisions
about
what
this
design
should
look
like
until
we
write
more
of
that
stuff
down
yeah.
C
Well,
I
think
in
there,
in
the
context
of
of
Andrews
use
case
in
our
use
case,
the
similarity
is
that
the
machine
controller
or
somebody
else
just
creates
a
machine,
and
then
we
need
some
other
component
that
turns
it
from
a
blank
slate
into
kubernetes
host
of
whatever
kind
and
and
that's
what
I've
been
trying
to
say,
is
that
it's
not
just
my
use
case
but
having
the
machine
as
its
own
small
primitive,
enables
I.
Think
a
variety
of
different
use
cases
yeah.
H
No
I
think
you're,
right
and
I
think
then
we've
talked
about
breaking
some
of
those
pieces
out
in
some
of
the
other
work
stream
discussions
this
week.
My
point
was
just
if
we
can
lay
out
what
some
of
these
use
cases
are,
then
that
will
help
us
identify
what
those
extension
points
are
more
clearly
and
say
when
we
talk
about
bootstrapping.
What
we
mean
is
this
point
in
the
process
of
you
know
going
from
nothing
to
a
node.
A
So let's take a step back, because look: if you look at the original document that I wrote today, where I took a step back and basically talked about the phases, what we should rally on is basically nomenclature to talk about the phases, but for the lifecycle of objects and how it would hook for a given set of primitives, right. The traditional way I've drawn hook mechanics is you draw a kind of state machine.
A
You
have
your
initial
state,
then
a
machine
can
move
over
to
other
state
and
you
know
we'll
call
it
s1
will
call
generic
s1
s2
and
then
usually
for
every
single
state.
In
that
state
machine.
You
have
a
set
of
pre
and
post
hooks
that
exist
for
every
state
in
that
state
machine,
so
for
just
being
super
generic
right,
so
Todd's
kind
of
worked
this
way
for
different
states
that
exist
in
their
lifecycle
right
and
so
long
as
we
have
for
the
primitives
for
a
machine,
the
set
of
hooks
and
the
states
defined
I.
A
Think
you
can
do
everything
you
need
to
do
right.
I
think
you
have
you,
have
all
the
flexibility
and
capabilities
to
serve
the
different
user
stories.
So
that's
the
reason
I'm
not
as
concerned
about
it.
I
think
the
the
nomenclature
of
how
we
define
some
of
these
pieces-
and
you
know,
pre
articulating
them
with
special
cases-
is
kind
of
your
details
when.
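The generic picture above — states s1 and s2, with a pre and post hook fired around every state — can be sketched as follows. This is a minimal illustration under the speaker's framing, not a proposed API; all names are invented.

```python
# Hypothetical sketch: a linear state machine where entering each state
# fires its pre hook, then the state's own work, then its post hook.
class StateMachine:
    def __init__(self, states, hooks=None):
        self.states = states      # ordered state names, e.g. ["s1", "s2"]
        self.hooks = hooks or {}  # keys like ("pre", "s1"), ("post", "s1")
        self.log = []

    def _fire(self, edge, state):
        fn = self.hooks.get((edge, state))
        if fn:
            fn(state)                      # run the user-supplied hook
        self.log.append(f"{edge}-{state}")

    def run(self):
        for state in self.states:
            self._fire("pre", state)
            self.log.append(state)         # the state's own work happens here
            self._fire("post", state)
        return self.log


events = []
machine = StateMachine(
    ["s1", "s2"],
    hooks={("pre", "s1"): events.append, ("post", "s2"): events.append},
)
order = machine.run()
```

Whether a given hook is an inline script, a watch-triggered external controller, or an HTTP webhook is exactly the implementation detail the discussion treats as orthogonal; the shape above only fixes where the injection points sit.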
A
Yeah, you can use an event mechanism — you can do both. You could have a simple 80% use case, say a script hook, because you're doing it on the machine, and you could also have an event generated which could trigger external controllers. You can trigger a watch notification by the state change itself, because you have the state transition.
A
You can trigger watch notifications on the edge, right, so you could do that as a trigger to an external controller, wherever that may be; or you could inline a script; or you could hook out to an actual HTTP webhook if you wanted to. What I'm trying to do, taking a step back to first principles, is refine a model where you basically have events of some kind, and we have the ability to do customization at the edges of a state. I think no one can argue about that.
A
Now, what the shape of those things looks like — I'm going to rely on some of the other people for that. That's partially the reason why I didn't spend a lot of time on this proposal, just being honest: I figured it relies pretty heavily on how we're going to do extensions, and I just need to define the state machine. But I've been doing this for a while, right — I've been in Kubernetes since Kubernetes was a baby, and before that I did grid systems that had all this stuff.
A
I already had this entire mechanism built out. It was all scripts back in the day, right — every single hook mechanism was a script — so I might lean in that direction. That doesn't mean your script can't do something else, but that implementation detail is, I think, orthogonal to the premise that we want to have a pre and a post for a state. What we need to define is the states for the objects, and if we define the states for the existing objects, I think that would probably be sufficient.
C
One such state is the state for "create machine," right? That's where we're running into the weeds here: at the end of the create-machine event, is it a Kubernetes node? Currently it's assumed yes, and that assumption builds in implications as to what the data model is going to look like. It also has implications if you want to bring your own machines that are not nodes, and you want them to become nodes. So I think it's an important question.
A
That's really hard to argue about, because they all go through the state machine, and it might be that the end result at the very end of the entire state machine is that it's a node. But along the way you have every capability to do the pre steps that you need for your implementation to get it to that final state.
A
So how does this sound: I think it would be super helpful, Michael and Andrew, to outline some of the states that you're thinking of in the document. If we can define the states for the current primitives that exist — ignore everything else for right now, just start with what we have — then we can extrapolate about where we want to go.
D
Absolutely. As I just said, these are really three states with transitions, right. State zero: no Kubernetes-related things are executing, and then my transition has data associated with it — my parameters and scripts. State one: the Kubernetes stack is executing, and then the transition is again my data, parameters, and scripts. And then state two —
D
I
say
this
is
0,
1,
2
and
then
state
2
is
kubernetes
is
executing
and
I
have
done
the
post,
kubernetes
stuff
and
specifically
for
me,
this
0
is
hook
up
the
machine
to
the
right,
VLAN
and
revision,
some
of
the
routing
tables
stuff
and
then
in
state
2,
I
install
some
binaries
on
with
root
permissions,
so
I
only
need
three
states
or
we
could
call
it
one
state
with
these
arrows
coming
off
of
it
and
that's
what
I
tried
to
capture
in
the
document.
Actually,
if.
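The three states just described, with data carried on each transition, might be written down like this. This is a hypothetical encoding of the speaker's example (the script names are invented stand-ins for the VLAN/routing and binary-install steps mentioned).

```python
# Hypothetical encoding of the three states described above, where each
# transition carries user-supplied data: parameters and scripts.
states = {
    0: "no Kubernetes-related things executing",
    1: "Kubernetes stack executing",
    2: "Kubernetes executing, post-Kubernetes work done",
}

transitions = {
    (0, 1): {"scripts": ["setup-vlan.sh", "provision-routes.sh"],
             "params": {"vlan": "42"}},
    (1, 2): {"scripts": ["install-binaries.sh"],
             "params": {"run_as": "root"}},
}


def walk(transitions):
    """Walk the transitions in order, collecting the scripts that would run."""
    ran = []
    for (src, dst) in sorted(transitions):
        ran.extend(transitions[(src, dst)]["scripts"])
    return ran
```

Notice that the Kubernetes stack itself is just the 0-to-1 edge here; everything the speaker needs happens in the data attached to the edges, not in extra states.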
C
Fundamentally, the only thing that I really care about in all of these discussions is that the machine controller itself provisions machines and not much else — yeah, it creates disks for the root volume, but that's it. So give it the cloud-init, whatever you would do on the command line for EC2 or GCP to create a machine; that's the data you feed to the machine controller. It doesn't know about creating VPCs, doesn't know about creating SSH keys — all that stuff's already in place, handled by some higher-level component.
D
Yeah, so the problem with that for me — and I think I've made this clear — is that I do not want a mechanism that does anything at init time or boot time, but I do want to define metadata for before the Kubernetes stack is executing and after the Kubernetes stack is executing. So if we split up this notion so that the machine controller just creates machines, well, that's great, but it would have to leave the machine not really "created" for me in order to do what I need to do.
C
Your
machine
that
would
be
a
like
no
op
machine
controller.
We
have
an
AWS
machine
controller
provider,
we
have
a
TCP
machine
controller
provider
and
yours
is
a
no
op
machine
controller
provider,
so
you
can
put
whatever
you
want
into
the
machine
record
and
your
the
other
pieces,
whatever
those
are
do
whatever
you
need
them
to
do,
either
before
after
yeah.
J
Maybe you have a controller that comes in and fills in some details on the machine and makes the machine bootable. So I think there are ways to reason about this from a design standpoint that allow you to create machines that are not quite ready yet, and then have other portions of the system come in and fill in some details.
A
I would really like for people from the community who have strong opinions to help work on the state machine, at least to define their view of it, and then to start to talk about these edges. We'll have to converse with the extension folks to talk about this proposal and how it fits into their extension mechanism. I think there's a way to have our cake and eat it too with all of this; I don't think it's an either/or situation.
D
And I just realized that we're actually talking about two different state machines: the states of the Machine object, which represents virtual or real hardware, and the states of the operating system execution environment on that machine. In my case, I need to define the post-Kubernetes stuff before I even create the machine, and if we slice and dice both of these state machines and couple them together in a certain way, that could mean I might not be able to use Cluster API at that point.
A
Perhaps I'll schedule something for next week during the same timeslot — I think I should do that, but after that is KubeCon. I'm going to push really hard next week in the docs to try and flesh out the details of this style of approach; I think it'll be a little more beneficial to the community. But I appreciate everyone spending the time to chat.