From YouTube: Kubernetes SIG Cluster Lifecycle 20190220 - Cluster API
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.7a1zt1i34tv9
A: Please go ahead and add your name to the agenda, and if you have any items that you want to bring up, please go ahead and add those as well. I went ahead and put a link in the chat for those who are attending live. To start out with, we have Andrew, who wants to talk about a new project that he's working on.
B: Hey guys, thanks for having me. So yeah, the project I'm working on: it's a really ultra-minimal Linux distribution that aims to just run Kubernetes. That's it. It tries to get rid of anything that is not required by the kubelet or containerd; everything else is thrown out. Long story short, it's built on top of kubeadm, so you simply provide a kubeadm config via user data, and Talos should just know what to do. There's some minor configuration that you need to do, things like HA, but yeah.
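(To make that bootstrap flow concrete, here is a minimal sketch of the kind of kubeadm config that could be delivered through cloud user data, held in a Go string the way a provisioning tool might carry it. The field values are placeholders, and the exact Talos user-data envelope is not shown here.)

```go
package main

import "fmt"

// A plain kubeadm v1beta1 InitConfiguration of the sort that would be
// delivered via instance user data; on boot, the OS hands it to kubeadm.
// The token and socket values below are illustrative placeholders.
const kubeadmConfig = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
`

func main() {
	// In a real flow this string is set as the VM's user data at creation
	// time (e.g. EC2 UserData, GCE instance metadata).
	fmt.Print(kubeadmConfig)
}
```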
A: All right, to give you a little bit of a heads-up: right now, cluster bootstrapping is very provider-specific, so most of the integration that you would end up doing would have to be with each of the individual providers, for the different clouds that you would want to be able to deploy on. Longer term, hopefully we'll be able to make that more common and reusable between the providers, but that's the current state of the world today.
B: We want to shoot for a consistent experience regardless of where you're at. And to answer your question, we've actually had it running on AWS; a buddy of mine got it running on Google Compute; KVM, obviously, that's pretty straightforward. We're trying to get it anywhere that you can: all the major cloud providers. We don't see anything really blocking us in a major way from getting it on all of them. So far we have it on AWS and Google Compute. DigitalOcean doesn't allow floating IPs, which is an issue for custom images.
B: Mutual TLS, so it's just all PKI. And where do those certs come from? You generate some, you provide them to a master, and then — we're still working on sort of the bootstrapping portion for the worker nodes — but they basically have a simple username and password that gets generated. They submit a CSR request to one of the masters, the master sends back a certificate, and then you can reach that node via that certificate.
E: I guess one of the really cool things would be that the cluster provider — the AWS provider or the GCP provider — could have the responsibility of just basically selecting that image and passing in the username and password, and then your Talos master, whatever that is, would talk to the Cluster API and figure out what configuration actually has to happen on that node, like what version of the kubelet to run, for example, right?
B: Yeah, that's kind of what I'm hoping. Right now it just depends on passing in user data from whatever cloud provider, but I'm hoping that with something like the Cluster API, your — I forget what you guys call it — your master cluster can actually be responsible for sending back and forth whatever Talos might need.
F: Hey, sorry, this is Spencer; I'm working on this stuff too, with Andrew. But yeah, I mean, the way I've been kind of thinking about it is having a Talos provider that kind of wraps the other cluster providers, right, so that as you define a Talos config or whatever in Cluster API, it basically spits out another request to, like, the AWS cluster provider or whatever, with the user data baked in.
B: We've kind of done something crazy: we've actually removed all host-level access — no SSH, nothing; it's immutable. So we actually run a small management daemon on every node, so you don't need to be on the host, and it exposes an API that's based on the principle of least privilege. I mean, in the Kubernetes world, you can delete a node and have it come back up, so we don't see too much having to be done with this API. We run those on containerd.
B: The reason being: if there is something wrong and you can't talk to that node for whatever reason — our daemon that lives on every node is as simple as possible, so that we can basically try to keep it up at all times, under any circumstances, and you'll always be able to reach it and reboot the node or something. Cool. Yep.
E: Do you see Cluster API evolving in a way that something like Talos could be plugged in? So, for example, today AWS has code to create a bash script for kubeadm, and GCP has something similar built around cloud-init. If we centralized that code somehow — for that generation of the script; I'm not exactly sure how similar they are — but if we centralized that code, then Talos could plug in at that point and be able to work with the various providers, right? It might be overkill.
A: So one of the common approaches that is shared between some of the other providers right now is to basically spin up the instances, SSH into the hosts, and perform a series of steps; that's something that we've explicitly tried to avoid doing with the AWS provider, instead using cloud-init. So, you know, I would like to see any kind of common approach be able to handle both of those approaches, instead of just one or the other.
E: And we've talked about replacing the SSH with a gRPC-type API in the past, right? I mean, I think we're all converging on this idea of — I'm just looking to see if Mr. nodeadm is here; I don't see him — but yes, I think Platform9 has this concept of nodeadm, which is sort of like an API for that script, and which could then also be used for baking, like 723, like baking an image or stamping an image.
H: So we're looking to do a new release of the AWS provider. Right now we use Bazel to generate our manifests, and we take the Cluster API and get a manifest for the cluster-api manager with the tag as latest, but we probably want a version there. So I've got a bunch of questions, like: how do you version it, and then what would be the process by which we actually do this on a sort of regular basis?
D: Yeah, I know when I cut at least one of the recent releases, I sent a PR to set latest to an actual version number, got the release from there, and then sent a PR to switch it back to latest. Maybe it would make more sense, instead of latest, to have it be, like, you know, 0.5.0-next or something; we'd bump it to 0.5.0 and then set it to 0.6.0-next or whatnot. But I think there is—
D: There are at least a couple of images that are tagged with a specific version that had been uploaded from the main repo, and that sort of follows what we do in upstream Kubernetes, where cutting images — cutting a release — is sort of a three-step process: you submit a PR to set all the version numbers properly; you build from that PR, so that all of the baked-in version numbers are set and the container image is pushed at the right version; and then you submit a PR to bump it to the next one.
F: This was sort of outlining different use cases for people who want to use Cluster API, and sort of what questions they are probably going to ask when they try to use Cluster API, with a couple of possible solutions to the problems. I think there's a good short-term and midterm solution here. Short term, it seems like the goal—
F: The goal would be to keep the providers' versioning reasonably stable across providers, so that users who are coming to look at different providers can know that this version of a provider supports these versions of Kubernetes. I think a lot of that can be solved with documentation, but I think there's another, bigger question here that can be answered, and we were sort of talking about that a little bit earlier.
F: It was mentioned when we were talking about sharing the kubeadm code, because right now, in at least cluster-api-provider-aws, we have embedded the kubeadm configuration types into our provider spec, and that ties our supported Kubernetes versions to the kubeadm types. So that means that, since we're using the v1beta1 types of kubeadm, we are guaranteed to support the following two versions of Kubernetes.
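(A sketch of the embedding being described, with illustrative names rather than the provider's exact definitions: because the provider spec embeds kubeadm's versioned config types, each provider release is pinned to whatever Kubernetes versions that kubeadm API version supports.)

```go
// Illustrative sketch, not the actual cluster-api-provider-aws types.
package v1alpha1

import (
	kubeadmv1beta1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1"
)

// AWSMachineProviderSpec stands in for a provider's machine spec.
type AWSMachineProviderSpec struct {
	// Provider-specific fields (AMI, instance type, ...) omitted.

	// Embedding v1beta1 kubeadm types means this provider version can
	// only render bootstrap configs that v1beta1 kubeadm understands.
	KubeadmConfiguration KubeadmConfiguration `json:"kubeadmConfiguration,omitempty"`
}

// KubeadmConfiguration groups the embedded kubeadm types.
type KubeadmConfiguration struct {
	// Init is used when bootstrapping the first control-plane machine.
	Init kubeadmv1beta1.InitConfiguration `json:"init,omitempty"`
	// Join is used for every machine that joins an existing cluster.
	Join kubeadmv1beta1.JoinConfiguration `json:"join,omitempty"`
}
```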
F: For instance, that means that if you're using our current version of cluster-api-provider-aws, then we can install 1.13, 1.14, and 1.15 when those versions of Kubernetes get released. It would be really nice to have that sort of support seen across all providers, and so this document here, which you're welcome to read and comment on or not, is just outlining sort of where those thoughts are coming from and how we can mitigate that problem before it gets kind of out of hand.
J: So this is a follow-up from the other PR, where we are now using a label to link a machine to a specific cluster — and it's optional, an optional label, if you don't want the link, so that will allow some providers to work without a cluster. This will set the owner ref — a non-controller owner reference — for foreground deletion, so it will delete the machines first and then delete the cluster afterwards. It's a follow-up PR, and I also have an issue for it.
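(A rough sketch of the linking mechanics being described, with a hypothetical helper and an illustrative label key: the machine carries a label naming its cluster, and a non-controller owner reference means that deleting the Cluster with foreground propagation garbage-collects the Machines first.)

```go
package controllers

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

// linkMachineToCluster is a hypothetical helper showing both halves of
// the linkage: the (optional) label and the non-controller owner ref.
func linkMachineToCluster(machine *clusterv1.Machine, cluster *clusterv1.Cluster) {
	// Label linking the machine to a specific cluster; key is illustrative.
	if machine.Labels == nil {
		machine.Labels = map[string]string{}
	}
	machine.Labels["cluster.k8s.io/cluster-name"] = cluster.Name

	// Non-controller owner reference: the Cluster owns the Machine for
	// garbage-collection purposes but is not its managing controller.
	// With foreground deletion, removing the Cluster deletes Machines first.
	machine.OwnerReferences = append(machine.OwnerReferences, metav1.OwnerReference{
		APIVersion: cluster.APIVersion,
		Kind:       cluster.Kind,
		Name:       cluster.Name,
		UID:        cluster.UID,
	})
}
```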
J: Yeah, also, I think we had some interesting — you brought up some issues with doing everything right, which this wouldn't have, because the owner ref would be set by the controller. This is also in line with what we discussed in the proposal that Robert and I put out: kind of forward-looking, but that's kind of where we were going with it — the controller loop, the reconcile, will actually set the owner ref as needed.
J: So I'm not sure, and that will need to be tested in the next PR. This will only be for single machines at this point, which is, like, the simplest case, right? The next PR, which will pretty much extend the owner ref for MachineSets and MachineDeployments to the cluster — I'm willing to run some tests there to make sure there's no race condition around setting the machines or not, okay.
I: On the owner reference — who should set this owner ref? Should it be set by the top-level controller? Should it be the cluster? Because if the machine controller decides to do that, then there is a dependency that the existence of the machine also depends on the existence of the actual cluster — which we discussed some time back, that we might want to keep both of those things independent of each other. But the—
L: Basically, yes. On that point — what I want to say is that I agree that you shouldn't assume that there is a cluster, because there are many use cases — I'm particularly interested in the use case where it's not guaranteed there is a cluster resource — so it's important not to tie that to just looking at whether there is a cluster. I'm not truly convinced; I need to take a closer look at the PR, but it's a good point, yeah.
I: So it's a pattern. What I've seen is that, generally, the MachineDeployment creates the MachineSet, and then the MachineSet creates the machines, right? The MachineSet, let's say, creates the machine and says: create the machine with the following owner references. It's the MachineSet that decides, "I'm creating five machines and I'm putting owner refs on them."
I: From the Cluster API controllers, at that level itself, you could decide that if a machine exists which does not have the owner references, then I can probably adopt it; and if there are no owner refs there, then the machine controller decides whether the higher-level object, like this one, does or does not exist. I've seen this pattern with ReplicaSets and Pods: the ReplicaSet creates the Pods and then sets the reference, and it's the owner that decides whether it should adopt or not.
L: But if you follow that, the idea is not only to check that this doesn't happen; it's more like shifting the responsibility, so that the machine doesn't have to know in which context it was created — the one who created it does. Just, you know, shifting the responsibility of setting that to the one who really knows what's happening.
J: So that's a little different, because here we're only talking about a single Machine item — like, a resource that you're creating. For MachineSet and MachineDeployment, that's kind of different, because you have the two controllers that we support in Cluster API that create these resources, and they set the owner ref for you. Now — I need to check if the owner ref is set before or afterwards; I question whether it's actually set before creating the whole machine.
A: So the difference with trying to link the machine to a controller in this case is that the cluster objects don't actually create machines. The machines are created independently, and the way that we're linking those machines back to a specific cluster is through a label that we're setting on the machines; similarly, with MachineSets and MachineDeployments, to link those to a specific cluster, they would also have to be linked through a label.
N: I just wanted to call out that this morning I reviewed the PR, and I'm really excited about it, because we like this behavior internally at Samsung, where you could delete the cluster and have it automatically delete machines. I just wondered, since it's optional: how do we document this? The current documentation is intended for provider implementers.
J: Yeah, that's a very good question. So, as Justin pointed out, I see some providers mostly using MachineDeployments and MachineSets; in the AWS provider, at this point, we actually lean towards single Machine items, but we might move to deployments very soon. I would expect it's the provider's responsibility to provide enough documentation on what happens when you trigger a delete.
N: So, for provider-implementer documentation — my comment was just suggesting that we document this as part of the cluster controller, because that's where the user-facing semantics are; but from a provider-implementer standpoint, this is actually a machine actuator change, and that's where it should be documented. So—
N: I don't know — I guess my comment is just that I think we should document this. It sounds like, because of where it's implemented, it should be documented as part of the machine actuator. And then I will just say: I personally look forward to the day when I can tell people that deleting the cluster will actually get rid of all the resources, at least for the majority of use cases.
C: So basically, the heuristic is: delete machines that have a specific annotation; the next priority would be to delete unhealthy machines; and then the following priority would be, you know, newest to oldest, depending on which policy you choose. So that's a similar heuristic to what is in the simple delete policy, which basically makes the simple delete policy somewhat useless, and so I wanted to bring up the idea of removing the simple delete policy in favor of the oldest and newest ones.
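(A small runnable sketch of the prioritization being discussed — a hypothetical helper, not the actual MachineSet delete-policy code: annotated machines go first, then unhealthy ones, then ordering by age under the chosen policy. The annotation key and field names are illustrative.)

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// machine is a pared-down stand-in for the Cluster API Machine type,
// with just enough fields to express the heuristic discussed above.
type machine struct {
	name        string
	annotations map[string]string
	healthy     bool
	created     time.Time
}

// sortForDeletion orders machines so the ones to delete first come first:
// explicitly annotated machines, then unhealthy machines, then by age
// according to the chosen policy.
func sortForDeletion(machines []machine, newestFirst bool) {
	rank := func(m machine) int {
		if _, ok := m.annotations["cluster.k8s.io/delete-machine"]; ok {
			return 0 // explicitly marked for deletion
		}
		if !m.healthy {
			return 1 // unhealthy machines go next
		}
		return 2 // healthy machines last, ordered by age
	}
	sort.SliceStable(machines, func(i, j int) bool {
		ri, rj := rank(machines[i]), rank(machines[j])
		if ri != rj {
			return ri < rj
		}
		if newestFirst {
			return machines[i].created.After(machines[j].created)
		}
		return machines[i].created.Before(machines[j].created)
	})
}

func main() {
	ms := []machine{
		{name: "old", healthy: true, created: time.Now().Add(-2 * time.Hour)},
		{name: "sick", healthy: false, created: time.Now().Add(-1 * time.Hour)},
		{name: "new", healthy: true, created: time.Now()},
	}
	sortForDeletion(ms, true)
	fmt.Println(ms[0].name) // "sick": unhealthy outranks newest-first ordering
}
```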
A: You know, why would you delete a healthy machine just because it's the newest, rather than the unhealthy machine? It seems like you would potentially compromise your resource availability by doing that. So that was kind of my rationale there: there isn't a case that I could think of where you wouldn't want to delete an explicitly selected machine or an unhealthy machine before applying the policy.
I: Do we consider that node as unhealthy, or should we consider the machine as unhealthy only when the kubelet is not responding? Even for the kubelet, the same situation can be handled in a way that says: after ten minutes I will replace the machine, but within those ten minutes we consider the machine as unhealthy. And it is quite possible that the kubelet is not responding for a few seconds but will respond again — for a very small duration the machine became unhealthy, but it'll come up again in a short time. That's—
I: —a transient situation, and if we delete it in favor of the newest or oldest delete policies, right, that should be fine outside that transient situation. But in any case, what I was really after is at least a behavior where machines carrying a certain keyword annotation are given the highest priority, which can then be consumed by an outside tool like the autoscaler — that purpose would definitely be served.
E: I was just gonna say, one reason to keep the simple strategy might be that we do expect that simple might be kept when you're running the autoscaler, and it might be perfectly reasonable there. It communicates that a random strategy is actually fine, because we normally expect nodes to be deleted because they have the delete-me annotation — the autoscaler annotation, and I'd love to know who named that; they get a prize. That would be Nate, who's also on.
C: Sorry, I have two other bullets on here I wanted to bring up. The first is: well, simple was the default delete policy before, so if we removed simple, we'd have to choose a new default; but if we do keep simple and just rename it, then it may be just as good a default policy as simple was — we could keep it as, you know, "naive" or "random" or whatever we decide on. And lastly, there's a lot of discussion around atomicity, and I don't have any good solutions for achieving atomicity.
I: Yes, I have no good solution to atomicity at all either; it's a very difficult topic, actually. That was also the reason why it was pending so long: we actually had a separate discussion with the autoscaler community — I think a couple of folks from the autoscaler side joined — and they had concerns that if you don't maintain atomicity in the solution, then there could be cases where the autoscaler will behave strangely; don't expect perfect behavior from the autoscaler. But then, on our side—
I: We also know there is no such good solution, but what I can remember, if I'm not wrong, is that the person I was working with was trying to figure out a good solution to it, and what we can do, at least, is try to align with them: I can try to set up a separate meeting again with the autoscaler community, if possible, and just cross-check whether the solution really satisfies their needs, while putting up our concern that we don't know a good solution.
P: I have a bad solution for atomicity, and I really don't like it, and I think it introduces more problems than it solves.
A: On that note, the next topic is mine. This is a follow-up from last week: we had talked about moving the book over to using Firebase, and I saw, David, that the last update to the docs PR had kind of assumed that, so I just wanted to check between you and Justin if there's been any additional progress or details about getting a Firebase account set up.
N: So there was a meeting this morning with the community's infrastructure working group — Justin added the item to the agenda, so thank you very much, Justin — and there's tentative agreement that we can get a Firebase account. The action item is for me to work with them to submit a request to the Google service desk to get that account.
N: We'll push an update to the GitBook CI process shortly after we gain confidence that this actually works reliably. And then a last note on that: I think Firebase is really good because it keeps us in line with what kubebuilder does, and then, longer term, since the Kubernetes documentation uses Netlify, it might be interesting to migrate kubebuilder — as well as Cluster API and everything built using kubebuilder — to use Netlify.
Q: Yes, while working on the integration tests for the main Cluster API: we need a container registry to store the cluster-api controller manager and also the example provider controller manager. I brought up this question in today's k8s-infra meeting, and the guidance I got there is that if it's only for testing, they are not going to fund it, but if it's for release, for users, then it can be covered. So the other suggestion I'm getting is to use kind, which has the image side-loading feature.
R: I just wanted to — a couple of meetings ago, I mentioned I was gonna put an issue out there. What I've done is I've started to put together a proposal; I'm working with somebody here on it, and I'm just doing it as a Google Doc. Then I'll put an issue in for everybody to review, and if we like it, I'll turn it into a KEP. I just wanted to make sure you knew I hadn't forgotten, everybody — it's still in progress.
I: So, about the way we are doing the machine actuator and the way it could be done — it's more of a long-term thing, and I was recently thinking about it and tried to put some thoughts in the doc. It's about having the machine actuator implemented in a slightly different way, where we don't expect you to write the complete machine controllers, and don't expect you to know the low-level details of the internals of how Cluster API or the machine controller actually works.
I: We expect the user to only implement a certain API — maybe a gRPC API, which I have proposed in the doc — where we give you a set of small repositories, or a template repository, with create-machine, delete-machine, and get-machine methods, and the other things can then be taken care of by the shared machine controller. This would basically keep a hundred percent of the machine controller in one place, which would be the project itself, and I've tried to put some pointers there on where and how it can actually be helpful in terms of adoption.
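(A hedged sketch of what such a gRPC surface could look like, written as the Go server interface a generated stub would produce; the method names follow the create/delete/get description above and are assumptions, not the actual proposal.)

```go
package actuator

import "context"

// MachineActuatorServer is a hypothetical gRPC service interface: a
// provider implements only these three calls, and the shared machine
// controller (living in the cluster-api project itself) drives the
// reconcile loop and calls out to them.
type MachineActuatorServer interface {
	// CreateMachine provisions infrastructure for one machine.
	CreateMachine(ctx context.Context, req *CreateMachineRequest) (*CreateMachineResponse, error)
	// DeleteMachine tears the machine's infrastructure down.
	DeleteMachine(ctx context.Context, req *DeleteMachineRequest) (*DeleteMachineResponse, error)
	// GetMachine reports whether the machine exists and its status.
	GetMachine(ctx context.Context, req *GetMachineRequest) (*GetMachineResponse, error)
}

// Request/response payloads would carry the serialized Machine object
// plus provider-specific config; sketched here as opaque bytes.
type CreateMachineRequest struct{ Machine []byte }
type CreateMachineResponse struct{}
type DeleteMachineRequest struct{ Machine []byte }
type DeleteMachineResponse struct{}
type GetMachineRequest struct{ Machine []byte }
type GetMachineResponse struct {
	Exists bool
	Status []byte
}
```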
I: Well, I was checking with folks from the CSI model, and they also believe that it helps adoption when the knowledge you expect from the user is minimal as they implement extension points — and with CSI, the knowledge is minimal: it's basically implementing a simple—
A: There's probably some overlap with the doc that Ilya started as well, and I know David, Daniel, and myself have been working out kind of a model for a webhook — a similar type of approach, but webhook-based as well — so I think it's good to get the doc started and out there, but I think it's best to kind of hold off discussion on the path forward until after we cut v1alpha1.
L: Actually, it's funny, because just today I was talking to Ben — you know, about the kind machine provider for the Cluster API — and because of some technical question of how to implement it, we actually came to the same sidecar pattern. So basically I would follow your design, because I need the process that is running inside kind — while all the cluster stuff at the time was running outside — so I need some way to communicate to the outside world.
L: So, at the end of the day, it's something very similar: an actuator needs to run an external process, and the cluster — call it the actuator or the controller — would probably be using the same model. So I would probably try your gRPC proposal; if I see it works, I'll give you feedback. It's an entirely different use case — I came to the same problem just for another reason.