From YouTube: Cluster API Breakout Meeting 2019-05-09
B: Okay, hello! Welcome to the Cluster API extension mechanism workstream discussion. We've been having a lot of good conversation on the extension mechanism proposal, and I thought this would be an appropriate time to have some deeper discussion about some of the comments that have been difficult to get through in the document. The way I wanted to run this was to walk through the proposals document, going through each option, talking about what it looks like, and maybe addressing some of the comments in the document.
B: If anyone has a particular extension mechanism they would like to talk about, feel free to put your name next to the one you'd like to discuss, because I don't have to do all the talking myself if anybody else wants to. And it looks like Doug's got one he'd like to talk about, which is great. So I can get started — I guess I'll share my screen; I think that seems pretty reasonable.
B: Right now providers have to vendor in Cluster API, and we define our own extension points with runtime raw extensions. The document does cover the pros and cons of the current approach, and I think, overall, we can agree that it's a bit limiting, based on the goals and responsibilities that we want Cluster API to achieve.
B: So we can have providers implement a series of endpoints in a web server. We can call that web server, then wait for a response and see what the response is. I think one of the big pros there is that we can get really great code reuse, so that providers don't have to vendor in Cluster API.
B
And
they
can
have
their
own
web
server
living
in
its
own
repository
implementing
the
interface
that
cluster
API
defines
so
right
now,
if
so,
for
example,
right
now,
if
you
want
to
get
newest
cluster
API
code,
what
you
would
have
to
do
is
take
your
provider.
Your
provider
repository
update,
the
cluster
API
vendor
code
to
the
newest
version
that
you
want
rebuild
your
entire
application
and
deploy
it
in
the
webhooks
world.
You
would
only
update
the
cluster
e.
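The webhook flow just described — the core controller POSTs to a provider-owned web server and waits for a response — might look roughly like this minimal sketch. The endpoint path, request fields, and response fields here are invented for illustration; the real contract would be whatever the proposal defines.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// MachineRequest is a hypothetical payload the core controller might POST
// to a provider endpoint; the real schema would come out of the proposal.
type MachineRequest struct {
	Name         string          `json:"name"`
	ProviderSpec json.RawMessage `json:"providerSpec"`
}

// MachineResponse is the hypothetical provider reply.
type MachineResponse struct {
	Accepted bool   `json:"accepted"`
	Message  string `json:"message"`
}

// createResponse holds the provider-specific logic; the HTTP handler is
// just a thin decode/encode wrapper around it.
func createResponse(req MachineRequest) MachineResponse {
	return MachineResponse{Accepted: true, Message: "creating " + req.Name}
}

// handleCreate is the endpoint the provider's own web server exposes; only
// this contract, not the provider's code, is shared with Cluster API.
func handleCreate(w http.ResponseWriter, r *http.Request) {
	var req MachineRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	json.NewEncoder(w).Encode(createResponse(req))
}

func main() {
	http.HandleFunc("/machine/create", handleCreate)
	// A real provider would now call http.ListenAndServe(":8443", nil);
	// here we just exercise the provider logic directly.
	fmt.Println(createResponse(MachineRequest{Name: "machine-0"}).Message) // prints "creating machine-0"
}
```

The point made above about code reuse shows up here as well: the provider repository contains only this server, and never vendors Cluster API itself.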
B: Now, there are a couple of open questions here. I think I could have gone through these before, but that's okay. So, what's the replacement for the provider spec and provider status right now? It sort of depends on the implementation. I suppose Cluster API would still have a type, and if we start on this, it kind of gets into the data model working group a little bit, so I'm not sure it's relevant for us to talk about right now.
B: [For gRPC,] we would define a service that providers would have to implement, very much like webhooks: we would define endpoints that providers would have to implement, then we would generate a client from gRPC, and each provider can implement their gRPC server however they want. So it's pretty much exactly the same, except the protocol across the wire is defined by gRPC.
H: That's true, but the definition of it sounds fine, I think. Or, expanding on your question: I don't think it necessarily means that everything will have to be in one server. One thing that we discussed was to split the responsibilities into different groups of actions, so that we can have, like —
H: It really comes down to what transport we want to use. gRPC offers both encoding and decoding, and I guess it's easier to sort of generate a client. But what is the benefit of using gRPC versus just using JSON, like admission controllers do today, right?
G: ...it may be easier for developers to target that architecture or that implementation. gRPC, similarly, is used for things like the CNI plugins, and so it may be, by the same argument, that it's actually an easy implementation to target, because it's well known, etc. So I feel like the advantages of gRPC and webhooks overlap significantly, and the architectures are similar.
K: I see. So try to think of it from the shared controller point of view, because we implemented this together. How it would then look is that you basically need at least one architecture-specific client inside the shared machine controller, right, because the machine controller already needs to know which mechanism it is talking to.
I: For example, if you are making a create call via a webhook, then that's only an HTTP call, which is completely one way of doing it; and if you are talking gRPC, then you basically have to import gRPC-specific packages, and that call is made completely differently. So one possible way could be that, if we are comfortable with providing plugins, we know that in the machine controller we are ready to support different kinds of implementations — for example, we already have the CRD-based implementations here. If you would have been interested in webhooks, or interested in implementing gRPC, there could be a small pluggable client inside the machine controller, where you can basically have a webhook client, a gRPC client, or eventually the normal controller branch, right?
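The small pluggable client being described — the shared machine controller talks to one interface and only a constructor knows which transport sits underneath — could be sketched like this. The interface, constructor, and mechanism names are illustrative assumptions, not from the proposal.

```go
package main

import (
	"errors"
	"fmt"
)

// ProviderClient is the pluggable client: the shared machine controller
// programs against this interface and does not care whether the transport
// underneath is a webhook, gRPC, or an in-process call.
type ProviderClient interface {
	Create(name string) error
	Transport() string
}

type webhookClient struct{}

func (webhookClient) Create(name string) error { return nil } // would POST to the provider's endpoint
func (webhookClient) Transport() string        { return "webhook" }

type grpcClient struct{}

func (grpcClient) Create(name string) error { return nil } // would call a generated gRPC stub
func (grpcClient) Transport() string        { return "grpc" }

// newProviderClient selects the transport-specific client at startup, so
// only this constructor knows which extension mechanism is in use.
func newProviderClient(mechanism string) (ProviderClient, error) {
	switch mechanism {
	case "webhook":
		return webhookClient{}, nil
	case "grpc":
		return grpcClient{}, nil
	}
	return nil, errors.New("unknown extension mechanism: " + mechanism)
}

func main() {
	c, err := newProviderClient("webhook")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Transport()) // prints "webhook"
}
```

This matches the point made next in the meeting: the client could be shared across providers, and swapping transports doesn't change the controller's architecture.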
B: Yeah, I think in both cases the client in the machine or cluster controller could be shared across providers, and so yes, there could be a possibility to support both. We might have a discussion about why we would support both, but I don't see it changing any architecture — architecturally, I don't see using gRPC being any different than using a web server with defined endpoints.
I: ...it gets calls periodically, for instance. If I understand it correctly, with my limited knowledge, everything is event-driven: whenever an object is added, deleted, or updated, the handlers will basically get an event from it, and that's how it will decide to react, right? So someone creates a CRD — that add event goes to the handler; a delete event goes to the handler; and if somebody updates a machine object, an update event goes to the handler, right?
I: So it could be that, in certain cases, the shared controller might want to check the desired state on the infrastructure; it might want to make list-machines calls every certain period of time — every 10 minutes or something — to make sure that things are fine. Is that really achievable? I think it should be achievable, but I'm —
G: Right, so I think maybe a difference between the webhook and gRPC implementations is performance; gRPC, I think, will be higher performing, if we think that's required. But it also requires you to reinvent a large number of capabilities the Kubernetes APIs provide already — so things like validation have to be implemented by hand in gRPC, where they don't necessarily have to be with webhooks.
M: We don't really have a good definition about what events we want to send, what information we want to send, and all that kind of stuff. It seems like with a library model, which is what we have today, we could iterate on that and clean it up a little. People would still need to implement a machine controller for each individual provider, but I think we can improve upon that abstraction to remove some of those pain points. I think that is pretty much the gist of it.
M: You know — POST, exactly? Like, what HTTP verb are we using? And is it going to be a JSON-encoded payload, or is it going to be a binary-encoded payload? So you're inevitably going to have to build something that conforms to whatever it is we decide, even if we build a lot of flexibility into that. And so the actual handling of the HTTP connection —
M: — is, you know, obviously not that difficult, but then the question becomes: how do we handle the different events in a common manner? So if I say "create", and the web server needs to create — is that synchronous? Is that asynchronous? Do they have to write some kind of control loop? Is there some kind of callback that they need to call? So, inevitably, there's going to be a lot of boilerplate that's going to have to be implemented in each provider.
M: I don't think it's going to be as trivial as calling create and they just send us back a 200 OK and we can forget about the rest of the details. And I think once we really start investigating those paths, we're going to see that the code reuse we're saving on the controller side is going to be dwarfed, line by line, by actually implementing these different web server implementations.
M: Yeah, we can — and in fact we've already done that: it's called the machine controller. And so we don't have to implement these abstractions, right, because basically we're saying: hey, here's this thing that you need to implement; go ahead and fork this and then fill in all these details. That's exactly what we're doing today with the machine controller. So what I was saying is: maybe we should make the machine controller a fork-first project, so we know that this is always going to be a custom implementation — here's a skeleton, fill in the middle.
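The fork-first skeleton idea — shared reconcile loop stays put, provider fills in the actuator — can be sketched roughly as follows. The `Actuator` interface and trimmed `Machine` type are simplified stand-ins, not the actual Cluster API types.

```go
package main

import "fmt"

// Machine is a trimmed stand-in for the Cluster API Machine object; the
// real skeleton would use the actual API types.
type Machine struct {
	Name string
}

// Actuator is the part a provider fills in after forking the skeleton;
// existence checks, create, and delete are the provider-specific pieces.
type Actuator interface {
	Create(m *Machine) error
	Delete(m *Machine) error
	Exists(m *Machine) (bool, error)
}

// reconcile is the shared loop the skeleton ships with: it decides whether
// to create or delete and leaves the provider-specific work to the actuator.
func reconcile(a Actuator, m *Machine, deleted bool) error {
	exists, err := a.Exists(m)
	if err != nil {
		return err
	}
	switch {
	case deleted && exists:
		return a.Delete(m)
	case !deleted && !exists:
		return a.Create(m)
	}
	return nil
}

// memActuator is a toy in-memory "provider" used to exercise the loop.
type memActuator struct {
	machines map[string]bool
}

func (a *memActuator) Create(m *Machine) error          { a.machines[m.Name] = true; return nil }
func (a *memActuator) Delete(m *Machine) error          { delete(a.machines, m.Name); return nil }
func (a *memActuator) Exists(m *Machine) (bool, error)  { return a.machines[m.Name], nil }

func main() {
	a := &memActuator{machines: map[string]bool{}}
	if err := reconcile(a, &Machine{Name: "machine-0"}, false); err != nil {
		panic(err)
	}
	fmt.Println(a.machines["machine-0"]) // prints "true"
}
```

Non-provider-specific concerns mentioned later in the meeting, such as node draining, would also live in the shared `reconcile` side rather than in the actuator.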
M: Well, I don't know if it's made it upstream to you all, but take node draining: in our implementation we do the draining at the machine controller level, because that's not provider-specific. We consume the overall reconcile loop — just, you know, does the machine exist, does it not exist, when to create, when to delete — that's all there, and for the most part that works fine. So starting from the baseline controller level obviously is doable, but there's some value there.
M: A similar comment on the library model: if we go with the library model — basically today's model — there's nothing stopping anybody from implementing a provider that is an RPC abstraction provider. That could be like a reference provider implementation, and if the community decides to adopt that, it could become like our default, high-level reference, and other controllers, you know, maybe get deprecated. So there's no reason we couldn't support both of those worlds with the library model, yeah.
G: One idea that we discussed is following along with the add-ons-as-operators model, where they modified kubebuilder to generate more sophisticated scaffolding for add-ons in particular. You could imagine doing something similar if we thought there were going to be lots of Cluster API controllers written.
M: Yeah, the library model doesn't necessarily have to be a fork — it could just be, what do you call it, vendored in. And part of the problem is, you know, we don't want to pull in all these other dependencies into vendor. Then, is there a way we can tackle that problem? As we mentioned in the data model portion: yes, if we remove the cluster field from the Machine portion of the data model, remove everything —
M: — but there are ways we can do that without pulling in things like the cluster and other fields from other parts of the project, as it relates to the machine controller. The machine controller doesn't need to know about any of the rest of the Cluster API components if it's completely, you know, its own thing.
M: Yeah, I think we should review what the perceived pain points of the current implementation are, because I'm not convinced it's entirely that painful. The most painful point is that, for us, the machine controller lives in Cluster API, and so if we fork one, we fork everything — and I think that's really the baseline issue.
A: ...that would implement the workflow for each of the objects, and then it would interact through an API that doesn't rely on REST or gRPC. That gives us an opportunity to leverage the reconcile loop in another operator. And the reason for that is basically coming from this idea: I'm working mostly with bare-metal deployments, and creating a new machine might take 15 minutes.
A
If
it's
a
really
fancy
box
and
rebooting,
it
takes
several
minutes
and
then
I've
got
a
deploying
image
to
it
and
that
sort
of
thing
so
using
something
with
a
REST
API,
where
it's
going
to
be
timing
out.
It
just
doesn't
really
seem
like
a
real
approach
to
me,
but
using
this
approach
also
solves
a
problem.
My
understanding
of
the
purpose
of
doing
all
of
this
rewriting
is
basically
we
don't
like
ven
during
all
of
the
code.
We
don't
like
hiding
provider,
specific
data
in
the
provider
spec
within
the
machine.
A
For
example,
we
want
uniform
but
behavior
across
the
workflow
and
so
I
think
if
we
implement
a
workflow
management
operator
in
this
group-
and
there
is
one
of
those
and
then
we
implement
a
provider
operator
which
defines
its
own
object
and
that
gets
linked
to
the
Machine
object.
Then
the
workflow
operator
can
talk
to
the
other
operator
through
that
second
custom
resource
by
and
I
specified
using
annotations
as
a
first
pass
at
that.
Just
because
that
means
you
don't
have
to
load
the
whole
data
structure.
A
You
can
just
annotate
the
metadata
and
you
you
still
get
that
notification
and
the
other
operator,
but
you
know
that's
obviously
open
to
other
interpretations
like
if
there's
a
better
way
to
do
that.
I'm
kind
of
noodle.
With
this
that
seem
like
the
obvious
thing
to
me,
based
on
what
I
know
right
now,
a
workflow
of
an
example
of
this
based
on
the
wall
on
the
event.
A
Presented
yesterday
in
the
data
structure,
this
shows
how
that
would
work
and
basically,
if
I
run
through
it
quickly,
just
when
the
machine
is
created,
then
all
of
the
things
that
are
watching
for
machines
to
be
created
would
get
that
notification
and
they
would
say,
oh
I,
need
to
participate
in
managing
that
machine.
So
you
have
the
bootstrap
thing
and
you
have
the
provider
API
thing
and
they
would
both
attach.
They
are
custom
resources
to
that
Machine
and
then
the
Machine
controller
would
say
now.
I
have
all
of
the
things
that
I
know.
A
I
listed
a
bunch
of
crows
here,
I
haven't
really
gotten
to
the
content.
I
was
focusing
on
the
crows,
but
basically
it
means
that,
like
for
a
thing
where
you're,
basically
going
to
talk
to
some
other
REST
API,
you
don't
have
to
build
what
amounts
to
a
proxy
service
right
so
to
talk
to
Amazon
I,
don't
build
a
rest
proxy
that
talks
that
receives
the
API
call
from
the
machine
controller
and
then
converts
that
into
whatever
Amazon
call
looks
like
and
makes
that
call.
I.
A
Also,
don't
have
to
in
the
main
workflow
keep
checking
am
I
done
yet
am
I
done
yet?
Am
I
done
yet
because
some
of
those
operations
that
take
a
long
time,
the
controller
that
manages
that
part
of
the
workflow
will
just
deal
with
that
and
when
it's
done
it
will
notify
the
central
controller
that
it's
done
by
setting
another
annotation
on
the
machine
object.
So
there's
no
call-and-response,
it's
basically
just
message
passing
between
the
different
operators
and
again,
if
there's
a
better
way
to
pass
as
messages
than
annotations,
like
that's
totally
up
in
the
air.
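That annotation-based message passing — a workflow controller marks its step done on the Machine's metadata, and the central controller just checks which steps have reported in on each watch event — might look roughly like this. The annotation key prefix and step names are made up for illustration, and the `Machine` here is a trimmed stand-in for the real object.

```go
package main

import "fmt"

// Machine stands in for the Machine object's metadata; only annotations
// matter for this sketch.
type Machine struct {
	Name        string
	Annotations map[string]string
}

// markDone is what a participating controller (bootstrap, provider API,
// etc.) does when its part of the workflow finishes: it sets an annotation
// instead of answering a synchronous call.
func markDone(m *Machine, step string) {
	if m.Annotations == nil {
		m.Annotations = map[string]string{}
	}
	m.Annotations["workflow.example.io/"+step] = "done"
}

// stepsDone is what the central workflow controller checks on each watch
// event; there is no call-and-response, just shared state on the object.
func stepsDone(m *Machine, steps []string) bool {
	for _, s := range steps {
		if m.Annotations["workflow.example.io/"+s] != "done" {
			return false
		}
	}
	return true
}

func main() {
	m := &Machine{Name: "machine-0"}
	markDone(m, "bootstrap")
	markDone(m, "provider-api")
	fmt.Println(stepsDone(m, []string{"bootstrap", "provider-api"})) // prints "true"
}
```

Because the done markers live on the object itself, a slow bare-metal provisioning step simply sets its annotation whenever it finishes — there is no request waiting to time out.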
K: I was just going to say that I was thinking of something very similar to what you just described, because it sounds like, if we are about to say that there are going to be CRDs that will represent provider-specific things, then it makes sense for those CRDs to be directed by provider operators.
L: Same here — this is Pablo. We're basically on the same page: what you described is the way we are thinking of using it. I think they can confirm, but I think it is subtly the same way the Gardener project works — the machine and the bootstrap workflow is similar to that, with independent controllers and no webhooks, which was, you know, working asynchronously.
I: No, I think it's slightly different from that, actually. It's done in layers: at the machine controller layer, we plan to use some kind of extension, and we can assume the machine controller does not really need to have provider-specific code inside it — the extension mechanism could be anything. But if you are comparing with what exists right now, there's what we call the cluster controller, and that cluster controller is one step beyond.
I: What it does is that this controller is basically responsible for looking at the cluster YAML, whatever is created, and out of that, this controller is responsible for creating the necessary number of machine deployments. Then it creates a number of token secrets, or the user data, which are required to bootstrap the machines properly, and then it prepares all of the required information for the machine controller, which can then act on it later. So this code doesn't have to live inside the big Cluster API-specific repository.
L: ...but I think it refers to this split — I mean, the bootstrap provider is like the operating system configuration, more or less. Basically, maybe, you split the logic for providing a physical machine from the logic for providing the bootstrap configuration. Indeed, this is not exactly the same workflow, but there are independent controllers, I think. Well —
A: It makes a lot of sense to me to have as many options for plugging in different pieces as possible here. So if we can break up the workflow into pieces — even if a lot of those are always going to be the standard implementation — providing a way to basically replace the implementation by just replacing which controller you deploy seems like it's really easy to understand in terms of what the user has to do when they build their cluster.
G: One difference between a web server extension mechanism and a CRD extension mechanism is in terms of the user-facing manifests. With a CRD, right, you're creating Kubernetes objects and they're visible to the user. My opinion is that, ideally, the CRDs you define should have some conceptual meaning to the user, and so the idea of a Machine object makes sense. The idea of a Kubernetes-machine object being distinct from a machine object can make sense too, because, from a user perspective, I know what that means.
M: I will say I'm not against a webhook extension or gRPC extension, insofar as we are extending something other than the machine itself, right? So if there's something that you want to do in addition to creating that machine, by all means, let's do a webhook. But as far as provisioning the machine itself, that seems like it's directly in the domain of the machine controller.
G: One example that was done over a year ago: Platform9 created the concept of a provisioned machine. This is roughly equivalent to, like, a VM, and then the Cluster API Machine object was a Kubernetes-provisioned machine. I think that follows this model in a way, right: you're separating the provisioning of the VM, which is provider-specific, from the provisioning of Kubernetes, which is potentially shared.
G
And
in
that
model,
so
this
is
something
platform
9
did
over
a
year
ago.
One
thing
that
we
were
so
when
we
first
conceived
of
having
Web
books
in
addition
to
using
CDs
part
of
the
observation,
was
that
if
you
have
you
know
this
concept
of
a
VM
or
a
provision
machine-
and
you
have
a
concept
of
the
kubernetes
machine
and
those
are
different,
then
the
question
is:
how
do
you
extend
the
the
VM
or
provision
machine,
the
thing
that
doesn't
have
kubernetes
it's
just
bare
metal
and
the
extension
is
all
provider-specific.
M: My preferred provisioning workflow would be image-based: we would have some tooling to help people create a golden image of some kind, and then that is just an AMI ID, or what have you, that they feed to the machine controller — and then we don't have to worry about all this bootstrap business. So that's my preferred solution.
H: We only have a few minutes left, so we should probably wrap up and, you know, keep going with the agenda. My suggestion here is: it seems like we have two main patterns that we want to follow along with — there's the webhook/gRPC one, and the library one, which looks kind of like the CRDs one; they're very similar. I don't know if the best answer is to make multiple proposals and maybe race them, but I think so.
H
I'm,
looking
at
the
sequence
diagram
that
that
proposed
and
I
do
also
have
a
lot
of
questions
here,
but
I
honestly
I
would
like
to
have
those
questions
and
like
a
more
formalized
proposal,
I
think
Chuck
is
presenting
the
web
book
proposal
soon.
So
if
we
can
go
to
that
and
maybe
in
the
next
meeting
next
week,
we
can
have
a
proposal
for
the
CID,
but
we
need
to
come
like
kind
of
like
we
have
a
deadline
for
next
month
to
have
these
proposals
killed
in
and
what
cube
con
in
the
middle.
H
It's
kind
of
like
short
on
time,
so
I
would
feel
like
that
if
the
impetus.
H: That is, without written proposals, I don't think we can actually make any decision and go forward with this. So if the community is split between these two proposals, there should be another person kind of in charge of the other proposal — the one with the CRDs — answering all the questions and making sure that it fits into the requirements that we set for the project when we worked on it. So, does anyone want to bring that forward? Yeah.
K: Happy to volunteer, and I just wanted to add that, as you were saying, there is a lack of cons there, I think. Maybe one thing to discuss in that area is what was just brought up, where, you know, folks potentially want to plug in at arbitrary places, but I think we'd like to enumerate — it would be very useful to enumerate more use cases there and what those things are. Because the use case that's been described, pre-provisioning machines — I think that's a more general sort of use case, and it may deserve its own proposal.
B: Okay, thank you for volunteering to do that. On a similar note, using that template, I wrote up a proposal for webhooks, which I linked to in the agenda. It's not totally done or totally fleshed out, and any early feedback would be awesome. And I think we can meet again next week — we have another meeting scheduled for next week, Thursday at 11:00 Eastern — and we can see where the proposal is for the other method. Is that okay?
L: Just to add something: just today we were discussing, you know, the model, and one of the themes was, as a conclusion, more or less, that there's kind of consensus that it shouldn't be an either-or — both models can coexist at some point. So this is just to clarify that we're not trying to tie entirely to one model or another, but trying to figure out how to make it work.
H: I mean, that's not what I heard, to be honest, and, unless I'm mistaken, the cooperating-controllers CRDs would be an alternative to the webhooks. Or, I guess, there could be a third option, that we support both, but that's for after the proposals to decide, right? Because we also need to make sure that we understand that there's work to be done behind all these proposals, and whether we have the manpower to do that. But —
C: Also, at the end of the day, one of the outcomes of the Cluster API project is going to be to define what validation or conformance of an implementation is — you know, whether or not it meets the requirements for Cluster API. So once we have that defined, you could basically replace any parts of the common Cluster API components that you want, and implement them however you want, as long as you meet that kind of conformance standard that's defined.
I: Okay, so there are the two proposals, and when you say the webhook one, that is basically the server-based proposal, right, where we still want to check whether it is a webhook or gRPC inside it while we are pinning down the details? Is that understanding correct — that we have the server-based proposal, where we check implementations there, and then the other proposal will be the CRDs? Okay, cool.