From YouTube: 20190416 - Cluster API - Extension Mechanism breakout
A: What I wanted to cover today is basically what this work stream is focusing on, what the expected outcomes and goals are, and to give a bit of background about what the actual problem is. In my opinion, this is one of the more straightforward of the four work streams that we've come up with. So what is this work stream?
A: It is how Cluster API interacts with Cluster API providers, and namely, how the Cluster API providers can define the data they need to create clusters in that provider. If you look at AWS, we use CRDs to define the AWS provider configs, but that's not necessarily the only solution.
A: There are other solutions, and there are upsides and downsides to each of them. What we're trying to produce is a proposal for how Cluster API and Cluster API providers should interact, and why we choose one particular mechanism over another. What we have today is the runtime raw extension embedded in the Cluster API types. I've got a couple of links here that you can take a look at to see what those are, and we satisfy that with custom resource definitions.
C: So most of the providers actually do register it with the API server. That said, they don't have to, and we aren't actually leveraging it as a CRD. We just have that in place because we're using controller-runtime and controller-tools to do the generation for the API machinery that we're using for serialization and deserialization.
C: They would, because they would have... yeah. That's part of the reason why it's a raw runtime extension: the cluster object has no idea about the embedded type. The only time we're dealing with that is when we're serializing and deserializing it in the providers themselves, today.

A: Yeah, I see.
A: Great, so I can talk about at least the problems that we currently have with the existing approach. One problem, which actually clashes with some of the use cases for Cluster API, is that we can't determine automatically which providers are installed and running. We actually have to go and see which controllers are running, and then hope that they're named properly and give us a way to figure out what's actually running and which providers are available to us.
A
There
really
is
no
type
safety
upon
cluster
and
machine
create
because
it
gets
read
in
as
a
series
of
bytes
and
doesn't
get
actually
read
until
it
feeds
until
the
provider
reads.
The
configuration
I
think
we
can
do
better
than
that.
So
we
can
say
we
try
to
create
a
cluster
and
says
hey
this
provider.
Config
option
isn't
valid,
it
would
be
really
nice
and
then,
when
we
were
working
with
that
configuration
in
the
AWS
provider,
it
was
pretty
annoying
to
program.
A: That being said, I think there are three or so other mechanisms that we've been talking about that we'd like to do a little bit more investigative work on. There is a webhook mechanism that we could use, or perhaps a gRPC mechanism, alongside the CRDs that exist today. Honestly, I don't actually know how those two things, webhook or gRPC, would work; I haven't really looked into them at all.
A: I would love to know how we could leverage those things to make this work, but I think that's something for the group to decide and talk about. One of the things I would like to do here, not necessarily at this meeting, because I'd like a lot of this work to be asynchronous, is discuss the pros and cons of each of the different approaches for our extension mechanism. Yeah.
C: Part of the reason why we broke up the work streams, Ilya, was to avoid having too many tangential rabbit-hole discussions as we talked about the greater project. Part of the idea of scoping down the extension mechanism is to allow that rabbit hole of "how do we actually extend it" to be focused here, while we talk about how we change up the data model in a separate meeting. That meeting would determine more where the extension points are, versus this meeting now. So that's kind of why we broke those up.
A: Right, and also, you know, webhooks and gRPC are not things that we've done. These are just examples of things we could potentially use. I think it might make sense to look at the use cases, see exactly what we've outlined that we want from Cluster API, reconcile the use cases with the various ways that we could achieve an extension mechanism, and sort of build a pros and cons list that way. I don't...
E: So it doesn't make sense to have a reconciliation loop at the very top level of Cluster API just to implement another control loop in a whole other provider. I think a better model would be to put the reconciliation loop at the provider level itself, and then we don't have to worry about what the actual implementation is for extensions. I like this transparent provider spec, or excuse me, opaque provider spec, that we have now; I think that's probably the best model.
C: So I think there are benefits to not going the independently reconciled route, because if you do go that route, your actual status tends to stay very deeply buried in the tree. I think the webhook or gRPC model potentially provides a better benefit for bubbling common status up, compared to the independent CRD model.
G: In Gardener, with the machine controller manager, we also faced something similar some time back, and we implemented the out-of-tree support for extensions using gRPC. I posted a link in the meeting notes, but the high-level idea was that the extension actuator becomes a gRPC server with a very minimal interface, with three main methods: create machine, delete machine, and list machines. The input to those methods is basically the provider spec.
G: We have the provider spec and different fields within it, but we aren't directly interpreting it on the wire itself; we expect the server to handle it. So basically you give the provider spec to the create machine, delete machine, and list machines methods, which are the three services; the server internally processes it and then sends whatever response is required back to the centrally shared controller. So there were some trade-offs.
G: The shared controller makes sure of a lot of things, but if more responsibilities go into the other projects, we will have to think about what kinds of responsibilities should be taken care of at that point. Because the entire machine controller is shared at once, there's the benefit of the framework controlling things; but then there are a couple of other things, like vendoring. One thing that we are doing with the current machine controller approach is that we are more or less vendoring the entire cluster-api shape inside it.
G: We only want a very small part of it, and at some point it can become a little bit of a burden: we have to keep track of, or at least want to stay fairly compatible with, the upstream cluster-api project, continually vendoring and making sure that we are up to date with the controller.
C: The other thing that I want to say, too, is that we are talking about breaking things up a bit and having, say, differences between infrastructure providers and cluster bootstrapping providers, and things like that. There doesn't necessarily need to be a decision made on one specific extension mechanism. It may be that different types of extension mechanisms make sense for different use cases as we continue to build out as well.
J: In addition, I don't think we need to go into the detail of the proposal, but I still don't have a clear idea of what problem we want to solve, because we started talking about this problem of having this opaque data inside of Cluster API and passing it to the provider. Ideally we could summarize three or four objectives, things we would like to happen. For instance, there is something that I dislike about this approach.
J: It's not obvious what providers are installed, especially if you have more than one, because everything is inside this opaque blob. So that's the kind of thing I would like to list, just as Thomas was saying, because then we can compare solutions against a common set of goals. Yeah.
F: The "easy" part is kind of hard to quantify, because when you think of solutions, and I don't want to get into solution space, you could use something like an operator framework mechanism where you can generate operators, right? Maybe that makes it easier. Another mechanism would be webhooks; another one would be gRPC. Then, when comparing those, we can compare them against the goals. All three of those allow out-of-tree development, and so we would have to talk about: how do you distinguish between the mechanisms that provide out-of-tree development?
D
I
think
at
the
moment
people
are
already
doing
nodes
of
three
providers.
I
mean
all
the
providers
are
kind
of
hard
to
treat,
but
as
far
as
I
could
tell
but
they're
somehow,
in
lockstep,
with
the
trees,
I
think
basically
kind
of
decoupling
providers
from
from
what
is
in
what
is
upstream
would
probably
be
more
helpful.
There,
like
I,
think
that's
the
kind
of
bit
we're
talking
about
really
because
the
moment
like
they
are
in
different
repos
right
for
about
the
feed.
D: There are the general needs in the cluster-api project, and the providers are in different repos, and when I look at how these things relate, how the providers relate to the top-level repo, how versions relate, and how API changes and the evolution of each particular provider relate to that, it's kind of tricky, right? So I think that is the particular bit we're talking about here, not so much developing them separately as such, because that's already happening per se.
F: Yeah, you're right that we are currently developing out of tree. The question is how much effort in total it is, right? Last year we had a change where we went from aggregated API servers to CRDs, and that broke every provider. Those kinds of changes are extremely disruptive, and I think it would be better if we had a more stable migration plan. So right now they're out of tree, but, I'm looking for the words, but we want...
C: Yeah, there are issues with the way that we build and deploy today, because we have to vendor the Cluster API controllers for the machine and the cluster. We end up deploying the provider CRDs along with the common CRDs, and if you try to deploy multiple providers, that creates a really easy way of breaking one or more of the deployed providers, based on whichever version of the CRDs was deployed last.
D
It
seems
plausible
to
suggest
that,
generally,
both
the
the
upstream
API
different
issues
and
each
of
the
providers
should
be
able
to
evolve
at
their
own
pace
right.
So
if,
if
certain
provider
wants
to
do
something
that
doesn't
yet
exist
in
the
main
rate
or
whatever,
they
should
be
able
to
to
kind
of
accomplish
that
somehow
and
similarly,
the
other
way
around
the
beacon
p.m.
screen
image
table
for
evolve
and
move
ahead,
while
the
providers
is
still
catching
out.
J: Also, I don't know, it's probably related, but I remember Jason raising this some time back: the point that at present we need to have a provider-specific binary for the CLI, and that's something that is not manageable in the long term; it's annoying. I think the CLI should somehow try to be the same for everyone, you know, having a common binary for the CLI, with probably some extension mechanism for the provider-specific part.
A
Yeah
I
think
that's
a
good
point.
I,
don't
know
how
much
I
guess
I
guess
I
would
go
back
to
the
East
cases.
Document
like
I,
don't
really
know
how
much
we
have
been
talking
about
sort
of
the,
because
that's
kind
of
that's
kind
of
like
the
user,
the
user
versus
the
developer,
like
oddly
so
I,
don't
know
what
we're
what
we,
what
we
want
to
optimize
for,
like
which
one
is
more
important,
I.
J: I would think that the extension mechanism for the provider may be a specific one, but take the one I mentioned before: if we have a different CLI for different providers, and one of the objectives of Cluster API is to support multiple providers for a particular user, how do you do that? I mean, how do I have three different CLIs for different providers that all do more or less the same thing? That particular one is probably more user-facing, but yeah.
D: Maybe one way to think about it is that those things are actually somewhat interdependent, and any given provider should have equal chances to provide the best user experience they would like to provide to their users, so they need to be able to do that. That perhaps is a way to think about it too. Both of those things are kind of important, I mean almost equally important, but from the upstream point of view...
D
The
provider
should
be
able
to,
you
know,
should
have
equal
chances
essentially
to
to
deliver
the
best
user
experience
on
there
and
whatever
they
define
as
users.
Obviously,
you
know
there
are
some
cases
where
it's
very
complicated,
but
it's
hardly
gets
we
get
into.
Then
we
get
into
the
territory
where
it's
really
too
hard
to
generalize,
but
this
may
be
a
way
to
think
about
it.
What
people
think
I.
B: I think that's true, but I think we also want to corral everyone around a usable API that works for everyone. So, for example, I'm very wary of saying that we would expect Cluster API providers to fork Cluster API, because we want MachineDeployments and MachineSets to operate exactly the same way whether you're on AWS, GCE, Azure, or bare metal, and we don't want that to diverge.
B: I think there is actually a case for splitting the Cluster API repo into the machine deployment, machine set, and machine actuator pieces, in that I think it might help people understand that when they're writing cluster-api-provider-mycloud, they have to vendor this library and get just that, rather than vendoring this whole repo and getting all this stuff which they don't think is necessary.
B
But
you
know
you
can
as
long
you
don't
have
to
use
the
machine
actuator
to
write
a
functioning
machine
controller.
We
would
like
you
to
and
you
might
full
file
with
conformance
if
you
don't.
But
if
you
want
to
write
it
in
Ruby
or
whatever
it
is,
you
can
go
and
do
that
and
we,
but
we
do
want
you
to
continue
to
use
the
machine
deployment
the
Machine
sets.
So
if
as
it
Michael
as
Michael
suggests
like
if
we
achieve
that
by
setting
into
a
different
repo,
then
I
be
in
favor
of
that.
C: Yeah, I mean, depending on what kind of extension mechanism we talk about, there would be a common machine controller that is deployed with Cluster API. It would either call out via webhook or gRPC, or would somehow bubble up status from an independently reconciled provider CRD that would be distinct and separate from the machine controller.
E
Yeah
I,
don't
see,
I,
don't
see
a
lot
of
value
in
doing
that,
because
again
we
already
have
a
control
loop
in
the
machine
controller
and
if
we
have
to
write
a
G,
RPC
server
or
what
hook
server,
that's
that
is
gonna,
be
effort,
duplicated
across
every
provider
and
then
getting
that
coupling
and
timing
and
everything
right
and
making
sure
or
communicating
the
right
things
at
the
right
times
and
I.
Think
that's
going
to
be
a
lot
trickier
than
just
you
know:
I
I,
like
the
library
version
of
implementing
actuator,
it
feels
very
clean.
G: I would completely agree on that. We actually also did something similar: in the same shared repository you can have a separate section where you implement different kinds of plugins, so we implemented one plugin for gRPC. The plugins talk to the providers whenever required, and a provider that isn't supporting it can still be deployed in-tree, all that kind of thing. I think that makes perfect sense to me.
G
We
don't
have
to
really
hard
cut
from
that's
one
and
the
second
thing
which
I
think.
Overall,
what
we
are
missing
is
the
adoption
point
of
view
so
I
think
well,
but
the
way
I
see
the
CSI
in
order
right
away.
Dave
Bing
is
because
the
adoption
was
super
easy.
So
what
do
you
really
expect
from
the
user
or
the
provider
implementers
at
work?
What
do
they
really
need
to
know
if
they
want
to
bring
their
providers
inside
the
stack?
So
we
need
to
then
decide
that
what
minimal
can
we
expect
from
them?
G
Do
they
expect
them
to
be?
Knowing
about
writing
the
full-fledged
controllers?
Maintaining
such
controllers
or
to
what
extent
so,
if
the
implemented
Israel
of
well,
then
probably
we
can
give
much
better
flexibility.
So
we
take
our
own
controllers,
but
if
there
are
some,
certainly
new
providers
and
would
not
really
want
to
know
how
close
API
really
works
internally
until
you
want
to
build
with
internal
stuff
and
probably
can
give
other
way
where
they
can
just
implement
simple
methods
and
still
works
for
them
is
the
way
it
should
work
just
just
treating
overflow.
C: We need to get the different kinds of extension proposals fleshed out in the proposal doc as well, and then we can continue to iterate on weighing the different approaches. I think there is some variability even within some of the approaches, because somebody mentioned earlier that they do like the kind of embedded blob that we have today for the provider config, but I know that others have expressed that it's actually kind of painful having the embedded blob as well. So we need to weigh that too.
D: Yeah, and this kind of leads to the data model, doesn't it? So I was wondering: how do we manage the interdependencies between whatever decisions are going to be made in the data model work stream and here? Would it make sense to go through the data model discussions for a couple of weeks and then reconvene here after that, or how are we thinking about this?
H: We can keep it separate. Within the data model we'll treat the extension mechanism as kind of a black box: we don't know what this extension mechanism is going to look like, but we know it's there. Then, when we get to more implementation details, we'll see what this work stream has done and align with it.