From YouTube: 20190517 - Cluster API - Extension Mechanism breakout
C
Yeah, so I know one area where we've been having a lot of discussion is around what a machine set is for. I'm wondering if it would be useful, since we have at least a few of the people here who I've been commenting about that with, to talk about the philosophy around replica sets versus pods and how that translates to machine sets and machines.
C
It's not — there's a replication controller controller, which is the controller, and then the shorter form, replication controller, represents the resource that you would be creating. The intent of a replication controller is to create a series of identical replicas of a pod, and everything that you can define for a pod and its spec you would see set in the replication controller in a field called template. So the template in the replication controller is essentially a pod spec, and that way everything you define in a pod.
C
You
can
define
on
a
replication
controller
and
there's
that
got
deprecated
and
turned
into
replica
sets.
But
I
don't
have
any
additional
comments
on
on
the
distinction.
So,
for
all
intents
and
purposes,
we
can
basically
just
keep
saying
replica
set
instead
of
replication
controller.
But
it's
the
same
thing
where
there's
a
template
and
the
template
is
the
pods
back
and
one
level
above
replica
set
is
deployment
and
the
purpose
of
deployment
is
to
manage
replica
sets
and
the
example
that
I
think
works
for
understanding.
C
So
if
everybody's
cool
with
those
definitions,
basically
the
same
thing
applies
to
the
way
that
machine
machine
set
and
machine
deployment
have
been
conceived
to
date
in
v1
alpha
one.
So
in
a
machine
you
specify
everything
you
need
to
run
a
a
server
of
some
sort
and
there
there's
obviously
some
common
elements
in
the
machine
and
then
what
you
see
in
v1
alpha
one
is
this
provider
spec
field,
which
is
an
inline
place
where
you
can
put
arbitrary
data,
that
is
provider
specific
so
for
AWS?
C
C
And so you'll have two machine sets for a period of time while the rolling update is in progress, and then you'll end up with a single, newer machine set. I wanted to mention that because I think there's been some confusion around why I've been suggesting we not modify the machine set to add certain fields — because the machine set is the place where you're going to specify your user data, and that gets applied to the machines.
C
I
think
there's
just
been
some
confusion
and
if
we
decide
as
a
group
that
we
want
to
retain
the
philosophy
that
a
machine
set
is
literally
something
that
replicates
machines,
then
we
wouldn't
be
adding
any
new
fields
directly
on
the
machine
set
that
represent
user
data
or
some
sort
of
bootstrapping
bit
or
anything
along
those
lines.
Because
you
can't
have
a
machine
or
you
can't
have
a
machine
set
until
you
have
a
machine,
and
so
in
my
head,
it
makes
the
most
sense
to
think
solely
about
machines
when
it
comes
to.
What
are
we
provisioning?
C
How
are
we
dropping?
How
are
we,
configuring
and
anything?
That's
in
the
machine
set
layer
or
the
Machine
deployment
layer
is
really
just
about
replicas
and
rolling
updates,
or
some
other
sort
of
update
mechanism,
and
so
I'm,
hoping
that
that
makes
sense
everybody
and
that
we
can
focus
on
the
Machine
layer
and
what
it
means
to
configure
bootstrap,
etcetera
and
that
all
of
that
will
just
automatically
be
will
fall
into
place
for
machine
sets
and
machine
deployments,
because
again,
they're
really
just
about
replicas
and
updates.
C
Least
as
its
implemented
now,
given
that
it's
based
on
the
concept
of
how
replica
set
functions-
yes
there,
there
are
no
placeholders
unless
the
system
supports
those
placeholders
but
generally
like
the
placeholders
that
you
might
find
in
the
pods,
for
example,
would
be
environment
variable
placeholders,
so
maybe
we
can
identify
some
things
that
make
sense
for
machines,
but
generally
it's
just.
If
you
want
to
run
a
machine,
you
need
to
fully
define
it
and
if
you
want
to
run
ten
copies
of
a
machine,
you've
fully
defined
it
in
the
template
in
the
machine
set.
So.
A
One thing I would add to that is, if we do end up straying away from this model, I think we need to seriously consider renaming to avoid potential confusion, because one of the ways that we introduce these concepts to users is by drawing analogies to the existing Kubernetes objects that they may already be aware of. So if we no longer actually mimic the behaviors of those upstream Kubernetes objects, then we should really try to make sure to draw a distinction within naming to avoid confusion. Okay.
B
Okay, so I think, given that understanding — it's basically, I guess, the same object that's attached as a providerSpec. I mean, with the providerSpec we're copying data from one object to another, so it's a copy. But in this new model — the data model that's been proposed — the idea is to have a reference to another object that holds the provider-specific data. And if we follow that model, then we might just copy the reference, which means we have the same object being used by multiple machines.
A
I
think
one
of
the
things
that
we
can
do
is
we
define
the
API
is
we
can
declare
that
certain
objects
should
be
immutable
and,
in
this
case
I
don't
think
we
necessarily
want
to
copy
that
data.
I.
Think
the
one
thing
we
want
to
avoid
is
the
provider
specific
blobs
that
are
embedded
right
now,
because
the
management
and
validation
of
those
is
more
complex
than
it
really
should
be.
A
So
the
reference
has
helped
simplify
that
and
that
we
can
just
have
simple
admission
webhooks
to
handle
validation
that
are
provided
by
the
actual
implementers
of
whatever
the
extensions,
whatever
we're
calling
the
extensions
and
B
1
alpha
2.
So
the
simplicity
of
that
model
makes
a
lot
of
sense,
but
I
think
one
of
the
things
we
may
want
to
require
of
it
is
that
these
these
references
are
immutable,
that
there
not
updated
after
they
are
created,
and
then
that
would
solve
kind
of
the
issue
of
linking
these
references
in
multiple
places.
Do.
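The immutability rule being proposed could be enforced with a small validating admission check that rejects any update which changes the reference set at creation time. This is a minimal sketch with hypothetical type and field names, not the real webhook machinery or upstream API:

```go
package main

import (
	"errors"
	"fmt"
)

// ObjectRef is a simplified reference to a provider-specific config object.
type ObjectRef struct {
	APIVersion string
	Kind       string
	Name       string
}

// MachineSpec holds a reference rather than an embedded provider blob.
type MachineSpec struct {
	ProviderConfigRef ObjectRef
}

// validateUpdate is the core of an admission check: the reference set at
// creation must never change on update.
func validateUpdate(oldSpec, newSpec MachineSpec) error {
	if oldSpec.ProviderConfigRef != newSpec.ProviderConfigRef {
		return errors.New("spec.providerConfigRef is immutable")
	}
	return nil
}

func main() {
	a := MachineSpec{ProviderConfigRef: ObjectRef{"v1alpha2", "AWSMachineConfig", "workers"}}
	b := a
	fmt.Println(validateUpdate(a, b)) // unchanged reference is allowed
	b.ProviderConfigRef.Name = "other"
	fmt.Println(validateUpdate(a, b)) // changed reference is rejected
}
```

In a real cluster this comparison would live inside a ValidatingWebhookConfiguration handler supplied by the provider implementer, as the speaker suggests.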
B
Sure, I mean, it's easy enough to implement; I wasn't worried about that. What I was getting at is: if it's going to be the oddball thing that no one else does, do we want to consider whether that's the thing we want to take on? Yeah — Alan had a question in chat. He said he can't really be on the phone. It looks like people are answering in chat there, about what is the equivalent analogy for machine class, similar to what Andy went through with the other objects.
E
Actually, I am on the phone — it's just probably not important enough to discuss here, and we already kind of moved on, but it was relevant to what Andy was going through. For completeness, if we're going to elaborate on machine set and machine deployment, we might as well cover the sort of etymology of machine class and figure it out.
C
The way that machine class works in v1alpha1 is it gives you — excuse me — it gives you the opportunity to take the provider-specific data — again, this could be instance type, region, networking details, storage details, whatever is provider specific — put it in a single place, and use it as an object reference when you're specifying a machine, be it in a machine, a machine set, or a machine deployment.
C
So
if
you
were
creating
a
hundred
machines
by
hand,
then
you
either
can
copy
and
paste
all
the
details
that
are
provider
specific
into
each
machine
or
you
can
have
them
reference,
a
single
machine
class
that
has
all
of
that
information.
So
it's
a
way
to
consolidate
and
stayed
on
copy
and
paste
errors.
C
But
I
know
Jason
has
talked
I
think
maybe
in
the
data
model
proposal,
if
not
somewhere,
about
trying
to
change
machine
class
going
forward
so
that
it's
more
about
sizing
and
capacity
planning
and
have
it
be
more
in
line
with
storage
class
and
not
use
it
as
a
placeholder
for
all
provider.
Specific
information,
as
it
relates
to
a
machine
so.
A
The
providers
specification
provides
not
only
that
sizing
information.
You
know
disk
space
CPUs
additional
resources
such
as
GPUs
those
sorts
of
things.
It
also
provides
a
lot
of
other
things
and,
and
it
provides
a
lot
of
things
depending
on
the
provider
so,
for
example,
the
GCP
provider,
it
doesn't
have
any
impact
on
the
configuration
for
bootstrapping
the
node
as
far
as
like
cube,
ATM
configuration
because
that's
fed
into
the
GCP
provider
through
another
mechanism,
but
for
the
AWS
provider
and
other
providers
that
allow
configuring
the
cube
ATM
an
it
process
through
the
provider
spec.
A
That's not actually that bad, because you can specify, you know, however many CPUs and how much memory you want for an instance in a relatively arbitrary manner, but trying to map that specifically into, like, AWS sizes — or even if you're talking about the OpenStack world, where you have administrator-defined classes — now you're in a place where doing that mapping is going to get pretty intractable, pretty quick.
B
Yeah, I like that idea a little bit. I mean, coming from the bare metal case, I probably have even less flexibility than an administrator-configured cloud, right? I've got exactly certain machines that I can actually deploy to, and, you know, the best fit is probably going to be way bigger than some minimum that's been specified.
B
What this meeting was about — yeah, like I said at the beginning, I didn't necessarily want us to try and decide anything, but I do want to make sure that everything is clear, so that all the planning time everybody's going to have between now and Monday can be spent thinking about what the proposals mean. Yeah.
A
I may offer up kind of — it may be a controversial opinion: I don't necessarily know if it needs to be an either/or situation. Having enumerated the different proposals, I think we should at least be open to the potential that there may be a best fit based on what type of extension point we're talking about, and the use cases for that specific extension point.
H
Oh yes — basically saying the same thing as the position just stated: probably the most important part, for that to happen, is that the data model supports both. We are noting that, in the shape it is, it probably does, so we can delay the decision, as you said, probably for weeks. At some point we'll decide that some part of the workflow is more adequate for one or the other, for instance, but we need to try to keep this open — I mean, we probably will not know that beforehand, to me.
F
Yes, and I wanted to contribute to this discussion by letting everyone know why I prefer controllers. The reason is that, with the controller model, your functions that operate on the current state happen to have visibility into the state of the entire system. In a webhook model, we have to define an API for each hook — like, if you look at CSI, that's their primary motivation: defining what parameters exist for each event that you might want to hook into for a disk. But in our case, because we have such varied use cases.
F
This
is
what
I
was
trying
to
get
at
like
last
Friday
or
whenever
that
was
I
might
at
initialization
time
need
to
know
the
data
the
metadata
for
post
initialization
time
so
in
the
controller
model.
I
have
complete
freedom
to
do
that,
and
so
it's
not
because
I've
already
written
a
controller
and
go
which
I
have
it's
really
because
I
think
the
controller
model
is
the
best
way
to
do
this.
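The contrast being drawn can be illustrated with a toy reconcile function: a controller is handed only an object's name and is free to read whatever cluster state it needs, rather than receiving a fixed per-hook parameter set. All types and names here are hypothetical, not real Cluster API types:

```go
package main

import "fmt"

// Machine is a stand-in for the object being reconciled.
type Machine struct {
	Name         string
	Bootstrapped bool
}

// Cluster is a stand-in for "the state of the entire system" that a
// controller can read at will (a webhook would only see its arguments).
type Cluster struct {
	ControlPlaneEndpoint string
	Machines             map[string]*Machine
}

// reconcile drives the named machine toward the desired state, consulting
// whatever other state it needs along the way.
func reconcile(c *Cluster, name string) error {
	m, ok := c.Machines[name]
	if !ok {
		return fmt.Errorf("machine %q not found", name)
	}
	// Freely read unrelated state, e.g. the control plane endpoint
	// required before bootstrapping can proceed.
	if c.ControlPlaneEndpoint == "" {
		return fmt.Errorf("control plane endpoint not ready; requeue")
	}
	m.Bootstrapped = true
	return nil
}

func main() {
	c := &Cluster{
		ControlPlaneEndpoint: "https://10.0.0.1:6443",
		Machines:             map[string]*Machine{"worker-0": {Name: "worker-0"}},
	}
	fmt.Println(reconcile(c, "worker-0"), c.Machines["worker-0"].Bootstrapped)
}
```

A real controller would run this in a watch-driven loop and requeue on the "not ready" error; the sketch only shows the visibility argument the speaker is making.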
H
Yeah, just on the question of how we can actually make the model work for both approaches: I think that, as long as we keep the provider-specific part as a reference to an external object and not embedded, that opens it all up. I think that was the main obstacle we had initially — that any external CRD controller could not easily implement it, because the provider-specific data was inside a little blob. But once we have moved to this model, I think that is a first approach. The second one was a proposal.
H
I
think
it
was
from
Benson
that,
even
when
the
Werfel
was
like
a
model
like
calling
sterling
work,
hooks
thought
those
were
optional,
decided
you
register
something
that
you
call
it.
If
not,
you
assume
that
somebody's
taking
care
of
that
object
and
I
think
that
this
more
or
less
the
way
I
see
it
could
probably
be
the
future.
It's
like
being
flexible,
say
we
define
a
workflow
we'd
son
with
Pacific
called,
but
is
there
is
nothing
there?
Well
what
we
assume
there
is
an
external
controller.
C
Okay, Daniel, you have your hand up — oh, okay, now it's down. All right, I had mine up, so I'll go if that's cool. So I want to start off by saying, like, I'm not against controllers, but I have some questions that I honestly don't know the answers to.
C
So
if
we,
if
we
have
an
AWS
machine
kit,
config
and
a
GCP
machine
config
in
a
bare-metal
machine
config-
and
these
are
all
provider
specific-
and
presumably
they
have
all
the
stuff
we've
been
talking
about
before
in
terms
of
sizing
and
whatnot.
How
much
of
the
stuff
stays
in
machine
spec
and
how
useful
is
the
machine
object
at
the
end
of
the
day?
C
If
it
generally
is
just
has
a
reference
to
something:
that's
provider,
specific
and
I'm
like
I'm
asking
this,
because
if
this
were
puts
a
Ruby
on
Rails,
where
you
have
polymorphic
types
and
you
can
have
multiple
tables
and
have
a
base
table,
and
you
have
an
AWS
machine,
config
table
and
a
GCP
table
and
whatnot.
Then
in
Ruby
on
Rails,
you
can
say,
go
get
me
all.
C
We can define a machine basically as a placeholder, to give it a name, so it can point to the polymorphic aspect — which is an AWS config or GCP or whatever. And so I'm trying to figure out: if we keep things kind of the way they are, what is the overall purpose of machine, unless it's just that we need it because we want to be able to keep track of all the machines — and here they are?
J
Events — like something to uniquely identify that machine, the health of it. And, you know, I've seen the network addresses, for example, being used to correlate a machine to the node after the node pops up, those kinds of things. In some ways the status of the machine almost seems more valuable than the spec, which, as you pointed out, has a lot of stuff that might be better served going into provider-specific data structures.
I
Yeah, you know — read and write, and figure out what phase the machine is currently in. The machine controller — the Cluster API machine controller — is what's in charge of shepherding that object from phase to phase, sort of. It's in charge of orchestrating the controllers, or maybe indicating to those controllers: okay, now it's your turn to do whatever you need to do on this object; now it's your turn, et cetera.
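The phase-shepherding idea can be sketched as a small ordered state machine: the core controller advances a machine through phases and, at each one, hands the turn to the appropriate cooperating controller. The phase names below are illustrative only, not a real upstream enumeration:

```go
package main

import "fmt"

// Phase is one step in a machine's lifecycle.
type Phase string

const (
	Provisioning Phase = "Provisioning" // infrastructure being created
	Provisioned  Phase = "Provisioned"  // server exists, not yet a node
	Running      Phase = "Running"      // bootstrapped and joined
)

// next returns the phase that follows p, or p itself when terminal.
func next(p Phase) Phase {
	switch p {
	case Provisioning:
		return Provisioned
	case Provisioned:
		return Running
	default:
		return p
	}
}

func main() {
	// The shepherding loop: at each phase a different cooperating
	// controller would be told "now it's your turn".
	for p := Provisioning; ; p = next(p) {
		fmt.Println(p)
		if next(p) == p {
			break
		}
	}
}
```

The later discussion about not codifying a rigid lockstep diagram applies here too: in practice some transitions can overlap or happen concurrently, which a strict linear chain like this cannot express.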
F
The phases are valuable. I think status is also valuable, in the sense that if we have a machine deployment and that deployment needs to bring up a new machine because a machine has gone down — so not even in the rolling case, right, just the case where the machine is now shut down or dead — now that machine's status is "powered off" or whatever, and then we can bring up a new machine. Jason.
A
Yeah,
so
sorry,
I
can't
I
apparently
can't
raise
my
hand
in
the
chat
because
I'm
a
host
all
right.
So
one
of
the
things
I
wanted
to
say
is
I
think
most
of
the
discussion
around
management
of
the
state
and
everything
makes
a
lot
of
sense
and
how
we've
been
discussing
around
like
infrastructure
providers,
one
of
the
things
that
I
worry
about.
H
Okay, there are a lot of people — I'll just be brief. I'm just recommending — we don't really need the coordination split, literally; yes, we don't have to. But the thing is, for the sanity of people, it's better to have it. Maybe we can even do it in a less event-driven, more flow-based way, but if it's just based on the cooperating controllers, by the time somebody is looking at that logic, it's really hard to get it. So I think that probably this piece of workflow — the main workflow — is only creating these CRDs, and the cooperating controllers are just working on those objects.
E
A bunch of cooperating controllers, right now, sort of watching for things to be populated, and then, when they see the right things, they sort of move forward. I guess my quick point here, to maybe the wider brain trust, is that I feel like this is a wider problem than just controllers. We have the same issue when we talk about operators as well.
E
We
want
we
want
different
operators
to
play
with
CR
DS
and
also
make
them
a
little
bit
more
pluggable
and
I
guess
I'm,
sort
of
wondering
I,
wonder
if
other
people,
besides
this
community,
are
thinking
about
this
problem
because
there's
sort
of
a
you
know,
there's
a
there's,
a
there's,
an
analogy
here
between
what
we're
trying
to
do
here,
and
you
know
even
something
that
has
nothing
to
do
with
the
cluster
API.
Just
operators
trying
to
work
together
against
CR,
DS
and
also
to
you
know,
have
the
opportunity
to
make
them
swappable
and
I.
K
Yeah, I was gonna say: when it comes to the relationship — the lifecycle of the Cluster API components, including these extension points we're currently talking about, say the bootstrap controller — at some point, presumably, you might need to carry two bootstrap controllers: one for the current version and one for the previous version.
F
Yeah, so first of all, to respond to Michael's point: these controllers would be Kubernetes deployments or stateful sets or daemon sets — however we actually deploy them. So, just like with Kubernetes API versioning, the controller would be observing objects with a certain API version, and if there was a field for Kubernetes version, then the functions of the controller that perform a reconcile on the objects it's watching could have a lookup table for different versions and could do different things depending on the version. So I don't think.
F
That's
a
huge
issue
of
like
contentious
controllers.
What
I
do
want
to
caution
us
against
is
tying
us
to
specific
tools,
even
like
cuvee,
DM,
so
I
know.
This
is
a
controversial
thing
to
say,
but
we
can
have
a
field
that
abstracts
over
the
metadata
that
Kubb
a
DM
wants
and
then
avoid
some
of
these
issues
with
transient
versions
and
stuff
like
that.
C
And if you pick a kubeadm bootstrap provider, you would have a reference to a kubeadm-specific, provider-specific CRD instance that has all the details, and if you don't want to use kubeadm, you could use some other specific one. Or — I think maybe what you were getting at — genericize that and make it translate to a kubeadm-based implementation or not. But anyways, I think they're solvable problems. So, the reason I had my hand raised is.
C
Things like: I need a load balancer, and it might be AWS, for TCP, or whatever. And I kind of feel like, if we can talk through some of those — and I realize we only have about 15 minutes left in this hour, so it's probably not enough time — but if we can reason through some of those workflows, maybe it'll help figure out how we do this cooperation, be it with controllers or asynchronous webhook calls.
K
Michael,
just
a
quick
counterpoint
to
and
Andrew
a
minute
ago,
it
is
possible
during
one
kubernetes
release
that
you
might
want
to
Bluegreen
some
changes
to
your
bootstrap
configuration
config
files.
What
have
you
and
so
I
do
think
it's
important
that
you
have
the
ability
to
support
multiple
different
controllers
or
endpoints
or
whatever
it
ends
up
being
for
that
part
of
the
workflow.
I
Yeah, I want to respond to what Andy was saying. So, in the past, I've tried to reason about this sort of workflow — or tried to help myself reason about it — by splitting it up into, you know, completely independent phases. And so, yeah, define, sort of: okay, what do I need by the time that I want to bootstrap machines? You know, I need to know the control plane endpoint.
I
So
if
I
need
to
know
that,
then
you
know
if
I
had
an
infrastructure
phase
that
came
first
either
that
would
have
you
know,
brought
up
an
e
lb
or
or
whatever
else
right
like
the
zip.
The
main
thing
is
that
I
have
this
control,
plane,
endpoint,
and
then
you
know
by
the
time
you're
done
bootstrapping
it.
You
know
like
okay,
you
need
to
you
know,
retrieve
a
cute
config.
I
B
Yeah, I think I was first responding a little bit to what both Daniel and Andrew just said. I think it's useful — very useful — to think through what data we need in order to take each action. I'm a little bit worried about codifying that too much in a specific state diagram that we try to use to orchestrate things in lockstep, because I think some of the things can happen in different orders and some of them can happen concurrently.
B
Think
that
the
it's
possible
to
allow
that
to
happen
and
the
for
the
object
object
to
sort
of
reconcile
itself
or
become
reconciled
to
to
a
useful
machine
without
harshly
saying.
You
can
only
do
this
at
this
point
and
then
we're
going
to
move
to
this
phase
and
I
don't
have
a
great
example,
but
I
feel,
like
our
workflows,
are
slightly
different,
so
I
just
I,
don't
want
to
lock
things
down
to
her.
K
Michael
yeah
I've
shared
a
document
in
chat
at
Doug
and
Andy
have
looked
at
already
and
I.
Don't
I
don't
mean
to
be
very
prescriptive
with
exactly
what
those
steps
look
like,
but
I
do
think
we
need
a
state
machine,
so
I
documented
what
I
think
like
high-level
state
machine
would
look
like
and
like
just
for
the
control
point
pieces.
I
absolutely
think
we
need
some
kind
of
state
machine.
That
knows
what
the
load
balancers
are
and
all
this
kind
of
stuff,
too,
you
know
intelligently
feed
that
into
each
object.
L
I want to add a new point to the agenda: should we set some dates around when we want to, kind of, put the proposals in and lock them? Because these discussions are very useful, and I got even more context about what others are trying to do, but I do think we need to be concrete at some point. So how should we move forward — question to the group — and what is the target date? I know that we said a target for the proposals.
B
Yeah
I
think
eventually
we
should
set
dates.
I'm
a
little
bit
worried
about
setting
dates
before
I
feel
like
we're
kind
of
coming
together
on
some
sort
of
agreement,
because
if
we
force
a
date,
then
we
might
end
up
with
something
that
no
one
actually
likes.
But
it's
what
we
all
said:
well,
that's
good
enough
for
now
and
then
we're
just
going
to
end
up
having
to
go
through
all
of
this
again.
C
I was just thinking — I know that there was a control plane workstream that I was sort of thinking would be talking about some of these orchestration items that we've been discussing. I looked at one proposal a while ago, but have not had time to look at all of them. So I don't know if we want to try and coordinate with that group — assuming that there are different individuals involved — to try and document some of these workflows; I don't want to have fragmentation and duplication of effort.
F
I,
don't
have
an
immediate
answer
to
the
project
management
issue
there,
but
I
did
want
to
reiterate
that
this
is
me
1,
alpha,
2
and
I
feel
like
we
can
actually
set
dates
and
iterate
on
what
we
have,
and
it's
also
interesting
to
me
that
we
were
going
down
the
path
of
solving
a
bunch
of
these
problems
with
machine
status
already,
and
there
was
something
else
that
a
hurdle
for
me
was
remote.
Node
references,
so
I
haven't
caught
up
with
the
cluster
work
stream.
Have
we
solved
remote,
node
references.
C
My team has been working on this upgrade tool that I've mentioned previously, and the way that we're actually doing it right now is we match the provider ID that's set on the machine spec to the provider ID that's set on the node. So that means we need to be able to access the management cluster, where the machine is, and the target cluster, where the node is, simultaneously. It's hacky, but it does sort of work for now.
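The matching being described — correlating a Machine in the management cluster with a Node in the target cluster via a shared provider ID — amounts to a simple lookup. This sketch uses simplified stand-in types; the provider ID format shown is an AWS-style example, not a requirement:

```go
package main

import "fmt"

// Machine is a stand-in for a Machine object in the management cluster.
type Machine struct {
	Name       string
	ProviderID string // e.g. set on machine spec by the provider
}

// Node is a stand-in for a Node object in the target cluster.
type Node struct {
	Name       string
	ProviderID string // e.g. set on node spec by the cloud provider
}

// nodeForMachine returns the node whose provider ID matches the machine's.
func nodeForMachine(m Machine, nodes []Node) (Node, bool) {
	for _, n := range nodes {
		if n.ProviderID != "" && n.ProviderID == m.ProviderID {
			return n, true
		}
	}
	return Node{}, false
}

func main() {
	m := Machine{Name: "worker-0", ProviderID: "aws:///us-east-1a/i-0abc123"}
	nodes := []Node{
		{Name: "ip-10-0-0-5", ProviderID: "aws:///us-east-1a/i-0abc123"},
		{Name: "ip-10-0-0-6", ProviderID: "aws:///us-east-1a/i-0def456"},
	}
	n, ok := nodeForMachine(m, nodes)
	fmt.Println(ok, n.Name)
}
```

The "hacky" part the speaker mentions is not the lookup itself but needing clients to both clusters at once in order to perform it.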
L
At this point — and that's why I'm trying to, kind of, set dates — not because we have to be super strict about it. I'd mostly like to go in a more concrete direction in the next few meetings, for each of the work streams, since they're now closer; I'd mostly like to see concrete proposals. Anything else that folks want to add?
B
Yeah,
so
is
there
a
like
an
overall
schedule
that
we're
trying
to
synchronize
with?
Is
there
some
sort
of
like
we
want
to
do
something
by
a
particular
kubernetes
release
or
not
yet?
Okay,
okay,.
C
I
will
say
that
some
of
the
other
work
streams
had
set
Friday
June
7th
as
a
target
for
trying
to
have
proposals
vetted
and
submitted.
So
I,
don't
know
if
that
gives
us
enough
time
next
week
is
Q
Khan.
Obviously
so
people
will
be
busy
and
probably
not
working
too
much
on
on
these,
which
gives
two
weeks
to
full
work
weeks
after
Q
Khan
to
Friday
June
7th
may
be
good,
maybe
not.
B
Yeah, no, I get that part. Okay, so I was gonna raise my hand, and no one else has their hand up, so I'll just go. I don't have a problem with setting some deadlines for getting some proposals written. I feel like maybe I misunderstood — I thought it was sort of agreeing on something, rather than just getting stuff written up, and I feel like we're way too unsure of exactly what we want to do to agree on anything yet.
B
I hope that's resolved soon, but I don't necessarily expect that to be resolved by June 7th — which brings me to the question of how this group actually makes decisions like this. So how do we, if we have several proposals that might be, not necessarily competing, but overlapping, say — how do we decide which ones to proceed with?
A
If we don't have general consensus, then at that point probably the second best option would be to have a vote on the topic, and I think one of the ways that we would want to address a vote is to put some limitations around it based on employment, to avoid any particular employer or any particular company — my own included — from being able to sway the decision too much. And then, if we don't have resolution at that point, I think our next step would be to escalate to the SIG leadership.
I
Real quick — Vince, I know you had your hand up. Okay, yeah — I don't know if you're interested, and if you're going to be at KubeCon, maybe we could, you know, collaborate on that, or asynchronously too. But yeah — well, what you were saying is pretty much, you know, what I have in my head: like, figure out — not the data model, not the schema, but just kind of a bag of attributes, right, that each phase needs. And once we have that bag.
E
This helps educate you on what the data inputs are — but if I could make a request: in addition to all the sort of simplistic things that you're talking about here, I think it would really help to throw in at least one weird, complex scenario, like self-hosting the load balancer or something like that, to help guide you. Otherwise you're going to come up with something that's going to fit the simple use cases, and you're going to leave out some of the hard things.
L
Wait
a
time.
The
only
other
thing
that
I
wanted
to
add
is
there
is
a
world
where
we
kind
of
collapse
the
two
proposals
and
we
kind
of
try
to
meet
each
other
halfway
through,
so
that
we
don't
have
to
escalate
anything,
possibly
like
I
wanna,
say
this
in
the
open
like
that
would
be
my
preferred
solution
where
we
find
like
some
agreement
before
kind
of
trying
to
vote
or
like
could
work
in
that's
gonna.
Go
to
waste.
I've
been
that
go
back
to
you.
Jason.