From YouTube: Kubernetes SIG Cluster Lifecycle 20180711 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.a0wiv5tuwnd6
Highlights:
- Alignment of cluster and machine actuator interfaces
- Bootstrapping an AWS provider implementation
- Update on creating the openstack repo
- clusterctl support for out-of-tree providers
- Deep dive session at Kubecon China / NA?
- Review of aggregate apiserver vs. CRDs
- Naming of ProviderConfig
- Issue triage for alpha milestone
- Office hours for provider implementors
- Repurposing code from the cross-cloud CNCF project
- Support for multiple masters
A
Hello, and welcome to the Wednesday, July 11th edition of the SIG Cluster Lifecycle Cluster API working group meeting. Today's agenda is looking reasonably full, so we're going to just dive right in. It looks like the first item is about a PR that hasn't gotten merged yet, number 408. David, can you give a little background on this one? I haven't seen it yet. Is David here?

B
All right, this one's mine. As I started working through a POC of a Cluster API implementation, I noticed that there is a significant difference between the cluster and machine actuator interfaces, and the controller code as well. I put a PR out there to basically solicit feedback on aligning the two interfaces and behaviors, but to give a brief overview right now: the instance that the machine controller passes to the actuator is actually a deep copy of the informer-provided object, so you can mutate that object without having to worry about mutating the cache. The cluster controller doesn't currently do that, and I think it should. Also, in the actuator interfaces, for a machine you have Create, Update, Delete, and Exists, while the cluster actuator is just Reconcile and Delete right now. It's just odd, when spinning up on Cluster API and coming up to speed, trying to reconcile those differences.

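To make the asymmetry concrete, here is a rough Go sketch of the two interfaces as described above, plus the controller-side pattern that the four-method shape enables. This is illustrative only: the real interfaces live in the cluster-api repo and operate on the generated clusterv1 types, while the Cluster, Machine, and fake actuator below are simplified stand-ins.

```go
package main

import "fmt"

// Simplified stand-ins for the real clusterv1 types.
type Cluster struct{ Name string }
type Machine struct{ Name string }

// MachineActuator roughly mirrors the four-method machine interface
// discussed above.
type MachineActuator interface {
	Create(c *Cluster, m *Machine) error
	Delete(c *Cluster, m *Machine) error
	Update(c *Cluster, m *Machine) error
	Exists(c *Cluster, m *Machine) (bool, error)
}

// ClusterActuator roughly mirrors the current two-method cluster interface.
type ClusterActuator interface {
	Reconcile(c *Cluster) error
	Delete(c *Cluster) error
}

// fakeMachineActuator tracks "created" machines in memory, to show the
// Create/Update split the four-method interface preserves.
type fakeMachineActuator struct{ created map[string]bool }

func (f *fakeMachineActuator) Create(c *Cluster, m *Machine) error {
	f.created[m.Name] = true
	return nil
}
func (f *fakeMachineActuator) Delete(c *Cluster, m *Machine) error {
	delete(f.created, m.Name)
	return nil
}
func (f *fakeMachineActuator) Update(c *Cluster, m *Machine) error { return nil }
func (f *fakeMachineActuator) Exists(c *Cluster, m *Machine) (bool, error) {
	return f.created[m.Name], nil
}

// reconcileMachine shows the controller-side pattern: Exists decides
// between Create and Update, which a single Reconcile method hides.
func reconcileMachine(a MachineActuator, c *Cluster, m *Machine) (string, error) {
	ok, err := a.Exists(c, m)
	if err != nil {
		return "", err
	}
	if !ok {
		return "created", a.Create(c, m)
	}
	return "updated", a.Update(c, m)
}

func main() {
	a := &fakeMachineActuator{created: map[string]bool{}}
	c, m := &Cluster{Name: "test"}, &Machine{Name: "node-0"}
	first, _ := reconcileMachine(a, c, m)
	second, _ := reconcileMachine(a, c, m)
	fmt.Println(first, second)
}
```

The first pass through reconcileMachine creates the machine; the second updates it, which is the distinction the PR wants to keep available on the cluster side as well.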
A
Yeah, so I just put a link in chat to a comment that Kenny made about two weeks ago on a different PR, basically asking the same question: why aren't they the same? So, Kenny, if you or Meghan want to say anything about the status of where we think we should head with the controller?

D
A machine is typically one VM, so I guess we'd have to decide what it means for a cluster to exist. Is it that every resource exists? Or, I don't know, because what if it started creating and then failed halfway through: the next time around, does it exist or not? It was just a little bit confusing when I was implementing it originally for GCE.

B
If it's not fully created, say not all the resources are created, is it created or not? But then you get the odd behavior where the Create method is basically the same as the existing Reconcile method. I still think there is value in being able to differentiate Create from Update, so that if somebody is mutating the cluster configuration you can handle that separately from the create operation, if that makes sense.

B
Yes. So, some of what we've talked about: the existing provider that's linked right now is the implementation we've discussed, but it's not really generic enough to be a generic Kubernetes Cluster API implementation. So I wanted to see if there was interest in going ahead and spinning up an official repo and starting work on bootstrapping an AWS implementation.

B
So currently this is part of the work I'm doing with Heptio. We're doing an internal proof of concept of Cluster API based on AWS, and we'd be happy to contribute some of that work back to the community and use it to help bootstrap this as well. And any other contributors that are happy to join, provide feedback, and help with testing or even implementation, we're more than happy to work with.

G
We could probably follow the outline that Dims had created for how we want to create new repos. I can point to a link describing the process we should probably follow to create the new repository, and then we can just create a separate instance there and work out the details and logistics afterwards.

B
The catch right now is that the centralized Cluster API implementation is using apiserver-builder and is an aggregated API server, so it's locked into Kubernetes 1.9 right now, which means it isn't easy to move to kubebuilder. But there's another topic to talk about later: potentially switching to CRDs, which might ease that.

H
As of an hour ago we do have a repo, so that's the good news. It took a while just to figure out, you know: do we want to start with an empty repo or do we need to fill it in, what do we fill it in with, who the owners should be, and things like that. So I have a template, and for the next people who follow I can probably help them jumpstart the process. So yeah, we have a repo.

H
So now the next step is: how do we do clusterctl create, for example, with the new provider? I do have one PR, which follows the same thing we did in cloud-provider for plugging in external cloud providers. So I would like to see some progress on that PR, and then we have a choice to make, which is: do we vendor the OpenStack repository or not, and how would it work?

A
That's great news. We now have the repo, and we have a template for creating more. I would suggest, Jason, if you and Cindy, or you and David, follow that template, we should be able to get one up and running for the AWS provider code pretty quickly, since Dims has blazed the trail and gone through all the trouble.

B
I'm more than happy to follow up with Dims and go ahead and get that kickstarted.

A
Excellent, okay. So let's move on to the other question, which is: how do we use clusterctl when you have an out-of-tree provider? I'll also provide some background here: we do want to move the Google code out, so this will be a problem for everybody, not just non-Google people, hopefully in the, quote, near future. So we should solve the problem with the idea that no provider code will be in the main repo soonish.

A
The PR I think Tim was alluding to: there is an interface in the main code that has two functions in it, I believe get an IP address and fetch a kubeconfig. I think those are the two things we need to do right now that are provider-specific, and for one of those we have a design for how to get rid of that interface method; for the other we don't yet.

A
Ideally we'd get rid of both of them, delete the interface, and not have to vendor anything back into the main repo or have any sort of compile-time dependencies on all the different providers. But we need to figure out how to remove that interface and still make clusterctl work without it first. As far as I understand, that's what's currently blocking us. So I think we have the long-term solution; we need to figure out how we get there, and then, tactically, the short term.

H
Right. So for right now, one option that seems possible to me is, if we do pull request 360, that will give us a way to plug in the existing stuff as it is. Then the only other issue would be that we need an import from the vendored repository to be able to register the provider. So what we could have is a temporary copy of clusterctl.

A
I think that sounds really great for making sure we can get it up and running. It doesn't sound super maintainable to me as we start having more providers, so I think if we do that work, we're also going to need to bump up the priority of figuring out how to move away from it.

G
You could follow the kubectl plugin model; something very similar to that model could do clusterctl for different providers. They have a unique solution, actually kind of clever, that allows you to do plugins for kubectl. If you had your own extension mechanism, say you created a third-party CRD and you wanted to hook that into kubectl, not linked into kubectl but able to be executed via kubectl, there are mechanisms for that capability. It sounds very analogous to this.

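The kubectl mechanism being referred to is exec-based: an unrecognized subcommand is looked up as an external binary on PATH. A minimal sketch of how clusterctl could do the same for providers, assuming a hypothetical clusterctl-&lt;provider&gt; naming convention (the binary name and dispatch logic here are illustrative, not an agreed design):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// pluginBinary maps a provider name to the external binary that would
// implement it, e.g. "aws" -> "clusterctl-aws" (hypothetical convention).
func pluginBinary(provider string) string {
	return "clusterctl-" + provider
}

// dispatch looks the plugin up on PATH and, if present, runs it with
// the remaining arguments, much as kubectl does for its plugins.
func dispatch(provider string, args []string) error {
	bin, err := exec.LookPath(pluginBinary(provider))
	if err != nil {
		return fmt.Errorf("no %s plugin on PATH: %v", pluginBinary(provider), err)
	}
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// With no clusterctl-aws installed, this reports a missing plugin
	// at runtime; the core binary never links any provider code.
	if err := dispatch("aws", []string{"create"}); err != nil {
		fmt.Println(err)
	}
}
```

The appeal for the out-of-tree discussion above is that this removes the vendoring and compile-time coupling entirely: registering a provider becomes dropping a binary on PATH.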
A
I don't know what other people think. I had a whole conversation with Chris Rousey when you first sent that PR, and we sort of had the same reaction: if we merge this, we should clearly document that this interface is not ever getting bigger and that we are actively trying to make it smaller and delete it.

A
So we discussed this a little bit during the SIG Cluster Lifecycle meeting yesterday. Lots of people are going to be in Seattle, and it sounded like nobody is going to be in China; we have one person from Google who might be in China, except that I'm not sure if he will go or not. Okay.

E
And I would propose we treat it as lazy consensus: let's book a room for Seattle, and we put China on the back burner.

A
So the back burner isn't too far back, because the deadline for filling out this form for China is actually pretty soon. So unless we hear from someone in the next couple of weeks that they are willing to run this event in China, we're going to miss our opportunity to grab a room, unless they're willing to give us one at the last minute, which has happened before, so that's certainly possible. Yeah.

A
So for Seattle, I actually filled out the form last night to schedule an intro and a deep dive for cluster lifecycle, and since the last deep dive, in Europe, was on kubeadm, I wrote the deep dive to be on Cluster API for Seattle. And I put in the notes field to Dan that I really wanted to have two deep dives, because we have sort of two major projects and it wouldn't make a lot of sense to have just one.

A
I will just go fill out the form again, check the box for a deep dive, and fill out the description for the kubeadm deep dive, and that should get us signed up for both. It does ask who's going to run it, and I also put in the notes field that I signed myself and Tim up as sort of proxies, because we will figure out who's going to run it later.

E
I feel like this is kind of a hot-button issue, but I wanted to bring it up. When we originally had the discussion, we talked about revisiting it later once the APIs are a little more complete, whatever that means. So I'm just going to feel for folks' opinions, thoughts, and concerns around potentially moving back from the aggregated API server to CRDs, and I'll just leave it there and let folks jump in.

G
I mean, I brought up this topic with Chris several times and also talked with Robbie. It would be ideal to get a listing of the requirements, of what things are needed by Cluster API, to help determine the choice. I looked through the history, and I think a lot of the original concerns no longer apply to the current state of CRDs, and there are some inherent benefits to CRDs. And if we're going to make any decision or choice, making it before the cut to beta is probably prudent.

A
A little bit of background here: we had Eric Tune at our meeting a number of months ago to discuss this topic, when he had started pushing in the larger community for people to stop using aggregated API servers and for everyone to use CRDs, which was right around the time that we were going the opposite direction and trying to switch from the CRDs used in the initial prototype to using aggregated API servers.

A
So if you look at what's in CRDs in 1.11 and what's coming in 1.12, a lot of the things that we needed are going to be there relatively soon, if they're not already. I think it would be really useful to go through and look at: if we do want to implement custom validation and support for API object versioning with CRDs,

A
What
does
that
actually
look
like
and
how
different
is
it
from
having
to
do
with
our
API
is
because
I
think
it's
you
get
some
of
it
for
free,
but
you
also
have
to
implement
a
whole
bunch
of
web
hooks
to
make
it
work.
So
it's
I
think
it
might
still
be
sort
of
non-trivial
bit
of
the
integration
plumbing,
even
with
CR
DS,
to
actually
get
it.
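For reference, a sketch of the part you do get for free with CRDs as of roughly Kubernetes 1.11: declarative schema validation and a status subresource can be stated on the CRD itself, while conversion between API versions would still need webhook plumbing. The group and fields below mirror the v1alpha1 Cluster API types but are illustrative only:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: machines.cluster.k8s.io
spec:
  group: cluster.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Machine
    plural: machines
  subresources:
    status: {}            # status handled separately from spec (beta in 1.11)
  validation:
    openAPIV3Schema:      # declarative validation, no webhook required
      properties:
        spec:
          required:
          - providerConfig
```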
G
Yeah
I
think
we
should
just
do
an
honest
evaluation
like
what
are
the:
what
are
the
trade-offs
and
what
are
the
requirements
long
term,
for
what
we
want
to
do.
The
the
notion
of
not
having
to
provision
and
pivot
and
being
able
to
use
an
existing
cluster
is
a
is
a
highly
beneficial
thing
for
a
lot
of
folks
to,
as
well
as
as
well
as
the
dependency
graph
issues
that
currently
exist
with
the
current
implementation
right.
A
One thing to know: with aggregated API servers you can make arbitrary subresources. So if we wanted to do things like have a subresource on machines that lets you reboot the machine, you could make a custom verb in your REST definition that rebooted a machine, and you could do that with an aggregated API server. You'll probably never be able to do that with CRDs, so we should figure that out. There's also the fact that an aggregated API server can be accessed, sort of, from two clusters. In other words, you can access it inside the cluster that you're spinning up, and before the cluster that you're spinning up exists. I don't know if we're actually using that at the moment or have plans to.

A
You're
talking
about
basically
having
like
a
different
end
point
that
you
can
talk
directly
to
that
radio
server
if
the
the
primary
one
is
down.
Well,
it's
it's
the
the
direct
end
point
rather
than
something
like
you:
don't
have
to
go
through
the
sort
of
cluster
kubernetes
cluster
endpoint,
which
is
typically
tightly
coupled
with
LIF,
like
the
sed
for
all
of
your
jobs
that
might
be
crashing
right,
like
we've,
definitely
seen
cases
where
that
API
server
starts
crash
lipping.
And
then,
if
you
can't
talk
to
the
API
server,
that's
storing
your
machine
objects.
A
Yeah. One of the things, again, from when we talked to Eric a while back: he promised us that they wouldn't abandon the things we were using to build aggregated API servers, at least until basically everybody had moved over to CRDs. So I think that's a really good question that we should follow up on, to make sure that sort of promise is still in effect. Back to Justin.

A
With a CRD, I believe what would happen is you'd put your CRDs into that API server, and then it would also register with the aggregated API server. So you'd have a single endpoint where you could see everything, like you do today, but you'd have separation of storage and so forth, like we do when we run our own custom API server.

A
Maybe. I think they're trying to build more first-class support for this, where the API server just does this itself instead of us having to hack the code. Because again, if you hack that generated code, then it becomes a maintenance nightmare as you try to move up from, you know, 1.9 to 1.10.

G
So
I
guess
the
question
is
I.
I
know
you've
talked
about
it.
We've
talked
about
it.
Are
there
folks
that
are
going
to
like
deep
dive
the
requirements
analysis
here
to
understand
that
you
know?
What's
the
plan
going
forwards,
especially
for
beta
and
for
eventual
Gao?
What
folks
think
they
want
to
do?
I
think
requirements.
Analysis
first
makes
a
ton
of
sense
right,
yeah,.
A
If anybody wants to start that now, great; otherwise, as you're saying, this is maybe more of a beta thing. Right now I'm more focused on trying to burn down the list of alpha issues so we can actually create a release process and cut an alpha release, and after that we'd start looking at things like this for a beta release. But if anyone wants to get started now, that'd be great. Yeah.

B
One thing that I'd be concerned about is: are we going to hit a point where the vendored version of client-go that we're using right now will no longer be able to talk to an API server? Right now we can talk to a 1.11 API server and everything appears to work just fine. Is that going to continue through 1.12, or are we going to hit kind of an upstream deadline where we're going to have to be off of the 1.9 code?

A
I don't know what the skew support matrix for client-go is. I assume it's similar to kubectl, which is basically one minor version, so 1.9 to 1.11 is probably not officially supported; if we hit bugs, they may say that's not going to be fixed. I know there is an open issue for us to upgrade from 1.9 to 1.10 for our apiserver-builder. I don't know if anybody's actively tried to do that yet.

J
Yeah, hopefully not trying to be too pedantic, but as I've been working with the API, and, you know, using quite a bit of provider status for both cluster and machine, I realized it was a little odd that there is a providerConfig and a providerStatus. So I filed an issue and just wanted to get the question out there.

J
You know, it's early, since we're still pre-alpha, or maybe in alpha, so we have the chance to change it now if we want. So the reasoning behind this is that there is a providerStatus, and the convention in the Kubernetes API is a spec and a status, while Cluster API is using providerConfig paired with providerStatus. So maybe it would make sense to use providerSpec.

A
Let's enqueue this again for next week, and if we haven't wrapped it up offline before then, we can come back and try to get to a conclusion at that point; that gives people a few minutes to read it outside of the meeting. Okay, all right. So next up, back to pull request number 408. I don't know if David is here.

K
So originally I posted this because it was some work for me to find out who is developing which repos where. Now that it appears we have consensus, and we actually have an OpenShift implementation under the community SIG's organization, I'm unsure how important this is. I mean, I don't expect all providers to be migrated to the SIG's organization immediately, so there's probably still value in having pointers, but as more providers become part of the organization, it may suffice to just explain what the naming convention is and how to find them.

A
Yeah, I do think it is useful, if someone's looking through the repo, to see: here are the different environments that are supported and where I can go to learn more about them. So I think that part of your PR makes a lot of sense. I also agree that, as we standardize on where we put the code, maybe it becomes less important. So I would vote that we merge this now, and then we can reassess later as we get more standardization of where the providers live.

A
If
someone
comes
to
look
at
the
cluster
API
repo,
it's
no
longer
gonna
have
any
code
that
will
actually
be
used
useful
to
actually
create
a
cluster
right.
You're
gonna
have
to
to
go
somewhere
else
to
to
get
the
last
bit
of
glue
to
actually
run
on
a
specific
environment.
So
even
at
that
point
having
pointers
to
you
know,
here's
some
some
common
ones
that
are
actively
being
supported
by
the
by
the
cig
might
still
make
sense.
A
I would also put out a call that, if you're creating new issues that you think should be part of the alpha milestone, please use the /milestone command to add them to the milestone. As we are trying to actually get to a release, hopefully in the next couple of months, maybe before the next Kubernetes release, that's going to be the place to look to burn down what is outstanding. So if you have issues that you think should block us cutting a release...

L
Sorry, go ahead. One thing: I think you have to be a milestone maintainer to do that, Robert. Yeah, so.

G
I had a question with regards to testing for the release: how are we going to manage and communicate which providers have been tested, and for what configuration, in a release?

A
One of the open issues for the alpha release, and I was going back and forth in my head last night about whether it should be alpha or maybe beta, was to start creating a sort of conformance suite, along the lines of what the CNCF has for conformance. So for, you know, 1.10 or 1.11, we'd be able to say this Cluster API release has been tested as conformant on these different providers.

J
Yeah, it just occurred to me that I would find it helpful to, you know, maybe have a regular time every week where provider implementers can discuss things like design, implementation tips, or hurdles.

J
You know, anything from using the API in depth to encoding and decoding the provider config and status, things like that. I think there are a number of people working on providers, right: Google, vSphere, OpenStack, OpenShift, lots of them. So I'm not sure how best to come up with a time; maybe file an issue, maybe create a Google Sheet. Any takers, any suggestions on how to get that organized? I'd be happy to do that.

E
We have used a spreadsheet before, in the past, to sort of find a best fit for a time. So we could totally draft an email to SIG Cluster Lifecycle with a spreadsheet and let folks vote on a time, if it's something that we think is worthwhile. For what it's worth, I think it's a good idea.

M
A quick request: to make it accessible to people in Asia-Pacific, including myself, a meeting starting at 2:30 in the morning is not great.

A
Yes, I think that's part of the reason for the spreadsheet or the Doodle: to try to figure out what times work for the people that will actually show up, because we tend to skew toward people who are here. So you're in Asia-Pacific, and I know Lu Ramirez is sort of on the opposite side.

M
The cross-cloud project's Cluster API module uses that code. I know that it's all Terraform-based, but the templating and its ability to work across all those providers is pretty clean. I'd definitely want to modify it to use kubeadm, but I just wanted to throw that out there and get some feedback on how difficult that would be and where we might find the resources for it.

A
One of the things that Chris and I talked about early on was using Terraform or, you know, Docker Machine or something like that as sort of a lowest common denominator for environments that didn't have a more specific controller written for them, which I think is along the lines of what you're saying. If we had a sort of generic Terraform provider, it would get you running in lots of different places, and then, if it ran on AWS and we implemented a specific AWS one,

A
you could just use that instead of the Terraform one for AWS. So that has certainly always conceptually been a future we wanted to explore. The vSphere implementation also uses Terraform, so I think there's some overlap there in terms of using Terraform to bootstrap an environment. I don't know if it'd be possible to also converge that one; that would be kind of nice. I think it's definitely conceptually aligned with where we wanted to go.

N
I can speak from the vSphere point of view a little bit. We just recently added vSphere support to the cross-cloud project, so we have kind of a fresh experience there. The thing is, we haven't looked into Cluster API in, you know, depth, so I don't fully understand how much overlap there is in terms of the provider implementation, but certainly this is something interesting and we will look into it.

J
Well, I've been thinking a little bit about how to support multiple masters with Cluster API. In fact, that's what we're working on with the SSH provider, but we're not using the upstream controllers yet. So I'm wondering, looking ahead, whether we'd want some sort of custom behavior from, let's say, a machine deployment controller, or maybe something like a stateful machine set controller.

J
Something like that might be appropriate for masters, right, having semantics similar to a StatefulSet in Kubernetes, with identity, et cetera. If those things are built upstream, are they going to have extension points like the cluster controller and the machine controller have today, with the actuators being those extension points? Is that something we're thinking about? I know that we have a machine deployment and a machine set controller; I'm not sure whether they're actively in use yet. That's just something I've been wondering about.

B
So I'm getting ready to start digging into this area, and I was going to try to just leverage the roles: push the role down into the provider config, and then try to leverage that to do a simple approach, because we can register the API endpoints with the cluster spec, you know, if that's available, and then assume that you're scaling up the number of control plane instances rather than doing anything fancier. But that's just my initial kind of hack.

J
I think the part that is most nebulous right now is some of the state involved. For example, if you want to scale an etcd cluster, you need to tell the incoming peer about the existing peers. Where do you place that information? Right now, for the SSH provider work, we're placing that in the cluster providerStatus.

J
I see, okay. Maybe I missed something last time; if this is a recent update, I'll take a look this week.

K
I don't think there's any Kubernetes issue there. If you have a solution, whether it uses the etcd operator, the etcd manager I'm working on, or your own tooling: if you magically get etcd to resize correctly, then I don't think Kubernetes has an issue with it. Sure, but I think that "if" is a very big "if". Yeah.

B
That's less of an issue, I think, with co-located etcd, because the endpoints that you pass to the API server only need to be able to hit one of the endpoints. It doesn't need to be the full list; it'll get the full list of endpoints once it contacts the endpoint it's configured for.

A
Alright,
and
with
that,
we
are
now
officially
out
of
time.
Thank
you
all
for
coming
and
we
will
see
folks
again
next
week
there
were
a
number
of
action
items
in
full
of
conversations.
You
haven't
slack,
so
I
encourage
people
to
take
care
of
those
quickly
before
they
forget.
That's
what
I'm
gonna
try
to
do
and
anyway,
otherwise,
everybody
have
a
great
week.
Take
care.