From YouTube: Kubernetes SIG Cluster Lifecycle 20180822 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.v50hoocd695
Highlights:
- Pivoting by default?
- Moving to CRDs
- Making timeouts configurable
- Adding node conditions to machine status
- Steps to create a new Cluster API provider
- Renaming external to bootstrap (and internal to target)
- Transferring ownership of cluster-api-provider-skeleton
- Re-initiate discussion of MachineClass
- Implementer office hours (and conflicts with the AWS implementers meeting)
- Schedule for alpha
B
We had an issue come in asking to be able to not pivot the cluster, and there were some discussions. Some people felt that we shouldn't pivot the cluster by default, and this PR is basically my attempt at doing that for the internal POC we were doing a little while back. I know you had some questions, Robert, about the implementation versus an alternative proposal, so I'm happy to hear those.
A
Yeah, I'm just pulling it up to see. I guess what I was looking at is that right now, in the open source, when you run clusterctl it creates minikube and then pivots from there into the cluster that gets created. If we switch the default behavior, it feels like what's going to happen is it will create minikube and just use that as the cluster and not pivot out of it. Is that right?
B
For the Cluster API bootstrapping cluster, yes, it wouldn't actually pivot the Cluster API into the created cluster. The more general workflow would be for tying into an external bootstrapping cluster, where you would have multiple cluster definitions and you wouldn't want to perform that pivot. Right.
A
So I guess my question was in terms of the user scenario. It feels like the default user scenario we want to start with would be to pivot, so that you don't accidentally close your laptop or delete minikube and lose your control plane. Maybe the more advanced scenario would be that you don't pivot: once you have that first cluster up, you use it as the bootstrap for your second.
D
The admin is most likely going to be the one creating these clusters, or if they give a user the ability to do it, it has to be a central resource. The administrators are going to want to know what is being created on their infrastructure, so having that single source of truth somewhere specific is very important for them.
B
clusterctl already allows you to use an external bootstrap cluster through a command-line argument. Minikube is just the default for having something available to be able to provision a cluster, and I think that was just the initial design; it's not necessarily the eventual one.
F
We've actually had discussions internally about maybe changing the interface for clusterctl. It's outside the scope of this, but briefly, the one thing clusterctl does now that we depend on, but can't use all the time, is the fact that it deploys the API server.
A
I don't know if that becomes too confusing for customers, but it really makes sense, because if you're pointing at an existing bootstrapping cluster, then presumably you want to leave things there. I think that's what Jason was saying with his PR: you pointed over here, and you don't want to move it out of there. My retort is that if you're using minikube, you do want to kick it out, so maybe that's really the distinction.
B
So I'll go ahead and update it so that it still pivots by default for the default use case, and if the external cluster is specified, it doesn't pivot by default. I'll add some additional doc updates to the flags to make sure that's clear, and maybe still expose a flag to let the user override the behavior as well. Cool, thanks.
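This isn't from the meeting or the actual clusterctl source; it's a minimal Go sketch of the decision just described: pivot by default, skip the pivot when an external bootstrap cluster is supplied, and let the user override either way. The flag names are illustrative assumptions.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical flags: an existing bootstrap cluster kubeconfig and a pivot override.
	externalKubeconfig := flag.String("existing-bootstrap-cluster-kubeconfig", "",
		"kubeconfig of an existing bootstrap cluster; empty means a throwaway minikube is created")
	pivot := flag.String("pivot", "auto",
		"auto|always|never: whether to move the Cluster API control plane into the new cluster")
	flag.Parse()

	// Default behavior: pivot when we created the throwaway minikube cluster,
	// don't pivot when the user pointed us at an existing bootstrap cluster.
	shouldPivot := *externalKubeconfig == ""
	switch *pivot {
	case "always":
		shouldPivot = true
	case "never":
		shouldPivot = false
	}

	fmt.Printf("external bootstrap cluster: %q, pivot: %v\n", *externalKubeconfig, shouldPivot)
}
```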
A
Alright, next I put a link in here to 409, which is about switching back to CRDs. I was doing a little bit of issue culling and this one popped up. Looking through it, it looks like pretty much everybody who's commented on the issue has been in favor of switching back to CRDs, as opposed to staying with our aggregated APIs. I think when we switched from CRDs to aggregated API servers,
we had some good reasons to do so. CRDs have since made some pretty big strides in catching up to feature parity with aggregated APIs, and it also feels like the momentum is behind CRDs. The tools for CRDs are actively being developed and progressing, while the tools for managing aggregated API servers seem much more stagnant. So since there weren't any dissenting opinions in the issue, I wanted to bring it up here and see
if maybe there were some dissenting opinions on the line, or if people had good reasons we should stay with aggregated API servers; and then, if there aren't and everybody agrees we should move back, put out a call for help and see if anybody wants to pick up the mantle and drive that work. So we'll start with the first one, which is: does anybody have strong reasons to want to stay with the aggregated API server model?
A
So if we wanted to still have a similar separation of failure domains between the API server for your pods and deployments and the API server for machines, you could still deploy two API servers in an aggregated fashion and just install your CRDs on the extension one. So I think from an architectural point of view you can still get that with CRDs; you don't actually have to build your own API server. That might not quite work today, but they're certainly moving in a direction where it will work, probably before we need it.
B
I haven't gotten really that far with it. I basically mocked out the actual API objects and generated the controllers; well, I haven't even gotten to a full controller implementation yet, but the kubebuilder experience seems pretty easy. I started it as a from-scratch project to avoid trying to retrofit kubebuilder in place, because of the dependency issues.
B
So I was going to try to replicate the APIs, replicate the controller behavior, and then try to retrofit it back with the dependency changes to see what happens. I'm not spending a whole lot of time on this, but I'm happy to collaborate with other people who are willing to chip in to try to make this work as well.
A
So this is on the alpha milestone list, and I think the question Daniel brought up when creating the issue was: right now the API server builder we're using is tied to Kubernetes 1.9, which is now getting kind of old, and we'd like to be able to update to at least 1.10, or just switch to CRDs so we're not as tightly tied to a Kubernetes version.
G
I actually did an experiment as well, where I ran the existing controllers without an aggregated API server, just creating a CRD with no schema, and I basically got it to work. There are a lot of places where we do defaulting, but the clients don't really care whether it's a CRD or not. So that was sort of surprising; it seems to be workable.
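As a rough illustration of that experiment (not code from the project), here is a minimal Go sketch that registers a schema-less Machine CRD through the 2018-era apiextensions v1beta1 client. The group and version follow the Cluster API naming of the time; the kubeconfig path is a placeholder.

```go
package main

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig of the cluster that should serve the CRD (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A Machine CRD with no validation schema, as described in the experiment.
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "machines.cluster.k8s.io"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "cluster.k8s.io",
			Version: "v1alpha1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "machines",
				Singular: "machine",
				Kind:     "Machine",
			},
		},
	}
	if _, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd); err != nil {
		panic(err)
	}
}
```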
A
Okay, so I think I'm also seeing people in chat voting for CRDs. So Jason, if you want to keep pushing on this in your free time, that'd be great, and if other people are interested in helping push it over, it sounds like there's support for moving in that direction; we just need to put some people behind it to actually get there. Alright, next on the agenda is making timeouts more configurable.
D
I never got around to making them configurable. As I went through a couple of issues, I noticed that clusterctl can basically spin forever; it'll just loop forever. Some of that is that it will spin for the duration of the timeouts, and with at least one bug I found it will just spin forever. So yeah, that's my very simple question.
A
A lot of the timeouts were set to something that worked most of the time, and when we started hitting timeouts, we just bumped them up instead of plumbing through flags and such. So if you're seeing it not work in your environment with the current settings, we can talk about making them configurable. Are you saying you want to make them smaller so things fail faster?
D
This is more for development productivity. But then again, I can just modify the numbers in my own build and use it that way. It would be kind of nice, but maybe it would just be too many options. I just want to get people's opinion on it.
A
I don't know that we want to put it on the end user to be setting those timeouts. I understand the use case for developers wanting to be able to set them, and it sucks to have to maintain a git branch where I've changed the timeouts that work for me and have to keep rebasing it; that's no fun either.
D
And again, I'll bring up that I'm trying to represent the administrator's perspective here. A developer will go in and modify the code, but an administrator won't do that, and if he sees this running forever, that leaves a bad user experience.
F
Okay, alright, if it's too long, fair enough.
F
My preference is not to expose options that are rarely used; I think this is a relatively esoteric thing. I agree that right now it takes too long to time out, but there's another issue related to making parts of clusterctl pluggable, and I wonder if maybe we just need to get the timeouts right for each implementation.
D
I don't know what the solution is, but I just want to bring up this issue: from a developer perspective I see no problem, but from an administrator's perspective I think it's going to be a bad user experience for them, and they might end up not using it.
A
Yeah, that makes sense. I think as you drive making the deployment more pluggable, this might be a good sub-task of that: if you have different ways to create the bootstrap cluster, those presumably might need different timeouts, right?
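To make that idea concrete, here is a minimal Go sketch (not the existing clusterctl code) of grouping the timeouts into one struct with working defaults, so a provider-specific deployer, or a developer build, could override them without plumbing a flag per timeout. The field names and durations are illustrative assumptions.

```go
package main

import (
	"fmt"
	"time"
)

// ProvisionTimeouts groups the waits a clusterctl-style tool performs.
type ProvisionTimeouts struct {
	MachineReady    time.Duration // wait for the control-plane machine to come up
	KubeconfigReady time.Duration // wait for the admin kubeconfig to be retrievable
	AddonsReady     time.Duration // wait for addons to become ready
}

// DefaultTimeouts is what most environments need; a provider-specific
// deployer could return its own values instead of every user tuning flags.
func DefaultTimeouts() ProvisionTimeouts {
	return ProvisionTimeouts{
		MachineReady:    30 * time.Minute,
		KubeconfigReady: 10 * time.Minute,
		AddonsReady:     10 * time.Minute,
	}
}

func main() {
	t := DefaultTimeouts()
	// A developer build could simply shrink these so failures surface faster.
	t.MachineReady = 5 * time.Minute
	fmt.Printf("%+v\n", t)
}
```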
A
Alright, in the interest of time, I think we should move on to the next one, which is, I think, the PR. It was a hard call whether to cover it here, because I think it's ready to merge, but I just wanted to bring it up since it is an API change. It's number 483, which we talked about last week, and there was general agreement on the PR, but it's only been open for a day.
J
Are you talking about Andrew Sy Kim? Yes, so he's really been driving the process for introducing new upstream cloud providers. We've been working on that in SIG Cloud Provider, trying to put down requirements. To be fair, there's certainly some implicit bias towards providers that are already in tree. What we're really trying to do with moving external providers into CNCF-managed repositories is to improve the state and set a technical bar for cloud providers, so improving the documentation. Missy from AWS had a KEP out for doing that, setting new requirements for documentation for both in-tree and out-of-tree providers. There are membership requirements now for people who want to sponsor a sub-project under cloud provider. That's really the place where we're trying to work on this, under SIG Cloud Provider, because they have the ultimate authority to be creating these repositories.
A
If you look at the owners file, it's sort of co-owned by Cluster Lifecycle and SIG AWS, and the same with the GCP one. So one interesting thing I notice with DigitalOcean is that there isn't a SIG for DigitalOcean, nor is there the cloud provider implementation. So this would maybe be the first case where we would be considering creating the Cluster API provider for an environment before the other pieces were in place. So I think that is related, right?
H
Okay, so I have talked to Andrew from DigitalOcean, and he is okay to come join the project and maintain it along with me, and, if needed, to try to get someone else from DigitalOcean to join. As far as we know, there is nobody from DigitalOcean working on the Cluster API provider. Yes, we have a cloud provider for DigitalOcean;
they are working on that, but as far as I know they're not working with the Cluster API right now. I believe there is someone from my company interested in joining the project to maintain it. So the question here is: is it possible without a SIG for DigitalOcean, at least for a while? And about membership, I guess: do all owners need to be members of the Kubernetes organization, or how does that work? I haven't gotten that part.
I
If you wanted to have your ACLs and permissions only be applicable to the kubernetes-sigs repository, the way the repository addition would work is that we could point to an example reference, like Jason's email, or I forget who, but someone from VMware has submitted an email as well, that shows all the details of what you should do beforehand, as long as the owners file has been well vetted and the folks in the owners file are part of either the community SIGs or the kubernetes org.
A
There isn't anything currently. I think the people that are building the machine actuators and the cluster controllers are mostly in this meeting or in the implementer office hours meetings, and we're trying to build consistent implementations through those meetings.
A
I do expect us, as we start having multiple functional implementations, to start talking about some sort of conformance program or test suite that validates that they all work about the same way and give the same sort of user experience. But that doesn't exist yet, and a lot of the repositories that are there today are just getting started and don't necessarily have fully functional implementations yet. Okay.
J
Thanks. One question related to that: should we try and see if we could converge on kind of a shared set of requirements across the external cloud providers and the Cluster API actuators? I guess there's less need for cleanup of the existing API since it's new, but with the creation of new repositories — if you're creating them new in kubernetes-sigs, maybe it doesn't matter, but I know with the cloud providers a lot of them don't.
A
Yeah, I think the other thing that is maybe implicit in what Tim was saying with the owners files is, if we do start to see the owners files bit-rot, as a SIG we're going to be somewhat aggressive in either trying to find replacement owners or kicking things out, because our SIG doesn't want to claim or try to maintain ownership of things that people aren't actively contributing to and filling that owner role for. And if you look at the SIG charter pull request that Tim's got out, the SIG does delegate full ownership of these sub-projects to the people in those owners files, from cutting releases to doing issue tracking to making sure PRs merge, and I think we're going to hold those people to a reasonably high bar at the same time.
B
Yes, this is just a follow-up from last week, where we had a discussion around the external/internal naming within clusterctl being very confusing for people. I was just looking to gather some feedback on the PR that I put out there, and I saw you already added a few comments there, Robert, that I just haven't had a chance to address yet.
A
I think overall it looks great. There were a couple of nits, tiny little things, but I think the main one was that at one point you said "creating external" or "creating bootstrap cluster", and in another place it says "bootstrap client". Maybe when you said "client" you were trying to get at the sense that sometimes we actually create a cluster and sometimes we use an existing cluster, but I also think that's maybe confusing for users.
A
It has served a great purpose for the ones we've created so far. I also kind of wonder, if we move over to CRDs, whether we need less of the template and less boilerplate, and with the push towards gRPC/webhooks for actuators, that also reduces the amount of boilerplate and templating we need, because we're reducing the surface area you need to actually create an environment. What do you guys think?
B
Personally, I like the idea of having some type of a reference for implementers, and I think even if we scale it down, this skeleton repo could potentially be a home for the documentation we eventually add as well, for what it takes to create a provider, as well as having a skeleton to start from. There seem to be a few different cases where this is already common within the community, especially around examples and things like that; it seems useful to me.
I
I think we can hold it for the time being. I think it's useful as long as we get it trimmed down to the scope that we want to maintain, and I'm wondering, if we do the two parts you mentioned earlier, both the webhook for defaulting as well as the CRD changes, whether or not we can actually fold an example implementation directly into the cluster-api repository, much like many other main applications do; they just have an examples folder.
K
Yes, so I was thinking maybe we can reinitiate the discussion we used to have a while back on MachineClass. I guess we had pretty much consensus long back when we were on CRDs, but then we moved to a different repository and it somehow got buried. The motivation was coming from a couple of directions. First of all, there's the reusability of the raw extension that we are putting in the machine objects.
K
So if we want to reuse it across different MachineSets and MachineDeployments, that could be one of the nice use cases. Plus, one strong use case is coming directly from the cluster autoscaler: at some point the autoscaler will also expect this. Right now they rely on the machine deployment objects, and they would want to get certain details like node labels and taints and the resources of the machines and so on.
K
We would rather store that at a static layer, which could be MachineClass, and then they could scale from zero; such problems can be resolved. So I was just wondering — there is already a comment where I have put up my thoughts — whether we can reinitiate the discussion there and have a work in progress. It looks like it will be a fairly contained change on the inside, but it will be a significant change from the outside.
A
So I had the initial PR for this in the main repo. There were some comments, and then we switched from kube-deploy to cluster-api, and I started trying to recreate that PR and ran into some issues with the codegen, because MachineClass, as I defined it before, doesn't have a spec and a status.
A
It's more like storage classes, where it's just sort of a data blob, and the API generation stuff with the API server builder doesn't like that: it tries to auto-generate spec and status stubs, which I then have to go delete from the generated code, which is kind of a pain. I haven't checked out kubebuilder; I don't know if it also makes the assumption that everything has a spec and a status and will have the same problem with its code generation.
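For reference, here is a minimal Go sketch of the MachineClass shape being discussed — modeled on StorageClass, with top-level fields instead of the usual Spec/Status pair, carrying a reusable provider config blob. The exact field names are assumptions for illustration, not the final API.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// MachineClass holds provider configuration that Machines, MachineSets,
// and MachineDeployments could reference instead of each embedding a copy.
// Like StorageClass, it is a plain data object with no Spec/Status pair,
// which is what trips up codegen tools that assume every type has both.
type MachineClass struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// ProviderConfig is the opaque, provider-specific configuration blob.
	ProviderConfig runtime.RawExtension `json:"providerConfig"`
}

// MachineClassList is the standard list wrapper for MachineClass.
type MachineClassList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []MachineClass `json:"items"`
}
```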
A
And I'm hoping the answer is no, because I think we already came to a consensus in this group that we wanted to have this feature as part of the API. I don't see anybody speaking up or shaking their head or objecting in chat, so there's still time for people to object if they think this design doesn't make sense, but I think we should start pushing forward and try to revive that PR. So maybe you and I can chat offline, and I can point you to where my code is right now; I can just push it to my fork and you can take a look at it and see if you have any brilliant ideas for how to deal with the codegen. And maybe we should also sync with Jason, because it's possible kubebuilder with CRDs solves the problem, and we should just push on that effort first and then not have to deal with the codegen wrinkles of the API server builder. Make sense? Okay.
A
So I'm not seeing Daniel on the line today. I think what happened is he sent out a poll to figure out when people could meet, and there were roughly two groupings of people: one that looked like they could meet at the timeslot he's running on Tuesdays, and one that looked like they could meet at this timeslot on Mondays, and he volunteered to run the Tuesday one.
A
There was an email sent out to the mailing list. I don't know how long the data for those things floats around, because we're using one of those external sites where you click in what times you're available. So yeah, if you dig it up and see if it's still there, that would be useful, because everybody put their name in with the times they're available, and so you can probably see the cluster of people, talk to them, and see if they're still interested.
I
I just want to make a PSA: we are trying to switch this SIG Cluster Lifecycle meeting to be more generic, to apply to all of the folks under the larger umbrella, and one of the topics that I want to bring up and discuss, as we start to have this set of federated repositories, is unification of the process for triaging inbound issues. We have a well-defined process that we've used for the kubeadm repository, and those who have come up through the ranks have done a good job of helping to distill it down; it helps a lot for keeping things up to date and managing things. So I think during the next SIG Cluster Lifecycle meeting, which could potentially be in two weeks, we should be able to have a conversation there, and it might be worthwhile for folks who are maintainers of other repos to go to that meeting.
D
Just me personally, I would prefer to start next week, because I couldn't make it to this week's meeting; I have another internal meeting that I must go to at that time. So that's just one request, but if you guys can't accommodate it, I'll just watch the video, I guess.
M
Something about the topic of splitting the meeting — I mean not splitting the meeting, but making it bi-weekly. Cluster API has a lot of separate meetings related to the providers, and we have a bunch of problems in kubeadm, and we already have one hour a week for kubeadm. So maybe we should consider doing something about that.
A
So it's the Cluster Lifecycle meeting where we can evangelize the existing kubeadm issue triage process and try to make it consistent across the other repos, which is especially pertinent to people in this group, because there are a lot of folks here who are starting to manage some of the provider implementation repos, and we're going to want to follow a consistent process on those. It's maybe more pertinent than some of the others.
D
This is more of a question for myself. I've been trying to find this information in the repo and I haven't been able to find it. I've seen the Cluster API alpha feature list, but do we have anything in the repo that talks about the schedule — what is the target for the alpha release? Because I'm coming into this later than most of you guys, I don't know that information.
A
Yeah, so we've talked about this verbally a couple of times; if you search through the meeting notes you might find something. I think what I'd mentioned is that my goal is to have something around the end of this release cycle. If you look at the Kubernetes cadence, a new release gets cut every three months, and we don't need to be tightly tied to exactly the date when they cut that release, but around the same time it would be really nice to have an alpha. There's a lot of marketing material and blog posts and so forth that goes out around then, and if we can piggyback on that and say, "here's a new tool for cluster management, you can use it, it's alpha, here are the links," that would be a really nice wave to ride.
A
That's going to depend a lot on the milestone. We need to make sure the milestone is up to date with the issues we think are actually going to block us from saying we have an alpha release, and then burn those down. So a lot of whether or not we can make that proposed schedule is based on whether we can get all of those issues fixed up. Okay.
A
I think that's it; thanks everyone for coming. Just a reminder that there are all of the various breakout meetings; I think most of them are earlier in the week than this meeting, so if you're interested in going to those, you can catch them at the beginning of next week. Otherwise, I will not be here next week; Chris is going to run the meeting. I've got another meeting at that time.