A: Hello, everyone. Today is Wednesday, August 18th, and this is the Cluster API office hours. As always, please follow the code of conduct when you're in this meeting; if you don't have access to this document, you can get it by joining the cluster-lifecycle mailing list. Please use the raise-hand feature of Zoom if you'd like to speak, and feel free to edit the agenda in the doc if you want to add any discussion topics. If you haven't already, make sure you add your name to the attendee list, and let's get started. All right, I don't think we have any PSAs today.
A: In terms of releases, we have v0.4.1, which was released six days ago, and I think folks were already asking about the timeline for v0.4.2, but I don't think we have an ETA for that yet. Am I missing anything? Anyone want to add anything on top of that?
A: All right, cool. I don't think we have any blocking issues at the moment, but let me just check. Yeah, okay. And then, I guess, proposals: would anyone like to give any updates on any of the currently open proposals, or is anyone blocked on anything?
A: Okay, I'll assume that's a no, so let's keep going. All right, we have the first topic: the clusterctl generate cluster enhancement.
B: Oh my god, I can't change it.

C: That's fine. Yeah, awesome. So the issue here documents basically two things we want to achieve. The primary goal would be to keep the quick-start flow as simple as possible, even when someone's dealing with ClusterClass, right? So it should...
C: It should still remain as simple as a user running clusterctl generate cluster. And the second thing is to provide a way for all the providers to opt in whenever they're ready to start supporting ClusterClass, and the way we are proposing to do that here is through the template examples that we share, the ones released as release artifacts by the providers. We will use the cluster definition that's provided within the template, and if the cluster definition points to a topology, we'll check whether the class it references is already defined in the template. If it is, we just use it; if not, we go and fetch a ClusterClass definition file from the release artifacts and then add it to the template that gets generated at the end, the big YAML file. So the implications of that are explained here; there are two things that would change.
C: The template would just have the cluster definition pointing to a topology, and there would be a separate file with the ClusterClass definition, the topology definition, the templates and everything. And again, this is going to be completely optional for providers to move to: if the providers continue to just ship the existing cluster definitions that they have been shipping, it will still work, because the cluster will not have any pointer to a topology.
C: More details and examples are listed in the issue if anyone wants to go through them, but yeah, I opened this issue to start a conversation and get feedback on whether this is okay, and on whether there are any other points that need to be addressed or thought about.
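To make the proposed flow concrete, here is a minimal sketch of the template post-processing described above, assuming the generated template has already been parsed into objects. All names here (Object, ensureClusterClass, the fetch callback) are illustrative, not the actual clusterctl implementation:

```go
// Illustrative sketch only; not the real clusterctl code.
package generate

import "fmt"

// Object stands in for one parsed YAML document of a cluster template.
type Object struct {
	Kind          string
	Name          string
	TopologyClass string // set when a Cluster defines spec.topology.class
}

// ensureClusterClass appends the referenced ClusterClass to the template
// when a Cluster points at a topology whose class is not already included.
// Old-style templates (no topology) pass through untouched, which is what
// keeps the change optional for providers.
func ensureClusterClass(template []Object, fetch func(name string) (Object, error)) ([]Object, error) {
	for _, obj := range template {
		if obj.Kind != "Cluster" || obj.TopologyClass == "" {
			continue // no topology: existing flow, nothing to do
		}
		if containsClusterClass(template, obj.TopologyClass) {
			continue // class already shipped inside the template
		}
		// Fetch the ClusterClass definition file from the release artifacts.
		cc, err := fetch(obj.TopologyClass)
		if err != nil {
			return nil, fmt.Errorf("fetching ClusterClass %q: %w", obj.TopologyClass, err)
		}
		template = append(template, cc)
	}
	return template, nil
}

func containsClusterClass(objs []Object, name string) bool {
	for _, o := range objs {
		if o.Kind == "ClusterClass" && o.Name == name {
			return true
		}
	}
	return false
}
```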
A: Awesome, thanks for opening such a detailed issue. Does anyone have any questions?
A: I have a question. You say you would like to keep the quick-start flow as simple as possible when using ClusterClass; is the implication that at some point we'll want to move the quick start to using ClusterClass? As in, do you see ClusterClass being a good scenario for a first-time user, or is it more something that you'd want to keep out of the quick start and that would be more for advanced use cases?
C: So ClusterClass itself has a bigger impact when used the second time, once you basically already have the class. When you're creating your first cluster, the flow for creating it using ClusterClass or the old approach is roughly the same, but the impact shows when you start creating your next cluster, the second cluster.
C: So I would say that it's completely up to the providers to choose whether they want to do ClusterClass for their very first cluster, or whether they choose to do a simple cluster as the first one.
D: Yeah, first of all, thank you for opening the issue. I think that, yeah, I totally agree: the advantage for the user is when they start creating more clusters of the same shape. But I think that as soon as ClusterClass is mature enough, I would recommend that providers start migrating to ClusterClass, so we start shifting the users towards the new model.
A: Okay, makes sense. Yeah, I guess the only reason I'm saying that is that I feel like clusterctl was meant to do mostly management cluster operations from the start, right? And the generate command is more like a convenience for the quick start. I don't know if we want to keep expanding the scope of clusterctl in terms of workload cluster creation if it's not going to be a quick-start scenario, because most users can't use clusterctl generate already anyway: for more advanced use cases, their configurations aren't going to be in the provider repo. So that's the only thing I would think about: does it really make sense to take this on, or do we want to stay away from that?
D: I think that we should do this, because with clusterctl being usable as a library, clusterctl generate can be used, let me say, as a separate piece, and given the reports that we got in issues in the past, I'm pretty sure that people are using this to create clusters after init.
B: You have to turn the feature gate on before using it. And yeah, for clusterctl, I think it would be great to support this, mostly because we're also distributing it in brew, and potentially we could do this with other Linux package-management systems as well.
A: All right, feel free to add comments to the issue afterwards if you have thoughts. All right, Jacob, do you want to talk about the IPAM integration proposal?
E: So I haven't worked on it for a few weeks because I was on vacation, but now I'm back to it, and I'm currently having, well, a problem with the consumption part of the proposal. Just as a brief summary, the idea is to have an IPAM integration, an integration with IP address management, that works similarly to how persistent storage works.
E: And so, from that perspective, the most obvious thing would be that it also creates any dependencies that the machines have; in this case, that would be the IP address claims, because you would need one claim for every interface that needs an IP address, for each machine.
E: So the question is: does CAPI create those claims, or does the provider have to create those claims on its own? There are some pros and cons to both approaches, and I just wanted to ask for some feedback on those two options. I did so in Slack yesterday, but I just wanted to mention it here.
E: Ideally, I guess, because that's a lot of stuff to present and to review, it should probably happen offline or async, so I'm just advertising it here, basically.
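For reference, the claim pattern being described, modeled on how PersistentVolumeClaims bind to volumes, could look roughly like this. The type and field names are a sketch of the idea only, not the proposal's actual API:

```go
// Hypothetical shape of the claim-based IPAM flow discussed above;
// one claim would be created per interface, per machine.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// IPAddressClaim asks an IPAM provider for one address from a pool.
type IPAddressClaim struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   IPAddressClaimSpec   `json:"spec,omitempty"`
	Status IPAddressClaimStatus `json:"status,omitempty"`
}

type IPAddressClaimSpec struct {
	// PoolRef points at the pool the address should come from,
	// analogous to a PVC's storage class.
	PoolRef LocalObjectReference `json:"poolRef"`
}

type IPAddressClaimStatus struct {
	// AddressRef is filled in by the IPAM provider once an address
	// has been allocated, like a PV being bound to a PVC; the claim
	// stays fulfilled for as long as the API objects exist.
	AddressRef LocalObjectReference `json:"addressRef,omitempty"`
}

type LocalObjectReference struct {
	Name string `json:"name"`
}
```

The open question raised above (whether CAPI or the provider creates the claim) is independent of this shape; either party could create objects like these.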
C: So I have a question, and this may be based on some of my ignorance. There is the move of a cluster and so on, right, which stays within one provider. Can we be sure that the claimed IP address remains stable when the cluster is moved?
C: So, yeah, okay, for example: I'm actually talking about something generic which has been going on in the architecture channel. Each Kubernetes cluster has something which is within the cluster, plus a set of provider properties.
C: Now, there is the general question of how reproducible a cluster can be: whether you can snapshot and recreate a cluster at a different location or at a different point in time. So, if you have a claim, will there be some way of actually reusing the IP address if you just restart the machine, for example, or restart the cluster?
E: I'm still not 100% sure what you mean, but if a machine creates a claim and then that claim gets fulfilled with an IP address, it will stay that way as long as the API objects exist, at least in how the proposal models it right now. If you want some more persistent behavior, that may be something provider-specific that you could do, so that the provider, for example, hands out IP addresses based on the name of the machine that requests them, or things like that.
E: But as long as the objects stay the same, I would have it in a way that the machines keep the same IP addresses. Okay.
A: All right, one quick comment I have: you said that CAPI creates the machines. That's not always true; as a user, you can also create individual machines. You don't have to go through a MachineDeployment with a machine template. So, just something to keep in mind: I think it should still work if the user creates an individual machine.
A: All right, everyone else, please take a look and add any comments or thoughts you have on the two approaches.

A: All right, I can't see because it's hiding, but I'm pretty sure... oh yeah, Ignition support. Go ahead.
I: We also got several engineers, formerly from Kinvolk, now Microsoft, to work on that as well. And yeah, Vince recently suggested that perhaps, with the move to v1beta1, we should make this PR a separate provider, and I disagree, because I think there would be too much boilerplate and copy-pasting of the code, or work making it importable as a dependency, to make it a separate provider. I would like some guidance on how we can move this forward. At the moment, on the Kinvolk side, we've been a little bit unorganized and we didn't push it forward; we didn't get time to write the end-to-end tests which were suggested. I think you suggested adding those. But we again have resources to work on that now, so, yeah, I wanted to hear your thoughts and maybe gather some ideas.
B: There are also, like, two sides to the current code base and the types that we're supporting. So, just to clarify what this is trying to do: we have the CABPK types today, and those include things that are specific to cloud-init, and what this PR is trying to do is make most of those fields compatible with Ignition as well, if you choose to use Ignition. Is that a fair recap of what we're trying to do?
B: Yeah. So there were some concerns that popped up here and there; for example, today there's a freeform string for additional config, which I think we definitely need to validate.
B: There was the question of ownership: who's going to own the code, how we're going to test it over time, and, if this breaks, who we can call on to fix it. And there's definitely the question of end-to-end tests, which, given that we've released 0.4 at this point, I do think should probably come with the PR, or we need to block the release.
B: The other thing that we could do is put all of this behind a feature gate as well and make it disabled by default for now, so that we're sure all the code is gated. And yeah, I guess that's mostly it.
B: The most important thing is to make sure that we have a pool of people we can rely on if something breaks. Personally, I don't have any experience with Ignition, and it was really hard for me to debug it as well.
I: Yes, I think from the ownership perspective we already have two engineers, me and Dongsu, who is also on the call, who are the owners and reviewers. We also got Johannes, who recently started working on that, so I'm sure he'll also be capable of doing the reviews or managing that. And we still have the Flatcar team, so we might be able to pick someone extra from that point of view. About the feature gate...
B: I think what you can do is what we have done for ClusterClass, which is being developed: you can use the feature gate in the webhook code to gate those fields, so you say, hey, you cannot use this unless you actually enable that feature gate.
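A minimal sketch of that kind of webhook-side gating, assuming a feature gate wired through k8s.io/component-base/featuregate; the gate name, field value, and helper are made up for illustration, not the actual CABPK code:

```go
// Illustrative only: reject a new API field in a validating webhook
// unless its feature gate is enabled, in the style described above.
package webhooks

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

// Hypothetical gate name; it would be disabled by default until promoted.
const KubeadmBootstrapFormatIgnition featuregate.Feature = "KubeadmBootstrapFormatIgnition"

// Gates is the controller's shared feature gate registry, populated from
// the manager's command-line flags at startup.
var Gates featuregate.FeatureGate

// validateFormat is called from the webhook: the Ignition-specific value
// is rejected unless the operator explicitly enabled the gate.
func validateFormat(format string) error {
	if format == "ignition" && !Gates.Enabled(KubeadmBootstrapFormatIgnition) {
		return fmt.Errorf("format %q requires the %s feature gate to be enabled",
			format, KubeadmBootstrapFormatIgnition)
	}
	return nil
}
```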
I: Oh, okay. Yeah, I mean, I don't mind if you feel more comfortable with doing that initially; then, yeah, I think we can do that. About the tests, yeah, I agree on the issue that we need to add some, and I know that right now you have this, not the greatest...
I: ...way of testing the cloud-init config, because you just parse it, I think, by hand, and then you execute it in Docker. So we could do a similar thing with Ignition, I suppose, although we don't have a Flatcar-based Docker image, and we rely on systemd there. So I'm not sure how this is done in the cloud-init case, but we may have to come up with something there, I don't know, running systemd in the container or something. But this is all assuming that we could get the API changes in, right?
D: Yeah, I'd like to raise a slightly different point, which is somehow related to this PR. As of today, in the KubeadmConfig object, we basically have two concerns addressed, now becoming three. The first one is the node bootstrap, which is the kubeadm ClusterConfiguration, InitConfiguration and JoinConfiguration; and then, in the same flat space...
D: ...we have a lot of cloud-init stuff: NTP, files, stuff like that. Now we are adding a third concern, which is Ignition, and that partially overlaps with cloud-init, because in part of the Ignition support you use cloud-init as well. So my major concern, apart from, let me say, maintaining the code and stuff like that, is that from a user point of view the KubeadmConfig is kind of a bulk of slightly related concerns, and it is kind of hard to use.
D: So, my point... I mean, I don't have a solution in mind. I was hoping that the work on machine bootstrap was something that was going to address this, but at least in my mind this is something that we have to improve in the future, in terms of usability and in terms of cleanness of the API design.
I: Yeah, I've also suggested that perhaps the cloud-init conversion could happen somewhere else, and you'd have some generic type, so that the bootstrap provider only generates what it needs, and then the extra information can be added somewhere else, or the conversion to cloud-init or to Ignition could happen somehow externally, yes.
D: Sorry, I'm not really concerned about the code. What I'm more concerned about is the API surface, or the goal of the bootstrap provider, because it is becoming too many things, and I fear that this will be confusing for the user and also for the future of this component. So it's kind of a...
A: Yeah, I'd like to add to that: plus one to what Fabrizio said. For me, looking at this, one of the promises of Cluster API when we discussed the architecture is that it's completely modular, and you can plug in your own infrastructure provider or your own bootstrap provider, right? That's one of the big pros of using it, and that's what we've done by bringing the kubeadm bootstrap provider into the main CAPI codebase. But originally...
A: But then, from a user perspective, it becomes non-modular, and it's not what it was supposed to be, because then you're making tight couplings between the bootstrap provider and Cluster API. And if I remember correctly, the whole point at the beginning, when this PR was opened, was that there was this proposal for secure kubelet bootstrapping, which was going to introduce a whole new bootstrap provider that would solve a lot of these problems.
A: And so we were saying we're just going to add this into CABPK for now to allow Ignition support as a quick remedy, and then later on it's going to be replaced by this new thing. But we're at the point now where it's been a while, the PR is still open, and we've already done the 0.4 release since, so it's clearly not a quick fix anymore. So I think we should take a pause and think about what we're trying to do here. And I see a few hands raised.
J: It's worse than what you think, in some ways. In terms of our current architecture, we actually have close coupling between the infrastructure providers and cloud-init as well.
J: So yes, secure kubelet authentication resolves a bunch of problems, but it doesn't resolve control plane initialization and protection of key material during control plane initialization. Right now, AWS has specific code to store data in another service and encrypt it, and then it hacks cloud-init, decrypts the data, and restarts cloud-init. And there is a PR for Ignition support in CAPA as well, which adds an Ignition-specific mechanism to transport control plane key material securely.
J: This sort of comes back to the project proposal that we never reached proper consensus on, which was to redesign the bootstrapping mechanism, or at least the boundary between what's in the core repo and everything else. And I certainly think, moving to a position where we can... we definitely need some sort of agent...
J: ...that's on a machine and is able to pull key material from whatever secret store you have; then we can more easily just string-template an Ignition template or cloud-init and not have to care about the particular bootstrap mechanism. I think we need to revisit that, and then that moves us out of having to care about cloud-init and Ignition or whatever.
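A minimal sketch of that agent idea, assuming a pluggable secret store; the interface and names are illustrative only, not an agreed design:

```go
// Illustrative sketch of the node-agent idea described above: the agent
// pulls key material from whatever secret store the deployment uses, then
// fills in a plain string template (Ignition or cloud-init alike), so the
// core no longer cares about the bootstrap format.
package agent

import (
	"bytes"
	"context"
	"text/template"
)

// SecretStore abstracts wherever the key material lives (e.g. a cloud
// secrets service); each infrastructure could ship its own implementation.
type SecretStore interface {
	Fetch(ctx context.Context, key string) ([]byte, error)
}

// RenderUserdata fetches the control plane key material and substitutes it
// into a bootstrap template, regardless of the template's format.
func RenderUserdata(ctx context.Context, store SecretStore, tmpl string) ([]byte, error) {
	ca, err := store.Fetch(ctx, "control-plane-ca")
	if err != nil {
		return nil, err
	}
	t, err := template.New("userdata").Parse(tmpl)
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]string{"CAKey": string(ca)}); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```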
I: Yeah, sorry, I don't have any comments to add to what was just said, so if someone else wants to jump in, please go ahead. I have some other, like, next steps for this discussion, I think.
A: Yeah, I guess, about what you said: it makes sense to me. The question is just what we do in the meantime, because this isn't gonna happen overnight, and right now people need Ignition support, so we're not gonna, you know, block that. I think it's unfair to block Ignition because we've had a bad design from the beginning.
I: Yeah, so let's say we actually create a separate bootstrap provider. I wouldn't make it Ignition-specific, because the actual technology behind this is kubeadm. So I think that would again be a kubeadm provider, but with Ignition support, because, yeah, I don't know...
I: Well, on the other hand, having a separate provider would allow us to actually change the types, so maybe we could make it more modular and less cloud-init-specific. But yeah, do you think that would be a better approach for this, at least in the short term?
A: I personally don't think forking the whole thing and having duplicates is better, and also, as Jason points out... Jason, can you expand on that? The control plane provider would require forking too?
B: Wait a second; you know, we'd need to reboot the whole thing, and then things kind of stopped at that point, because forking KCP is, like, another thing we'd need to make better. And in general, I think what we're trying to say, and it's kind of been referred to as a mere kubeadm bootstrap provider, is that, truthfully, most of the users that I've seen do use kubeadm somehow.
B: I guess, like, some data to then give to the machine, and then that's the machine bootstrap; that's the actual part that should be in place. And then, if someone wants to replace the whole chain, so both the Kubernetes bootstrap and the machine bootstrap, they should be able to do so as well, thinking about the Talos provider, for example, which replaces the whole chain.
B: I do think, though, that making kubeadm a little bit more exposed on one side of Cluster API, but replaceable, and the machine bootstrap truly separated from kubeadm, would be, well, two wins: because on one side we could actually expose the kubeadm types, the generic types that we now have in the code base, a little bit more, and if someone wants to use them, that's great; or someone can use a completely different provider for Kubernetes bootstrapping, which is also okay. And then we could use this to provide a better user experience as well. And then the format, it's actually like...
B: It's a huge problem to solve, and I don't necessarily think we need to block this. I do agree with Cecilia that we've been discussing this for six months at this point, so it's probably fair to merge this with, you know, a feature gate, end-to-end tests potentially, and some docs on how to use it.
I: Yeah, so right now the bootstrap provider creates a secret which is then consumed by the machine. So if we introduced a separate type, we would need another object, right, with the outputs of kubeadm, well, of the bootstrap provider, and this would then be consumed by the machine bootstrap provider, right?
B: Well, I don't necessarily want to solve this here, because it's probably gonna be a long discussion, but, at a high architectural level, we need an input and an output, and those are going to be defined by kubeadm; and something in the middle will take those inputs and outputs, generate something else, and validate the data, because there are going to be some fields that you cannot use across different formats, like cloud-init.
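As a rough illustration of the input/output split being described, here is a sketch, with invented names, of how a generic kubeadm payload could be separated from the format-specific rendering:

```go
// Sketch of the split discussed above: kubeadm defines the input/output,
// and a converter in the middle renders it to a concrete format. All
// names here are hypothetical, not an agreed-upon API.
package bootstrap

// KubeadmOutput is the format-agnostic result of the kubeadm bootstrap
// provider: the kubeadm config plus extra files and commands.
type KubeadmOutput struct {
	KubeadmConfig []byte // ClusterConfiguration / InitConfiguration / JoinConfiguration
	Files         []File
	Commands      []string
}

type File struct {
	Path    string
	Content []byte
}

// FormatConverter renders the generic output into the userdata a machine
// actually consumes; validation lives here because some fields do not
// translate across formats (e.g. cloud-init vs. Ignition).
type FormatConverter interface {
	Validate(out KubeadmOutput) error
	Render(out KubeadmOutput) ([]byte, error)
}

// A cloud-init converter and an Ignition converter would both implement
// FormatConverter, and the machine bootstrap step would pick one.
```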
I: Yeah, so, sorry, to recap: the steps forward for this are to add end-to-end tests, a gate for the API fields, and some documentation on how to use it, and then this could possibly be merged, right?
A: Yes, sorry, I'm trying to recap this in the notes: so, add end-to-end tests and... what else did we say? Sorry.
A: All right, I'll continue with this. Vince, do you want to start the next topic? Actually, let's go to the other one. Okay, what are the other two, June?
L: Hey, I just want to mention here that we have been working on another experimental provider, which is built on top of Canonical MAAS. We just released v1alpha3 support, and v1alpha4 is on the way. So if anyone is interested in another bare metal provider, feel free to go ahead and try it out, and let us know. Thanks.
L: Well, I think the Metal3 provider is mainly using Ironic, but this one, built on Canonical MAAS, is a kind of different way of doing the provisioning of the bare metal infrastructure.
L: In our experience, the MAAS provider can provide a more cloud-like experience from the user's perspective, and it is just a different way of doing things; different users might have different requirements. This is just another experimental provider.
A: Okay, cool, thank you. All right, let's keep going. We have two topics left, and I have a feeling they're both gonna be lengthy, so let's just time-box each one to about eight or nine minutes and try to hopefully get through both. David, do you want to start?
K: Yeah, just a quick conversation that I had with someone in the community, and it kind of threw me for a loop. So, the individual I was talking with...
K: We were talking about Cluster API, and they said that they didn't feel like Cluster API was living up to the aspirations, to what they expected it to be, and I was shocked by this. I was kind of surprised; I was like, why? And they explained it like this:
K: Well, when I have to go build a cluster, I have to know about every infrastructure provider; I have to understand all the infrastructure provider templates underneath it. The generic API layer is only useful after cluster creation, and even then I might still have to know some serious details about the underpinnings of each infrastructure provider. It really doesn't do the work that I expected it to do, to abstract those infrastructure providers from each other. So, digging into that a little bit...
K: I was curious if others have had that kind of feedback, if anybody has talked about this with other people, or if you've thought about it. And one thing that I was thinking about is: is there some way we could provide enough metadata that folks wouldn't have to describe the underlying infrastructure provider's details, and would be able to say, hey...
K: ...I want a node pool with, you know, between four and eight CPUs and 16 to 32 gigs per node, with a GPU or something like that, and then the infrastructure provider figures out what the right SKU to build is, what the optimal infrastructure configuration is given these generic requirements. Having something at that level, maybe you don't have to give your infrastructure reference for everything; the infrastructure provider is intelligent enough to build this kind of stuff. I don't know, I just wanted to throw it out there.
K: I was kind of taken aback by it. I hadn't thought about it, but this was really thought-provoking, and I wonder if others have been thinking about this.
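For illustration, the kind of provider-agnostic request being described might look something like this; the type and fields are purely hypothetical, not an existing Cluster API type:

```go
// Purely hypothetical sketch of the idea above: the user states generic
// requirements and the infrastructure provider resolves them to a SKU.
package v1alpha1

// NodePoolRequirements describes what the user wants without naming a
// provider-specific instance type.
type NodePoolRequirements struct {
	MinCPUs     int  `json:"minCPUs"`     // e.g. 4
	MaxCPUs     int  `json:"maxCPUs"`     // e.g. 8
	MinMemoryGi int  `json:"minMemoryGi"` // e.g. 16
	MaxMemoryGi int  `json:"maxMemoryGi"` // e.g. 32
	GPU         bool `json:"gpu,omitempty"`
}

// SKUResolver is what an infrastructure provider would implement to map
// the generic requirements onto its own catalog of machine types.
type SKUResolver interface {
	// Resolve returns the provider-specific instance type that best
	// satisfies the requirements (e.g. a cloud instance type name).
	Resolve(req NodePoolRequirements) (sku string, err error)
}
```

As noted in the discussion that follows, this works for common resources (CPU, memory, disk) but gets thornier for provider-specific features.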
A: Thanks. I see a few hands going up; you're first.
N: Yeah, so I guess this is interesting for common resources that exist across providers, such as CPU, memory, disk size and whatnot, but I think it starts to get a little bit thornier once we get to provider-specific features, and I think those would be the things that we wouldn't be able to abstract anyway.
D: I think that one aspect is the user interface of the overall CAPI, and the other is in terms of the infrastructure providers, let me say, addressing some of these problems. For the first layer, I think that ClusterClass, ClusterClass with patches, is kind of the answer, because what we are trying to do in ClusterClass with patching is, basically, to push all the complexity down into the ClusterClass and into the patches, and then what we are surfacing in the Cluster is just the topology.
D: We'll have feedback from those folks as soon as we get the patch proposal out. And in terms of providers, I think this is an interesting point, because basically this person is asking: is there a way to provide better defaults, to not make the user make all the expert choices, stuff like that? That is really something provider-specific, but it is good feedback: provide sane defaults, trying to...
F: Yeah, I would just ask... I'm very new to the project and I've been going through a lot of the documentation, but I would actually be very interested in this and in helping out. So, if anybody's having conversations, I'd like to be in them. I would actually almost state that maybe it's not CAPI's job, and maybe that's something else on top of CAPI that should be doing that. But anyway, yeah, just trying to keep it short.
A: Thanks, and I think that meets Jason's comment in the chat as well, so, all good comments. Great, I think we're gonna move on to beta, if that's okay, just in the interest of time, but this is an interesting conversation and we should definitely circle back. All right, Vince, take it away.
B: All right, so we discussed this a little bit last week; there were some comments about how we get to beta, and how we actually get to it faster, right? I think we could probably all agree that, in terms of the code base and the stability of our operations, we clearly have some APIs to fix, as we discussed before.
B: We are, you know, definitely production-ready; multiple companies are definitely using Cluster API in a production environment. So how do we communicate that to the world? Folks are usually kind of thrown off when they're like, wait a second...
B: ...this is an alpha4? What's stopping us from actually promoting the API types at some point? So the thinking from last week that kind of started making my mind spin a little bit was: why don't we take the current types and just promote them to beta in the next iteration, maybe sooner rather than later, and keep iterating on them? There were a few things that came to my mind that could be beneficial, but there are also a lot of other things that might not be. One example is how we introduce new features: it's imperative that we keep the development up and keep the innovation on new things going, and so...
B: How do we handle, for example, something like the machine bootstrap changes if we were at beta right now? We'd have to create a new type version and then support it for six months or three releases, if I remember correctly; that was the rule. Do we follow the Kubernetes one, or also...
B: Should we make our own rules, which would be a little bit more lax, given that we have a people problem right now in terms of reviewing proposals and PRs, and also approving them and maintaining the code base?
B: So there is a subset of, like, a three-dimensional problem here that we'll definitely need to fix over time. But if there is a desire from this whole community to actually get the types to beta and release a 1.0 version with the current feature set that we have, then we should definitely make a plan of action for what we consider something we need to fix before getting there, and time-box it as well.
B: One idea that we had, for example, was to not accept any new proposals that have not already been either merged or proposed in the past few months, and defer those to the next iteration if they're breaking changes. I mean, if they're additive changes, we would probably require feature gates, and those feature gates would need to be promoted over time. One question that came up was: how do we communicate that a field that gets added to an existing type is actually experimental?
B: We could do some of what we have done for ClusterClass, which is gated: if you set it, an error comes up unless the gate is on. But what if it's a feature in an already existing struct? Do we still do that? And also, how do we handle behavioral changes and contract changes? Those are all things we should probably think about.
B: The huge benefit that I see from moving to beta, apart from the public point of view that the project looks more production-ready, is the fact that we can actually move a little bit faster in terms of releases and breaking changes, so that there shouldn't be a release anymore in the future that has a huge amount of breaking changes, which then forces the infrastructure providers, for example, to take a really, really long time to upgrade to the new API version and minor version of Cluster API.
B: So, as an example: v1alpha3 of Cluster API is on controller-runtime 0.5, and this is a very simple example, then alpha4 is on 0.9. There were lots of releases in between that we couldn't update to because of our compatibility promise. If we released more often, and minor versions could be released more often, we could do more.
D: So the problem is: how do we communicate this? And do we remove the link that we have had from the past? In the past, we have had a link between the release version and the API version: 0.3 goes with v1alpha3, and 0.4 goes with v1alpha4.
M: Yeah, so I was, I guess, thinking more about the informational side of this, in terms of getting the message out and whatnot. Have we considered in the past, or would we think about, adding some sort of blog section or something to the Cluster API book? I think we get a lot of good traffic there, and that would be a place to really promote changes that are happening and whatnot. Just a thought.
A: Yeah, what I was gonna say: plus one. If not using our own criteria for deprecation policies, I think we should at least make them explicit, and I don't think we should necessarily tie ourselves to the Kubernetes ones. But even if we do, we should define our own in the book, even if they're the same. And then the other thing I was gonna say is...
A: Maybe breaking away from tying the API version and the releases together will not only allow us to do quicker releases, with minor releases carrying big features and patch releases with only bug fixes, but will also allow for the concept that not everything is at the same API version. And I think the 1.0 would be a big signal to the community that, you know, we're production-ready and we want people to use this.
A: I think that even if we weren't under that rule, we would still take a long time to completely stop having people migrate from one to the other, and we've seen that already with ClusterClass, for example: we're adding ClusterClass now, but it's not going to be anytime soon that you can't provision clusters without ClusterClass, if ever. So I don't think we should be too concerned about that.
A: I just think that it will enforce something that we're already doing at some level. And I think we're almost at time, but does anyone else want to add anything?
B: I think, just for next steps: a backlog grooming review is probably overdue at this point, so maybe before the end of the month we should do one, and we could probably add a new milestone, which could be 1.0, and try to identify all the release-blocking issues that we want resolved before 1.0. And in terms of APIs, I do agree that we need to break from, you know, 0.3 is alpha3 and 0.4 is alpha4.
B: So, for example, we could also decide that 1.0 is beta1 just for the core types, and that for, say, CABPK and maybe KCP we don't promote them yet, because we could also wait for 1.1 to promote those types. We don't have to do them all at once, and with conversion webhooks these things are pretty easy.
A: All right, I think we're at time, unfortunately, but maybe we can continue this discussion offline if anyone has more thoughts, and we can certainly circle back next week and discuss this some more. But thanks, everyone, thanks for all the comments and good thoughts here, and yeah, have a great rest of the day.