A
Hello, everyone. Today is October 20th, 2021, and this is the Cluster API office hours. If you haven't been here before, Cluster API is a subproject of SIG Cluster Lifecycle, and we follow the CNCF code of conduct. So please be respectful to everyone, and if you'd like to speak, you can use the raise hand feature on Zoom and I'll make sure that I call on you.

A
If you have any topics you want to bring up, you can add them to the agenda under discussion topics. That being said, before we get started, is there anyone who is new to this meeting and just wants to say hi, introduce themselves, and tell us a bit about why they're here?

B
Hey, this is my first time here. My name is Nigel. I'm community manager for the recently released Tanzu Community Edition, and as it uses a lot of Cluster API stuff, I figured I should probably be here on these calls. So that's me; I work at VMware.

A
Awesome. All right, if not, let's move on; I don't see any hands up, so I'll keep going. Let's start with some PSAs. Mike, you have the first two, so I'll let you go ahead.
C
Yeah, so I just wanted to give a quick announcement that we released Kubemark 0.2.3 today. This release updates it to be compatible with the code base from right before we released 1.0, so it doesn't have the v1beta1 objects in it. That's what we're planning for a 0.3 release, which will probably happen within the next few weeks. And then, just as a follow-up to that, this is a little bit of shameless self-promotion.

C
I wrote a blog post on how to set up a development environment using Cluster API, Kubemark and CAPD, and just wanted to share it, and, you know, shout out to the Apple guys, Alex and Ebiji.

C
They helped me debug that and get it working right. And then, as a follow-up to that, I started working on some Ansible playbooks that actually replicate what I was doing in the blog post. So if you want to cut right to the chase, there are right now two playbooks there; one will deploy all the necessary…

A
Awesome, congrats on the release. Does anyone have any questions for Mike about Kubemark or the release?

A
All right. Next, I guess, is release blocking. I don't think there should be anything, since we just released, but Alex, I'll let you take the next one.
D
Sure, thank you. So yeah, I'm glad to announce that the Cluster API Provider KubeVirt code is finally being lifted. We've gone through internal reviews and there are now about eight PRs pending on the upstream repository, so hopefully those will be merged by the end of the week. We did hit a minor issue with Prow itself: ideally we want to use squash for our commits on merges, to keep the history linear, but the current default settings don't allow that; they only allow merge.

A
Awesome, I know a few people were waiting for that. Also, for everyone, if you missed it, there is a KubeVirt channel, #cluster-api-kubevirt, on Slack, so if you have any questions about KubeVirt you can reach those folks there. Vince?

E
Hey, just a quick note on the squash done by Prow: when the bot is configured like that, you will lose the ability to generate release notes with our automation, because we only look at merge commits.

D
Okay, thanks. Yeah, so we were wondering about that, and we noticed that some other repositories and providers use that setting, but not all of them, just a small subset. So yeah, I was suspecting there was kind of a pitfall there, but thanks; we'll give it a thought if we need that going forward.
A
Okay, oops. If not, there was a question in chat from Alberto for Mike, I believe, about the Kubemark provider, asking: what are the medium/long-term plans for the Kubemark provider use cases?

C
I mean, do you want me to answer here, or just answer them in chat, I guess?

C
Yeah, so basically what I've been working on with the Kubemark provider is trying to build up a testing workflow that allows us to use the cluster autoscaler with the CAPI backend and the Kubemark provider, to exercise those mechanisms.

C
I would also like to see about running the CAPI end-to-end tests against the Kubemark provider, to see if maybe we could get some value out of that in this community, but I'm a little less well-versed in the CAPI end-to-end tests, so I need to do a little studying up there.

C
I think, kind of in the medium to long term, in terms of features, I would like to see us have the ability to specify the node shapes, or topology I guess, or the instance types, so that we could use Kubemark to do a little more thorough testing of differently shaped instances in a cluster. I'm still doing some research there and I haven't quite figured out how to make all that possible, but that's kind of what I've been thinking so far.

D
Yeah, actually, to add to the use cases: we find it valuable to test control planes deployed with different providers. So, for example, we'll be testing the performance of the control plane running on virtual machines versus bare metal, and I think for that the Kubemark provider will also be super useful.
A
Yeah, thanks a lot. Okay, great, all right. And then I just added, I guess, a re-announcement, but just in case you missed it: we did release 1.0, so this is very recent for those who haven't checked in in a while. There's a blog post that was published after the meeting last week; if you haven't checked it out, definitely go check it out, with some great quotes from different users that have adopted CAPI.

A
All right, that being said, let's move on to discussion topics. We do have a lot of topics here, so I'm going to try to get through everything, but I'm going to try to time-box the discussions, and I know this first one is probably going to have lots of opinions. So let's just try to time-box it until 10:25 if we can, ideally 10:20, but let's see how it goes. All right, Stefan, I'll let you introduce the topic.
F
Yeah, I guess the most important thing is how we approach the individual questions in this topic. Okay, so, the topic overall: we have now released 1.0, and the question is which changes can we make in which parts of our code, so our code base or API types, and also how do we handle branches? I think maybe the best summary is in the issue linked there.

F
I think the second bullet point or so, which Killian summarized nicely, and Vince answered a few posts below. So I'm not sure what makes sense; maybe I'll start with some statement and then see what others think about it a little bit further down.

F
So, maybe also what triggered this: we had a few PRs which tried to add new, almost-compatible fields to API types, and we had a few other PRs which dropped some packages without deprecation beforehand. And yeah, the question is essentially: what can we do to our API types, what can we do to our code base, and how do we handle branches?
A
Cool, all right. I guess I'll go first; I don't see any hands raised. So, from what I've seen, I think we generally agree on the fact that, basically, minor releases should be features or breaking changes in the code, and new API types or backwards-compatible API types, with no breaking changes in the API now that we're 1.0; and then patch releases should be, like, either...

A
I guess minor features is debatable, but at least bug fixes: anything that's not going to be a big behavior change, that's not going to be impactful, so they can be picked up easily. I guess the big question that we have right now is: are we fast-forwarding the release-1.0 branch to the main branch, or are we starting right now to do backports and not fast-forwards?

A
Personally, I'm in the "let's do cherry-picks" camp, just because I think that we shouldn't release any new big features in 1.0; if we have a new feature release, it should be 1.1. But I'd like to hear what others think as well; I know there were some different opinions on this.
E
So, historically, well before 1.0, we have always waited for the next minor release to do breaking changes, like new APIs or upgraded API types, etc. This is going to change with what we agreed a few weeks ago with the 1.0 release: we're going to stay on 1.x until maybe we do a 2.0, in the very distant future. And so now the question remains...

E
How do we handle, pretty much, backporting and maintaining the old branches, and for how long? If we have to maintain every release branch for six to nine months to a year, that's a big stretch for this small group of maintainers. Kubernetes does that, I guess, although they only do releases on a set cadence, so maybe we can do something similar. But the biggest thing here is probably going to be:

E
How do we handle features, as opposed to breaking changes? Breaking changes will 100% be in the next minor release; it's mostly about the features, right? Like, what does it mean to be a big feature rather than a smaller feature? Maybe we could add some text about that. And then, I guess, feature gates are, you know, potentially a way to get something into a patch release, but we should probably also discuss that.
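Since feature gates come up here as a possible vehicle for shipping something early, a minimal Go sketch of the mechanism may help readers of the notes. The gate names and the map are invented for illustration; Cluster API's real gates live in its feature package and are driven by a controller flag.

```go
package main

import "fmt"

// featureGates maps gate names to their enabled state. Experimental
// behavior ships behind a gate that defaults to off, so it can land in a
// release without changing default behavior. Names here are illustrative.
var featureGates = map[string]bool{
	"ClusterTopology": false, // experimental, off by default
	"MachinePool":     true,
}

// enabled reports whether a gate is on; unknown gates default to off.
func enabled(name string) bool {
	return featureGates[name]
}

func main() {
	if enabled("ClusterTopology") {
		fmt.Println("reconciling managed topology")
	} else {
		fmt.Println("ClusterTopology disabled, skipping")
	}
}
```

The design choice being debated is exactly this: a gated, default-off code path changes nothing for users who don't opt in, which is why it is arguable whether it belongs in a patch release.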
G
Hey everyone, I just wanted to share my experience with projects like this: inevitably some critical bug fix is going to, or I should say, some critical bug is going to arrive, and we're not going to have any control over where in a release cycle that happens.

G
We're going to have to do this at some point, so it'll be easier if we have a standard process that we exercise regularly to accommodate it. Otherwise it's just going to be a big fire drill where we make this up in real time, you know, in an afternoon or an evening, in order to get a bug fix out to our customers.
A
I agree with this: I think maintaining too many of them will become kind of a big overhead, especially with the small group of maintainers that we have right now. But I think we should definitely keep the big API version release branches supported, for our guarantee of, you know, one year or three releases.

A
I don't know if backporting everything to every minor release branch makes sense at this point. I think we could just say: in between minor releases, we backport to the current release branch, and then, as soon as we cut a new one, we switch over to that one. Unless there is, I guess, an exception, like a very critical bug that affects a lot of users on the previous one; then we could do a one-off backport, but that could be case-by-case, I guess. Fabrizio, go ahead.
H
Yeah, I agree with Cecile. So we have a backport, or cherry-picking, process that works pretty well, but I think the exception that we have now is that we have a release-1.0 branch, and I don't know if this branch should stay in sync with main, so we fast-forward it, or whether we cherry-pick and already consider that we have a 1.1. This is my point of confusion.

H
Basically, what I'm trying to understand is: we created the release-1.0 branch faster than we did in the past, right? And I don't understand how we should manage this branch now. Does it make sense to fast-forward it until we have a breaking change, or should we cherry-pick everything, which is kind of tedious?
A
So I think the idea is that we started the release-1.0 branch so that we can keep merging features to the main branch with no problem. But if we do have a bug fix that is discovered in the 1.0.0 release, we can backport, or cherry-pick, that bug fix into release-1.0 and release it as 1.0.1.
E
I guess the question is how many release branches we want to support going forward, because if for every new feature, or set of breaking changes, we have to release a new minor release: do we say we only support N minus two, perhaps, or do we make it time-based, say six months after N minus two is done? And then let's just go document it.
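The "N minus two" idea can be stated concretely. This is a toy sketch of the policy arithmetic only; no such policy had been adopted at the time of this meeting, and the branch names are just the project's usual release-1.x pattern.

```go
package main

import "fmt"

// supportedBranches sketches an N-2 policy: the current minor release
// branch plus the two before it stay supported; everything older is
// end-of-life. Purely illustrative.
func supportedBranches(currentMinor int) []string {
	var branches []string
	for m := currentMinor; m >= 0 && m > currentMinor-3; m-- {
		branches = append(branches, fmt.Sprintf("release-1.%d", m))
	}
	return branches
}

func main() {
	// With 1.2 as the newest minor, release-1.2, release-1.1 and
	// release-1.0 would all still receive backports.
	fmt.Println(supportedBranches(2))
}
```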
I
Yeah, so, funnily enough, we had exactly the same conversation in the AWS meeting on Monday. I would be very much tempted to have the N-minus-two policy. I think what complicates things for us is the continued existence of v1alpha3 and v1alpha4 as things that people use.

I
I think, concretely for us, the AWS provider, we are just going to do 1.1 as our next release, and only if 1.0.x is completely broken would we backport. And then we would continue backporting; we did it for v1alpha4 and v1alpha3 for some time, but we'd almost start with a brand-new policy now that we're on 1.0, so that we don't have to maintain like 20 branches.
G
I think, just based on what I've heard, the cherry-picking makes sense, but I think it is a trade-off of what you're optimizing for, right? We're either optimizing for reducing... if we believe there will be a high volume, kind of like what was mentioned in chat...

G
If there are going to be many cherry-picks, then it may make sense to use a strategy where we fast-forward until we need to begin development on 1.1 features. But, on the other hand, if we think that we will be progressing on features quickly, then that's going to happen very soon anyway.

G
So then we would want to optimize to reduce the...
A
Yeah, that makes sense. I think the big question here is: what is a backportable fix or feature? I guess we have different tolerances here. Some people see it more as: only the critical fixes, the ones that are, you know, blocking and impactful, should be backported; and then others are saying anything that's not breaking can be backported.

A
I think, if we are cherry-picking, it makes sense to be closer to the "only bug fixes" side, so that we can limit the number of cherry-picks, and then release often for new features. Just because we're not releasing them in patches doesn't mean we can't release some minors; and it helps users know: is this a behavioral-change release that I should, you know, test more thoroughly, or is this just a bug fix that I should adopt as quickly as possible?
G
Maybe, in a crawl-walk-run type gesture, we can start by just backporting critical fixes, and as we do that we can evaluate how painful and high-maintenance that process is; and if it's not too painful and not too high-maintenance, then we can unlock more backporting.
E
It's not just the APIs, though, right? Cluster API is also used as a library, and there's code in there that's being used not just by providers but also by people outside of the SIGs. I'm worried about the churn if folks don't feel comfortable upgrading between minor versions; and why not provide those small improvements in a patch release? Which is what Kubernetes releases do, although they have more machinery to cherry-pick across a number of different branches, which we do in part, although...

E
If we do have breaking changes along the way, those cherry-picks won't work, so they will have to be done manually.

E
There is also a consideration which Naadir brought up: we should probably also think about the API version in relation to the release branch. There's the code that has to be supported, and that's the release branch; but then there is the API that has to be supported, where we have promised support and guarantees for a while.

E
So, as an example, we could say N minus two code-wise, plus, I guess, whatever the latest releases are for the API versions that are still supported. As an example, v1alpha3 and v1alpha4 are still supported, I think, for another six months from the last release, so we need to keep supporting those two branches at least for the next six months, plus the N minus two; that's a lot of branches to support. Maybe we can automate this more, although I don't want to get into a place where the noise within the repo is so much that we can't keep up with it.
E
Okay, so, action item for me: I'll document this in the contributor-guide PR that I have out, and then we can do some lazy consensus there.

A
Okay, let's continue this conversation in the PR; if you are watching the recording, feel free to chime in, we'd love your opinions. And let's move on. Thanks, everyone, for your patience; sorry this took a while, but I think this is a very important topic to cover. All right, Stefan, you have the next one.
F
Yes, can you click the PR comment link? Yeah, sure, that's the best summary. Okay, so we merged the proposal, I think last week or the week before, and that PR has the corresponding types, and we had a discussion there. So I just want to bring it up and see if we can get some consensus.

F
So, essentially, we added new types, and we added a suffix to namespace the structs a bit, because we have one big API package, basically, and if you call something "Variable" it's not really clear what it belongs to. We used the suffix because the other types we added with the first ClusterClass implementation ended with a suffix, and in those cases it also made semantic sense; that's not really the case anymore. So what it comes down to, basically, is what I wrote there to summarize the current situation.

F
So "current" is what I currently have: something like VariableClass or VariableDefinitionClass, where the struct name itself isn't that great, because you read "VariableClass" and you don't know what it is; it's not really a class of variables, it's just the variables which are used in the ClusterClass struct. So one proposed change is to switch to a prefix; I think that makes more sense, and it looks like that is what the Kubernetes core API is doing.

F
Yeah, sorry, I'm looking for opinions on: would it be okay to change those structs, or is it something like, oh, we can't change them because we already shipped them? So, two things. First, which of those names would be better: prefix, suffix, something else? And if we decide that a prefix is better, then that brings up the question of whether we're allowed to change the struct names of other ClusterClass types we have already merged; the only reason why we maybe can change them is that ClusterClass is experimental.
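To make the suffix-versus-prefix question concrete, here is an illustrative Go sketch. The type names are invented stand-ins for the example, not the merged API types.

```go
package main

import "fmt"

// Suffix style: "VariableClass" reads like "a class of variables", which is
// misleading; it is really the variables section of a ClusterClass.
type VariableClass struct {
	Name string
}

// Prefix style: the owning type leads, as in Kubernetes core API names like
// PodSpec or NodeStatus, so the association is unambiguous when all types
// share one big API package.
type ClusterClassVariable struct {
	Name string
}

func main() {
	fmt.Printf("%T vs %T\n",
		VariableClass{Name: "region"},
		ClusterClassVariable{Name: "region"})
}
```

The payload is identical either way; the debate is purely about which name a reader can decode without opening the package.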
A
Okay. So, just in the interest of time, and also because we're probably not going to get to consensus in this meeting, I'm just going to say: everyone, if you have any suggestions or ideas, can you please click on the PR link and comment there, and let's try to decide.

A
Does anyone have any questions to clarify, or any strong objections or suggestions that they want to share? Okay, cool. Yeah, let's do that, and then hopefully we can come to an agreement there.

A
All right. Vince, the operator.
E
Yeah, I just wanted to ask: I've seen some activity on the operator, and it seems like we're now switching to the Go module approach that we talked about a while back. I was wondering if it would be useful for the group to have a separate repository at this point.

E
It would decrease a little bit of the noise that I've been seeing, and we could remove the operator 0.4 branch, which now seems, you know, a little bit out of place next to 1.0; and it would also give us more freedom to elect new maintainers and approvers in the separate repo and continue the work there.

C
Yeah, I'm not working on it, but one of my colleagues, Alex, is, and the best I can do, I guess, is probably point him towards this. It's a little late in the evening for him, so he usually doesn't make these meetings.

A
Yeah, let's do that. Maybe we can talk about it in Slack; we can start a thread. And, by the way, if anyone is curious about this work, or wants to help, potentially in the new repo, you should reach out to Alex and discuss what's been going on there.
H
I'll try to keep this short. The first one is about moving packages not intended for external usage to internal. The issue came up on a PR where we were trying to remove third-party code from CAPI, and then we discovered that this code is used by a provider; but I think it also fits well with this discussion about guarantees and about versions.

H
So, in CAPI, the surface of the code base is now huge, due to historical reasons: due to the coupling to the kubebuilder scaffolding, and due to the fact that we merged together CAPD, KCP and CABPK. For several reasons the surface is huge, and I think that not everything that is there is intended to be shared outside.

H
We have some utilities that we want to expose as part of CAPI as a library; but, for instance, the internal implementation details of a controller, or, I don't know, the conversion between Kubernetes types, which is internal to CAPI, are not something that we want to expose as a library; they're something that we consider an internal detail.

H
So the real problem here is how to detect what is already used in the wild and what is not.

H
What I did in a first research pass on this topic: I took a list of providers (CAPA, CAPG, the vSphere provider, CAPZ, and a couple of others) and basically started to search for what is used from the CAPI library.

H
I consider this a representative set, but of course this is up for discussion. And the idea is that, if everyone agrees on the principle, and on the methodology that we use for determining what we can move to internal, we then open a set of follow-up issues and PRs to start moving things to internal.
A
Thanks, Fabrizio. I'm going to raise my own hand here and say that sounds good to me.

A
First of all, plus one to doing this and moving stuff to internal that shouldn't be exposed in the library. In terms of the methodology of looking at what's already in use and what's not, I would say that's good to know, so we don't break providers; but I think we should also look at what should be used and what shouldn't be used, because some providers might be using something that they shouldn't be using, and we should encourage them to stop using it.

A
For example, I know CAPZ was using the drain code, and we wanted to remove that from CAPI; that doesn't mean we shouldn't remove it from CAPI. We just need to work with the provider to make sure that everyone's aware of the change and that we give significant notice, so that the provider can stop using that code. And then, also, there are other providers, not in that list, that might be impacted, so we just want to make sure we're not doing this just for those providers.

H
This is a nice one, yeah.
A
I'll paste that in the notes so we don't lose it; thanks, Jakob, for the suggestion. Any other comments or questions?

A
All right, if not, I'll let you move to the second topic.
H
This one is about making it easier for people to start their journey by working only on a subset of the project, and not the entire project, which is what we just discussed before. So I opened this issue in order to gather feedback around the subareas that are already defined in the code base, and possible new ones that we can create.

H
The point here is that this is really not for me, not for the current maintainers, but for the people that are willing to step up. So if you have opinions, if you are interested, please provide feedback; we are doing this to make it simpler for other people to join the party.

A
A terrific party! Does anyone have thoughts or questions or comments or feedback on this?

A
All right, if not, I encourage you to comment on the issue, and thanks, Fabrizio, for taking this initiative and writing up the issue. This is something we've been talking about for a while, so it'd be great to formalize it. All right, Killian, you have the next one.
J
So this is around ClusterClass and MachineHealthChecks. Right now, if you create a cluster with a ClusterClass, and you want to create MachineHealthChecks for that cluster, you have to go and create them manually after the cluster is created; so we're looking at an API for actually including them in a ClusterClass. If you click through onto PR 5125 there, there are two broad ideas on this. One is to define external templates for MachineHealthChecks and then reference them in the ClusterClass.

J
This is similar to what we do in the ClusterClass for the control plane template, or for machine deployment templates, bootstrap and infrastructure templates.

J
The other idea, which would be a little more batteries-included if we go that way, is to have built-in defaults for MachineHealthChecks and pretty much just enable or disable health checks, either on a cluster-wide basis, or on a control plane or machine deployment basis. So there are two very different approaches, and there are positives and negatives to each of them. Obviously, the templates will mean that people have to do extra configuration initially in order to set them up, but there will only be two templates.

J
Then, on the other side, the batteries-included version will somewhat restrict the range of configuration available. But yeah, I encourage people to take a look at this issue and at the two approaches, and if we can gather use cases and ideas around them, that would be great.
A
Thanks, Killian.

J
Yeah, so that's one of the ideas: refer to a MachineHealthCheck template, or MachineHealthCheckClass I think it's called here, in the actual ClusterClass. And the other option is to essentially expose an optional spec for a MachineHealthCheck inside the ClusterClass, but with defaults, so it wouldn't have to be filled out fully; you'd still get the health check just with something like "health check enabled: true". That's there.
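The two shapes described above can be sketched as Go types. These structs are illustrative stand-ins for the discussion, not the actual proposed API.

```go
package main

import "fmt"

// Option 1: reference an external template, mirroring how control plane and
// machine deployment templates are referenced from a ClusterClass today.
type MachineHealthCheckTemplateRef struct {
	Name string // name of a separately created template object
}

// Option 2: batteries included - an optional inline spec with defaults,
// where users mostly just flip Enabled on or off.
type MachineHealthCheckClass struct {
	Enabled      bool
	MaxUnhealthy string // optional override; defaulted when empty
}

// ClusterClassSketch holds both variants side by side for comparison; a
// real API would pick one of them.
type ClusterClassSketch struct {
	HealthCheckTemplate *MachineHealthCheckTemplateRef // option 1
	HealthCheck         *MachineHealthCheckClass       // option 2
}

func main() {
	cc := ClusterClassSketch{
		HealthCheck: &MachineHealthCheckClass{Enabled: true},
	}
	fmt.Println("inline health check enabled:", cc.HealthCheck.Enabled)
}
```

Option 1 keeps the full MachineHealthCheck surface configurable at the cost of an extra object; option 2 is one field for the common case but caps what can be tuned.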
K
Okay, so I guess one more general question to the group: how much do we want to continue growing the ClusterClass APIs with templates and references? Do we know at what point we should stop adding references, or is there something else?
E
Yeah, that's a really good question. I originally proposed, for any Cluster API object that we know of, to try to expose...

E
Sorry: to expose a good API surface within ClusterClass. The health check is a good example here, where it would be great to expose it. I don't know which parts you would expose fully, but especially the conditions: you could just expose those here as part of the health check.

E
This would provide a unified view of the whole ClusterClass without going to look at the other objects. You would have to copy them around, but, I guess, you know, ClusterClasses are for stamping out clusters at the end of the day, so it would be great to have that.
C
Yeah, I'm just kind of wondering. I think this came up during the original ClusterClass review: the notion of adding these kinds of nice cluster features that you could have. And I noticed that Alberto put a link to an issue that he created in there as well, which is like: okay, a user might want to create clusters and have MHC enabled by default, but you might also want to create clusters that have, say, the autoscaler...

C
...you know, enabled by default. And I'm kind of wondering: should we maybe step back and design a more extensible system for how you would add these kinds of third-party services that you might want in a ClusterClass? I don't know; it just seems like there'll be more of these things in the future.
A
Yeah, I guess: do we consider MachineHealthCheck to be third-party, since it's a Cluster API construct? Or, yeah, that's the question.

C
Yeah, maybe that's imprecise language on my part, but more like these add-on features that are not necessarily the core part of Cluster API, but give you extra functionality.
K
Yeah, and there's also the fact that if, for example, we don't use template references for some objects, users might find it odd, because some features are exposed through references as templates and some are just batteries-included, and in the topology you can go ahead and override stuff. So I agree that we need to strike a balance between having the API surface grow too much and keeping consistency.
H
Just two notes about how much we think this will grow. From my point of view, if I look at the core Cluster API features, what we are missing, apart from MachineHealthChecks, are...

H
...the things which are going out of tree, and they are kind of linked to the cluster lifecycle; but this is really just my own view. So, at least in my mind: MachineHealthChecks, failure domains, and, sometime in the future, MachinePools.
A
Sounds good; let's continue the discussion.

L
Thanks. I just wanted to quickly circle back on the Ignition bootstrap provider work that we've been pushing for some time. We're iterating over feedback, making sure to address everything, and I joined today to get a better idea and understanding of the future path for this. What would be a good version, a good point in time, to get this merged? What should we aim at, given everything's feature-gated?

L
Is it still possible, for instance, for it to be part of a 1.0.x patch release, or should we aim for 1.1? What's the story there? And then, when doing some cursory research in that regard, I found the list of items that have the 1.0 milestone assigned to be quite long, like 60 items open, 30 items closed; so I'd like to get a better understanding of the meaning and implications of that.
A
Yeah, thanks for bringing that up. I think we should consider changing that milestone to 1.x, given the conversation today, but let's, I guess, follow up on that. In terms of this PR, I'll let the others speak up as well, but in my personal opinion, this is the kind of PR that brings an extensive feature set...

A
I guess I don't know if silence means consent in this situation, but I don't think anyone is objecting. So, that being said, I don't think anything prevents us from merging it, if we agree that the main branch starts getting 1.1 features; but let's figure out that part first, in the PR here, and once we have consensus on this, I guess we can move forward with it.
L
Those should have been addressed; we're just basically getting back to the feedback that we received. We found one or two places that could ideally have unit tests as well, and we're working on one or two more tests and want to include those, so we're going to basically make it really polished, so it should merge perfectly. But I'll wait on any other folks' feedback regarding which release it can be merged into. Is there a timeline for 1.1?
E
The Cluster API contributor guide says that we can plan the minor release where this can land, I think, every quarter or something. Well, it actually says twice per year, which very much contradicts what we're saying today, so I'll update that text.
H
Yeah, I'm plus one to a regular cadence, at least a defined one, because it eases the pressure for people to get things into the current release, because they know when the next train will be. So, in my opinion, being clear, giving clarity on the release cadence, will solve a lot of the discussion about what we can do, because there is a natural destination for the next release.

A
Plus one. If we're not backporting features into patches, we might want to go even faster, like more frequently than every three months, so we can get features out faster; which is something to think about. Does that answer your questions? Absolutely, yeah, thank you. All right, great; last topic, Jakob.
M
Yes, I'll keep it short: I'm still looking for feedback on my IPAM proposal. I haven't been here for a few weeks, and there was also a lot of trouble around getting 1.0 out, but maybe people have more time now. The main question in there is whether... so, I'm not going to explain how it works again.

M
You can read it, and if anything is unclear, please ask me or just comment. But the question is whether IP claims should be created by CAPI itself or by the providers, and that basically comes down to whether it should get integrated into CAPI directly, or whether it's just an API contract that's officially CAPI, but all the logic resides in the providers.

M
So if there is, for example, an objection to having something like that in CAPI directly, then one of the two options that I've come up with is eliminated, only one remains, and the decision is made. So yeah, I just don't want to decide it on my own; also, I'm honestly torn between both options. So some feedback on that would be great, especially if you're working on providers, like infrastructure or on-site-based providers.
A
Yeah, thanks for bringing this up again, Jakob. I would suggest posting in the provider Slack channels that you think are particularly relevant, if you haven't already. Great, all right, awesome. I think, yeah, this is the end of the agenda. Thanks so much, everyone, for the super interesting topics, and for helping me get through all of this and being efficient; that was great. We'll see you all on Slack; make sure that you take a look at all these issues and PRs that need opinions, if there's something that caught your eye. And if not, I'll see you next week.