Description
Meeting notes https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit
A: All right, welcome everyone. Today is Wednesday the 9th of November 2022, and this is the Cluster API project meeting. Cluster API is a subproject of a Kubernetes SIG, and as such we are following their community guidelines, which essentially means: please raise your hand if you'd like to speak, and please treat each other as you would expect to be treated, which, to be explicit, is: please be kind to each other. So we will start off the meeting, and I would encourage everyone to add your name to the attendee list.
A: We start every meeting with a chance for new attendees to the community to introduce themselves or say hey. So if you are new here and would like to introduce yourself to the community, please feel free to unmute your mic or raise your hand, and we'll take a couple of minutes right now. A couple of seconds.
B: Hey, I'm Joseph Volts. I work with a dev crew at Microsoft, and we're just trying to get our bearings around the community activities with Cluster API specifically, as we start to look at using the CAPZ provider for some of the work we're doing. So we're trying to get the full picture for CAPI, and then the smaller CAPZ community picture, and understand how everything works.
A: All right: three, two, one. All right, I'm not seeing any other hands going up, so we'll move on to the open proposal readout. Is there anyone here who would like to speak? It looks like we only have label sync as the open proposal. Would anyone like to talk about that?
A: All right, I'm not seeing any hands going up, so we will keep moving. The discussion topics for today: it looks like, Yuvaraj, you're up first with the Cluster API release team handle.
C: Hi everyone. Yeah, so the Cluster API release team handles are now available in Slack and in GitHub.
C: So we have a GitHub team in the kubernetes-sigs org. It's not available in the kubernetes org, which I think should be fine for us, and we also have a cluster-api-release-team Slack handle. So for the 1.4 release cycle, which starts in a month: for anyone who wants assistance from the release team, or if the release team needs to be in the loop, you can just use the handles to tag them on GitHub issues, PRs, or on Slack threads and so on.
A: Very cool. So if we have any issues, we can just add the cluster-api-release-team handle on GitHub, and we should get in touch with the right people; that's what I'm hearing. Yeah, all right, very cool. Thanks, Yuvaraj. Stefan, you're up next with some release-related notes here.
D: Yep, mostly PSAs. So, first one: we released 1.2.5 on Tuesday, which has essentially, I think, mostly a few bug fixes, and a change so that our published manifests are now using the new registry. So when you run clusterctl init and the Cluster API controller images are pulled, they're now pulled from the new registry.
D: The images have been published to both registries for, I don't know, a few months now. And the bug fix in clusterctl was this: when you were running clusterctl move on a cluster and all related resources, clusterctl took ownership of all fields after the move (managedFields, server-side apply, all that kind of stuff). With that bug fix, we now make sure that the managedFields before and after the move are identical, because the old behavior can have some unexpected effects later on
D: when you try to modify the cluster, probably most visibly if you use server-side apply, which is not enabled by default in kubectl, but...
D: If you see some strange effects on clusters that you previously moved, for example if you're trying to drop fields and they are not dropped, or if you're trying to change fields and you get a conflict, think about that bug fix; it shouldn't happen from now on.
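For illustration, here is a minimal Go sketch of how one might check which field managers own a Cluster object after a clusterctl move; the namespace and cluster name are placeholders, and this is a debugging aid rather than part of clusterctl itself:

    // inspect_managed_fields.go: sketch for checking field ownership on a moved
    // Cluster. With the fix, the managers listed before and after the move should
    // match, instead of everything being owned by clusterctl.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/apimachinery/pkg/runtime"
        clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    func main() {
        s := runtime.NewScheme()
        _ = clusterv1.AddToScheme(s)

        c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: s})
        if err != nil {
            panic(err)
        }

        var cluster clusterv1.Cluster
        key := client.ObjectKey{Namespace: "default", Name: "my-cluster"} // placeholders
        if err := c.Get(context.Background(), key, &cluster); err != nil {
            panic(err)
        }

        // Each managedFields entry records which manager owns which fields.
        for _, mf := range cluster.GetManagedFields() {
            fmt.Printf("manager=%s operation=%s apiVersion=%s\n", mf.Manager, mf.Operation, mf.APIVersion)
        }
    }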
D: And then the beta: there are no real release notes, which is something that I added as an improvement task; we should think about maybe adding something there, even if it's just expandable. But yeah, it's just a new beta release; to be honest, I don't know of a noteworthy difference to call out at this point. But if you're already trying to adopt the new version, bump to the new release and give us feedback. It's always appreciated.
D: Yep, and then the final one: for the new release team, I finally opened the PR which tries to document the responsibilities of the different roles in the release team, and all the tasks that we usually do during a release.
D: Yeah, definitely interesting for, I guess, the wider community, but especially for the release team, because we're trying to document what these teams are usually doing during a release cycle. Yep, okay, I'm done.
A: All right, I'm not seeing any hands go up. So, Sagar, if you want to talk about the TLS flags, I'd say go ahead.
E: A quick summary on this particular thing: we just merged the PR in the main branch to expose two new flags around setting the TLS minimum version and cipher suites for the webhook server. The point of this PR is: do we want to add these changes to the 1.3 version as well, and how do we want to document those?
E: So, at least from what I've looked at from some providers' perspective, using this change directly in Cluster API does not mean that there is any change that needs to be made directly in the providers, if and when they move to the CAPI 1.3 version. So we just wanted to call this out to the adopters, and ask whether this is something that we should include in the migration doc as well, since the PR also produces pieces that can be reused by the providers if they want to implement similar changes.
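For context, flags like these typically translate into a tls.Config override for the webhook server. A minimal sketch, assuming the k8s.io/component-base/cli/flag helpers (the actual flag names and wiring in the merged PR may differ):

    // tls_options.go: sketch of mapping human-readable TLS flag values (for
    // example "VersionTLS12" and Go cipher suite names) onto a *tls.Config.
    package tlsutil

    import (
        "crypto/tls"
        "fmt"

        cliflag "k8s.io/component-base/cli/flag"
    )

    // TLSOptionOverride returns a function that applies the parsed minimum TLS
    // version and cipher suites to the webhook server's TLS config.
    func TLSOptionOverride(minVersion string, cipherSuites []string) (func(*tls.Config), error) {
        version, err := cliflag.TLSVersion(minVersion)
        if err != nil {
            return nil, fmt.Errorf("invalid TLS min version: %w", err)
        }
        suites, err := cliflag.TLSCipherSuites(cipherSuites)
        if err != nil {
            return nil, fmt.Errorf("invalid TLS cipher suites: %w", err)
        }
        return func(cfg *tls.Config) {
            cfg.MinVersion = version
            cfg.CipherSuites = suites
        }, nil
    }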
D: I can provide slightly more context. So essentially the first PR was changing things in core Cluster API, and then I brought up the point: hey, should we document this for the providers? And essentially we have our policy now, which says: hey, if you want to merge something after the first beta which would affect providers' adoption of the new Cluster API release, we should bring it up in office hours and just make sure that there are no objections from providers against adding it.
D: So we went ahead with the change in core Cluster API, because that definitely has zero impact on any providers, and the thing that has impact is essentially that we're documenting it: hey, other providers, please also adopt this change. And yeah, the question, essentially: does anyone have a problem with us adding that note to the 1.2 to 1.3 migration guide? If yes, we would defer it to another release. Yeah, that's the question.
A: Okay, cool. And I see Fabrizio has his hand raised. Go ahead, Fabrizio.
F: Yeah, just a small note on top of this: Cluster API is not prescriptive about which flags each provider must expose. It is just a recommendation that we give in order to, let me say, be all consistent, so there is nothing prescriptive. So personally I don't see a problem in adding this in 1.3.0, but of course the last word is on providers.
A: Cool. Yeah, I guess for anyone who's here today or watching this: please go back to this PR if you have a strong objection to what's going on here, and add your comments there. Do we want to remove the hold in a couple of days if nobody's added anything here, or do we even need to say that, I guess?
D: Yeah, I think maybe something like Friday or so is okay. I would really like to get it in before the release candidate next week. Oh sorry, go ahead.
E: I'd say I can raise this on Slack, since this is time-sensitive and we want to have visibility on it. Maybe I can just start it on Slack and see if anybody has any inputs or any objections to this.
A: Yeah, I think putting it on Slack is probably a good idea as well. I just want to make sure that, if we're trying to give some time for people to look at this and maybe raise their objections if they have any, you know, we just put a date on it, so we know: okay, if nobody has any objections by this time, we'll just move forward with it.
A: All right, thanks Sagar, Stefan and Fabrizio. Stefan, you've got the next topic about dependency bumps here.
D: Yep, and I think that one is as much about the concrete changes that we want to make as it is about refining the policy. So, as I mentioned before, we have the policy where we say: hey, all the changes that affect provider adoption should be brought up and approved. And the change in question here is essentially bumping dependencies. So, background:
D: all those bumps that are listed here, Ginkgo, Cobra and Viper, require zero code changes. So essentially we bumped the dependencies; we didn't have to change a single line of code in core Cluster API, and I assume it's the same for providers. All of them were minor release bumps. So I'm asking for two things.
D: One thing: do we have a problem with those specific bumps? And the other thing: do you think that those are the kind of changes we have to bring up while we are in beta? Because I think it should be fairly straightforward that you can bump dependencies in core CAPI until the first RC, if the consequence is that nobody really has to change code for that. Yeah.
A: I mean, speaking personally, I think it's nice to have these updates here, so that I know, for the providers that I'm working on, that maybe it's time to go back, review those, and bump them in my provider. And it's awesome to hear that there were no code changes.
A: I guess, does anyone else have questions or comments about this? Like, should we be continuing to advertise when we do these kinds of bumps? Is it helpful to other providers? Anyone have a question or comment or anything?
D: An opinion: yeah, in general we try to do the bumps as early as possible; we just didn't this time. For the next release cycle we will do it before the beta, and I have an explicit task for that, but there was just too much stuff going on for this release cycle. So I would go ahead with those bumps, and I'll open a PR to add a little bit more detail on changes that impact provider adoption, and then we can discuss on that PR with the framing that I would like to see.
A: All right, cool. And you've got the next topic too, Stefan, so go ahead.
D: Yep, so the next one. I think I brought this up, like, I don't remember, a few weeks back. Essentially we made a contract change, and the contract change was: hey, all the CRDs that are referenced in core Cluster API resources, and thus essentially consumed by core CAPI controllers, have to follow a certain naming scheme, essentially what kubebuilder is producing. And we now have our own util function which, sorry, which generates the CRD name in exactly the right way again.
D: It's just a warning. It was also verified that it works for all providers which are registered in clusterctl, and I talked to the folks from Metal3: they have one CRD which is a false positive, and they agreed to essentially make the change to the CRD which is needed to not get this warning. So there's a way to opt out of this warning.
D: So if you have CRDs which are essentially just your own CRDs, not used by core CAPI controllers, and the name of those CRDs is "wrong", then you can add an annotation, and the annotation tells clusterctl: hey, please skip that warning; I know that this CRD name is "wrong", but it doesn't really matter. And for CAPM3 we added this annotation to that one Metal3 CRD. So, as far as I know, once that annotation is there, we don't have any provider which would get this warning.
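For reference, a sketch of the naming scheme and the opt-out annotation being described; treat the exact annotation key here as an assumption and check the Cluster API provider contract documentation for the authoritative name:

    // crd_contract.go: sketch of the CRD naming contract discussed above.
    package contract

    import "strings"

    // Assumed annotation key for opting a CRD out of clusterctl's name check;
    // verify against the provider contract docs before relying on it.
    const SkipCRDNameValidationAnnotation = "clusterctl.cluster.x-k8s.io/skip-crd-name-validation"

    // ExpectedCRDName mirrors the kubebuilder convention "<plural>.<group>",
    // e.g. ExpectedCRDName("machinepools", "cluster.x-k8s.io") returns
    // "machinepools.cluster.x-k8s.io".
    func ExpectedCRDName(plural, group string) string {
        return strings.ToLower(plural) + "." + group
    }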
D: I mean, I don't know about providers which are not registered in clusterctl, of course, but the registered ones are fine. Yeah, tl;dr: if providers had CRDs with "wrong" names, they would get a warning, and it shouldn't affect any of the registered providers. I'm bringing this up because technically you could say that this affects providers' adoption. So again, the same question: does anyone have any objections against merging this? The contract change was like a month or two ago.
A: Do we want to say... so I see Fabrizio saying in chat: plus one to merge. Do we want to say that maybe on Friday we'll remove the hold on this one as well? Yeah.
A: All right, cool. Thank you for the explanation there, and hopefully people will go take a look if they don't want to see that warning, but it sounds good to me. So, all right, the next topic is Jack Francis with something about machine pool annotations for external autoscalers.
G: Yes, hey Mike, thanks very much. I'll paste the link in here. This will be quick, so I'll spare everyone the backstory on all of the PR thread thrashing, but thank you so much to a lot of folks, especially Stefan and Cecile, and a bunch of other folks; as you can see, there's been a lot of discussion. This is mostly ready to go. We've been aiming to include this prior to the 1.3 release.
G: The reason I brought this up in office hours is that there is one open question about... sorry, I haven't even described what this is. This is a PR that introduces a new Cluster API-aware annotation for machine pools, to indicate that the replica count of those machine pools is under the enforcement of an autoscaler. There is one sort of non-invasive change to Cluster API itself, which is that when this annotation is present (it would be applied by the provider, that's the idea here), we would make a subtle change to the way that Cluster API reports phase status during observed scaling operations. Because the annotation indicates that Cluster API is no longer responsible for doing the scaling, we change the status so that it's not a sort of predictive status. Right now, as you can imagine, because Cluster API thinks of itself as the sole vector of enforcement for scaling, if it notices fewer replicas than it's actually seeing in the wild, it can predict and say "I'm scaling up" or "I'm scaling down". So we changed that status message subtly to just say something like "scaling": we know that some scaling is occurring, because there's a delta, but Cluster API doesn't attempt to predict it.
G: The background here, the open question, has to do with whether the annotation looks for a specific value or whether it is a sort of, I'd say, boolean-ish value. Which is to say, there's some discussion in the thread about whether we want to treat the annotation like a bool: if the annotation exists, that means true, unless someone explicitly sets it to false, which means false. There's some interesting conversation around why that might be a good or a bad idea, but I really just want to call out that that's the only open item. So if you have an opinion on that, please feel free to participate in the thread.
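To make the two options concrete, here is a sketch using the names from the discussion (the key and value are assumptions based on the conversation; the merged PR is authoritative for the final contract):

    // replicas_managed_by.go: the two annotation semantics being debated.
    package machinepool

    const (
        ReplicasManagedByAnnotation = "cluster.x-k8s.io/replicas-managed-by" // assumed key
        ExternalAutoscalerValue     = "external-autoscaler"                  // assumed value
    )

    // Option 1, boolean-ish: presence means true unless explicitly "false".
    func BooleanishExternallyManaged(annotations map[string]string) bool {
        v, ok := annotations[ReplicasManagedByAnnotation]
        return ok && v != "false"
    }

    // Option 2, value-significant: only a well-known value activates the
    // behavior, leaving room for other well-known values later.
    func ValueSignificantExternallyManaged(annotations map[string]string) bool {
        return annotations[ReplicasManagedByAnnotation] == ExternalAutoscalerValue
    }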
G: As-is right now, we're essentially declaring an annotation with a well-known key and a well-known value, but there is some competing dissent from folks who would potentially prefer just a well-known key. Go ahead, Stefan.
D: Yeah, and I think one important thing to mention is that essentially this is a contract addition to the MachinePool. So it would be really good to have other infra providers looking at this and saying: okay, that's fine. I mean, I don't have any stakes in all of that, I'm fine, but it would be good; I don't know who implemented machine pools, but just to get some feedback from that side as well.
A: Yeah, so that's a really good callout for any providers who might be watching this on the recording or are here now: if you even have a thought that you might someday implement machine pools, it's probably worthwhile to take a look at this and understand the nuance that's going on here. Even if you're not thinking about doing it now, it's probably worthwhile just to be familiar with it all.
A: Cool, thanks Jack. Does anybody have questions or comments about this?
I: Just one. I haven't looked at the latest on this, but where are we at with, you know, the dissent on it being autoscaler-specific, I guess, or just something else?
G: I think we've converged on this being an autoscaler-specific thing. The question is: do we leave open some flexibility for that in the future? Assuming that the value is significant would give us some future flexibility to add a slightly different, related, well-known annotation plus value to this. So, Vince, you suggested that we sort of carry over a well-known pattern, basically getting rid of the autoscaler stuff from the key.
G: So the open question is: do we move that into the value, and then that becomes the contract, with "external autoscaler" being an example of a value string that is the contract? Or do we take that "replicas managed by" and just turn it into a boolean, accept any value as true, and then sort of implicitly have the autoscaler be the reason why we use this? So, you know, I am in favor of having the value be significant, and having the value carry the autoscaler semantics in it.
G: I mean, I meant something like: let's just say that the value changes from "external autoscaler" to whatever other incarnation of it. I forget what the behavior of that would be, because the replicas are managed by something else. Would the code change if it was a different thing?
G: I think, in practice, the particular change in terms of doing non-predictive scaling, so just saying something like "I know there's a scaling event, but I don't understand it because I'm externally managed", would probably carry over to additional use cases like the ones you described. But the additional use cases you describe might be interesting in terms of other parts of the code, and so they may engage different, subtle changes.
G: I could get behind that, as, I mean, yeah, usually we don't check values. I also would like to call out that if we end up with a lot of different values in here just for the sake of it, we probably should block them, as in: this should be generic enough that any external autoscaler could go into that bucket.
G: Actually, that's not true; that would be on the provider side. But we could add that to the documentation: you know, if you implement this webhook, make sure to only allow this value; otherwise, with unknown values, CAPI is not going to do anything, and it's not going to guarantee any particular outcome.
I: Yeah, but I was going to say that, on the flip side, having this annotation only respect one value makes the value insignificant, because there's only one value at the end of the day. So I can also see Cecile's point: why are we even enforcing that value, when it could be more significant at that point?
I: A bool? No, actually it should not be; "replicas managed by", or just "managed by" something else. I would never say this should be "managed by Cluster API", because that should be the default, as in: this annotation does not exist. But the "managed by" for the external infrastructure that was already pushed through accepts any value, and the point of that is, for example, that you could use it for debugging purposes, or you could have different systems that you don't recognize.
I: Gosh, I'm trying to figure out which one the AWS autoscaler is... I forget. So there is Cluster Autoscaler, which Mike knows a lot about, and then Karpenter. Yes, thank you. So something that could be significant here is like: hey, I want that to be Karpenter instead; it's just something a bit more specific, because I might have multiple autoscalers that I'm trying out. So I could see basically just saying: hey, I allow anything, and we assume that, hey...
G: That all makes 100% sense. I'm coming to this as a novice, and I read that annotation as having an absence of significance, but there's clearly a pattern whereby, when you say "managed by", the context is strict: that is a well-established pattern of essentially declaring "I'm not managing this; someone else is."
I: Yep, and I think that also clears up a bunch of other problems, like: you might have "hey, I want to have a different behavior for another of those kinds of autoscalers", which is territory we probably don't want Cluster API to go into.
G: Yeah, so basically it allows provider flexibility, which is really where the flexibility should be focused, whereas CAPI can simply, generically say: I don't know about all these things, these are provider-specific, but I do know that I am not going to predict which direction this is scaling anymore, because the provider is not leaning on me to do the scaling enforcement.
A: Yeah, for what it's worth, I think what Vince is saying makes a lot of sense to me as well: if we already have a pattern established for the way we're using annotations in a similar manner with some of the other externally managed things, we shouldn't break that pattern and create, you know, unexpected surprises here. Ultimately it's an API, and it's up to the implementation to then decide what to do with it. So yeah, good.
A: Totally, totally. Yeah, anyways, any other questions or comments on this that people would like to get in here?
A: All right, cool. Thank you, Jack, interesting topic, and, you know, people, please go take a look at the PR. Killian, you're up next with cluster network CIDR blocks, and then updating the API versions for owner references.
H: So this one I brought up before KubeCon; I sent out a mail on the mailing list just to get feedback for it, to see if anybody had a problem with it. The essential issue is that today Cluster API doesn't do any validation on the CIDR blocks that are defined in the cluster network. So we've got two CIDR block lists, pods and services.
H: Today, that's just an array of strings; you can put things that aren't IPs in there. And when you do that, the assumption for those fields in Cluster API is that they'll be passed to something like kubeadm. Today kubeadm has strict validation on those fields and will fail, and then the Kubernetes components, depending on which of those they need, will take in those fields as flags, and they also do their own validation on them.
H: So we're trying to build this validation into Cluster API, but this is a breaking change. The validation is: a maximum of two strings in this array; each of the strings is a valid CIDR; and if there are two strings, then it should be dual-stack, so you can't have two IPv4 CIDR blocks or two IPv6 CIDR blocks; you need to have one IPv4 and one IPv6 if there are two. If there's a single one, it just needs to be a valid one or the other.
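A minimal sketch of those rules using Go's net/netip, as an illustration rather than the actual webhook code in the PR:

    // validate_cidrs.go: sketch of the cluster network CIDR validation described
    // above: at most two blocks, each a valid CIDR, dual-stack if there are two.
    package validation

    import (
        "fmt"
        "net/netip"
    )

    func ValidateCIDRBlocks(blocks []string) error {
        if len(blocks) == 0 || len(blocks) > 2 {
            return fmt.Errorf("expected 1 or 2 CIDR blocks, got %d", len(blocks))
        }
        prefixes := make([]netip.Prefix, 0, 2)
        for _, b := range blocks {
            p, err := netip.ParsePrefix(b)
            if err != nil {
                return fmt.Errorf("%q is not a valid CIDR: %w", b, err)
            }
            prefixes = append(prefixes, p)
        }
        // Two blocks must be dual-stack: one IPv4 and one IPv6.
        if len(prefixes) == 2 && prefixes[0].Addr().Is4() == prefixes[1].Addr().Is4() {
            return fmt.Errorf("two CIDR blocks must be one IPv4 and one IPv6")
        }
        return nil
    }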
H: So this PR has evolved a little bit, but it is a breaking change, so I think we put a hold on it, and I just want to give people an extra couple of days to have a look at it. There is one issue that came up during implementation that's also in this PR, and that's the second issue that I linked in the notes.
H: So today Cluster API defines a ClusterIPFamily, which advertises whether the cluster is running IPv4, IPv6 or dual-stack. There's a calculation behind it; this wasn't part of the validation, it's a calculation that was added when we introduced IPv6 for testing. It was actually initially used in the Docker provider to set up the load balancer, which requires this kind of strict knowledge of the cluster's IP family. But this is not a concept that actually exists in Kubernetes itself, and the calculation here is whether the pod and service IP families are compatible. So, my list here... I think if you scroll down, Stefan has a better, complete table. This is how the calculation is done, and a number of the combinations are invalid; mixing IPv6 and IPv4 between pods and services is invalid, and there's a couple of other cases there. The current cluster network validation that was done in the previous PR does include this concept for now, but I think we would strongly consider getting rid of it if people thought that this isn't a concept that actually makes sense. I want to get feedback from providers and from users on whether this sort of thing is an appropriate definition for Kubernetes. Today, the only place we really use this inside of Cluster API is that we expose it as a variable when we're using topology-managed clusters.
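As an illustration of the compatibility calculation being discussed (simplified; the table in the meeting notes is the complete reference):

    // ip_family.go: sketch of deriving a cluster-level IP family from the pod
    // and service CIDR families; mixed single-stack combinations are invalid.
    package ipfamily

    import "net/netip"

    type IPFamily string

    const (
        IPv4Family      IPFamily = "IPv4"
        IPv6Family      IPFamily = "IPv6"
        DualStackFamily IPFamily = "DualStack"
        InvalidFamily   IPFamily = "Invalid"
    )

    // FamilyOf classifies a list of CIDR blocks.
    func FamilyOf(blocks []string) IPFamily {
        var has4, has6 bool
        for _, b := range blocks {
            p, err := netip.ParsePrefix(b)
            if err != nil {
                return InvalidFamily
            }
            if p.Addr().Is4() {
                has4 = true
            } else {
                has6 = true
            }
        }
        switch {
        case has4 && has6:
            return DualStackFamily
        case has6:
            return IPv6Family
        case has4:
            return IPv4Family
        default:
            return InvalidFamily
        }
    }

    // ClusterIPFamily combines the pod and service families.
    func ClusterIPFamily(pods, services IPFamily) IPFamily {
        switch {
        case pods == InvalidFamily || services == InvalidFamily:
            return InvalidFamily
        case pods == services:
            return pods
        case pods == DualStackFamily || services == DualStackFamily:
            // Simplified; the real table distinguishes more of these cases.
            return DualStackFamily
        default:
            // Mixed single-stack, e.g. IPv4 pods with IPv6 services: invalid.
            return InvalidFamily
        }
    }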
A: Okay, that is complicated, but I think it makes sense. Looking at PR 7420 here, given the hold on it: do you want to put a time on when you'd like to get comments by, or do you just want to leave this open until everyone's had a chance to weigh in on it?
H: That check was already applied on topology-managed clusters, but not on non-topology-managed clusters. So if everybody's happy with it, if nobody has objections, I think we could merge this in a couple of days, if there are no additional comments, and then continue the other question. Adding the strict validation is really good, I think, because clusters will fail without it and it's just an unvalidated field. And it brings up this other discussion, which is probably more long-running, and yeah,
H: it might take a while to figure out whether a cluster should have an IP family when it's managed by Cluster API, as opposed to Kubernetes itself.
A: Okay, cool. So it's fair to say that if there are no other objections by Friday, then we can merge this after that.
A: All right, cool. I guess: does anyone have questions or comments about the CIDR block validation topic, or the cluster IP families?
H: Yeah, can you open up the next issue, the owner references one? So this is something we're planning to do in Cluster API, hopefully in the next week, so it should probably make it into the beta. It shouldn't be a breaking change or have any impact on providers, but it might be useful to know about.
H: The issue right now is that when you update Cluster API, and then update clusters running on Cluster API, from, let's say, v1alpha3 the whole way to v1beta1, all of your API objects are updated, but their owner references aren't. So your MachineDeployments are still owned by a v1alpha3 Cluster, which is absolutely fine today; everything works as expected. And the main thing impacted by this, or one of the major things impacted by this, is garbage collection.
H: When we delete objects, everything that is owned by them, if it doesn't have any other owner reference, should in turn be deleted by Kubernetes garbage collection. Once we stop serving older versions of the API... so v1alpha3 has been out of support for more than six months, closer to nine months; eventually it'll be removed, and I don't think there are any concrete plans to remove it, but once we stop serving v1alpha3, the owner reference becomes unresolvable by Kubernetes, and background deletion won't work.
H: The plan is to do an audit of where we're setting owner references in Cluster API, and bump all of the owner references: essentially, on every reconcile, make sure that the owner reference is correct in terms of its API version, bump it when it's outdated, and patch the object inline. Like I said, this shouldn't have any impact on providers or consumers of Cluster API, but it's something that providers might want to look into themselves, because this is probably an essential step in getting rid of older API versions, where that's wanted.
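As an illustration of the kind of helper such an audit might introduce (a sketch; the work-in-progress PR may structure this differently):

    // bump_owner_refs.go: sketch of rewriting stale ownerReference apiVersions
    // on reconcile so they track the currently served API version.
    package ownerrefs

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // BumpOwnerRefAPIVersions takes a map of owner kinds to their current
    // apiVersion, e.g. {"Cluster": "cluster.x-k8s.io/v1beta1"}, and rewrites
    // matching references. A real implementation should also compare the API
    // group, since unrelated CRDs can share a kind name; the caller then
    // patches the object with the returned slice.
    func BumpOwnerRefAPIVersions(refs []metav1.OwnerReference, current map[string]string) []metav1.OwnerReference {
        for i := range refs {
            if want, ok := current[refs[i].Kind]; ok && refs[i].APIVersion != want {
                refs[i].APIVersion = want // e.g. v1alpha3 -> v1beta1
            }
        }
        return refs
    }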
H: If you don't do this, then at the least garbage collection breaks, which means all of the older clusters, the ones that have been upgraded through multiple API versions, won't be able to be deleted properly or easily. So yeah, I think it's worth taking a look at this issue. I have one work-in-progress PR to do this for just the Machine, MachineSet and MachineDeployment, but we're hoping to roll this out across CAPI before the next release.
A: All right, cool. Thanks, Killian. Any questions about the owner reference updates?
A: I see that Stefan is also saying in chat here that it's also probably good performance-wise, if everyone who uses the owner reference doesn't have to go through the conversion.
A: All right, I'm not seeing any hands go up, so we'll bounce back to Jack here for something about managed Kubernetes and Cluster API.
G: But I'm super grateful for this very thorough description of a very complicated scenario. To summarize:
G: this is an active conversation around how we might solve for managed Kubernetes in Cluster API itself, and I really just wanted to ask what would be the best way to form a working group that is well-documented in how we meet, and as inclusive and inviting as possible for any folks who might want to join. Would it make sense to produce an issue or a proposal PR whose objective is to gather consensus around, you know, a Zoom channel, when we meet, all that kind of thing?
G: Because a lot of this is going to require a lot of, I think, real-time discussion, at least in the coming months, because there isn't consensus yet on this issue, and it's super complicated. It's really hard; I think it exceeds the ability of this particular medium to solve the consensus problem.
A: This topic seemed to be... you know, people were really interested in it at the KubeCon contributor summit, so I feel like maybe if we can just advertise it enough, we could probably build a working group. And I don't know: does anyone else have comments about, like, you know, just establishing a working group, or whether we should have a process around that within the Cluster API community?
A: On some of our working group processes: I can only suggest, you know, sending out to the mailing list and talking in Slack; but then, additionally, maybe if we could come up with a document about how we form a working group to do this, that might be a useful addition to our community documentation as well.
G: Folks who are not, you know, the type who are hanging out in Slack all the time might not know these conversations are going on, right? And it also just might reduce friction: you run into someone, "oh, you guys are discussing that; how can I get involved?", "well, here's a link; it has all the information there." Yeah.
A: Yeah, no, I think that would be good; that goes back to making an issue or something. I just want to call out a couple of chat comments here. So Winnie is saying that for the first managed Kubernetes proposal it was just gathering people, but a working group sounds like a great idea. And then Stefan had a comment that using the office hours Zoom session, and adding meeting notes in the same doc as the office hours meeting notes, were helpful in the past. So I think
A: the notion here is that setting up a Zoom session interleaving with the weekly meeting, with the other meetings in between, might be helpful as a way to centralize where that's happening. I think one thing I would call out, in terms of trying to set up meetings and stuff for this, is that, you know, we have a lot of people on the other side of the planet from us,
A: you know, kind of North America-centric folks, and so it might be difficult to find a meeting time that works for everybody. So if there was some way to either be a little bit more asynchronous, or be able to bounce meetings between different times... or, I guess, it'll depend on who wants to be involved. But yeah, maybe making an issue is the first step here.
A: Yeah, I'm not seeing any hands going up, so I would say: if you want to make an issue where you at least collect the information about what's going on, or maybe use this one as, like, the centralizing issue, you know, that might be a good way to organize it, at least to start with.
A: Okay, yeah, not seeing any hands or comments coming in, so I think that brings us to the end of the regular session. Jack, do you want to take the mic one more time to tell us about what's going on with CAPZ?
G: Two super quick items. We plan to release 1.6 tomorrow; that's part of our regular monthly minor release cadence. And then the second one (thank you for clicking on this link) is just a callout that we're actively in the process of integrating 1.3 into CAPZ. The objective is to integrate it before the release, so that we have more comprehensive signal to report back before we cut CAPI 1.3.
A: Awesome, sounds great. So that brings us to the end of our meeting agenda, with just about eight minutes left here. Does anybody have any other ad hoc topics or things they want to bring up, or should we take some time back in our day here?
A: Five, four, three, two, one. All right, thanks everybody for coming out, and yeah, we'll see you in Slack and see you here next week. Thanks, Mike.