A: Welcome, everyone. Today is Wednesday the 28th of June 2023, and this is the Cluster API project meeting. Cluster API is a sub-project of Kubernetes SIGs and, as such, we follow their meeting guidelines, which basically means: treat everyone as you would expect to be treated, which is to say kindly, and please raise your hand if you'd like to talk and I will call on you. Usually at the beginning of our meetings...
A: All right, I am not seeing anyone unmuting, so we will move on to the regular agenda, starting with the open proposals readout. Do we have any open proposals that need to be checked in on here? I guess, Jack or Richard or Fabrizio, is there anything we need to say about the contract changes?
A: Okay, sounds like maybe no updates there. So, our first agenda item: Guillermo, you have an item about several releases. Please take it away.
C: Yeah, real quick, just an announcement that yesterday we released the new patch versions for 1.4 and 1.3 and the first beta for 1.5. I think Nawaz, you want to chime in to talk about the email announcements for these?
D: Yep, so we had sent out communication emails for the 1.4.4, 1.3.9 and 1.5.0-beta.0 releases, but I think they are blocked right now, so we need some help from the admins to unblock them and forward them to the community.
A: Okay, cool, sounds good on the new releases. Next topic: Fabrizio, you've got an FYI about the tentative 1.6 release calendar.
B: Okay, after this meeting, if you can reach out to me, we'll try to get this sorted out, because I can look at how our emails are configured and then maybe we can fix it. Thank you very much. Okay, a quick update from my side: this week we merged a PR with the tentative 1.6 release calendar. First of all, kudos to Joe, our release lead, who drafted this calendar, which, by the way, will be refined and made official by the next release lead in a couple of weeks. For now it is tentative; these are preliminary dates. There is only one thing that is worth noting: we added a week called the KubeCon idle week, which is week 15, because most folks will be out and it does not make sense to have any activity during that week.
A: Awesome, that looks really cool. So next topic: Guillermo, back to you for proposing the required area label.
C: Just to give a very quick background for everyone who's not aware of this: there has been some work going on from previous release teams to add better-designed area labels to each PR, and that's what we have been trying to use as the prefix of each entry in the release notes.
C: What we have been doing is going over all the PRs that don't have an area label, using our own criteria to add the label, and then generating the release notes to get everything pre-formatted without having to do it manually. So that's the background, and the proposal, or more than a proposal, is that I want to open the conversation and see what the community thinks about adding some kind of merge block through Prow, etc., to make sure that each PR has at least one area label before it gets merged.
C: Now, we've been talking about this in the release team. We are aware that this creates friction, right? We're adding one more step to the merging process, and this work needs to be done either by the author or the reviewer or both, but we believe that those are the folks who have the best context and the best information to decide what the area is. So we understand that there are pros and cons, but we want to open the conversation and see what the community thinks about it.
A: I guess I have a question here, and I can't raise my hand as the moderator, but if we're already at the point where something is going to be merged, meaning it has the appropriate labels on it, why would we be blocking it at that point for not having an area label? What does adding an area label at that point do? Is this just, you know, to help us in auditing or whatnot, or what is the functional usage there?
C: Yeah, right, I probably didn't explain that very well. When generating the release notes for each release, whether a patch release or a minor release, we use the area label to determine how to classify each of those PRs in the release notes.
C: If the area label is missing, someone in the release team needs to make the decision: okay, what is the area that we're going to use here? Not because the label is necessary for anything else, but because the area label is what determines how we classify each PR. So what we're proposing is, instead of trying to do that work at the end, when we're about to release things and trying to remember the history of what each PR was, etc.
C: We are proposing moving that work up front to when the PR is being reviewed or created, but before we actually merge it. That way, by the time we try to generate the release notes for a release, we already have all of that. Is that helpful? I'm not sure if...
A: Yeah, no, thank you, that's a great explanation. I think it makes total sense. So thank you. Jack, you've got your hand up.
F: Yeah, I'm plus one. It sounds like it's an extension of the existing requirements that we have, like the sparkles or seedling emoji, so it's just an evolution of that to provide more detail. I think that's actually great for users, because those sort of five canonical things, which sort of acted as the way of classifying things in release notes, arguably gave not enough detail, not enough classification. So this will be great for that. I assume that there are going to be more areas as a side effect of this, right?
C: Cool, yeah, so the areas are already defined. Previous release teams went through the process of determining what the areas were; the labels already exist, so they're already available and we have been using them. It's just a matter of now making it a requirement before merging.
G: Just real quick, I wanted to add to that: one thing we talked about as part of the release team was the possibility of multiple area labels in some extenuating circumstances, where a large PR might touch multiple areas of the code, and the tool handles that.
D: Yep, thank you. Just going back to the point about the labels which Jack was mentioning, there are already four or five kinds of labels, I mean prefixes, we have on the CAPI side: bug, seedling, documentation and so on. I think Guillermo was talking about area labels being added as labels, so those five might not change; they'll still be bug, seedling, documentation, etc., but the PR itself would be expected to be blocked on that area label.
D: So if a person doesn't define, hey, I'm making a change to clusterctl, then area/clusterctl would need to be added; that would be the area label. As far as I understand, and again, please feel free to correct me if I'm getting it wrong.
A: I guess the next step is: do we need anything more than just kind of agreeing on it in the meeting here, or should we have an issue on the cluster API repo where everyone could put a thumbs up or thumbs down, give it like a week or two, and then make the change after that? Guillermo, go ahead.
C: Yeah, I was going to propose, if there are no major concerns here, and that's why I was bringing it up, to just create an issue and leave it up for a week. I don't know, maybe we can just leave it up until next week, and then next week we revisit it and see whether there are major concerns or not, but I think it would be good to give an opportunity to folks who are not in this meeting to chime in.
C: It definitely creates friction, so I would rather give folks more time to think about it. So my vote would be, if there are no major concerns in this meeting, I will go create an issue, post it in Slack, and then maybe we revisit it next week and decide whether we can call it lazy consensus or whether we need more discussion.
B: Just a comment from my side: I'm not against taking a decision now, but we should consider whether we start doing this now or at the beginning of the next cycle, which is going to happen in three or four weeks. That way we don't impact the stabilization phase, but we start fresh with the next cycle.
F: Yeah, just a quick plus one on that, just thinking of folks who did some PRs earlier in this release cycle who aren't at this meeting, don't know about this, and then submit a PR tomorrow and get annoyed because, wait, I didn't have to do this a few weeks ago, what's going on? So I don't know, but it sounds like we have consensus-ish, so perhaps everybody in this meeting can start expecting to pre-apply these labels before PRs are merged in the few weeks before it becomes an actual hard and fast rule.
A: Yeah, I'm not seeing any strong objections, or even soft objections, so yeah, I think getting the issue up and gathering feedback from people who couldn't be here is great, and everyone who's here or watching this recording, please add the area to your PRs. Fabrizio, go ahead.
B: Sorry, this reminds me of another thing: we had a discussion a couple of weeks ago, following some feedback from test-infra, that for the implementation we should always prefer Prow over GitHub Actions, because test-infra is fully covered from a security point of view, and on GitHub Actions there was a recent incident about a security leak and stuff like that. So yeah, if possible, let's prefer this, let's consider this in the implementation, but these are implementation details.
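A rough, hypothetical sketch of how such a Prow-based merge block could be wired up (not an existing or agreed configuration from the meeting): it assumes Prow's require-matching-label plugin tagging PRs that lack an area/* label with a needs-area label, and a Tide query treating that label as merge-blocking.

```yaml
# Hypothetical sketch only.
# plugins.yaml: flag PRs that have no area/* label.
require_matching_label:
  - org: kubernetes-sigs
    repo: cluster-api
    prs: true
    regexp: "^area/"
    missing_label: needs-area
    missing_comment: |
      Please add an `area/*` label so this PR can be categorized in the
      release notes.

# config.yaml: Tide would then refuse to merge PRs still carrying needs-area.
tide:
  queries:
    - repos:
        - kubernetes-sigs/cluster-api
      missingLabels:
        - needs-area
```

With something along these lines, the author or reviewer adds the label once up front, and the release team never has to classify PRs after the fact.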
G: So, next week is July 4th, which is a holiday in the US, and it looks like a lot of the team is going to be off. So instead of trying to finagle something, I thought I'd bring it up to the community to see if we could just push it one day to July 5th. We can send out a notification and go that route, but it's...
A: 1.5.0-beta.x is the release, so...
B: Yeah, this is basically a quick retrospective on what happened following the release of kind 0.20. This is the issue with all the discussion and all the details, but the long story is that kind 0.20 changed how the kind node images are built and how you have to run them. And not only did they change it, but they also re-published some old Kubernetes node images, for all the Kubernetes versions, with the new requirements, and this basically broke our CI retroactively.
B: We fixed this by pinning the images to a given SHA; if you look at the kind release notes, they ask you to use a specific SHA for every kind version. So we first did this in our test configuration, and then there was another batch of PRs to make CAPD more resilient to this type of change and to always use the right SHA.
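As a minimal sketch of what "pinning a node image by digest" means, assuming a plain kind cluster configuration (the digest below is a placeholder, not a real value; it would come from the kind release notes for the matching kind/Kubernetes version):

```yaml
# Illustrative only: pin the node image to an exact digest instead of a floating tag.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.27.3@sha256:<digest-from-the-kind-release-notes>
```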
B: So that's the long story. If you still have some issues around running kind or CAPD, please reach out; those changes are in main and in the 1.4 branch, and they should be in the releases that we cut yesterday, so everything should be fine now.
B: Now I want to thank the CI team and all the maintainers that dedicated a lot of unplanned time to this issue and made it possible to cut yesterday's releases, even though one week ago our CI was basically totally screwed up. This was a huge amount of work; congrats to everyone again. And this reminds me that this is a good opportunity.
B: Those kinds of issues are a great way to help the community, because if someone steps in, the maintainers can continue to review other PRs, etc. Otherwise, maintainers are basically blocked on getting CI back to green.
B: It's a great opportunity to learn and a great way to contribute to the community. Even if we have a CI team, those kinds of issues are kind of extraordinary and we always need more help. So yeah, thank you everyone, and if you're looking for something to do to help CAPI, please consider that those kinds of activities are among the best ways to help everyone.
A: Okay, I'm not seeing anything, but if you run into problems running the test infrastructure, please reach out; it might be related to this. Next topic: Jonathan, a reminder to please take a look at the MachinePool machines PR. Take it away.
I: I just wanted to remind folks to take a look at the MachinePool machines implementation. This is just the CAPI core components, so there are two other PRs as well, but this is the main one we would want to move forward on, because the others kind of depend on this. I've already gotten some reviews from Stefan and Fabrizio, and we've made some changes that deviate a little bit from the proposal, but I think they're pretty reasonable.
I: So please take a look if you get the chance, and also, if you plan on merging any PRs that touch the machine pool stuff, please ping me on Slack or something, just so I can have a heads-up to rebase as well. Yeah, we can keep the discussion going.
A: So yeah, a good call to action for the community: please take a look at this if you have any interest at all in machine pools; it's cool stuff. Thanks, Jonathan. All right, Nawaz, release team planning, please take it away.
D: Is that the comprehensive list of providers? We still don't know if that is the full list, so I was wondering if we could start a chat in Slack, and if there are any providers that are missing, they could add their repo names there, so that we could open up a GitHub issue saying, hey, we have a beta release open and you could test it out. This would be more of a marker so that the providers don't miss out on testing a minor release.
A: All right, I'm not seeing any hands raised, so yeah, Nawaz, I guess follow up in Slack, or maybe just try to put out some announcements, maybe send a mailer out, just to see if we can catch some other providers who haven't communicated yet. All right, thanks. Jack, you've got a topic about GitOps UX and mutating admission webhooks.
F: We, and by we I largely mean some folks in CAPZ, are trying to tease out how to best integrate GitOps with Cluster API. There are some communities who've already done this, and one of the common themes in terms of UX friction is the fact that, in any kind of GitOps situation, the GitOps data source wants to be the source of truth, and it's now competing with Kubernetes, which also wants to be the source of truth.
F: So one particular thing that happens is when your GitOps source of truth creates a resource based on its idea of what the manifest describing that resource looks like, and then the API server accepts it and then maybe makes some mutations. The most common would be a default: perhaps a property that is non-required from an API front-end point of view but has a sensible default that just gets applied every time the resource is created.
F: So now you have a delta between the GitOps source of truth and the Kubernetes source of truth, and this issue was trying to tackle one particular side effect, which is that re-applies could collide with that default property assignment having been made, combined with another webhook that validates immutability.
F: So, having said all that background, what we're going to do in the coming months is run some experiments in CAPZ, and so I'd like to keep this issue open, but we're not going to advocate for any changes in CAPI right away until we get a better idea of how the sort of broader GitOps and Kubernetes community is dealing with this.
F: It's what I might call common UX friction. So I just wanted to call that out, and if anyone here is interested, or is working on GitOps and is successfully solving for this, or, well, I'm not sure how you can easily solve for it, but reach out. I'm going to be on paternity leave for most of the summer, so see everybody in September, but maybe reach out to David and other folks in CAPZ. Go ahead, Stefan.
J: Basically, we also had a problem where different controllers were fighting over objects, and it basically worked out well once the various controllers were owning different fields and were not trying to take ownership of, or have different opinions on, the same field. Basically, one controller was writing one subset of the fields.
J: Another controller was writing another subset of the fields. I'm not sure if you can get to the point where GitOps just applies its YAML and it won't be changed in any way at all, because there are, in general, a bunch of controllers which are also writing back into their own spec. So it's not only mutating webhooks.
J: It is also other controllers, maybe the CAPZ controller: if you do some bring-your-own-network stuff, they write parts of the spec once they have done something in the cloud, or the control plane endpoint, which is written as part of the spec. Then there's stuff like OpenAPI defaulting, which also happens but isn't in any of our webhooks.
F: Right, and some of that stuff, like the OpenAPI defaulting, is a black box that you can't really inject into. So one approach would be to define an annotation that all these various mutating actors can consume and say: oh, actually, I'm not supposed to mutate, so I'm going to reject instead, or something like that. That way you're giving a signal back to the GitOps source of truth that this is insufficient data to traverse all of our mutations without mutating it.
J: But what I think is where it's already going wrong is if the GitOps controller is trying to re-apply the YAML when some other controller is adding a field, because it should not care at all about additional fields. I mean, all the server-side apply work in Kubernetes was done so that multiple different actors are able to control one object together.
J: So just imagine you create a Cluster, or a Cluster with ClusterClass, or a MachineDeployment where you have an autoscaler acting on the replicas field. You have to make it work so that the GitOps controller can deploy a MachineDeployment without setting the replicas field and then the autoscaler can set the replicas field, because otherwise those use cases just won't work anymore, and that's the reason the server-side apply field management works the way it does. So I don't know about the specific use case, and I think doing this research is perfectly fine.
J: Let's see what the results are. Just in general, I would expect that there's also a way where the GitOps controller can deal with some fields being written by other controllers and then just doesn't care, because that controller itself doesn't have an opinion on those fields. That's how server-side apply usually solves that problem.
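A minimal sketch of the field-ownership pattern described above, assuming a GitOps tool applies manifests with server-side apply under its own field manager while the cluster autoscaler manages spec.replicas; the names, version, and autoscaler annotations here are illustrative assumptions, not details from the meeting:

```yaml
# Sketch only: a MachineDeployment as a GitOps tool might apply it.
# spec.replicas is deliberately omitted so the autoscaler's field manager can
# own that field; a GitOps re-apply then has no opinion on it and won't revert it.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: md-0
  annotations:
    # illustrative autoscaler bounds; the autoscaler, not GitOps, manages replicas
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
spec:
  clusterName: my-cluster
  # replicas: intentionally not set here
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: my-cluster
      version: v1.27.3
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: md-0
```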
F: And another point, and then I'll pass over to David: it's obviously going to have to be a conversation between GitOps and Kubernetes. It can't be that we want to solve this just by modifying Kubernetes behaviors; it's going to have to be a series of well-defined handshakes, so that these types of out-of-band changes outside of GitOps, which need to happen, can be done in a way where GitOps can observe them and consume them to sort of temporarily defer source of truth. The autoscaler is a good example. Go ahead, I think David was next and then Fabrizio.
K: That may be a different thing that potentially people don't care about, but the thing would be: if, let's say, my management cluster goes down, or if my actual Kubernetes cluster itself goes down, then what's in the source of truth in Git for my cluster definition could be recreated from scratch, and I would have some level of confidence that that's going to work well, right? And so I think, if you have no idea... because I think right now most people just think...
A: Okay, I also saw a comment in chat there from Mike about using the cluster API operator as well. I'm not sure if that was tongue-in-cheek or if that was another option to kind of consider here as well.
B: No apologies necessary. So, let's forget for a second about Cluster API and think about Deployments: you can control them with GitOps, they have a full OpenAPI spec, and they can be contributed to by other controllers. So this is not a problem specific to Cluster API; it is a generic Kubernetes problem, and I'm pretty sure that there are already solutions for this.
B: Second, on what David was talking about, wanting to make sure that GitOps is the source of truth and that I can restore the state of the cluster: I think we are forgetting an important thing that, in this case, is somewhat specific to Cluster API, which is the cluster and the control plane endpoint.
B: So I don't know if there are ways for GitOps tools to get something back from the cluster, or whether we have to start thinking about how to split this kind of information, like the control plane endpoint or, I don't know, the machine provider ID and stuff like that. Maybe we have to think of a better way to store them in separate CRDs, but then GitOps has to be coupled to those CRDs that reflect the status of the cluster.
B: So there are these two angles that are really important: one, we are trying to solve a problem where objects are co-authored by different controllers, which is a common problem in Kubernetes, and two, Cluster API has some specific behavior where it uses the CRs, and the CR spec in most cases, to store some info that makes the link between Cluster API and the underlying infra, and we need to preserve that.
F: I mean, really, the purpose of my coming here was just to say nothing's going to happen in CAPI anytime soon. Fabrizio, your response suggests we need like a GitOps dependabot, so that every time you submit something from your GitOps repo, Kubernetes submits a bunch of PRs back to the GitOps repo, so your source of truth gets updated.
B: Honestly, I don't know, I'm not an expert here. I use GitOps only for experiments and never in production, so I cannot give real-life feedback. It seems to me that it is a deep discussion, and at least I need to learn a lot, so I'm looking forward to it.
J: Yeah, I just want to add one point to the things that you have to consider here. I had a discussion about this last week with someone from upstream k/k, basically about what they did for some new design, but that doesn't really matter here. The point was basically how this is thought about today, which is that, if you have something that has to survive backup and restore, it's only spec.
J: That was the reason why we made decisions, like in CAPA and maybe also CAPZ, that in the bring-your-own case we write back certain data into the spec, like, I don't know, some IDs from AWS and stuff, so that's actually preserved. But it seems like there was an upstream KEP, like two or three years ago, which basically states that status is also something that can be preserved, so it's not the case anymore
J: that status can be lost at any point in time and that it has to be possible to rebuild the entire status based on spec; status can also be used to store stuff. Basically, it could be that you have a spec, you reconcile something, then you store important stuff in status, and you can rely on that stuff in status in further reconciles. Which basically means, for the GitOps story and for the backup and recovery story in general,
J: we also probably have to be able to preserve status and not only spec, and status is usually never in GitOps, as far as I can tell. As far as I know, today in Cluster API everyone just writes persistent stuff into spec, so it's probably not an issue yet, but the upstream guidance is that status can definitely also be used for things like that.
A: Okay, so it sounds like probably there's some more discussion that needs to happen around this, and maybe a little more investigation as well, so I would encourage folks to look at the issue there and leave their comments if they have suggestions or anything.
A: So, if there are no other questions about that, that brings us to the end of the regular agenda, and we'll go into the provider updates now. Mike, you've got an update about the Cluster API Operator. Please take it away.
H: It's an operator, but yeah, today we made a new release of the Cluster API Operator, version 0.4, and thanks to everyone who took part. In general, I wanted to make this release as stable as possible, so it mainly consists of bug fixes; however, there are some interesting features: for example, Alex added the possibility to share a credential secret between several providers, and we also updated the documentation, because previously it was just a copy of the original proposal.
H: Now it's regular documentation. The next release will be dedicated to features; mainly we plan to improve our Helm chart to support cert-manager installation and configuration and also deploying Cluster API providers. Here we are working with Alex in parallel: I'm working on cert-manager and configuration of the operator, and Alex is working on the provider stuff. Then we're going to add clusterctl move support, which is required; we have an issue requesting that. And we are also collecting ideas for a new API version because of the new features.
A: Very cool, thanks for the update, Mike, sounds awesome. Okay, so that brings us to the end of our agenda, and there's nothing more from the feature groups, so I guess thanks to everyone for attending. Unless there are any last-minute issues, we'll call it here.