A
Good morning, everyone. It's Wednesday, November 16th, Pacific Coast time in the United States, and we are having our weekly Cluster API office hours. Cluster API is part of the CNCF; it's part of SIG Cluster Lifecycle. We respect each other and we're kind to each other. Please adhere to the code of conduct, which really just means those two things. At the beginning of the meetings we pause for a bit to allow folks to say hi, whether they're new or just want to say hi for any reason, so I'll do that now, so feel free.
A
My bad, all good. Okay, great. Hopefully folks can see my Safari browser window with today's agenda.
A
All right, great. Okay, the first thing we're going to do is enable folks working on proposals to make any updates. I think there are open proposals — are there any?
C
Yeah, I would like to talk briefly about the label and annotation propagation proposal. This is something that I have; it is a PR that I created just before KubeCon, so probably most folks have just skimmed it amid all the noise of KubeCon. A little bit of context: we recently discussed and merged a proposal that basically defines how we propagate labels from a Machine to a Node.
C
Okay, this second proposal, which is marked as needs-review, is the complementary part of the story. This proposal basically covers how we propagate labels from ClusterClass down to Clusters and MachineDeployments, to the control plane, and down to Machines. So it basically covers the whole story of the label, from the higher-level abstraction down to the Machine, since the last step, Machine to Node, is already covered. It is a fairly interesting piece of discussion because, yeah, there is some research behind the proposal.
C
If you want to open it, there is also a nice schema that kind of represents the entire flow of labels. If you scroll down, there are some graphs. Yep.
C
That's nice; somewhere in the middle of the proposal there are some graphs. The first graph basically pictures the current situation, and the second graph documents the change that we would like to make. One important thing that this proposal discusses is that currently, whenever we propagate a change to a label or annotation from a higher-level object down through intermediate objects and down to the Machine...
C
...it happens that we basically trigger a rollout, which is kind of useless, because a label is a Kubernetes construct; it doesn't really reflect on the machine. And so this proposal not only includes the idea of completing this picture of label propagation, but it also promotes the idea of in-place changes of labels and annotations and a couple of other fields like nodeDrainTimeout, etc., which do not impact the machine itself but impact the Kubernetes object or controller behaviors.
C
The end of the story will be that we will have smarter behavior: in-place changes where possible, with machine rotations happening only when they are really necessary. So it is interesting, and I think it is a good improvement for the project. Please take a look and provide feedback.
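A minimal sketch of the idea under discussion, with hypothetical field names rather than the proposal's actual API: metadata-only drift (labels, annotations, nodeDrainTimeout) is reconciled in place, and only machine-affecting changes trigger a rollout.

```go
// Illustrative only: the field names and the split between in-place and
// rollout-triggering fields are assumptions, not the proposal's API.
package inplace

// machineMeta stands in for the values a reconciler would compare.
type machineMeta struct {
	Labels           map[string]string
	Annotations      map[string]string
	NodeDrainTimeout string // behavior-only: safe to update in place
	InfraSpecHash    string // changing this really replaces the machine
}

// needsRollout reports whether a new machine is required. Labels,
// annotations, and drain timeouts are written back in place instead.
func needsRollout(desired, current machineMeta) bool {
	return desired.InfraSpecHash != current.InfraSpecHash
}
```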
A
Thanks for that. Is it fair to say that this may establish a durable pattern of in-place propagation for other future solutions? So if you have an interest in that in general, this would probably be the time to voice it, just to sort of sanity check this concrete proposal, because if you do a similar in-place propagation story later, there's going to be a higher standard for doing it differently; you're probably going to want to do it the same way we do it here.
C
That can happen during these in-place changes, because what we want to achieve is that Cluster API is going to set and own some of the labels, but there could be other controllers that manage a different set of labels, and they will have to do this in parallel without conflicts. So we are relying on the API server machinery, let me say, to do so; we are not reinventing the wheel, because it does not make sense. But yeah, if you are interested in this kind of problem...
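A rough sketch of the API server machinery being referred to here: server-side apply with a dedicated field manager, so that different controllers can own different label sets on the same Machine without conflicting. The helper and the "capi-topology" field owner name are illustrative assumptions, not Cluster API's actual implementation.

```go
// Sketch only: helper name and field owner are illustrative.
package labelsync

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applyOwnedLabels server-side-applies only the labels this manager owns.
// The API server records ownership per field manager, so another controller
// applying a different set of labels does not conflict with this one.
func applyOwnedLabels(ctx context.Context, c client.Client, namespace, name string, labels map[string]string) error {
	u := &unstructured.Unstructured{}
	u.SetAPIVersion("cluster.x-k8s.io/v1beta1")
	u.SetKind("Machine")
	u.SetNamespace(namespace)
	u.SetName(name)
	u.SetLabels(labels) // only the labels this field manager intends to own

	return c.Patch(ctx, u, client.Apply,
		client.FieldOwner("capi-topology"),
		client.ForceOwnership,
	)
}
```

A second manager applying a disjoint set of labels under its own field owner would be merged by the API server rather than overwritten, which is the "in parallel without conflicts" behavior described above.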
A
If you have a proposal that's not here, please, if you have a moment, add it here, just so we can remind the community during every one of these meetings to maybe voice an update.
A
Okay, so on to discussion topics. We've got Stefan and Jonathan, on adding Jonathan to the clusterctl reviewers. It looks like this is posing a question to the community that this is on track. Are we all okay with the lazy consensus expiring Friday? Is that essentially what this is saying?
E
Yeah, we don't have a lazy consensus deadline set yet, so that's more like a question: do we want to set one until Friday? I'm absolutely in favor. And we had a bunch of +1s on the PR, which I essentially forgot to mention in office hours for one or two weeks. Yeah.
A
Okay, thanks again, and congrats Jonathan. Sounds good to me. Now is the time to voice any concern — like imagine you're at the wedding and they're just about to exchange vows.
E
Yep, I brought it up last week, essentially that I want to modify the backport policy a bit during the beta, and that's the PR for it. It's really just looking for input on whether there are any objections, etc.; otherwise I would set lazy consensus. So it's basically what we discussed last week, but yeah, I just wanted to say, take a look. The idea is that we can do non-breaking dependency bumps which don't require provider changes during beta; that's the main thing. Yeah, as I said.
E
If there are no objections now, I would also set lazy consensus for Friday, and feel free to comment on the PR; otherwise, if you don't do it now, okay.
E
It's more like giving us a bit more room to bump dependencies late in a release cycle if we have to; that's one topic. Of course, I don't want to freeze the dependencies like a month before the release and then there's a CVE and I can't remediate it. But it's more about the fact that we can actually do those things without having to bring them up in office hours. I mean, we can always bring up stuff in office hours, even in RC, but it's more like normalizing that during beta.
C
Okay, so before today I've opened a PR to basically change our support policy, the part that specifies up to which version we can backport or cherry-pick changes. The reason is that the current policy basically defines that a minor release goes end of life when a new one is released. Let's make an example: we are on v1.2; on the first of December we are going to release v1.3, and this will basically immediately set v1.2 to end of life.
C
We would end up running on an unsupported or end-of-life version, and it will take some time for the providers to pick up the new version, which means that we are basically putting all the customers, or all the Cluster API users, on an end-of-life version for some period. This seems kind of too aggressive, and so this proposal is saying: okay, do not put v1.2 end of life when v1.3 goes out, but instead wait for v1.4.
C
So we give everyone four months' time to catch up with the new release.
C
And yeah, the only caveat is that we support N and N minus one only for the current API version, which is v1beta1 now, in order to avoid what happened in the past, where we had a huge matrix of supported versions. So this is a compromise between the old model and the current model; it is just a refinement in this case.
D
I'm plus one on this. I think this is kind of already what we're doing in practice, so it's good to get it documented, since effectively, even after a new minor is out, we've still been releasing patches of the previous release version. That's very similar to what we're doing in CAPZ right now, so we're essentially supporting two. Although I guess it's a little different, because in CAPZ we're trying to support two different previous releases, but that's also because we have releases every two months; so it's essentially the same thing, giving people four months to migrate or upgrade. The one thing I'll say that we've had issues with, and that might be worth thinking about here, is...
D
Good, all right, I was just gonna say: yes. So the one thing I would be careful with is whenever we do big changes that involve lots of file changes — for example linting changes like linter improvements, anything that changes kind of like the rules for CI, or refactors, things like that. That makes it really difficult to cherry-pick as we diverge from the branch, and so we really want to be, you know, not supporting something too long; otherwise, the longer we support it, the harder it becomes to cherry-pick. So that's the only thing I'll say.
E
Just one small addition: we're currently not really doing this. What we're currently doing, just as an example: when we released 1.2.0, we did one other release for 1.1, I don't know, 1.1.6 or something, but after that it was out of support. So we had maybe half a month or maybe a month of overlap, but otherwise the previous release was immediately out of support. But everything else that you said, I absolutely agree with, and I think it also kind of matters now that we're calling Cluster API production ready.
E
So we should also kind of support our releases a bit and not get half of our ecosystem out of support when we do a .0 release. I think that's kind of what we have to do, and I think it should be fine.
E
I mean, we have to think about some things, like the topic of not making too-big changes as long as we support old stuff, but I think we have some experience. I mean, there were times, I think, where we essentially supported 1.1, 1.0, 0.3, and 0.4 at the same time; we stopped doing all of that when 0.3 and 0.4 went out of support. But we have some experience of supporting like two, three, or four releases at a time, so I think it should work out.
A
Great. I can't find the raised-hand feature, but I would ask a question about the definition of support, which seems to be sort of in this section right here. Do we want to include periodic tests as a part of that support? I think that is actually what's happening in reality, right? Then go ahead, Stefan.
E
Yeah, it's happening in reality. When I was looking at the PR I was also thinking we could potentially expand the definition of support a bit if we want to, but I think, if you take it literally, the ability to backport and release patch versions for me implies that we have CI coverage, because I don't want to cherry-pick something and then release it without any kind of coverage. But it probably makes sense to make it more explicit and also mention it.
B
Just wanted to share that we're planning a quick meeting for intros for anyone who's interested in working on the cluster add-on provider for Helm. I just set it up to be at 2 p.m. Eastern, right after this meeting. It'll just be a quick chat so we can put some faces to names and give an update about how the repo is coming along. Also, it's my first time trying to schedule a Zoom meeting, so you'll...
C
Yeah, just if you're saying someone needs a meeting related to Cluster API, we can actually use this same link; the only thing is that you have to not use it when we are doing this meeting.
E
One important thing: you have to record to your computer. If you record to the cloud, it's somewhere in the cloud but you don't have access, because it's, I don't know, the Kubernetes account or something. Otherwise it works.
B
But yeah, that should be all for me.
A
There have been really great comments, thanks everybody. So this is really just more of a PSA: if you have a stake in managed Kubernetes and CAPI, we are in the process of sort of self-organizing in a more formal way. So please feel free to hop in here and let us know how you'd like to be part of that.
A
There was a good remark from Killian about the sort of reserved language of "working group," so we probably will — I think I agree with him, and I'll advocate that we use a slightly different name to describe ourselves, but I don't think that's super important. I would expect that we'll have a sort of kickoff Zoom discussion between now and next week's CAPI office hours. So stay tuned.
A
Okay, I don't see any hands up on that topic. Stefan, back to you.
E
Yep, just another mention of the release tasks. That PR essentially documents the tasks that the release team will do, or at least the initial version of it: how to do a release and what you have to do during the release cycle. So we had a bunch of discussions on this PR.
E
A lot of them are still open; it would be good if whoever's interested would please come in. Let's try to get consensus. If we don't get consensus about specific topics, I'm totally fine with just putting a big TBD into the document.
E
I think it would be really good to get this document merged, because I think we have consensus on, I guess, 80 or 90 percent of the content, and it's really valuable for the release team. So I would aim for probably the end of next week to wrap up the discussions until then and also to actually merge it.
E
Yep. That's a lot.
A
Oh, this looks great; there's a ton of stuff here. So I would imagine most of this is just documenting what we've already been doing, but it's just hugely helpful to have that on paper.
A
A short topic item here before we get to provider updates. It just has to do with this sort of long-standing MachinePool annotation PR. There's an open remark from Stefan in response to myself wondering if we want to land this before 1.3.0, so this would probably require somewhat of an exception to the strict rules about what goes in after the RC is cut. So if you have...
A
Please feel free to comment. If no one sort of positively affirms it, I assume it'll just get bumped to 1.4, which isn't the end of the world. So just a call-out for that.
G
Thanks, Zach. Yeah, I just wanted to quickly mention, everybody — I'm not sure if anyone else is sort of following along much, but we did release version 1.0, and then a quick subsequent update, of CAPX, which we're actually leveraging with our first EKS-A GA that's about to happen here in December. So I definitely wanted to mention that here and thank the community for obviously everything in the background to kind of get this off the ground and that kind of thing, so we're pretty excited.
A
All right, cool. Rich, you want to talk about CAPA real quick?
F
Yeah, just a couple of things. The 2.0 release: all the items are now being merged and any questions have been answered, so we are planning to get that released either this evening or first thing in the morning. One thing to note: with this we are stopping the requirement to use the same API kind for the infrastructure cluster and the control plane for EKS. So we've gone back to the original way that we did it, and the way that Azure currently does it.
F
So that is good; this will enable ClusterClass for EKS in the future, something like that. The second point that has come out as part of the 2.0 release is that our release cycle is very ad hoc at the moment, and this has become very apparent with 2.0; it's felt a bit messy, rushed, and unplanned. So we are discussing whether we actually need a CAPA release cycle.
F
It doesn't have to be the same as CAPI, but we're wondering whether we need some form of release cycle, a bit more structure. So I'd love to get people's input on that, especially if CAPA is core to what you do, whether from a product or just a usage point of view; there's a discussion on GitHub for that.
C
I just want to call out that when we discussed the main Cluster API release cycle, it was evident, and we documented, that Cluster API having a release cycle makes sense if providers adopt a new Cluster API release in a reasonable time frame. And so, in my opinion, it would be ideal if there is a kind of synchronization between Cluster API and providers, and we as a group kind of end up in a common and repeatable rhythm of shipping stuff, but I understand this is complex.
C
There are many teams around this table, and I just think it is important to bring up that the end user can consume CAPI only when both Cluster API and the providers are ready for the new release, and this is, I think, the main goal that all together we have to figure out how to achieve in a sustainable way.
D
Thanks. I think, Fabrizio, you bring up a really interesting point here, because currently — and correct me if I'm wrong — a user can use the latest version of Cluster API with the existing version of a provider as soon as Cluster API is released; there's nothing that the provider can do to say, okay, we've tested the newest version of CAPI, we approve it.
D
But there's not really any consumption that happens, and so, I don't know, maybe that's something we want to think about doing, because if we want to be able to synchronize providers with Cluster API versions, then that seems pretty important to me.
E
I think an interesting aspect is that the contract kind of covers some aspect of that — that you are to some degree compatible — but not like, okay, I can trust it blindly, that as long as the contract is correct it will just work; even on top of the contract you kind of need test coverage for the combinations.
A
Can you help me find the provider contracts that come with a release? So that's a factor as well.
E
We have, in the Cluster API book, a bunch of pages, I think even slightly distributed, where it's described how, I don't know, a control plane works and the corresponding contract, how an infrastructure provider works and the contracts for the cluster and machine, etc.; it's a bit distributed across the book.
E
That's not actually the contract; that's more like us writing down what we're doing in core Cluster API in the sense of which dependencies we're bumping. So half of it is probably a kind of recommendation to do the same things, even though you don't have to, and then we document some things if we change our Go APIs. So, I don't know, if you use a util from core Cluster API and we change it, then we give you a hint that it changed. That stuff is usually never breaking, so most of that stuff...
E
So in most of our bumps, in most of our minor releases, we don't have that situation. Yeah, I don't really know the answer, but it's a strange situation. What could be done already, maybe as a first step, is that we have some kind of documentation where we're saying, okay, this version of Cluster API is compatible with, or has been tested with, those versions of providers. So we could have a page in the core Cluster API book where providers can essentially self-identify: okay, I'm supporting Cluster API 1.2, I don't know.
E
Maybe some information is already better than nothing, because right now, as a user, you probably have a really hard time figuring out, okay, which version of Cluster API you can actually use, which version is compatible with CAPA, which versions are compatible with CAPZ, what combinations of providers can I actually deploy in the same management cluster?
A
For me, I think that all providers should — maybe this is something we can bake into the release team — but I think all providers should publish a known-working matrix like this. Maybe we need to add some language that suggests this isn't contractual, and, as was mentioned, you know, the API versions are durable going forward, and so any version of a provider should be able to work with any version of CAPI.
D
Yeah, I guess, I mean, you just said it: this is really for the API version, though. It doesn't really say anything about whether it's been tested with individual minor versions. In theory, because we're working with API versions here, we shouldn't have breaking changes, which makes it so that a user can use any version of CAPI with any version of the provider, as long as they're following the same API version contract.
D
But we don't know that until we test it. So that's the issue, right? We need tests that ensure that's true.
A
Imagine as a provider you're already using this annotation for other purposes, and so this is a collision that you want to avoid somehow. If you don't address that prior to exposing your users to a version of the provider and a version of CAPI that are colliding on their annotations for different behavioral things, then weird stuff can happen, and that's the kind of thing that testing can get you through. Go ahead, Stefan.
E
We can put it in a bunch of release notes, but would we actually want to wait with changes like this for a contract bump? I'm not sure, so yeah, I don't really know. I mean, I don't know who remembers or read that issue when the contract was introduced initially, like two or three years ago; there was a decision made to have one monolithic contract — like, hey, there is one single Cluster API contract thing — and that's now tied to v1beta1.
E
At this point, maybe it's worth, going forward, if we run into these situations over time, thinking about it in a bit more detail and seeing if there is some kind of, I don't know, contract per resource or per feature. I don't want to go too far, because that's...
E
Now we're in the situation where we essentially have to — we will just continue breaking the contract for smaller things, because it's just way too hard to actually bump the contract, and I think, looking at the MachinePool annotation thing or the ClusterClass thing, it seems like we're kind of okay with that, but a contract kind of signals that stuff just doesn't break.
C
Yeah, I think that, no matter how we shape the contract, there will always be a difference between version combinations which are tested and those which are not, and this is, I think, what is interesting.
C
These are things that we can somehow encode, and so ideally we should basically add something in clusterctl that will allow, for instance, clusterctl by default to only install combinations that we test, or that are marked as tested, and give a flag to the user to allow opting in to combinations which are not tested. This will provide a level of safety to our users and cover the corner cases. So I will try to open an issue and capture the discussion; it has a couple of nuances, but yeah.
A
Great, we'll see some folks, looks like in 19 minutes, on the same channel to talk about the Helm add-on provider. Bye, folks.