From YouTube: Kubernetes SIG API Machinery 20230518
Description
- The Implicit Kubernetes-ETCD Contract
- Kubernetes control-plane upgrades
- Proposal KEP-4008: CRD Validation Ratcheting
A
So hello, good morning, good afternoon, good evening, depending on where you are. I hope everybody is doing fine and healthy. Today is May 17th of 2023. This is the SIG API Machinery bi-weekly meeting in open source Kubernetes. We have a number of items on the agenda today, and we are going to go in order; the first one belongs to Han and Marek, so why don't we start with that?
C
Sure. So we've actually discussed this before, but there is an implicit contract between Kubernetes and etcd. Marek and I have drafted a doc on the explicit expectations that Kubernetes has of etcd, and we want to codify this. We don't want to stop here at just a written documentation of what the contracts are between the two binaries. We want actual tests on a shared interface, so that we can concretely be confident that we haven't broken any underlying assumptions that we might have of the underlying storage layer.
C
I think I had some code that I had shared with Steve at one point, and I think he was supposed to take that and do stuff with it, but I don't know what happened to that. So I think Marek has been sort of following up on those threads.
D
Yeah, so following up on what Han said, can I share my screen? Is that okay?
D
Okay, thanks. Let me first introduce myself. Can you see my screen and hear me? Yes? I'm Marek, I work on etcd. You may know me from emails like this one; I don't know if this is readable. But yeah, I work on etcd, and I spend a lot of time finding, or announcing and finding, all the latest data inconsistencies in etcd.
D
So yeah, we had a lot of things to learn as a community, and one of those things is learning from our mistakes and making inconsistencies disappear. One thing that I wrote is a new framework for testing etcd, which required us to re-evaluate all the API guarantees and watch APIs. We test it by having model tests, which basically means we implemented an in-memory fake of etcd.
D
That fake behaves like etcd: it has a similar (not full, but similar) API to etcd, it's in-process, and we use it to compare the behavior of etcd versus the intention. So, as you can see, we have a couple of old history cases that we reproduced by replaying the traffic, and we compare etcd versus the model, and we can get a nice graph like this slide.
D
This one, which — I don't know if it's visible, but yeah — shows an inconsistency. So this brings me back to what Han said. If we have a contract, if we have updated API guarantees, and if the etcd community focuses on — or, I am proposing for the etcd community to focus on or prioritize — implementing an etcd model of Kubernetes traffic, as defined here, as a milestone, we can get a full etcd contract.
D
So I took the work that Han did — and if that's not super great, well — Han basically defined an interface and has a publicly available start on his storage interface thingy. I took it and, if GitHub shows me, I took the model that we have in etcd, copied it into the Kubernetes code, implemented the same interface, and have it running. It required disabling a lot of things in the testing.
D
I have a full set of tests under the API server storage package. Of course, there are a couple of them that are removed — I didn't have time to fix them, because this is a POC — but most of them run and pass. And that's all from the demo that I wanted to show. If there are any comments or questions, I'm happy to share and discuss.
E
Yeah, I have a question. So, that demo was neat. It appeared to show "this is what Kubernetes uses, and this makes sure Kubernetes doesn't grow anything else", but I don't think I saw where it says "this is where we make sure etcd doesn't break what Kube uses", which is the half I was expecting to see.
D
So I'm doing it on the issue — I don't know if... I think Fede opened it. On the issue I'm describing that there is a list of milestones, and I slowly, in PRs, implement all the features that Kubernetes depends on, or add the traffic types and contract elements one by one, because it's a pretty complicated contract, and grow our coverage with those tests. So we currently cover some large parts.
E
I guess it's not clear from the doc: is the goal to actually be sure that etcd doesn't break the contract, or is the goal to say "this is the contract from Kube and we're pinning it", or that any addition needs to be paired with something? Like, I can see a lot of purpose behind "we want to make sure etcd doesn't break what Kube uses". I see less purpose behind something that says "we want to make sure Kube doesn't use anything else".
C
We are definitely not saying that — yeah, we're definitely not saying that. We just want to ensure that etcd doesn't break K8s's expected behaviors.
D
Yeah, so the main benefit is — there was a request, like I think Han mentioned in our previous discussion, when there was an ask about having an interface for Kubernetes testing etcd, or testing against generic storage. This is one example of an implementation of this interface that we are working on, and it's an implementation that should make it easy to cross-validate both projects.
G
The model is accurate, so you take it over to Kubernetes, you swap it in, and if everything works, then the model is probably accurate, at least as much as Kubernetes cares about. That would then allow you to go back to etcd and use that model as a way of ensuring that etcd doesn't violate K8s's guarantees. And then also, if you expand that model in some other way, you can go back to Kubernetes and make sure you didn't break the model by checking it against Kubernetes. So that's kind of the thought.
D
Yes, plus this is a robustness test, so the model is just a small part, because the main benefit is validating that etcd behaves like the model under any circumstances. So any injection of, I don't know, storage failure, any network failure — and we are growing the list of those failures.
G
I mean, this seems like a really good way to test etcd. I especially like the model-based approach because — at least for those that haven't seen it, Marek had a really good talk at KubeCon about this — it can do a bunch of more subtle things around checking various interleavings of operations that might not be expected but could happen in the real world.
C
Yeah, there would be a wrapper — yes, there would be a wrapper. And we haven't completely worked out where we would store this interface, but ideally it would be some place that both etcd and Kubernetes could pull from. That way we could ensure that we are testing the correct intersection.
E
...the k/k repo and everyone depended on that, and then he was like, "hey, I want to use this new method" — it would put whoever owns the repo in a position to say no, and there is some concern over that. Where would you imagine this living?
C
And that, basically, should mitigate your concerns about not owning the interface, right? Because it is actually a—
E
To some degree, yeah.
E
The idea of having the interface defined makes a lot of sense. Being able to use that to test the behavior of etcd, to be sure that it doesn't break the things that Kube relies on, is also very good. I think I saw, in 3.5, the test framework — hearing Marek talking about it, it was awesome. It was the one that let me inject failure at this point, right? Yeah.
H
Okay, sure, yeah. This is really interesting and great. I was wondering how you test interleavings: how do you control time to get the specified interleaving?
D
So this is going more into my talk from KubeCon. We don't depend on time; we validate linearizability, which means we execute something and we have some execution history from the tests. During that test there were some, I don't know, injections, but we have a history, and we can use this history to validate...
D
...whether the history is consistent with the model. We don't depend on... I mean, we depend on the linearizability checker to find an order of operations. If we have an order — the linearizability checker will say this works if it finds at least one path of the model that is correct, and it will reject if there is no possible path that the model could take throughout all the possible execution orders.
H
Yeah, could you please drop those pointers into the notes? I'll try to ask just one brief question in summary: are you saying that what this does is it takes whatever happened — it doesn't really take care to control what happened — and says, "was this a legitimate thing to do?"
H
Right. My question was a little different, which is: do you have the ability to make things happen in a challenging way?
H
You know, I gather no, right? So I'm going to suggest, or ask — one of the problems here, right, is that there's all this concurrent stuff, and depending on the order in which things happen, which is beyond your control, the code will react differently. It would be good to be able to make tests that actually, you know, reflect different impositions from the underlying reality.
D
Yes, you're talking about a higher level, or how to use this framework. We are trying to use the framework — etcd already has this framework, we are just trying to grow it. I don't know if it can cover all the edge cases like you described, but it should. I'm open — this is open, I can take any example. I mean, we can create an issue and grow a list of ideas.
I
Yeah, hello. I'm also working on etcd; I'm a little bit familiar with this work on the contract, but more familiar with the testing. I just want to give another way to summarize it. So I think the gold standard in this type of testing is the Jepsen test, if you're familiar with jepsen.io, and I think this is actually taking it a step further for etcd specifically. The reasons are: first, there's the gofail injection library, like Marek said, that allows us to inject failures in a more granular way than, let's say, Jepsen, because Jepsen is more like a black box.
I
Here we actually can do more. And then this framework basically allows you to mimic traffic, which is also great, and to write tests around that. And then, on top of it, we do linearizability validation through a library that is open source and pretty good. And then another thing Marek added was the watch validation. Because we're already injecting all the failures, right, we can do more in terms of validation — it's not just linearizability.
I
We can check other conditions, and make sure that etcd-specific primitives work as expected. So I just want to point out that the contract, I think — just how I see it — is a summary of what happens in Kubernetes, so people can write tests and understand all the scenarios better. So, hopefully this was helpful. Thank you.
A
Very good. If you can lower the hand... Anybody else? So, what's next here? I know, David, you need some time to think about this. Maybe, you know, we bring it up again at the next meeting as a topic? Sure.
E
The implications of changing our storage interface are significant, and we had previously made a pretty explicit choice to base it on etcd. Going back to the old doc about why we made that choice, thinking about it, deciding whether it's right to change that now, and the likely repercussions of that, are all worth doing.
A
Could you help me find the link to that old doc and link it here in the notes for reference?
G
Yeah, and to that point, I think it would be really important that whatever testing happens isn't, like, Kubernetes-development blocking, right? So if somehow Kubernetes changes in a way that works with the current etcd implementation but the model isn't equipped for it, I wouldn't expect the Kubernetes developer to have their tests fail and be told:
G
No,
you
can't
do
this
because
the
model
the
model
doesn't
match
what
you
do,
even
though
de
facto
TD
does
like
we
would
probably
I
would
I
would
expect
that
to
show
up
it's
like
a
downstream
like
I'd
say
you
can
look
at
this,
because
maybe
it's
a
problem
with
the
model
or
something
like
that.
E
It would be a significant change. So yeah, I will find that old position paper, Fede, and we should re-read it and figure out whether we think it has aged well or whether it requires updates. Before we make any change here, we should make changes there.
A
Very good. Should we move to the next one? Thank you, Marek and Han, for presenting here. Okay, and you are in the spotlight again.
C
Yeah, this is my other favorite topic. So basically, I have been looking at various open source implementations of — sort of prescriptions on — how to do control plane upgrades, and I've been kind of running around. I talked to SIG Cluster Lifecycle yesterday and brought this up, and they suggested I come here, so here I am. Basically, the way that we do upgrades with kubeadm and Cluster API, it occurs to me, is not ideal.
C
So
the
way
that
the
control
plane
upgrades
from
these
Like
official
entry
well
entry
for
cube
admin,
sort
of
mechanism
for
upgrading
clusters
basically
involves
upgrading
a
node
at
a
time,
and
so
what
ends
up
happening?
Is
you
end
up
with
a
skew
of
you
know?
Api
servers,
controller
manager,
scheduler,
and
you
end
up
with
a
non-deterministic
upgrade
sequence
which
to
me
doesn't
seem
ideal
and
in
our
official
documentation
it's
actually
quite
fascinating.
C
We
prescribe
that
people
use
either
Cube
admin
or
that
they
do
a
manual
deployment
and
the
thing
about
the
manual
deployment
is.
It
seems
more
correct
to
me
than
the
way
Cube
admin
and
Cappy
actually
do
upgrades,
which
is
component
by
component
I.
Don't
necessarily
agree
with
the
order
of
the
components
like
SCD
should
be
basically
order
in
different
and
yeah.
C
It's
not
clear
to
me
that
there
is
in
ordering
preference
between
controller
manager
and
scheduler,
but
ideally
you
would
want
to
update
web
Hooks
and
then
the
API
server
and
then
either
the
controller
manager
or
the
scheduler
does.
Does
that
make
sense.
E
I think what you're describing is that the way kubeadm does this is it goes to one node, it upgrades all the components on that node, and now you have node one that has a latest-version kube-apiserver, a latest-version controller manager, a latest-version scheduler.
B
I don't know if this is a good time to bring it up, but on that point there around talking to localhost — you had your hand up — is that, like, an actual recommendation that we make, that things should talk to localhost too? Because we've definitely seen issues like you described, where you end up with new ones talking to older ones and whatnot.
E
It's
definitely
not
a
recommendation
now,
whether
it
should
be
or
not
is
also
somewhat
contentious.
I,
don't
think
it
should
be.
A
recommendation
hitting
and
not
ready,
qvpi
server
is
potentially
problematic.
Disallowing
on
ready
connections
is
also
problematic
for
different
reasons,
and
you
know
that
every
cubelet
is
not
going
to
use
localhost.
So
there
is
an
advantage
to
having
a
single
entry
point
for
all
track
that
allows
for
consistency.
J
All I was gonna say is: unless they changed it, I would be surprised if kubeadm is doing localhost stuff, because there have been bugs in the past. Like, I remember when we rolled out TokenRequest: the kube-controller-manager would be like, "oh, this API server doesn't support token requests, so I'll do the legacy behavior", because it would restart while, you know, the API server would get upgraded later.
E
As I remember, our recommendation is to update the kube-apiserver first, before you update any nodes, before you update the controller manager, before you update the scheduler. I agree that etcd should be order-agnostic, and I also agree that the kube-controller-manager and kube-scheduler ordering does not matter. Probably.
E
So I can say that the way OpenShift does this is: we have one component that goes first, that updates basic configuration aspects, gets your CRD schemas right, can migrate data and control paths if you need to handle migrations, and can update webhooks if needed. Then etcd and the kube-apiserver go, in order, to completion; then we update the kube-controller-manager and kube-scheduler in parallel and they complete; and then we update everything else.
E
What we found previously was that our biggest problem was that, for a while, we used localhost, and localhost didn't guarantee safety when a not-ready API server was hit by, say, a garbage collection controller, and the properties of that, when it failed, were really, really bad. We also found that having a single entry point gave us sort of one thing to watch: if the controller manager is going to get there, we knew our kubelets were going to get there.
C
Yeah, so that becomes less of a concern. But then, given the fact that it seems like API Machinery's stance on this is to go component by component, as opposed to node by node, shouldn't we reflect that in the upgrade tooling that we provide people?
E
I
think
writing
down.
The
properties
is,
is
valuable,
deciding
about
whether
to
bring
all
the
key
control
of
Interest
down
and
then
bring
only
new
ones
back
up.
That
would
be
an
interesting
change.
I,
don't
think
we
make
a
recommendation
on
that
one
one
way
or
another,
don't
know
whether
we
should
don't
know
whether
I
have
an
opinion
on
that
I.
G
Yeah, I think there are two parts to this. There's: what do we have to support for the ecosystem, given that people are doing this in different ways? And then, like: what are we actually recommending, and what is the fine print about that? Because if you're talking to localhost, there are definitely corner cases that come up — if a controller manager flaps, that potentially causes certain things to happen.
G
But
I
like
the
idea
of
like
going
through
and
updating
this,
especially
after
uvip
plans
like
I,
don't
think
SCD
should
be
in
this
list
that
we
have
in
the
manual
deployments.
Yes,
it
doesn't
I
think
with
uvip
in
place.
You
can
pretty
much
say
like
you
should
start
updating
your
Kube
API
servers
and
then
like
everything
else,
should
go
in
parallel,
and
that
should
be
fine,
because
they'll
talk
to
they'll
talk
to
the
newest
versions
of
things.
They'll
be
able
to
see
everything.
G
But
yeah
we
should
do
an
update
on
I,
don't
know,
I,
don't
know
how
how
strongly
we
should
recommend
to
other
people
what
they
need
to
do.
I
think
I
actually
think
it's
maybe
more
of
a
benefit
to
fit
to
people
to
make
it
clear
what
our
constraints
are,
what
we
support
and
like
what
we
think
is
a
pretty
good
like
sane
way
to
do
it
in
a.
G
I
would
have
to
look
at
it
really
carefully
to
see
if
we
could
reckon
if
there's
something,
that's
really
problematic,
I
I
think
we
either
have
to
reckon
like
realize
that
that's
happening
and
make
sure
that
we
support
it
or
we
just
have
to
like
recommend
a
major
change
but
I'm
kind
of
I'm
kind
of
in
the
situation
where
I
think
like
there's
a
lot
of
stuff
going
on
in
the
ecosystem,
just
kind
of
have
a
contract
to
support
n
minus
one
skew
of
all
our
components.
Oh.
B
A newer controller manager is not allowed to talk to an older API server, as per our versioning things — I've couched this in Jordan's language, I like that. We've run into problems doing exactly that on some of our internal clusters, and it's definitely not supported: you need to upgrade the API server first, for any client.
E
I'm
glad
it's
not
me
falling
short
on
reviews
when
I
review
I
I
review
it
for
for
compatibility
with
its
current
Cube
API
server,
not
for
compatibility
with
the
previous
one.
J
Yeah, and we have checks in, like, controller manager that make that assumption implicitly — like, "hey, is this API in discovery? Then I can use it." Well, that wouldn't make sense, because if you made that check once on startup and then the API server changes beneath you, you're kind of screwed.
E
We've
also
made
use
of
new
CSR
fields,
for
instance
in
CSR
controllers,
in
the
same
release
that
were
introduced
based
on
the
Assumption
qvpi
server
always
goes
first,
I
think
the
only
thing
ahead
of
qpi
server
is
hooks.
If
you
need
a
hook
that
needs
to
get
updated
for
compatibility
with
a
new
type,
then
that
has
to
go
first.
E
...is the only one remaining — sorry, I mean, Joe's here too, but he and I, I thought, wrote that many years ago. If we messed that up, let me know, Han.
C
It's
that
one
I,
don't
I,
don't
believe.
That's
that's
written
in
our
upgrade
documentation.
Awesome,
okay,
yeah.
E
I agree, I like those as well. But in terms of, like, that boundary: I think it is wrong if you can have a new kube-controller-manager talk to an old server — that needs to be updated.
E
You are correct — it's n minus one. I'm pretty sure we wrote it down. Okay, here we go: it is in the skew policy.
K
Thanks. I would like to introduce a KEP today. If you can go to "Files changed" and then, like, just view the README, so we can see the rendered Markdown.
K
Cool. So yeah, I'm hoping to get the SIG to agree that this is a worthwhile thing to do, get some feedback on the proposed solution, and hopefully have an alpha for 1.28. So yeah, this KEP solves two main issues. The first would be that it's difficult for users to add or change validations in their CRD schemas.
K
If
they
do
so
without
incrementing
their
storage
version,
their
controllers
will
break
while
trying
to
update
crds
that
are
are
CRS
that
are
now
invalid
from
the
new
validations
or
if
they
bump
their
storage
version.
I
just
think
that's
just
a
huge
cost
to
crd
authors,
so
they
probably
choose
not
to
do
that.
So
what
this
would
mean
is
that
crds
are
kind
of
in
stasis.
If
someone
wants
to
change
a
validation,
they
don't.
K
So can we scroll down to the small example here? This is an example just to illustrate what happens to the user when they change a validation in their CRD. So assume they have that first schema, with, you know, two fields that are strings, myField and myOtherField, and they create a CR that includes an empty string for myField.
K
Now they change myField to give it a minimum length of two, and if they try to update the CR to add myOtherField, without changing anything else, they get an error. This would also happen to controllers while updating the status field and, even more sadly, to users of server-side apply who submit a patch with just a single field that doesn't relate to the one that is failing — they'll still get that same error.
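[Editor's sketch: the scenario described reads roughly like this in YAML; the field names come from the example, but the schema fragment is illustrative, not the full CRD.]

```yaml
# Updated schema: both fields were plain strings; a constraint is added.
properties:
  myField:
    type: string
    minLength: 2      # newly added validation
  myOtherField:
    type: string
---
# Existing CR, created under the old schema:
spec:
  myField: ""          # empty string: valid before, invalid now
---
# An update that only adds myOtherField still fails today:
spec:
  myField: ""          # unchanged, yet triggers the minLength error
  myOtherField: "x"
```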
K
So this KEP proposes a feature to try to avoid this by doing a post-processing step on the schema validation errors. Can you scroll down to Design Details? We'll just do a post-processing step on the schema validation errors: find all the fields that have errors associated with them and see if they changed from old to new. If they did not change from old to new, then we will convert the error into a warning; and if the value that has an error associated did change, we continue to throw the error.
K
As they currently are, yes. For CEL expressions, the fields that you can refer to are limited to the field that you attach the expression to, so the field path that's reported by the error is going to be a strict superset of all the fields referred to by the expression — so we can do that comparison. There is a proposal to allow the user to set a custom field path to report the error on, just for the user experience's sake.
G
Yeah, that was an interesting corner case of this one, but I think we can work out the details on that. I've been tracking this, so I'm generally in favor of it. I mean, the main user-visible change will be that, if you do break your CRD schema, you will still be able to change things that don't affect the fields that you broke. But I guess...
E
It's worth debating: the feature gate is going to go away, and it is not common for us to...
A
Thank you. And I think we have one more, from beans, requesting attention to this PR.
A
Okay, so probably this is for Joe, David, and me. We cannot do it in the meeting; we can take it offline, unless there is something I am missing.
A
We'll pay attention to this. Okay, very good — we'll do these things when you see this recording. Okay, that's all. Anything else that I missed? It was a good meeting. I will upload the recording before the end of day today, send it to the channel, and put it in the document. Very good. Thank you, everybody, for coming, and we'll see you next time.