From YouTube: Kubernetes SIG API Machinery 20230906
Description
- [logicalhan, jpbetz, liggitt] [external, public] Safer Kubernetes Upgrades
- [nilekh/mo] KEP to move SVM in-tree
https://hackmd.io/@azure-container-upstream/H14Q8R2T3
- [benluddy] Binary Data Format Questions
A: Recording. Hello. This is the API Machinery biweekly meeting; today is September 6th, 2023. We are in September, can you believe that? It's amazing how the time flies, and we have a very nice agenda today. Thank you to the people that have been contributing to it. So without any more introductions, let's start with Han, Joe, and Jordan, if he joins.
C: So the proposal is linked in our agenda doc, but basically it comes down to this: we want to introduce new functionality into not only the API server, but every single component, such that during an upgrade cycle we can upgrade a component so that its binary version is, let's say, 1.29 while our cluster is on 1.28, and we want to be able to start that component in a mode that is compatible with 1.28. This means that all APIs, all resources that are being served, and all feature flags that are enabled would be consistent with the set that was enabled for 1.28. The reason we would want to do this is to make the steps for upgrading more granular, because when we fail during an upgrade, depending on when we fail, we have more information about what actually failed. And if we have smaller steps, there's also less to rewind if we need to roll back.
C: So one of the things that I am proposing is adding some metadata to feature flags, so that we can actually compute the set of feature flags that would be enabled at version X. And in terms of resources, I want to introduce when that resource was enabled by default, and I want this to be mandatory. So probably not prerelease-lifecycle-gen, but I want to do something similar to that, I guess.
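The per-gate metadata idea above could be sketched like this; the gate names, version pairs, and the `enabled_at` helper are illustrative assumptions, not the real Kubernetes feature-gate machinery.

```python
# Hypothetical sketch: each feature gate records the version at which it was
# introduced and the version at which it became enabled by default, so the
# effective set for any emulated (compatibility) version can be computed.

FEATURE_METADATA = {
    # name: (introduced, default_enabled_since or None for alpha/off-by-default)
    "ServerSideApply": ((1, 14), (1, 16)),
    "WatchBookmarks":  ((1, 15), (1, 16)),
    "NewShinyFeature": ((1, 29), None),
}

def enabled_at(compat_version):
    """Return the gates enabled by default at a given compatibility version."""
    return {
        name
        for name, (_, default_since) in FEATURE_METADATA.items()
        if default_since is not None and compat_version >= default_since
    }
```

With metadata like this, a 1.29 binary started with compatibility version 1.28 can serve exactly the 1.28 default gate set.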
C: There are more applications, obviously. If we're able to start a binary on 1.29 in compatibility mode 1.28, and if we change some of our internal policies, it also becomes possible to start these binaries on compatibility versions that extend past n minus one. For instance, we could start a binary on n minus 3.
C: This would theoretically enable skip-level upgrades. What we could then do is upgrade a binary to n plus three, with the compatibility mode three versions back, and then do a stepwise bump of the compatibility-mode flags in order to do a skip-level upgrade. That would be sort of the high-level overview of how that could possibly work.
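A minimal sketch of the stepwise skip-level flow described above. The one-knob-at-a-time rule and the maximum gap of three are assumptions made for illustration, not settled policy from this discussion.

```python
# Illustrative rules for a skip-level upgrade path expressed as a sequence of
# (binary_minor, compat_minor) states.
MAX_GAP = 3

def valid_step(cur, nxt):
    """One of binary/compat changes per step; compat never leads the binary,
    and the binary/compat gap stays within MAX_GAP."""
    changed = (cur[0] != nxt[0]) + (cur[1] != nxt[1])
    return (
        changed == 1
        and nxt[1] <= nxt[0]
        and nxt[0] - nxt[1] <= MAX_GAP
    )

def valid_path(steps):
    return all(valid_step(a, b) for a, b in zip(steps, steps[1:]))

# Skip-level upgrade from 1.28 to 1.31: bump the binary once, then walk the
# compatibility version up one minor at a time.
path = [(28, 28), (31, 28), (31, 29), (31, 30), (31, 31)]
```

A direct jump of both knobs at once, (28, 28) to (31, 31), would violate the one-knob rule and so would be rejected.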
C: No, I think each controller would... the thing that we are considering is this: Joe is actually working on this component resource proposal, which is somewhat running in parallel, but that would allow us to model component lifecycles. I have not really been thinking much about cross-component skew, though I suppose we could build that in once we have these component registration things. And Joe has a hand up.
E: Yeah, I think the answer is not for this particular sub-feature, but in general I think that's the direction we're headed, because this allows us to be very clear on what our compatible version is. And, like Han mentioned, we want to build up more visibility into what all the versions of things are, as API objects, and once we do that, I think detecting versions...
F: I saw the document before and didn't think of this, but Han, as you were speaking you talked about it enabling more granular upgrades, but manually. I want to make sure that we're not changing the default; like, if someone does nothing and doesn't pass in the...
G: To David's question about interlocks, this is actually something I thought about even if we didn't add any of this stuff. We already say things like: this component must not be newer than that component, or the kubelet must be within the supported versions. I kind of think it would make sense to already add some of those checks. Like, when the kubelet starts up, if it's running against an API server out of skew, by default the kubelet should say, no, I can't do this. Or a controller running against an API server that's out of supported skew should, by default, say, no, I can't do this. Maybe we could let people ignore version-skew checks or something, I don't know, but taking some of what's already in our supported-skew doc and making it safer, or more obvious when people are doing things out of bounds... I don't know.
D: "More obvious" is what I'm going for, because I think this is going to get more and more people trying to run skewed who maybe haven't thought about many of the edges, particularly around downgrades. I've seen people screw up downgrades already, and I know here we're going to get faced with questions like: if I run a newer kubelet in an older compatibility mode, can I run that against my older API server? Because in a downgrade, that's exactly what people want. Like, my control plane went bad.
G: Yeah, that's right. Even if the checks are simplistic in nature, like the kubelet starts up, checks the version of the API server it's running against, and makes sure it's within bounds, a simplistic check like that is way better than what we do today, which is: no check, run, and then maybe fail in weird ways in edge cases.
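A minimal sketch of the kind of startup check being proposed, assuming the documented "kubelet at most three minor versions behind the API server, never newer" skew policy; the function names are made up for illustration.

```python
MAX_KUBELET_SKEW = 3  # kubelet may trail the API server by up to 3 minors

def parse_minor(version):
    """'1.28.2' -> 28"""
    return int(version.split(".")[1])

def kubelet_within_skew(kubelet_version, apiserver_version):
    """True if the kubelet is not newer than, and not too far behind,
    the API server it is connecting to."""
    k = parse_minor(kubelet_version)
    a = parse_minor(apiserver_version)
    return 0 <= a - k <= MAX_KUBELET_SKEW
```

At startup, a kubelet could run this against the discovered server version and refuse to proceed (absent an explicit override flag) when it returns false.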
E: Yeah, if people aren't sure what feedback to give, one thing I found useful is to read this and then try to imagine which upgrade paths you would want to use with it, and see if you think it supports them. I like to try to simulate different types of upgrades with this, because there's more that becomes possible, and some of them are much safer than others. Yeah.
C
That
kind
of
thought
experiment,
if,
if
you
really
want
to
look
at
the
upgrade
flow,
like
the
testing
Matrix,
that
we
have,
we
have
a
little
small
section
with
the
testing
Matrix
and
that
pretty
much
gives
you
like.
The
envisioned
flow
right.
Like
start
binary
version,
that's
compatibility
version
X.
Then
the
control
plane
ends
up
in
a
mixtape
where
we're
upgrading
like
one
of
the
control
plane,
instances
to
X
plus
one,
but
only
one
of
them.
So
then
we
we
have
a
mix
of
binary
versions,
but
the
compatibility
version
is
the
same.
C: So everything in the cluster should theoretically be thinking that it's on 1.28, even though it's a mix of 1.28 and 1.29. Then we upgrade all the control plane instances to 1.29, but it's still pretending to be 1.28. And then comes the real part: this is probably where we would expect to encounter some failures, and this is also where we expect UVIP and related work to come into play, which is the control plane at binary version X plus one with mixed compatibility versions.
D: Yeah, I was going to point out that one of the sharp edges we have hit when turning flags on and off individually is that you have to adjust the runtime config at the same time that you modify feature gates, and the runtime config has a granularity of group/version, but the feature gate has a logical granularity of a resource within that group/version.
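The granularity mismatch can be illustrated roughly as follows; the resource-to-gate mapping here is a hypothetical stand-in, not how the API server actually wires these flags together.

```python
# --runtime-config toggles a whole group/version; a feature gate can guard a
# single resource within that group/version. Serving a gated resource requires
# both knobs to agree, which is the sharp edge described above.

runtime_config = {"resource.k8s.io/v1alpha2": True}
feature_gates = {"DynamicResourceAllocation": True}

# Hypothetical mapping: resource -> (group/version, guarding feature gate)
RESOURCES = {
    "resourceclaims": ("resource.k8s.io/v1alpha2", "DynamicResourceAllocation"),
}

def resource_served(resource):
    """A resource is served only if its group/version is enabled in the
    runtime config AND its guarding feature gate is on."""
    gv, gate = RESOURCES[resource]
    return runtime_config.get(gv, False) and feature_gates.get(gate, False)
```

Flipping either knob alone leaves the resource unserved, which is why the two must be adjusted together during an upgrade or rollback.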
D: I thought I did, but either I didn't update the help, or... maybe I didn't update the help. You didn't update the help! Naughty Ben, go fix the help.
D: All right then, never mind; we won't hit that edge, and it'll be possible to do what we need.
A: Are you planning to have something in person at KubeCon to discuss this? Like, yeah.
C: ...I'll be there, okay. Yeah, we should have a face-to-face; you guys should register for it. I know Instrumentation is doing one; hopefully it doesn't overlap, because I would love to attend the API Machinery face-to-face.
A: Very good. Okay, unless there is anybody else that wants to talk about this one, let's move to the next topic.
H: So, we spoke about this, I don't know how many meetings back, some amount of meetings back, but earlier this year. What I wanted to get from this meeting is just a sense of agreement on the goals and non-goals for such a KEP, because I think that directly influences what the design of the KEP would actually do.
D: I think at least some of the non-goals are worth going into a defense of. Why an API should not handle API servers is probably worth greater consideration when bringing it into core.
D: Yeah, as I recall it's an alpha API, maybe. And if we missed something that we need to have in order to work with API servers without greater coordination... because right now it's all possible with external coordination, but without external coordination...
H: I guess my thought process was basically: what is the smallest thing we can do here? So my hope was that that could be a logical follow-up, where we have the capability to drive storage migration in-tree, and then we start adding capabilities to automatically drive it based on criteria, whether the criteria be schema changes or key rotation or whatever.
D: Yeah. So by bringing it in-tree, it seems likely that we will grow adoption, and if part of the thing we have difficulty with is an API structure that doesn't work well with API servers, I think we'd be better served to correct that during this move, with a v1alpha2, than leaving it broken.
H: Maybe I'm misremembering; I thought the storage version API functioned, it just didn't function for CRs, custom resources. Like, it was correct, it just wasn't complete.
D: I just missed it, then. I thought that there were pieces that were added by... was it CC?
B: I was going through the KEP that listed this, and I think there was one interested person who was trying to implement it, but I couldn't find any references, so I thought it's not implemented. Okay.
A: You're mixing things up, maybe. We did move to beta the API server identity, which was part of this, but...
H: There was the PR from Roy that was going to fix it so that the storage version API was actually correctly honored for custom resources, and that never made it in and hasn't progressed. But the gist of that bullet point is saying: I don't want to try to solve all of those problems now. If there are problems within the custom resource APIs that exist out of tree today, then yes, most definitely, we should fix those before moving them in-tree, or as part of the process of moving in-tree. Okay.
D: Yeah, so I was just mixing up names; storage version... I had the wrong one in my head. As long as, for the one that we're moving, we address a concern like that during the move.
H: So nominally, the high-level goals of this KEP right now, as stated, are: there will exist something inside of controller-manager that will make it so that a human being or an automated entity can ask for storage migration to happen, and then it will happen. How, and which APIs are involved, doesn't really super matter. But it basically says that we will not automatically do it on random Kubernetes schema changes, we will not automatically do it on Kubernetes encryption-at-rest changes, we will not automatically do it on...
D: We killed the exposed served pieces, but we still have to be able to decode them, because there's no guarantee that they are not stored in etcd today, and one of the reasons for this tool is to allow us to permanently retire them. We wouldn't be able to do it willy-nilly, because you don't want to make it impossible for people to use new clients to talk to your old server, but on some cycle we would finally be able to remove the old serializations.
H: Yeah, so right now, as written, there isn't, because there's no automatic triggering of it; it doesn't give us that yet. If we want that... I wasn't exactly sure what the automatic triggering would be on, to give us the guarantee that we can finally make extensions/v1beta1 go away and we're good.
H: Man, I was trying to make this KEP small. You guys are just making it larger.
D: I would really like to see this API use the new streaming watch capability that we're building. I don't think I would predicate the move on it, but as a GA criterion I think that would be really helpful for the memory footprint of a kube-apiserver when this is running automatically and trying to do those conversions. We've definitely hit spikes, even with pagination, trying to do this.
E: Right, it might work, because when it first starts streaming, it streams the listing first, and that gives you a marker to tell you that the list is done. So if your only goal was to get the list, you would get those as watch events, but then you get a marker basically saying you're done with the list, and you could just stop.
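The consumption pattern described here could look roughly like this; the event shapes and the `initial_events_end` marker are simplified stand-ins for the real streaming-list protocol, not the client-go API.

```python
# The server replays current state as synthetic ADDED events, then emits a
# bookmark marking the end of the initial list. A consumer that only wants the
# list can stop at the bookmark instead of watching forever.

def collect_initial_list(events):
    items = []
    for event in events:
        if event["type"] == "BOOKMARK" and event.get("initial_events_end"):
            return items  # initial list complete; stop consuming
        if event["type"] == "ADDED":
            items.append(event["object"])
    return items

stream = [
    {"type": "ADDED", "object": "pod-a"},
    {"type": "ADDED", "object": "pod-b"},
    {"type": "BOOKMARK", "initial_events_end": True},
    {"type": "MODIFIED", "object": "pod-a"},  # never reached by the consumer
]
```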
G: Yeah, I'm just trying to figure out what happens if it can't finish processing the full set before, I don't know, whatever expires. Like, if it's doing a no-op write to everything and that takes a while, or it gets throttled, or it goes slow to avoid flooding the server with writes, maybe it doesn't handle the full initial list in time and gets dropped. So then it restarts, but it restarts with the full initial list again. So maybe it needs to do internal bookkeeping to make sure it can skip stuff.
H: So, David, I think OpenShift is the only thing I know of that actually runs SVM. Does anybody else actually run SVM?
H: Okay, so yeah, we would move... so I haven't looked at the existing logic. I assumed the existing logic at least uses pagination with proper handling of continue tokens, right?
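Proper continue-token handling, as asked about here, follows this shape; `list_page` is a toy stand-in for a LIST request with `limit` and `continue` parameters.

```python
def list_page(store, limit, cont=""):
    """Toy server: return one page plus an opaque continue token
    (empty token means the listing is complete)."""
    start = int(cont) if cont else 0
    page = store[start : start + limit]
    next_cont = str(start + limit) if start + limit < len(store) else ""
    return page, next_cont

def list_all(store, limit=2):
    """Client loop: pass each returned token back verbatim until it is empty."""
    items, cont = [], ""
    while True:
        page, cont = list_page(store, limit, cont)
        items.extend(page)
        if not cont:
            return items
```

Real continue tokens are opaque and can expire, so a production client must also be prepared to restart the list from scratch when the server rejects a stale token.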
B: I think in the original, I mean in the SVM, the watch is more like alphabetical order rather than indexed, if I read that code correctly.
G: It would be good to define whether that is a guarantee or just an implementation detail, especially around some of the initial watch stuff. We've seen a lot of implementation-as-spec, where behavior wasn't especially well considered, and ordering on initial synthetic watch events was an example that I know was not super well specified. It may be different for the streaming watch; if that's the case, that's good, but we need to make sure.
H: I know list is in lexicographic order; I just didn't know about watch, and I certainly don't know anything about streaming... a lot of stuff. Okay, let's see, could you scroll up to the non-goals for a second, so I can remember what I said was a non-goal? Do we all agree that we don't have to integrate with encryption at rest anyway?
H: I'll do it later; just, okay, just noting what I'm doing right now. I think we've talked about the second one; I don't think we're going to use the discovery API for anything new, so...
H: All right, okay. So we might have to write, like, a controller that watches for SVM hash changes and conversions, or sorry, storage version API hash changes and new conversions, and then runs the migration, I guess, something like that.
H: Sure, or even just on... like the question about the whole custom resource stuff, where you're like: hey, I have this old schema, who is causing the migration to run? Are we doing it automatically? Are we doing it based on the spec or status of the CRD? Like, what are we using to drive the run?
C: Yeah, so Joe's working on this component resource, which is going to help us model cluster lifecycles, and it makes a lot of sense to hook that in there, like, a lot of sense, right? Because once we're transitioning version boundaries, we know exactly where we are in the cluster lifecycle flow, and that's exactly what we need to kick off storage version migration.
G: ...images or KMS key rotation or... so when you say embed, do you mean the source of truth for what this thing is doing would be inside that, or that it could drive the request to do a storage migration? Having multiple things that can drive storage migration seems good to me.
E: There's also... earlier, when we were talking about the feature flags, we were mentioning the version-skew problems. I think it could also be useful for that, right? Like, you shouldn't really upgrade beyond a certain point if you haven't done certain storage migrations. I think we might have enough information to do something more here.
E: ...want to get to. I'm not sure exactly; it's kind of more of a general idea. Like, I like Jordan's idea that we shouldn't have to wait on the feature flags; this is where we have enough information now. I don't know exactly how we want to do it, but I would like to get there as part of this.
G: To answer your question, though: I think starting with it being manual is easier to reason about, and we know we're going to have things driving it that we won't have visibility into. So starting with that surface area gives us the building block, and then, if we want to hook things into it to drive it automatically, we can consider that case by case.
G: Okay, that's what I would start with; it's the simplest thing that could possibly work, so...
H: Okay, so that makes sense to me. Would we then, in the implementation of this KEP, include any flow that's automatic? For example, if the storage version API is enabled and the hashes change for what the API servers agree on, would that trigger the creation of one of these migration thingies?
G: Until we're sure that automatically doing a bunch of stuff is what we want, I wouldn't do it automatically. Like: hey, I just got the last server in my cluster upgraded, they all agree, and now, bam, I'm getting a ton of writes? I'm not confident that's what we want, so until we are confident, I wouldn't do it automatically. Maybe there are ways to be sure that's what we want, but I wouldn't do it in the first iteration. I don't know; David? Joe?
G: I think it's more a question of what we want the default state to be for people who are just upgrading Kubernetes versions and aren't paying attention to this feature. Are we envisioning that, at some point, they will do an upgrade and, without taking any action or changing any invocations, they start getting storage migrations?
D: I have envisioned it being the case where, at a certain level of Kubernetes, when this feature is stable, a user would change their CRD schema to say "I no longer want to serve this version," and this controller would get triggered in some manner to start migrating their custom resources, and then the CRD status would eventually be updated to say this version is no longer stored.
D: That would be nice. And so that means you do these things... well, for built-in types it depends on how it's driven, right? So if you drive it based on a CRD, yeah, it probably wouldn't happen. If you drove it based off a storage version, well, maybe it would: you have something stored that doesn't match what's served, and you say, oh, it's unnecessary for this to be stored this way, let me trigger that.
E: Yeah, I'm a little torn. I really like the idea of getting CRDs done eventually, and I would like to get built-ins fully automated, but I don't want to stop progress in the effort for perfection. So, as long as we have a north star and we're working towards it, even if one KEP finishes and the other one still has work to do, that's acceptable, in the sense that somebody has a task they complete and we get closer towards the goal.
A: We can put in more time to keep discussing this, online or offline, if it doesn't get resolved. Next meeting.
I: Okay, so for background: I've been looking into the possibility of introducing a new binary serialization format to Kubernetes, with the aim of reducing the performance penalty of CR traffic, CR serving and storage, as well as dynamic clients; basically anything that can't take advantage of the generated protobuf serialization. I've been making proofs of concept and various comparisons, and one of the challenges to putting forth a KEP on this...
I: ...is that a lot of the strictness and correctness decisions that we make have an impact on what performance gains we actually see, and since that's the overall benefit of even trying to do this, I have a few questions to put to the group to try to collect thoughts on.
I: So today, with JSON, we keep the last value. We produce non-fatal errors that enumerate all the duplicate keys that we're seeing, and, depending on field validation options, or I think the recognizing decoder was the other place, we can interpret these strict errors and do different things.
I: You can reject them or not. Whereas in protobuf, all the generated unmarshal code is last-field-wins for duplicated fields, and it's the same thing for map elements: the last map element with the key wins. So clearly in JSON there's ambiguity around duplicate keys, and it's also a text format.
I: So a lot of users are manually editing JSON, and potentially using it in a GitOps flow or applying it directly with kubectl. So that's error-prone.
I: What would such a decoder do, taking into account that there's a performance cost to keeping track of which keys you've seen, and also the fact that we're probably not going to find an off-the-shelf implementation of any binary encoding that does exactly what our JSON decoder does?
I: Maybe there are more options that I haven't thought of, but the first is: do what JSON does, faithfully, which probably requires working with the maintainer of an encoder library, or a fork. Two...
I: I include this because some of the existing implementations that I've looked at for binary encoders already support this, and that's: if you have duplicate keys, this is not a valid object for me to decode, and you just fail.
I: I think the performance cost of doing that is probably virtually the same as the first option, but...
I: ...it's supported off the shelf in the libraries that I've evaluated. And then the third is sort of what proto does: say, hey, we've defined up front that we're only going to consider your last map item. So it's not an error, except that we're going to ignore any other duplicates, which is supported by virtually every encoding library I looked at and doesn't have as much performance cost. So I'm curious to hear thoughts, or any huge misses, on why we're doing this with JSON. And I see Joe; everyone?
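The three options can be compared with a plain JSON decoder, since `object_pairs_hook` exposes the raw key/value pairs before they collapse into a dict; a binary decoder would hook in at the same point in the pipeline. The helper names here are illustrative.

```python
import json

def last_wins(pairs):
    """Option three: protobuf-style, silently keep the last duplicate."""
    return dict(pairs)

def fail_on_duplicate(pairs):
    """Option two: the object is simply invalid; reject it outright."""
    keys = [k for k, _ in pairs]
    if len(keys) != len(set(keys)):
        raise ValueError("duplicate map key")
    return dict(pairs)

def strict_errors(pairs, errors):
    """Option one: last wins, but record every duplicate as a
    non-fatal error, like the current JSON strict decoding."""
    seen, out = set(), {}
    for k, v in pairs:
        if k in seen:
            errors.append(f"duplicate key {k!r}")
        seen.add(k)
        out[k] = v
    return out

doc = '{"a": 1, "a": 2}'
```

Options one and two both pay for duplicate tracking; option three is the only one a decoder can implement without remembering which keys it has seen.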
E: Yeah, I had two thoughts. One is that it's quite plausible that people using YAML would convert to this before it's shipped to the API server, so it's quite possible that handwritten stuff could come through that pathway.
D: If memory serves, the spec for CBOR, the RFC, actually says duplicate keys are not valid for deterministic decoding, and if a client, say kubectl, was to convert YAML to CBOR before sending it, that client would be able to decide to error right there, right?
G: I think we actually have a similar issue with YAML and JSON. If you give kubectl YAML today, it first converts it to JSON and then sends JSON to the server, and so it loses duplicate information when it does that transformation. When we added the server-side validation, we captured that duplicate information at that point, client-side, and made use of it. So in kubectl, when you say kubectl apply and you give it a YAML file with duplicate stuff...
G: ...doing the same thing when converting to CBOR would probably make sense, and not opening up new ambiguous things that we accept on the server seems better to me. Like, I would require everyone sending CBOR to the server to send well-formed CBOR, and handle the duplicate errors at the transformation point in the client, like we do with YAML to JSON.
E: Yeah, I agree, though what I don't want is to break the protocol, so that there's, like, a Kubernetes CBOR which is not real CBOR.
I: ...so it sounds like we favor just making it invalid at decode time if there are any duplicates, but not necessarily the strict-error business where we're enumerating all the tokens. Yeah.
E: And if there's a client that needs to have backwards-compatible behavior, they could implement that behavior when they do their conversion to CBOR, right? So if they want last-one-wins, they do that before writing the CBOR, and then they just read the last one.
G: Can you go up, Ben? Maybe I misunderstood the middle issue. "Duplicate keys are never acceptable and produce fatal errors on decode": that's what I would have expected the default to be for CBOR, and I would have expected that to be way cheaper, actually, because they don't have to maintain a list. Am I misunderstanding? I thought option two would be the default and the cheapest.
I: It depends on the implementation. Most of them expose options where you can decide what to do in this case. Okay.
I: Okay, great, thanks. How much time do we have? One minute. So maybe not enough time to open another topic.
D: Can we get a list of people who are actually interested in going through the others? Because they're likely to take a longer period of time. If we can find time to discuss them, we have, I think, a better shot at resolving them. So perhaps an assignment for Ben: reach out to everyone via Slack and figure out whether they are interested, and try to put together a one-off.