From YouTube: Kubernetes SIG API Machinery - 20230726
Description
- [serathius] SIG etcd Charter & Vision
- [benluddy] binary encoding for custom resources
-- Discuss benchmark results / solidify criteria to move forward
A
Pretty good, there we are; we are recording, we are live. Welcome, everybody. This is the API Machinery bi-weekly meeting; today is July 26th, 2023, and we have a nice agenda. We had to cancel the last two meetings, so I apologize for that. In the last five years we never canceled two in a row, so this might be unprecedented, but we will try not to repeat it.
B
I wanted to bring to you the topic of changes that are happening in etcd, which I think are pretty relevant and come somewhat close to the scope of API Machinery. Around a month ago there was early agreement about the creation of SIG etcd, and today we are at the stage of getting through the details of which things will or will not apply and what exceptions SIG etcd will need, but at the end there is high-level agreement.
B
We just need final confirmation from Steering on the details. At a high level, this is not about donating etcd, nor changing the project goals. It's about utilizing the experience, infrastructure, and processes built by Kubernetes, with etcd still maintaining its own goals and serving both Kubernetes and its own users. We recently did a user survey and we are trying to get through all the use cases, but at a high level we just need more support. etcd is a critical component that needs support that we currently cannot get in the CNCF.
B
We want to collaborate closely with Kubernetes and also API Machinery. One of the things that was brought up was how etcd can protect itself, how we can maintain etcd as its own thing when it's part of Kubernetes. So, to do that:
B
I have drafted two things; I put them into one doc for clarity. First, defining our goals as a SIG, and second, defining a vision for how to execute on those goals. At a high level, we want to continue supporting Kubernetes as the place to store infrastructure configuration, which is what Kubernetes does. But Kubernetes, or the API server, built a lot of things, like the watch cache and the reconciliation loop, that are really great, are underappreciated, and are not available outside of Kubernetes, and those things shouldn't be locked into Kubernetes.
B
So my current interest is making etcd define the contract, optimizing the contract, and making it available for everyone. And, to put a cherry on top, we want to scale beyond Kubernetes, because API Machinery is great, scheduling pods is great, running Kubernetes is great, but we want to go further. There is now more movement in the batch area, there is more movement in things like kcp, multi-cluster, or controller runtime, and etcd should be able to keep up with that.
B
I
I
put
on
top
like
additional
in
the
document.
I
put
the
interface
that
I'm
working
on
to
that.
We
have
already
agreement
with
with
Joe
like
sick,
leads
about
having
this
interface
on
hcg's,
defining
interface
between
kubernetes
and
LCD
and
putting
it,
but
we
will
put
it
on
hcd
side,
because
this
is
part
of
hcd
reconcile.
We
want
to
put
reconcil
or
how
kubernetes
uses
and
the
interface
exactly
in
hcd,
but
not
and
keep
it
there
yeah.
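The interface itself isn't shown in the transcript. The following is a purely hypothetical Go sketch of what a narrow, Kubernetes-facing etcd contract could look like; all names and methods are illustrative, not the actual proposal:

```go
package contract

import "context"

// KV is one key/value pair at a specific store revision.
type KV struct {
	Key      string
	Value    []byte
	Revision int64
}

// Storage is a hypothetical sketch of the contract: the handful of
// primitives Kubernetes actually relies on etcd for.
type Storage interface {
	// Get returns the value and revision for a key, if present.
	Get(ctx context.Context, key string) (*KV, error)
	// ConditionalPut writes only if the key is at expectedRevision,
	// mirroring the compare-and-swap transactions Kubernetes uses.
	ConditionalPut(ctx context.Context, key string, value []byte, expectedRevision int64) (int64, error)
	// Watch streams changes under a prefix, starting at a revision.
	Watch(ctx context.Context, keyPrefix string, fromRevision int64) (<-chan KV, error)
}
```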
D
My question is about etcd 3.4 to 3.5, and either of those to 3.6. I haven't seen it explicitly called out as a goal of the working group to ensure that this is a safe path for Kubernetes users, but I would like it to be. If it isn't, is that something you discussed and decided not to include, or is that something we could add?
B
We added a roadmap for what we want to deliver for 3.6; the scope was mostly defined last year, we are just getting really slow on getting there. Recently, if you go into the main repository, there is now a roadmap file in the documentation. So we have a roadmap that lists all the tasks that we want to go through.
D
Just building on my initial comment, which I think Jordan is riffing on: my understanding is that there are issues in 3.5 that are known to be dangerous, so we are cautious in many cases about choosing 3.5 today. What I'm not clear on is whether we intend to fix those issues, so that 3.5 is an expected step between 3.4 and 3.6, or whether we expect Kubernetes users to go from 3.4 to 3.6 and skip 3.5, or something else.
B
Okay, now.
F
Yeah, so I actually just had a conversation about this with Marek like three hours ago, and basically I would like to push for a downgrade path from 3.5 to 3.4, so that we could upgrade safely and have some sort of option to roll back. Given the issues with 3.5, I think that is a necessary requirement for us to be able to do that.
B
To clarify, to get everyone on the same page: we had issues last year, but we built a whole framework to track and find those issues. We found issues in 3.5 that reproduce all the reports that we had, we are consistently testing it, and we even found issues in 3.4. What we didn't resolve: there are still open Jepsen reports on 3.5 that we are unable to reproduce.
B
It
reason
is
we
don't
have
experts
in
in
closure
like
we
don't,
and
that
outer
of
Jepsen
is
totally
not
available.
C
So I guess, a question for Diego: do you see the acceptance of the SIG as predicated on whether or not they agree to do this, or is it independent of whether an etcd SIG is created?
D
Okay, I am in favor of the Kubernetes project directly investing in etcd as a hard dependency, and I'm suggesting that this specific idea be added to its charter, or roadmap, or the things that it keeps an eye on.
B
Yeah, all right, just to finish: we have downgrades implemented for 3.6. We are just discussing and driving agreement to backport it to 3.5; when it gets into 3.5, it will support downgrades from 3.5 to 3.4.
H
Yeah, grab us anytime if you see something that we might need to attend to, or anything like that.
A
Okay, thank you, Marek, and everybody participating in this. There's also Shane, but I don't think he's going to be able to make it today. So let's move to Ben, who I see on the call, and discuss his topic.
I
Sure. Hi everyone, I'm interested in reviving the topic of binary encodings for custom resources, that is, APIs defined by CRDs. So, for the context: it's not a new idea.
I
The built-in types have binary encodings that are performant, but they're not something we can just adopt for CRDs, because users are defining the structure of these APIs at runtime. A couple of years ago, and at least that's the latest work in the space I can find, Joe had done a nice survey, a sort of taxonomy of a few different options for various encodings, and had also gone pretty deep on how we might support one.
I
What sort of architectural machinery is required to actually implement and support the encoding. See the slides shared just now; there's a taxonomy on the second slide between the two approaches.
I
So basically there's a class of binary encodings that are self-describing, just like JSON is. They basically allow us to have better efficiency and potentially slightly smaller encodings, but because they're self-describing, every time you encode an object you're going to be serializing all of the keys in your objects, for example.
I
That's not an insignificant amount of overhead. On the other hand, there are schema-driven encodings.
I
Theoretically, these allow us to encode CRs to smaller sizes in terms of number of bytes, with good performance, because we have a schema in hand that allows us to translate values that we're reading out of bytes to whatever field they belong to on the CR API. So, by having a schema, we potentially avoid a lot of overhead. But, as I mentioned earlier, if we do go with a schema-driven encoding, along with it comes machinery to support persisting schemas and potentially managing the evolution of CRD schemas.
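To make the distinction concrete: a self-describing encoding can round-trip a runtime-defined object with no schema at all. A minimal Go sketch, assuming github.com/fxamacker/cbor/v2 (one publicly available CBOR library; the benchmarks discussed below compared two such libraries):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/fxamacker/cbor/v2"
)

func main() {
	// An unstructured custom resource: its structure is only known at runtime.
	obj := map[string]interface{}{
		"apiVersion": "example.com/v1",
		"kind":       "Widget",
		"spec":       map[string]interface{}{"replicas": int64(3)},
	}

	// A self-describing encoding needs no schema: field names travel with
	// the data, exactly as they do in JSON, just in a more compact form.
	c, _ := cbor.Marshal(obj)
	j, _ := json.Marshal(obj)
	fmt.Printf("cbor=%d bytes, json=%d bytes\n", len(c), len(j))

	var out map[string]interface{}
	if err := cbor.Unmarshal(c, &out); err != nil {
		panic(err)
	}
	fmt.Println(out["kind"]) // Widget
}
```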
I
So, in order to get this rolling, I prototyped various encodings with an eye to benchmarking, so that we can start to consider the actual performance characteristics of these encodings and use that information to decide: is it worth it for us to take on schema management in order to potentially have a more efficient encoding?
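A minimal sketch of what such a benchmark can look like in Go, comparing encode time for the same unstructured-style object across serializers; the object and the CBOR library choice here are placeholders, not the actual benchmark suite:

```go
package codec_test

import (
	"encoding/json"
	"testing"

	"github.com/fxamacker/cbor/v2"
)

// A small unstructured-style object standing in for a custom resource.
var obj = map[string]interface{}{
	"apiVersion": "example.com/v1",
	"kind":       "Widget",
	"metadata":   map[string]interface{}{"name": "demo", "namespace": "default"},
	"spec":       map[string]interface{}{"replicas": int64(3), "paused": false},
}

func BenchmarkEncodeJSON(b *testing.B) {
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(obj); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkEncodeCBOR(b *testing.B) {
	for i := 0; i < b.N; i++ {
		if _, err := cbor.Marshal(obj); err != nil {
			b.Fatal(err)
		}
	}
}
```

Running with `go test -bench=. -benchmem` also reports allocations, which matters for the memory-utilization concerns raised later in the discussion.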
I
So
I've
included
two
existing
serializers
here.
One
is
just
the
unstructured
Json
serializer.
This
is
what
we're
using
today.
This
trip
sort
of
represents
our
status
quo
and
anything
we
select
has
to
be
Improvement
on
this.
Otherwise,
it's
not
clear
why
we
want
to
I've,
also
included
protobuf.
The
the
generated
protobuf
serializer
here.
I
The built-in protobuf serializer, of course, also benefits from operating on typed Go objects and having pregenerated code, which is something we won't necessarily be able to do with CR encodings. So on top of those I investigated three encoding schemes: CBOR, which is a self-describing encoding, so we would not have to manage schemas if we used it (I compared two different publicly available CBOR libraries), and also two serializers that require schemas.
I
The v2 of the mainstream protobuf library supports reading protobuf definitions at runtime and generating reflective objects whose fields can be populated, marshaled, and unmarshaled at runtime, without codegen.
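A minimal sketch of that runtime-descriptor capability, using the protodesc and dynamicpb packages of google.golang.org/protobuf; the Widget message is a stand-in, not anything from the prototype:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protodesc"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/types/descriptorpb"
	"google.golang.org/protobuf/types/dynamicpb"
)

func main() {
	// Build a message descriptor at runtime: no .proto file, no codegen.
	fdp := &descriptorpb.FileDescriptorProto{
		Name:    proto.String("dynamic.proto"),
		Syntax:  proto.String("proto3"),
		Package: proto.String("example"),
		MessageType: []*descriptorpb.DescriptorProto{{
			Name: proto.String("Widget"),
			Field: []*descriptorpb.FieldDescriptorProto{{
				Name:   proto.String("name"),
				Number: proto.Int32(1),
				Type:   descriptorpb.FieldDescriptorProto_TYPE_STRING.Enum(),
				Label:  descriptorpb.FieldDescriptorProto_LABEL_OPTIONAL.Enum(),
			}},
		}},
	}
	fd, err := protodesc.NewFile(fdp, nil)
	if err != nil {
		panic(err)
	}
	md := fd.Messages().ByName("Widget")

	// Populate and marshal a reflective message entirely at runtime.
	msg := dynamicpb.NewMessage(md)
	msg.Set(md.Fields().ByName("name"), protoreflect.ValueOfString("demo"))
	wire, err := proto.Marshal(msg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d wire bytes\n", len(wire))
}
```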
I
I had hoped that this would perform well, but one problem with using this library directly is that we're operating on unstructured types, so map[string]interface{}, []interface{}, and such. Using dynamicpb requires us to convert from unstructured types: to take an unstructured type and populate the fields of a dynamicpb message, and vice versa in the other direction.
I
So this comes with a lot of overhead. I don't think it's a theoretical limitation of protobuf, the wire encoding, but I think it would be hard for us to adopt dynamicpb in its current state and have it be performant with that conversion overhead. Because of this, I evaluated a second schema-based encoding, Apache Avro. I prototyped a little bit of code that generates an Avro schema from OpenAPI definitions, and LinkedIn happens to have an Avro library in Go that is designed to operate directly on roughly the same types that unstructured objects can contain.
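A minimal sketch, assuming github.com/linkedin/goavro/v2 is the library in question; note it consumes and produces plain map[string]interface{} values, the same shapes unstructured objects hold. The schema here is hand-written, whereas the prototype generates one from OpenAPI:

```go
package main

import (
	"fmt"

	"github.com/linkedin/goavro/v2"
)

func main() {
	// An Avro record schema of the sort that could be derived from an
	// OpenAPI definition (hand-written here for illustration).
	codec, err := goavro.NewCodec(`{
		"type": "record", "name": "Widget",
		"fields": [
			{"name": "name", "type": "string"},
			{"name": "replicas", "type": "long"}
		]
	}`)
	if err != nil {
		panic(err)
	}

	// goavro operates directly on native Go types.
	native := map[string]interface{}{"name": "demo", "replicas": int64(3)}
	bin, err := codec.BinaryFromNative(nil, native)
	if err != nil {
		panic(err)
	}
	decoded, _, err := codec.NativeFromBinary(bin)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes, decoded=%v\n", len(bin), decoded)
}
```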
I
It has a sort of mechanical sympathy with our use case. I think the performance of the Avro serializer is representative of roughly any schema-based encoding; it's probably going to be in that ballpark. And in fact we see that the serialized object size comes down to roughly the same as what the generated protobuf serializer produces for the same type, but it is not as fast.
I
It's
not
clear
how
much
faster
that
could
be
with
effort
focused
towards
you
know
specific
optimizations
to
our
use
case,
but
but
it
is
probably
possible
to
make
that
perform
better.
I
So
I
was
sort
of
impressed
in
the
course
of
you
know
collecting
this
information
that
at
least
one
of
the
Seaboard
implementations
performed
quite
well
other
than
the
encoded
size,
the
encode
time
actually
outperforms.
The
schema
based
Avro.
I
Decode performance is not as good, but it is still an improvement over unstructured JSON. So I think we certainly could, today, just take a CBOR implementation off the shelf and drop it in, and we would relatively easily see improvements there. So my decision point now is: if we have the option to benefit immediately from a nearly trivial replacement for JSON serialization, what do we need to prove?
I
Sorry, how do we make the decision to go with that versus investing further in optimizing a schemaful encoding?
I
So I'd like to hear thoughts on what we're looking for to make this decision. I think that's effectively my spiel.
H
Yeah, so thanks for doing this, Ben; this is a really great continuation of stuff that we dropped a long time ago. I do think I agree. I'd always been worried that the dynamic protobuf would be a little underwhelming; these numbers confirm it, and it's actually even a little worse than I thought.
H
I.
Think,
like
the
fundamental
decision
is,
if
we
pick
something
we're,
probably
not
going
to
do
two
things
right,
we
would
probably
like
if
we
did
see
more,
that
would
probably
be
kind
of
it
they're
kind
of
hard
to
imagine
like
doing
seabor
and
Avro,
and
so,
if
we
did
pick
say
seabor,
we
would
presumably
get
a
pretty
significant
encode
decode
boost
and
I'm
guessing
that
you
could
probably
even
optimize
this
a
little
more.
H
But
then
we
would
kind
of
forever
give
up
on
the
the
data
storage
benefit
right
like
we
would.
We
would
kind
of.
We
would
be
kind
of
pretty
much
saying
like
we
we
chose
to
buy
us
towards.
H
You
know.
The
Simplicity
of
just
another
protocol
that
is
kind
of
like
you
know,
doesn't
require
any
extra
scheme
information,
it's
just
a
substitute
for
Json
and
then
we're
picking
that,
but
we'll
we'll
never
really
get
a
storage
method
from
that,
and
so
I
was
kind
of
curious.
If
anybody
had
any
objections
with
that
I'm
actually
much
more
concerned
with
the
encode
decode
speak
personally,
so
I
don't
see
it
as
a
huge
problem.
But
I
was
curious.
If
anybody
had
concerns
about
that.
C
I
am
also
biased
towards
encode,
particularly
encode
speed
I,
believe
we
encode
significantly
more
than
we
decode
because
of
the
watch
cache
I'm
interested
in
the
speed
there
and
the
memory
utilization
there.
The
actual
storage
size
has
not
been
that
much
of
a
practical
issue
in
that
most
people
who
hit
limits
on
what
we
can
store
are
they
want
something
so
much,
theoretically
larger
that
it
doesn't
matter
like
if
we
manage
to
save
50
on
our
storage.
C
It
won't
matter
to
someone
who
says,
but
I
want
to
stick
100
Megs
in
here,
like
it
just
won't
matter,
and
anyone
who.
E
Are there hands? No, there aren't hands. It might be helpful to add a few dimensions to the different options which were considered, like making it clear which ones are schema-based and which ones are self-describing, and also making it clear which ones are implementing a specification versus which ones where the implementation is the specification.
E
So
seabor
particularly
has
an
actual
RFC
standard,
and
these
are
like
two
implementations
of
that
standard
which
appeals
to
me
and
then
then
we
can
look
at
things
like
how
how
stable
has
that
specification
been
over
time?
How
confident
are
we
that
the
spec
is
stable
and
then
how
certain
are
we
that
these
things
are
conformant
to
the
spec
in
terms
of
test
coverage?
E
Improving
correctness,
I
got
the
Seaboard
implementation,
the
first
one
I
I,
find
the
performance
very
compelling
and
so
I'm
immediately
thinking
like
if
we
could
get
that
performance
that'd
be
awesome.
Now,
let's
talk
about
correctness
and
like
how
confident
are
we
and
the
things
we're
writing
and
reading
and
the
edge
cases
we
know
about
around
Json
handling
like
how
does
this
thing
handle
those.
H
I
remember
many
years
ago,
I
did
go,
look
at
seabor,
B,
song,
b,
Json
and
I.
Think
a
couple
others
and
the
one
thing
it's
been
a
long
time
so
bear
with
me.
The
one
thing
I
do
remember
very
specifically
is
that
seabor
was
the
only
one
that
was
very,
very
clear
and
precise
in
what
the
protocol
was
everything
else.
There
was
some
place
where
I
felt
like
it
was
getting
pretty
hand.
Wavy
and
I
wasn't
really
sure
what
the
spec
was
but
see
boy.
You
could
read
the
spec,
you
could
understand
it.
H
I
felt
like
anybody
that
implemented.
If
they
just
walked
through
the
RFC,
like
it
told
them
exactly
what
to
do.
No
I
haven't
compared
an
implementation
with
that
so
I
agree.
We
still
need
to
do
that,
but
I
do
I,
can
kind
of
support
Seaboard,
at
least
from
that,
like
theoretical
side.
C
So one of the high-level choices that it would be nice to hear opinions on now is self-describing versus schema. I definitely like the final performance of our built-in, schema-based protobuf, but when I look at the delta between the schema options presented here, say Avro, which is what we would more realistically be able to achieve, and CBOR, I look and say: given the supportability concerns about schema evolution, storing the schemas, communicating those schemas, and getting clients to properly use the schemas to decode, I'm personally biased at this point towards self-describing. Is there general agreement on that, or is there more information someone thinks they need to make a choice? I guess most notably Joe and Jordan.
H
I
will
mention
one
thing
that
I
don't
think
it's
been
discussed
yet,
which
is
when
you
pick
one
of
these
you
can
draw,
you
can
kind
of
mentally
imagine
a
matrix
of
native
types
crds,
any
other
type
in
the
kubernetes
system
and
whether
or
not
we
would
support
this
new
protocol
with
seaboor,
because
it's
just
a
serialization
layer,
you
when
you
turn
it
on.
You
could
turn
it
on
for
everything
you
could
have
native
types.
H
If
you
go
with
something
like
Avro
or
dynamic,
Proto
I
don't
know
if
we
would
ever
turn
that
on
for
negative
types,
maybe
we
would,
it
seems
more
complicated.
I.
Think
seaboor
is
very
simple,
clear
in
that
way.
It's
just
another
protocol,
everybody
gets
it.
It's
just
part
of
the
wire
protocol.
G
In
in
this
discussion,
are
we
purely
talking
I
guess
maybe
I
it
wasn't
clear
to
me.
Are
we
talking
about
how
we
store
things
in
at
CD,
or
are
we
also
talking
about
how
we
do
it
on
the
wire
or
both
like
that?
I
think
I
wasn't
fully
clear
on
where
we
throw
away
the
Json
and
put
the
Seaboard,
because
in
all
the
places
where
the
Json
I.
G
Okay,
so
the
idea
would
be:
is
that
relatively
quickly
over,
like
you
know,
technically
I
guess
two
releases
you
could
you
could
switch
the
internals
to
start
using
seabor,
but
externally
it
would
be
a
choice
that
the
client
made
we
it's
except
encoding,
header
and
presumably
we
would
tell
client
go
to
start
using
that
as
soon
as
we've
done,
it
made
sense,
I
guess,
yeah.
H
When
we
store
things
in
FCD
that
we
store
a
little
prefix
that
says
what
the
contents
of
the
rest
of
the
bytes
is,
and
so
you
know
if
we
went
through
a
state
where
we
had
a
mix
of
Json
and
cboard
for
crd
types,
this
system
would
understand
that
it
could
read
either
it
would
it
would.
You
would
have
to
choose
which
one
you
prefer
to
write,
which
I
presumably
would
transition
to
C4
over
time,
but
the
system
already
can
support
that
would
go
pretty
smooth.
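A sketch of that dispatch-on-prefix idea. Kubernetes really does prefix protobuf-encoded objects in etcd with the magic bytes "k8s" plus a zero byte; using CBOR's self-described tag 55799 (RFC 8949) as the CBOR magic here is an assumption for illustration, not the agreed design:

```go
package storage

import (
	"bytes"
	"encoding/json"
	"fmt"

	"github.com/fxamacker/cbor/v2"
)

var (
	protoPrefix = []byte("k8s\x00")           // Kubernetes protobuf envelope magic
	cborPrefix  = []byte{0xd9, 0xd9, 0xf7}     // CBOR self-described tag 55799
)

// decodeStored sniffs the encoding of bytes read from etcd and decodes
// accordingly, so a mix of encodings can coexist during a transition.
func decodeStored(raw []byte, into *map[string]interface{}) error {
	switch {
	case bytes.HasPrefix(raw, protoPrefix):
		return fmt.Errorf("protobuf envelope: not handled in this sketch")
	case bytes.HasPrefix(raw, cborPrefix):
		// Strip the 3-byte tag head; the remainder is the plain data item.
		return cbor.Unmarshal(bytes.TrimPrefix(raw, cborPrefix), into)
	case len(raw) > 0 && raw[0] == '{':
		return json.Unmarshal(raw, into)
	default:
		return fmt.Errorf("unrecognized storage encoding")
	}
}
```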
C
It would have to be really, really, really good. I mean, I don't want you to have a false sense of what's likely here: if we do CBOR and we get numbers that approximate what's here for the CBOR ugorji library, convincing me that it's a good idea to support the code required to also manage schemas is going to take a lot.
G
I
I
They're,
actually,
the
one
of
the
core
implementations
has
unsafe
implementations
of
certain
things,
but
I
have
it
disabled
with
a
bill
tag
as
far
as
I
know,
none
of
the
others
are
doing
anything
unsafe,
they're,
not
important.
E
Yeah,
a
lot
of
the
high
performance
encoder
decoders
used
to
do
unsafe,
casts
to
get
around
like
map
reallocation,
but
go
actually
added
some
of
the
ability
to
reset
Maps
into
the
standard
reflect
package.
So
hopefully,
if
that's
all,
they
were
using
it
for,
hopefully
they
don't
need
to
do
that
anymore.
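This presumably refers to reflect.Value.Clear, added to the standard library in Go 1.21, which empties a map in place so a decoder can reuse its storage without unsafe tricks; a minimal sketch:

```go
package main

import (
	"fmt"
	"reflect"
)

// clearMap empties a map in place via reflection; the runtime can then
// reuse the map's existing storage on the next decode pass.
func clearMap(m interface{}) {
	v := reflect.ValueOf(m)
	if v.Kind() == reflect.Map {
		v.Clear() // removes all entries (Go 1.21+); no unsafe needed
	}
}

func main() {
	scratch := map[string]interface{}{"a": 1, "b": 2}
	clearMap(scratch)
	fmt.Println(len(scratch)) // 0, ready for reuse
}
```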
E
I think we would want to look at what the uses were. I mean, we use unsafe casts in our conversion between internal and external types, which we have unit tests ensuring are identical. If they were using unsafe for something that we could verify was actually safe, that would be one consideration. But a lot of the uses I've seen are things like unsafe pointer linkages to particular versions of internal Go functions for memory management; that's just nonsense, we can't do that.
E
So,
oh
I
for
the
schema
question.
I've
always
been
extremely
skeptical
about
a
storage,
serialization
approach
for
customer
resources.
That
was,
that
used
a
schema
based
on
just
how
the
whole
world
writes
series
and
evolves
them.
So
I've
I've
always
been
in
favor
of
something
like
Seaboard,
where
it's
self-describing
and
it's
just
a
serialization
layer.
So
if
these
benchmarks
hold-
and
we
can
like
look
at
the
edge
cases-
and
you
know
make
sure
that
the
correctness
is
where
it
needs
to
be
I
like
the
idea
of
self-describing
C
board
implementation.
H
I
do
as
well.
I
went
through
the
extra
mental
exercise
once
I'm
trying
to
think
of
like
what
you
would
have
to
do
with
crds.
If
you
wanted
to
use
schemas-
and
my
conclusion
is,
you
would
basically
have
to
write
every
schema
that
was
ever
used
to
write
anything
into
storage,
probably
with
the
content,
addressable
hash
and
keep
them
all
forever
and
have
no
way
to
garbage
collect
them.
It's
it's
kind
of
scary.
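A purely illustrative sketch of that thought experiment: every writer schema gets stored under a content-addressable key that each encoded object would then have to carry, and nothing can safely be deleted:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hash -> schema bytes; in the thought experiment this grows forever,
// since no object can be decoded without its writer schema.
var schemaStore = map[string][]byte{}

func storeSchema(schema []byte) string {
	sum := sha256.Sum256(schema)
	key := fmt.Sprintf("%x", sum)
	schemaStore[key] = schema // no safe point at which this can be GC'd
	return key                // every encoded object must reference this key
}

func main() {
	key := storeSchema([]byte(`{"type":"record","name":"Widget","fields":[]}`))
	fmt.Println("writer schema id:", key)
}
```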
J
Maybe the answer is obvious; I didn't go into the details, but when we are talking about binary encoding: what happens if the CRs are changing over time, evolving with backward and forward compatibility? Things like data conversions might be more difficult. We are saying that we encode more than we decode, but in terms of debugging we need to decode, and debugging might be more complex. So I just want to know, in general, the whys behind this binary encoding.
I
So
I
think
the
the
point
about
schema
evolution
of
crds
is
is
really
critical,
because
if
we
use
a
self-describing
and
going
like
seabor
or
similar
encoding,
basically
all
of
all
of
the
information
that
we're
currently
encoding
today
as
Jason
is
also
encoded
just
in
a
more
efficient
form
for
this
whole
class
of
encodings
I
think
the
risks-
and
you
know
potential
complexity,
come
when
we
talk
about
adopting
a
schema-based
encoding,
because
in
those
cases,
if
you
encode
something
with
the
current
schema
and
then
you
lose
svma
or
you,
you
aren't
able
to
associate
it
with
that
object.
I
It's a good question. At least with self-describing encodings, we can always translate them back to JSON. If we're talking about client requests, then we can always serve JSON via content negotiation. But if we're looking at stored encoded objects, I think we would have to decode them back into JSON for debugging purposes.
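A sketch of the client side of that content negotiation; the "application/cbor" media type is an assumption for illustration here, not an agreed API convention:

```go
package main

import (
	"fmt"
	"net/http"
)

// newListRequest prefers a CBOR response but falls back to JSON if the
// server doesn't support it, via standard Accept-header negotiation.
func newListRequest(server string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet,
		server+"/apis/example.com/v1/widgets", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "application/cbor, application/json;q=0.5")
	return req, nil
}

func main() {
	req, _ := newListRequest("https://kubernetes.default.svc")
	fmt.Println(req.Header.Get("Accept"))
}
```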
C
But the benefit here, when you look at the benchmark, is that it looks like it takes 1/6 the amount of time to encode to CBOR as it does to encode to JSON. So once you have that struct in memory in the kube-apiserver, you can encode it six times faster. Six times.
H
That is very directly correlated to actual system performance. So, sorry, Jordan, go ahead.
E
I remember an issue that I think Daniel found last year where a buffer pool was sort of gradually growing the shared buffers to the largest size that ever got written. So if most objects are like a couple of kilobytes, but occasionally you need to serialize something...
E
...that's megabytes, like a full cluster-wide list of all instances, then gradually the shared buffers grow to that max size, they get put back into the shared pool, and then they keep getting reused to serialize two-kilobyte things. That's something to keep an eye on. If we have control over the buffer pool, we might want to partition it by size, or do it in a way where we throw away things over a certain threshold, just so we don't grow the shared pool. Anyway, that's an implementation detail.
I
Yeah, it's a good point. In this case the encoder itself isn't managing a buffer pool; the benchmark is just providing one to the encode call.
H
I,
don't
know
Ben,
do
you
want
to
open
an
issue
for
that
or
something
we
could
dog
pile
on
it
I
don't
know
what
the
best
way
is.
Yes,.
I
Yeah, just as a last chance: if anyone wants to strongly argue that we need to see how fast we can make a schema-based encoding, I would like to hear that before we start digging into a self-describing encoding.
C
I
think
I
already
asked
you
to
do
that
and
I
think
you
gave
it
your
best
shot
and
I
think
the
results
were
underwhelming,
not
a
reflection
on
your
work.
Just
I
am
convinced
that
we
don't
need
to.
I
Okay,
as
am
I
like
I
I'm,
pretty
sure
that,
yes,
with
optimization
effort,
probably
the
average,
could
come
close
or
similar
to
what
we're
seeing
on
seaboard
and
the
Dakota
is
better
than
Seaboard.
But
that
decision
comes
with
all
of
the
baggage
of
managing
schemas,
which
I
I
don't
know.
If
we
can
make
it
better
enough
to
justify
that
cost.
I
If anyone thinks of anything else, please just reach out to me in any form and let me know. Thank you, everyone.
A
We'll see you next time, and I hope everybody has a good rest of your day and a good week. Thank you for joining.