Description
* Bringing FlatBuffers Zero-Copy Serialization to Rust, by Robert Winslow
* Fearless Low-Latency Music Synthesis, by Raph Levien
A: Just one announcement: the Rust 2018 edition is being released soon and you should help test it. It's got all these goodies, like better paths and eventually async/await, and Clippy is also better in the edition. Yes, that's happening. Non-lexical lifetimes are happening. Finally, after years of saying "it's coming," they're happening for real this time. I promise.
A: Right, if you have any talk ideas for future meetups, you can email me at this. We have meetups here in the South Bay; we try to alternate, but yeah. If you email me, I can get you connected to whichever meetup you want. Today we have two talks, which are in the wrong order here, but here we are.
C: Here's the agenda. I'm going to do a little introduction, set the stage with a little bit of talk and opinions about serialization, explain why flatbuffers exists (because there are a whole lot of other choices), how you can use it, how it works, how we test it, and then what there is still to do.

My name is Robert Winslow. I'm an artificial intelligence product consultant, which is kind of a new thing, but it's very exciting. I've been volunteering on the flatbuffers project since 2013, and before the Rust port I wrote the Go and the Python ports as well, so I've been involved a long time. Just as a sidebar, the creator of the project is actually here tonight: Wouter is back there in the corner. Everybody wave to him.
So what is flatbuffers? Flatbuffers is a high-performance serialization format. The key idea is that no heap allocations are needed anywhere, not for reading and not for writing. It's schema-versioned, and it supports a lot of languages, with more in the pipeline. Right now we have C, C#, C++, Go, Java, JavaScript, Lua, PHP, Python, Rust, and TypeScript. And this is the pull request I'm going to talk about tonight. It was on GitHub, and we merged it on September 12th, excuse me. You can see that there was kind of a big description; it got a lot of likes and party hats and stuff. So it was a very exciting day for me.

This is the only place where I show the project GitHub page, so I want to note that we crossed 10,000 stars as this pull request was being submitted. That's kind of a big milestone for our project. And on my part, this Rust port was a volunteer effort. It took six months, and just for clarity, I don't work for Google.
Let's talk about serialization! This is short: two sections. I'll give a definition and then some examples. A serialization format describes how to write and read data, and that's it. There are many choices to make here. You can imagine any number of options for how you write data: save it in memory, save it on disk, send it over the network, read it from the network, all these different options. So for concision, I'm just going to focus on three popular formats and compare them.

First, tried and true, we love it, we hate it: JSON. It's a plain-text format, so we can read it; it's human-readable out of the box. But there's no schema evolution, which means there's no standard format for describing, in a structured way, how the data changes: take away fields, add fields. (There is something called JSON Schema; I'm not talking about that.) The third point about JSON is that it requires parsing, and when we parse, we don't necessarily know ahead of time how much memory we're going to need.
The second one I'll talk about is protocol buffers, which has had a huge impact on the serialization landscape, as I'm sure most of you know. It's a binary format, so it's not human-readable. You can evolve it with the schema, and that's a useful property. It does still require parsing, though, and it requires heap allocations to read the data. Unlike JSON, it's binary-safe, so you can store arbitrary bytes inside: files, whatever you want. And then choice 3 is flatbuffers, which is also a binary format, like protocol buffers.

So why do we do this? I'll introduce some ideas. Mechanical sympathy is the first one. I'll give an example of how we use integers in these different formats, why you would want to use flatbuffers, and probably why you would not want to use flatbuffers. First, mechanical sympathy. I'll start off with this quote from the Formula One driver Jackie Stewart.
I make this strong claim that slow software is our fault, because the machines are extremely fast. The real speed limits are billions of CPU instructions per second, gigabytes of RAM accesses per second, hundreds of thousands of SSD I/Os per second. So if we aren't achieving that, that's our problem; it's not the hardware. Hardware is great, and I think Rust, philosophically, is a big way that we can get there in the future, if we're not there already. This is what CPUs want; this is the machine that we want to have sympathy for.

Could we have a format like protocol buffers, one that's binary, statically typed, multi-language, and schema-versioned just like protocol buffers, but with the additional feature of being CPU-friendly, of being mechanically sympathetic to CPUs? Unsurprisingly, because I'm here tonight, the answer is yes, and flatbuffers is that format. Is everybody with me so far on the justifications for this? Great.
So here I'll compare the three serialization formats again, JSON, protobuf, and flatbuffers, and give a concrete example of the requirements for heap allocations in the first two. In the left column you can see JSON. I described it below as being an ASCII format, and there's an example, 1234: a number stored as ASCII. The way you get that number into a register is that your CPU has to parse it into some variable. Okay: protobuf. It uses this thing called a varint, which is a way to save space when you have small integers. If you have a u32 but only the number one is in it, you only use one byte to store it. That's a space savings, but the flip side is that the decoding step is data-dependent: you have to do a lot of branching, and you might have to make allocations to parse it successfully.
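To make the branching concrete, here is a dependency-free sketch of varint decoding (illustrative, not the actual protobuf library code): each byte carries 7 bits of payload plus a continuation bit, so the loop branches on every byte and the length is only known as you go.

```rust
// Hypothetical sketch of protobuf-style varint decoding: each byte
// holds 7 payload bits, and the high bit says whether another byte
// follows, so decoding is data-dependent.
fn decode_varint(buf: &[u8]) -> Option<(u64, usize)> {
    let mut value: u64 = 0;
    for (i, &byte) in buf.iter().enumerate().take(10) {
        value |= ((byte & 0x7f) as u64) << (7 * i);
        if byte & 0x80 == 0 {
            return Some((value, i + 1)); // (decoded value, bytes consumed)
        }
    }
    None // ran out of bytes, or varint too long
}

fn main() {
    // 1 fits in a single byte; 300 needs two bytes (0xac, 0x02).
    assert_eq!(decode_varint(&[0x01]), Some((1, 1)));
    assert_eq!(decode_varint(&[0xac, 0x02]), Some((300, 2)));
    println!("ok");
}
```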
So you get space savings, but you have heap allocations, and there's still a parsing step; it's a binary format, though, so it's a little bit different. Flatbuffers, in contrast, has little-endian integers, and they're always a fixed size. The way you access them, say once you've memory-mapped a flatbuffers object, is you just do a pointer dereference. That's it! It is in the wire format exactly how it should be in the CPU.
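As a sketch of what that access pattern looks like (plain Rust, not the flatbuffers API itself): reading a fixed-width little-endian integer out of a buffer is a bounds check plus a byte-order-aware load, which on little-endian CPUs compiles down to a plain load.

```rust
use std::convert::TryInto;

// Minimal sketch: a fixed-width little-endian integer at a known
// location is read with one cheap conversion; no parsing, no branching
// on the data itself.
fn read_u32_le(buf: &[u8], loc: usize) -> u32 {
    let bytes: [u8; 4] = buf[loc..loc + 4].try_into().expect("4 bytes");
    u32::from_le_bytes(bytes)
}

fn main() {
    // 1234 (0x04D2) stored little-endian at offset 2 of a larger buffer.
    let buf = [0xff, 0xff, 0xd2, 0x04, 0x00, 0x00, 0xff];
    assert_eq!(read_u32_le(&buf, 2), 1234);
    println!("ok");
}
```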
Say you append records to a log and then need to read them in quick succession to update some index or some on-disk data structure: the fewer heap allocations and the less churn you have, the better. Rendering in UIs is another, and I'll bring up an example in a second. Video game save files were actually one of the initial reasons to create flatbuffers, because it came from a games lab at Google. And finally, it may be a good value format for key-value stores: you can store these in a big index, pull them out, and have the structure be inside the values.

So here's one example: Facebook uses it in the Android app. You've probably heard of Facebook; they have a popular product, a lot of people use it. They have over 1 billion users of the Android app, and when they moved from JSON to flatbuffers they decreased load time from 35 milliseconds to 4 milliseconds.
They reduced transient memory allocations by 75%, and they completely eliminated that really annoying stutter when you scroll. Before, you would scroll quickly and it would maybe not quite catch up, or it would feel jagged; now it's silky smooth, so I've heard. Here's their blog post about it, and this was over 3 years ago, so you can see the library had some success even before Rust came along, which is shocking, I'm sure.

Another use case: there's a successor in the works to Node.js. It has a different philosophy in a lot of ways. Internally they still use V8, but they also use a lot of Rust, and they were able to use Rust flatbuffers, which had just come out. They're actually using our alpha version, which was scary for all of us involved, but they were able to rip out a lot of their C++ code and convert it entirely to Rust as a result of using flatbuffers in this way. They also use flatbuffers in their TypeScript components, pretty pervasively, all over the place.
It's not perfect, and you shouldn't use it all the time. A couple of anti-use-cases come to mind. If you're storing things like trees, or tries as some people call them, or hash tables, you probably don't want to use flatbuffers for that. You probably could, but it'd be awkward. Sparse arrays, like we see in machine learning, very large arrays where most of the entries are zeros and you compress them that way: not a really good fit for flatbuffers. And because flatbuffers has a format restriction of two gigabytes per payload, if you have huge files like videos, it's not going to be a good fit either.

All right, how to use it. As an aside, I like distinguishing between how to use it and how it works; I've noticed that developer marketing material will conflate the two, like "install the thing, and that's how it works." No, that's not how it works. So I'll talk about how to use it, and then I'll talk about how it works, so you get both sides of it.
First I'll show you how to install the flatc program, write a schema, generate code, add it to your Cargo file, and then do a hello world. We have flatbuffers in a couple of different package managers; this just shows that you can brew install it and it works just fine. The second thing is to define a schema, and I'll walk through one. I apologize if not everybody can see it, but I want to show off some of the different features we support, especially for those of you comparing to protocol buffers.

Starting at the top left, we can declare a namespace, and we support nested namespaces. We have enum types, we have unions, and we have structs, which are packed data. Structs are actually not schema-versioned, but the trade-off is that they're very efficient, so you could put them in a big array.
Going to the other side, we have this thing called a table, which here we call a Monster; that's just a legacy from the game beginnings of the project. The table is the primary type declaration in a flatbuffers schema file, and every field there can take a number of different parameters. Going from the top, we have something called pos, for position, which references the struct we created on the left-hand side: a three-dimensional vector for where you could be in space.

The next one is a 16-bit variable, which we call a short, and it has a default value of 150. Default values are really important, because if you know ahead of time about the default values in your data, you can save a lot of space. Going down, we have strings, we have default values for bools, we can deprecate any field, we have a vector of bytes, which is useful as a generic type, enum default values, and so on.
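As a rough sketch, a schema with the features described above might look like this (field and type names are illustrative, not the exact file from the slides):

```
// Illustrative flatbuffers schema, not the slide's exact contents.
namespace MyGame.Sample;

enum Color : byte { Red = 0, Green, Blue = 2 }

struct Vec3 {        // packed data: not schema-versioned, but efficient
  x: float;
  y: float;
  z: float;
}

table Monster {
  pos: Vec3;                   // references the struct above
  mana: short = 150;           // a short with a default value
  name: string;
  friendly: bool = false (deprecated);  // any field can be deprecated
  inventory: [ubyte];          // vector of bytes, a generic payload
  color: Color = Blue;         // enum default value
}
```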
C
Okay,
so
we
can
use
this
file
with
the
flat
c
compiler
and
the
rust
flag
to
generate
generated
code.
And
what
comes
out
is
a
pretty
big
file
relative
to
the
amount
of
data
we
put
in
the
schema.
So
this
shows
that
the
generate
file
is
about
500
lines,
but
it
contains
everything
we
need
to
read
and
write
data
in
many
different
scenarios
to
use
it
in
your
code,
after
you've
generated
the
code
from
the
compiler.
You
should
add
this
to
your
cargo
Donal
we're
currently
at
version
zero.
C
And
that's
because
in
this
rust
code
the
arcs
truck
takes
default
values.
That
was
an
important
economic
development
that
happened
pretty
late
in
the
port
on
line
10.
We
finish
it,
which
means
we
just
sort
of
wrap
up
the
bookkeeping
for
the
buffer
and
then
on
line
12.
We
extract
the
bytes.
So
now
this
is
just
a
you,
a
slice
line
14
we
initialize
it.
So
we
read
it
as
if
it
came
in
off
the
disk
or
off
the
network
and
then
finally,
on
line
15,
we
can
call
it
as
a
rust
object.
So this is the more interesting part to me as the implementer: now we're going to get into how it works, and we're probably two-thirds of the way through the talk, maybe halfway. I want to focus on three traits that really helped us write good code for this: the Follow trait, the Push trait, and the safe slice access trait. Follow and Push are two sides of the same coin: Follow is for reading, Push is for writing, and safe slice access is an optimization.

The Follow trait works because flatbuffers are tree-shaped, and we have this thing called offsets, which are like pointers. They're always 32 bits wide, and we interpret them the same way: an offset tells us where to go inside a buffer as we traverse the data. What's interesting about the Follow trait is that it allows us to interpret these offsets as type-system entities; conceptually, the Follow trait lets us lift these effectively pointer-dereferencing actions into the type system.
It's called follow, and it takes a buffer, which could be very wide, and a location in that buffer; at that location in the buffer is our offset, or pointer. What it returns is the associated type Self::Inner. And because what it returns is an associated type, that type could itself have an associated type, which could also be Follow, and so on. What we get, and I'll show you in a second, is a declarative chain of pointer dereferences.
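A condensed, dependency-free sketch of that shape (simplified from the real crate's definitions): a scalar follows itself directly, while an offset type follows a u32 and then lets its target type follow itself, so chains of dereferences are expressed as nested types.

```rust
use std::marker::PhantomData;

// Simplified sketch of the Follow idea: a type knows how to "follow"
// itself out of a buffer at a location.
trait Follow<'a> {
    type Inner;
    fn follow(buf: &'a [u8], loc: usize) -> Self::Inner;
}

// A scalar follows by reading fixed-width little-endian bytes.
impl<'a> Follow<'a> for i16 {
    type Inner = i16;
    fn follow(buf: &'a [u8], loc: usize) -> i16 {
        i16::from_le_bytes([buf[loc], buf[loc + 1]])
    }
}

// An "offset to T" follows by dereferencing a u32 offset, then letting
// T follow itself at the target: a declarative chain of dereferences.
struct ForwardsUOffset<T>(PhantomData<T>);

impl<'a, T: Follow<'a>> Follow<'a> for ForwardsUOffset<T> {
    type Inner = T::Inner;
    fn follow(buf: &'a [u8], loc: usize) -> T::Inner {
        let off = u32::from_le_bytes([
            buf[loc], buf[loc + 1], buf[loc + 2], buf[loc + 3],
        ]) as usize;
        T::follow(buf, loc + off)
    }
}

fn main() {
    // An i16 value 77 at offset 8, reached via a u32 offset of 4
    // stored at offset 4.
    let mut buf = vec![0u8; 10];
    buf[4..8].copy_from_slice(&4u32.to_le_bytes());
    buf[8..10].copy_from_slice(&77i16.to_le_bytes());
    assert_eq!(<ForwardsUOffset<i16>>::follow(&buf, 4), 77);
    println!("ok");
}
```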
So this is a pretty hairy function definition, but the signature is as long as the function, so that's kind of nice. The thing I want to point out is on line two: this get function, which is used pervasively in the code. It's in the runtime library and it's called in hot loops all over the place. It has a trait bound on Follow, which means T needs to be Follow, so T needs to be dereferenceable; but this function doesn't know how to dereference it.

So here is how it looks in the generated code. It's a simple example, and we used HP earlier, so I'll show this again. Here we have a function marked inline, which we call hp, for health points. It returns an i16, and here we see self.tab.get, where get was the function we just had on the other slide; but now we tell it what the type T is that should have Follow on it.
C
Well
type
T
is
I
16
and
we
go
find
it
in
the
flat
buffer
at
offset
monster,
V
T
HP,
which
is
a
generated
constant,
and
this
is
part
of
the
interplay
of
the
generated
code
and
the
runtime
code,
and
there
was
a
lot
of
architectural
decisions
that
had
to
be
made
to
make
that
same
sum.
100
is
the
default
value
and
then
unwrapped
is
there
because
we
always
know
it
will
exist.
So
in
this
example,
I
16
has
follow
implemented.
C
So
what
that
means
is
that
at
V,
T
HP
offset
there
is
code
to
pull
out
an
eye
16
at
that
lock,
LOC
and
the
buffer,
but
here's
a
much
more
fun
example.
So
this
is
test
array
of
string,
and
this
is
a
different
example
in
our
test
suite.
So
here
we're
talking
about
a
vector
where
each
element
is
a
an
offset
into
a
variable
length
string.
So it's actually a pretty complicated data structure to serialize, but we encode it in the type system. I'm not going to walk through it manually here, but every step I mentioned, the vector that could be variable-length and, inside it, the offsets to things that could themselves be variable-length, is all in the type system. That has made programming this much more fun, and it also feels like the compiler has my back in a way that maybe we wouldn't have in the C++ version, so I find that to be a significant innovation in this port. Any questions so far on all that? Okay, cool, all right.

The second trait is the Push trait. This is the write dual of the Follow trait, and it's a little more complicated, but the important thing is on line three.
We have a function called push, and it is the function through which a type that implements the trait knows how to write itself into a buffer. Then there are other things about sizing and alignment, so this is a way for a type to export everything that is needed to write itself to a buffer.

The key thing is on line eight, where we do x.push, and that calls the implementation of push that x has. This is a way to keep the write-path code very small: the generated code contains implementations of this for all the types that need it. So here we have a type that is like the XYZ coordinates we had earlier, called Vec3, and you can see it's implementing the trait flatbuffers::Push for Vec3.
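A dependency-free sketch of that write dual (simplified; the real trait also deals with alignment): each type exports its size and how to write itself, and the builder only needs the trait.

```rust
// Simplified sketch of the Push idea: a type exports everything needed
// to write itself into a byte buffer, always as little-endian.
trait Push {
    fn size(&self) -> usize;
    fn push(&self, dst: &mut [u8]);
}

impl Push for u16 {
    fn size(&self) -> usize { 2 }
    fn push(&self, dst: &mut [u8]) {
        dst[..2].copy_from_slice(&self.to_le_bytes());
    }
}

// A packed struct like the schema's Vec3 pushes its fields in order.
struct Vec3 { x: f32, y: f32, z: f32 }

impl Push for Vec3 {
    fn size(&self) -> usize { 12 }
    fn push(&self, dst: &mut [u8]) {
        dst[0..4].copy_from_slice(&self.x.to_le_bytes());
        dst[4..8].copy_from_slice(&self.y.to_le_bytes());
        dst[8..12].copy_from_slice(&self.z.to_le_bytes());
    }
}

// The builder side: reserve space, then let the value write itself.
fn push_into(buf: &mut Vec<u8>, x: &impl Push) {
    let start = buf.len();
    buf.resize(start + x.size(), 0);
    x.push(&mut buf[start..]);
}

fn main() {
    let mut buf = Vec::new();
    push_into(&mut buf, &1u16);
    push_into(&mut buf, &Vec3 { x: 1.0, y: 2.0, z: 3.0 });
    assert_eq!(buf.len(), 14);
    assert_eq!(&buf[0..2], &[1, 0]); // 1 as little-endian u16
    println!("ok");
}
```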
As an aside, we know we can do that safely because in flatbuffers most data is little-endian, always; even on big-endian machines we store it in RAM as little-endian, which means we can memcpy it as little-endian onto the wire when we need to.

The last trait I'll talk about is safe slice access. This is just a marker trait: it indicates that the host machine is little-endian, and it lifts endianness into the type system. I hope that's a theme you can see; I'm trying to lift everything into the type system. Maybe I went a little overboard, but I think it was a good idea. When this type-checks, it means it's safe to use memcpy to copy data from, say, a vector of bytes or structs or whatever, directly into the byte buffer of the builder, the write path. Essentially, it makes the write path much more efficient when it can be, by using memcpy everywhere it can.
C
The
alternative
on
big
endian
machines,
that'll,
use
different
operations
and
might
write
byte
by
byte,
for
example,
ok
testing.
So
we
test
in
five
and
a
bunch
of
different
five
plus
a
bunch
of
different
other
ways.
So
we
test
for
C++
and
Java
compatibility.
We
test
that
the
code
is
important.
We
do
a
lot
of
fuzzing,
round-trips
we'd
count
heap
allocations,
we
check
bytes
on
the
wire
and
then
to
miscellaneous
ones,
testing,
C++
and
Java,
so
we're
Russ's
the
twelfth
language
to
be
ported
to
flatbuffers.
Second thing: testing imports. Generating usable code can be tricky. I mentioned that there's kind of a big blow-up factor when we generate code compared to the size of the schema, and getting all of that right, with namespacing and modules, public and private types, and lifetime constraints, can be error-prone. So to absolutely make sure that it's working as we need it to, we take a black-box approach to testing: our test suite imports the entire generated test file, which is thousands of lines long.
You can't really test all cases, because the users are providing the input, so we use an approximation of all input called QuickCheck, which tries to create exemplary test cases that exercise a wide variety of edge conditions. When there's an error, QuickCheck will home in and find more or less the minimum violating example, so that it's easy to debug. We use that all over the place in the test suite.
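QuickCheck itself is a crate; as a dependency-free sketch of the generate-and-assert half of that idea (real QuickCheck also shrinks failures), here is a property test over a toy encode/decode pair:

```rust
// Toy round-trip property: decode(encode(v)) == v for every v we try.
fn encode(v: u32) -> [u8; 4] { v.to_le_bytes() }
fn decode(b: [u8; 4]) -> u32 { u32::from_le_bytes(b) }

fn holds(v: u32) -> bool { decode(encode(v)) == v }

fn main() {
    // A tiny xorshift generator stands in for QuickCheck's input
    // generation (the constants are arbitrary).
    let mut state: u32 = 0x9e37_79b9;
    for _ in 0..1000 {
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        assert!(holds(state), "property failed for {}", state);
    }
    // Plus the edge conditions property testers tend to find first.
    for &v in [0u32, 1, u32::MAX].iter() {
        assert!(holds(v));
    }
    println!("ok");
}
```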
Third thing: counting heap allocations. In theory the flatbuffers format doesn't require any heap allocations, but do we actually achieve that? Maybe we have a Vec hidden somewhere, something like that. As of Rust 1.28.0 we can actually use a custom global allocator, so in a separate program we verify that we aren't making any heap allocations on hot paths. This is the critical section: on line one we're creating a static global variable, call it n_allocs, and it starts off at zero. We just follow the documentation for creating a basic allocator here.
Line 2 is a TrackingAllocator, which is our name for it. We implement one function on it, which returns the number of allocations it has seen so far in the program, and then on lines 10 through 18 we implement pass-through functions for allocation and deallocation, with the caveat that on the allocation path we increment that global n_allocs variable. Then at test time we pull out the n_allocs value and assert that it has stayed the same as it was before the test executed. That's how seriously we take performance in the flatbuffers project: we try to use features as soon as they're put into stable, to make sure we're as fast as we possibly can be. I'll talk more about speed, and maybe the potential for it, later.
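A minimal version of that tracking-allocator technique, following the standard library's GlobalAlloc documentation (the names n_allocs and TrackingAllocator come from the talk's description, not necessarily the exact source):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Global counter of allocations seen so far.
static N_ALLOCS: AtomicUsize = AtomicUsize::new(0);

struct TrackingAllocator;

unsafe impl GlobalAlloc for TrackingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Pass through to the system allocator, but count every call.
        N_ALLOCS.fetch_add(1, Ordering::SeqCst);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: TrackingAllocator = TrackingAllocator;

fn main() {
    let mut scratch: Vec<u8> = Vec::with_capacity(1024); // allocate once
    let before = N_ALLOCS.load(Ordering::SeqCst);
    for i in 0u8..100 {
        scratch.clear();
        scratch.push(i); // reuses capacity: no new heap allocation
    }
    let after = N_ALLOCS.load(Ordering::SeqCst);
    assert_eq!(before, after, "hot path allocated!");
    println!("ok");
}
```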
This is about being able to sleep well at night, knowing you don't have bugs in your code. These tests are also really good for new contributors, because you can see concretely what the byte layout is in the wire format, and kind of bridge the gap between the schema, the generated code, the runtime library, and then, finally, what is on the wire, what's on disk, what's in RAM. So here's an example. The idea is that it's a test function that stands alone, and there is one utility function called check. Lines 3 through 7 create data: they set up some situation, as all these different test cases do, write some data, and then we're done writing. On lines 8 through 16 we call this check utility function, which verifies that the data written to the byte buffer is exactly what we expect; in the comments there are often descriptions of what the numbers mean.
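A toy version of that check style of test, with a trivial writer standing in for the flatbuffers builder (the real tests check full buffers, not two integers):

```rust
// Write a little data, then assert the serialized buffer matches the
// expected wire bytes exactly, with comments saying what each byte means.
fn write_u16_le(buf: &mut Vec<u8>, v: u16) {
    buf.extend_from_slice(&v.to_le_bytes());
}

fn check(got: &[u8], want: &[u8]) {
    assert_eq!(got, want, "wire bytes differ");
}

fn main() {
    let mut buf = Vec::new();
    write_u16_le(&mut buf, 150); // a default-valued short from the schema
    write_u16_le(&mut buf, 1);
    check(
        &buf,
        &[
            150, 0, // 150 as little-endian u16
            1, 0,   // 1 as little-endian u16
        ],
    );
    println!("ok");
}
```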
Miscellaneous testing: we have 191 tests in the test suite, and growing. We test data alignment and sizing. We check various borrow-checker situations; for example, we had an issue recently where flatbuffers used with data that had static lifetimes wasn't working, so we fixed that and now we check it in the test suite. We have many, many tests for the different major types in flatbuffers, such as tables, structs, vectors, unions, and enum scalars, and also many tests for the traits I mentioned earlier, including Follow and Push. Finally: future work.
But in this case we can justify it if we can implement a verifier that uses an already-known algorithm to double-check that the offsets don't go out of bounds. It's a very quick process, and we know that from C++ already. Once we have that in Rust, we're free to be even more aggressive about unsafe usage, because we know it won't go out of bounds at runtime, and it also lets you use untrusted data.

The second thing is that flatbuffers supports the mutation of values: if space is already allocated in the buffer, then you can overwrite those bytes with a value that fits. The Rust port does not yet support that; we hope to soon.
The third thing is reflection. In the C++ port it's possible to evaluate metadata at runtime; we'd like to be able to do that in Rust as well. Fifth: extreme fuzzing. This one's pretty exciting to me; as you can tell, I like testing, and this is the next level. Extreme fuzzing would be a situation where we generate whole flatbuffers schemas randomly, maybe using QuickCheck; generate data that complies with a schema we haven't seen before; serialize it; deserialize it; and then check that the values we get out are the same as what we put in. That would be a complete round trip from write to read. And then there's something else.
We've made no real attempts to speed things up in a rigorous way. We're already pretty fast, but I think we can be a lot faster, so this is just the beginning. If you all are willing to dig in, we have a lot of really helpful, mentoring maintainers and contributors, and I hope you take a look at the project.
Three final thoughts. One: there are network effects with serialization formats. The more people who like it, who use it, who are happy users, the more people use it for storing data and for sending and receiving messages, and the network effect there means the format will grow in usage. That's one of the reasons it's hard to dislodge existing serialization formats: it's like the social network of data storage. So we hope to make users happy so that we can grow.
C: Well, the answer to that is that I believe the C++ port supports this; maybe Wouter can speak to it, but there's limited support for a key-value style operation, which might be what you need. You can then deserialize that how you want in memory. I'll just note that whatever hash map or hash set you're using doesn't have to map to what the wire format would be; it's not necessary.

G: Thank you. They support in-place binary search, so it's not a hash map, but it certainly works for some kind of dictionary structure. And the cool thing is, like much of the rest of flatbuffers, you don't actually have to build up a binary tree first; it works directly on the stored data. So if a binary search is fast enough for you, then this will be a great solution.
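A sketch of that in-place idea with fixed-width keys (illustrative; the real flatbuffers sorted-vector lookup is more general): keys stored sorted in a flat byte buffer can be binary-searched directly, with no tree built in memory.

```rust
// Read the i-th little-endian u32 key straight out of the buffer.
fn get_key(buf: &[u8], i: usize) -> u32 {
    u32::from_le_bytes([buf[4 * i], buf[4 * i + 1], buf[4 * i + 2], buf[4 * i + 3]])
}

// Binary search over the stored data itself: no deserialization step.
fn contains(buf: &[u8], key: u32) -> bool {
    let (mut lo, mut hi) = (0, buf.len() / 4);
    while lo < hi {
        let mid = (lo + hi) / 2;
        let k = get_key(buf, mid);
        if k == key { return true; }
        if k < key { lo = mid + 1; } else { hi = mid; }
    }
    false
}

fn main() {
    // Keys 2, 5, 9 stored sorted, little-endian, back to back.
    let mut buf = Vec::new();
    for k in [2u32, 5, 9].iter() {
        buf.extend_from_slice(&k.to_le_bytes());
    }
    assert!(contains(&buf, 5));
    assert!(!contains(&buf, 7));
    println!("ok");
}
```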
C: This comes up a lot. We like the Cap'n Proto project, and I think there are uses for both of them, in my estimation. I'm not an expert on Cap'n Proto, and I am one on flatbuffers, so I'm biased and I have a biased knowledge base, but these are some of the things I understand flatbuffers gives relative to the major things in Cap'n Proto.

E: This looks a lot more friendly to actually using flatbuffers; like, if you're writing a game, this looks a lot more friendly to actually using flatbuffers to store parts of your game state, and so making it easier to save that to disk. But if you're using Cap'n Proto, it's not so friendly for that. If you're reading from it, there's a lot of error checking to go on when you're actually taking out values, so you find yourself doing a lot of unwraps. From what I've seen, for example, the hp call here does the unwraps for you, so it seems like it would be a lot more convenient to use in that particular case.
C: I want to give a shout-out to the creator of Cap'n Proto, because my understanding is that he was crucial in creating protocol buffers, and we owe a huge intellectual legacy to that tradition. So I don't really see it as a competition; I see it as an ecosystem where we can learn from each other, not to be too diplomatic about it.

The only major user I know of is the Deno project that I mentioned earlier, so they're using it too. They have a runtime that's in Rust and C++, and also TypeScript and JavaScript, or something; it's pretty complicated. To make all that tolerably efficient, they use flatbuffers as the RPC in the very hot loops between those pieces. Besides that, I'm not familiar with many other projects, but I put that out there. Oh, and also, bigger picture: gRPC added support for flatbuffers, I think in Go, so that's an example of cross-platform usage.
There are three ways allocations can happen in the write path. The first is when the builder itself has a set of bytes that it uses as scratch space; when that hits capacity it will regrow, but then you can reuse that space over and over again, so no more heap allocations happen in steady-state operation.

The second way is that we have a utility function called create_vector_of_strings, and I mentioned earlier that a vector of strings is a bunch of nested variable-length data. Inside there we use a package called smallvec, which puts vectors of things on the stack if it can and on the heap if it can't. So that's a place where heap allocations can occur.

The third way is through you creating data: the references that the builder gets need to come from somewhere, and if you make a String, for example, that might be heap-allocated. So if you're creating things in a hot loop along with the flatbuffers code, it's not technically flatbuffers' problem, but it is your problem, so it's part of the same situation. Does that make sense? I'll also say that on the read path there really are no allocations required.
J: Thank you, this is a really awesome project, and I'm really glad you brought this to Rust. I missed part of the question up here, so this might have already been covered, but I saw that during writes you like to use memcpys whenever you possibly can; you showed an example of memcpying an entire vector at once. I'm wondering how flatbuffers accounts for padding in structs, whether it thinks about that and does anything special for it.

C: I started off without them, because there's this idea that LLVM is sufficiently smart. I'm not a compiler expert, but I know somebody who is, and that's Wouter, and one piece of wisdom he gave me is about suggesting inlines. Note that I don't say inline(always); I just say inline, so it's a suggestion. When the compiler starts agreeing to those suggestions all over the place, it can start collapsing code together.
C: So that's a really good point. The way it's handled in general in the project is that the language natively writing the data should know that it's a UTF-8 string, and so it should write the correct data; on the read side, assuming you trust the source, we can just cast it to, say, a UTF-8 string. But yeah, I think you just gave us another feature to work on for the verifier.
K: Thanks. Were you able to do some interesting things with the borrow checker? With flatbuffers you essentially have a view into memory, so you can work with the data without even copying. For example, if I want to pull a string from the flatbuffer, do I get a slice of it, do I copy the bytes, or do I get a view onto it?

C: Yeah, so I haven't added that to the port yet, so I'm not an expert on it. My understanding is that we would probably have to have quite a bit of separate code paths where we add mutating methods to allow users to change data. In other languages it's easier; in C++ you can just get a pointer and there you go, but here we need mutable and immutable references. So that's still TBD.
C: The pointer would just be pointing into the buffer that backs the flatbuffer data. You get a pointer into that, and you know that at that location there are one, two, four, or eight bytes that represent some number. On the read path, if you're on a little-endian machine, you just get that value directly, with just a pointer dereference.