From YouTube: Neon - building a Postgres storage system in Rust, Heikki Linnakangas | Rust Finland 9.5.2022
Description
Neon (https://github.com/neondatabase/neon) is a serverless open source alternative to AWS Aurora Postgres. It separates storage and compute, and replaces the PostgreSQL storage layer by redistributing data across a cluster of nodes.
This talk by Heikki Linnakangas was recorded at the Rust Finland meetup on 9.5.2022, hosted by Futurice.
Neon: https://neon.tech/
Rust Finland: https://www.meetup.com/Finland-Rust-Meetup/
Futurice: https://futurice.com/devbreakfast
So, yeah, my name is Heikki Linnakangas. I've been a Postgres hacker since about 2006. I've worked for different companies on Postgres, and I've worked in different areas of the Postgres code base. My first bigger feature was two-phase commit, around that time, 2006 or so.
I also worked a lot on B-tree indexes, other index types, storage, write-ahead logging, all around the place, but mostly storage-related. And now, since last year, we founded a company called Neon, and we decided to build a special storage system for Postgres, and it's written in Rust, and that's why I'm here.
So, what is this thing? What is Neon? First of all, it's a startup. Like I said, it's a startup we founded; it's a Silicon Valley funded startup. It's also a fully managed Postgres service; that's how we plan to make money in the future. And it's also an open source piece of software, which is a cloud-native storage system for Postgres, and that's what I'm mostly going to talk about here.
Okay, just a little bit of background: how many of you have used Postgres? Oh nice, thank you. How many of you have written something that runs inside the Postgres server, like C code or an extension or something? Oh, there you go, someone, cool. So, Postgres is very extensible.
You can write your stuff in C on top of it, but the way you typically install Postgres is that you have a server and you install Postgres on it using a Debian package or something, and it stores the data on its local disk. Or maybe it's an EBS volume, if you're on Amazon, or something else, but it's basically a file system, and Postgres manages that, and that's fine. It gets a little bit more complicated in a typical cloud setup.
So if you want to run Postgres in the cloud, which a lot of people do nowadays, your production setup might look something like this. Your system on your laptop would probably look a little simpler, but a typical production setup looks like this nowadays: you have the primary Postgres server.
So what we're doing is separating storage and compute; that's the magical key phrase we use. It basically means that you have this special storage system that holds all your data, and the Postgres server doesn't have a local disk at all. Well, it does for temporary files and stuff, but the data is stored elsewhere. The data is stored in the storage system, and that storage system also uses cloud storage behind the scenes.
This gets interesting when you consider that you can have multiple Postgres instances, you can have read-only replicas, or, you know, even if you just manage a lot of servers, it gets a lot easier if you can quickly spin up these Postgres servers and they don't have any local data. That means if you launch a server, you don't need to restore a terabyte of data to get it up and running. It's just a process that you launch; it can be a Docker container or whatever; it's quick to launch.
So one way we think about this Neon storage system is that it's like a storage area network, like a SAN on steroids. It has a lot of the same properties as a SAN, but I'm going to talk a little bit more about that. So, yeah.
The benefit of this is that it's serverless, which for us means that these Postgres instances are very quick to launch, and also very quick to just kill; there's no data on them. The data is stored in the storage system.
You can do cool stuff like point-in-time query, which means you can launch a new Postgres instance and just tell it to show all the data as it was at 5 pm yesterday, like before you accidentally dropped your table or something, and it can do that instantly. It just launches a new server process, but it doesn't have any data, so it's very fast; we can launch it in a few seconds, and so forth. So that's one benefit.
You can do database branching: if you want to take a copy of your production database and run your CI workflow through it, you can do that very quickly, without having to duplicate all of the data. And finally, we can use cloud storage, which is much cheaper than EBS volumes or something else.
So that's basically the background of what we're trying to do and why we're doing it, and now we get to the Rust parts. So, this storage system is written in Rust. I mentioned earlier that it's multi... did I mention it? I don't know, but it's multi-tenant, which means that we can run multiple Postgres servers against the same piece of storage, which is really cool. If you want to have hundreds or thousands of small servers, you can share the storage.
Just like a SAN, again, you can share it. You don't need to provision a separate EC2 instance or something like that for each one, with their own disk, and you don't need to decide their sizes and so forth. You can just run them all against the same storage system.
So about a year ago, when we were starting this project, one of the first questions was: what programming language do we use to write this thing? In the beginning there was me, another co-founder, a third guy, and a few other people who had already agreed to join the company.
Most of us were Postgres hackers, and I had worked on the Postgres code base before, so we knew C. PostgreSQL is written in C; it just recently, like a year or two ago, raised the standard to C99, so we kind of had that background. We knew the Postgres code base in and out, and one of my co-founders also knew C++, Ruby, and some other languages.
But basically, what would be the ideal language for what we're trying to do here? One of the ideas we had early on was that we would want to reuse a lot of the Postgres code, because obviously we know it, and we have some parts of the codebase that actually need to process the Postgres write-ahead log and then interoperate with Postgres, which is written in C, so C might make a lot of sense.
Kind of the other idea was to just write something from scratch. C++ was a strong contender; we considered it. C++ is probably the most common language used in database programming in general. I don't actually know what MySQL is written in, probably C++, but that's the most common one. There's a lot of database engines written in C++, so that would be a good choice.
Also, considering future hiring, it would be easy to find people who have a database background, who know C++, and so forth. But putting down the requirements for the language, one thing that stands out at the top here is security. We really wanted to have a language that's memory safe. We don't want buffer overflow bugs crashing the server, and the big reason for that is, as I mentioned earlier, that this server is multi-tenant.
We actually plan to run this cloud service and store the data from multiple tenants, multiple different paying customers, in the same storage system, and co-mingling the data from these different tenants in the same process is pretty risky if you don't have a memory-safe language. I mean, if you manage to crash the server, first of all it's an availability problem: your database is down. But even more importantly, you could easily leak the data from one tenant to another, and that would be really bad.
So pretty early on, I decided that we need to pick a language that's memory safe. I mean, there are still ways you can screw it up, but at least that eliminates a whole class of problems.
Then, of course, you have to be productive in the language. If you choose something very esoteric, it can be hard to get stuff done if there are no libraries and no ecosystem for the things we need to do, like uploading to S3 cloud storage and dealing with that. You need to be able to do those things.
We also need to be able to go pretty low, very close to the metal, for the I/O routines and networking. We also care about performance, so all of that was important, but one consideration was also the popularity of the language.
To be honest, I mean, we could go with Lisp, but, you know, I know one guy in France who would probably join the company if we did that and would help us out, but that's it. Or maybe some of you would join, but that would make hiring and recruiting really hard. And, to be honest, I don't think it would work because of that; it's just not popular enough.
Also, we, the founding team, had experience with C and a little bit of C++. Well, we didn't know Rust, but at least it's a curly-brace language we can learn, so that helps. So, if you look at these requirements, Rust ticks a lot of these boxes. It's not great on the popularity side, I mean, but it keeps growing; it's already better than it was a year ago,
I think, but, you know, it's not like C++; it is harder to find people, but it's getting better, so, yeah. Oh, I didn't mention memory management: one of the things we didn't want to choose was a language with garbage collection. They're getting better, I hear, but I'm still kind of skeptical, for the systems programming that we're doing.
Now, there were some problems, like: none of us actually knew Rust. So that's one problem, but we figured we'd learn it. One thing we did pretty early on, I think within the first weeks or a month or so, is that we actually hired someone who knew Rust pretty well, and he was really helpful in the beginning, helping us set up the directory structure, explaining to us what Cargo is and what the crate system is.
All of that. He really got us started and helped us out a lot. We had a Rust channel in our company Slack, and our people would just ask questions related to Rust, like "how do I make this thing compile?", and he would help us, so that was really, really important. Now, the other question we kind of had in our minds was: is Rust mature enough? It's still a new language.
A
It's
been
around
for
many
years
now,
but
you
know
yes
as
programming
as
a
programming
language.
It's
still
pretty
young,
so
is
it?
Is
it
mature
enough
for
what
we're
trying
to
do
so?
We
we
figured
we'd,
take
the
risk,
I
mean
it's.
We
were
pretty
confident
with
it,
it's
not
as
mature,
c
or
c,
plus
plus,
but
it's
it's
probably
enough
for
what
we're
doing.
Also, we knew this would be a long project. We're still not quite finished, but we're close to launch now, and it's already been one year; there's been one more year of development on Rust itself, and I can see that it's gotten better within the year already. So we knew it's growing, it's getting more popular, and we hoped that if there were any problems, or if we ran into any, there was a good chance that they would actually get fixed before we launch. So, yeah.
We chose Rust, and we started coding in March 2021, and for me personally, I didn't know Rust, so I had to start building the product and learning Rust at the same time. For me, this was also learning how to be a co-founder of a startup. I hadn't done that before; I've been an engineer all my life, so I didn't know how to build a team, or deal with setting up a company and stuff like that.
Fortunately, one of the co-founders knew how to do that stuff, so that was like a third learning experience for me. So, my first impressions of Rust. Probably all of you share these experiences, but the compiler is very pedantic, which means it's really strict, and it really doesn't let you off easily.
With C, I mean, it's pretty easy to get the code to compile: you'll get a bunch of warnings, but you can ignore them, and off it goes. It's pretty easy to make the compiler happy even if the code is garbage. With Rust, it's a lot harder to make the compiler happy. It really likes to complain about things, I mean, when you're beginning and you don't understand how it works. It's really strict about the types.
A
It's
really
sticky
about
the
memory
management
and
all
of
those
things
you
have
to
get
them
right
or
it
won't
compile.
So
this
this
of
course
took
a
little
bit
of
learning
in
the
beginning.
Lifetimes
was
something
I
I
saw
in
in
places,
but
I
I
just
know
I
I
like
I,
I
read
the
tutorials
two
or
three
or
four
times,
and
I
just
no,
no
not
for
me
not
yet
so
I
tried
to
get
away
without
them
for
for
quite
a
while,
and
that
actually
worked
for
quite
a
long
time.
A
Another
thing
I
didn't
do
in
the
beginning
is
templates,
or
you
know
called
generics
in
rust.
We
all
kind
of
had
that
bad
feeling
from
c
plus
they
have
a
bad
reputation
there.
So
we
kind
of
didn't
want
to
learn
them
too
well
in
the
beginning,
and
that
was
fine
too.
I
mean
it's
not
a
problem.
So after a month or two, starting to get to know the language, one thing that I found a few times, and this was a nice experience: when you actually make the code compile, it surprisingly often actually works on the first try, and you don't get that with C.
The first thing you do with C, when your code compiles, is launch the debugger and start debugging where it crashes. With Rust, you don't get that: it surprisingly often actually works, and it's kind of a strange feeling in the beginning. It ran, it did what I told it to do; did I miss something? Am I using the wrong version of the binary or something? But no, and that's a good experience.
That's a very good experience, and I've heard others say the same thing. After a few months I still didn't really understand lifetimes, and I just used a lot of clone(), hoping to push off learning them for quite a while.
Then I started to learn generics, and one thing I found them really useful for was writing unit tests: you can easily have a mock implementation of the other stuff using generics, and in general they're pretty nice, once you get used to them.
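The mock-via-generics pattern he describes might look like this minimal sketch (the trait and names here are made up for illustration, not Neon's actual code):

```rust
// A storage trait with a mock implementation, wired in through generics
// so unit tests never touch a real disk or network. All names here are
// hypothetical, chosen only to illustrate the pattern.
trait Storage {
    fn get_page(&self, page_no: u32) -> Vec<u8>;
}

struct MockStorage;

impl Storage for MockStorage {
    fn get_page(&self, page_no: u32) -> Vec<u8> {
        // Return a dummy page whose bytes encode the page number.
        vec![page_no as u8; 8]
    }
}

// Code under test is generic over the storage backend, so tests can
// substitute MockStorage for the real implementation.
fn first_byte<S: Storage>(storage: &S, page_no: u32) -> u8 {
    storage.get_page(page_no)[0]
}

fn main() {
    assert_eq!(first_byte(&MockStorage, 7), 7);
    println!("ok");
}
```

The monomorphized generic costs nothing at runtime, which is one reason this is often preferred over trait objects for test seams.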
So, one reason we chose Rust was the interoperability with C, because we knew that we would be dealing with Postgres code a lot. For example, this is something we need to do: we need to parse the Postgres write-ahead log format, and the way that's defined in Postgres is by a C struct. But it's actually more than this; this is just one of them.
You can see up there in the comments that it references a bunch of other structs that are below here, and then there's another file somewhere that also defines the page format for the write-ahead log that contains these structs. So the on-disk format is defined by the C structs.
Well, no problem: we would just use bindgen to generate corresponding Rust structs from that. Or that was the idea, anyway. That was kind of the first disappointment I had with Rust. I found that bindgen wasn't really up to the task; we ran into a number of issues.
I guess this is not what people usually try to do with bindgen, like generating these structs for on-disk formats, so we were possibly doing something wrong, but we ran into a number of issues: it wouldn't generate quite what we needed, and Rust is very strict about the padding bytes when you try to read data from disk and convert it to a struct like that.
It won't let you do that without unsafe code, basically a transmute or something, and then there are the padding bytes, which, if you don't define what happens to them, becomes undefined behavior very easily, and we tried to stay away from that. I mean, one of the reasons we chose Rust was to keep things memory safe, so we tried to be very careful with that.
This was actually one of the first contributions we made to the Rust ecosystem: we fixed these issues in bindgen, and we added the features we needed, to allow you to specify derives on the structs that are generated, to add fields for the padding that's missing there, and stuff like that. So now it's doing what we want, and that's great, but I still feel a little bit uneasy about that experience with bindgen.
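To make the problem concrete, here is a hedged sketch (the struct and field names are invented, not the actual Postgres WAL definitions): a `#[repr(C)]` header parsed from raw bytes field by field, which sidesteps both the transmute and the padding-byte issues he mentions, at the cost of writing the decoding by hand.

```rust
use std::convert::TryInto;

// Hypothetical stand-in for a WAL record header. In the C layout there
// would be 3 padding bytes after `info` to align the struct to 4 bytes.
#[repr(C)]
#[derive(Debug, PartialEq)]
struct RecordHeader {
    total_len: u32, // length of the whole record
    info: u8,       // flag bits
}

// Decode each field explicitly instead of transmuting the buffer:
// no unsafe, and the padding bytes are simply skipped.
fn parse_header(buf: &[u8]) -> RecordHeader {
    RecordHeader {
        total_len: u32::from_le_bytes(buf[0..4].try_into().unwrap()),
        info: buf[4],
    }
}

fn main() {
    let on_disk = [16u8, 0, 0, 0, 0x2a, 0, 0, 0]; // 8 bytes incl. padding
    let hdr = parse_header(&on_disk);
    assert_eq!(hdr.total_len, 16);
    assert_eq!(hdr.info, 0x2a);
    println!("{:?}", hdr);
}
```

With hundreds of structs this hand-decoding doesn't scale, which is why generating them with bindgen (plus the padding and derive fixes) was the route they took.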
So, one experience with Rust is obviously the memory management, which is very different from what I'm used to, working on Postgres code. Postgres uses a scheme called memory contexts for managing memory. Basically, everything in Postgres is allocated in memory contexts; these are called memory pools or memory arenas in other languages. In Postgres, the idea is that you have one top-level memory context, and as a child of that, you have a memory context for the current transaction.
That would have a child context for the current subtransaction, if you're using savepoints or stuff like that; you would have another one for a query within that transaction, and so forth. Every executor node might have its own memory context if it needs to allocate stuff. The idea is that you allocate in that memory context, and when it goes out of scope, like when the transaction ends, everything allocated in it is freed.
Now, this is not at all how Rust works, obviously. Rust uses these lifetimes, which I didn't want to learn in the beginning. I understand them now, but it's a different model, and I missed the Postgres memory context model. I actually looked at a few crates that do stuff like that; there's bumpalo and typed-arena.
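As a rough idea of what the memory-context pattern looks like in Rust, here is a deliberately minimal, std-only arena sketch (the real crates, bumpalo and typed-arena, are far more capable and hand out references rather than ids):

```rust
// A toy arena: allocations live as long as the arena, and dropping the
// arena frees everything it holds at once, like resetting a Postgres
// memory context at end of transaction.
struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }

    // Hand out an id rather than a reference, so the borrow checker
    // doesn't pin the whole arena while one allocation is in use.
    fn alloc(&mut self, value: T) -> usize {
        self.items.push(value);
        self.items.len() - 1
    }

    fn get(&self, id: usize) -> &T {
        &self.items[id]
    }
}

fn main() {
    let mut ctx = Arena::new(); // think: "transaction memory context"
    let a = ctx.alloc(String::from("tuple"));
    let b = ctx.alloc(String::from("index entry"));
    assert_eq!(ctx.get(a), "tuple");
    assert_eq!(ctx.get(b), "index entry");
    println!("{} allocations", ctx.items.len());
} // ctx dropped here: both allocations are freed together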
So, another problem I ran into pretty early, and that I keep running into, is the self-referential structs problem. How many of you have run into this problem? Oh yeah, a lot of hands. So, I mean, you can't do that: basically, you can't have something that the struct owns and then have references to that thing within the same struct.
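One common workaround, sketched here (this is a generic illustration, not what Neon does): store indices or byte ranges into the owned data instead of borrowed references, and re-slice on access.

```rust
use std::ops::Range;

// You can't store `&str` slices borrowed from `text` inside the same
// struct that owns `text` (a self-referential struct). Storing byte
// ranges instead keeps the struct fully owned.
struct Parsed {
    text: String,             // owned buffer
    words: Vec<Range<usize>>, // positions into `text`, not references
}

impl Parsed {
    fn new(text: String) -> Self {
        let mut words = Vec::new();
        let mut start = None;
        for (i, ch) in text.char_indices() {
            match (ch.is_whitespace(), start) {
                (false, None) => start = Some(i),
                (true, Some(s)) => {
                    words.push(s..i);
                    start = None;
                }
                _ => {}
            }
        }
        if let Some(s) = start {
            words.push(s..text.len());
        }
        Parsed { text, words }
    }

    // Re-slice on demand; the borrow is tied to `&self`, which is fine.
    fn word(&self, n: usize) -> &str {
        &self.text[self.words[n].clone()]
    }
}

fn main() {
    let p = Parsed::new("hello rust world".to_string());
    assert_eq!(p.word(1), "rust");
    println!("{} words", p.words.len());
}
```

Other routes people take are `Rc`/`Arc` for shared ownership, or crates like ouroboros and self_cell that encapsulate the unsafe parts.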
This kind of ties into one thing we haven't done: we haven't written code that runs in the Postgres backend in Rust. We don't do that. We have a few helper extensions in Neon that run on the Postgres side of things; most of the code is in the storage, but we have some that runs within the Postgres process, and we wrote all of that in C. The reason is all of these things listed here.
So, in order to use these facilities... I mean, Postgres C code is like its own dialect of C. We have our own stuff for memory management; I already talked about that, remember, the memory contexts. You almost never see memory leaks there, thanks to that. One thing we do is handle the out-of-memory error, and that's something that Rust just doesn't do: you panic if you run out of memory. Then there's the error handling in Postgres code.
If an error happens, there are functions to throw an error; they use longjmp behind the scenes, and there are a lot of macros. It tells you where the error happened, how you got there, what the context was, what the query was, and so forth, and it prints that information out to the log, sends it to the client, and finally rolls back the transaction and releases all the resources: if you have files open, buffer pins, whatever, it closes all of that, yada yada.
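For contrast, Rust's nearest native analogue to that longjmp-based recovery is panic unwinding, with cleanup expressed through `Drop` impls. This sketch (not Neon code; the names are invented) shows the shape, and also hints at why bridging the two models at an FFI boundary is awkward:

```rust
use std::panic;

// A resource that must be released when a "transaction" aborts, like a
// Postgres buffer pin. Cleanup lives in Drop, not in a central handler.
struct BufferPin(u32);

impl Drop for BufferPin {
    fn drop(&mut self) {
        // Runs during unwinding, analogous to Postgres releasing
        // resources when a transaction aborts.
        println!("released pin on buffer {}", self.0);
    }
}

fn run_query() {
    let _pin = BufferPin(42);
    panic!("simulated elog(ERROR)"); // stand-in for a thrown error
}

fn main() {
    // catch_unwind recovers control, roughly like Postgres's
    // sigsetjmp-based error recovery point.
    let result = panic::catch_unwind(run_query);
    assert!(result.is_err()); // the "query" failed, but we recovered
    println!("recovered");
}
```

The catch: longjmp from C skips Rust's Drop cleanup entirely, and unwinding across an FFI boundary is undefined behavior, which is part of why they kept the in-backend code in C.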
That would be really hard to wrap in Rust code; I mean, I don't know what that would look like. Same with transactions and stuff, so we just basically don't do this. There are projects, I've seen like three or four different ones, that try to let you write Postgres extensions in Rust, and they introduce wrappers around these facilities, but it's not great.
So I'm really curious how this is going to work for the Linux kernel. As you know, there's work in progress on getting Rust used in the Linux kernel, for drivers in the beginning, I think, or something like that, and I'm really curious how they're going to solve these issues, because I would imagine that they have exactly the same problems: the Linux kernel has a lot of its own facilities and infrastructure. So, yeah, I wonder how that's going to work for them.
So, another lesson we learned the hard way: async and synchronous code don't mix very well in Rust. I mean, you have to know what you're doing, at least, and we didn't when we started. So in the Neon code base we have a lot of low-level code: there's a page cache, there's a lot of locking, atomics, all that kind of low-level stuff. It's all written in the blocking style, like what a traditional database engine looks like.
Basically, then, we also had a bunch of code to deal with the networking: accept incoming connections, read the requests, run your code, and send the response back. That was written in async style, and we had some other async code here and there as well.
I think we didn't pay much attention to it; it basically depended on the programmer which style was chosen, and we didn't see the risk of this early on. We started to have deadlocks, and it took a long time to figure out what was happening, but it turns out what was happening is this: a request comes in from the client, and in order to process that request, we'd have to wait for some transaction log to arrive through another connection. What would happen is that all of the threads in the Tokio thread pool were busy waiting on these other connections for the transaction log to arrive, and therefore there were no threads available to process that log when it actually did arrive.
It was just horrible, so we had long discussions on this: how do we fix this? What do we do? And I found that there was clearly a divide between the programmers in our team.
At this point we had two or three or four people who actually had Rust experience, people we had hired, and then we had the team of former Postgres hackers and C people. It was pretty clear that the people who had prior Rust experience were happy to use async and used it naturally, and that was fine, and the rest of us, me included, were much more familiar with the threaded model. I guess that's how Postgres works; that's how most C and C++ programs work.
So now we had a bit of a divide in the team over this, and we had long discussions on conference calls: okay, so what do we do? The two extremes, or two models, we could go with were: one, okay, no async, let's forget about that, that's a bad idea. If we just have one thread for each incoming connection, and that thread handles that connection, then things will work; that will avoid the deadlock.
Kind of the other direction was: okay, the problem here is that we're trying to mix sync and async code, so let's fix that by not blocking anywhere. Everywhere we would have to block, or do blocking I/O or whatever, we'd have to replace the synchronous calls with async calls and be very careful not to introduce any blocking anywhere.
Well, we found a middle ground in the end. We mainly switched to the threaded model, so we now have one thread for each incoming connection, but we do have some async code within that thread, and we use Tokio's so-called current-thread executor, so we don't let Tokio manage the thread pool for us.
We just always force it to use the current thread you're running on, because we already do the threading outside of that. There are some other special cases where we just had to use async. For example, we have some code that needs to use the Postgres client library called rust-postgres, and that's all written in async style.
So, in order to use that library at all, we just have to deal with it, and the way we deal with it is just a very small async block, and then we block on that, basically turning it into blocking code. We also use async in the S3 cloud storage upload and download code, because there we're dealing with a lot of files we might be uploading and downloading, and it's just more natural there.
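The "small async island inside threaded code" pattern he describes can be sketched like this. To keep the example std-only, it hand-rolls a trivial `block_on`; in the real code base this role is played by Tokio's current-thread runtime and its `block_on`, and the `query_database` function is an invented stand-in for an async rust-postgres call.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: sufficient for this busy-polling executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Drive a future to completion on the current thread (toy version of
// what a current-thread runtime's block_on does).
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is a local that is never moved after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        std::thread::yield_now(); // a real executor would park here
    }
}

async fn query_database() -> u32 {
    42 // hypothetical stand-in for an async client-library call
}

fn main() {
    // Synchronous, per-connection-thread code wrapping one async island:
    let answer = block_on(query_database());
    assert_eq!(answer, 42);
    println!("{}", answer);
}
```

The design point is that the async-ness stays contained: callers see an ordinary blocking function, so the rest of the threaded code base never has to become async.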
But kind of the lesson here was that you really have to design this thing, and decide where you want to use async and where not. You know, if you have a small program, either model is fine, and you just choose one and stick to that model, but in a reasonably big code base like what we have, there are natural places where you really want to use async, and you have to learn to do it if you want to mix and match. So, it works.
Now we found a solution, and this works, but you really have to be careful with this boundary, and think about what happens if you block anywhere.
So, some miscellaneous annoyances with Rust, in my experience: the binaries are huge. Our binary is like 400 megabytes or something ridiculous, and when you strip the debug info with the strip command, it's like five megabytes. I haven't heard other teammates complain too much about it, but it really bothers me, because I keep running out of disk space on my laptop.
The target directory grows to like 50 gigabytes within a few days of working, and every now and then I just have to delete it and recompile from scratch, and then it goes down again. I haven't fully figured out why this keeps happening; maybe we keep touching some source file that then needs to be recompiled, and it keeps all the versions or something, I'm not sure, but that's annoying.
There was one conversation I had with a fellow Postgres hacker from another company: I was all excited, telling him, oh yeah, we're using Rust, it's a great language, but one thing that bothers me is that it's really slow to compile. And he was like: oh, is it C++ slow? Like, yeah, yeah, it really is. It is slow. I mean, it takes minutes to build this code base, and it's just slow. I don't know what to say about that.
I tried all kinds of different options, different linkers; it helped a little bit. I think sccache helps, if you don't have to compile everything from scratch, but then again, I find that my editor doesn't work very well with it. It still compiles a lot, and just linking these big binaries seems to take a while, especially in release mode; there's a difference there.
So that's kind of annoying; if anyone has ideas on how to make these builds faster on your laptop, I'm all ears. Another thing that is kind of funny, I just ran into this a few days ago and added it to the list: there are always these features that are not stable yet. We try to stick to the stable version, roughly, but every now and then there's this one little feature that's not stable and would be just a perfect fit for what you're doing.
I just ran into this the other day: I think I was trying to use the question mark operator in a function that was returning something other than a Result, and the compiler told me: oh, the thing you're returning from this function, because you're using the question mark operator, must implement the FromResidual trait. I was like: oh, okay, that's cool.
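The situation he hit can be sketched like this: out of the box, `?` works in functions returning `Result` or `Option`; returning anything else trips the `Try`/`FromResidual` machinery, which was still unstable at the time of the talk.

```rust
// `?` on an Option early-returns None from the enclosing function if
// the value is None. Try this in a function returning, say, `char`
// instead of `Option<char>`, and the compiler complains about the
// return type not implementing `FromResidual`.
fn first_char_upper(s: &str) -> Option<char> {
    let c = s.chars().next()?; // None if the string is empty
    Some(c.to_ascii_uppercase())
}

fn main() {
    assert_eq!(first_char_upper("neon"), Some('N'));
    assert_eq!(first_char_upper(""), None);
    println!("ok");
}
```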
So, I'm getting to the end here. Kind of my key learnings from this one year down the path: Rust is a mature language. It's definitely productive; I get a lot done in Rust, after the initial learning curve of one to three months. You have to be very careful with the sync and async code.
If you google around, there are those blog posts on "what color is your function", and, you know, I was not familiar with that before I ran into the problem, and then, of course, I found all the flame wars on the internet about this topic. But it's real: you have to know what you're doing and understand both models, and what they're good for. The interoperability with C was kind of...
I told you about the experience with bindgen, and the fact that writing extensions in Rust for Postgres would be kind of hard, because you can't really use all of these facilities, or you have to write wrappers. I mean, there's this meme of "why don't you rewrite it in Rust?", and I think that's where it's coming from. It's kind of how it was sold to me, I mean, the way I thought about it:
it would be that I could just mix and match C code pretty easily with Rust, but it's not really that easy. I mean, I know how to do it now, but, yeah, as soon as you step into any C code, you have to tread really carefully, with the unsafe and stuff. So it's not really that easy.
It also didn't really matter as much as I thought: we didn't have to use a lot of Postgres code in this project in the end, and the pieces where we really wanted to run code in the Postgres server, we just wrote in C, and, you know, that is fine for us.
One key learning, one thing I didn't realize when we started, was that there are a lot of good programmers out there currently who want to learn Rust, or are specifically looking for a Rust job. So when we go out and our recruiters try to find people, it's a selling point: people want to work for us because we use Rust. It's kind of funny, but, yeah, that makes a lot of sense, and they're,
generally speaking, good candidates. And my theory, my personal theory, on this is that they don't teach Rust in schools these days; it's not the default language for any university or anything like that.
So anyone who knows Rust has made a conscious decision that they want to learn it, and they've probably already run into the problems with C and C++ or some other language, and they realize that, hey, Rust has these cool solutions for that. So that's a good attitude for a programmer to have. Also, someone who wants to learn Rust, that's, again,
a good signal from a programmer: they're interested in these things and they're going to, you know, develop themselves, or something like that. That will probably change in a few years; I mean, if Rust gets more popular, this will change, and it will stop being such a good signal, but for now it is a good signal when hiring. Yeah, that's... oops.
I went backwards for some reason. So that's where we are, basically. If you're interested in what we're doing, you can go to our website; we just put it online, neon.tech; there's an invite code.
Currently we are not totally public yet about what we're doing, but if you need a Postgres server, go to that address and you can click a button and have your own Postgres server running on this Rust backend. Also, we are hiring: if you're one of those people who want to learn or use Rust, you know, come speak to me or go to the website. Any questions at this point?
A
We do use mio. There's one specific piece of code that needs to speak to another program, which happens to be Postgres, over a pipe. The way it works is that it sends a command through the pipe, and then it needs to read the response from that program's stdout, and at the same time it needs to watch the stderr and print out any messages there to the log.
A
So it needs to do three things at the same time, and this was a case where async is really what you want: you want to do those three things concurrently, and we used Tokio for that. But that was very performance-sensitive code, really in the hot path of this whole thing, and we found that the Tokio async abstractions were not quite as free as we hoped for, so we actually rewrote it, and not in mio.
A
Actually, I think we just went for poll, the Linux poll call, straight to that. It just worked with the file descriptors directly, and that gave a small speedup there.
A
Do I regret choosing Rust? No, not really. I mean, what were the other choices? If we had gone with C or C++, we would have these memory safety problems. That would affect the whole architecture: we couldn't do multi-tenancy the way we want. Maybe we would find workarounds, run these things in different VMs or something, but no. Or we would have to be very careful, that's the other option, but I don't like that at all.
A
It would have been a much higher learning curve, and I think they're even more exotic; good luck finding Haskell programmers. We could have gone with Go, that was one thing we did seriously consider, but it has garbage collection, which kind of turned me off. I don't like the syntax quite as much as I like Rust, and it's generally not thought of as a systems programming language.
A
I mean, if we had gone with Go, I'm sure we would have gotten things done. And then there are more exotic languages, which would probably have the same benefits as Rust but are worse in other ways. Better to go with the mainstream option here: we're trying to build production code and we need to hire people to work on this thing. So I don't think there were really any other good options out there.
A
What problem was I trying to solve with pinning? No, I just couldn't figure out how it works. I mean, it was hard to wrap my head around what pinning is. I think I got it at some point, but now I've already forgotten. In the end I just found a different way to do it.
A
Amazon has company, I mean, there are competitors that do similar things. Snowflake has a similar architecture, but for data warehousing. But they're all proprietary, or at least half proprietary, and I didn't like that. So I wanted to build something that is true to the community and so forth.
A
Right, so what is the protocol between these servers? The basic operation between Postgres and the storage is called get page at LSN. The inputs are the relation, the block number within that relation, and the point in time: at what point in time do you want to receive this page?
A
So why didn't we go full-on with async? Well, several reasons. Maybe we could have; to be honest, I was more comfortable with the threaded model, so I'm sure that was a big factor. But there's the debugging issue: I want to use GDB, and with async I think that was really hard. I know there's tokio-console nowadays, which actually looks really nice. I've never used it myself, but I've seen it and it looks very promising.
A
It would mean writing a lot of the low-level stuff using async, and that means a lot of things need to be Send and Sync, and I think that tends to be a problem with async code.
A
I don't know if the performance would have been great. I mean, most of these locks you only need to hold for a very short time. So where do you draw the line? Would you use the async variants of those locks there as well? With performance, I don't know; it just didn't feel like the right choice.
A
Also, I really wanted to be in control of the threading, you know, which thread handles which operation. It felt like we want to be in control of that. I don't know if that was the right choice, but I think so.
A
We are trying to figure out what to do about that problem, but we haven't solved it yet. The basic problem is that Postgres has its own buffer cache, first of all, but you usually only size it to be about one quarter of your RAM; the rest goes to the operating system, which manages its own cache, and most of the memory in Postgres is actually used by that.
A
Now, because we've replaced the local disk with our storage service, we don't get the benefit of the operating system cache; we just bypassed it.
A
Well, one answer is that we can just tune it differently and actually increase the size of the Postgres buffer cache so that it takes most of the space, but that has some downsides. The short answer is that yes, it is a problem and we haven't fully fixed it, but it's tolerable and there are future plans.
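That tuning is the ordinary shared_buffers knob in postgresql.conf. As a rough illustration (the sizes here are made up, not Neon's recommendations), on a 16 GB machine the two setups might look like:

```
# conventional setup: ~1/4 of RAM for Postgres,
# the rest left to the operating system page cache
shared_buffers = 4GB

# with remote storage bypassing the OS cache,
# give most of the RAM to Postgres's own buffer cache
shared_buffers = 12GB
```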