From YouTube: Rust Bay Area Meetup - April 24, 2019
Description: Rust Bay Area Meetup.
How do I get the display to... oh, perfect. So: Berkeley Packet Filter and Rust. Our project is called Solana; it's a high-performance blockchain, if you care about that.
We can talk after, but this presentation is focused solely on the blood, sweat, and tears of getting our full Rust toolchain to compile to Berkeley Packet Filter.
So what's Berkeley Packet Filter? It's a limited bytecode, created in '92.
There was some guy working on a PhD on proof-carrying code, and some folks thinking they could build a self-updating OS, and somehow it ended up as a packet-filter bytecode with two registers. That ended up getting called classic Berkeley Packet Filter, but what it actually ended up being is a bytecode.
It's, I think, a really amazing achievement of engineering. By 2013, two registers didn't seem like enough, and 64-bit machines with 64-bit register sets had been out for a while, so it was updated to eBPF, which is now simply called BPF. This expanded the register set from two registers to ten, sorry, added a bunch of 64-bit registers, and the BPF backend was put mainline into LLVM, the same compiler used by Rust.
It's got a bunch of virtual registers and a program counter. r0 contains the return value, just like x86; r1 through r5 hold the arguments for function calls; and r6 through r9 are callee-saved registers. If you understand x86, this is basically a one-to-one mapping from x86 to Berkeley Packet Filter. So why this particular design choice?
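The calling convention just described (r0 for the return value, r1 through r5 for arguments, r6 through r9 callee-saved, r10 as the frame pointer) can be sketched as a fixed table. The x86-64 counterparts below are my recollection of the Linux x86-64 BPF JIT's mapping, shown as an illustration rather than a specification; the point is that the mapping is a constant one-to-one table, so the JIT never has to do register allocation.

```rust
/// BPF calling convention from the talk, with a plausible x86-64 counterpart
/// for each register. The exact registers are an assumption here; what matters
/// is that the table is fixed, which keeps the JIT trivial and fast.
fn x86_counterpart(bpf_reg: u8) -> &'static str {
    match bpf_reg {
        0 => "rax",  // return value
        1 => "rdi",  // argument 1
        2 => "rsi",  // argument 2
        3 => "rdx",  // argument 3
        4 => "rcx",  // argument 4
        5 => "r8",   // argument 5
        6 => "rbx",  // callee-saved
        7 => "r13",  // callee-saved
        8 => "r14",  // callee-saved
        9 => "r15",  // callee-saved
        10 => "rbp", // read-only frame pointer
        _ => panic!("BPF has only r0..r10"),
    }
}

fn main() {
    // r0 carries the return value, just like rax on x86.
    assert_eq!(x86_counterpart(0), "rax");
    // r10 is the fixed frame pointer.
    assert_eq!(x86_counterpart(10), "rbp");
}
```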
Why this particular design choice? Because when the Linux kernel is doing its verification, it really doesn't want to do a complicated mapping of registers to whatever the current architecture is, and the most popular architecture in the world is x86. So this pass of converting instructions to their native opcodes, and deciding where the register placement goes in the hardware, is very, very easy and very fast when you're doing the JIT and verification.
What's interesting about Berkeley Packet Filter is that it doesn't have a traditional stack where you're doing pushes and pops of values. It just has a stack frame pointer, and the stack frame pointer doesn't move. It's a base that serves as the reference for allocations of memory in the current stack frame. Think of it as passing arguments by the stack: you have all your data in the stack frame, and then you reference those addresses as offsets from the base stack frame address.
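The immovable frame pointer can be sketched without any BPF at all: every local lives at a constant offset below a fixed base, and there is no push or pop. The 512-byte frame size and the method names here are illustrative, not part of the spec.

```rust
/// A BPF-style stack frame as described in the talk: a fixed-size region
/// addressed only by constant offsets from an immovable frame pointer.
struct Frame {
    bytes: [u8; 512], // small fixed frame; the end stands in for r10
}

impl Frame {
    fn new() -> Self {
        Frame { bytes: [0; 512] }
    }

    /// Store a value at a fixed offset below the frame pointer.
    fn store(&mut self, offset_below_fp: usize, value: u8) {
        let fp = self.bytes.len();
        self.bytes[fp - offset_below_fp] = value;
    }

    /// Load it back from the same base-relative offset.
    fn load(&self, offset_below_fp: usize) -> u8 {
        let fp = self.bytes.len();
        self.bytes[fp - offset_below_fp]
    }
}

fn main() {
    let mut frame = Frame::new();
    // "Passing arguments by the stack": the caller writes at a known offset...
    frame.store(8, 42);
    // ...and the callee reads from the same offset, no push/pop involved.
    assert_eq!(frame.load(8), 42);
}
```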
Traditionally, BPF had a fixed size and number of frames and a fixed call depth, but now they've added support for function-to-function BPF calls. BPF in Linux must be verified before the kernel will accept it. It guarantees termination, and the way it does that is by making sure that all the loops, all the jumps, never actually jump backwards.
The kernel checks that memory accesses are all safe and the execution paths are safe, and it can basically do this in a single pass. The LLVM compiler will happily spit out code that violates these rules, and then when you push it to the kernel, the kernel will just reject it during the driver initialization phase. So while the language itself is far more open, the way it's used in Linux is a subset of it: it's effectively used almost like a non-Turing-complete event handler.
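The termination rule just described, that no jump may land at or before itself so no loop can form, can be sketched as a single pass over the instruction stream. The toy instruction encoding below is invented for illustration; the real verifier checks much more than this.

```rust
/// A toy instruction set: every jump records a relative offset.
enum Insn {
    Alu,       // any non-jump instruction
    Jump(i32), // relative offset from the *next* instruction
    Exit,
}

/// Single-pass check in the spirit of the kernel's termination rule:
/// reject any jump whose target is at or before the jump, or out of range.
fn verify(prog: &[Insn]) -> bool {
    for (pc, insn) in prog.iter().enumerate() {
        if let Insn::Jump(off) = insn {
            let target = pc as i64 + 1 + *off as i64;
            if target <= pc as i64 || target >= prog.len() as i64 {
                return false; // backward jump (possible loop) or out of range
            }
        }
    }
    true
}

fn main() {
    // Forward jumps only: accepted.
    assert!(verify(&[Insn::Jump(1), Insn::Alu, Insn::Exit]));
    // A jump back to instruction 0 would form a loop: rejected.
    assert!(!verify(&[Insn::Alu, Insn::Jump(-2), Insn::Exit]));
}
```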
You can have an ELF-specified entry point: you generate this ELF and tell it, hey, jump to this address at the start. You have load-time relocations; we added load-time symbol resolution, read-only data segments, some PC-relative jumps, and now you can pass and return aggregate types on the stack frame. If you're returning a Vec, that's an aggregate type, because it's effectively a structure of a pointer, a length, and I think maybe something else. You can still only modify data passed into the program.
There are no writable segments, so imagine a program without any writable globals. I don't know if this stuff is way too deep, but outside of the Linux kernel it runs in a virtual machine which guards memory accesses, maximum instruction counts, and function call access. We also have preliminary JIT-to-x86 support. The real goal with this is to take BPF and JIT it to GPU code, because GPUs have 4,000 SIMD lanes.
I don't know, sixty thread contexts; it's all this silicon that you can use for these very small programs that have very little stack. So why BPF? What's up with it? It's kind of this big cool thing: a new bytecode, a new VM. BPF has been used in production on a single machine, single socket, to process 60 million packets per second. Sixty million events per second on a 40-gigabit network: that's a big pile of data, a massive amount of events.
So it has this really nice, proven architecture for speed. But it's also safe: this stuff runs in ring zero. You take code, it's compiled down to bytecode, and then you can send it to the most secure, trusted space in your hardware. And because of the simplicity of the register set and how it's designed, it's maybe the most opposite way you can go from a stack-based VM: there's no actual stack register, there's no push and pop.
It's very, very simple to verify, and that's, I think, a really compelling use for it. The reason it can do this is because of all these limitations, right: it's got limited threads, this very simple stack frame, and all the stuff that basically makes it impossible to write large-scale programs like you would want to with Solana. So why Rust? Finally getting to adding Rust support for BPF: our project itself is written in Rust, and Rust is friendly, safe, and fast. If you're working on Rust:
I want to thank you for making a language that I'm happy to code in; it's just wonderful. I spent 12 years working in C, and this is just awesome; I couldn't be happier. So: Rust natively uses LLVM for its backends, so it should be easy, right? Just take this backend and switch this one thing. No. Basically everything you can think of blows up.
Rust often requires things like memory allocation, and system dependencies and crate dependencies pull in every kind of thing you can imagine. The environment we need is single-threaded: no networking, no standard I/O, no general-purpose allocation (so no mallocs), and limited support for panics.
BPF is not a built-in target, so a custom rustc is required. We added a new target in our fork, and we added a new ABI modifier for BPF; Rust's built-in libraries, core and SIMD and so on, all require modifications to accommodate it. We can't do 128-bit integers; we can't do some floating point; and there's no support for signed division, so signed numbers are hard. But there's this awesome project, xargo.
If you're working on xargo, it's a wonderful project: you can take all the stuff that we forked and put it together through the package management. It's great, so this work is actually very easy to leverage. There's still a big pile of stuff that needs to be done. For us specifically: rerouting stdout and stderr to specific logging functions, and we want to provide a limited, isolated memory allocator, so that when the VM is initialized we can say, here's some memory for the heap.
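An "isolated allocator" of the kind described, where the VM hands the program a region at initialization and allocations are carved out of only that region, can be sketched as a bump allocator. The sizes, the 8-byte alignment, and the absence of free() are simplifications chosen for illustration; they happen to fit the short-lived event-handler model, since the whole region is reclaimed when the program ends.

```rust
/// A minimal bump allocator over a region the VM hands us at startup.
/// There is no free(): the region is reclaimed wholesale afterwards.
struct BumpHeap {
    region: Vec<u8>, // stands in for the memory the VM provides
    next: usize,
}

impl BumpHeap {
    fn new(size: usize) -> Self {
        BumpHeap { region: vec![0; size], next: 0 }
    }

    /// Returns the offset of a fresh block, or None when the heap is exhausted.
    fn alloc(&mut self, size: usize) -> Option<usize> {
        // Round the cursor up to 8-byte alignment.
        let start = (self.next + 7) & !7;
        let end = start.checked_add(size)?;
        if end > self.region.len() {
            return None; // no general-purpose malloc: exhaustion is an error
        }
        self.next = end;
        Some(start)
    }
}

fn main() {
    let mut heap = BumpHeap::new(64);
    assert_eq!(heap.alloc(10), Some(0));
    assert_eq!(heap.alloc(4), Some(16)); // 10 rounded up to the next 8 bytes
    assert!(heap.alloc(1000).is_none()); // larger than the whole region
}
```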
LLVM will generate code without errors, but when you look at the code, there are just random instructions in there, and it's impossible to really trace where the hell they came from. If you guys are LLVM experts, I'd love to talk to you. You can go to our repo and try it out; it's a work in progress, but it's actually fairly useful now.
For our particular project, these programs are single-threaded and really don't do any I/O; they're just event handlers: getting some data, processing it, and modifying some memory, and the only output is that modified memory. There's an issue that's tracking all the things we're working on, and you can fairly easily try this out.
There's a bunch of references on BPF; it's a really cool project. This company Cilium is doing a lot of work in the space. They're mostly using it for software-defined networks, but they've really added a ton of tooling and support for BPF, especially in the Linux kernel. You can find our customized BPF VM and some example programs on our GitHub, and if you guys want to roll up your sleeves and fix these bugs, there's a forum.
So the reason for BPF isn't those guarantees, because you can use an instruction counter to get a similar property. The Linux kernel does this check where it just looks at all the jumps and makes sure they don't create loops. Instead, you could count the instructions the jumps are executing until you run out of budget, so you can basically get around those problems with very dumb tools. We're using BPF because of our goal: in our particular project, we're constrained by the resources of a single system; it's a fully replicated state machine.
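The instruction-counter alternative mentioned in the answer, bounding execution by fuel instead of forbidding loops up front, can be sketched as an interpreter loop. The instruction set here is invented for illustration.

```rust
/// Toy bytecode: just enough to express a loop.
enum Op {
    Add(i64),
    JumpBack(usize), // jump back this many instructions: a loop!
    Exit,
}

/// Instead of rejecting loops statically, charge one unit of "fuel" per
/// instruction and stop when it runs out: a different route to the same
/// termination guarantee.
fn run(prog: &[Op], mut fuel: u64) -> Result<i64, &'static str> {
    let mut acc = 0i64;
    let mut pc = 0usize;
    while pc < prog.len() {
        if fuel == 0 {
            return Err("instruction budget exhausted");
        }
        fuel -= 1;
        match prog[pc] {
            Op::Add(n) => { acc += n; pc += 1; }
            Op::JumpBack(n) => { pc -= n; }
            Op::Exit => return Ok(acc),
        }
    }
    Ok(acc)
}

fn main() {
    // A straight-line program terminates well within its budget.
    assert_eq!(run(&[Op::Add(2), Op::Add(3), Op::Exit], 100), Ok(5));
    // An infinite loop is cut off by the counter instead of running forever.
    assert!(run(&[Op::Add(1), Op::JumpBack(1), Op::Exit], 100).is_err());
}
```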
There is no way to split the state, so that means we want to really maximize the available compute, and the only way to do that is with GPUs. Unfortunately, there isn't a stable GPU bytecode that we can target, and BPF is so simple that you can look at this bytecode and map it directly to CUDA, or whatever else is going to come next.
Yeah, okay. But what's great about eBPF is that nobody in the Linux kernel is going to upstream a change that will break security or performance, because they're running this thing in the kernel, right? So they're really fairly well constrained, and what we're doing is a superset of what they're doing, but we'll never really have to worry about, you know, it becoming a total fork.
It could be. C is still, I think, a better choice, but because of how much Rust pulls in, like malloc and stuff like that, it'll be pretty tough. It's getting closer, though. That would be an awesome outcome: a full-featured Rust way to do BCC. All right, yeah.
Are there any more questions?
Hi everyone, my name is Ivan and I work for Commure, and today I would like to tell a couple of stories about how we use Rust at Commure and what kind of challenges we had while working on the application we're building. First, who are we? We're a stealth startup, and we are working on fixing the software doctors use. That's kind of the sales pitch for hiring, but what does that mean?
I'll tell a little bit more in detail later what exactly that means for the Rust part of what we're building, and I also want to include this: we are hiring, so if you're interested in Rust and want to work in Rust a hundred percent of your time, you can apply at this email address. And a couple of words about our backend: we're building different things, some front-end pieces and some backend pieces.
I'm going to be talking about the backend pieces specifically, and a couple of notes: the backend is written completely in Rust, with bits of C code, because unfortunately not every crate, not every library, was written in Rust yet, so you have to use C there, but those are really small areas. And it was pure Rust from day one; we never had Python code that we were pushed out of and into Rust because of performance issues or something like that.
For us, enterprise software, and that's kind of a vague term that can mean everything, means things like APIs, data transformation, interoperability, data security, this kind of thing. One big challenge of building enterprise software is that it has complexity everywhere, and if you read discussions on Twitter or the internet, people are looking for simplicity and asking: are we really solving this problem and making it simpler, or are we just shifting the complexity around?
For enterprise software, the answer is pretty much that you always just shift the complexity around. You cannot make it simple; it's very complicated, and there are a number of reasons why that's happening. One of the reasons is the data models: they're trying to model the real world, and the real world is complicated, and you cannot really change that. So you have to follow it, and your data models become super complicated. Then you have little things like regulations.
Those are completely, or mostly, outside of your control: a regulator can conjure up some new regulation or law telling you, well, starting tomorrow your data has to be available through this particular API, and then a week later they say, well, we actually found that was not a good regulation, we're going to change it, and there's a different standard you have to implement now, and again you have to comply with that. And then there are things like legacy.
Legacy software: a lot of pieces were built before you built your own application, and they already exist, and you have to deal with them somehow. That would be old legacy data standards, legacy APIs; in some cases there wouldn't even be any API, just some software working somewhere.
It kind of works, everybody's happy, but nobody knows how to talk to that software, because all the documentation was lost or something. And there are a lot of requirements on the security side. I think the theme of enterprise software is that it's all about scale: it's complex in all dimensions. It's complex in time:
Projects take years and years to build, and we have to deal with standards built, like, 30 years ago. It's also scale in terms of people and organizations involved: if you take the whole US healthcare industry, there are the regulators, and the hospitals, and the insurance companies.
There's a lot of complexity coming from that, and what I kind of found is that the main theme of working on this kind of software is being able to separate, to kind of compartmentalize everything: to build your software in pieces and decouple everything into smaller, manageable pieces, which is kind of obvious.
But I think in enterprise software it's really central. I was having discussions related to Rust on Hacker News, and one of the themes was: well, why don't you just go and switch this data structure to that data structure? Well, if you have a hundred thousand lines of code, you can't really do that.
You start changing it in one place, and it'll take you days and days and days, and then at some point you'll find, oh well, actually that new data structure doesn't satisfy that particular requirement, so you cannot really replace everything. So it's important to build those boundaries and make everything work across these boundaries.
One of the important principles becomes: you need really loose coupling, and I think this is where Rust presented us some of the challenges; I'm going to highlight them in examples in a moment. Another principle, and maybe that's because we are an early startup and we haven't really learned everything about the domain we're working in, we're still building expertise, so we don't know which answer is correct, is that it's very important to avoid one-way doors.
One-way doors: I would think this is sort of an extension of loose coupling, but for the design or architecture you're building. Whatever component you design, you cannot just say: well, that's the way to build it, and we're switching to that way of doing things starting tomorrow, because that would be a one-way-door decision.
So what you have to do is build it on the side, make it kind of work, observe how the whole architecture develops, and then maybe finally make a decision: yeah, that thing actually works better, so we're going to switch to it. And in general, you have to accept trade-offs: there's no perfect way of doing things, no simple way.
You can chase the simplicity for months and still keep finding more and more nuances, so it becomes important to just say at some point: well, that solution is good enough, let's move on to some other part of the system and work on that. And since everything is loosely coupled, hopefully you can still make the old parts of the system work with the new parts of the system. There's a pattern I like to call
the waterbed architecture: you have a waterbed, a bed full of water, and you want to compress it. It's easy to compress it on one side, but then it pops up on the other side. That's what this kind of architecture is: you're not really solving it, you're not really making it simpler.
What you're doing is shifting the complexity in a way that moves it off the areas that are important and into the places where it's less important. So hopefully you're moving complexity away from the part which is, let's say, facing application developers, and moving that complexity to the area where more experienced developers, the framework builders, are working. I'd like to give a couple of examples of the challenges we had.
These are challenges that Rust presented us while building the software we're building. The first example is lifetimes. Lifetimes are great in Rust: they really help you to keep your data safe, they save you from data races, and sometimes you can even map more high-level requirements onto them, like: while I'm using this particular object, that other object should not be accessible.
But sometimes a lifetime is completely remote from the part that you're working on, and you cannot do what you want because of it. There was one particular example which took a while to figure out and work through: there was this Connection in the postgres crate, and there was this Transaction in the same crate. The Connection was a struct without lifetimes; you can put it in whatever you want.
The Transaction would borrow from the Connection, and actually in the new version I think they changed this shared borrow into a mutable one, which expresses the invariant even better: now you can only have one Transaction active for that particular Connection, and unless you drop your Transaction, you won't be able to do anything on the Connection, because the Connection will be borrowed by this Transaction. But unfortunately, that creates a pretty strong limitation: if you need to pass the Transaction around, you have to pass it with those lifetimes, and that's very problematic.
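The borrowing relationship just described, one active transaction at a time and a connection that is unusable until the transaction is dropped, can be sketched without any database. Connection and Transaction below are stand-ins for the postgres crate's types, not the real API.

```rust
/// Stand-ins for the postgres crate's types: the Transaction mutably
/// borrows the Connection for its whole lifetime.
struct Connection { queries_run: u32 }

struct Transaction<'conn> { conn: &'conn mut Connection }

impl Connection {
    fn transaction(&mut self) -> Transaction<'_> {
        Transaction { conn: self }
    }
    fn execute(&mut self) { self.queries_run += 1; }
}

impl<'conn> Transaction<'conn> {
    fn execute(&mut self) { self.conn.queries_run += 1; }
}

fn main() {
    let mut conn = Connection { queries_run: 0 };
    let mut tx = conn.transaction();
    tx.execute();
    // While `tx` lives, calling `conn.execute()` here would be a compile
    // error: the connection is mutably borrowed by the transaction.
    drop(tx);
    conn.execute(); // fine again once the transaction is dropped
    assert_eq!(conn.queries_run, 2);
}
```

The pain point from the talk follows directly: any struct or function that wants to hold a `Transaction<'conn>` must itself grow a lifetime parameter, and that spreads through every signature it touches.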
Well, if you look at the Connection and Transaction together, they don't really have outgoing references, so it should be possible to somehow duct-tape them together and say: well, now we don't have lifetimes anymore; now it should be possible to put it in an Arc, put it in whatever cell, or move it around. But you can't do that, because that would be a self-referential data structure, and Rust kind of doesn't support them, doesn't like them, and you've seen opinions saying that,
well, if you need self-referential data structures, maybe you should not be using Rust, right? I think that was a real-world example: we're taking this library that maps to one component we have in one part of the system, we're taking this other library, we're trying to combine them together, and they just don't match. And at this point it's not about finding the perfect way of doing it.
It's just: give me some sort of duct tape, so I can make these pieces work and buy some time, and maybe later I'll really rebuild this part of the system to make them work better with each other, but I need some sort of solution right now. And we found that solution, and it was the rental crate. Somebody created a crate that allows you to create self-referential data structures, with a lot of fine-print warnings.
I think I've seen a post by the crate's author saying something like: I'm not sure that this crate is completely sound; it might introduce some unsoundness. But at this point I'm desperate, I need some sort of solution; it's better to have this kind of guarantee versus just, well, I can't ship anything because I can't build anything, it's self-referential, I'm done.
Rental kind of provided a solution that allowed us to create this struct. That's the way you express the struct: you say, hey, I have this field connection, and I have this other field called transaction, and it uses a lifetime which is named after the field. By some magic this rental crate transforms the struct; I think it transforms it in some weird way, it probably uses boxes or something to make sure that the address of the connection doesn't change, but it allows you to do exactly that.
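What rental-style crates do under the hood can be sketched with a Box (stable heap address) plus a raw pointer that erases the lifetime. This is a deliberately unsafe illustration of the trick as I understand it, not the rental crate's actual code and not something to copy into production; the safety argument (never move out of the Box, never let the pointer outlive it) is exactly what such a crate is supposed to encapsulate for you.

```rust
/// A sketch of the rental trick: heap-allocate the "connection" so its
/// address is stable, then keep a pointer into it alongside the Box.
struct Conn { name: String }

struct BundledTx {
    conn: Box<Conn>, // owner: the Box keeps the heap address stable
    tx: *const Conn, // "transaction": points into the owner, lifetime erased
}

impl BundledTx {
    fn new(name: &str) -> Self {
        let conn = Box::new(Conn { name: name.to_string() });
        let tx = &*conn as *const Conn;
        BundledTx { conn, tx }
    }

    fn tx_target(&self) -> &str {
        // Sound only because `tx` points into `conn`, which lives as long
        // as `self` and never moves on the heap.
        unsafe { &(*self.tx).name }
    }
}

fn main() {
    let bundle = BundledTx::new("primary");
    // The whole bundle can now be moved around with no lifetime parameter:
    // moving it moves the Box pointer, not the heap allocation it points to.
    let moved = bundle;
    assert_eq!(moved.tx_target(), "primary");
}
```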
It's what I mentioned before: just duct-tape those things together and forget about this problem for a while, and it solved the problem at that point. So the outcome is that sometimes you need some solution that probably nobody should ever use, but when you really need a piece, you have to have something; it's better to have that solution versus just saying, well, you know, we can't move forward. Later we actually rewrote the code not to use the transaction that way.
The next example is error handling. It's a natural requirement: if you have a lower-level error, sometimes you want to add some more context and wrap it into a higher-level error, and maybe you do that multiple times. An example could be: "validation failed for a particular object" would be the low-level error; then a higher level would be "that happened while we were validating that particular instance of data, with that particular ID"; and then on a still higher level, "this whole thing happened while we tried to process that message received from that system". So it's a natural requirement to be able to add more and more details.

Another corner: because parts of our systems are supposed to be loosely coupled, we use dynamic dispatch a lot to achieve that, and in that setting sometimes you don't really know what error type you will be receiving. If it's a trait object, say it implements some data-logger trait, it could be implemented as a data logger backed by the file system, by a database, or by main memory, and those would have completely different sets of errors; the database one would have some database-specific errors. So we needed some sort of error type which can pack any possible error, something similar to Box of std::error::Error; you can use Box<dyn Error> for that. And then there was another requirement.
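The two requirements so far, wrapping a low-level error with context at each layer and an error type that can pack any error behind a trait object, can be sketched with std alone. The failure crate the talk actually used adds backtraces and ergonomics on top of roughly this shape; ValidationError, ProcessingError, and the messages are invented for illustration.

```rust
use std::error::Error;
use std::fmt;

/// An invented low-level error: validation of one object failed.
#[derive(Debug)]
struct ValidationError { field: &'static str }

impl fmt::Display for ValidationError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "validation failed for field `{}`", self.field)
    }
}
impl Error for ValidationError {}

/// A higher-level error wrapping the lower one with more context.
#[derive(Debug)]
struct ProcessingError {
    message_id: u64,
    source: Box<dyn Error>, // can pack *any* lower-level error type
}

impl fmt::Display for ProcessingError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to process message {}", self.message_id)
    }
}
impl Error for ProcessingError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(self.source.as_ref())
    }
}

fn main() {
    let low = ValidationError { field: "patient_id" };
    let high = ProcessingError { message_id: 42, source: Box::new(low) };

    // Walk the chain, much like iterating failure's cause chain.
    assert_eq!(high.to_string(), "failed to process message 42");
    let cause = high.source().expect("has a cause");
    assert_eq!(cause.to_string(), "validation failed for field `patient_id`");
}
```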
It would be nice to have backtraces, so you know where the error happened. In an ideal world you should never need backtraces: your errors should be very well structured and contain all the information you need, so whenever an error happens, you look at it and it says: this happened at that point, then this happened at that point. But in the real world a lot of errors are just "something happened", and the error type doesn't have any data in it.
Then you have no idea where it's coming from. So if you have a backtrace, at least you can look at that piece of code and maybe make a couple of guesses as to why that happened. It's more of a pragmatic choice versus a perfect thing. We're also trying to avoid using panics and unwraps as much as possible, because if you have a long-running server there are consequences to panics: your panics might cause your mutexes to be poisoned, and at that point your server is basically done.
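The poisoned-mutex consequence is easy to demonstrate with std: a thread that panics (for example, via an unwrap) while holding the lock marks it poisoned, and from then on every lock() on the long-running server returns an error.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let state = Arc::new(Mutex::new(0u32));

    // A worker panics while holding the lock...
    let worker_state = Arc::clone(&state);
    let worker = thread::spawn(move || {
        let _guard = worker_state.lock().unwrap();
        panic!("simulated unwrap failure while holding the lock");
    });
    assert!(worker.join().is_err()); // the thread died by panic

    // ...and now the mutex is poisoned: every subsequent lock() errors out,
    // which on a long-running server means the shared state is unusable.
    assert!(state.lock().is_err());
    assert!(state.is_poisoned());
}
```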
Instead we return it as an internal error, and internal errors contain backtraces, so you can track it back and try to understand why that particular situation happened. And then there are some other, more application-specific requirements: it would be nice not to require application developers, or whoever is developing different components, to use any specific error type. They should be free to design their own error types,
as long as it's not too complex to implement all of that. But there was also another requirement which kind of changed the whole thing and made it way more complex, at least for me: we also needed a way of getting structured information out of errors. The reason was that the standard we were implementing in our APIs required that when someone calls an API and gets an error, we're not supposed to return just a text message; we're supposed to return structured JSON saying, well, this thing happened: an error code, a diagnostic message, and then some detailed information, not just an error message. That particular requirement was hard, and I'll explain a little bit later why.
So the first try: you just take the failure crate, and it just does all that. It was great. I actually pretty much randomly chose that crate, because I knew nothing about Rust at the time and was looking at how you do errors, and I found this failure thing, and it turned out it was a great crate. The first five requirements were free for us; we didn't have to do anything.
The remaining one was what's difficult, and the reason was that being able to get structured information out of errors meant that all errors were supposed to implement some sort of trait. So let's say we have some trait called ExtraInfo, and all errors which could provide some structured information are supposed to implement it,
and have this function info() that just returns an Option of some Info, where the Info contains all the structured information about this particular type, this particular instance. But the problem with that was that you have this chain of errors, and the failure crate allows you to go through that chain and scan all these errors and look at them. But how do you get this ExtraInfo trait object if you only have the Fail trait object?
The Fail trait object is what the failure crate gives you when you iterate through errors in a chain. So that was a problem of, kind of, cross-casting a trait object to another trait object, which is not really a thing in Rust. In a language like Java this would not be an issue, you just cast and everything works fine, but for us there was no solution; at least I wasn't able to find any good solution. But the idea was: well, we can use downcast_ref.
downcast_ref is a way of downcasting from the Fail trait object into a specific error type, so we can use that. But downcasting only works for concrete types. So how do we know which concrete types we have? Because, remember, everybody is free to declare as many error types as they want, so we don't really know; our error subsystem doesn't really have the whole list of all the error types we're using.
So that is problematic, and one idea was: well, we can have some sort of registry, like a hash map, mapping a TypeId into a cross-casting function. A TypeId is something the Rust compiler can provide you: it's a unique identifier attached to every type in the Rust type system; the Rust compiler can generate this unique identifier for every type. So if you know the TypeId, you kind of know what's the real type behind this trait object.
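The registry idea can be sketched with std's Any machinery: map each concrete error's TypeId to a function that downcasts and comes back up as the ExtraInfo trait object. ExtraInfo and DbError here are invented stand-ins for the talk's real trait and error types, and std's Any stands in for failure's Fail, which exposes an equivalent downcast_ref.

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

/// The invented trait from the talk: errors that expose structured info.
trait ExtraInfo {
    fn info(&self) -> Option<String>;
}

#[derive(Debug)]
struct DbError { code: i32 }

impl ExtraInfo for DbError {
    fn info(&self) -> Option<String> {
        Some(format!("db error code {}", self.code))
    }
}

/// TypeId -> "cross-casting" function: downcast to the concrete type,
/// then re-emerge as a &dyn ExtraInfo.
type Caster = fn(&dyn Any) -> Option<&dyn ExtraInfo>;

fn registry() -> HashMap<TypeId, Caster> {
    let mut map: HashMap<TypeId, Caster> = HashMap::new();
    // Each error type must be registered explicitly: this is exactly the
    // enumeration problem discussed next.
    map.insert(TypeId::of::<DbError>(), |any| {
        any.downcast_ref::<DbError>().map(|e| e as &dyn ExtraInfo)
    });
    map
}

fn extra_info<'a>(
    err: &'a dyn Any,
    reg: &HashMap<TypeId, Caster>,
) -> Option<&'a dyn ExtraInfo> {
    reg.get(&err.type_id()).and_then(|cast| cast(err))
}

fn main() {
    let reg = registry();
    let err = DbError { code: 1205 };
    let info = extra_info(&err, &reg).and_then(|e| e.info());
    assert_eq!(info, Some("db error code 1205".to_string()));
}
```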
So you build this map, and then you register all the error types in this registry. Registration would be: you basically go, and for each specific error type you put in a mapping from the TypeId of that error type to a function that just downcasts to the specific type and then immediately produces a trait object of ExtraInfo. But now the problem is: well, still, how do we enumerate all the errors? We haven't really solved that.
This is the kind of approach, but the problem is how to enumerate the errors, and I was looking for a long time for a solution. There were different ideas: maybe we can build a compiler plugin, so whenever we declare an error type, we annotate it and somehow this compiler plugin would generate the code that registers all the error types; or we can build a tool that will scan our source code and find all the errors; or we can manually register all error types.
That's the approach we took originally: just explicitly registering all error types at the beginning of our main function. But it would be nice to have a way of running some code before main automatically, like a static initializer. And they tell you: well, there are no static initializers in Rust, you shouldn't even want them, it's a terrible idea. But this is exactly, again, an example where: well, maybe I know what I'm doing. I promise I'm not going to do sophisticated stuff in this static initializer; maybe I'll just build a simple list of references between those error types, something I can really guarantee is going to be safe. I'm not going to tell other developers to do this, but I want it for myself, just to make this error handling work. And I found a solution. Before doing this software for Commure, I was doing a hobby project for embedded, and I remembered something from the embedded world.
"No" is an answer, but we need to ship something; you need to build a solution, and it might not be perfect. There's a discussion happening about whether this whole crate should be marked as unsafe. Let's say it doesn't introduce unsoundness by itself, but it might introduce some unsoundness, because there are a lot of operations you cannot do before main runs. The crate itself gives examples:
You
shouldn't
have
you
shouldn't
do
any
like
out
to
test
EDI
or
because
the
output
subsystem
and
rust
is
not
initialized,
yet
I'm,
not
sure
like
if
you,
if
you
can
use
like
a
memory
allocator
at
that
point,
like
maybe
you
could,
it
seems
like
it
works,
but
maybe
you
shouldn't
use
it.
Maybe
some
memory
locators
would
not
work
if
you're
using
it
this
way,
but
again,
like
that's
a
solution
that
kind
of
allows
us
to
continue.
Okay
and
then
most
relics.
We
have
this.
G
That
becomes
very
simple,
so
we
can
hide
it
behind
a
nice
obstruction.
So
we
can
have
this
Drive
extra
infiltrated
behind
the
scenes.
We'll
use
that
on
occasion,
so,
like
I'm,
not
gonna
tell
like
anybody
how
how
this
works,
but
it's
like
magic,
since
it's
like
working
well
so
far,
so
like
problem
solve
that
was
like
another
kind
of
like
duct
tape
we
got
used
to
just
like
make
things
work
and
also.
There
are
third
examples.
G
What we had to solve for ourselves, which was particularly challenging, was modeling the healthcare data. Overall, the problem looks like this: there are some data schemas given to us by a third party, and they're simple in the sense that they're not using any crazy type-system things — they're just structs and fields, and those structs' fields have types, and the types can be lists of other types. So they're pretty simple, but they're numerous: there are hundreds and hundreds of them. And then, yeah,
G
that slows down compilation by a lot. Compilation time is probably one of the top three issues we're having right now. So that's kind of the set of challenges we went through. And there are also several versions of them: at some point we said, well, you know what, actually we have to multiply that by three, because we had three versions of this standard we needed to implement. And the standard provides a formal schema, so one idea was that we can —
G
we can use those schemas to generate the code for us, so we don't have to manually type everything. And there are many, many, many nuances. I still keep learning tiny little things about this standard. They would do terrible things: they would publish the schema and say, this resource is stable
G
now, we are not going to change it — but that particular field, we can still change in future revisions. It's like: how is it stable, then? I have to understand what "stable" even means here. But that's not even the worst of it. They would say: well, those other fields are not going to change. Or they would give you a list of items, specify it, and say: the list is stable, but the contents of that list — which is like an enum — are not stable.
G
It's like: how is that stable? And then there are all the nuances about what you can rely on — that a definition will exist, and that the definition at this annotated URL will be there. They would say: this is the whole set of definitions available, you can rely on these existing — but then they would put in a list that might change in a future version. So there are a lot of these tiny things, which, I mean, we're still learning, and they make this problem look really hard. Because at some point, for example, we wanted — well,
G
maybe we can extract some commonalities between those versions, or, for the stable types, generate the type once for all the versions, since it's stable. But then it turned out that even that is very difficult, for these reasons: yes, it is stable, but it still might be different in a newer version — they can still change that definition. And for this data modeling, we needed pretty basic operations so far: we need to serialize and deserialize the data —
G
obviously, we need to read JSON or XML and write it back. And we need to validate the data, and validate it in a type-agnostic manner, in the sense that we need to be able to validate it according to the built-in definitions provided by the standard, but the standard also allows you to define your own set of constraints for those data types, and we need to be able to validate data against that too. So, for example, a hospital can create its own set of constraints and say: well, in this hospital,
G
the patients have these additional requirements. These additional constraints are put into a formal definition and given to us, and we should be able to validate a patient against that definition and say whether it's violated or not. And then this standard is really crazy: they pretty much redefined the whole world for themselves, and they defined a language which you can use to extract data from your data types, similar to XPath or JSON Path.
G
So how could that be handled? How can you model this data? Initially — well, one option is obvious: we just take those data schemas, generate Rust code, and everything is fine. But there are some nuances. It turned out that the data schema the standard gives us doesn't really match what they have in JSON: they define a semantic model, but then — this is a separate part of the specification, by the way —
G
in this particular case, a field doesn't map one-to-one to what you see in the JSON. And that made it crazy, because now we have an option: either you model your data based on the semantic model, and then you cannot use, say, serde to deserialize it, because the JSON doesn't match the struct; or you can generate structs based on the JSON, and then all the application code you're building needs to remember that whatever you see in a struct is not really what the standard defines. And then the other option would be —
G
a Rust borrow checker — or really any borrow checker — would never allow you to have that, and this language is like: yeah, fine, it's possible to have two variables pointing to kind of the same thing, and both are mutable. I have my theory of why that happened: I think when they defined it, they started by looking at languages like Java, which don't care about references — you can modify whatever you want — and they didn't really think about ownership semantics, like: what do they think about
G
data races and things like that? So, to support that language, we had to implement yet another data structure, and at some point we had all four of these ways of modeling the data. And that again goes back to this loose coupling and not having one-way doors: we would model one way in one part of the system, and in another part of the system we need to model another way, so we needed to develop a way for those parts of the system to work together.
G
It allows us to — given some reference to the data — get some primitive value, for example a number or a string or something like that. And it also allows you to go inside a field: if you have a field name, you can go and check what the reference at that field is, and get some type information for that data type.
G
Unfortunately, that API worked only up to some level, and then we got to the point where, well, we need the mutable references: we need to be able to mutate data using this API. And it turned out that to express that in traits, you need GATs, which are generic associated types. It comes down to this: if you want to mutate data, you want a function called get_field_mut, and it should take self as &mut self, and it should return something that borrows from that self.
G
But what that something is — the type of it — will be defined by the implementers of this trait, so it will be an associated type on the trait. So you need an associated type which is parameterized by a lifetime, and that lifetime needs to come from this &mut self. Long story short, it requires a feature that is not available in Rust, unfortunately.
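The trait shape being described might look like the following sketch. The names are illustrative, not the actual API, and note that generic associated types were unstable at the time of this talk (they later stabilized in Rust 1.65, so this now compiles on stable):

```rust
trait Resource {
    // A lifetime-parameterized associated type: this is what needed GATs.
    type FieldMut<'a> where Self: 'a;
    // Takes `&mut self` and returns something that borrows from it, with the
    // concrete type chosen by the implementer.
    fn get_field_mut(&mut self, field: &str) -> Option<Self::FieldMut<'_>>;
}

struct Patient { name: String }

impl Resource for Patient {
    type FieldMut<'a> = &'a mut String;
    fn get_field_mut(&mut self, field: &str) -> Option<Self::FieldMut<'_>> {
        match field {
            "name" => Some(&mut self.name),
            _ => None,
        }
    }
}

fn main() {
    let mut p = Patient { name: "Ada".to_string() };
    if let Some(name) = p.get_field_mut("name") {
        name.push_str(" Lovelace");
    }
    assert_eq!(p.name, "Ada Lovelace");
}
```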
So the other consideration was: well,
G
if you go this static-typing way and use traits, compile times become terrible, because all your code that uses that — parameterized by a type T implementing this Reference trait — will be monomorphized for each data structure. Basically, if you have validation code using that Reference, you will have four versions of this validation code emitted by the compiler, because for every possible implementer of this Reference type there will be a copy generated, more or less. And that increases binary size and compilation time.
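The trade-off being weighed can be sketched like this (hypothetical names; the generic function is monomorphized into one machine-code copy per concrete type, while the `dyn` function is compiled once and dispatched through a vtable):

```rust
trait Reference {
    fn text(&self) -> String;
}

struct Patient;
struct Observation;

impl Reference for Patient {
    fn text(&self) -> String { "patient".to_string() }
}
impl Reference for Observation {
    fn text(&self) -> String { "observation".to_string() }
}

// Static dispatch: one copy of this function per T in the binary —
// larger binaries, longer compiles.
fn validate_static<T: Reference>(r: &T) -> String {
    r.text()
}

// Dynamic dispatch: one copy total, shared by every implementer.
fn validate_dyn(r: &dyn Reference) -> String {
    r.text()
}

fn main() {
    assert_eq!(validate_static(&Patient), "patient");
    assert_eq!(validate_dyn(&Observation), "observation");
}
```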
G
So what's the solution I could reach for? Well, maybe we should go with trait objects. That's not something that is, you know, popular in the Rust world; there's kind of this tendency of: well, trait objects — no, you're supposed to use static polymorphism, that's the power of Rust, and everything. But I think for enterprise software it's more important to be decoupled. I
G
don't know — I think trait objects are fine, because we're not working on kernel drivers, which have to have this perfect performance and everything. And trait objects, as a side effect, work really nicely with borrows, so you can even have this mutable access: if you have this get_field_mut, it takes self by &mut, and it can return a reference to the
G
child object, and the borrow checker will make sure that while you're holding this borrow on the child, you cannot modify the parent. So everything works in a very natural way. But unfortunately, trait objects are harder to make work for other types. They work nicely for structs: if you generate a struct, you can just directly implement the trait objects on it, and then the implementation is pretty straightforward — when somebody asks for a field, you just return a reference to that field. But there are some challenges:
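The type-erased, trait-object version of that field API might look like this minimal sketch (names are hypothetical); note the borrow checker still enforces that a borrowed child blocks access to the parent:

```rust
trait Node {
    // Leaf nodes can expose a primitive value.
    fn as_str(&self) -> Option<&str> { None }
    // Navigate into a named field; the returned trait object borrows from self.
    fn get_field_mut(&mut self, name: &str) -> Option<&mut dyn Node>;
}

impl Node for String {
    fn as_str(&self) -> Option<&str> { Some(self) }
    fn get_field_mut(&mut self, _: &str) -> Option<&mut dyn Node> { None }
}

struct Patient { name: String }

impl Node for Patient {
    fn get_field_mut(&mut self, name: &str) -> Option<&mut dyn Node> {
        match name {
            "name" => Some(&mut self.name),
            _ => None,
        }
    }
}

fn main() {
    let mut p = Patient { name: "Ada".to_string() };
    let child = p.get_field_mut("name").unwrap();
    // While `child` borrows from `p`, touching `p` directly is a compile error:
    // p.name.clear(); // error: `p` is already mutably borrowed
    assert_eq!(child.as_str(), Some("Ada"));
}
```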
G
you can't really make them work for, let's say, Vec — I mean the data structure — because there's nothing to borrow from: you're not borrowing something that already exists, you would need to create something. I have some ideas for how that could be done, but you really have to look again at this one-way-door kind of thinking and explore other options for how we can make this trait object work.
G
First of all, with dynamic dispatch, it is slower. But more importantly, you are not letting the compiler look through the trait objects and do optimizations which would be available otherwise. In the end, what we ended up with is: we have regular Rust structs; they implement this set of traits; we use macros to derive those trait objects; and we have custom JSON and XML serialization that uses those trait objects to deserialize the data. And what I was curious about is: how do we compare to serde?
G
So we had these original structs, which used serde for deserialization, and now we had built our own deserializer, which uses those trait objects — that dynamic dispatch — to load the data. Performance-wise, it turned out it was like two, two and a half times faster, for some reason. I'm not quite sure about those numbers; I think there's something I'm missing there.
G
It's not exactly an apples-to-apples comparison, but loading the same data types into the old structures just using serde, versus doing it using our dynamic dispatch — the dynamic dispatch, surprisingly, is like two times faster. There's also a drop in binary size, because serde generates a lot of code: for those thousands of structs, it generates thousands and thousands of functions to deserialize those structs. But in the case where you're using trait objects, we have like one copy of deserialize; it doesn't care — it just gets a trait object as a parameter. And compile times, too, seem to be faster, which is also good.
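The "one copy of deserialize" point can be sketched like this (hypothetical trait and field names, with the parsing elided to key-value pairs): serde's derive emits one `Deserialize` implementation per struct, while a trait-object loader is compiled once and shared by every type.

```rust
trait Settable {
    fn set_field(&mut self, name: &str, value: &str);
}

struct Patient { name: String }

impl Settable for Patient {
    fn set_field(&mut self, name: &str, value: &str) {
        if name == "name" {
            self.name = value.to_string();
        }
    }
}

// Exactly one machine-code copy of this loader exists in the binary,
// no matter how many types implement `Settable`.
fn load(target: &mut dyn Settable, pairs: &[(&str, &str)]) {
    for (k, v) in pairs {
        target.set_field(k, v);
    }
}

fn main() {
    let mut p = Patient { name: String::new() };
    load(&mut p, &[("name", "Ada")]);
    assert_eq!(p.name, "Ada");
}
```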
And then — that was the third example. I have a final thought. I would say building this kind of software was really, really fun, because I had to figure out a lot of things. I think we're kind of using Rust in a direction that is not the direction
G
I would consider the primary direction of Rust — that is, for enterprise software and for building applications — though I think recently people have started using it this way. Originally Rust was more of a systems language, for replacing C and C++, or for doing high-performance things. So it was fun to figure out a lot of these things
G
ourselves. And I would say I found parts of the ecosystem and the language a bit immature, which is expected given Rust's age, and there are some interesting engineering challenges. And that's all I have. Oh — as a last item: we are using Rust a lot, we get a lot out of it, and we should be contributing back to the community.
G
We are not really contributing a lot at this point, but we have a couple of crates we created: a JSON Patch crate, and a crate that I created for data-driven tests — you define your tests as data files, and it will generate Rust tests out of them. And I also had this crazy experiment of building vtables at runtime; that one is more dynamic — it's like, you know, I'm not sure you should use it.
G
It uses a lot of undefined behavior — it has a lot of undefined behavior in it — but the pretty cool thing is that you can basically attach any trait to any data type, to any data instance. You can even attach different implementations of the same trait object to the same data instance. I find it pretty — well, it was an exciting experiment, anyway. And yeah, that's all.
H
G
A
G
Context is not sufficient, I think, when it comes to all of these requirements; if you drop some of them, you can use Context. I think the problem was if you used different concrete types. In the end, what you're getting is: okay, you had a trait object, Fail, and you have functionality to scan this chain, and every element would be the trait object Fail — now what are you going to do from that point? There are only two things you can do: you can ask that Fail trait object
G
to give you its string representation, like a text message — and that's it. And the other functionality is that you can downcast it to a specific type. So the question is: what specific type would you use? How do you extract that? I guess it would be possible to do it in a way where we would use a specific Context, and the string representation would be some sort of JSON, and you could convert the whole error chain into a string and then parse the JSON and extract the information like that.
G
That would destructure this whole chain back into specific data types and be fully statically typed. But once you introduce this universal error type — this Fail, or Error — the whole type information is hidden: you get these errors, and there's nothing you can do except for this dynamic functionality, like downcasting it. Okay.
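The two options behind a universal error type — string form or downcast — can be shown with the standard `Error` trait (the error type here is made up for illustration):

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct NotFound;

impl fmt::Display for NotFound {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "resource not found")
    }
}

impl Error for NotFound {}

fn main() {
    // Once the concrete type is erased behind `dyn Error`...
    let err: Box<dyn Error> = Box::new(NotFound);
    // ...option 1: ask for the string representation.
    assert_eq!(err.to_string(), "resource not found");
    // ...option 2: try to downcast back to a specific concrete type.
    assert!(err.downcast_ref::<NotFound>().is_some());
}
```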
G
We use them in some places, but I think the Cow data type kind of, you know, unifies these two worlds, in a weird manner: either it has a lifetime and you're borrowing, or otherwise you just own whatever it is — "I'm just making this whole copy thing and moving it here." In the case of, say, a transaction, we can't really clone a transaction to have an owned one from it; it still has this lifetime, so that's problematic. I —
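The "two worlds" point about `std::borrow::Cow` in a minimal sketch: the value either borrows (no allocation) or owns a freshly made copy, but both variants share one type — which still carries the lifetime parameter even when owned.

```rust
use std::borrow::Cow;

// Returns the input unchanged if it already ends with '!', borrowing it;
// otherwise allocates a new String with '!' appended.
fn shout(input: &str) -> Cow<'_, str> {
    if input.ends_with('!') {
        Cow::Borrowed(input)              // no allocation
    } else {
        Cow::Owned(format!("{input}!"))   // had to create an owned String
    }
}

fn main() {
    assert_eq!(shout("hi!"), "hi!");
    assert_eq!(shout("hi"), "hi!");
    // The second call took the owned path.
    assert!(matches!(shout("hi"), Cow::Owned(_)));
}
```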
G
Yeah — a static Cow. I would say, honestly, I think the functionality of a static Cow should be provided by String itself, and I think there are some RFCs about that: it should be possible to create a String object without any allocations, out of memory that you already have statically allocated — from a statically allocated string. If you had that, you wouldn't even need Cow.
G
For example, the String itself could internally have a bit in its pointer address — like a low bit — saying whether it's a static string or an owned string, and behave differently on drop: if it's owned, I would drop it; otherwise it's the &'static str, and I'm not going to drop it. I mean, I don't know the details — whether you're even allowed to use extra bits inside pointers; probably you shouldn't, it's probably very platform-specific or something.
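The speculated bit-in-the-pointer trick can be sketched safely with an enum instead of actual pointer tagging (a hypothetical type, not anything from the standard library): a string that is either `'static` (never freed on drop) or heap-allocated (dropped normally).

```rust
enum MaybeStatic {
    // Static data: dropping this variant frees nothing.
    Static(&'static str),
    // Heap data: dropping this variant frees the allocation.
    Owned(String),
}

impl MaybeStatic {
    fn as_str(&self) -> &str {
        match self {
            MaybeStatic::Static(s) => s,
            MaybeStatic::Owned(s) => s,
        }
    }
}

fn main() {
    let a = MaybeStatic::Static("no allocation");
    let b = MaybeStatic::Owned(String::from("allocated"));
    assert_eq!(a.as_str(), "no allocation");
    assert_eq!(b.as_str(), "allocated");
}
```

A real tagged-pointer implementation would pack this discriminant into the pointer's spare bits, which is exactly the platform-specific territory the speaker is wary of.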