From YouTube: Rust and Tell Berlin - March 2020
Description
https://berline.rs/2020/03/31/rust-and-tell.html
Rust & Tell Berlin, the monthly event to share ideas and learn about new things in and around Rust, went fully online for the first time.
Talks:
#1: 00:09:25 - Integers in the Small, Big, and Macro by Bram Geron
#2: 00:32:12 - Tremor - A Rust Project from Wayfair by Heinz Gies
#3: 01:03:39 - Async HTTP by Yosh Wuyts
A: All right, so we're going to get started. Welcome everyone to Rust and Tell Berlin for March. We're doing things a little bit differently this month. Everyone here can probably tell we are not somewhere in Berlin, but rather on the interwebs; shout out to anybody who is joining us who does not live in Berlin. Welcome to probably the first Rust and Tell Berlin that you've been to; we're happy to have you here. We are going to have a great night of talks, and also a small challenge at the end, and hopefully things will go smoothly. This is the first time we're doing this event online, so bear with us. If you have any questions, any technical questions about the stream or anything like that, feel free to write them in the Zoom chat. We also have our Matrix chat.
A: Hopefully you have the instructions for joining that; that's where we'll do more of the discussion around the actual talks and things like that. So, real quick, a little bit about who we are. I'm Ryan Levesque, and you can't see my Twitter handle because it's covered by Ferris, but I'm @ryan_levesque on Twitter, so feel free to reach out to me.
B: And I'm co-hosting; I've also been a full-time Rust developer for about a year, and you can reach out to me, find me on LinkedIn or on my website. The concept for the meetup, which for the first time is fully digital, is that even here it's for everyone, from beginners to experts. We just want to hear about your ideas, your struggles, your hacks or projects.
A: And really what that means is that we would like you to participate, and that also goes for people who are joining from outside of Berlin. I'm pretty sure we're going to be doing this online again in the future; unless things go really well, then we will be doing this online in the future. And so we are open to having talks from people who are not based in Berlin, and we really want you to speak about anything related to Rust.
B: We follow the Berlin Code of Conduct. We want to have a safe and fun space. Every one of us is spending their free time, their private time, on this meetup, so we want to behave nicely. You can share insecure code, you can share what you want, and you shouldn't be judged for it. And if you feel not welcome here, or you feel that something happened at a meetup, please report it to us privately; you can create a fake email address and email us.
A: So this slide is normally for our sponsors. We don't have a sponsor this month, but we would like to give a shout out to Jan-Erik, who has helped us set up the Matrix and the Zoom. This is based off of the infrastructure that was used for the Oxidize 1000 conference that happened last week, which was really great. For those that weren't there, the talks are now online, so check that out. So thank you very much, Jan-Erik, for your help.
A: A little note on breaks: we will have one roughly 10 to 15 minute break after the second talk, and we'll break out into two breakout rooms where, if you are feeling up to it, you can chat with your fellow Rustaceans about anything and everything, and then we'll bring you back when the talks are ready to go. And of course, feel free to mute your mic and your webcam if you don't want to participate; there's no obligation there.
C: Good, yeah, welcome, and thanks for organizing this, Ryan and everyone. Stuff like this really breaks up my week in a nice way; it makes it a lot less boring and monotonous in these testing times. I hope everyone's okay, given the circumstances. So my talk is going to be about a crate that I wrote, called smallbigint. Can you see my cursor? Oh, by the way, I'm assuming you can, because I'm using it to point at stuff; if so, awesome, thanks.
C: The slides for this talk are at this URL, and there are also some links in there, and the presenter notes, if you want. My name is Bram Geron. I have a company called Confirmo; we make data virtualization and governance stuff for enterprise, but it's not related to Rust. I think Rust is really amazing, because I find it super satisfying that when you create a program (maybe it's not always the easiest language to create your program in, the first time) but when it works, it really works.
C: You don't tend to get null pointer exceptions, or the lot of funny stuff that you can run into in production, where you see a thing that you didn't expect when you wrote the program, and then it takes a lot of time to fix it. With Rust, that's less common. I think a lot of this comes from the type system; the compiler just knows what is and isn't allowed, stuff like that. But I think another part is because abstractions are so cheap. A newtype, for instance: a struct with a single field.
C: They are practically free at runtime. Not entirely free at compile time, but that's a small overhead, and it's really negligible. And I think this allows people to create abstractions more often; they don't sweat over whether it's worth it, you just make a new struct, stuff like this. Not just newtypes, either. I think it really helps to make a great community, with great APIs and great everything.
C: So, numbers. There's a lot of manipulation that you want to do with numbers, and it's really useful when the numbers just work, but that depends on how big they get. You see, if you've got this really big number in JavaScript, then you can't do +1 anymore. You can do +2, you can do +3, but it rounds in a really funny way, and so I think numbers are often something that doesn't just work. Yeah, and it's a pity.
C: It could be fun to explain. Anyway, the thing that you just saw happens because JavaScript uses 64-bit floats, and that means there are about 53 bits of precision. If your numbers go above 53 bits, then you can only store every other number. So +2 works, but +1 doesn't work anymore, and when the numbers get even bigger, you get even less precision. And this is a surprising thing: you probably expect numbers to just work, because they almost always work, but sometimes, yeah.
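The 53-bit limit he describes can be demonstrated from Rust using f64, which is the same IEEE 754 double that JavaScript numbers use; a small sketch:

```rust
fn main() {
    // 2^53 is the last point at which f64 still has unit precision.
    let big = 2f64.powi(53); // 9007199254740992.0
    assert_eq!(big + 1.0, big); // +1 is lost to rounding
    assert_ne!(big + 2.0, big); // +2 lands on a representable value
}
```

Past 2^53 only every other integer is representable, which is exactly the "+2 works, +1 doesn't" behavior from the talk.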
C
You
suddenly
get
something
even
expects
rust
or
something
different
I.
Think
it's
better
is
also
not
perfect.
So
in
Russell
you
know
big,
it's
still
big
sign
your
program
crashes.
It's
really
annoying
that
you
can
fix
it
and
then
it's
probably
easier
to
debug
the
crash
than
to
debug
a
wrong
answer,
but
still
a
bit
annoying
other
languages
tend
to
do
wrapping
integers,
which
you
cannot
see
you
in
rust.
If
you
want
to
so
then,
if
you
have
a
big.
C
Your
multiplier,
then
maybe
you
get
zero
Lawson
or
maybe
you
get
something
negative
or
something
else
funny,
also
not
really
ideal
and
I
come
from
I
used
to
use
a
lot
of
Lisp
and
Python
in
the
past
and
as
language
tends
to
have
big
uns,
which
are
optimized
and
everything
just
works
and
I
thought
it's
a
really
nice
experience
that
wants
it.
Make
it's
slightly
easier
immersed
as
well.
I
just
want
to
say
from
that
I
didn't
do
any
of
the
hard
work.
The
real
hard
work
is
done
by
this
great
non-vegans.
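The two machine-integer behaviors he contrasts, crashing versus wrapping, are both available explicitly on Rust's integer types; a small illustration (plain arithmetic additionally panics on overflow in debug builds):

```rust
fn main() {
    let big: u8 = 200;
    // Explicit wrapping: 200 * 2 = 400, which wraps to 400 - 256 = 144.
    assert_eq!(big.wrapping_mul(2), 144);
    // Explicit checking: overflow is reported instead of silently wrapping.
    assert_eq!(big.checked_mul(2), None);
}
```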
C: But let me just explain how they work. So, with humans: if you want to use numbers, you've got the digits, just 0 to 9, and people learn that three plus four is seven. You have those digit facts, and if you add some of those digits together, you know what you get; you just learn it by heart. But that's not scalable, of course.
C
So,
after
seven,
eight
nine,
you
need
to
go
to
one
zero
and
then,
after
one
nine,
you
need
to
go
to
zero
and
after
nine
nine
years
ago,
two
one
zero
zero,
and
this
makes
things
unlimited
and
they're
certain
algorithms
for
calculating
so
there's
a
sofa
in
London,
primary
school,
probably,
and
so
no
big
and
the
supreme
was
saying,
but
in
a
bigger
sense.
So
big
humans
is
the
type
of
numbers
that
we're
gonna
use
that
we
use
internally
in
this
crate
and
it's
just
a
vector
of
digits.
The
digits
are
0
to
9.
C: How do we make bigints smaller and faster? That's what the crate does, and I just used the SmallVec trick. SmallVec is a type of vector, but when it's small, it goes on the stack. So you can make a SmallVec of size 8 integers, and then when you have 5 integers in it, it stays on the stack; if it gets more than that, it goes to the heap. And we just do the exact same thing.
C
So
small,
big
uns,
the
small
numbers
go
on
the
stack
big
numbers
go
on
the
heap
you
see.
This
is
the
non
begins,
type,
so
stroke
Givens,
and
then
we
just
either
have
small
one
I've
chosen,
you
32,
but
it
could
be
anything
I
guess
either
this
or
we
have
a
reference
to
a
known
reference
to
ambiguous
on
the
heap.
So
when
it's
small
it
just
stays
on
the
stack,
this
thing
is
16
bytes,
so
it's
still
fairly
begging
could
be
optimized,
more
I.
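A minimal sketch of the representation he describes; the name Uint and the heap payload here are illustrative stand-ins, not the actual smallbigint internals (which wrap num-bigint):

```rust
// Hypothetical layout: a small inline value, or a boxed big integer.
// Box<Vec<u32>> stands in for a heap-allocated num_bigint::BigUint.
enum Uint {
    Small(u32),
    Big(Box<Vec<u32>>),
}

fn main() {
    // Tag plus an 8-byte-aligned payload comes out to 16 bytes on a
    // 64-bit target, matching the size quoted in the talk.
    assert_eq!(std::mem::size_of::<Uint>(), 16);
}
```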
C: We need to change the types: this was u32, okay, now use my new type instead. This example, by the way, is in the repository of the crate, so you don't need to copy anything. Instead of these literals, we need something like a small-value constructor, which looks like a function call, but it's a const, so it doesn't take up any time. Now, you still see we've got some compile errors, and this is because the type, unfortunately, is not Copy, because in some cases it can contain a reference to the heap, and types like that
C
I
never
copy
so
we're
gonna
do
is
in
some
cases
we're
gonna
ask
references
all
the
function.
Arguments
were
gonna,
make
references,
and
then
we
need
to
has
references
to
them
as
well.
So
that's
here
and
then
there's
a
couple
more
things
left.
So
any
comparison
needs
to
be
between
the
same
things
are
the
reference
and
a
EWTN's
or
a
serve
between
a
reference
and
orphans
or
humans
and
humans,
or
another
type
of
number.
C: It's always going to be a bit slower on big numbers, and yeah, you see that on small numbers we're also slower than machine integers. But of course the advantage is that here we can have any size of integer, so we don't suddenly crash when you're trying to factorize a big number, and we're significantly faster than plain num-bigint. So there are always trade-offs, but I think it can make for a nice default. For the project I'm trying to use this for, we really want to stay on the stack.
C: Writing all of this out would be really boring, and I also don't want to copy and paste it, so I want to do something about this. There is some duplicated code between the unsigned and the signed type, and I just copy-and-pasted that and changed it by hand, because there it's not worth trying to add extra layers of abstraction. But for this kind of repetition it's really not worth writing it all out; it could get so big, and I think there's something reasonable that we can do.
C: You have to find the right abstractions, and the trait system is a little bit limited. In a different application I've run into a brick wall somewhere: I started something, but the type system just couldn't deal with it, which was a bit of a pity, and it's also hard to explain to people who might want to come to your project. So instead, what I'm doing is macros by example, and that works great, actually. I think in general code generation always works, and especially in Rust.
C: It's also pretty usable; the code is still fairly legible. It's pretty robust, everything is checked at compile time, and the IDE and the error messages are also pretty good; you get the nice red squiggles. Yeah, so this is an example of the things we want to generate. If you do 5 minus 3, then that should work; if you do 3 minus 5 as unsigned integers, that should not have a result. This is what checked_sub does, and so there are two cases: either it's too small,
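The behavior he wants the wrapper to mirror is what Rust's machine integers already expose through checked_sub, returning an Option instead of crashing:

```rust
fn main() {
    assert_eq!(5u32.checked_sub(3), Some(2)); // 5 - 3 works
    assert_eq!(3u32.checked_sub(5), None);    // 3 - 5 underflows: no result
}
```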
C: and what I do there is I convert both to the big type, unless they're big already, and then use checked_sub from the big type. So, two cases; but this has to be done for the unsigned and the signed type, and also for a couple of other traits. How I would like to do it (I couldn't get this to work, unfortunately) is with a macro like this: for all types in the unsigned and signed set, implement this code.
C: It doesn't work, because macros by example can only expand to statements, expressions, items, and a few other positions. I fiddled around a bit with macro_rules macros, and it's probably not possible. So instead I found the following pattern. It's not as elegant, but we just define a new macro for everything that we want repeated. So here we want to repeat checked_sub for the machine integer and the big type, and then the variables come from the macro itself, and, you know, it's a bit less beautiful, because now you have
C: It's okay, so yeah, here we just call this macro twice. But then you can do other tricks: I made a little macro call with arguments, first with the unsigned type and then with the signed type, so a second macro calls this twice, and then you can extend this.
C
You
can
add
these
arguments
so
here
we
have
stretched
from
also
traits
that
are
influencing
and
the
methods
that
were
using
so
for
checks
of
waiting
to
use
the
method.
So
there's
our
variables
here,
and
so
we
call
check
traits
it's
wise
for
you
and
some
hint
and
then
also
six
more
times
for
different
traits
and
different
methods.
I
found
it
quite
nice,
there's
some
other
loads
that
you
can
get
if
you
implement
a
lot
of
trades
like
I
didn't.
This
is
great,
so
brush.
C: with the machine integers, I wanted to do this for all machine types, so u8, u16, and so on, and this is a bit much; it can be a bit hard to see if you really list all the machine integers. So I just wrote a little helper macro that basically calls the given macro with all the machine integer types, and another with all the unsigned base types. I think it's super easy to read, and macro programming can be really nice, I found. That's my conclusion.
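The helper he describes, a macro whose only job is to call another macro once per machine integer type, can be sketched like this (the names are illustrative):

```rust
// Records size_of for a single type into a Vec.
macro_rules! push_size {
    ($v:ident, $t:ty) => {
        $v.push(std::mem::size_of::<$t>());
    };
}

// Calls the given macro once for every unsigned machine integer type,
// so the full type list is written out in exactly one place.
macro_rules! for_all_unsigned {
    ($m:ident, $v:ident) => {
        $m!($v, u8);
        $m!($v, u16);
        $m!($v, u32);
        $m!($v, u64);
        $m!($v, u128);
    };
}

fn main() {
    let mut sizes = Vec::new();
    for_all_unsigned!(push_size, sizes);
    assert_eq!(sizes, vec![1, 2, 4, 8, 16]);
}
```

Adding a type to the list then updates every implementation generated through the helper at once.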
C: So: arbitrary-precision integers are quite fast and easy, unless you want to do it in a talk, and you can practice the live coding as often as you want, but there's still always something; I found it easy in practice, though. The types are unfortunately not Copy, so you do have to do some referencing here and there. Code generation can be decent, at least in Rust. And cheap abstractions don't just look nice; I think they're important for the ecosystem and for your APIs, and they just lead to better-quality code.
A
Alright,
thank
you
very
much
since
you
can't
hear
the
class
there's
clapping
on
myself
and
real
quick
before
we
get
to
the
questions.
One
note
if
you
do
not
want
to
be
recorded,
then
make
sure
that
your
video
is
off.
Otherwise
this
is
recorded
and
you
will
be
seen
in
the
video,
so
please
make
sure
to
every
everybody
in
the
in
the
audience
to
turn
off
your
video,
unless
you
want
to
be
recorded
for
all
of
time
and
clapping
is
now
happening
inside
of
the
zoom.
C: That's a good idea; I didn't consider that. Let's see how big that would be. So in the small case, you would have a u32, I guess one u32, because that would be the length of your SmallVec, and then you want the capacity as well. So it's a good idea, but I think you would still end up with 16 bytes with either implementation; yeah, 16 bytes, I guess.
E: First of all, yes, the same with JSON; people will definitely consider that and do that, so thank you, Bram, that's going to be awesome. Okay, let's get started. I'm going to talk about Tremor. Tremor is something we built at Wayfair, and it killed a thousand and more cores that were in the cloud, so I think we can say we are saving the planet, one core at a time. Anyway, the agenda, even though it's only 20 minutes.
E: So, Wayfair: if you want to go buy a rug, now that we're all at home, go buy a rug. We have about a thousand people working in Berlin and 25,000 worldwide between engineers, marketing, sales, people in warehouses, people who deliver, and whatnot. So, not a small company, and as a result, we do have a few computers running and doing stuff.
E
Who
are
we
as
a
team?
We
are
a
small
team.
We
had
two
people
in
Berlin.
That
is
that
he
hangs
around
in
the
chat,
say
hi
to
him
and
in
Berlin
with
me,
and
we
have
a
new
in
Boston
he's.
Also
in
the
chat
say
hi
so
represents
our
Boston
part
of
the
team.
We
do
system
engineering,
it
wafer
is
the
first
team
doing
that
and
what
we've
been
building
about
the
last
year-and-a-half
give
or
take
em
is
trauma
water,
I'm
going
to
talk
about
today
and-
and
yes,
we
do
sometimes
talk
about
it.
E
If
you're
curious,
we
have
all
Twitter
accounts.
Well
battle
had
me
have
Twitter
accounts
of
the
new
perfuses
to
be
on
Twitter,
but
who
knows
perhaps
that
account
will
exist
eventually
go,
say
hi.
We
also
have
one
for
tremor.
That
is
at
the
end,
so
you
can
copy
it
because
we
want
to
you
all
to
remember
em.
So
what
is
tremor
and
tremor
is
an
event
processing
engine,
and
it
is
also
an
ETL
language.
It
is
also
a
query
language.
E
Then
we
have
replaced
locks
and
I
am
sure
a
lot
of
you
are
familiar
with
it.
It's
Java
thing
doing
lots
viously
with
elasticsearch
and
friends
in
waving,
and
we
have
replaced
Telegraph
the
influx
thing
that
does
the
last
mile
between
our
case
Kafka
and
influx
to
be
completely
with
tremor
we're
now
starting
to
get
integrated
it
with
kubernetes.
So
there
is
more
stuff
to
be
going
and
I
forgot
to
put
it
on
a
slide
or
delete
at
the
slide.
This
morning,
when
I
read
at
the
talk,
we
was
this
rolling
out.
E
We
saw
saved
about
to
3,000
cores
and
went
to
reduction
on
infrastructure
in
this
realm
for
nearly
a
factor
of
10
and
so
for
every
10
bucks
that
were
there
now
there
as
well,
which
is
nice,
okay,
we'll
start
with
to
get
the
understanding
of
the
different
things
and
why
we
are
doing
what
we
are
doing
a
bit
of
cartography
about
event,
processing
systems.
Please
note
this
is
not
scientifically
accurate.
E
There
is
no
islands
in
event
presenting
that
just
programs,
and
but
this
makes
it
really
fun
to
explain
and
don't
come
near
me
with
facts
about
them.
This
is
going
to
be
funny,
I,
hope
or
really
embarrassing,
one
of
those
two
but
stay
away
with
facts
and
don't
take
it
as
bad.
Take
and
I
know
it
likes
nuances
spots.
Well,
here
we
go
and
so
first
group
of
event
processing
systems
is
the
you're
going
to
write
some
effing
Java.
E
The
second
is
the
archipelago's
off,
let's
cobble
together
transformations
effectively,
that
is
log
stash
and
friends
where
you
just
have
a
few
building
blocks
that
are
programmed
by
the
team
that
builds
log
stash
and
you
configure
them.
You
change
them
in
a
well
in
whatever
fashion
you
want
to
do
what
you
want,
that
is
usually
used
by
operational
teams
to
just
need
to
get
done,
which
is
fair
shout
out
to
them
there
at
the
front
lines.
We
as
developers
have
the
happy
job,
and
so
they
like
to
get
their
stuff
done.
E: The third group sits a bit between the two others; well, it takes some of the one and some of the other, and it comes at the cost of having, as was mentioned before, a rather big runtime. That runtime allows you to do most of the logic you'd usually have to write in a programming language like Java in a scripting language that is easy to learn for operational folks, and not so heavy, not so resource-intensive as something heavyweight.
E
So
that's
it
for
the
little
exercise
and
geography
for
event,
processing
systems
and
let's
talk
a
bit
about
tremor
script
because
well
we
talked
about
scripting.
So
this
is
the
natural
conclusion
of
that
and
tremor
script
is
a
ETL
language.
So,
basically,
what
happens
was
locks
in
wafer?
Is
we
parse
them?
We
transform
them
training
fields
changing
date,
hand
them
the
filters
some
out.
We
care
about
sanitize
them,
and
it
is
mostly
and
I'm
embarrassed
to
say
that
Jason
s
data,
not
not
a
fan
of
Jason,
but
it
is
well
we're
living
in.
E
So
it's
active.
It
is
not
a
language.
We
love
in
the
sense
of
that.
We
didn't
design
it
to
be
a
beautiful
language.
We
designed
it
to
be
a
language
which
gets
the
stuff
done
there.
Operational
teams
need
to
get
done
and,
and
sometimes
those
two
things
are
at
odds
as
a
programmer.
You
want
some
beauty
in
the
language
as
a
rational
person.
You
just
wanted
to
get
it
done.
So
we
came
from
that
aside
on
it.
It
is
heavily
influenced
by
rust,
for
obvious
reasons
and
by
Erlang
Sims.
E
Both
style
and
me
have
been
happier
line
users
for
years
and
we
try
to
spread
the
Erlang
goodness
wherever
we
can.
Last
but
not
least,
there's
a
good
pinch
of
what
was
needed
in
there.
So
we
looked
at
all
that
we
had
where
we
had
to
pick
fortune
to
being
able
to
look
at
thousands
of
nodes
whose
configurations
and
see
how
they
are
used
and
how
you
can
making
how
you
can
make
some
kind
of
this
more
efficient,
more
easy
to
use
more
friendly
for
the
operator,
while
also
making
it
fast.
E
So
well,
let's
jump
into
it.
We
are
going
to
release
the
version
8
of
tremor
and
we
decided
we
are
going
to
do
that
over
the
last
few
days
and
got
it
nearly
ready
today.
So
that's
why
I
rewrote
the
entire
talking.
I
was
going
to
talk
about
something
slightly
different,
but
no,
we
are
going
to
talk
about
the
o8
release
and
partially
because
we
are
following
an
RFC
driven
process
or
try
to
follow
an
RC
driven
process,
and
this
is
in
the
last
stretch
of
it.
E
So
if
any
of
you
is
interested
after
this
to
give
their
commands
and
tell
us
what
we
did
wrong-
and
please
do
so
I'm
going
to
share
a
bit
about
what
is
new
in
the
o8
release
and
we
are
going
to
introduce
new
modules,
so
logical,
encapsulation
of
code.
We
are
going
to
use
use
to
introduce
a
use
keyword
which
allows
us
to
include
those
modules
from
different
files
and
make
structuring
the
code
easier.
I
suspect.
E
Most
of
you
will
see
the
rust
influence
here
and
we
literally
stole
the
keywords:
I
hope
we're
not
going
to
be
sued
by
Mozilla
and
we
added
functions.
So
it
is
possible
to
write
your
own
functions
with
logic
in
them
and
intrinsic
stew.
Hookup
rust
functions
easily
over
this
new
function
module.
So
let's
look
at
the
components
of
those
and
first
of
all
modules
there
I
would
say,
while
probably
the
most
powerful
of
the
set,
also
the
most
boring
and
they
just
encapsulate.
You
can
have
your
module
and
you
see
it
here.
E
It's
called
mod,
my
mod
with
and
we
have
a
constant
defiant
image,
which
is
the
answer
42
and
we
have
a
function
defined
in
it,
which
takes
two
arguments
a
and
B
and
adds
them
together.
Functions
called
add
for
obvious
reasons.
We
then
can
call
this
function
outside
of
the
module
by
prefixing
it
with
the
module
double
colon,
the
name
of
the
function,
my
module
answer
and
-19
again.
E
It
is
roughly
visible
that
there's
a
lot
of
rust
influence
there
and
with
a
bit
of
Erlang
mixed
in
with
waste
and
and
and
keywords,
so
they
can
be
nested.
If
you
want,
as
you
can
define
one
module
in
another
module
in
another
module
and
get
the
nested
structure
pretty
much
like
you
can
do
in
rust.
Something
you
can't
do
in
Erlang
by
the
way
which
you
always
missed,
so
we
made
sure
we
get
it
in
and
use
clause
which
you
can
see
it
on.
E
The
right
use
food,
the
little
box
with
the
code
above
it
is
the
content
of
food,
and
so
the
file
food
of
strimer
includes
constants,
not
equals
badger.
The
second
file,
the
one
we
are
calling
is,
is
using
use
food
to
include
that
file
and
then
it
matches
on
the
event
we
own
event,
processing
engine,
all
our
scripts
handled
in
event,
and
if
the
event
is
a
empty
struct,
so
in
Jason
and
and
we
output
the
strengths
not
followed,
but
by
whatever
was
in
the
constants,
not
in
the
module.
E
So
we
don't
cache
modules
in
the
sensor
that
we
have
multiple
source
files
at
the
end
that
get
linked
together
because
it's
an
interpreter
not
a
compiler,
so
we
just
concatenate
them
as
a
preprocessor
step
and
add
them
together
and
to
illustrate
that
at
the
bottom,
in
the
big
box,
you
can
see
how
the
preprocessor
output,
what
looked
like
and
those
files
are
taken
for
the
observant
amongst
you
from
a
test
called
PP
underscore
Ness
0,
and
we
have
a
line
directive
which
is
a
compiler
hint
later
on.
That
tells
us.
E
We
are
in
this
file
at
this
line
that
allows
us
to
have
error
numbers
and
line
numbers
and
errors,
and
then
the
module
foo,
which
we
included,
gets
put
into
a
module
clause,
so
module
foo
with
and
so
on.
The
content
get
added,
constants
not
equals
better
and
we
closed
the
module
we
created
with
a
semicolon,
and
after
that
we
have
another
line
directive,
which
now
tells
the
lexer
and
parser
and
compiler
later
on
that
we
are
now
even
so
technically
in
line
5
would
be
in
line
2
off
the
original
file.
E: So this already allows you to define a lot of small algorithms and abstract away part of your code into small functions that you outsource to a module somewhere, and go and reuse. This all was a bit of a background story, a whole thought experiment. Aside from us wanting functions: well, we don't do what we want, we do what we must. Besides us wanting them,
E
We
found
that
a
very
common
pattern
for
the
users,
we're
outsourcing
parts
of
code
into
templates
for
the
configuration
management
system
and
then
cobbling
them
together
inside
a
output
template
at
the
end.
This
is
horrible.
There
was
something
in
programming
that
solved
this
problem
before
which
is
modularity,
so
we
got
functions
and
modules
out
of
that
and
anyway
we
take
parameters.
We
combine
them,
so
we
call
this
notify
function.
We
define
was
better
and
the
result
of
that
is
not
better
and
but
that
is
not
all
and
we
thought
well.
E
We
like
Erlang
and
Erlang
has
this
wonderful
feature
of
allowing
you
to
write
a
function
with
patterns
in
the
argument
definition.
So
you
can
execute
different
kinds
of
bodies
based
on
the
arguments
that
are
passed
to
a
function,
something
which
would
be
really
lovable
and
rust,
and
I
saw
the
yacht
Josh
earlier
tweeting
about
something
like
that
with
enums
and
I
hope
that
gets
interests,
because
that
would
be
awesome
anyway.
E
I
digress
and
we
are
talking
about
functions
with
patterns,
so
we
use
case
for
patterns
that
is
a
Erlang
ism
in
a
way,
so
it
changed
from
function
named
arguments
or
width
to
function
named
arguments
off,
and
then
we
have
a
set
of
cases
and
a
default
case
of
none
of
them
hit,
and
so
the
first
case
compares
if
the
past
in
argument
was
a
string
was
a
content
of
better.
If,
yes,
it
returns,
not
bad
right,
hell,
yeah.
E
The
second
one
is
what
we
call
an
extractor.
So,
since
extracting
data
from
strings
is
a
incredibly
common
pattern
in
operational
use
of
the
engines
and
we
made
that
a
first-level
construct-
and
here
we
are
using
the
extra
JSON
extract
of
JSON
pipe
pipe,
which
looks
if
the
string
is
a
valid
JSON
and
then
creates
the
jason
out
it
out
of
it.
E
The
third
case
in
this
function
would
be
looking
at
the
input
s
and
if
it
is
a
string,
then
returns
not
followed
by
the
string
the
same
as
the
function
before
and
now,
if
none
of
those
caged
cases
matched
since
it
is
an
expression
based
language
and
we
always
have
so,
we
have
to
return
something.
We
have
a
default
case
and
that
just
tells
you
like.
Well,
you
call
that
with
something
that
is
not
working
so
putting
in
a
few
examples
here
that
would
be
notified
from
bedre
and
we
get
the
expected
hell.
E
Yeah,
it's
not
if
I
have
a
horse
which
could
a
snot
horse
snow
defy
of
42,
which
falls
into
the
default.
Well,
we
get
the
error
we
defined
and
last
but
not
least,
modify
with
a
jason
that
is
badger.
42
returns
a
purse
jason
of
the
type
of
an
object
or
record
in
tremor
with
two
keys.
One
is
bedre
42,
which
was
one
we
had
before
and
the
keys
not
because
it's
notify
set
to
true.
So
so
much
about
the
matching
on
functions
and
since
there
wasn't
enough,
we
figured
we
add,
variable
arcs,
so
functions.
E
E
My
brain
is
breaking
here:
we
define
a
functions,
not
all
the
things,
three
dots
for
this
as
bar
arcs,
you
could
prefix
it
with
a
number
of
given
arguments.
You
always
expects
a
minimum
number
of
arguments
and
in
this
case
we
don't
and
we
simply
have
snot
and
then
followed
by
the
argument.
So
if
we
call
snot
all
the
things
with
snore
as
three
arguments
bed
jaw
horse
and
cat,
we
get
snot
bad,
just
not
horse
and
snot
cat
out
of
it.
This
can
be
quite
powerful
in
itself.
E
If
you're
going
to
write
more
complex
functions
and
look
at
the
number
of
arguments
and
what
you
got
passed
in
last
but
not
least,
Sims
know
no
functional
language
or
semi.
Functional
language
is
complete
without
recursion
we
added
recursion,
so
you
can
write
functions
that
call
themselves
since
performance
is
something
we
really
care
about
and
we
enforce
tail
recursion
by
using
a
special
keyword
for
the
recursion
which
we
call
recur.
Thank
you
closure
or
someone
remembers
it
and
pass
it
the
arguments.
E
If
that
is
not
called
somewhere
in
the
tail,
then
well,
it
will
error
during
compile
time.
I'll
tell
you,
you
can't
use
recursion.
That
was
not
tail
recursion,
so
here
we
have
an
example
of
the
implementation
of
the
Fibonacci
sequence
and
we
write
two
functions
here.
One
is
Fibonacci
of
n,
which
calls
Fibonacci
underscore
of
zero
one
N
and
then
the
next
one
have
been
actually
underscore.
It
has
to
define
first
because
of
the
wise.
E
It
doesn't
know
what
to
call
which
prevents
trampoline
like
recursion,
so
it
has
to
be
fine
first
and
we
look
if
n
worse
was
larger
than
zero.
We
call
recursion
with
well
the
math
behind
it
if
it
was
larger
than
zero.
We
return
a
and
are
done
so.
If
we
one
run
this
in
a
comprehension,
not
a
loop
again,
nothing
infinite
can
happen
here.
So
we
run
it
over
a
range
of
the
numbers
of
zero
to
ten
and
call
Fibonacci
of
them.
E
We
get
the
array
0
1,
2,
3,
5,
8,
13,
21
and
30
for
the
recursion
depth
is
limited
you,
so
we
do
not
consume
extra
stake
for
them,
because
we
want
to
enforce
that
every
event
that
ever
goes
into
the
streamer
script.
Engine
is
deterministically
to
guarantee
to
go
out
and
not
block
forever,
so
the
default
we
set
it
to
is
I,
believe
8000,
but
that
might
become
more
configurable
later
on.
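The accumulator shape he walks through translates directly; here is the same two-function Fibonacci as a Rust sketch (tremor-script's recur corresponds to the self-call in tail position):

```rust
// Helper carrying the two running values; the recursive call sits in
// tail position, so each step only needs the previous pair.
fn fib_(a: u64, b: u64, n: u64) -> u64 {
    if n > 0 { fib_(b, a + b, n - 1) } else { a }
}

fn fib(n: u64) -> u64 {
    fib_(0, 1, n)
}

fn main() {
    // The comprehension over 0..10 from the talk, as an iterator chain.
    let seq: Vec<u64> = (0..10).map(fib).collect();
    assert_eq!(seq, vec![0, 1, 1, 2, 3, 5, 8, 13, 21, 34]);
}
```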
E
So
this
was
a
short
rundown
of
the
changes
in
camera
light
which
is
going
out
tomorrow
as
a
pre-release,
otherwise
I
get
in
trouble.
If
I
don't
say
that
if
you
want
come
by
comment
on
it,
give
us
your
ideas
share.
What
you
think
is
a
terrible
idea,
what
we
are
doing
before
it's
too
late
and
a
reminder
here
tremor
script.
This
engine
is
not
heart
rebound
tremor
the
project,
so
you
can
use
the
scripting
engine
inside
any
project.
E
You
like
there
is
a
crate
for
it
even
can
do
cargo
and
then
put
it
in
your
crates
tunnel
anyway.
Last
but
not
least,
we
wanted
to
use
the
chance
to
give
a
shout
out
to
a
few
open-source
projects
which
have
been
elemental
to
tremor
to
building
this
and
to
rust
community
as
a
whole.
To
be
honest,
because
it
has
been
a
really
really
really
great
time,
so
we
open
sourced
February
this
year.
E
We
have
been
preparing
for
this
for
nearly
a
year
because
we
wanted
to
do
it
right,
like
the
RFC
process,
like
getting
all
the
repositories
in
the
open
and-
and
we
are
by
now
at
the
point
where
all
development
happens
in
the
open.
There
isn't
even
a
internal
project
board
anymore.
They
are
just
get
up
issues
and
we
as
people
who
have
been
an
open
source
for
years
before
we
joined
way
fair
value
collaboration
and
that
is
hugely
important
to
us,
which
is
why
we
added
the
section
and
and
where
we
can.
E
E: we give back. They have built an allocator called snmalloc, or however you pronounce it; I can't get it out of my head anymore. It is heavily optimized for a producer/consumer pattern with multiple threads: one thread produces data, it gets sent to another, that thread consumes it and frees the memory again, which is a perfect alignment with what we are building with Tremor, and it has given us a huge boost in performance.
E
So,
if
you're
building
an
application
that
follows
this
patterns
of
producing
data
and
one
thread
and
consuming
it
in
another
gift
as
animalic
a
go,
the
the
people
behind
it,
especially
Matthew,
are
incredibly
incredibly
helpful.
They
reached
out,
we
talked
a
lot
to
them
and
as
a
result
of
the
benchmarking
we
did
to
them,
there
has
been
two
improvements
done
by
them
on
the
engine
which
gave
us
I.
Think
a
total
of
3
percent
more
throughput
and
two
more
are
in
progress
which
I'm
really
excited
to
see.
E
You might see this as a competition — even though we are not a tech company at Wayfair — but absolutely not: they have been wonderful. We have been collaborating with them on a few things, sharing ideas and sharing code. Vector is currently working on a protobuf deserializer that takes away the disadvantage of having to hard-compile protobufs into your application, so they can be loaded at start time instead, and we've been talking about generalizing interfaces for sinks and sources — or, as tremor calls them, onramps and offramps.
E
So: how you get data in and out of an event-processing system — and we hope to provide a bit of an ecosystem in Rust around that. And we have been talking about simd-json, which came from tremor, being integrated into Vector eventually, to increase the performance of their JSON parsing. As I said, the Rust port comes from tremor — we didn't do simdjson itself, because that's the next one.
E
Daniel Lemire did simdjson, and it is an incredibly great library. Again, the original library is in C, not in Rust, but it is some of the cleanest C code I have ever seen — it is a pleasure to read. If you like this kind of stuff, go check out the C library and look at some really great C code. It is blazingly fast.
E
It is just amazing how that chews through JSON, and we have ported this to Rust to be a bit more idiomatic — in the ecosystem, in the tools we use, in the interaction. It is compatible with serde to a large degree, and it is by far the fastest JSON parser you can get in Rust — by a factor of two in many cases. We have contributed back a few fixes and a few performance tweaks to simdjson upstream, based on measurements and experiences we made using the port in our Rust codebase.
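As a sketch of what using the Rust port looks like — the API here is assumed from the simd-json crate's documentation, so treat it as illustrative. Note that simd-json parses in place, which is part of where its speed comes from, so it takes a mutable byte buffer rather than a `&str`:

```rust
// Sketch only — requires the `simd-json` crate; API names assumed.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // simd-json mutates the input buffer while parsing.
    let mut raw = br#"{"name": "tremor", "fast": true}"#.to_vec();
    let value = simd_json::to_borrowed_value(&mut raw)?;
    println!("{:?}", value);
    Ok(())
}
```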
E
E
Ok, so again: thank you all, thank you for listening, and thank you for being an awesome community. If you're interested in what we have built, there is tremor.rs, which you can go to — all the links are there, everything from Slack to Twitter to our documentation and whatnot. Come by, say hi. We have about a hundred stickers left; if you want some, say a word. There's also a Twitter account called tremor devs, and you can follow us there — or not follow us there — and that would be it for the talk.
B
A
E
Tremor has multiple onramps and offramps. We mostly consume data over Kafka within Wayfair — that is where most of the data comes from. So yes, if you have a Kafka topic you read logs from, you can now read this Kafka topic from tremor. The HTTP API is not the same at all, and we didn't try to build a new Logstash, because, well, mistakes were made — no hate for Logstash.
E
E
Yes, that is a hard question, because it depends on the hardware you're running it on, the algorithms you put into it, and the data you are running through it — and benchmarks are always lies — but I will share our go-to benchmark, which was up to 500 megabytes a second of parsing JSON and processing JSON: filtering it, changing some keys in it. It's based on a production use case in Wayfair, just deserializing and then re-serializing it.
B
A
A
Please remember that our code of conduct still applies in the breakout rooms, and if you would not like to participate, it's fine to turn off your camera and your mic and step away, have a drink of water. We will be back in about 15 minutes with our last talk of the day from Yosh, and then a quick coding challenge for everyone — so enjoy the break, everyone. Thank you very much.
A
D
Yeah — oh, whoa, I can unmute myself, amazing. Hi everyone. Let me share my screen — screen shared, all right. Can everyone see this? Wait, wait, I see exactly one face. If you can see it, wave your hands. Amazing, okay, great — that's just the first check passed. Great, so: hi everyone, I'm Yosh. This talk is called async HTTP.
D
Okay — which means it's not about everything in async HTTP; it's just: here's a bunch of stuff that we have done in the realm of async HTTP. "We" includes me, Yosh, and my co-conspirators, collaborators, friends, esteemed colleagues: Ryan, who's right over there, and Friedel, who I don't think is on the stream, and many, many other people. It's about three libraries that we've written — co-written, basically. So yes, that's the talk for today; I'll try to keep it like a little short talk. Okay, yes, okay!
D
We're gonna take a quick look at http-rs today, which is our GitHub org and the place where we work on these things. We've got a look at some goals we set out recently — like, oh, here's some things that we want to do — and we're gonna look at what we did. That's the basic structure. So without further ado, a brief history of async Rust. Back in August 2016, futures were announced on Aaron's blog.
D
So that's almost four years — three and a half years ago. Futures were like: here are zero-cost state machines, and they're very cool and they're very fast. Then April 2018 came along, and pin started — initiated, we think, by eddyb, and then definitely boats took it home from there — which was like: here's how we can make these things even faster, so we don't need to allocate a whole lot in futures.
D
In the last couple of years a lot has happened. A lot of iterations have happened, and a lot of things have only recently become possible — like, gone mainstream — so we're still trying to figure out what's the best way to use async Rust: what are the right patterns to write, when should we use it, what are the trade-offs?
D
You know, we're getting a feel for it, and this will continue to shake out over the years, and many of the things that came before maybe were not the best options — we're still exploring how to go forward. But at the same time there's also this other tension, which is: in order to keep going forward, the parts that we agree on, we need to keep stabilizing those. So there's a part of exploration, but also a part of stabilization.
D
As you could see, stuff like pin got stabilized, stuff like the Future trait got stabilized, and as we go forward, likely we'll look at streams, we'll look at the AsyncRead and AsyncWrite APIs, and keep carving forward so that people can share more and we agree on more things. There's a bit of a tension there — I say it is because I get a lot of comments like "you're not stabilizing things", and I'm like: yeah, yeah, but we also need to keep exploring, you know? So there's some stuff there. Anyway, that's for us today.
D
D
There's a little surf example here, where you can see surf::get — you give it a little URL, then you await it, and then you get back a response. And you can give it more parameters if you like: surf::get(...) dot body, dot something — all sorts of parameters — and then it gets converted into a future once you call .await on it. Then, with streams, you can see a whole fun example.
There should be a response, and the response is streaming, so we can take a little request here — or, actually, a response — and pipe it out into a file, and it will copy all the bytes coming from the request and stream them directly into a file until the request is done. Which returns — ah, over here, does it? Yeah, it does. Anyway, so that's surf.
D
It's nice, a little convenient. Then there's tide — or at least what we're trying to get to with tide, which is a great name. You create a tide app, then at the route "/" for the HTTP GET method we give it a callback, an async closure, and then we return "hi" from there — hopefully a little hello world. We listen on localhost 8080 and asynchronously process requests. Now, the thing here is that we have this
D
almost working in this exact shape. The core of this code is three lines for an HTTP server with routing, which is fairly small and fairly competitive with a lot of other languages. Rust as a C replacement is how people often like to think about it, but the ergonomics part is something that is really good in Rust and, I feel, not necessarily always the focus — or the thing that's brought forward as one of the best benefits. But, you know, what can we say — yeah, there we go.
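The hello-world being described — roughly the "three core lines" — looks like this sketch; the tide and async-std crates are assumed, and endpoint signatures have shifted between tide versions:

```rust
// Sketch — `tide` and `async-std` crates assumed; details vary by version.
#[async_std::main]
async fn main() -> std::io::Result<()> {
    let mut app = tide::new();
    // The async closure is the request handler; it just returns "hi".
    app.at("/").get(|_req| async move { "hi" });
    app.listen("127.0.0.1:8080").await?;
    Ok(())
}
```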
D
D
Alright, so that's a little brief history of async and Rust and some of the stuff that's going on today. So what are the goals? We set out with a few goals. We already had surf and tide, and these things look really good, I think — they're quite usable — but we had some problems internally with the layers that we were using. There was a lot of patching over things, abstracting over things; it just didn't feel quite right. So we set out to try and figure it out, like: hey.
D
D
So in part that means — yeah, sorry, just clicked the little thing. So it means, like, light streams — okay, bye James, yes, that's what we want, thanks. A clear goal for us, something we really wanted, was to define clear boundaries between layers. So, for example, something that we were missing, that we weren't seeing in the ecosystem quite yet, was a streaming HTTP encoder and decoder that didn't do any IO by itself.
D
So, for example, if you want to do HTTP over Unix streams, that should be possible, because that should be decoupled from the exact TCP definition. All it uses is the AsyncRead and AsyncWrite APIs and the Stream API, so by decoupling that you get these very nice layered APIs — same for encryption, same for HTTP decoding and encoding. So that was a goal where we were like: hey, can we do that? Is that possible? Because it would enable a lot of people to be very flexible.
D
And how do you use these things? So yeah, running HTTP over, you know, in-memory streams or Unix streams — this is an actual use case that people have, or over a file, or, you know. We also started looking at the other one: oh, what's the perfect tide API, if we could write one? This is pretty much what a proxy server would look like: you get a request, you give it as a body for the call you proxy, you stream the whole thing back out to the other side, and you get back a response, and you can take that and stream it back out. Writing a whole proxy should be about five lines. Again, very aspirational, but we're pretty close to this. So we were like: how can we make this work? So, introducing three libraries — I wrote about this last month, but yeah, now I'm doing a little talk on it — which are async-h1, http-types, and async-native-tls.
D
D
The way it looks in surf is that at the top layer we have surf, which is the end-user interface; http-client is the abstract interface; async-native-tls and async-h1, together with async-std, sort of form the foundation; and http-types brings it together. With tide it's something similar, except it's tide and http-service instead of http-client. So that's a little look at the backend.
D
So here's the way you can use these — sort of focusing on, I don't know if y'all can see my mouse, but we're just gonna talk about the async-native-tls / async-h1 / async-std part, where you have a runtime, encryption, and a low-level HTTP encoder and decoder. The way you put those together to start making HTTP requests is not that much code right now. This is all the code that's required to make an actual HTTP request asynchronously in Rust today: you open up a TCP stream.
D
You get the address out, you create a URL, you create a request. Then you pass all these things in: you convert the stream into an encrypted stream — there's a handshake for you, et cetera — and at the end of the day you take the encrypted stream and say: okay, over this encrypted stream we're now going to run our HTTP request. You send out the request, it streams back the response, and it does the encryption et cetera.
D
Behind the scenes it's not all too much code, but if you look at it you might be like: that's kind of difficult to read. So what we're trying to get at — and we're not quite there yet, but we've been talking about it — is: create a URL, and have a notion of a TCP stream, a TLS stream, an HTTP stream, and sort of work your way down — we're here.
D
We create a stream that we pass into the next stream that we pass into the next stream, together with a request, and then you get back a response. The idea is that, you know, making requests and doing this thing should really be concise and just natural to write out, and you're like: oh, okay, I see what goes on. If you want to remove encryption, or you want to do encryption over a different layer, you just swap out the backend: instead of a TCP stream you could use a Unix stream; instead of a TLS stream,
D
you could, for example, use some other encryption layer — or maybe you want to use this TLS encryption with not HTTP at all but some other protocol, you know. So yeah, this provides us with the necessary flexibility for that — or at least that's what we're aspiring to. And this is the server equivalent; I won't linger too much on this, because it's still in a semi-broken state.
D
What we're trying to do is — we recently released something called parallel-stream, which allows you, for every incoming request or every incoming stream, to spawn a task and join all the handles. That's basically it. On the server we do the inverse, where we decrypt the incoming connection and we turn it into an HTTP accept instead of a connect. Anyway — something nice that we did with this series of crates
D
is that we now have HTTP-aware error handling. So http-types, which is the shared types abstraction that doesn't do any IO, has a notion of what an HTTP error is — an error with a status code — and it has a nice little shorthand, which is http_types::Result, an HTTP-aware Result, and the error it carries. You can get at the Status trait — if you get it in scope, or you just import the prelude, then every error gets a .status
D
— every Result gets a .status method you can use: you just quickly give the response a status. Otherwise, when it's uncaught, it defaults to 500, which is quite nice. So here, for example, we're reading a file, and if something doesn't hold, or if we get an error, it just returns with a 501, and all the way at the top, inside the framework, it's intended to look at the error and be like: okay, cool, get the status code and act based on that. Yeah.
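The pattern being described might look like this sketch — the http-types and async-std crates are assumed, with the Status trait providing the `.status()` shorthand:

```rust
use http_types::{Result, Status, StatusCode};

// Sketch — `http-types` and `async-std` crates assumed.
// With the `Status` trait in scope, any Result grows a `.status()`
// method that attaches an HTTP status code to the error.
async fn serve_file(path: &str) -> Result<String> {
    let body = async_std::fs::read_to_string(path)
        .await
        .status(StatusCode::NotFound)?; // missing file -> 404, not 500
    Ok(body)
}
```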
D
Having errors that are aware of status codes is really beneficial when writing middleware, and yeah — we just want it to be very nice, and we're working on integrating it. So yeah, the three crates that we provide now: there's http-types, which is on docs.rs — docs.rs/http-types is the docs page. We have a notion of what mime types are, we have a notion of what cookies are, what a header is, etc., and we're looking to have even more things, such as, you know, full typed constructors for certain
D
things. There are a lot of things in the HTTP spec that can only be constructed a single way, or in a specific number of ways, and, you know, just having typed constructors is very nice and fits into this — trying to build up the dictionary of HTTP things that can be typed away. The second one is async-h1 — this is what the docs page looks like; there are two functions, accept and connect, for now. Then there's async-native-tls, which is a little bit more, but also on crates.io.
D
And
finally,
since
we
made
these
things
and
since
we
realized
burned
another
crate,
with
the
help
of
a
go
to
bus,
stop,
which
is
a
friend
which
is
a
sync
SSE,
a
sink
SSE,
which
provides
a
async
surface
and
events
for
rust,
so
we're
working
towards
getting
like
server
sent.
Events
in
which
is
like
a
you
know,
directional
version
of
web
sock,
so
your
server
can
send
events
over
and
they
existing
a
chippy
connection
to
clients
and
clients
can
act
on
that.
D
It has some caveats compared to WebSockets, but it's overall quite useful, and now it exists. This thing is also generic over streams again — over AsyncRead — and it provides you with a handle to send events on. Anyway, that was what I wanted to share with you today. I have the links, so you can check it out if you're interested in trying out some fun things around async HTTP — use it over, you know, file systems or some other fun thing.
D
A
A
D
We — okay, depending on how the question is asked, I'm not entirely sure. The operating system definitely buffers; there's buffering in the operating system. We don't do buffering — we haven't shipped buffering yet at this layer — but if you want to introduce some form of back pressure, some form of extra buffering, you could just plug it in: it could just be another form of stream, it could be another thing where you set limits. I recently wrote a post about parallel-stream, and the concept of limits we export doesn't exist
D
yet in this version, but there are crates on crates.io which provide limits — I forget what the name is — and there are controls that can be implemented to actually provide, like, a maximum amount of requests in flight. As for rejecting things, I think AWS has, like, lots of content about how to correctly go about buffering as well. So it's definitely possible, I just haven't — yeah. All right, does that make — yeah? Yes, it makes sense.
B
D
This sounds like a question of back pressure. So generally, whatever controls you are used to having can be implemented in this model — we just haven't done so yet. The controls that we currently provide live at the async-std level, where there's, like, a TCP timeout — it's just a socket flag that's set. There's, like, a max — a keepalive thing that can be set at the HTTP layer. We have a notion of, like, maximum — some basic denial-of-service prevention stuff exists, not super elaborate. We have timeouts — also a small notion about that.
D
Like — so I'm not sure if we have actually implemented that, but we do make sure that we don't consume more than we can handle, as well as possible. At some point, if things stall too long, or clients are unresponsive, connections are actually aborted. That is built in to some degree, but we can always be better, and there are more elaborate controls that could be built, that should be built, that haven't been built yet.
A
All right, great. I think we have one more question, and that was — it's not entirely related to HTTP, but about async programming in Rust: have you considered some notion of cancellation for futures? Like maybe, if the user wants to quit the application, then you might want to stop taking input but still finish off all of the input that you are currently processing.
D
I love that question — such a good question. Am I still screen sharing? Yes, okay, great. So I'm just gonna link you to a crate you can use for this exact thing: it's called stop-token, it's made by Aleksey of rust-analyzer fame, and the basic idea is that you have a channel that can receive an event, and once an event is received — so you say, like, here's work — yes, okay, so here is a stream called work.
D
You wrap it inside of this stop token, and at some point this stop token will be triggered and will say: now work needs to stop. It will start yielding None, and then the stream can finish all the work it's been doing, but no new items will be accepted, and you get, like, a graceful shutdown system. I have some notes on how I think we can maybe tweak the ergonomics for this.
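The idea being described — wrap a source of work so that after a trigger no new items are yielded, while work already taken can still finish — can be sketched synchronously with just the standard library; this is a toy illustration of the concept, not the stop-token crate's actual API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// A toy, synchronous version of the stop-token idea: once the flag flips,
// no *new* items are yielded, but items already taken can be finished.
struct StopWrapped<I> {
    inner: I,
    stopped: Arc<AtomicBool>,
}

impl<I: Iterator> Iterator for StopWrapped<I> {
    type Item = I::Item;
    fn next(&mut self) -> Option<I::Item> {
        if self.stopped.load(Ordering::SeqCst) {
            None // the stop token fired: stop accepting new work
        } else {
            self.inner.next()
        }
    }
}

fn main() {
    let stopped = Arc::new(AtomicBool::new(false));
    let mut work = StopWrapped { inner: 0.., stopped: stopped.clone() };

    assert_eq!(work.next(), Some(0));
    assert_eq!(work.next(), Some(1));
    stopped.store(true, Ordering::SeqCst); // trigger shutdown
    assert_eq!(work.next(), None); // the stream ends gracefully
}
```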
D
A
D
It's Ryan Levick on Twitter — yeah, I know, we're not very coordinated, we just chat. Oh, we have a Discord! Yes — come to async.rs — the — wait, hold on, I'm just gonna click you to where you can get onto our Discord, which I think is linked from tide — no, async-std also has a link; it's the same Discord, just 'cause we're all friends. There's a chat link here: so if you go to async-std and click on chat, there's a room for HTTP — come hang out, talk to people.
A
Come on, Zoom, play nice with me here. I've shared my screen and we are going to bring this up. Okay, hopefully everybody can see my screen now. I am going to paste this into the various rooms that we have — it's just a link here to a playground — so I'm pasting it into our Matrix instance, and, if I can find it, into the Zoom chat. I'm not very good at Zoom, that's the conclusion I've come to. If somebody could paste that into the Zoom chat as well, that would be super helpful.
A
So this is — hopefully everybody can see that and it's large enough on the screen; also, let me know if it's not. What the challenge is, is basically — this is slightly arbitrary and slightly just cooked up to give you some weird and interesting kinds of challenges with the compiler. But what we're trying to do here — and I called it a client builder, because that's what I was doing inside of this codebase, but you can call it whatever you want —
A
is that you just get that same instance that you initialized before. And, by the way, one thing that you can do if you have any questions or thoughts or comments or whatever: I'm monitoring the Matrix chat, so make sure to pop your questions or thoughts in there. And what we're going to do is try and get this to build and see.