From YouTube: Bay Area Rust Meetup July 2016
Description
Bay Area Rust Meetup for July 2016. Topics TBD.
A
Hello everybody, hello Internet, welcome again to another Bay Area Rust meetup. We've got a great lineup tonight; very glad you're all here. Oh, aha, okay, figured it out, all right. First off, thanks Mozilla once again for feeding us, giving us a wonderful place to host these, and loaning us their AV people to film us. It has always been, and will always be, awesome.
A
You guys are great. All right, so our agenda tonight: we have a variety of things going on. We're going to open the night talking about a machine learning library called rusty-machine. Then we change gears a lot and start talking about an embedded operating system, and finally, over the Internet, we're going to be talking about a Rust bioinformatics project. So, anyway.
A
The other thing that I want to mention before we start is that we actually have a lot of things coming up. We're going into conference season, so RustConf is in early September, and I believe registration closes on Monday. So if you want a ticket, you should buy one. Oh, no? All right, they're open till we sell out. But, oh, oh.
A
Right, Steve says scholarships might be closing soon. So if you want a scholarship, you should apply. It may or may not be true; we might be tricking you into buying, you know. Supplies, we're selling out quickly, so buy now. Likewise, all the way over in Berlin, RustFest will be happening mid-September. I don't know about their tickets; I'm sure they're selling out, so you should buy them too. And finally, in October, we have Rust Belt Rust in Pittsburgh.
A
Oh, and I also forgot that the next event that we have here is kind of a different affair. It's a workshop-a-thon. The Rust community team has started building workshops as part of our RustBridge events. So if you want to help build content for people who are new to Rust, or people who are new to programming, it would be really helpful if you came on August 13th and helped build workshops and stuff. All right, so that's it for me, and so now, on to James. Whoops, wrong way.
C
So thanks for the introduction. I am James, and I'm going to talk about rusty-machine, which is a machine learning library written entirely in Rust. A quick disclaimer: I'm a mathematician by training, so I'm sorry if it's math heavy in places, but please interrupt me if I'm not explaining things well, and feel free to shout out at me and tell me to go slower, or quicker, whatever. Okay, so in this talk I'm going to briefly go over:
C
What is machine learning, where I've tried to introduce some basic concepts that sort of motivate rusty-machine; then we'll tour for a little bit about how rusty-machine works and how it makes use of Rust; and finally I'll try and convince you why rusty-machine is great, or maybe, more generally, why Rust is really good for machine learning. Okay. So, what is rusty-machine?
C
Rusty-machine is a machine learning library that's written entirely in Rust, and by entirely I mean that it has no external dependencies on common libraries like BLAS and LAPACK, which you generally see in the machine learning space. The motivation behind that is that I wanted a machine learning library that just works out of the box, and Rust provides a nice place to do that without sacrificing performance.
C
So a really common question is: another machine learning library, really? There are, like, hundreds, in lots of different languages. So I want to say that rusty-machine is more than deep learning, which seems like a pretty bad thing to say today, as deep learning seems to take up this big space in machine learning. But there are still, you know, a lot of use cases that come from places other than deep learning, and rusty-machine has a focus on that space. And the other thing is that Rust
C
seemed like a really good choice for machine learning. As I'll try to convince you as we go forwards, it has a lot to offer, and I thought that it would be really rewarding for someone to explore what this looks like, and it ended up being very, very rewarding. Okay, so, on to some machine learning.
C
So let's imagine that, you know, you're currently living somewhere in San Francisco, and you want to figure out how much your landlord is going to try to get you to pay in a couple of months' time. Machine learning can help us solve this problem, and the process we'd go through to solve this would be: first, we need to gather some data.
C
You
could
imagine
that
we
might
gather
some
rent
prices
from
Craigslist
and
a
bunch
for
other
facts
about
the
residents
from
sort
of
various
government
ordinances
or
any
role
to
get
your
hands
on
it,
and
we
then
use
this
data
to
train
a
machine
learning
model
and
then
hopefully
we
could
then
gather
the
same
facts
about
our
own
residents
to
predict
what
the
rent
increase
will
be
for
our
own
are
in
place
residents.
Another
example
could
be
predicting
whether
an
image
contains
a
cat
or
a
dog
I'm,
going
to
take
the
youthful
example.
C
But
in
this
case
the
idea
is
that
we're
sort
of
feeding
our
machine,
an
image
that
has
like
an
animal
in
it
and
we
don't
tell
it
what
the
animal
is
and
we
want
the
machine
to
tell
us.
This
is
the
cat
or
it's
a
doc,
and
so
in
order
to
do
this,
we
have
to
train
the
motor
with
some
data,
which
is
made
up
of
labelled
pictures
of
cats
and
dogs.
C
A really common example that you see thrown around a lot in the machine learning space is understanding handwritten digits. In this case the dataset would be, you know, lots of examples of handwritten digits, where we label them and say: this is a zero, this is a one, this is a two. And then the challenge for the machine learning model would be to show it one of these digits that's been drawn fresh, without a label, and have it tell us which number that digit belongs to.
C
So here are the terms the way I'm describing them, to help with the talk. A model I'm describing as an object that transforms inputs into outputs based on information in some data. Training, or fitting, a model is the art of teaching the model how it should transform its inputs, using the data. And finally, predicting is feeding new inputs into the model to receive the outputs. So, to go back to one of our previous examples:
C
When we're predicting rent, we might use a linear regression model; that's the type of machine learning model. We train the model on some existing rent prices and facts about the residences, and then the final step of the flow would be predicting the rent of unlisted places that we don't have the price for currently. Okay.
C
So why is machine learning hard? Machine learning is inherently difficult, so I'm not trying to say that there are some key points that make it difficult, but I think that, on top of the standard parts, there's the fact that there are many, many models to choose from. When we're solving this rent problem, there's an infinite space of models that we could choose, and on top of that, there are many different ways to use each model.
C
Okay, that's all I'm going to say about machine learning for now; maybe we'll go back to it if there's anything to go through. But now I'm going to talk about rusty-machine, and how it solves some of these problems that I've described in the machine learning space. So, the foundation of rusty-machine is what we call a trait here. For those of you that aren't that familiar with Rust, a trait is sort of like an interface in another language.
C
The first one is train. Train, in this case, is transforming the model, teaching it how it should transform the inputs into the outputs. And predict, which is giving it a new set of inputs and asking it what the outputs should be. And so we can capture this key flow of a model using this trait. Now, this is a little simplified from the actual traits used in rusty-machine, but for the sake of illustration I've kept it as this. Okay.
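The simplified trait being described might be sketched roughly as follows. The names, signatures, and the toy implementation here are illustrative assumptions for this transcript, not the exact rusty-machine definitions.

```rust
// A simplified sketch of the model trait idea from the talk: train
// teaches the model a mapping from inputs to outputs, and predict
// applies that mapping to new inputs. Names are hypothetical.
trait Model {
    fn train(&mut self, inputs: &[f64], targets: &[f64]);
    fn predict(&self, inputs: &[f64]) -> Vec<f64>;
}

// A toy model that learns a single offset: output = input + offset.
struct OffsetModel {
    offset: f64,
}

impl Model for OffsetModel {
    // "Training" here just averages the observed input/target differences.
    fn train(&mut self, inputs: &[f64], targets: &[f64]) {
        let total: f64 = inputs.iter().zip(targets).map(|(x, y)| y - x).sum();
        self.offset = total / inputs.len() as f64;
    }

    fn predict(&self, inputs: &[f64]) -> Vec<f64> {
        inputs.iter().map(|x| x + self.offset).collect()
    }
}
```

Any model implementing the trait is then driven through the same train-then-predict flow the talk describes.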
C
It's been very theoretical, me explaining all this stuff right now, so we're going to jump into an actual example of using rusty-machine. This example will show how we use functions from the model trait to do some machine learning. I'm going to be talking about the k-means clustering model. K-means is a type of machine learning model that's used for the problem of clustering, and clustering is a machine learning task where we want to group together items that are similar to each other. Similar could mean they're close together in space.
C
Or they share similar features in some other way. In the plot I've got here, you can see that there are lots of different points in 2D space, and you can roughly make out that there are two origins of these points, two balls of points that are concentrated. So the idea is that we could use k-means to try and classify which ball each point belongs to.
C
So this is what using a k-means model looks like in rusty-machine. There are three lines that make this up. The first is that we create the model: we have let mut model = KMeansClassifier::new(2), and we specify that it should have two clusters, as with k-means we have to say explicitly how many clusters it will have.
C
We
want
to
classify
these
initial
data
points
and
we
get
out
this
vector
of
clusters,
which
tells
us
which
ball
each
of
the
classes
belongs
to,
and
if
we
run
this
on
the
data
actually
before,
then
this
is
what
we
get
out.
So
you
can
see
that
the
model
has
sort
of
broken
up
the
data
into
two
halves,
which
sort
of
roughly
represent
those
two
balls
that
it
came
from,
and
each
point
is
colored
according
to
the
classes
that
it
belongs
to.
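The create, train, classify flow above can be illustrated with a self-contained toy. This is a deliberately minimal 1-D, two-cluster k-means, not rusty-machine's actual KMeansClassifier (which works on matrices and any number of clusters); the structure of the three calls is the point.

```rust
// A minimal 1-D, 2-cluster k-means sketch of the flow described in
// the talk: new, train, predict. Hypothetical toy, not the real API.
struct KMeans {
    centroids: [f64; 2],
}

impl KMeans {
    fn new() -> Self {
        KMeans { centroids: [0.0, 1.0] }
    }

    // "train": repeatedly move each centroid to the mean of the
    // points currently closest to it.
    fn train(&mut self, data: &[f64]) {
        for _ in 0..10 {
            let mut sums = [0.0f64; 2];
            let mut counts = [0usize; 2];
            for &x in data {
                let c = self.closest(x);
                sums[c] += x;
                counts[c] += 1;
            }
            for c in 0..2 {
                if counts[c] > 0 {
                    self.centroids[c] = sums[c] / counts[c] as f64;
                }
            }
        }
    }

    // "predict": assign each point to its nearest centroid,
    // returning the vector of cluster indices the talk mentions.
    fn predict(&self, data: &[f64]) -> Vec<usize> {
        data.iter().map(|&x| self.closest(x)).collect()
    }

    fn closest(&self, x: f64) -> usize {
        let d0 = (x - self.centroids[0]).abs();
        let d1 = (x - self.centroids[1]).abs();
        if d0 <= d1 { 0 } else { 1 }
    }
}
```

Used on data drawn from two well-separated "balls", points from the same ball end up with the same cluster index, matching the colored plot described above.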
C
Okay, so hopefully you've been somewhat convinced that that was fairly simple. Three lines to do all that work is pretty good for machine learning, and the API of all of the other models in rusty-machine aims to be as simple as that one. But machine learning is complicated, so, as I said before, there are lots of models and lots of different ways to use them, and one of the key goals of rusty-machine is to try and capture all that complexity and put a simple API over it.
C
To keep things simple, the main part is using traits, and there are some other mechanics going into it, but by using the trait system in Rust we can have a nice, modular layout to rusty-machine, and make it easy to jump from one model to the other without having to completely rewrite huge chunks of code. So, as I showed before, rusty-machine uses that model trait as its foundation, so all models implement this model trait.
C
So when a user comes to rusty-machine, they're always using that same train and predict. Which, by the way, isn't my idea; other machine learning libraries have been doing this before, and it's a pretty common trick. We use traits to hide as much of this machine learning complexity as possible. Some key ways that we do that are providing extensibility at the user level, so that if a user needs something more than is offered in rusty-machine,
C
it's easy for them to add that code on, and we try to make things reusable throughout the library in as many ways as we can. So first I'm going to talk about extensibility. As I said, we're using traits to define different parts of the model, and we want to make it really easy for a user to come along and modify those parts without breaking all the rest of the code, while rusty-machine tries to provide some common, sensible defaults that people use often.
C
I'll try and go over an example of extensibility without diving too deep into the machine learning stuff behind it. First, I'm going to talk about a support vector machine. All we need to know is that a support vector machine is a model that's generally used for classification, so it's similar to a k-means model in how it behaves.
C
The behavior of a support vector machine is governed by something called a kernel, and a kernel is essentially a function that obeys some particular properties, which I'm not really going to spend too much time going into. But here's roughly what a support vector machine looks like in rusty-machine: we have the struct SVM, and we make it generic over a type K which implements Kernel.
C
The trait Kernel just describes some function, kernel, which takes in the inputs needed for a kernel, which is two slices, and outputs a scalar. By having this layout, we're using the generic type to make it really easy for a user to come along and say: oh, I need a particular type of kernel for this application; and they can just implement the trait Kernel and use it in all the same code. And again, this isn't something that's, I mean, it's obvious, this isn't something that's only accessible in Rust.
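The shape being described, a Kernel trait taking two slices and returning a scalar, with the SVM generic over it, can be sketched like this. The names are simplified from the real rusty-machine API, and the Linear kernel is just one assumed concrete example.

```rust
// Sketch of the kernel setup described in the talk: a trait for
// kernel functions, one concrete kernel, and a model generic over
// any type implementing the trait. Names are illustrative.
trait Kernel {
    // Takes two input slices and produces a scalar.
    fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64;
}

// A linear kernel (plain dot product) as one swappable choice.
struct Linear;

impl Kernel for Linear {
    fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
        x1.iter().zip(x2).map(|(a, b)| a * b).sum()
    }
}

// The SVM is generic over K, so a user-defined kernel plugs straight
// into the same model code without any changes.
struct SVM<K: Kernel> {
    ker: K,
}
```

A user who needs a custom kernel implements `Kernel` for their own type and constructs `SVM` with it; nothing else in the model changes.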
C
By using a slice, we're telling the kernel function that it should borrow the data, and so it shouldn't consume any of the data; it just takes a pointer to the data and reads from it, and that's all. By enforcing that behavior strictly in Rust, we make sure that when anyone else is using a kernel, they don't do something like consume the input data, which might be possible in other languages, and which could break the model code if it were to happen.
C
Okay, so I'm going to go on a tiny bit of a detour, and I want to show off something that, again, you can achieve in other languages, but that works quite nicely in Rust. Kernels, as I said, are these functions that have these properties, and one of these properties is that if you have two kernels and add them together, then the resulting function from that, as in function addition, is still a kernel, so it has all of the same properties and behaviors that a kernel originally has.
C
So we can capture this relationship in Rust. To do this, we create a new struct called KernelSum, which is generic over T and U, and here I'm using a where clause, which keeps things a little bit tidier but means the same thing: that T implements Kernel and U implements Kernel. KernelSum just contains two of these kernels, k1 and k2, and then we implement the Kernel trait for KernelSum.
C
So here's that Kernel trait implementation. The impl Kernel for KernelSum is saying we're implementing Kernel for the KernelSum, and again we have those generic trait bounds: T has to be a Kernel, and U has to be a Kernel. And then the kernel function, which we're now explicitly describing, just embodies that equation at the top of the slide: it's k1's kernel on the input points plus k2's kernel on the input points. And because of the way that kernels behave, we know that this is also a kernel. So that's the idea.
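The KernelSum construction just described can be sketched as follows. This is a simplified stand-in for the real rusty-machine types; the Constant kernels here are hypothetical, used only to make the sum easy to check.

```rust
// Sketch of the KernelSum idea: the sum of two kernels is itself a
// kernel, captured by implementing Kernel for a struct holding both.
trait Kernel {
    fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64;
}

// KernelSum is generic over two kernel types, holding one of each.
struct KernelSum<T, U> {
    k1: T,
    k2: U,
}

// k(x1, x2) = k1(x1, x2) + k2(x1, x2), which is again a valid kernel.
impl<T, U> Kernel for KernelSum<T, U>
where
    T: Kernel,
    U: Kernel,
{
    fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
        self.k1.kernel(x1, x2) + self.k2.kernel(x1, x2)
    }
}

// A trivial kernel returning a constant, just to exercise the sum.
struct Constant(f64);

impl Kernel for Constant {
    fn kernel(&self, _x1: &[f64], _x2: &[f64]) -> f64 {
        self.0
    }
}
```

Because `KernelSum<T, U>` itself implements `Kernel`, it can be dropped into any model that is generic over a kernel, exactly as the talk describes.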
C
Why would you want to do this? As I said, this behavior of adding kernels together is a nice property, and it's quite common that you want to combine two different kernels together to achieve a slightly different behavior in your support vector machine, or another model. And by having this KernelSum in place, we can make it really easy to do this, so the user doesn't have to worry about writing any additional code.
C
We can just use existing, in-library kernels and combine them together, and if we wanted to use some complicated kernel that's the addition of a polynomial kernel and a hyperbolic tangent kernel, we'd just define our two kernels above, and then we'd have the sum kernel be equal to the polynomial kernel plus the hyperbolic tangent kernel, and we could just plug that sum kernel right into the SVM. Again, this isn't something that's impossible in other languages, but in Rust we keep all those guarantees about
C
borrowing. This is definitely possible in another language, again, but Rust makes it really easy to do. The other piece is reusability: we have these components that can be swapped in and out of the models, and we can easily make new models from existing components as well. I'll go through an example in a moment describing how, if a user wants to make use of something in rusty-machine, they can just pull it out of the box, and, relying on the strong enforcement of that trait,
C
they can ensure that everything works roughly as expected. This is something, again, that I've touched on a few times now, and something you can get in other languages, but I think in Rust it's nice to be sure that if I'm using a kernel function in another model that I've just created, it won't consume data; it will borrow the data instead of consuming it, and it will behave nicely with all these various safety guarantees
C
that we have. Okay, so for the reusability example, I'm going to talk about gradient descent solvers. Some of you have probably heard of gradient descent; I'll briefly describe what it is and why it's important, without too many details. We use gradient descent to minimize a cost function. What I mean by a cost function: you can imagine, back in our earlier example of rent, in that problem we're trying to predict what our rent will be, and you'll see,
C
you can imagine there's a true value for what my rent actually is, and we're trying to minimize the distance between the value that we predict and the true value itself. So you could define, for example, a cost function to be the squared distance between our prediction and the actual value, and that's a pretty common cost function, actually.
C
Okay, so the idea behind a gradient descent solver in rusty-machine is that we have it implement this trait. This trait for optimization algorithms is, poorly, named OptimAlgorithm, and OptimAlgorithm is generic over M, which is Optimizable, and it has this optimize function. It's a bit of a mouthful, and it looks pretty messy up on the page, and that's even after I removed some things.
C
But essentially this is saying that the gradient descent solver has the ability to optimize a model: we give the gradient descent solver a model and a set of initial parameters, and it will do some magic under the hood and return an optimized set of parameters that minimize our cost function as well as possible. There are a couple of other pieces of notation here that I've maybe skipped over, like the Vec part after the f64.
C
It's a lot nicer, maybe, than implementing this stuff from the ground up yourself. So the first thing we're going to do is define our model, and I'm going to define a really simple model, which is called the XSquared model. The XSquared model represents our cost function, which is f(x) = (x - c) squared, so that's the Euclidean distance in 1D space.
C
It's the distance from the point c, and the XSquared model just takes in an argument c, which represents this point. And so the idea is, if we wanted to minimize this cost function, we're trying to get x to be as close to c as possible. So you can think of this model as learning the value of c: we give the model the value of c, and then, by plugging it into the gradient descent optimizer, we're trying to learn what that value is.
C
Again, as I said, it's kind of a convoluted example, since we know what the value of c is, but it sort of represents what we do a bit more of in the real world. Okay, so once we define our model, the next step is implementing Optimizable for the model, and, as I said, Optimizable is something we implement for models that are differentiable,
C
that have differentiable cost functions. So I have this function, compute_grad: it takes in the model, which is the self itself, and then it takes in a set of parameters, which is a slice, and it returns the gradient at the end, which is a vector that has the same dimension as the slice coming in. In this case, it's really easy to compute the derivative of our cost function.
C
We just differentiate (x - c) squared, and we get 2(x - c). In this case the parameters are just one-dimensional; it's just going to contain the value x inside a slice. And yeah, the vec! exclamation-mark thing is something called a macro in Rust: it basically means, create a new vector with this data inside it.
C
Okay, so again, this isn't super useful on its own, but now we can use OptimAlgorithm to do some fun stuff, using OptimAlgorithm out of the box to compute these optimized parameters. The whole code, all at once, is this. This creates the new model, XSquared, with the value c, defining that cost function; the impl teaches us how to differentiate that cost function; and then, finally, at the bottom, we're using this in action.
C
The first line underneath the implementation part is: let x_squared be a new XSquared model with the value c being one. Then we just pick some starting point, which I've set to 30; it could be anything. The next line creates a new gradient descent solver, so let gd = GradientDesc::default() just creates a new gradient descent solver with whatever the default settings are within rusty-machine. And in the last line, we let optimal equal gd.optimize on the model and the starting parameters.
C
So this is running that OptimAlgorithm trait, and the hope here is that the optimal value we get out will be as close to one as possible. So, by using this setup and just taking advantage of the gradient descent already in rusty-machine, we should get an optimal value out that's one. And so it's kind of useless, right?
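The whole flow just walked through, the XSquared model, its gradient, and a gradient descent solver plugged in at the end, can be captured in one self-contained sketch. The names mirror the talk, but the trait shapes, the solver, and its settings are simplifications, not the real rusty-machine code.

```rust
// Self-contained sketch of the talk's example: a model with cost
// f(x) = (x - c)^2, an Optimizable impl returning the gradient
// 2(x - c), and a tiny gradient descent solver standing in for
// rusty-machine's GradientDesc. All names are illustrative.
trait Optimizable {
    // Gradient of the cost function at the given parameters.
    fn compute_grad(&self, params: &[f64]) -> Vec<f64>;
}

struct XSquared {
    c: f64,
}

impl Optimizable for XSquared {
    // d/dx (x - c)^2 = 2 (x - c); params is one-dimensional here.
    fn compute_grad(&self, params: &[f64]) -> Vec<f64> {
        vec![2.0 * (params[0] - self.c)]
    }
}

struct GradientDesc {
    step: f64,
    iters: usize,
}

impl GradientDesc {
    // Repeatedly step the parameters against the gradient.
    fn optimize<M: Optimizable>(&self, model: &M, start: &[f64]) -> Vec<f64> {
        let mut params = start.to_vec();
        for _ in 0..self.iters {
            let grad = model.compute_grad(&params);
            for (p, g) in params.iter_mut().zip(&grad) {
                *p -= self.step * g;
            }
        }
        params
    }
}
```

Starting from 30 with c = 1, the solver walks the parameter down toward 1, which is the "optimal value out that's one" described above.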
C
No one's really going to use the optimizer like this, but I think, for those of you that are familiar with machine learning, having this trait setup for gradient descent now makes it really easy to implement a new algorithm, an actual model, maybe something like a neural net even, and just plug in this gradient descent solver at the last step to do the optimization part. And of course it's really easy as well for us to implement different types of gradient descent solvers, like stochastic gradient descent and various other things that people are probably familiar with.
C
Okay, I think that's about as maths-heavy and theoretical and obscure as things get, so let's move on to the second part, which is a bit more about why Rust is valuable, and a bit more about how rusty-machine works at a high level underneath. This is just a list of things that rusty-machine can do; it's a common question, what we actually cover here, and it's a fair amount of stuff.
C
Why rulinalg, now? I mean, there's some history behind why this thing exists. When I first started development, it was kind of unclear whether there would be any other options that would be a good fit for what I wanted. There was no clear Rust linear algebra library that worked in the high-dimensional spaces that machine learning is sort of predisposed to, so, I mean, you know, rulinalg would provide that ease of use and out-of-the-box stuff. I mean ease of use for the user; writing
C
it was a nightmare, but it makes it easy for the user to just get running right out of the box without any other work. And, of course, Rust is a really great choice for implementing linear algebra. Anyone that's played around with low-level C linear algebra libraries knows it's difficult and messy, and there are all these weird routines and stuff. I mean, Rust just makes all this stuff really nice and friendly. It takes some work, but it's really pleasant to use.
C
So, the Result<Matrix, Error> part. A Result in Rust communicates that a method might fail, and that we could expect it to fail, so using it, rulinalg sort of explicitly says, you know, if you try to invert a matrix, it might not work. I mean, obviously, again, this is something that is present in other languages, but those of you that have used Rust are surely familiar with how nicely the Result chaining stuff can work, yeah.
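The point about Result can be shown with a trivial stand-in. This toy "inverts" a 1x1 matrix (a scalar) and makes the singular case an explicit, expected failure; the real rulinalg types and error variants are different.

```rust
// Sketch of the Result point: an operation that can fail, like
// matrix inversion, returns Result so the failure is explicit.
// A 1x1 "matrix" (a scalar) keeps this illustrative toy tiny.
fn inverse(x: f64) -> Result<f64, String> {
    if x == 0.0 {
        // The singular case is an expected, recoverable error.
        Err("matrix is singular".to_string())
    } else {
        Ok(1.0 / x)
    }
}
```

The caller then handles both outcomes with `match` or the usual `?` chaining, rather than hitting an unsignalled numerical failure.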
C
So if you want to help out with that, that would be awesome. Okay, so what does rulinalg do? Right now, rulinalg has some data structures, Matrix and Vector, as well as, which I haven't mentioned here, some matrix slices, which are analogues of slices in the standard library. They provide views into the matrix, letting you borrow the data without consuming it or owning it. They have basic arithmetic operations,
C
where we're using in-place allocation wherever possible, which is another really nice thing, and then some standard decompositions. Some of these took a lot of work, and so we have inverses, eigendecomposition, singular value decomposition, and a few other things built in. Okay, I'll talk quickly about why Rust is a really good choice. How am I doing for time, by the way? Okay. I'm going to talk about why rusty-machine and rulinalg were made; hopefully I've sort of illustrated that as I've gone through.
C
I think the key things, to me, are that, you know, the trait system is really fantastic, error handling is amazing, even though I've not used it that well, and, obviously, Rust has that blazing speed, which is really great. I mean, sadly, rusty-machine
C
needs some more work, but once that's fixed, the prospects are really bright for having really high-performance code in Rust. A couple of other things on top of this: historically, with machine learning stuff, we've had this pattern of prototyping in a high-level language like Python, and then rewriting the performance-critical parts for our production code in, like, C or C++. I think Rust can fill this really nice space of doing both.
C
As I've hopefully shown, it's very quick to get a prototype up and running in rusty-machine, and maybe, with some more work, it'll also be possible to extend that to, you know, a performance-focused, production-ready piece of code. Another thing is that, I mean, I'm sure for those who have been developing in Rust, it just provides a lot more insight into what you're doing.
C
This is from a developer's point of view, but working hard to get these models working in Rust was a great way for me to learn a bit more about how the models work underneath, you know, focusing on when does the model need to own its data, and when is it better for a model to borrow a piece of data. It was, yeah, very, very challenging at times, but a great learning experience, and that's a bonus to this, yeah.
C
So another frequent question is: when would you use rusty-machine? Right now: not in production. It's experimentation; non-performance-critical applications are maybe okay. I mean, these algorithms work, but they're not numerically stable, and they're not state of the art. I think that, in the future, rusty-machine will hopefully fill that space of quick, safe and also powerful modeling, which is certainly available in other languages, but I think Rust has the power to provide that without external dependencies, though we're not quite there
C
yet. I was going to talk quickly about Rust and machine learning in general. I think that Rust is really well poised to make an impact in the machine learning space. I think it's fairly obvious that the things Rust offers, its selling points, are things that machine learning can really take advantage of. I think, on top of that, there's some really excellent tooling built into Rust, since it's a modern language, and that tooling is going to be really, really valuable
C
in the machine learning space in general. Another thing that I felt the benefit of really early on is that, I mean, thanks to LLVM and the design of Rust, you get performance with, you know, minimal effort, compared to if you were coding these things in C++ and other languages. I mean, once you're past wrestling with the borrow checker, it becomes really quick and easy to get something that's vectorized, borrowing data efficiently, and running quickly.
C
Routines like these are things you use, and I think it's kind of silly to try and fight that; I mean, there are literally decades of experience, research and expertise that went into building these things, and I think it's definitely very, very useful to have those bindings there. The reason I haven't prioritized that is that I just felt it was more valuable to have that ease of use, and to just have things running out of the box, at least for now. And the other thing is addressing, sort of, the lack of tooling around rusty-machine.
C
By tooling, I generally mean data handling and things like that. Right now, all that is left to the user, and there's no built-in, you know, "I want to load a CSV, I just run this one function and it comes in," like you have in numpy. So I think that's something else it'd be really nice to see coming in. And just covering a couple of things I'd like to see from Rust: I think specialization is what I'm most looking forward to.
C
Being able to just overwrite these traits in particular instances for performance reasons would be nice, and it would also help keep the code a lot cleaner. And, I think I've sort of said this quietly, growth in a couple of other areas, though I have no idea how they'll look: it's kind of difficult right now to do things like eigendecomposition and singular value decomposition, where you might put in floats and then get out complex values. I think it's quite difficult to capture that in Rust
C
right now, and it's a continued effort from the community. I mean, that's hardly anything bad about the community; it's really great, the drive within it is amazing, and it's awesome seeing new things popping up all the time that make using Rust easier. So, a quick summary, done quickly, and I hope I didn't confuse anybody too much: I just briefly explained what machine learning is and why we do it, and spoke about rusty-machine and rulinalg a little bit.
G
Those graphics you showed, did you draw them with Rust?
C
No.
C
That is really good, and I think, yeah, I've sort of tried a little bit of parallelizing that, using multi-threading to speed it up, which was somewhat impressive as well, but it hasn't made its way into the library. So I think, in general, it's not terrible, but that's not to say we don't need them; I think they'll still be pretty necessary to use the library. Cool.
F
C
Sorry, let me try my notes, right, sorry. So, exploratory analysis, right. By this I mean that when you're tackling a new machine learning problem, it generally goes like this: you've been given a data set, and you want to find some insights about it, and so the first thing you do is try and dig into that data to see: what does the data look like? What are the key, important features of the data?
F
C
I think the key thing that most people, I guess, appreciate is the REPL, the ability to sort of have its own command line to do this stuff. I think in Python that's particularly valuable, and also that's something Rust will be without for a while, yeah, but I think there are definitely some libraries that would make that sort of thing easier, and whether it would be nice to try to take on that big space dominated by Python, I'm not so sure. Thank you. Okay.
C
There is — there is a reason for this madness, right. Okay, so in this case, train takes a mutable reference, and we sort of first gave the definition of train as something that transforms our model so it maps inputs to outputs. So in this case it's quite natural to let it be a mutable borrow, whereas in other cases we don't need that mutability. I think in general it comes down to me deciding that, you know, we don't need mutability here.
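The train-takes-a-mutable-reference design being described could be sketched roughly like this — a hypothetical trait in the spirit of what's discussed, not rusty-machine's actual API (`SupModel` and `MeanModel` are made-up names for illustration):

```rust
// Hypothetical sketch (not rusty-machine's real interface): training mutably
// borrows the model, prediction only needs a shared borrow.
trait SupModel<I, O> {
    /// Fit the model to the data; `&mut self` because training transforms
    /// the model's internal parameters to map inputs to outputs.
    fn train(&mut self, inputs: &[I], targets: &[O]);
    /// Prediction only reads the fitted parameters, so `&self` suffices.
    fn predict(&self, input: &I) -> O;
}

/// A toy model: predicts the mean of the training targets.
struct MeanModel {
    mean: f64,
}

impl SupModel<f64, f64> for MeanModel {
    fn train(&mut self, _inputs: &[f64], targets: &[f64]) {
        self.mean = targets.iter().sum::<f64>() / targets.len() as f64;
    }
    fn predict(&self, _input: &f64) -> f64 {
        self.mean
    }
}

fn main() {
    let mut m = MeanModel { mean: 0.0 };
    m.train(&[1.0, 2.0, 3.0], &[2.0, 4.0, 6.0]);
    println!("{}", m.predict(&10.0)); // prints 4
}
```

The split mirrors the point above: only the operation that actually changes the model state asks for the mutable borrow.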
C
E
C
So it's — it's not a simple answer. Originally I sort of wanted to try learning Rust, and I was like, well, I know a bit about machine learning; I guess it would be nice to sort of transfer that over into Rust, and hopefully my love of machine learning would help sort of coach me through the tough patches of the borrow checker — and it did. But so what ended up happening is I was like, oh, I guess I need some linear algebra; let me just look up the linear algebra library.
C
Oh — there isn't really one that just works out of the box. I guess I'll just write some simple stuff myself. And so, by the time I finally got to the machine learning part, after building up all this stuff to get inverses in Rust, I was like, well, I've spent so much time, I may as well just carry on. And so my attempt to sort of learn Rust and its capabilities for server-side, like low-level systems programming, sort of ended up in me spending all my time —
C
writing a machine learning library. And at this point it's pretty much a labor of love. I mean, I'm just — I'm really enjoying writing in Rust; it's really, really fun, and it's also great to learn some machine learning stuff that I maybe don't know so well right now, and sort of codify that in a language that forces you to really understand what's going on under the hood.
B
B
Thanks, you guys. So I am going to tell you about Tock, which is an operating system for microcontrollers — and by microcontrollers I mean these little computers that have extremely limited resources. So we're talking about things that we would like to be able to run at an average current draw of under 50 microamps; just for comparison, your phone runs on an average of like 100 milliamps, so that's like three and a half orders of magnitude. They have very limited amounts of memory, and they need to meet relatively tight timing constraints.
B
B
My savior — thank you. And so, to meet these constraints in sort of an operating system environment, what Tock needs to do is rely on a different isolation mechanism than sort of traditional processes, traditional hardware protection — and we basically use the Rust type system to do most of the isolation in the kernel, and then, like —
B
B
Okay, so microcontrollers are everywhere — all the way from things that are primarily designed to, you know, convince rich people who keep making money to buy, like, fancy lamps and stuff, all the way to actually necessary things like medical devices and so forth. And the existing sort of, you know, quote-unquote operating systems that tend to run on these devices aren't really operating systems in the traditional way that we think about that — and I'm not saying that to, like, put down existing operating systems.
B
They just don't have the same kinds of goals. So, you know, they typically don't really do any sort of separation between the core kernel code and drivers and applications — they're sort of all, like, this one monolithic piece of code — and they don't expose any isolation mechanisms to let you do isolation on your own. And so really the operating system is more like a library that helps you program —
B
these things with a little bit more ease. And so kind of the mental model that you want to think of is, like, Ruby on Rails — Ruby on Rails for your defibrillator. And this model kind of comes out of, you know, what the traditional thought is of how we build embedded systems, but it doesn't really apply to how embedded systems are actually built anymore anyway. So how are they built?
B
Well, first we're going to take some set of sort of off-the-shelf parts — like a microcontroller, maybe some radios and some sensors and actuators — and we'll stick them on a custom board for our product or our development kit or whatever. And then we'll choose some operating system, quote-unquote, to build our software on top of, and then we're going to grab a bunch of drivers for all these off-the-shelf components that we had. So, you know, we might take, like, a temperature sensor from Adafruit, or, like, a Bluetooth driver from Nordic.
B
You know, etc., etc. — it might come from, like, a variety of sources. And then we'll build our application on top of these drivers and the operating system, and these applications are kind of hand-rolled code; they might do relatively complicated things — like encrypt data that we're sending over the network, or machine learning like we just learned about, etc. And then, finally, we'll take all this code that we cobbled together and wrote ourselves and optimize it for basically everything except for security.
B
And so, you know, the result is that embedded systems are really built just the way that other systems are built: they're built from reusable components — and, you know, I don't want to give reusable components a bad name, they're good; like, we need these, right, because it means that we don't have to reinvent the world every time we build, you know, a silly little —
B
you know, remote control for a lightbulb or something. But, you know, we're mixing code from a variety of sources, and we don't have any isolation mechanisms like we would in a traditional system, and then we're optimizing for a bunch of stuff that's complicated to optimize for and is not security. And so this is really a recipe for bugs, and the question is, like: what happens when there is a bug in one of these components? And the answer is that, you know, all bets are off.
B
There's this, I think, pretty cool mechanism called grants, which is particularly cool because I think it shows off some really neat features of how you can use sort of really basic low-level primitives in the language to guarantee high-level system stuff. And then I'll show you some numbers that prove — I don't know, they don't prove it, they argue, or whatever — that, like, doing isolation at the language level can be much more efficient than using processes.
B
Okay, so why are processes not going to work? In a traditional process isolation model, we're basically going to split our system into a bunch of different isolated components, and each of these components will effectively get its own region of memory, and, you know, each component can't really reach into the other components' memory and touch anything.
B
And so each of these red boxes is kind of its own little world, and it can more or less pretend that it's running by itself, and if they want to communicate — you know, in order to interact with each other — they'll do so over some sort of, you know, actual messaging communication mechanism, like IPC on something like Linux.
B
Okay, and this model is in general pretty nice, because it does sort of two things for us. It gives us isolation, but it also provides this nice concurrency model that also enables parallelism — and, in particular, it's this concurrency model where we don't really have to think about concurrency all the time, so we get to write this nice sort of sequential code. And it's also a model that's convenient to enforce, both with, like, language runtimes and hardware.
B
So there are kind of good reasons why most systems use something like a process model, or some variant of a process model, to do isolation — it's a good model. But the problem is that there's a resource overhead associated with every unit of isolation, right. So every time we break apart another component of the system and put it in its own box, its own process, we have to allocate some memory that's only going to be usable by that process, and every time we want to communicate between components —
B
we have to do some sort of context switch — which on Linux, or whatever operating system, is going to be an actual context switch; it might be a more lightweight context switch, between, like, threads or closures, or just swapping stacks — but there's some overhead to sort of communicating between the components, because we basically have to swap which domain is active at the moment. Remember that these systems that we're targeting have, like, you know, very constrained resources.
B
So, you know, at the lower limits we may have as little as only, like, 16 kilobytes of memory — in some cases less, but actually our operating system doesn't support that either, so let's pretend like this is the lower limit — and relatively tight timing constraints. Which, on the one hand, means that, like, we just can't fit that many things into memory, and then, on the other, it means that there's, like, a relatively high sort of practical cost, from the application's perspective, of incurring this overhead —
B
every time we want to communicate. So, let's look at how this might play out. Suppose we have this, like, little tiny 16-kilobyte device. Well, you know, if we have one component on there, maybe we only need to allocate, like, 4 kilobytes or something like that for just a stack for just that one component — and so we can maybe fit two or three more in there, but that's about it as soon as we, you know, want to split up our system into more things.
B
Okay, so Tock tries to resolve this — its whole story is designed around exactly this particular challenge: how do we isolate concurrent components inside the kernel, in our case, without having to incur an overhead for each new unit of isolation? And the sort of key idea is that we're going to use a different concurrency model — it's just a single-threaded, cooperative event system — and isolate components just using the Rust type system.
B
So, you know, at the top we have more or less traditional processes — plus this grant region, which I may or may not talk about if I have time — but otherwise these processes kind of look like a process would look on, like, you know, Linux or whatever traditional operating system. But the kernel has, you know, numerous components called capsules, and those are effectively units that only sort of exist before compile time: once you compile them down to machine code, you know, these barriers between the capsules more or less go away.
B
B
Right, so, as I mentioned, the concurrency model is event-based, and in particular, what we're going to do is this: effectively, any time we have some actual event from the world coming in as a hardware interrupt, we'll just enqueue those all into the main kernel thread and sort of run them as they come in — you know, dole them out to whichever capsules are specified to process them — and then, importantly, the capsules never block on I/O.
B
So every time we get to a point in the capsule where we would otherwise sort of wait for, let's say, sending a packet to complete, we'll instead, you know, just return back to the main event handler and say, like, you know, "let me know when this packet is done, but meanwhile let other capsules do their thing, or just go to sleep or something." But then, you know, once we're actually running in one of these capsules —
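The single-threaded, cooperative event model described here can be sketched in miniature like this — my own illustration, not Tock's actual code (the `Event` variants and `PacketLogger` capsule are made up):

```rust
use std::collections::VecDeque;

// A minimal sketch of a single-threaded, cooperative event loop: interrupts
// enqueue events, and the main kernel thread drains the queue, doling each
// event out to the capsule registered to process it.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Event {
    PacketSent,
    TimerFired,
}

trait Capsule {
    /// Handle one event and return promptly — capsules never block on I/O.
    fn handle(&mut self, ev: Event);
}

struct PacketLogger {
    sent: usize,
}

impl Capsule for PacketLogger {
    fn handle(&mut self, ev: Event) {
        if ev == Event::PacketSent {
            self.sent += 1;
        }
    }
}

fn main() {
    let mut queue: VecDeque<Event> = VecDeque::new();
    // Pretend two hardware interrupts arrived:
    queue.push_back(Event::PacketSent);
    queue.push_back(Event::TimerFired);

    let mut capsule = PacketLogger { sent: 0 };
    // The main kernel thread runs events as they come in.
    while let Some(ev) = queue.pop_front() {
        capsule.handle(ev);
    }
    println!("packets sent: {}", capsule.sent); // prints: packets sent: 1
}
```

Because every handler returns instead of blocking, one thread and one stack can serve all the capsules — which is exactly the per-unit-of-isolation cost the talk is trying to avoid.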
B
And then we get isolation, because the Rust type system, you know, gives us this mechanism to effectively do things like hide private fields, or force interaction across module boundaries to go only through, like, a tight interface of a function or something like that — so you can sort of constrain even further than just which fields are exposed or not.
B
Okay, so what do these capsules look like? So the capsules are basically the things that live inside the kernel. As I mentioned, the kernel itself is sort of divided up into — hopefully as small as possible — this trusted computing base, and, you know, there are some things in there that are unsafe, but we don't necessarily have to do that much of that with the capsules, unlike the processes.
B
Okay, so this is an example of a capsule. This is pretty simplified, but it still gets the idea across, I hope — an example of basically a driver for a light sensor on one of these microcontrollers. So, let's look at this. You know, we have this data structure called LightSensor, which is modeling the functionality that we need here, and, you know, the first field is just this reference — basically just a pointer — to an I²C device.
B
I²C is just a communication protocol that a lot of microcontrollers use to communicate with sensors and stuff like that — but, you know, this could be, like, whatever, a TCP device or something like that; it doesn't really matter. But this I²C device is also a capsule, right. So if we want to do something like send a packet over I²C, we're just going to call a function, and that'll get compiled down to, you know, a jump instruction or something like that.
B
But that still doesn't mean that I can do anything I want with it — I'm, like, forced to go, in the case of the light sensor, through this relatively narrow interface of, like, whatever — start reading the lux, the light measurement, right. Which means that I can't just, like, arbitrarily change the current state of the protocol to be some incorrect thing, right — like, I have to go through this narrow interface. Okay.
B
So the result is that capsules are sort of untrusted for access — meaning they're only trusted to be able to access resources that are explicitly given, right. So they might be trusted to access, like, a virtual I²C device and go through a particular interface, but they can't just, like, reach over into some random place in memory and twiddle a bit or something. But they're not trusted for liveness, and this is sort of a consequence of the threading model, right.
B
You know, unlike something like a process model — where we can just sort of decide to switch away to another task, and our state is going to stay the same in the first one — we can't just, like, decide to preempt a capsule in the middle. And in fact, if a capsule decides to, like, spin on a while(1) loop or something, the system is screwed — so don't do that if you're writing a capsule. Do I have time to do memory grants? Barely. Okay, so — I said that the process model is more or less traditional.
B
One sort of difference here is this grant region at the top — that's just this little gray box that says "grant" in it. The reason for this is that one of the constraints that we have in the kernel — and this is pretty typical for microcontrollers — is that we want to totally avoid dynamic memory allocation in the kernel. That's sort of a typical engineering practice on microcontrollers —
B
just because there's relatively limited memory, and with heap allocation it tends to be hard to predict whether or not you're going to run out of memory — so it's kind of best to avoid it if we don't have a good way of, like, debugging this at runtime or something. Because, you know, it's some defibrillator — this is my body — that I can't, you know, get a core dump from or something. Anyway.
B
But if we have, you know, multiple applications, and if they're dynamically changing, it can often be convenient to nonetheless dynamically allocate stuff. So we have these grant regions, which are basically allocated in the process's memory, but are given access exclusively to some kernel capsules.
B
You know, we shouldn't have to wait until all of the capsules that might have had access to, like, some part of the grant region are done using it — we don't want to have to go through, like, some complicated dance to, like, you know, count all the references to some process's grant region. We want to be able to just wipe, you know, all this memory clean as soon as the process dies, so that we're not, again, using up a bunch of memory that could be reused for another process.
B
Okay — this is all what I just said. So the things to remember — or the one thing to remember — is that we have a single-threaded execution model; that will come into play as sort of our system constraint that we get to leverage to get a high-level property, right. So — I think I already said that, okay. So we want to enforce three invariants. The first is that allocated memory doesn't let capsules break the type system.
B
That's, you know, obvious, but important to state. Second, capsules can only access pointers to process memory in their grant region while the processes are alive — this is, again, a consequence of the fact that we want to be able to reuse that memory as soon as the process dies, to restart the process or to put another process there. And the core kernel needs to be able to reclaim that memory as soon as the process dies.
B
Okay, the problem is that, like, processes can die dynamically, right — at runtime — and we don't know how to predict that ahead of time. But Rust determines memory reclamation — so it would inject sort of the equivalent of mallocs and frees — statically; that's the whole point of the borrow checker, to figure out exactly at compile time when it gets to free things. So there's kind of this mismatch here. But the observation is that we actually can kind of use the type system to enforce very simple properties —
B
and if those simple properties interact with our system architecture in just the right way, then we can actually achieve sort of higher-level safety goals, like what we want for the grant regions. So this isn't the implementation, but it's the entire interface for grant regions, and we can actually look only at the types of these structures to see that we can get the properties that we care about from grants. Okay, so we have three structures here. The Owned structure is basically our version of Box.
B
So what this, you know, for<'b> means is that we know for sure that nothing is going to escape — that the Owned type is not going to escape the closure. And then the Allocator is basically the way that we allocate extra Owned stuff once we're inside of a grant region. Okay. So what are the invariants that we know from these types?
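A simplified sketch of the kind of interface being described — my own reduction, not Tock's actual grant code (the `Owned` and `Grant` definitions here are assumptions) — shows how a higher-ranked lifetime plus a `Copy` bound keeps grant data from escaping the closure:

```rust
// `Owned` is the handle to data inside the grant; its lifetime 'b is chosen
// by `enter`, so callers can never name it outside the closure.
struct Owned<'b, T> {
    value: &'b mut T,
}

struct Grant<T> {
    value: T,
}

impl<T> Grant<T> {
    /// The closure must work for *every* lifetime 'b (for<'b>), so it cannot
    /// smuggle the Owned handle out; and the return type must be Copy, so it
    /// cannot be the non-Copy Owned itself.
    fn enter<F, R>(&mut self, f: F) -> R
    where
        F: for<'b> FnOnce(Owned<'b, T>) -> R,
        R: Copy,
    {
        f(Owned { value: &mut self.value })
    }
}

fn main() {
    let mut grant = Grant { value: 0u32 };
    let after = grant.enter(|owned| {
        *owned.value += 7; // use the grant-allocated data inside the closure
        *owned.value // a plain u32 is Copy, so returning the number is fine
    });
    println!("{}", after); // prints 7
}
```

Once `enter` returns, the type system guarantees nobody still holds an `Owned`, which is exactly what lets the kernel wipe the region when the process dies.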
B
Well, as I said, we know that the 'b lifetime is existential — so once I've returned from the enter function, I know that nobody has a reference to the thing inside of the grant region, to the Owned type. I also know that Allocator and Owned don't implement Copy, and that's important, because the only thing I can return from the closure is something that's copyable, right.
B
Okay, so we know that Owned types can never escape the closure that's passed into enter, right — that's the thing that we cared about, so we know that. We also know that when the process scheduler is executing, all the capsules have returned. Why do we know that? Because we're in a single-threaded event model: the process scheduler runs sort of at the bottom of this — or at the top of the stack, whatever — at the very beginning.
B
So it's just, like, an invariant of the system that we've designed that when the process scheduler is running, we know for sure that there are no sort of ongoing capsules — that all capsules have returned. And so what that means is that when we've noticed that a process has died, in this sort of process scheduler, in the reclaimer, we know that we can reclaim all of its grant regions, right — we know that there aren't any capsules currently running that have sort of outstanding references to any grant regions.
B
We know that the Owned type — the stuff inside of the grant — couldn't have escaped anywhere else, because we've enforced that in the types, and so we can just wipe away the grant region by, like, overwriting it with zeros, or decrementing a reference count, or something like that. And then, you know, basically you can think of the stuff inside of there as being, like, deallocated lazily: when I try to enter the grant again, you know, there's just not going to be anything there.
B
Okay, so currently Tock runs mostly on this one platform — the details aren't super important — and in the implementation for this platform there are — I haven't counted exactly, but at least over a hundred capsules. So kind of the point of that is to imagine, like, you know — how would you do this with processes?
B
Okay, and the high bit is that in some cases you really need to use something like a capsule if you care about doing isolation, just for performance reasons. So this is sort of a comparison of the time it takes to do sort of a high-level operation that's communicating across units of isolation, depending on whether the unit of isolation is a process or a capsule.
B
So this is, like, literally six instructions — which is, like, a pointer dereference and a store, or something like that, with a couple of checks. But in other cases it doesn't really matter that much: like, for sending a large string over a serial port — which is the two bars on the very right — then sure, doing this over a process takes longer, but notice —
B
the huge gap between the bottom part and the top part of the graph: sending a large string over, you know, using DMA or something, where the operation itself takes forever — and so, like, it doesn't really matter that we would incur some latency by using a process. So, you know, the point is just that there are cases where you really need this level of isolation to be really cheap, and there are other cases where it matters less. These are some hard numbers comparing the overhead of capsules with processes.
B
The high bit is that there still is some overhead for doing things like capturing events from interrupts, and the reason is that, in order to enforce all the invariants that we want, we fundamentally have to be single-threaded. And so we're giving up on basically being able to, like, immediately respond to events in the hardware — which would take, you know, effectively no time, only the amount of time that the hardware takes to respond to the interrupt — versus, what is it, eight microseconds, relatively.
B
Okay, so there are a bunch of things that I didn't talk about, partially because I don't have time, partially because, like, I may not have — we may not have — the answers to these yet. So I think the really hard questions to answer as of yet are, like: okay, so we can use, in principle, the type system to sort of design these interfaces —
B
that would give us isolation, but, like, what are the principles that you use to make sure that this interface that you're exposing is, on the one hand, actually safe from a language perspective? It's actually very easy to design interfaces, if you're not careful, that would allow you to break the type system. And then, beyond just, like, language-level safety — how do you actually make these things secure? So, like —
B
if I, you know, expose a type-safe interface, but nonetheless, you know, inadvertently let some random driver, like, you know, read your crypto key or something like that from the hardware — like, you haven't really done very much. And we don't have good answers for that yet. I think that's, generally speaking, a hard question to answer in general — how to design these interfaces in safe languages.
B
B
G
B
The 30 kilobytes of flash is a little bit of a lie: to get to 30 kilobytes of flash for the kernel binary, basically we had to remove the definition of panic. That seems to be the main source of size for the binaries. Basically, lots of operations that could happen — like, for example, unwrap on an Option — could panic, and if you have the definition for panic, the binary will contain a bunch of —
B
formatted strings for panic to output, and that takes up a ton of space — like, in our case, I think it was twice the binary size without removing the definition of panic. In theory that's, like, not necessarily useful, so you could remove it; it turns out to not be very user-friendly to remove right now — you have to, like, change libcore, which you ought not to do. But yeah, Haskell binaries are big.
G
G
B
So, in order to do processes, we need some hardware mechanism. Basically, these are ARM Cortex-M chips, and the new variants of Cortex-M — it looks like from the last three, four years — have this mechanism called a memory protection unit, which is unlike a memory management unit in the sense that there's no virtualization, but basically — excuse me — you get to set read, write, and execute bits on, you know, relatively small regions, as small as, like, 32 bytes, up to the whole address space; but it's a flat memory address space.
B
So I didn't say binary blobs — although that is the state of the world, for the most part; we cleverly avoid using hardware that requires us to do that — but basically, no, you can't use them. The only way to use binary blobs is typically, if there's a driver that might be a binary blob, there's some, like, interface that you need to implement for your particular microcontroller, and we can put a shim layer there.
B
K
B
B
D
B
So, generally speaking, it's some, like, static array that you've allocated when you initialized the capsule. So — this is one of the limitations I didn't go over — there's this, like, big, awful, very unsafe configuration file that the platform integrator writes, that basically cobbles all these things together and statically allocates a bunch of buffers for each thing to use.
D
B
The other question was what the build toolchain is. So there are a couple of things that could mean, but I think, like, right now we have a bunch of terrible make files. We're also behind on Rust nightly by, like, a year, I think — no way, oh, is it? yeah — but other systems that do, like, other operating systems in Rust successfully use Cargo very nicely; that's what we hope to end up using. If that's what you meant, that's the answer.
B
B
A
D
A
J
L
the form of chromosomes, as you can see here on the left side. And if you look into these chromosomes, they are basically double helixes of two complementary DNA molecules that are kind of glued together, and each of these DNA molecules is a chain of so-called nucleotides, and a nucleotide is a molecule made of a phosphate, a sugar, and a base. And of these bases, there are four different bases: adenine, cytosine, guanine, and thymine.
L
So as a computer scientist you would just assign letters to those: A, C, G, and T. And, importantly, the sequence of these bases is the blueprint of the proteins that are appearing in your body, and so, from the computer science perspective, DNA is just a text over the alphabet ACGT. And the second important type of sequence are proteins — so, as I already mentioned, proteins are generated from the DNA.
A
L
D
L
being catalysts for reactions — chemical reactions — and so on. And yeah, so these 20 amino acids again make an alphabet for us over these letters here, and this is again a text we can deal with in bioinformatics. So in the end you always kind of want to analyze these sequences, compare them against each other, and so on — and this is usually a big-data problem.
L
We use succinct data structures and full-text indexes and so on. So let me talk about Rust-Bio. So, in bioinformatics software in general, there we have kind of a gap between what we want and what we have in reality, I would say. Since we have this big data, we surely want fast implementations, and since bioinformatics is more and more also involved in medicine — especially in personalized medicine — these implementations should be essentially bug-free, and we should really be able to rely on the results we get from them.
L
So my idea was kind of to let the Rust compiler do the work that the people in bioinformatics sometimes can't do, because they don't have enough time. So — I already said that they have limited time — so it might be counterintuitive to say, okay, they should use a language and a compiler which is so strict, where it's so much effort to get something to work.
L
But on the other hand, my impression is actually that you just shift the development time from the time you use for debugging some kind of script to the time you use for making it compile — making the compiler happy. And so my proposal is to just rely on the compiler, the linting system, and the ownership model to solve all the code-quality and bug issues you usually have with bioinformatics software, and that we're really suffering from. I think Rust can solve these problems. So let me come to Rust-Bio itself.
L
So, to become a bit more concrete, I actually picked one example that might show some features of Rust we use, but also provides a typical example of what you do in bioinformatics. So a very, very common task in bioinformatics is to align sequences against each other.
L
If you look at the example above, we have a rather long sequence here, A, and a shorter sequence, B, and a possible alignment of B against A would be that we first take the first three bases of A, then we have a deletion in this sequence here, then we take again three bases, then we have a substitution here, and then we have matches again. And this is a very, very common question in bioinformatics — for example, to find mutations. And so, technically, how do you do that?
L
A path from the top left to the bottom right. And yeah, so an optimal alignment would obviously be an alignment which has either minimal cost or a maximal score — so that would be a path here; this path would be the optimal alignment here. And yeah, so there are a couple of variants of the sequence alignment problem. So, obviously, you can define arbitrary costs for the edges, or scores.
L
Most prominent in bioinformatics are so-called affine gap costs, where it's actually expensive to open a gap — like an insertion or deletion here — but extending it is not that expensive; this comes from a biological intuition. And then you might have different costs whether you substitute a T against a C, or a G against a T, and so on — so defining these cost functions should be quite flexible. And second, there are different variants of alignments themselves.
L
So, as I already mentioned, an alignment would be a path from top left to bottom right — that's also called a global alignment, because it aligns the whole of sequence B against the whole of sequence A. But there's also a so-called semi-global alignment, where you just require a path from top to bottom.
L
That would mean that you align all of sequence B, but in sequence A you might just omit certain parts at the beginning and the end. And then there is also local alignment, which means just that you're interested in the top-scoring path within that graph, but it does not necessarily need to reach the top left and the bottom right. So these are the variants that we have. As I already mentioned, this can be implemented by dynamic programming.
L
Obviously, you just look at the previous column and the current column, investigate the scores, and determine the maximum score. If you do that until you have processed the whole graph (but without having to keep all of it in memory), you end up with the optimal alignment score. But you actually don't know what the actual alignment is; to get the actual alignment, you have to additionally store some kind of traceback.
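The column-by-column scoring just described can be sketched as a plain Rust function (a simplified linear gap model, not the Rust-Bio implementation). It keeps only the previous and current columns, so the score alone is cheap on memory; recovering the alignment itself would additionally need the traceback:

```rust
// Needleman-Wunsch scoring with a simple linear gap penalty (a sketch, not the
// Rust-Bio implementation). Only two DP columns are kept, so memory is linear
// in the sequence length even though the full graph is quadratic.
fn global_score(a: &[u8], b: &[u8], mat: i32, mis: i32, gap: i32) -> i32 {
    let mut prev: Vec<i32> = (0..=b.len() as i32).map(|j| j * gap).collect();
    let mut cur = vec![0i32; b.len() + 1];
    for (i, &ca) in a.iter().enumerate() {
        cur[0] = (i as i32 + 1) * gap; // a prefix of `a` aligned against nothing
        for (j, &cb) in b.iter().enumerate() {
            let sub = prev[j] + if ca == cb { mat } else { mis }; // diagonal edge
            cur[j + 1] = sub.max(prev[j + 1] + gap) // gap in `b`
                            .max(cur[j] + gap);     // gap in `a`
        }
        std::mem::swap(&mut prev, &mut cur);
    }
    prev[b.len()]
}

fn main() {
    assert_eq!(global_score(b"ACGT", b"ACGT", 1, -1, -1), 4); // all matches
    assert_eq!(global_score(b"ACGT", b"AGGT", 1, -1, -1), 2); // one substitution
}
```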
L
That gives you the optimal decision you took in each column, basically, and you can do that quite space-efficiently by using bit encoding. So, how is this available in Rust-Bio? Let's look at a little example. We have two texts defined at the top. Then we define a scoring function with a closure here for our substitutions, handling equal and unequal bases.
L
We allocate all these matrices, and the second technically interesting part of the alignment is how the actual dynamic programming algorithm is implemented. As I already said, there are different variants, like global alignment, semiglobal alignment and local alignment, depending on where your path shall be located in this graph. However, the dynamic programming algorithms look very similar, and therefore I decided to make use of the Rust macro system to avoid code duplication.
L
So I won't go into all the details here, but basically you see in this line the outer loop, which iterates over the columns, and then you have an inner loop which iterates over the rows of each column. What's in here is the real dynamic programming, where you access the previous column, calculate the optimal scores of your current column, and so on. And then the macro allows certain sub-blocks here.
L
So, for example, there's an initialization block that appears before each column is processed; then there's an inner block, which appears within each row of a column; and then there's an outer block that appears after one column has been processed. These three blocks can basically be used to encode any kind of alignment we want to have. The first case is the global alignment.
L
In that case, we just need to use the initialization block to set up the dynamic programming matrices in a certain way, and also the traceback matrix; the inner block and the outer block can essentially be empty here. And for the semiglobal alignment, we just use the outer block.
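A minimal sketch of that macro idea, with caller-supplied init/inner/outer pieces spliced into one shared loop skeleton (a simplification of the concept, not Rust-Bio's actual macro):

```rust
// One macro generates the column/row loop skeleton shared by all alignment
// variants; the caller splices in an `init` step (before each column), an
// `inner` step (per cell) and an `outer` step (after each column). A
// simplification of the idea, not Rust-Bio's actual macro.
macro_rules! align_loop {
    ($a:expr, $b:expr, $init:expr, $inner:expr, $outer:expr) => {{
        for i in 0..=$a.len() {
            $init(i); // e.g. initialize matrices for a global alignment
            for j in 0..=$b.len() {
                $inner(i, j); // the real DP recurrence would live here
            }
            $outer(i); // e.g. track the best score for a semiglobal alignment
        }
    }};
}

// Count visited DP cells to show the skeleton covers the whole matrix.
fn count_cells(a: &[u8], b: &[u8]) -> usize {
    let mut cells = 0usize;
    align_loop!(a, b, |_i| {}, |_i, _j| cells += 1, |_i| {});
    cells
}

fn main() {
    // (|a| + 1) * (|b| + 1) cells, matching the alignment graph above.
    assert_eq!(count_cells(b"ACG", b"AG"), 4 * 3);
}
```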
L
So this is the alignment module of Rust-Bio. Next, I want to show some general-purpose data structures that are also available in Rust-Bio and that are very often used in bioinformatics. The first is bit encoding. As already mentioned, we need that, basically, to encode the traceback matrices in the alignment. The idea is that you sometimes want to store integers in a kind of vector, but these integers don't use the full eight bytes you would at least need for that.
L
Therefore, you can encode them, for example, with two bits instead, which is enabled by the bit-encoding data structure in Rust-Bio. You just initialize it with the number of bits used for encoding, and then you can push values, which would panic if a value exceeds the available bits for encoding it. You have random access, and you can also iterate over the values.
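A stripped-down sketch of such a bit-encoding vector (Rust-Bio's BitEnc follows the same idea but is more complete; the code below is illustrative):

```rust
// A stripped-down bit-encoding vector: values are packed into a Vec<u8> with a
// fixed number of bits each. This sketch panics (via assert) when a value
// needs more bits than the structure was configured with.
struct BitEnc {
    bits: usize,      // bits per value
    storage: Vec<u8>, // packed bits
    len: usize,       // number of stored values
}

impl BitEnc {
    fn new(bits: usize) -> Self {
        assert!((1..=8).contains(&bits));
        BitEnc { bits, storage: Vec::new(), len: 0 }
    }

    fn push(&mut self, value: u8) {
        assert!((value as u32) < (1u32 << self.bits), "value exceeds {} bits", self.bits);
        for k in 0..self.bits {
            let bit = self.len * self.bits + k;
            if bit / 8 >= self.storage.len() {
                self.storage.push(0);
            }
            if (value >> k) & 1 == 1 {
                self.storage[bit / 8] |= 1 << (bit % 8);
            }
        }
        self.len += 1;
    }

    fn get(&self, i: usize) -> u8 {
        let mut v = 0;
        for k in 0..self.bits {
            let bit = i * self.bits + k;
            if (self.storage[bit / 8] >> (bit % 8)) & 1 == 1 {
                v |= 1 << k;
            }
        }
        v
    }
}

fn main() {
    let mut enc = BitEnc::new(2); // 2 bits per value: enough for A, C, G, T
    for &v in &[0u8, 1, 2, 3, 3, 0] {
        enc.push(v);
    }
    let decoded: Vec<u8> = (0..6).map(|i| enc.get(i)).collect();
    assert_eq!(decoded, vec![0, 1, 2, 3, 3, 0]);
    assert_eq!(enc.storage.len(), 2); // six 2-bit values in two bytes, not six
}
```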
L
This is just implemented by a vector of bytes, and we encode with however many bits are provided to the constructor of the object. The second data structure I want to mention is a small-integers data structure. The idea is that, at least in bioinformatics, but I guess it's also a common problem in other fields, you sometimes have vectors of millions of integers, and among these, most of them are actually very small.
L
You could maybe encode them in a byte, but some of them are bigger, and you just don't want to waste the space for all of the integers just because some of them are bigger. Therefore, we offer the SmallInts data structure, which you can parametrize with the types for the small and for the big integers. In this case, it's a byte for the small and u64 for the big integers, and then it allows you to push values, either small ones or big ones.
L
It allows random access and iteration, and obviously it won't store all of them in the same data structure. So how is that implemented? The idea is that we have a struct with a vector for the small integers and a B-tree map for the big integers. Basically, the B-tree map assigns indexes of the small-ints array to the big integers stored in there, and the small-ints array just contains all the small integers.
L
We encode the case that an integer is too big for the small-integer array by storing the maximum value of the small type S that we defined here. So whenever we query and hit a value that is the maximum of the type S, we have to look up the actual big integer stored in the B-tree map, and by that you can save a lot of space, basically compressing your data structure.
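The sentinel scheme just described might look like this in a stripped-down form (fixed u8/u64 types instead of Rust-Bio's generic parameters):

```rust
use std::collections::BTreeMap;

// Small values live in a Vec<u8>; a value that does not fit is replaced by
// u8::MAX in the vector as a sentinel, and the real value goes into a BTreeMap
// keyed by its index. Rust-Bio's SmallInts is generic over both integer types;
// u8/u64 are fixed here for brevity.
struct SmallInts {
    small: Vec<u8>,
    big: BTreeMap<usize, u64>,
}

impl SmallInts {
    fn new() -> Self {
        SmallInts { small: Vec::new(), big: BTreeMap::new() }
    }

    fn push(&mut self, v: u64) {
        if v < u8::MAX as u64 {
            self.small.push(v as u8);
        } else {
            self.big.insert(self.small.len(), v); // real value, by index
            self.small.push(u8::MAX);             // sentinel
        }
    }

    fn get(&self, i: usize) -> u64 {
        let s = self.small[i];
        if s == u8::MAX { self.big[&i] } else { s as u64 }
    }
}

fn main() {
    let mut v = SmallInts::new();
    for &x in &[1u64, 7, 1_000_000, 3] {
        v.push(x);
    }
    let back: Vec<u64> = (0..4).map(|i| v.get(i)).collect();
    assert_eq!(back, vec![1, 7, 1_000_000, 3]);
    assert_eq!(v.big.len(), 1); // only one entry pays the full 8 bytes
}
```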
So the last thing I want to mention is a rank/select data structure.
L
So I guess many of you might know that, but I want to briefly introduce it. Rank and select are operations on bit vectors. So there's a bit vector here: you have two bits of value one, and the rest is zero. Then the rank would be what you see here. Basically, the rank is the number of 1-bits that occur up to the current position. In this case, we have zero 1-bits occurring at the first position; then we have a 1-bit here.
L
So the rank of this position is 1, and then, following that, all the other elements have rank 1 as well, until we reach the second 1-bit, where the rank increases to 2, and so on. This is an important data structure where you want to look up indexes of bits, for example in index data structures that accelerate search. In Rust-Bio, this is implemented using the bit-vec crate.
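The rank semantics can be illustrated with a naive linear scan (real rank/select structures precompute per-block counts so a query is constant time; this sketch only shows the definition):

```rust
// rank(i) = number of 1-bits at positions 0..=i. A naive linear scan to show
// the semantics; real rank/select structures answer this in O(1) using
// precomputed block counts.
fn rank1(bits: &[bool], i: usize) -> usize {
    bits[..=i].iter().filter(|&&b| b).count()
}

fn main() {
    // Two 1-bits, as in the example: positions 1 and 4.
    let bits = [false, true, false, false, true, false];
    let ranks: Vec<usize> = (0..bits.len()).map(|i| rank1(&bits, i)).collect();
    assert_eq!(ranks, vec![0, 1, 1, 1, 2, 2]); // rank jumps at each 1-bit
}
```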
L
First of all, right now we use type aliases to the standard library wherever possible; for example, texts in Rust-Bio are just byte slices or the corresponding iterators. In the future, we want to look into using newtypes where the distinction makes sense. For example, we have an implementation of suffix arrays, which are essentially vectors of integers. But of course, you can imagine: such a suffix array gives you some information about your text, basically the sorting of the suffixes of the text, and if you just modify it,
L
it's not actually a suffix array anymore, and we want to use newtypes to encode this behavior, basically. And as I mentioned, we have a log-space probability implementation, and we might use newtypes for that as well, and for all kinds of other data structures where newtypes make sense.
The second thing we want to look into is parallelization; right now, we just care about it in terms of implementing Send wherever needed, and we will do more in the future.
L
So, finally, I want to thank all the contributors, most importantly Adam Perry and Taylor Cramer, who have recently been doing a lot of work on Rust-Bio, and I also want to thank the authors of the very useful crates we make use of in Rust-Bio, most importantly the csv and approx crates, among others. Thank you very much.
K
L
Yeah, so actually I didn't benchmark it. At the time I implemented it, I was just excited about these macros, obviously. But you see, we really have this thing in the inner loop, and there we would have an additional function invocation in the case of using a trait, and I just thought the macro might be an additional performance benefit. And the macros are not that complicated, so I just tried it out and it worked out nicely. It's just about getting rid of an additional function call.