From YouTube: Peeking at compiler-internal data (for fun and profit)
Oliver Scherer (presented by Niko Matsakis)
A: Oh, okay. So what we wanted — what Ollie was going to talk about — is poking at compiler-internal data. What this really means is: if you wanted to write an extension to rustc, for example to do extra static analysis, or to look at the code that's being compiled and check for patterns, things like that, we're going to talk through how you're supposed to do that, and the patterns for doing it today.
A: So, right, like I said, Ollie's not actually here — he's becoming a father. I've said it a few times, but I had to include this adorable little picture. So you have me instead. I've been working on Rust for about ten years; I'm the lead of the language design team, I had a lot to do with building the compiler, and I've been involved in the project in all kinds of ways. I also took over Ollie's slides, added a bunch of animated GIFs, and planned to show you all his private notes to himself, because they're really cute and I like them. As for you, the listener: I'm imagining that you are not, in fact, here to play Rust the game.
A: I get a lot of requests about Rust the game. I've never played it, but I found out just today that it's one of the cruelest games, and it looks like Doom with a guitar or a mandolin or something, so I'm kind of intrigued. But you're probably here because you are interested in Rust code and you want to analyze Rust code.
A: It's not exactly obvious how to do that. In the past it was very difficult, and the result was often really hard-to-maintain code. So I was hoping that in this talk people would get the tools for building a wrapper around the Rust compiler the way it was meant to be done, and that this will be helpful in various ways. So, right.
A: What I'm going to do is cover a few things. Why would you want to integrate with the compiler? What are some of the things you might do with that? What's the best way to do it, and what might this look like in the future?
A: Oh — and this is one of Ollie's notes to himself. It's probably the only one I actually included, but it's so great. Let Ollie tell you about his PhD thesis — it's on exactly this topic. But you should wait; you can ask him in a few weeks, when, you know, his life is completely back to normal. That's a little joke for all you parents out there. All right, so you might be wondering: why does it matter how you integrate with the compiler anyway?
A: The main reason is that the compiler and your tool will stay more in sync, but you'll also leverage the compiler to do all the annoying work like parsing and type checking — certainly a lot easier than writing a fresh compiler from scratch. And the last reason: the compiler APIs get improved. What that means is that when a lot of people build things on the compiler, they in turn, I hope, can come and participate in Rust as an open-source project — participate in compiler development and help steer it so that it gets easier and better to use, and provide feedback on these APIs. The APIs are decent right now in some ways, and not so much in others, but they didn't start out as decent as they are now. They started out much more painful, and they've been shaped by people doing things with them over the years, so I'm pretty excited to see us making progress here.
A: I will give a warning that right now this is pretty much a secondary use case. It's not something we prioritize when building the compiler. In other words, we're going to land features that change things, and you're going to have to rebase, and that's okay. But I think over time there might be room for us to build some sort of — okay, now I'm getting into the future; I'll just wait.
A: We can maybe go into a bit more detail. What we're going to do together is write a little Rust compiler that runs a custom lint and detects comparisons like `x == x`, which is kind of a pointless comparison. Then we can give a nice friendly error message of the kind that Rust is famous for — oh, you can't even see it running... there we go, something like this — to help people learn how to fix their code.
A: Now, these examples work with this particular version of the Rust compiler. At some point we'll send out the links to these slides, and you can click through and get the sources. I also want to point people to the rustc dev guide, which, if you don't know it, has all kinds of information about how the Rust compiler works.
A: Really cool. So the first thing you want to do is get rustc as a library. We don't distribute this by default; you have to add it using `rustup component add` with `rustc-dev` and `llvm-tools-preview` — these are two different components. This will install the libraries onto your system, and then you can start to pull these things into your crate.
A: Note that you have to enable the feature `rustc_private`. Feature gates, if you're not familiar with them, are Rust's way of telling you that you're using something for which API stability is not guaranteed — if you upgrade to a new version of rustc, a new version of Rust, your code may no longer build. In particular, all of these crates are unstable, and they will probably remain that way, at least for the time being. So, to build your own compiler:
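As a sketch, the setup described here looks something like this. It requires a nightly toolchain, and the exact set of `extern crate` lines depends on which compiler internals you end up using:

```
rustup component add rustc-dev llvm-tools-preview
```

```rust
// In the crate root: opt in to the unstable compiler-internal APIs.
// None of these crates have any stability guarantee.
#![feature(rustc_private)]

// The rustc-dev component makes the compiler's own crates linkable:
extern crate rustc_driver;
extern crate rustc_interface;
extern crate rustc_lint;
```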
A: You start by declaring a struct — usually it's kind of an empty struct, like `MyCallbacks` here — and implementing the `Callbacks` trait. Let me just back up... there we go. The `Callbacks` trait is a trait with a bunch of little hooks where rustc is going to call you at various points in the compilation cycle.
A: So this is what the code is going to look like. Then you'll have a main function with a signature somewhat like this: you collect the arguments — these are the arguments to the compiler, like the command-line arguments — instantiate your callbacks, and run `rustc_driver::RunCompiler::new` with the command-line arguments and your callbacks. And, yeah, you now have rustc. The only thing is that this isn't that interesting, because you've just reproduced rustc — you haven't made it do anything different from a normal, plain, vanilla rustc.
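Putting that together, a minimal driver might look roughly like this. This is a sketch against the nightly-only `rustc_driver` API from the era of the talk (`RunCompiler::new`, as named on the slides); exact signatures have shifted between nightlies:

```rust
#![feature(rustc_private)]
extern crate rustc_driver;

// An empty struct: the default Callbacks impl adds nothing,
// so this driver behaves exactly like plain vanilla rustc.
struct MyCallbacks;
impl rustc_driver::Callbacks for MyCallbacks {}

fn main() {
    // Forward our command-line arguments straight through to rustc...
    let args: Vec<String> = std::env::args().collect();
    // ...and run a full compilation with our callbacks hooked in.
    rustc_driver::RunCompiler::new(&args, &mut MyCallbacks)
        .run()
        .unwrap();
}
```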
A: So that's where the `Callbacks` trait comes in. If you want to customize compilation and so forth, you can take a look at this trait. Now I know you're interested.
A: This is where it starts to get fun. We actually have the API docs for rustc — the internal API docs; they're not part of the standard API, but if you go to the rustc dev guide you'll find a link to them. In there you can find this `Callbacks` trait and see all the different methods it offers. It basically lets you be invoked at various points in the overall compilation process, and the first method, `config`, in particular lets you configure different hooks and things like that that rustc offers. We're going to look at `config` for our purposes, for building this lint we're writing. So we can add the `config` method, and inside it we get mutable access to rustc's configuration. If we pop up the rustdoc for what the configuration offers, you'll see it has a whole bunch of stuff — various options.
A: The crate configuration, file loaders — we'll talk about some of these, and so on. What we're going to do is set this callback called `register_lints`. I don't recall if it was on that slide or not, but what this is is a callback that will get invoked in order to register the lints — when the compiler is ready to learn what lints are available. It's a callback, so it's a closure. It takes as argument this thing `ls` — I think this is a bug in the slides, because I think this `ls` is the lint store; these should be the same name, whatever it is. The lint store is basically some metadata about what lints are registered, and on the lint store you can call this method `register_late_pass`. I'll explain more about `register_late_pass` in a second, but basically you're adding an active lint into the system, and what you return from this closure is your lint definition — `MyLint`, some struct that you've defined. We'll get to that, okay.
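The `config` hook plus `register_lints` wiring might look roughly like this — again a nightly-internal sketch, with field and method names approximately those from the era of the talk:

```rust
impl rustc_driver::Callbacks for MyCallbacks {
    fn config(&mut self, config: &mut rustc_interface::interface::Config) {
        // Install a closure that rustc invokes when it sets up its lints.
        config.register_lints = Some(Box::new(|_session, lint_store| {
            // Add our lint as an active late lint pass; the closure
            // returns the lint definition (MyLint, defined elsewhere).
            lint_store.register_late_pass(|| Box::new(MyLint));
        }));
    }
}
```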
A: So what is `MyLint`? Well, every lint in Rust is defined by some type, and usually it's an empty struct like this, because it doesn't have any state to pass in — though it might have some configuration or whatever — and you implement this trait, `rustc_lint::LintPass`.
A: A lint pass basically means: when the lint phase of the compiler is running, we're going to call your methods on this trait and say, go ahead and scan the code and look for problems. So you can register a lint pass, you can give it a name that'll show up in compiler diagnostics, and so on — there are some other methods — and then you can register one of various different kinds of lint passes. The kind we're interested in is the late lint pass, and that means it runs relatively late in the cycle.
A: The first thing the compiler produces is sort of more like a concrete syntax tree, but it's really an AST, and it hews pretty close to what the user actually typed. Then we do macro expansion, and at that point you get kind of what the user wrote, but with macros expanded — that's the AST. Then there's a lowering step that produces something called HIR, and HIR is also an AST.
A
But
it's
the
high
level
ir
it's
not
really
what
the
user
wrote.
It
has
a
bunch
of
expansions
and
simplifications.
It
doesn't
have
like
it
has
extra
data
with
sometimes
instead
of
storing
the
path
it'll
have
the
id
of
the
item
that
was
referenced
or
it
has
like
for
loops
are
expanded
into
while
loops
things
like
that.
Some
amount
of
simplifications
have
been
done
and
that's
what
we
use
for
things
like
type
checking
and
some
other
operations,
and
then
we
lower
that
into
mirror
mirror
is
basically
a
control
flow
graph
based
code.
A
It's
it's
sort
of
like
jvm,
analogous
to
like
jvm
byte
code
is
to
java,
as
mirrors
to
here,
so
you
can
still
pretty
easily
recognize
the
russ
code
that
produced
it,
but
it's
much
much
simpler
and
that's
a
lot
of
static
analysis
in
particular
is
really
interested
in
mir,
because
it's
a
great
target
that
was
kind
of
what
it
was
designed
for
and
we
do
some
amount
of
optimization.
This
is
also
the
borrow
checker
runs
on
mir,
for
example,
and
then
llvm
eventually
mirror
gets
translated
to
lvm.
A
This
is
where
monomorphization
happens,
which
means
that
we
generate
multiple
for
every
generic
function
defined
in
mir.
We'll
make
multiple
llbm
functions,
one
for
each
set
of
types
with
which
it's
instantiated
and
I
don't
know
what
happens
there-
a
bunch
of
stuff,
but
eventually
it
produces
some
executable.
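The idea of monomorphization can be seen from ordinary safe Rust: one generic definition becomes a separate compiled copy per concrete type it's used with. A small illustration (`std::any::type_name` just makes the per-type instantiation visible):

```rust
// One generic definition in the source...
fn instantiated_as<T>(_value: T) -> &'static str {
    // ...but each use with a different T is a separate compiled copy,
    // which is why this reports a different type name per call site.
    std::any::type_name::<T>()
}

fn main() {
    println!("{}", instantiated_as(1_i32)); // prints "i32"
    println!("{}", instantiated_as(true)); // prints "bool"
}
```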
A: So, to bring this back to lints: the reason we have different kinds of lint passes, like the late lint pass, basically has to do with which of these IRs you get access to and how much data there is. If you use an early lint pass, you get access to the AST even before macro expansion. That can be really useful, because you have something much closer to what the user typed, but the consequence is that you don't have access to the results of type checking, because it hasn't been done yet — for that you'd have to use the HIR. There are ways to connect back and forth between these that I won't go into, but for our purposes we're going to run on the HIR; that's probably the best.
A: As a rule of thumb, if you don't need access to the AST, you should prefer a late lint pass. So we're going to implement this late lint pass. One of its methods is called `check_expr`. This is basically a visitor pattern: it's walking down over all the functions in the crate, and you can kind of interrupt it at any point.
A: Actually, I don't think you can interrupt it — I don't remember; I don't think you can with this trait, or maybe there's a way, but not with this method. You just get a callback at each point. We're going to look at every expression as we pass by, and that's what we're going to put inside the body here.
A: Do I go backwards? Oh, okay — you would have this function, and then the body would look like this. Here we're asking: is this a binary expression? The expression is the thing we're visiting right now; we're doing a quick match to check whether it's a binary expression. If you recall, we were looking for `x == x`, so we'll check whether it's a binary expression and compare: is the left side's expression kind equal to the right side's?
A
It's
not!
This,
isn't
really
what
your
code
should
look
like,
but
it
gives
you
the
idea
and
if
it
is
equal,
we'll
issue
a
diagnostic
oops.
I
thought
I
had
a
little
more.
Let
me
jump
back,
so
this
isn't
really
what
your
code
would
look
like,
because,
first
of
all,
just
comparing
things
for
equality
is
usually
a
bad
idea.
Things
get
oftentimes,
things
are
not
literally
equal,
but
they
represent
semantically.
The
same
thing
like
two
references
to
the
same
variable
actually
have
different
spans
different
locations
in
the
source
code.
A
Stuff
like
that
and
there's
there's
better
ways
to
do
it
and
you
can
look
at
like
clippy,
for
example,
if
you
want
to
see
the
this
lint
is
an
actual
equivalent
and
there's
a
better
implementation
there.
I
also
didn't
show
you
how
to
call
diagnostic
apis
and
all
the
other
stuff,
that's
in
the
rusty,
dev
guide
or
ping,
or
come
on
zulip
and
ask
people
but
but
yeah.
It's
really
kind
of
roughly
this
simple,
though
that
you'll
get
a
callback
and
you'll
do
some
stuff.
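The shape of that check can be modeled outside the compiler. This toy `Expr` type and `check_expr` walk are purely illustrative — they are not rustc's `ExprKind` — but they show the same structure: visit every expression, match on binary operations, and flag ones whose operands look identical (with the same caveat as above: naive structural equality is not how a real lint should compare):

```rust
// A miniature expression tree, standing in for rustc's much richer HIR.
#[derive(Debug, PartialEq, Clone)]
enum Expr {
    Var(String),
    Binary { op: String, lhs: Box<Expr>, rhs: Box<Expr> },
}

// Visit every expression and report `x == x`-style comparisons.
fn check_expr(expr: &Expr, warnings: &mut Vec<String>) {
    if let Expr::Binary { op, lhs, rhs } = expr {
        // Naive structural equality; real lints compare semantically,
        // ignoring spans and other location data.
        if op == "==" && lhs == rhs {
            warnings.push(format!("pointless comparison: {:?} == {:?}", lhs, rhs));
        }
        check_expr(lhs, warnings);
        check_expr(rhs, warnings);
    }
}

fn main() {
    let expr = Expr::Binary {
        op: "==".into(),
        lhs: Box::new(Expr::Var("x".into())),
        rhs: Box::new(Expr::Var("x".into())),
    };
    let mut warnings = Vec::new();
    check_expr(&expr, &mut warnings);
    assert_eq!(warnings.len(), 1);
    println!("{}", warnings[0]);
}
```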
A: So let's look at some of the other parts of the config, because there's a lot of really cool stuff you can do. I showed you how to add a lint pass — that was `register_lints` — but there are all these others. These are the big, super-powerful options you have. So, the file loader: rustc never talks directly to the file system. Instead, it talks to a virtual file system via this file-loader API, and you can supply the data however you want. rustc will basically say, "give me the file with this path," and you say, "here you go, here's some bytes."
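That virtual-file-system idea can be sketched with a simplified stand-in trait — this is not rustc's actual `FileLoader` interface, just the shape of it: the compiler asks a loader for bytes by path, and the loader can answer from anywhere, such as an in-memory map:

```rust
use std::collections::HashMap;

// Simplified stand-in for the file-loader idea: the compiler never
// touches the disk directly, it asks a loader for contents by path.
trait FileLoader {
    fn read_file(&self, path: &str) -> Option<String>;
}

// An in-memory loader: "here you go, here's some bytes".
struct InMemoryLoader {
    files: HashMap<String, String>,
}

impl FileLoader for InMemoryLoader {
    fn read_file(&self, path: &str) -> Option<String> {
        self.files.get(path).cloned()
    }
}

fn main() {
    let mut files = HashMap::new();
    files.insert("main.rs".to_string(), "fn main() {}".to_string());
    let loader = InMemoryLoader { files };
    assert_eq!(loader.read_file("main.rs").as_deref(), Some("fn main() {}"));
    assert_eq!(loader.read_file("missing.rs"), None);
    println!("loaded main.rs from memory");
}
```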
A: Whoa — okay, let me pop back. The next one is `override_queries`, and `override_queries` is really cool.
A: All of rustc is built on these things called queries, and the basic idea is that it's a demand-driven compilation system. Instead of working like a normal compiler — a dragon-book compiler or something — where it proceeds like the diagram I showed you, "I will do parsing, then I will do type checking, then I will do this," what rustc really does is start from the end and ask: what do I want? I want generated code — that's codegen. Then codegen calls the things it needs: well, if I'm going to produce generated code, I'd better have MIR that's been optimized, so that I can translate it to LLVM code. Well, to make optimized MIR, what do I need? I need MIR that's been statically analyzed. To make statically analyzed MIR, what do I need? I need MIR that's been constructed. To make MIR, I need HIR — and so on, all the way down to parsing. The reason we do it this way is that it allows us to do incremental recompilation: we cache the results of these queries across compilations, and it might be that instead of re-executing the code, we just give you back the deserialized result. What this means for you, in your tool, is that we allow you to take the implementation of one of these queries — kind of like OO specialization or subtyping — and just swap it for something else. And this can totally mess up everything.
A
If
you
do
this
wrong,
because
it's
on
you
to
call
the
old
one
or
you
know,
generate
the
right
results
and
I
think,
probably,
if
you
try
to
use
incremental
results
that
have
different
definitions
of
queries,
we're
not
going
to
detect
that
I
don't
know
exactly
what
happens.
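The demand-driven shape — "to produce X, I ask for the things X needs, and finished results are cached" — can be sketched with an ordinary memoization table. This is only a model of the idea, not rustc's query system:

```rust
use std::collections::HashMap;

// A tiny demand-driven "compiler": each stage demands the stage it needs,
// and finished results are cached so repeated demands do no extra work.
struct Queries {
    cache: HashMap<&'static str, String>,
    executions: u32,
}

impl Queries {
    fn run(&mut self, name: &'static str) -> String {
        if let Some(hit) = self.cache.get(name) {
            return hit.clone(); // incremental reuse: serve the stored result
        }
        self.executions += 1;
        let result = match name {
            "parse" => "ast".to_string(),
            // type checking demands the parse result, and so on down the chain
            "typeck" => format!("typed({})", self.run("parse")),
            "mir" => format!("mir({})", self.run("typeck")),
            "codegen" => format!("exe({})", self.run("mir")),
            _ => panic!("unknown query"),
        };
        self.cache.insert(name, result.clone());
        result
    }
}

fn main() {
    let mut q = Queries { cache: HashMap::new(), executions: 0 };
    // Start from the end: demanding codegen pulls in everything it needs.
    assert_eq!(q.run("codegen"), "exe(mir(typed(ast)))");
    assert_eq!(q.executions, 4);
    // A second demand is served entirely from cache.
    q.run("codegen");
    assert_eq!(q.executions, 4);
    println!("all queries cached");
}
```

Swapping the body of one `match` arm for your own logic is, loosely, what `override_queries` lets you do to the real compiler.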
A: This is too powerful to really be stable, but it's perhaps the right foundation. It would allow you to do things like hooking yourself in to provide advice — if you think of aspect-oriented programming, if that's a term that still means anything to anyone — around different queries, things like that. So you might say: every time MIR gets optimized, I'd like to look at it. You could do that. So yeah, `override_queries` basically lets you get access to the original query definition, and you can modify the input and output — whatever you want to do. Really cool. Here are some queries you might like to look at: `layout_of` computes the layout of a type, like "this field is at this offset in memory"; `optimized_mir` optimizes the MIR; `mir_built` is accessing the MIR before it's even been checked by the borrow checker; you could do codegen... whoa. Well, you know, I had edited these slides, but I guess it didn't work. Oh wait — I know, I didn't reload.
A: That's all right, we're going to keep going. You missed some really amusing — or really improved — you missed some animated GIFs, probably, is actually what I mean. That's okay. So that's a brief summary of some of the configuration options at your disposal, and I want to talk a little bit now about some examples and ways to integrate. I talked about building your own binary front end to do lints, and there are some.
A: Another thing you can do: we're building up an interface now — it's still in flux, but it's getting better — for supplying your own back end. Traditionally we have had the LLVM back end; there's active work on a Cranelift-based back end — Cranelift is a compiler that was started at Mozilla for doing WebAssembly — and there are lots of other interesting potential back ends one could imagine. So that is a very nice place to hook in for certain applications.
A: One thing you'll find is that initially everything was written for LLVM, and a lot of that is code you would want to use; some of it has been factored out into independent utilities, and some of it hasn't. As one very simple example: type layout. We used to compile directly from Rust types to LLVM types, and then we added an intermediate layer — for a variety of reasons, but one of them is that Cranelift can now access it and know what the layout of a type is, as can other things. An example would be Miri. Miri is a MIR interpreter.
A: It does a whole bunch of stuff — you can look at its source and see how it does all these things. But, you know, it hooks in after the analysis is done, finds the start symbol, and starts to interpret the MIR. It needs to use things like type layout to figure things out, because the MIR will just say "give me this field," so it needs type layout to find out what the offset is — it's actually interpreting memory kind of at the byte level.
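What "layout of a type" means — each field at a computed offset — can be observed from ordinary Rust, at least for `#[repr(C)]` types, where the layout rules are documented and fixed:

```rust
// With repr(C), layout follows C rules: `a` sits at offset 0, then padding
// is inserted so that `b` lands at its 4-byte alignment, i.e. offset 4.
#[repr(C)]
struct Pair {
    a: u8,
    b: u32,
}

// Compute a field offset by pointer arithmetic.
fn offset_of_b(p: &Pair) -> usize {
    (&p.b as *const u32 as usize) - (p as *const Pair as usize)
}

fn main() {
    let p = Pair { a: 1, b: 2 };
    assert_eq!(offset_of_b(&p), 4);
    assert_eq!(std::mem::size_of::<Pair>(), 8);
    println!("b is at offset {}", offset_of_b(&p));
}
```

The compiler's `layout_of` query answers exactly this kind of question, for every type, including the default (non-`repr(C)`) layout, which the compiler is free to choose.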
A: Among others, Cranelift is an alternative codegen back end that's being developed now — I mentioned it already. It's a good thing to look at if you wanted to swap in a new back end targeting a system that's not LLVM.
A: I think that can even be done with dynamic loading — though I'm not that familiar with that system. And the final thing is: the community helps. This is meant to be looking forward — what might things look like in a little while — and I'm going to go quickly through it. There are a few initiatives going on. One of them is librarification; this links to a blog post I wrote about it. The basic idea is that we would like to take the compiler and break it up into reusable crates, because it's sort of silly for everyone to reimplement the logic, and we're working on that. We've got a few prototypes of things: Chalk is a system for doing trait solving; Polonius does the borrow checking; and there's rust-analyzer — rust-analyzer is the IDE implementation that's most widely used, or, I don't know about the most, it's one very widely used IDE implementation.
A
It
re-implements
a
lot
of
the
compiler,
but
it
doesn't
have
to
re-implement
trade
solving
because
it
can
use
chuck,
which
is
a
reusable
library.
Unfortunately,
rusty
doesn't
use
chalk
at
least
outside
of
experimental
integration,
not
yet,
but
we're
working
towards
that.
But
when
it
does,
then
it
would
be
you'll
have
the
rust,
analyzer
and
rusty
both
share
the
same
trade
solver,
which
would
make
greater
consistency
across
the
user
experience,
and
the
goal
is
to
kind
of
bring
more
and
more
parts
of
the
compiler.
A: So, besides librarification: just giving us feedback — your thoughts and ideas for how we can improve — or joining us. And I shouldn't say "we," because I would like you to be part of the "we": how all of us together can build up better APIs and generally improve this system. That would be awesome.
A: So, in summary: rustc is a library — it's becoming, maybe, multiple libraries. I also hope — personally; I wouldn't say this is a consensus amongst the team, but I think it should be — that we would eventually like to have some kind of developed and stable-ish APIs. But in the meantime you should still use it, and it will make your life better if you're trying to analyze Rust code. All right, so with that I am finished. We have a few minutes for questions, and then we'll go on to the next talk.
A: So — I'm going to pull up the chat. I see I have a few missed notifications; oh, sorry about that. Yeah, so feel free to raise your hand or speak; I'd prefer if people drop something in the chat first, but...
C: Hello. If one was to write a compiler from Rust to something else — basically, what I want is a compiler from MIR to something else, for static analysis — what would be the best approach? I've explored different approaches already. One is using what was developed at Gala, which extracts information from MIR, because we ultimately develop in OCaml. Another solution is to hook into rustc using callbacks, hopefully find when the MIR is being generated, take that into our AST, and then write it out somewhere — we did that for OCaml. What would be your approach for that?
B: So I have a question about keeping up to date with rustc. We've been using this codegen-backend approach — although not as cleanly done as what you were showing; I'd be interested to talk more about how we can do what we're doing more cleanly. But one thing we see often is that things change in ways that break what we're doing. A recent example is the way that arguments are passed to closures.
B: It seems that there's an extra level of tupling being used somewhere. Is there a good way to keep track of, and be aware of, what kinds of changes are going to happen — both in terms of things that would break that kind of stuff, and also in terms of things where maybe the semantics have slightly changed? The worst-case scenario, in some ways, is that the code keeps working, it keeps linking, but the intended semantics of something has changed slightly.
A: Yeah, I think the answer is no, but I think we should work to make one. We don't carefully maintain changelogs or anything like that. I would like to see us get to a point where we have some notion of stability — not that we promise not to make changes, but we at least promise to keep a good record of what got changed, and maybe bump a semver number for you or something.
A: So you could be aware of it — but we're not there yet, and it's going to take some work to make that happen. Internally, we have dealt with this problem, often by bringing things in: Clippy, for example, lives in another repo, but we build it — we have this notion of tools and tool builders — so that at least when the build changes, we'll notice and refactor. But we don't have a good mechanism for integrating other projects into that, and that sounds like the problem here.