From YouTube: Rust analyzer guide
Description
A walk-through of the rust-analyzer implementation, which talks about implementing compiler-driven completion for IDEs.
rust-analyzer: https://github.com/rust-analyzer/rust-analyzer
transcript of the talk: https://github.com/rust-analyzer/rust-analyzer/pull/578
Hello! I am Aleksey Kladov. I'd like to talk about rust-analyzer, the experimental project to build a Rust compiler which is a good fit for IDEs. That means that rust-analyzer is fully incremental, on-demand, and is capable of providing at least some completion results as you type. So, without further ado, let's take a look at the code: cd into the rust-analyzer directory and open the code.
Obviously, to read the source code of rust-analyzer we will be using rust-analyzer itself. There is also a sort of written transcript of this talk, and if you'd rather read text than listen to me, take a look at the pull request linked above, which contains a markdown document. Also keep in mind that today is January 2019, the year of Blade Runner. If you are watching this video in the future, things might have changed.
Okay, so, let's start. The typical command-line compiler is a batch program; rust-analyzer is instead a stateful compiler analysis which you can apply changes to, and which you can query about certain aspects of Rust code, like, for example, "what is the definition of this symbol over there?". This highest-level API is represented by two types: AnalysisHost and Analysis.
So AnalysisHost is a really, really tiny API which you apply changes to. You start by creating an empty AnalysisHost using the Default impl, and then you incrementally populate it with source information using the apply_change method. A change here is basically "the contents of this file is now this string". The third method of AnalysisHost is analysis, which gets you an Analysis struct, and Analysis has a ton of methods to actually query various useful information about Rust code; for example, it has goto_definition.
One interesting thing about the Analysis API is that it is based on source files and spans. For example, the input to goto_definition is a FilePosition, which is a file and an offset in this file, and the output of goto_definition is a so-called NavigationTarget, which again is a file, a span inside the file, and things like the icon and the name, but it does not really tell you what kind of Rust symbol it is.
Okay, so, as you can see, this Analysis has a lot of methods. If we select it... okay, Visual Studio Code does not show me the number of lines, but it's a fairly big API. Okay.
So why do we need two types, AnalysisHost and Analysis, and what exactly is a change? Let's first talk about the separation into two types. As you noticed, a lot of methods on Analysis return Cancelable results. The source code can change at any moment, and this observation is what explains the cancellable API.
The idea is that, when you want to do some kind of analysis, you take your AnalysisHost, you call the analysis method, and you send this Analysis to a separate thread and do the background computation there. At the same time, on the main thread, you are waiting for changes; when changes come, you just apply the changes to the AnalysisHost. An important invariant here is that there is only one AnalysisHost: you can't clone it, and applying changes requires unique access to the AnalysisHost. But there may be several outstanding Analysis instances on different threads, doing different and unrelated things, like, for example, syntax highlighting and reference search, or whatever.
So, when you've got these outstanding Analysis instances and you get an incoming change, what the AnalysisHost does is mark each Analysis as canceled. That makes all the currently executing queries short-circuit and quickly return Err(Canceled). Yeah, we should probably take a look at the definition of this Canceled. Cancelable is basically a Result where the error type is Canceled, and Canceled is a zero-sized type, so basically just a marker that the result should be canceled.
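As a rough sketch of the idea (the names match what the talk describes, but the checking logic here is an illustrative assumption, not the real salsa-based mechanism), a cancellation-aware computation just polls a shared flag and short-circuits:

```rust
// Cancelable<T> is a Result whose error is a zero-sized marker type.
use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Debug, Clone, PartialEq)]
pub struct Canceled;

pub type Cancelable<T> = Result<T, Canceled>;

// A long-running query checks the shared flag between steps and
// short-circuits with Err(Canceled) once the host applies a change.
pub fn checked_sum(xs: &[i64], canceled: &AtomicBool) -> Cancelable<i64> {
    let mut total = 0;
    for &x in xs {
        if canceled.load(Ordering::SeqCst) {
            return Err(Canceled); // drop the partial work quickly
        }
        total += x;
    }
    Ok(total)
}
```

The zero-sized error makes cancellation essentially free to propagate with `?` through deep query stacks.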
Yeah, so when we call apply_change, we cancel all the pending analyses; all the methods return Canceled, the background threads die, and they also call Drop on the corresponding Analysis structs. As soon as all Analysis instances are dead, the AnalysisHost proceeds with actually applying the change. So, basically, when you apply a change, you kill all background processes, you apply the change in place, and you start new analyses if you'd like to. Okay, I think... yeah.
So you can change source files by adding a new file, changing an existing file, or removing an existing file. You can add a bunch of new files in one go using the add_library method; it is not essential, it's basically an optimization for crates.io crates, which are expected to be immutable. And you also can set a crate graph; we'll talk about how files are represented in a moment.
Currently, you can think of a file as something with an integer FileId, a text, and a path to the file, and that's it.
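A minimal sketch of this shape (the struct and method names here are assumptions for illustration, not the exact rust-analyzer API): files are integer ids, and a change is a batch of "this file now has these contents" updates.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct FileId(pub u32);

#[derive(Default)]
pub struct AnalysisChange {
    // None means the file was removed.
    pub files_changed: Vec<(FileId, Option<String>)>,
}

#[derive(Default)]
pub struct AnalysisHost {
    files: HashMap<FileId, String>,
}

impl AnalysisHost {
    // apply a batch of file updates in place
    pub fn apply_change(&mut self, change: AnalysisChange) {
        for (file_id, text) in change.files_changed {
            match text {
                Some(text) => { self.files.insert(file_id, text); }
                None => { self.files.remove(&file_id); }
            }
        }
    }
    pub fn file_text(&self, file_id: FileId) -> Option<&str> {
        self.files.get(&file_id).map(|s| s.as_str())
    }
}
```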
The more interesting bit is the crate graph. Rust code of course consists of source files, but a source file is not part of the model of Rust. The compilation unit of Rust is a crate, and a crate is really more than a single source file because, for example, a crate is determined by the active CFG flags, and if you point rustc to the same file on Windows and on Linux, you actually get different crates, because they will have different CFG flags.
The same file can also appear in the graph several times: for example, if you have the same crate with different CFG flags twice in your compilation graph, or when you include the same file as a module in two different crates. And yeah, this is basically it: rust-analyzer internally does not do any IO at all, the inputs are explicitly passed in as an AnalysisChange, and the inputs are basically the source files and the crate graph. Running ahead a little bit, we will get the crate graph from cargo, but it is deliberately not exactly the cargo model.
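The crate-graph shape described above can be sketched like this (a hand-rolled illustration; the real CrateGraph lives in rust-analyzer's input layer and these exact fields and methods are assumptions): crates are small ids with a root file, and dependency edges carry the name under which the dependency is visible.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct CrateId(pub u32);
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct FileId(pub u32);

#[derive(Default)]
pub struct CrateGraph {
    roots: HashMap<CrateId, FileId>,
    deps: HashMap<CrateId, Vec<(String, CrateId)>>,
    next: u32,
}

impl CrateGraph {
    pub fn add_crate_root(&mut self, root: FileId) -> CrateId {
        let id = CrateId(self.next);
        self.next += 1;
        self.roots.insert(id, root);
        id
    }
    pub fn add_dep(&mut self, from: CrateId, name: &str, to: CrateId) {
        self.deps.entry(from).or_default().push((name.to_string(), to));
    }
    pub fn dep_by_name(&self, from: CrateId, name: &str) -> Option<CrateId> {
        self.deps.get(&from)?.iter().find(|(n, _)| n == name).map(|(_, id)| *id)
    }
}
```

Note that nothing stops two distinct crates from sharing a root file, which models the "same file, different CFG flags" situation.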
Let's get back to this issue about files. You may have noticed this source root ID argument in the add_file method, and you may have noticed that the path is not like a std PathBuf, it's some kind of relative path, and this is very intentional. So what is the problem with files? The problem is that file systems are horrible. You can't check two paths for equality, for example, because it requires you to do a syscall.
In theory, moving a project from, say, ~/projects/foo to ~/projects/bar should not change anything inside the project. But if you actually give the analysis access to the absolute path of a file, you will be able to observe these two folders as being different, and so you'd need to explicitly write code not to depend on these details. In rust-analyzer I'd like to just not have access to this information at all, and this actually gives rise to the concept of source roots.
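The point can be made concrete with a tiny sketch (the type names are illustrative assumptions): files are addressed by a root id plus a relative, UTF-8 path, never by an absolute OS path, so moving the project directory is simply unobservable from inside the analysis.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct SourceRootId(pub u32);

#[derive(Default)]
pub struct SourceRoot {
    // relative UTF-8 path -> file text
    files: HashMap<String, String>,
}

impl SourceRoot {
    pub fn insert(&mut self, rel_path: &str, text: &str) {
        // paths inside a root are always relative to that root
        assert!(!rel_path.starts_with('/'));
        self.files.insert(rel_path.to_string(), text.to_string());
    }
    pub fn get(&self, rel_path: &str) -> Option<&str> {
        self.files.get(rel_path).map(|s| s.as_str())
    }
}
```

Because two relative paths inside the same root can be compared as plain strings, no syscalls are needed for path equality.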
Let's take a look at the virtual file system abstraction. VFS is a component, not part of the core of rust-analyzer, which interacts with the external file system and converts this horrible world of real file systems into a nice tree of UTF-8 paths. The data in the file system is organized into a set of source roots. A source root, or VFS root, is basically a tree: the contents of one directory on the real file system.
A module can also be declared with a #[path] attribute, and this path attribute can point basically anywhere, and this is a problem, because you don't know the set of input files of a crate until you actually compile the crate. In rust-analyzer we sort of explicitly do not solve this problem: we require the set of files to be known upfront, so that all the IO can be done outside of rust-analyzer, and rust-analyzer can just know the contents of all the files it could potentially need.
Of course, if you really need to refer to another file by absolute path, or you need to refer to a file not from this directory, you can — it isn't implemented here, but it could be implemented — explicitly add these extra files to a source root. But again, that doesn't break the fundamental assumption that, before the computation starts, rust-analyzer knows about all possible files which might be involved. Okay, so this concludes the aside about source roots. Let's actually see how we expose this API to the outside world.
We do it via the language server protocol, which lives in a separate crate. The core of the protocol is this event loop, which handles events, and there are a ton of events which we might handle here. First of all, we watch — well, not really watch, but we watch the file system for changes, so one possible event is that the user, say, switched branches and the contents of the files on disk changed. Another source of events is the editor: when the user types something in the editor, the LSP client in the editor sends us a notification saying, hey —
— this file now has new contents. And the next source of events here is background computation: when the client sends a request which requires us to extract some information, we schedule a background thread to compute the result, which allows us not to block the main loop and to process changes quickly. When the background thread finishes, it sends the results back to the main loop, and all communication happens on the main loop. Okay.
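The event-loop shape just described can be sketched with std channels (the real server multiplexes several sources with a select over channels; merging them into one enum on a single channel here is a simplifying assumption, and all names are illustrative):

```rust
use std::sync::mpsc;

// the three kinds of events the loop reacts to
pub enum Event {
    FsChange(String),       // a file on disk changed (e.g. branch switch)
    EditorEdit(String),     // the user typed in the editor
    TaskDone(&'static str), // a background computation finished
}

pub fn run_loop(rx: mpsc::Receiver<Event>) -> Vec<String> {
    let mut log = Vec::new();
    // the loop ends once every sender is dropped
    for event in rx {
        match event {
            Event::FsChange(path) => log.push(format!("fs:{}", path)),
            Event::EditorEdit(path) => log.push(format!("edit:{}", path)),
            Event::TaskDone(what) => log.push(format!("done:{}", what)),
        }
    }
    log
}
```

The important property is that state is only ever mutated on this one loop; worker threads just send messages back.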
So how do we actually get here in the first place, to the main_loop method, and what does this method do? Initially, it creates the crate graph input from the cargo metadata command. So let's just walk through it a little. This function is executed when you start the server, and it knows about the root directory where the editor has opened the project. The first thing we do is send this root to the workspace loader.
It's the workspace loader, so it's something which loads the project workspace in the background, and a project workspace consists of two parts. First of all, it's the cargo workspace, with all the crates from crates.io and the local crates, and it is also the sysroot, with crates like std, core, et cetera. So yeah, how do we create a project workspace? First of all we find Cargo.toml, then we run cargo metadata, then we read the sysroot, and we return the result.
This representation is really cargo-centric, really ecosystem-specific, while the data we have in the analysis crate graph is completely build-system independent. So once we've got the project workspace, we need to lower this cargo-specific information to the rust-analyzer format, and this happens inside the server world state. The world state bundles together the analysis host and the information which the analysis host does not have: specifically, the absolute paths to the source roots in the VFS, and the information about cargo workspaces.
It's this like bag-of-stuff struct, okay. And this lowering of the project workspace to something which rust-analyzer can work with happens in this new method. First of all, we need to create source roots: we create a source root for each package in the cargo workspace and for each crate in the sysroot, and, yeah, we schedule reading, recursively, the contents of the source code, all the Rust files there.
The results of this file-system scanning will be handled as ordinary modifications in the main loop. Okay, after we've dealt with the roots, which basically give us the FileId, relative path, and source text, we construct the crate graph, and currently this is somewhat approximate, because we don't really handle CFG flags, we don't really handle target-specific dependencies, and actually I believe we don't quite have enough information from cargo to reconstruct this precisely. But of course, cargo can always be extended to provide us with this information.
We add dependency edges from sysroot crates to cargo crates, and, yeah, between cargo crates themselves. And there is a terminology confusion here, because cargo doesn't actually talk about crates: it talks about packages and targets. Each cargo package has many targets, like a binary target, a library target, test targets, example targets, and each target is actually a crate in rustc. Well, not really a crate: you really get a crate when you combine a target with particular CFG flags.
Okay, so how do we handle requests? Well, first of all we receive a request from the client and call the on_request method, and this method schedules the request to be run on a thread pool. The server world structure contains the analysis host instance; the snapshot method basically gets us an Analysis from this analysis host, and we move this Analysis to the separate thread.
You remember that, while this is executing on a separate thread, on the main thread we can receive a file-change notification, which will call the apply_change method on the analysis host, which will cancel the background thread; we get a Canceled error here and return a ContentModified error to the editor. And you have actually already seen this: this is the debug output from rust-analyzer, and this is what happened when I just opened the project. The editor started to ask me about code lenses, about completions at the cursor, et cetera, et cetera.
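The "one writer, many read-only snapshots" idea behind the snapshot method can be sketched with Arc (this is a simplification I'm assuming for illustration; the real Analysis snapshot comes from salsa, and stale snapshots are cancelled rather than kept alive indefinitely):

```rust
use std::sync::Arc;

pub struct Host {
    state: Arc<Vec<String>>, // stand-in for the analysis state
}

pub struct Snapshot {
    state: Arc<Vec<String>>,
}

impl Host {
    pub fn new() -> Host {
        Host { state: Arc::new(Vec::new()) }
    }
    // a snapshot is a cheap Arc clone a background thread can read
    pub fn snapshot(&self) -> Snapshot {
        Snapshot { state: Arc::clone(&self.state) }
    }
    // copy-on-write: old snapshots keep seeing the old state
    pub fn apply_change(&mut self, line: &str) {
        let mut new_state = (*self.state).clone();
        new_state.push(line.to_string());
        self.state = Arc::new(new_state);
    }
}

impl Snapshot {
    pub fn len(&self) -> usize {
        self.state.len()
    }
}
```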
Then there's something like go-to-definition: we receive this request and call its handler on the background thread, the handle-goto-definition function. This function first converts the request from the language-server-protocol-specific representation to the rust-analyzer representation; this basically means mapping file URLs to FileIds. Then it calls the method on Analysis, converts the results back to language server protocol types, and responds with the appropriate response.
Okay, I think this concludes the basic walkthrough of the API of rust-analyzer. You have the stateful analysis host, and you have this elaborate translation layer, and you need to be very careful about not leaking details, like your build system, your absolute file paths, etcetera, etcetera, inside the analyzer itself, so that the analyzer can be a predictable, reliable, pure function. Now we're actually getting to the interesting bit: what is the implementation of rust-analyzer? How does it manage to solve everything quickly?
A
So
one
approach
would
be
to
basically
maintain
the
current
state
of
the
world
if
you
like,
cold
state
of
all
inputs
internal
as
an
add-on,
every
applied
change
schedule,
a
completion
which
compels
the
whole
crate,
possibly
incrementally
and
then
serves
requests
to
the
client,
and
this
is
actually
what
account
architecture
of
rust
language.
Sarah
RLS
looks
like
if
you
squint
enough.
A
Unfortunately,
social
protection
is
necessary,
slow,
even
if
you
incremental
II,
compare
the
whole
treasure
at
the
whole
crate
or
the
whole
set
of
crates
of
your
workspace.
You
are
doing
a
lot
of
useless
work
because
you
see
now
I
have
only
a
single
file
opened
in
my
editor
and
I,
really
don't
care
at
all
about
all
information
except
information
for
the
lies
between
239
and
283,
and
this
real
little
place
where
we
need
these
ability
to
only
query
a
specific
subset
of
the
e
confession.
The tool for this is salsa, a framework for on-demand, incremental computation. What this means is that if, for example, you have a query A which uses the results of query B, then, when you apply changes to the inputs, salsa will be smart enough to recompute B if it could have been affected by the changes to the inputs, but to avoid recomputing A if the result of recomputing B actually stays the same. If you don't know about salsa already, my explanation probably doesn't make this easier for you, so just stop the video and read some salsa documentation.
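The "early cutoff" behaviour just described can be demonstrated with a hand-rolled toy database (real salsa tracks revisions and dependencies automatically; this whole struct is an illustrative assumption). B is "word count of the input", A is a summary derived from B; an input edit that leaves B's value intact must not recompute A:

```rust
pub struct Db {
    input: String,
    cached_a: Option<(usize, String)>, // A's cache, keyed by B's value
    pub a_computations: usize,         // counter to observe recomputation
}

impl Db {
    pub fn new(input: &str) -> Db {
        Db { input: input.to_string(), cached_a: None, a_computations: 0 }
    }
    pub fn set_input(&mut self, input: &str) {
        self.input = input.to_string();
    }
    // query B: cheap, derived directly from the input
    fn query_b(&self) -> usize {
        self.input.split_whitespace().count()
    }
    // query A: derived from B; reused when B's value is unchanged
    pub fn query_a(&mut self) -> String {
        let b = self.query_b();
        if let Some((key, value)) = &self.cached_a {
            if *key == b {
                return value.clone(); // early cutoff: B unchanged
            }
        }
        self.a_computations += 1;
        let value = format!("{} words", b);
        self.cached_a = Some((b, value.clone()));
        value
    }
}
```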
It's called the inputs database, isn't it? Oh, so it's not called the input database, it's called FilesDatabase; that's why I hadn't found it. And this database describes the whole set of inputs to salsa and basically corresponds to that AnalysisChange data structure. As you see, the set of inputs is like really, really small: you have files, and each file has a text and a relative path inside a source root, and a source root has a set of files.
The local flag on roots is an optimization to know which roots correspond to files inside the current workspace and which roots correspond to crates from crates.io. For crates.io crates we do — well, not really in all places, but in some parts we do — a one-time, top-down processing which produces really compact data structures. Modifying such structures for a whole crate is costly, so we use more fine-grained data structures for local crates, but the compact ones for crates.io crates, which do not change.
So the main thing which we get out of this database is the code model API. Remember how I said that Analysis operates on spans and files and does not know about the Rust-specific stuff? So this is the lower-level API which I was talking about, which talks about Rust-specific concepts like crates.
From a module you can get a scope, and the scope defines the set of names defined in this module, so you can imagine how these power, for example, completion or go-to-definition, et cetera. And what the bulk of rust-analyzer does is provide the queries which populate this rich semantic model. So, what are we going to talk about next?
A
Yeah,
so
probably
a
good
thing
to
talk
about
now
is
that
specifics
of
rust,
where
a
single
source
file
may
correspond
to
several
semantic
panels,
and
this
is
something
explicitly
handled
in
the
rust
analyzer
in
this
source,
binders
infrastructure.
So
what
this
source
binder
infrastructure
does
is
that
it
takes
an
information
about
the
source
code.
— and map it to the semantic model. The majority of source code corresponds to only a single semantic model, so we use Option, which is an iterator of zero or one items. Okay, so let's just see how we actually populate this model, how we actually create modules, scopes, et cetera. This of course starts with the syntax — yeah, we have a SyntaxDatabase, great.
A great thing about Rust is that to parse source code you don't need to know anything semantically — and by parsing I mean literal parsing, I don't mean macro expansion. So that's why the method (it's called source_file, but probably should have been called parse) accepts a FileId, does not know in the context of which crate we are parsing this file, and returns a SourceFile; the SourceFile is basically a node in the abstract syntax tree. So I won't talk too much about the syntax tree here.
Let's look at assists. The assist context contains a source file and, basically, the range at which the intention was invoked. A great thing about assists is that they are activated in a specific context: for example, here I can "add derive", because I am inside a struct, but outside the struct I don't have this intention, which is great.
So, despite the fact that you have this uniform representation, you can also switch to a typed representation, like an if-expression, which has a condition and a then-branch and an else-branch. In Roslyn, in Swift, in IntelliJ, this is achieved by using basically inheritance: you have a basic syntax-node interface, and all the concrete syntax nodes, like if-expressions, class declarations, et cetera, inherit from this interface, and you just use casts. Rust doesn't have inheritance, so in rust-analyzer this is basically some newtype magic.
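A safe sketch of that newtype layer (the kinds and fields here are assumptions; the real trees come from the rowan library, and the reference-level cast additionally uses repr(transparent) plus unsafe, discussed below): every node is a homogeneous SyntaxNode with a kind, and a typed wrapper is produced by a checked cast.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SyntaxKind {
    IfExpr,
    FnDef,
}

#[derive(Clone, Debug)]
pub struct SyntaxNode {
    pub kind: SyntaxKind,
    pub text: String,
}

// newtype wrapper: adds static knowledge of the node's kind
#[derive(Clone, Debug)]
pub struct IfExpr(SyntaxNode);

impl IfExpr {
    // the cast only succeeds if the homogeneous node has the right kind
    pub fn cast(node: SyntaxNode) -> Option<IfExpr> {
        if node.kind == SyntaxKind::IfExpr { Some(IfExpr(node)) } else { None }
    }
    pub fn syntax(&self) -> &SyntaxNode {
        &self.0
    }
}
```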
Take replace-if-let-with-match: we wanted to replace an if-let with a match, so we get an if-expression, and we get a condition, we get a pattern, and we get an expression, and you'll notice that all of these methods return Options. While rustc requires that each if-expression has a condition, a pattern, and a then-branch, the syntax tree used in rust-analyzer allows you to omit these things. So let's see this.
And here's the syntax tree for this file. So, yep, here I've typed "fn foo", so it's definitely not a valid function definition, because it misses parameters, it misses the return type, et cetera. Nevertheless, on the right we see that in the syntax tree we actually have this function definition, and when we make it complete, the syntax errors disappear.
And the trait Bar is parsed completely, without errors, despite the errors present in the previous items. So, yeah, that's another aspect of these syntax trees: they allow you to produce partial syntax trees, and they don't care a lot if something essential is missing. Nevertheless, if we are indeed in an if-expression, and if this is a fairly complete if-expression, which has a condition, a pattern, a then-block and an else-block, we actually suggest an edit which takes the if-let and transforms it into a match expression.
Oh yeah, if and if-let are both represented by the same IfExpr data type. Okay, just to give a quick peek at the implementation of IfExpr, and those mysterious words about transmutes and casting: an IfExpr internally, like each AST node internally, holds a SyntaxNode, and this SyntaxNode is this homogeneous node which has parent, first child, last child, next sibling — a very simple interface which all nodes share. An IfExpr is sort of a newtype around a SyntaxNode, and what this newtype adds —
— is the static knowledge that the syntax node indeed is an if-expression, and so casting works. And what's interesting, the cast works even between references: you can cast a reference to a SyntaxNode to a reference to an IfExpr, and that's because we use this transparent repr and a tiny bit of unsafe code, which should be sound — but if you actually find a problem with it, I will be grateful, because I am not that comfortable writing unsafe code. So, going back to the semantic model. What we've actually talked about was —
— how we parse files. Well, actually, in theory, what I am trying to do is to make these syntax trees, which are a relatively heavy data structure (because they remember all whitespace and comments, etcetera, and are highly pointer-based), a really temporary thing in the compiler. Ideally, only the source file opened in the editor should have a syntax tree present in memory; for all the other files —
— we should be able to drop it. Okay, and how do we get from syntax to semantics? It starts with building a module tree, because really every item in Rust exists within the context of a particular module, and you really start with building the module tree. There is a salsa query which takes a source root and returns a ModuleTree, so, easy.
And this Rust file is not part of any crate. However, I do want to get completion and assists for this file, and the way I do it is to actually create a sort of fake crate for these floating files. Ideally we should also give an error to the user: hey, this file is not part of any crate, would you like to include it in Cargo.toml as an explicit library or test or whatever? Okay.
So each module remembers the mod declaration it originated from — basically the `mod foo;` thing — and the mod declaration remembers the module where the declaration is situated, and the module this declaration points to. It is a vector to account for resolve errors, when you have both foo.rs and foo/mod.rs, and whatever we can have in the 2018 edition; it's not actually used at the moment, if I remember correctly. So how do we build a module tree? A naive way to approach —
A
This
would
be
to
parse
each
source
file,
collect
child
modules
and
assemble
the
tree.
The
problem
with
this
approach
is
in
criminality,
so
such
trees
are
really
identity,
based,
so
if
user
types
a
single
character
or
in
the
buffer
like
very
tightly
spaced,
the
hello
syntax
tree
for
the
file
changed
because
well
know
the
syntax
tree.
Remember
parent
links
and
s
like
the
file
as
a
whole
changed.
A
Each
constituent
constituent
now
must
change
as
well,
and
that
means
that
if
we
read
syntax
directly,
whether
it
in
the
module
tree
will
have
to
recompute
module
3
upon
every
modification-
and
that
seems
really
unfortunate
because
we
actually
would
like
to
do
as
we
would
like
to
avoid
or
computing
the
module
tree
as
long
as
user
type.
Something
benign
like
a
fan
do
whatever.
We
only
need
to
recompute
module
treatment,
user
type,
small
Safi
know
when
they
move
files
around
or
rename
or
social
system,
and
we
are
shared
this
by.
Okay, it's kind of difficult to explain from the start, so let's start from the middle. The main idea is that we don't actually inspect the raw source code: we inspect the result of submodules, and submodules is a query which takes a source file and returns a vector of Submodules, and a Submodule is plain old data, which is basically a string name and a declaration... okay, it's hard to tell things from the middle — and a pointer into the source tree, but okay, let's pretend that it does not have the source pointer, only the name and the declaration.
Okay, so a Submodule is this simple. Then the submodules query is a really nice query, because, although when we change the source code we have to re-execute the submodules query (because it depends on the source code directly), the result of the submodules query will not be changed — unless you type `mod` or something. So again, the input to the query changes, but the output of the submodules query stays the same.
A
That
means
that
all
the
queries
which
depend
on
some
module
query,
for
example,
module
3
query-
do
not
have
to
change
when,
unless
the
actual
set
of
sub
modules
changes,
in
other
words,
user
typed,
something
in
the
editor
like
a
space
sauce,
it
figured
out
that
hey
source
code,
changes
and
module
3
indirectly
depends
on
the
source
code.
So
we
probably
need
to
recompute
the
module
tree,
but
they
direct
dependencies
of
the
module.
3,
query,
sub
modules,
query
and
Sassa
figure
out
that
all
except
also
modules
queries
for
all
files,
except
this
one
already
fresh.
A
So
it
only
needs
to
compute
this
single
sub-module
query
for
the
current
file.
It
computes
it
and
it
gets
the
same
result
and
salsa
realizes
well.
So
all
the
sub
modules
are
the
same,
so
the
module
tree
must
be
the
same
and
it
I
was
a
computing
multi,
which
is
great,
so
yeah
one
bit
often
for
about
this
or
cycling.
We
actually
like
to
have
a
link
back
to
the
source
code.
Where
are
we
several
originated
too?
A
And
we
can
use
a
pointer
to
this
in
technology
yeah,
but
this
would
be
bad
because
a
syntax
tree
is
change
after
every
modification,
and
this
means
that
this
field,
the
result
of
some
model
query,
will
be
changed
at
every
modification,
which
is
better
now.
We
can
store
like,
for
example,
a
pair
of
offsets
to
the
source
of
the
module,
but
this
also
will
be
bad
because
text,
because
forces
change
when
you
type
something
before
the
offset.
The way we get a stable identifier is that we enumerate all the items in the file and store them in an arena, such that each source item gets an index in the arena — and by source item I mean real Rust items, like fn, struct, etcetera; expressions are not processed here. And what we get as a result is that each item gets a relatively stable ID. The ID changes only when you add new functions, new top-level items; when you type something inside a function body, the ID of the function stays the same.
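The arena idea can be sketched over raw text (the "item collector" below is a deliberately crude assumption; the real one walks the syntax tree): items are collected in source order, an item's id is its index, and keeping only the item's name makes body edits irrelevant.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct SourceItemId(pub u32);

// toy item collector: any line starting with `fn ` or `struct ` is a
// top-level item; only its name is kept, so body edits don't matter
pub fn item_arena(text: &str) -> Vec<String> {
    text.lines()
        .filter_map(|line| {
            let line = line.trim();
            let rest = line.strip_prefix("fn ").or_else(|| line.strip_prefix("struct "))?;
            let name: String = rest
                .chars()
                .take_while(|c| c.is_alphanumeric() || *c == '_')
                .collect();
            Some(name)
        })
        .collect()
}

// an item's id is its index in the arena
pub fn item_id(arena: &[String], name: &str) -> Option<SourceItemId> {
    arena.iter().position(|it| it.as_str() == name).map(|i| SourceItemId(i as u32))
}
```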
The way we identify a module is that we collect all the modules into a single vector, in some particular order, and we use the index in this vector as the identity of the module. And this is probably fine, because the set of modules changes so rarely; when it changes, the IDs of the modules change, and all the other sorts of stuff which depend on these IDs will have to be recomputed. But usually they stay the same, and so the dependent queries stay fresh. Doing the same thing —
A
For
example,
all
the
functions
from
the
crate
into
a
single
array
will
be
less
fortunate
because,
for
example,
I
did
a
new
function,
suppose
changing
a
new
function,
adding
a
new
module
is
a
relatively
common
operation.
So
the
ideas
will
be
invalidated
more
frequently
and
what's
wrong,
you
actually
will
have
to
crawl
across
all
over
all
of
the
functions
but
Italy.
A
We
would
like
to
avoid
even
looking
at
the
functions
inside
some
of
the
implementation,
which
we
not
used
to
type-check
a
particular
function
which
is
opened
in
the
editor
so
now
in
some
other
way
to
identify
the
function
and
we
may
use
location
as
the
identity
of
the
function.
So
this
is,
for
example,
location
structure
which
describes
a
particular
item
inside
the
module.
A
So
it
has
soft
suit,
ID
and
module
ID
fields
which
uniquely
identify
the
module-
and
it
has
this
source
item
ID,
which
uniquely
identifies
an
item
inside
the
module,
and
we
understand
that
this
Deathlok
of
a
particular
function
or
strike
or,
for
example,
yeah.
So
a
diff
lock
of
this
diff,
lock
struct
itself
will
not
be
changed
unless
we
add
a
new
function
in
this
same
file.
So
it
is
stable.
A
But
it
is
not
really
great
as
an
ID
because
eagerly
we
would
like
to
have
our
IDs
to
be
like
just
you
32
integers
and
the
flock
has
all
sorts
of
stuff.
And
you
can
imagine
that
if
we
would
like
to
see
a
location
of
an
item
inside
an
input
inside
something
else,
we
will
need
to
make
this
the
flock
recursive
and
store
some
kind
of
objector
paths
in
them
and
yeah.
A
This
does
not
block
sorry,
and
this
doesn't
look
like
compact,
admx,
storing
it
in
the
hash
map
search
it'll
probably
be
slow,
so
it
would
like
to
somehow
assign
just
American
deep
to
this
location
and
if
these,
the
location
in
certain
pattern
pattern
so
allocation
Turner
does
it
keeps
a
directional
matter
the
direction
of
a
tenant.
Only
this
is
important
Martin
between
allocations
and
numeric
IDs.
Name resolution is used here in a pretty narrow sense: it only resolves use paths. So it builds a thing called the ItemMap which, for each module, tells which items are visible in this module, and an item is visible inside a module if it is declared inside the module or if it is imported into the module. And the idea for building the ItemMap is —
A
Pretty
much
the
same
as
for
building
module.
Two,
we
can't
add
the
pant
on
the
source
code
directly,
because
this
will
invalidate
module
item
map
after
every
change.
So
a
we
first
lower
module
into
our
position,
independent
representation
and
then
using
this
position,
independent
to
presentation,
which
is
basically
like
sub
modules
for
middle
three.
We
run
this
export
iterative
algorithm,
resolving
everything
and.
A
What
we
get
here
is
basically
this
test
that
type
in
inside
the
function
does
not
invalidate
item
app.
So
how
much
test
what
we
have?
A
bunch
of
files-
yeah,
Linda
Torres
for
mother
s
for
bar
s,
and
we
compute
item
map
for
this
crate.
And
then
we
change
libris
file
and
we
change
the
body
of
the
food
item
for
function
from
like
one
plus
one
to
ninety
two,
and
this
actually
does
not
execute
this
item
up
query
and
the
current,
because
we
only
change
the
body
of
the
item.
This position-independent item contains, like, its position-independent pointer into the source code, and it also contains imports. And the imports bit is interesting, for an obscure reason: when we run completion, if we complete, say, a Foo struct, we need to know if this struct was imported, and we need to know the particular segment of the use path which imported the struct. The naive way to do this would be to just, like, add —
A
A
A
So
this
again
change
is
verification,
and
this
means
that
the
results
of
this
lower
module
query
also
change
after
every
modification,
but
actually
that's
a
trick
because
in
lowered
module
we
have
import
ID
and
we
can
use
import
ID
to
look
up
the
syntax
in
this
import
source
map.
I
wanna,
be
you
we
want
to
get
rid
of
this
aspect
of
becoming
stale
after
edit
and
this
what
lower
module
module
does
with
a
really
simple
query:
it
translate
previous
query,
which
returns
a
pair
and
projects
the
first
component
of
the
pair
out
of
it.
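The projection trick might look something like this hand-rolled sketch. The real thing goes through salsa queries; the toy parser and the names here are made up for illustration:

```rust
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct LoweredModule {
    pub items: Vec<String>,
}

#[derive(Clone, Debug)]
pub struct SourceMap {
    pub offsets: Vec<usize>,
}

/// The expensive query: its result changes on every edit, because the
/// source map half of the pair holds raw byte offsets.
pub fn lower_module_with_source_map(src: &str) -> (LoweredModule, SourceMap) {
    let mut items = Vec::new();
    let mut offsets = Vec::new();
    let mut offset = 0;
    for line in src.lines() {
        if let Some(rest) = line.trim_start().strip_prefix("fn ") {
            let name: String = rest
                .chars()
                .take_while(|c| c.is_alphanumeric() || *c == '_')
                .collect();
            items.push(name);
            offsets.push(offset);
        }
        offset += line.len() + 1; // +1 for the '\n'
    }
    (LoweredModule { items }, SourceMap { offsets })
}

/// The projection: name resolution depends only on this part. Because it
/// compares equal across position-only edits, a salsa-style engine can
/// stop invalidation right here.
pub fn lower_module(src: &str) -> LoweredModule {
    lower_module_with_source_map(src).0
}
```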
This is the pattern we actually use here. So, again, we want two things. First, we want to map from the semantic model back to the syntax, but we also want to keep the semantic model independent of exact positions. We achieve this by keeping the mapping from the semantic model to the syntax as a separate source map, and by just keeping this source map out of the data the rest of the analysis depends on.
Let's take a quick look at the resolver itself. It's this fixed-point iterative algorithm which just iterates over the set of imports until all imports are resolved, or rather until it can't resolve any more imports. I actually don't know how this works in rustc. It's actually a pretty important problem to solve for IDEs, to implement this name resolution together with macro expansion correctly, and for that we probably need a specification first. Okay, so what I'd like to show here as well is the place where we assign DefIds.
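A toy version of such a fixed-point resolution loop, under the simplifying assumption that an import is just an alias for a single name (all names here are hypothetical, nothing like the real resolver):

```rust
use std::collections::HashMap;

/// An unresolved import: define `name` as an alias of `target`.
#[derive(Clone)]
pub struct Import {
    pub name: String,
    pub target: String,
}

/// Resolve imports iteratively. `defs` seeds the scope with real items.
/// Returns the final scope; unresolvable imports are simply left out.
pub fn resolve(defs: &[&str], mut pending: Vec<Import>) -> HashMap<String, String> {
    let mut scope: HashMap<String, String> =
        defs.iter().map(|d| (d.to_string(), d.to_string())).collect();
    loop {
        let mut progress = false;
        pending.retain(|imp| {
            // An import resolves once its target is visible in the scope.
            if let Some(resolved) = scope.get(&imp.target).cloned() {
                scope.insert(imp.name.clone(), resolved);
                progress = true;
                false // drop it from the pending list
            } else {
                true // keep it for the next pass
            }
        });
        if !progress {
            return scope; // fixed point: nothing resolved this pass
        }
    }
}
```

Imports that depend on other imports resolve on a later pass, which is why the loop has to run to a fixed point rather than once.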
If you look at the ItemMap struct, it stores, for each module, the set of items visible in it, where each item is basically a DefId plus some info about it, like which namespaces it occupies (types, values, macros). And this is the place where we get this DefId: we get an item from the lowered module, we know where it came from, and we turn this location into a DefId.
On top of all of this sits the type inference stuff. So infer runs type inference for a single function, and what it creates is a map from expressions to their types. And again, what is an expression here? If we used the syntax node of an expression, we would get a query which would be invalidated on every edit, etcetera, etcetera. So we actually run an extra step before type inference.
We lower the raw syntax of the function, which contains offsets and is not stable, into a compact representation: an arena-based AST. So here Expr is not a syntax tree, it's just your usual recursive enum, except that it uses IDs and not Boxes. That avoids the infinite-size problem, but the primary point is not the size problem: it is to give an identity to each expression, and this ExprId is what we key the inference results by.
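The ID-based tree could be sketched like this; the real HIR Expr is much richer, so the enum and arena below are a minimal illustration only:

```rust
/// A stable identity for an expression within one function body.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub struct ExprId(pub u32);

#[derive(Debug, PartialEq)]
pub enum Expr {
    Literal(i64),
    /// `lhs + rhs`, with children referenced by ID, not by Box.
    Add(ExprId, ExprId),
}

/// The arena that owns every expression of a body.
#[derive(Default)]
pub struct Body {
    exprs: Vec<Expr>,
}

impl Body {
    pub fn alloc(&mut self, expr: Expr) -> ExprId {
        self.exprs.push(expr);
        ExprId(self.exprs.len() as u32 - 1)
    }
    pub fn get(&self, id: ExprId) -> &Expr {
        &self.exprs[id.0 as usize]
    }
    /// Evaluate, to show that IDs work just like owned children would.
    pub fn eval(&self, id: ExprId) -> i64 {
        match self.get(id) {
            Expr::Literal(n) => *n,
            Expr::Add(a, b) => self.eval(*a) + self.eval(*b),
        }
    }
}
```

A map from `ExprId` to `Ty` can now outlive any amount of whitespace edits, because the IDs never mention offsets.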
And again, this uses the source map pattern. When we lower a function body, first of all we get this Body struct, which is position-independent and contains IDs; it is a model of the function. So if you move the function around, or even move it to a completely unrelated file, the body of the function will stay the same. But together with this body we store a syntax mapping, which maps syntax nodes to expressions and expressions back to syntax nodes, and we use these heavily in the IDE.
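The bidirectional mapping might be sketched as follows, using plain byte offsets to stand in for syntax node pointers (the type and method names are invented for this example):

```rust
use std::collections::HashMap;

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
pub struct ExprId(pub u32);

/// Two maps, one per direction, kept outside the position-independent Body.
#[derive(Default)]
pub struct BodySourceMap {
    expr_of: HashMap<usize, ExprId>,   // syntax offset -> expression
    syntax_of: HashMap<ExprId, usize>, // expression -> syntax offset
}

impl BodySourceMap {
    pub fn insert(&mut self, offset: usize, id: ExprId) {
        self.expr_of.insert(offset, id);
        self.syntax_of.insert(id, offset);
    }
    /// IDE -> semantics: "what expression is under the cursor?"
    pub fn expr_at(&self, offset: usize) -> Option<ExprId> {
        self.expr_of.get(&offset).copied()
    }
    /// Semantics -> IDE: "where should this diagnostic be rendered?"
    pub fn syntax_at(&self, id: ExprId) -> Option<usize> {
        self.syntax_of.get(&id).copied()
    }
}
```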
So what should we discuss next? Well, probably we have discussed everything, so let's just trace a single feature from the protocol, to rust-analyzer, to the salsa database, and back to rust-analyzer and back to the protocol, and let's pick completion for this excursion. So let's see what happens when you type something in the editor and press ctrl+space to request completion. This starts in the main loop, where we get a completion request. That gets routed to the handle_completion function. handle_completion converts the LSP request into a FilePosition, which is basically a FileId and an offset. This code is a bit of a special case, because we also want to show completion automatically.
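The line/character to offset conversion can be sketched with a small line index; this ignores the UTF-16 code-unit handling a real LSP server has to do:

```rust
/// Precomputed byte offsets of every line start in a file.
pub struct LineIndex {
    line_starts: Vec<usize>,
}

impl LineIndex {
    pub fn new(text: &str) -> LineIndex {
        let mut line_starts = vec![0];
        for (i, b) in text.bytes().enumerate() {
            if b == b'\n' {
                line_starts.push(i + 1); // next line starts after the '\n'
            }
        }
        LineIndex { line_starts }
    }
    /// (zero-based line, column) -> byte offset into the file.
    pub fn offset(&self, line: usize, col: usize) -> usize {
        self.line_starts[line] + col
    }
}
```

Pairing the resulting offset with a `FileId` gives exactly the `FilePosition` shape described above.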
We get the FilePosition and we ask the analysis for completions. Canceled here is an interesting aside: inside salsa, cancellation is actually implemented via unwinding, so we literally panic when somebody types something into the editor while a completion is running, and that tears down all the queries; here we catch this panic and turn it into a Canceled error, which is an ordinary Result. Okay, so what is completions? This is finally the guts of code completion, where we start to do some useful work.
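The unwinding trick can be demonstrated with std::panic directly. This is a standalone sketch, not how salsa actually wires it up internally: a query panics with a dedicated marker value when an edit arrives, and the request handler catches exactly that panic:

```rust
use std::panic::{self, AssertUnwindSafe};

#[derive(Debug, PartialEq)]
pub struct Canceled;

/// A "query" that checks a cancellation flag and unwinds if it is set.
pub fn long_query(canceled: bool) -> Vec<String> {
    if canceled {
        // Unwinding tears down the whole query stack at once.
        panic::panic_any(Canceled);
    }
    vec!["completion_item".to_string()]
}

/// The request handler: catch the marker panic, rethrow anything else.
pub fn handle_request(canceled: bool) -> Result<Vec<String>, Canceled> {
    match panic::catch_unwind(AssertUnwindSafe(|| long_query(canceled))) {
        Ok(items) => Ok(items),
        Err(payload) => match payload.downcast::<Canceled>() {
            Ok(_) => Err(Canceled),
            // A genuine bug should keep unwinding, not be swallowed.
            Err(other) => panic::resume_unwind(other),
        },
    }
}
```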
It receives the database, it receives the position in the source file, and it needs to return a set of completions. The first thing we do here is compute the CompletionContext. We need to understand: is the cursor after a dot, or is the cursor after a double colon (::), and so forth. Let's see how we do it. CompletionContext is a precomputed data structure; it contains all sorts of interesting bits of information. The basic stuff is a reference to the database and the offset where the completion was invoked.
It has the leaf syntax node where the completion was invoked, a token like an identifier, whitespace or whatnot. It also contains references to the syntax nodes around the insertion point. So, for example, if you invoked completion inside a function definition, you will have the function's syntax; if you invoke completion inside a use item, you will get the use item's syntax. And it also holds a semantic model of the function. So remember, there is no bijection between syntax and the semantic model, and completion must somehow get at the semantic model; we will see shortly how that happens. So yeah.
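The syntactic side of such a context might be sketched by scanning the text before the cursor (the real implementation inspects the parse tree instead; the enum and function here are invented):

```rust
#[derive(Debug, PartialEq)]
pub enum CompletionKind {
    DotAccess,   // cursor right after `.`
    PathSegment, // cursor right after `::`
    Other,
}

pub fn classify(text: &str, offset: usize) -> CompletionKind {
    // Skip back over any partially typed identifier first, so that
    // `foo.ba<cursor>` still counts as dot completion.
    let before = text[..offset]
        .trim_end_matches(|c: char| c.is_alphanumeric() || c == '_');
    if before.ends_with("::") {
        CompletionKind::PathSegment
    } else if before.ends_with('.') {
        CompletionKind::DotAccess
    } else {
        CompletionKind::Other
    }
}
```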
To find the module, we traverse the whole module tree: we traverse all the modules we know about, and we find the first module which originated from this source file (there may be many modules which are generated from a single source file). Okay, so yeah, and this is like one of the most interesting bits, because this is where we get from a purely source-code-based representation to the representation where we know the whole context: what the crate is, which cfg flags are set, etcetera.
A neat trick we do here, which I borrowed from IntelliJ, to better parse the syntax at the current position: we first insert a fake identifier at the cursor, so that we get a more reasonable parse tree to work with. To give an example: if you typed something like this and the cursor was here, we know that this is an identifier and that we are supposed to complete an identifier.
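The insertion itself is just string splicing, sketched below; the placeholder name is chosen here purely for illustration:

```rust
/// A recognizable dummy identifier spliced in at the cursor so the parser
/// sees a complete identifier instead of a hole. (The exact spelling is an
/// assumption of this sketch.)
const PLACEHOLDER: &str = "intellijRulezz";

pub fn with_placeholder(text: &str, offset: usize) -> String {
    let mut patched = String::with_capacity(text.len() + PLACEHOLDER.len());
    patched.push_str(&text[..offset]);
    patched.push_str(PLACEHOLDER);
    patched.push_str(&text[offset..]);
    patched
}
```

The patched copy is only used to build the parse tree for the completion context; the edit is never applied to the real file.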
If we are inside a function, we try to find a semantic model for this function, and this is much easier than finding the semantic model for a module, because we already know the module, and the module completely determines the semantic context: the crate, the cfg flags, and so on. So finding the function is basically just iterating through the items declared in the module and picking the one with the corresponding source tree. So let's trace through the interesting bit here.
Completion, yeah. So we have collected the context, which contains both syntactic and semantic information, and now we run a series of completion routines which fill the context with possible completion variants. Let's look at complete_dot, which completes stuff after a dot. First of all, it tries to extract the function's semantic model and the receiver's syntax.
If there is no receiver syntax, we probably can't complete after the dot, so we just return. Then, given the function's semantic model, we run type inference for this function, and we get back a result which maps expression IDs to inference results. But we don't have an expression ID yet, because so far we only have the receiver's syntax.
Okay, type inference is not yet perfect, okay. So the receiver is syntax, and we want its expression ID. So we ask for the source map for the body of the function and look up the expression ID using this syntax. Now we can get the actual type, because we know the expression ID, and we can complete methods and fields. And one more bit from the context: if we are completing in a method-call position, we shouldn't suggest fields, because we know that this is a method call; if we are completing just a plain dotted access, we should suggest both fields and methods, and yeah.
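That decision could be sketched like this (types and names invented for the example):

```rust
/// Simplified stand-in for the inferred receiver type.
pub struct TypeInfo {
    pub fields: Vec<&'static str>,
    pub methods: Vec<&'static str>,
}

/// `is_call` would be derived from the syntax around the cursor:
/// is the dotted name followed by `(...)`?
pub fn dot_completions(ty: &TypeInfo, is_call: bool) -> Vec<&'static str> {
    let mut out = Vec::new();
    if !is_call {
        out.extend(&ty.fields); // fields make no sense as a call target
    }
    out.extend(&ty.methods);
    out
}
```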
Here we basically iterate the fields on the type, we iterate the available impls, and we suggest completion variants. I think this is almost everything I wanted to talk about. Probably, yeah, a single bit which is missing is macro support. I skipped it just because there is not too much macro support in rust-analyzer at the moment; basically, we have a set of hard-coded macros. But it's interesting how it feeds into this IDE infrastructure, so I will show it now.
This actually gets expanded into basically a separate file. And we can't say that the results of macro expansion live in the same source file, because the expansion may depend on the current module (cfg, for example, may be defined differently depending on your configuration). So we need to somehow handle this, and by "handle this" I mean that we use pointers to items everywhere, and we can store a pointer to an item by using an ID which identifies the semantic context: the FileId, which identifies the file, and the ID of an item inside this file.
But this does not work for macro-generated files, because there we don't have a FileId. So what we do instead is use the so-called HirFileId throughout the core of rust-analyzer, and this HirFileId is pretty natural: it is either the ID of an original source file, or it is the ID of a file generated by a macro. And to assign an ID to the file generated by a macro, we just assign the ID to the particular macro call expression.
A macro call is identified by its location in a file, which is again stable across reparses. So yeah, again: a HirFileId is either a source file or a macro file, where a macro file is determined by the macro invocation, and the macro invocation is determined by the file and the position in this file, and you can already see how this is recursive. So, to really see the recursion: we have this HirFileId, which holds a macro call ID, which is interned from a macro call location, which stores a source item ID, which stores a file ID, which is again the HirFileId from which we started. And the point here is that this actually works, because the recursion is not infinite: each chain of HirFileIds always ends in an actual FileId. And we can actually see this recursion in the code. To get the original file of a HirFileId, we check: if it is just an original file (an original file meaning one written by the user), we return its ID; if it was generated from a macro, we take the file where the macro call is situated, and we recursively return its original file.
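The recursive structure can be condensed into a small model; all names and the slice-based "interner" below are simplifications, not the real rust-analyzer types:

```rust
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub struct FileId(pub u32);

#[derive(Copy, Clone)]
pub enum HirFileId {
    /// A file the user actually wrote.
    File(FileId),
    /// A file produced by expanding the macro call at `calls[idx]`.
    Macro(usize),
}

/// Where a macro call syntactically lives: a file (of either kind!) plus
/// an offset inside it. This is what makes the type recursive.
pub struct MacroCallLoc {
    pub containing_file: HirFileId,
    pub offset: usize,
}

/// Walk the chain of macro calls down to the real file on disk. The
/// recursion terminates because every chain bottoms out in `File`.
pub fn original_file(id: HirFileId, calls: &[MacroCallLoc]) -> FileId {
    match id {
        HirFileId::File(file) => file,
        HirFileId::Macro(idx) => original_file(calls[idx].containing_file, calls),
    }
}
```

A macro call inside another macro's expansion simply points its `containing_file` at another `Macro` variant, and the walk keeps going.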
To get the text behind a HirFileId, you again either look up the original source file, whose contents you get from parsing the file on disk, or, if it is a macro file, you expand the macro invocation, which again can be recursive, because salsa allows recursive queries, and you return the result of that macro expansion. Actually implementing macro expansion properly is, of course, future work. Okay, so I think this is definitely it. It was longer than I expected, but I hope it was interesting. Okay, bye.