From YouTube: RLS 2.0, Salsa, and Name Resolution
Description
A pragmatic discussion about how to proceed integrating name resolution with Salsa and RLS 2.0.
A
We would like name resolution to be incremental, and it probably makes sense to ask why we want it to be incremental. For example, in something like IntelliJ, name resolution is not incremental, because it is on-demand. In Java every file basically has a package declaration and you import stuff using fully qualified names, so IntelliJ can maintain a mapping from a fully qualified name to a class object in its internal representation, and this mapping is very easy to update after a file changes.
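The IntelliJ-style index described here can be sketched roughly like this: a map from fully qualified name to a definition, patched per file on each edit, with resolution as a direct lookup. All the names (`FqnIndex`, `ClassDef`, and so on) are invented for illustration; this is not IntelliJ's actual data structure.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a class definition; only the owning file
// is tracked here so we can drop stale entries on re-index.
#[derive(Clone, Debug, PartialEq)]
struct ClassDef {
    file: String,
}

// A map from fully qualified name to definition. Because Java imports
// use fully qualified names, this single map is enough to resolve them.
#[derive(Default)]
struct FqnIndex {
    by_fqn: HashMap<String, ClassDef>,
}

impl FqnIndex {
    // Re-index one file: remove its old entries, insert the new ones.
    // This is the "very easy to update after a file changes" property.
    fn update_file(&mut self, file: &str, fqns: &[&str]) {
        self.by_fqn.retain(|_, def| def.file != file);
        for fqn in fqns {
            self.by_fqn
                .insert((*fqn).to_string(), ClassDef { file: file.to_string() });
        }
    }

    // On-demand resolution is a plain lookup.
    fn resolve(&self, fqn: &str) -> Option<&ClassDef> {
        self.by_fqn.get(fqn)
    }
}
```

The point of the sketch is that incrementality falls out for free: an edit touches only the entries belonging to one file, and no global recomputation is ever needed.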
A
So in IntelliJ you basically resolve names on demand, every time you need to look something up, and if you resolve names locally inside a file, you can just resolve all of them, because it's usually not that expensive anyway. But you can also imagine lazier solutions which resolve names only in a single function or something like that. Basically, you can be really lazy. In Rust, it seems that we can't really be lazy with name resolution inside a single crate.
A
We must name-resolve the crate as a whole, and when I say name resolution here I mean resolution of all the imports, not resolution inside item bodies, just the top-level stuff, because macros and imports need this fixed-point iteration algorithm, and it's completely unclear how to do that on demand. This seems like it could take a fair amount of time, so we need to make it faster, and we need to somehow incrementalize it.
A
One way you can incrementalize it is to memoize macro expansions themselves. Basically, if you expand this macro with this input token stream, you just record the result, and when you expand the same macro in some other context, you just take the result from the cache. This seems like the easy part, and also a required part, because macro expansion itself, with positional and hygiene markers, can be very complicated.
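The memoization idea can be sketched as a cache keyed by the macro definition plus the exact input token stream, so that expanding the same invocation in another context is a cache hit. Everything here is a simplified stand-in (a `String` for a token tree, a fake expansion function), not rust-analyzer's real machinery.

```rust
use std::collections::HashMap;

// Stand-in for a real token tree.
type Tokens = String;

#[derive(Default)]
struct ExpansionCache {
    // Key: (macro definition, input token stream) -> expanded tokens.
    cache: HashMap<(String, Tokens), Tokens>,
    // Counts actual (non-cached) expansions, to make memoization visible.
    expansions_run: usize,
}

impl ExpansionCache {
    fn expand(&mut self, macro_def: &str, input: &Tokens) -> Tokens {
        let key = (macro_def.to_string(), input.clone());
        if let Some(hit) = self.cache.get(&key) {
            return hit.clone();
        }
        // Cache miss: run the (fake) expansion and remember the result.
        self.expansions_run += 1;
        let output = format!("expanded({}, {})", macro_def, input);
        self.cache.insert(key, output.clone());
        output
    }
}
```

The subtlety the speakers flag is what goes into the key: real hygiene and positional information make "the same input" harder to define than this sketch suggests.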
A
That leaves us with resolving imports, using this fixed-point iteration loop, and I think we should incrementalize this as well, because the code which works in rust-analyzer now is actually pretty fast. It's like tens of milliseconds, but it's not super complete, and even tens of milliseconds is time you could otherwise spend computing type inference or stuff like that. I think for completion the budget is about 100 milliseconds, but even 100 milliseconds is already a noticeable, perceptible delay.
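The fixed-point import resolution mentioned here can be sketched as: keep applying imports against what is already known until a full pass makes no progress. Chained re-exports are why one pass is not enough. The model below (a module as a name list, an import as a copy from one module to another) is a deliberate simplification with invented names.

```rust
use std::collections::HashMap;

// An import copies `name` from `from_module` into `into_module`.
#[derive(Clone, Copy)]
struct Import {
    from_module: &'static str,
    name: &'static str,
    into_module: &'static str,
}

// Iterate to a fixed point: stop only when a whole pass over the
// imports resolves nothing new.
fn resolve_imports(
    mut modules: HashMap<&'static str, Vec<&'static str>>,
    imports: &[Import],
) -> HashMap<&'static str, Vec<&'static str>> {
    loop {
        let mut progress = false;
        for imp in imports {
            let known = modules
                .get(imp.from_module)
                .map_or(false, |names| names.contains(&imp.name));
            let already = modules
                .get(imp.into_module)
                .map_or(false, |names| names.contains(&imp.name));
            if known && !already {
                modules.entry(imp.into_module).or_default().push(imp.name);
                progress = true;
            }
        }
        if !progress {
            return modules;
        }
    }
}
```

With a re-export chain (`a` defines `x`, `b` re-exports it from `a`, `c` re-exports it from `b`), an unlucky iteration order needs multiple passes, which is exactly why the loop runs to a fixed point rather than once.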
A
The ideal would be something like 16 milliseconds, one frame, so tens of milliseconds is a cost we would rather not pay, let's say. The strategy which I think could work for making this incremental is to detect when there are no changes at the top level. For example, when you are typing inside a function, you really don't want to invalidate name resolution, because all the modules are the same and all the imports are the same.
A
You may have some imports inside the function, but that's rare, so you can just resolve them lazily. So here's the basic idea: we structure the query in such a way that it looks only at top-level items and completely ignores bodies, and that means that typing in a body is okay and name resolution is reused. This is actually implemented and works quite well. When you type something inside a body, resolution is instant, and when you add a new import, you still get completions for the import.
A
Ideally, we could expand just this single macro, see that the set of top-level items is the same, and reuse the computed name resolution. But currently, when we compute the set of top-level items itself, we include macros with their bodies in this set. So if you change a macro body, you change the set of top-level items and you have to rerun the whole name resolution, even if the result of the macro expansion would basically be the same from the point of view of name resolution.
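The "look only at top-level items, ignore bodies" layer can be sketched as a projection: name resolution takes as input a per-file item list with bodies stripped, so a body edit produces an identical input and the cached result stays valid. The types are simplified stand-ins, not rust-analyzer's real IR.

```rust
// Full items as they exist in the file, bodies included.
#[derive(Clone, Debug, PartialEq, Eq)]
enum Item {
    Fn { name: String, body: String },
    Import { path: String },
}

// The projection name resolution actually depends on: bodies dropped.
#[derive(Debug, PartialEq, Eq)]
enum TopLevelItem {
    Fn { name: String },
    Import { path: String },
}

fn top_level_items(file: &[Item]) -> Vec<TopLevelItem> {
    file.iter()
        .map(|it| match it {
            // Deliberately discard the body so body edits are invisible.
            Item::Fn { name, .. } => TopLevelItem::Fn { name: name.clone() },
            Item::Import { path } => TopLevelItem::Import { path: path.clone() },
        })
        .collect()
}
```

In a salsa-style setup, name resolution would depend on `top_level_items` rather than on the raw file, so equal projections mean the downstream query is not re-run. The complaint in the transcript is that macro bodies currently leak into this projection, defeating the trick for macro edits.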
B
I'm not sure I understood. You said that the set of top-level items does include macro bodies, so if you change the macro body, that's detected and triggers a rerun. Was that right?
A
Yes, but if you're typing in the body of some other function, it wouldn't rerun; only when you change the contents of the macro.
B
Why do we think that we could avoid it? I mean, when you do change the body of the macro, we are going to have to re-expand it to see what its result is, right?
A
Yes, because it might be different.
B
So is your idea that, after we expand the macro, you can figure out that name resolution didn't actually need to change?
A
Yes.
A
The expand-macro-invocation query itself is all good, but what matters is: what are the arguments to this query? If we actually pass a token tree as an argument, we need to have this token tree available in the name-resolution-results query, and that means we depend on the actual token tree and on the results of the expansion.
A
So we could pass an ID here instead, like the ID of the macro invocation, and this is actually what makes it difficult, because you need the results of name resolution to assign this ID. For example, in the simple case, this lazy_static! is a macro invocation in a file which is typed by the user, so the ID could be a position in this file, and this would actually work for one level.
A
So the ID of a macro invocation could basically be a position in the file, maybe a relative position or some path, so that it does not change on unrelated edits. This would work for one layer, but say this lazy_static! generates another macro invocation, something which invokes another macro. We need to expand that macro too, and we somehow need to identify it and understand the meaning of that ID.
B
So that's the base level, and then the next level would be a span into the result of a macro expansion, which means we would need a few other IDs to identify it, right? Yeah. And that seems like it covers all the cases. You're right that name resolution plays a role, because figuring out which definition a macro invocation resolves to is name resolution's job, but given a definition and an input...
B
I think you could, but I don't think that alone would solve it. I think you could make an ID that maps to something like this, and the idea would be: let's say the lazy_static! is inside something, and one macro generates the invocation of another macro. Then what would happen? The ID would be a span into the expansion, something like that, right?
A
Okay, so in my implementation I used the call site as an ID, because given the call site you can get to the definition if you invoke name resolution. If, however, we actually put the definition of the macro into the same entity, this seems like it could just work, probably.
B
Yeah, I mean, the problem is that expand-macro-invocation returns the full set of tokens. So in principle you could have a layer in between that sort of extracts out the stuff that name resolution cares about, which is like the definitions that result from a macro invocation or something, right? Yes. And if you did that, you might well find that the full name resolution becomes reusable, because you'd have a name resolution which directly invokes this definitions-from-a-macro-invocation query or something.
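The intermediate layer proposed here, extracting only what name resolution cares about from the full token output, can be sketched with a toy projection function. The parsing below (scanning for `fn <name>` pairs in a whitespace-separated token stream) is a crude illustrative stand-in, not how real expansion output is processed.

```rust
// From the full textual output of a macro expansion, keep only the
// names of the definitions it produces. Name resolution would depend
// on this projection instead of on the raw token stream, so edits that
// only change bodies inside the expansion don't invalidate it.
fn definitions_from_expansion(expansion: &str) -> Vec<String> {
    let tokens: Vec<&str> = expansion.split_whitespace().collect();
    tokens
        .windows(2)
        .filter(|w| w[0] == "fn")
        // "foo()" -> "foo": strip the parameter list from the token.
        .map(|w| w[1].split('(').next().unwrap_or(w[1]).to_string())
        .collect()
}
```

Two expansions that differ only inside function bodies map to the same definition set, which is exactly the reuse opportunity being discussed.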
B
Which, hopefully, didn't change. I guess actually using spans probably isn't great, but if you have some more stable form of identifier, then this would say: okay, then we would invoke definitions-from-a-macro-invocation.
B
Yes, so you would re-expand some macros, or you wouldn't, or only some of them. I think you could do something like this; it feels like it can work. And in fact, it feels sort of nice, because if you have some sort of ID for the text in a macro, then just on its own you can sort of trace it back to recover that text. That's kind of what this ID gives us.
B
I think what this solution does not give us is the ability to have finer-grained reuse within a crate, if we think we want that. Like, right now you are doing name resolution across the entire crate, and maybe we would like to be able to say: within this module nothing changed, so I don't need to recompute anything. That's a little bit complicated, though, because of the interdependence between modules.
B
It seems like, if we're careful with our choice of intermediate queries, then the only time you would have to rerun name resolution is when you add a new item, change the name of an item or alter it in some other way like this, or add a new macro invocation at the top level, and that is probably unusual; it's not the common case.
B
But as long as macros generate the same set of IDs, the name resolution results themselves stay constant. The key thing would be making sure the IDs are sufficiently stable. That's true, but I think the basic trick for that is to make the IDs a tree, so that you're not using the offset in bytes or tokens into a flat list, because then inserting things in the middle shifts everything after it.
A
Well, actually, there's a second trick, which we use for items now: the IDs are not trees, the IDs are positional. To identify an item in the file, we enumerate all the top-level items and use the index of an item as its ID. So it's not really a tree structure, it's a positional thing, but you carefully arrange it so that positions don't change often.
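The positional scheme described here can be sketched in a few lines: an item's ID is simply its index among the file's top-level items, ignoring bodies. It is stable under body edits, but shifted by insertions earlier in the file, which is the fragility the tree-shaped alternative would avoid. Types and names are illustrative only.

```rust
// A top-level item; the body is deliberately not part of the ID.
#[derive(Clone)]
struct Item {
    name: String,
    body: String,
}

// Positional ID: the index of the item among the file's top-level items.
fn id_of(items: &[Item], name: &str) -> Option<u32> {
    items.iter().position(|it| it.name == name).map(|i| i as u32)
}
```

Editing a body leaves every ID unchanged, while inserting a new item at the top of the file shifts all subsequent IDs, so the scheme works well exactly when top-level structure changes rarely.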
A
I think I feel satisfied; well, at least I want to try it. I am not sure that I won't actually hit a wall somewhere in this plan, but I don't see problems before implementing it. Actually, if somebody else wants to dig into macro expansion and name resolution, feel free to join; I'd like to work on something else, like doing diagnostics now, which is also very exciting.
B
Seeing where it goes would be the best I could do, sure. I think it might be worth trying to play with this in a smaller example; I don't know how much... oh, I've never actually worked in the codebase. Maybe it's small enough and you're familiar enough that you'd move pretty fast. For me, if I were going to try to prototype this, I would make a little standalone example.
B
Are we ready to land this? Maybe it would be nice if we keep that a little separated, I guess. I do think that this will be a really powerful enabler for this sort of thing; it's exactly the kind of case where it's useful.