From YouTube: Parallel Rustc Planning Meeting 2019.10.28
Description
Discussed:
* moving LintStore from Session to TyCtxt retrospective
* Review of the Sync module
* Review of bits of mutable state
* Performance action items
B: So the idea is that we have a lot of stuff in Session that is sort of not immutable yet, but will become so once we do HIR lowering or some other early stage of compilation. Yeah. That's why the type context right now comes well after HIR lowering, and we're moving some other stuff into the global context, which has a bunch of sort of easy ways of putting stuff in it.
A: So you're saying — like, you made this change, I don't know if other people sort of knew about it, but we took the LintStore out of Session. You initialize it kind of on the stack as part of registration, basically, then move it into the type context in a frozen state, so we don't need any locks. At the end of the day, I don't think we needed any locks, right?
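A minimal sketch of the build-then-freeze pattern being described, assuming made-up stand-in types (`LintStore`, `GlobalCtxt`, and `register_lints` here are illustrative, not rustc's actual definitions): the store is mutated while it is still exclusively owned on the stack, and only shared references are handed out afterwards, so no locking is needed.

```rust
struct LintStore {
    registered: Vec<&'static str>,
}

struct GlobalCtxt<'a> {
    lint_store: &'a LintStore, // read-only after construction: no locks needed
}

fn register_lints(store: &mut LintStore) {
    // Registration mutates the store while we still have exclusive access.
    store.registered.push("unused_variables");
    store.registered.push("dead_code");
}

fn main() {
    // 1. Mutable phase: build the store on the stack, single-threaded.
    let mut store = LintStore { registered: Vec::new() };
    register_lints(&mut store);

    // 2. Frozen phase: hand out only shared references from here on.
    let gcx = GlobalCtxt { lint_store: &store };
    assert_eq!(gcx.lint_store.registered.len(), 2);
}
```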
A: Everything is single-threaded when that's being done, and I think — what would happen if we moved, in that case, the type context, or let's call it the query barrier or something — if you move the query barrier back... I think your concern is, if we moved it sufficiently far back, that it would overlap the place where the lint store is mutable; like, maybe we move it all the way back.
A: That would be the ideal case, so that we don't need a lock, because we sort of move that into the query mechanism itself managing the lock. That seems like it'll work for things that have a clear initialization period; maybe less well if there's state that's kind of mutated on and off throughout, like the crate store or something handled kind of lazily throughout execution. But probably what happens there is that those things should be lifted up into queries as much as possible, because that is sort of the mechanism.
C: There are a lot of things that are defined one way, and when the parallel cfg is true they are defined in another, different way, which is a more parallel-oriented setup. But yeah, that's our basic construction. I guess the most interesting thing to figure out with this is the usages of these things and whether they are correct or not. This is more or less what we did with Niko on rustc, like, quickly, but other than that, I don't know.
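A rough sketch of the cfg-switched setup being described, assuming rustc's internal `parallel_compiler` cfg: the same names resolve to cheap single-threaded types or to real synchronized types depending on whether the parallel compiler is enabled. This is a simplification; the real rustc_data_structures::sync definitions differ.

```rust
#[cfg(not(parallel_compiler))]
pub type Lrc<T> = std::rc::Rc<T>;
#[cfg(parallel_compiler)]
pub type Lrc<T> = std::sync::Arc<T>;

// Same name, same API surface; only the parallel build pays for real locking.
#[cfg(not(parallel_compiler))]
pub struct Lock<T>(std::cell::RefCell<T>);
#[cfg(not(parallel_compiler))]
impl<T> Lock<T> {
    pub fn new(v: T) -> Self { Lock(std::cell::RefCell::new(v)) }
    pub fn lock(&self) -> std::cell::RefMut<'_, T> { self.0.borrow_mut() }
}

#[cfg(parallel_compiler)]
pub struct Lock<T>(std::sync::Mutex<T>);
#[cfg(parallel_compiler)]
impl<T> Lock<T> {
    pub fn new(v: T) -> Self { Lock(std::sync::Mutex::new(v)) }
    pub fn lock(&self) -> std::sync::MutexGuard<'_, T> { self.0.lock().unwrap() }
}

fn main() {
    let counter = Lrc::new(Lock::new(0u32));
    *counter.lock() += 1;
    assert_eq!(*counter.lock(), 1);
}
```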
B: So one thing I wanted to talk about is that, for example, one of the abstractions in this module — at least to me, and some of the other abstractions too — seems like maybe we should try to either delete them or remove them, particularly around the multi-threaded lock, or however you decipher those MT-prefixed ones, MTLock and MTRef, just because they're sort of weird primitives, and it seems better to define them as you need them versus having this global thing.
B: So it seems like in almost all cases, whether you have a parallel or non-parallel compiler, you can just use atomics directly; it won't matter, in the sense that even if you're single-threaded, atomics have essentially zero overhead. So it feels like we should just make the leap and use them, versus having this abstraction and sort of splitting code and making things more complicated as a result.
A
There
is
some
advantages:
okay,
rather
we
just
used.
We
could
also
just
consider
using
crossbeams
atomic,
which
is
good,
I,
don't
know
if
that's
what
we're
actually
doing
here.
There
are
some
advantages,
though,
like
if
you
have
a
new
type
integer,
for
example,
which
we
do
a
lot
for
indices
and
so
on
then
being
able
to
use
like
crossbeams
atomic
atomic
T
instead
of
say,
atomic
you
size
is
a
big
win.
I,
don't
know
that
we
actually
do
that,
but
yeah.
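A sketch of the newtype-index point above, assuming the crossbeam-utils crate is available; `NodeIndex` is a made-up example type, not one of rustc's index types.

```rust
use crossbeam_utils::atomic::AtomicCell;
use std::sync::atomic::{AtomicUsize, Ordering};

#[derive(Copy, Clone, Debug, PartialEq)]
struct NodeIndex(u32);

fn main() {
    // With AtomicCell the newtype survives: no casting at every use site.
    let current = AtomicCell::new(NodeIndex(0));
    current.store(NodeIndex(7));
    assert_eq!(current.load(), NodeIndex(7));

    // With a raw AtomicUsize, every read and write converts by hand.
    let raw = AtomicUsize::new(0);
    raw.store(7, Ordering::SeqCst);
    let idx = NodeIndex(raw.load(Ordering::SeqCst) as u32);
    assert_eq!(idx, NodeIndex(7));
}
```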
B: That was the other thing I was going to suggest: that we be less eager to be, sort of, single-threadedly perfect, and just use locks directly instead of trying to be efficient and not use them, for non-performance-critical areas. A lock is essentially free, as I understand it, if you're on a single-threaded machine; say, there is a cost, but it's not too significant.
A: Right. So another example might be Lrc. Well, let's go through in order: what is `parallel`? So — a parallel iterator and catch-panics. This is, I guess, the code that does parallel iteration, more or less. This is that parallel macro that runs things in parallel if we're working in parallel, and then sometimes does a parallel iterator. Okay, what's the catch-panics part of the story?
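A stand-in sketch of the kind of macro being walked through here (not rustc's real macro, and it omits the panic-catching part mentioned above): a sequential build just runs the blocks in order, while a parallel build could hand them to rayon. `parallel_compiler` is rustc's internal cfg, and rayon is assumed as the thread-pool crate.

```rust
#[cfg(not(parallel_compiler))]
macro_rules! parallel {
    ($($block:expr),* $(,)?) => {{
        // Sequential mode: just run each block in order.
        $( $block; )*
    }};
}

#[cfg(parallel_compiler)]
macro_rules! parallel {
    ($($block:expr),* $(,)?) => {{
        // Parallel mode: spawn each block into a rayon scope.
        rayon::scope(|s| { $( s.spawn(|_| { $block; }); )* });
    }};
}

fn main() {
    parallel!(
        println!("check item A"),
        println!("check item B"),
    );
}
```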
B: One of the sort of reasons why I would like to explore switching it over is that a lot of the time, I feel like new contributors especially, if they see this Lrc thing, they're like, what is this? Because the Lrc sort of feels like something... I constantly think it's a lock, and then I go and I'm like, oh wait, no, this is not a lock.
A: I do agree it's confusing. Not to mention — well, I guess that won't help — I get annoyed that if you don't build with parallel compilation enabled and you use Rc, things work until you hit CI. But that's just going to be true regardless.
A: We could probably get rid of... So MTLock is an interesting special case, because it's not a RefCell; when you're not — when you're in sequential mode, it's just essentially a plain mutable T, or an owned T, and so it doesn't translate to RefCell directly.
A: Santiago and I went through and looked at all the places where locks and atomics are used and kind of enumerated them. When we were doing that, I was a little pleasantly surprised to see how few of them there were. This is probably missing stuff; it came from just, like, ripping through the code.
A
There
was
a
few
that
caught
my
eye
that
we
could
look
at
in
terms
of
like
further
candidates
for
simplification,
these
crate
crate,
metadata,
I
think
in
particular,
and
of
course,
session
and
see,
store,
I
think
we're
the
big
ones.
A
lot
of
these
other
patterns
seemed
fine,
like
the
here
ID
validator
has
some.
It
says
here
some
vector
that
many
threads
are
pushing
into
it's
kind
of
local
and
relatively
easy
to
understand.
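A sketch of that "local vector many threads push into" pattern, using plain std types (this is illustrative, not the HIR ID validator's real code): a short-lived, lock-protected Vec that workers append to and that is read only after everything has joined.

```rust
use std::sync::Mutex;
use std::thread;

fn main() {
    let errors: Mutex<Vec<String>> = Mutex::new(Vec::new());

    thread::scope(|s| {
        for worker in 0..4 {
            let errors = &errors;
            s.spawn(move || {
                // ... validate a chunk of IDs, record any problems ...
                errors.lock().unwrap().push(format!("worker {worker}: ok"));
            });
        }
    });

    // All workers have joined, so the contents are easy to reason about.
    println!("{:?}", errors.into_inner().unwrap());
}
```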
A: CStore itself, it looks like we thought it was okay; it's mainly the Session.
A: So we could, like... oh okay, whatever these things are — there are lots of locks there, sure, all those spans that are collected during parsing or something — I'm just gonna close my eyes and ignore all that.
B: I think this is sort of the thing that might be useful to talk about, because it's the pattern of: we have this, it's explicitly global state — like, there's no way to de-globalify it — and everywhere in the compiler is essentially adding into it by just calling a method, or some equivalent of a method.
A: Right — oh, you're assuming that... I'm not really too worried about that. I don't think we use this field very often; it's just that certain random error messages grab this lock and stick themselves in there, at least in this particular case, so as to avoid being printed more than once or something.
D: Kind of not great. So obviously it also seems like this still predates the query system — it's just been sitting around for years at this point. It was kind of... because, I mean, things just needed to be more ad hoc than they do now, and so it could be that a lot of this stuff could become queries. Like, looking at that — the allocator kind and the injected panic runtime — things like those could become queries. Yeah.
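A sketch of the "turn it into a query" idea: compute the value on demand from immutable crate attributes and memoize it, instead of writing into a mutable Session field. All names here are illustrative stand-ins, not rustc's query system.

```rust
use std::cell::OnceCell;

struct Crate {
    attrs: Vec<String>,
}

struct Queries {
    allocator_kind: OnceCell<Option<String>>,
}

impl Queries {
    // Provider: a pure function of the (already-frozen) crate, cached on
    // first use, so no lock and no up-front mutation of global state.
    fn allocator_kind(&self, krate: &Crate) -> Option<String> {
        self.allocator_kind
            .get_or_init(|| {
                krate
                    .attrs
                    .iter()
                    .find(|a| a.starts_with("allocator"))
                    .cloned()
            })
            .clone()
    }
}

fn main() {
    let krate = Crate { attrs: vec!["allocator = system".to_string()] };
    let queries = Queries { allocator_kind: OnceCell::new() };
    assert!(queries.allocator_kind(&krate).is_some());
    assert!(queries.allocator_kind(&krate).is_some()); // second call is cached
}
```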
D: I mean, that makes sense, but I would also suspect that if this is a one-time initialization, you have a mutable reference to the Session, so you should be able to mutate it without locks at that point. Maybe we haven't threaded that through — that support might not be threaded through just yet — but that would be what I would expect: if it's one-time initialization, you have a mutable Session; otherwise you have a shared reference. Yeah.
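A sketch of that suggestion, assuming an illustrative Session type (the field and method names here are made up, not rustc's actual Session): a field set exactly once during early compilation can be a plain Option written through `&mut Session`, with later phases only ever seeing `&Session`.

```rust
struct Session {
    recursion_limit: Option<usize>,
}

impl Session {
    // Runs while the driver still holds `&mut Session`, so no lock is needed.
    fn init_recursion_limit(&mut self, from_crate_attr: Option<usize>) {
        self.recursion_limit = Some(from_crate_attr.unwrap_or(128));
    }

    // Later, parallel phases only get `&Session`: the field is effectively frozen.
    fn recursion_limit(&self) -> usize {
        self.recursion_limit.expect("recursion limit was never initialized")
    }
}

fn main() {
    let mut sess = Session { recursion_limit: None };
    sess.init_recursion_limit(Some(256)); // one-time init through &mut
    let sess = &sess;                     // shared reference from here on
    assert_eq!(sess.recursion_limit(), 256);
}
```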
A: I guess right now what we do is, when we see them in the parser or something, we write to them, but we could instead just go fetch the attribute from the crate... from the crate attribute list. I don't think we use them before the query system exists — that's the key question. Well, macro expansion; this is the recursion limit.
D: That is the case, then. Yeah, I would expect that resolution either has a mutable reference or, like, a shared reference that is actually mutable later on; like, the resolution stuff would produce these things using a mutable reference to the Session to shove them in there, and then they would be immutable once we start actually doing parallel work afterwards. Well...
A: And we should do that — why not, assuming it doesn't cause any big problems. I guess the other thing we could talk about, then: last time we talked about the job server and everything, and we sort of left it without any clear action items, and nothing happened. Maybe we want to produce one. The closest thing I remember is that I was gonna look at the new rayon scheduler to see if it works better, but I didn't do it; I sort of knew I wouldn't do it. I shouldn't have said I might.
C: I missed that discussion, but I wonder if you talked about this: it's sort of a mismatch to use the job server for threads rather than processes. I feel like... I don't know if there's a good resolution for that, but I'm not real comfortable with the way rayon is acquiring and releasing it every time it sleeps.
B: So I think we had mentioned an action item of throwing in, like, disabling the thread acquires and releases — like ripping the job server out entirely — and then just running perf, seeing how single-threaded performance looks, and maybe some of the other benchmarks we have. We can't actually release that, but it will give us insight. It's like, we've been saying that the job server is the source of all evil, but whether the facts would match that...
D: But we also — we talked about... I get the sense that we have this suspicion that the job server is slowing down rayon, but we don't really have a precise understanding of what pattern rayon has that's causing it to be so bad. Like, we don't know; I think we have some suspicions, like it's waking up or going to sleep too much, but we don't really have a picture of why it's so bad versus, like, would switching to just a pure parallel-iterator-only approach, like parallel top-level loops...
D: Would that be all we need? Would that solve the issues, or would even that have the same job server issues? So in some sense, I think we can do some measurements, like ripping things out or trying out the new scheduler, but I suspect that we still also just need more investigation as to what exactly... what exactly is the profile for why it's so slow today, and why exactly are we acquiring too much, and how could we acquire less — or, like, that's kind of what I suspect.
A
It's
not
really
obvious
to
me
how
if
we
had
sort
of
re-implemented
the
Ranford
pool,
I,
guess
sort
of
your
same
point
but
I'm
not
sure
what
it
would
do
is
so
differently,
and
today
it's
a
better
job
server.
It's
gonna
start
up.
Some
threads
they're
gonna
try
to
get
tokens.
They're
gonna
pull
from
some
central
cube
only
like
maybe
they
wouldn't
work
still.
It
would
use
a
central
cube
just
to
be
simpler,
but
like
so
what
I
don't.
D: ...understand is, once you start a parallel top-level loop, you should do on the order of number-of-CPUs syscalls to acquire that many tokens, and then you should do no job-server-related work until the very end, once everything finishes. But it sounds like there's a huge amount of thrashing during that giant parallel loop, and so that might be causing some issues.
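A sketch of the acquisition pattern being described: grab roughly one token per CPU when a top-level parallel loop starts, hold them for the duration, and release them when it ends, rather than acquiring and releasing around every worker sleep/wake. This assumes the `jobserver` crate; the real rustc/rayon integration is far more involved, and the function here is illustrative.

```rust
use jobserver::{Acquired, Client};

fn run_top_level_parallel_loop(client: &Client, num_cpus: usize, work: &[&str]) {
    // On the order of num_cpus acquisitions, all up front.
    let tokens: Vec<Acquired> = (1..num_cpus) // the implicit token covers thread 0
        .map(|_| client.acquire().expect("jobserver acquire failed"))
        .collect();

    // ... the whole parallel loop runs while the tokens are held ...
    for item in work {
        println!("processing {item}");
    }

    // Tokens go back to the jobserver exactly once, when they are dropped.
    drop(tokens);
}

fn main() {
    // For the sketch, make our own jobserver with four slots instead of
    // inheriting one from cargo/make.
    let client = Client::new(4).expect("failed to create jobserver");
    run_top_level_parallel_loop(&client, 4, &["item a", "item b"]);
}
```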
D: So that's kind of the investigation that I think we need to do, because if the thrashing is happening, that's a very easy thing to fix. If the thrashing is not happening, then, like, oh my god, the job server might not work at all, and we have to kind of rethink how we're gonna do this.
D: It's like... I would kind of expect to see, I don't know, a small handful — like 20 or 30 — top-level parallel loops, being optimistic, and then everything within those should be completely parallelizable. And I don't wanna speculate too much, but I just wanted to point out that the investigation of what exactly is going on is more or less needed, unless a fix comes down the road.
A: The tooling that you were showing us, Alex — the graphs that we were showing, like how the cargo execution played out — is that generic enough that it will work with the parallel rustc and give us some idea of, like, bursts of parallelism or something like that, or does it require more integration?
D: None of the cargo tooling keeps track — cargo's build graph stuff doesn't keep track of thread information. Otherwise it would be the self-profiling stuff in rustc, and I haven't actually run that with multithreading, so I'm not sure how the self-profiler handles multiple threads; I suspect pretty well.
D: That's what the self-profiler stuff in rustc... basically it is exactly that, which is really good for this. We've got that for the codegen back-end stuff, so for the parallelism we have in LLVM we have really nice graphs; you can really, very viscerally, see what's happening. I just haven't tested it with the parallel compiler for everything else.