From YouTube: incr-comp meeting #4 (2020-10-06)
Description: Agenda/notes at https://hackmd.io/@wesleywiser/B1mAoCtID
A: Okay, all right — this is the fourth incremental compilation meeting, and we have the agenda that Wesley prepared, linked from Zulip. We still don't have a regular cadence yet in terms of, like, a standing agenda for triage and whatnot. Maybe we'll work that out later, but let's see what the status is. Maybe you can quickly look over the agenda and collectively see if there's anything that anyone wants to add — so everyone can skim over it.
A: I'll just mention the bullet points briefly. There's a pull request from Santiago about inline function duplication; David has a draft pull request for split debug info; and Amman Aurora, who is not present, has opened a pull request for anonymous query nodes. Does anyone here know anything about that offhand?
A: He's been talking to me a little bit about that. Okay, all right — I'll let you perhaps tell us about it. And then Wesley is working on adding more detailed tracing data for the self-profiling. Are there any other agenda items that anyone else wants to add to this?
A: Yeah, you know what — I'm assuming we probably will have time, so let's add that as an item to the meeting and see what we come up with.
A: Okay, but let's go ahead and talk about the things that are here first, since you took the time to write this up — I want to hear about it. We'll start with things in order. So we have Santiago's pull request to reduce inline function duplication in debug builds. Is this the one that turns off inlining — like, even #[inline(always)] doesn't inline anymore — or did you change the behavior, or is that something else?
B: No — last week I was working on some other stuff, so I didn't do anything with this yet. We should continue — we should finish this one — but it's in the same state it was last week.
A: Since we don't run the inliner anyway — right, right, I remember that now, we talked about that. Okay, and then it says here you're looking into regressions connected to this.
C: Yeah, there are some decent wins and a couple of small losses on some of the — what's the word — the micro benchmarks. I would not really have expected to see that, so I'm a little curious what's going on. I started digging into that, and that's kind of what led to my other bullet point there, which is just trying to figure out what's going on.
A: When I followed the link — I think there's some sort of error with perf right now, because when I followed the comparison link I got an "error decoding response body: expected value at line 1 column 1." So I'm assuming there's something wrong with some service that we rely on. That might be perf across the board, though, so I'm going to assume it's not our problem. Yeah — it was working last night, so I think that's probably it.
A: Okay, all right. Since I can't see the comparison data, I don't know offhand which micro benchmarks are the ones that are failing. Is there anything of interest yet — even just naming the ones that are?
C: I don't remember exactly offhand which ones it was, but I think the more interesting thing to me was that the losses were typically like four or five percent on some of the micro benchmarks, and the interesting wins were on things like, I think, the regex crate and some of the larger ones — I think one of the servo crates actually had a three or four percent performance improvement, not just on the incremental benchmarks but even the non-incremental full debug compilation.
A: Gotcha, okay, yeah — definitely something to push forward on, I think. Okay, next up is a pull request from David on split debug info, which I don't think we had last meeting.
D: It went up days after the last meeting. It's currently working: you can pass the flag and it'll produce objects that have the split debug info in them, and you can run gdb and load them and stuff. The problem is that right now we're outputting the DWARF object files per codegen unit, which means we have tons of them, so we need a solution to link those together so there's just one with the binary.
D: The problem is there aren't really — well, there aren't many existing ways to do that. There's a tool called dwp which can link the pre-standard GNU-flavored split DWARF, but not DWARF 5 split DWARF. LLVM picks whether to use the GNU-flavored or the DWARF-5-flavored form based on the target, so — I don't know, that's something we could change. But even if we were able to link them together — what?
D: I don't think there's much of a difference; I think it's just that DWARF 5 is the way to do it going forwards. Okay — and even if we were able to link them all together, there's no tool that I can find that would let me change the DWARF attribute in the linked binary to point to the new single file rather than the twelve individual per-codegen-unit files.
D: So we could add support to dwp, which is part of binutils, for DWARF 5 linking, but then we would need to depend on that version of dwp if we wanted rustc to invoke it — and I don't know how long it would take for that to go into distros and things. Or we could write our own little tool that does this.
A: Okay — I'm not familiar with split debug info, but I'm reading over the issue now that Michael Woerister filed, I guess four years ago. So, just so we're all on the same page about it: it's something where there's a lot of time you spend processing debug info, I guess at load time, or at...
D: ...link time, yeah. In the debug info there's some data that requires link-time relocation and some that doesn't. It takes the stuff that doesn't require link-time relocation and either moves it to a separate file, or changes the debug info such that the linker knows to skip it. In either case the linker skips it, and that's where you get your time savings.
A: This is interesting. This shows how much I have yet to learn in terms of how this all works, because I had thought that when you had the option for a separate file, it wasn't something that the linker had to work with — I had sort of thought those things were already naturally like this all the time.
A: Do you think — in terms of the time that we're spending on processing debug info — you've identified this as being a big enough speed hit at link time that it really is worth investing in making your own tool?
A: I see, I see, yeah. I just don't want — there might be other things, potentially other work items, that could have a bigger bang for the buck. So obviously, if you're already in a groove working on this, then it makes sense to just keep plunging forward. But since it sounds like you're talking about making a whole new tool, that does sound like a kind of radical shift in the amount of effort, potentially. So I want to make sure that you keep that in mind.
A: Whether the linker's memory usage is blowing up in processing the debug info, or whether it really is just a time issue with processing the stuff, okay. And also, the other thing to look at potentially, when you're evaluating these things, is to see if lld for any reason does a better job in terms of where the time goes — because I know a lot of people say that if they switch to lld, the link step gets dramatically faster.
A: It could be that on things like this it's better — maybe, or not, I don't know; it's just something to think about. If you decide to take the time to actually do a little bit of benchmarking, rather than jumping into the actual solution, then I would consider these other points to evaluate. But also, you know, it could just be that you're better off jumping into the solution.
A: Okay. Next we have this pull request regarding anonymous query nodes that was contributed by Amman Aurora. Wesley, do you have thoughts on this?
C: There's a description on the PR, even. Yeah, so this was something Michael filed an issue about a couple of years ago, and it sounds like there was not actually any hard performance data gathered — it was just kind of an "oh, this might make stuff faster."
C: Basically, there are some queries that don't depend on anything in the TyCtxt struct — they just do stuff with their arguments — and we allocate dependency nodes in the dependency graph for those things, even though they should be relatively cheap to compute and we don't actually need to do that.
C: So this PR basically adds a way to have anonymous dep nodes, which I believe are really just kind of gaps in the dep graph — they just sort of forward any reads they do to the ancestor or the descendant queries.
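The forwarding idea Wesley describes can be sketched roughly like this — a toy model of a dependency graph (not rustc's real DepGraph, and the query names are made up for illustration) where an anonymous node gets no entry of its own and its reads are charged to the named query that invoked it:

```rust
use std::collections::{BTreeSet, HashMap};

// Toy dependency graph: maps a named query node to the set of nodes it reads.
#[derive(Default)]
struct DepGraph {
    edges: HashMap<String, BTreeSet<String>>,
}

impl DepGraph {
    // A normal query allocates a node and records its reads.
    fn record_named(&mut self, name: &str, reads: &[&str]) {
        let entry = self.edges.entry(name.to_string()).or_default();
        entry.extend(reads.iter().map(|s| s.to_string()));
    }

    // An "anonymous" query gets no node of its own: any reads it performs
    // are forwarded to the named ancestor query that called it.
    fn record_anonymous(&mut self, ancestor: &str, reads: &[&str]) {
        self.record_named(ancestor, reads);
    }
}

fn main() {
    let mut graph = DepGraph::default();
    graph.record_named("type_of(foo)", &["hir(foo)"]);
    // A cheap helper query that only looks at its arguments: instead of
    // allocating a node, its read of `hir(bar)` is charged to the caller.
    graph.record_anonymous("type_of(foo)", &["hir(bar)"]);

    let reads = &graph.edges["type_of(foo)"];
    assert_eq!(reads.len(), 2); // both reads land on the named node
    println!("{reads:?}");
}
```

The point of the sketch is the space trade-off under discussion: the helper's work still gets tracked for correctness, but no extra node is allocated for it.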
C: Yeah, that would be worth looking at. I guess I'm sort of feeling I'm going to look into this a little more, and I think maybe ping Michael and say: hey, does this look like a reasonable implementation to you or not?
C: ...unless the computed result is wrong — and assuming it's right, I think, if we're not really seeing a performance boost here, it does complicate the query model a little bit, and unless there are significant space savings, I'm not sure. I think this may just be: we close the issue and say we tried it and it doesn't actually yield much or any performance improvement. There was never really perf data showing that this would be worthwhile to begin with; it was just kind of a...
A: It's also possible it wasn't done quite right, yeah, like you said. Okay, okay — good to try to ping Michael. All right, and finally, Wesley's got work on detailed tracing data for self-profiling. So what's the situation here — is there an actual PR, or have you just got something in progress?
C: So there's a PR open on measureme, which is the library the self-profiler uses. Yeah, let me grab it.
C: Support for multiple arguments being recorded in the event that gets recorded in the trace file has been there for a while now, but there's not actually an API to generate those from calling code — so this is literally just that API. And then there's some other work that's happening there: we need to land an update on rustc to actually take advantage of this, which is in progress, but once that's done...
C: What I want to do is, for every codegen module node we have in there — we already have those nodes, and we can see what the codegen unit names are and how long they take for LLVM to process — what I'd like to see is what our estimated costs are, and ideally, if there's an easy way to get it out, why we're generating this. That's not that interesting for full compilations, but in the incremental case it would be interesting if we could get some sort of data about it.
C: There may even be more detailed tracing stuff we can do at the mono item level, to say this changed because of this.
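A rough sketch of what "multiple arguments per event" means for the trace data. The types and method names here are invented for illustration — this is not measureme's actual API — but the shape of the data matches what's described above: one codegen event carrying both the CGU name and an estimated cost:

```rust
// Hypothetical, simplified model of a self-profiler event record.
#[derive(Debug)]
struct Event {
    label: &'static str,
    // The new capability under discussion: arbitrarily many string
    // arguments attached to a single event.
    args: Vec<String>,
    nanos: u64,
}

struct Profiler {
    events: Vec<Event>,
}

impl Profiler {
    fn new() -> Self {
        Profiler { events: Vec::new() }
    }

    fn record(&mut self, label: &'static str, args: Vec<String>, nanos: u64) {
        self.events.push(Event { label, args, nanos });
    }
}

fn main() {
    let mut profiler = Profiler::new();
    // One codegen_module event carrying the CGU name plus an estimated
    // cost; a real trace would add e.g. the reason it was re-codegened.
    profiler.record(
        "codegen_module",
        vec!["regex.7rcbfp3g-cgu.0".into(), "estimated_cost=1500".into()],
        2_000_000,
    );
    assert_eq!(profiler.events[0].args.len(), 2);
    println!("{:?}", profiler.events[0]);
}
```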
A: Sorry — okay, okay, yeah.

A: Yeah, this looks really great. It seems like Michael himself reviewed the PR recently. This looks like a great — I mean, it'd be good to work towards the exact thing you described. All right — so this actual PR isn't that complicated?
A: It's just adding a constructor — exactly — okay, cool, cool. All right, we're halfway through the meeting now, so let's go ahead and, I guess, look at the issues and see if we can do a little bit of triage on the fly here. Like I said, I have not done much.
A: Since our last meeting I've been focusing on finding a job, but progress there has been very good, so I will have good news to share, I think, in the very near future. But yeah — that's my report.
A: ...issues that are, for whatever reason, tagged with our working group, and we'll see if they're all things that we — we'll obviously skip the ones that are related to issues we already touched on. And then, if we get through this whole list, we can look at the broader list tagged for incremental compilation, which is a much longer list consisting of 108 issues. Okay — so the working group incremental compilation list has nine open issues.
A: I'll post both these links in the hackmd. The other one, which I'm saying we probably won't get to — but if we do — is the broader list of all incremental compilation issues, and that's 108 open issues.
A: The label — I don't know. It could still have meaning in terms of being a way of tagging something actively — at least something that our team has acknowledged the existence of, and perhaps is even working on. There are potentially useful semantics here in terms of what the label means — or it could just be confusing. It's a good question.
A: I personally am okay, for right now, with leaving it — but I'm not the one doing triage, and if it's confusing to other people who are trying to figure out what the labels mean — which is a situation I've been in — I can understand that. If nothing else, there's the fact that there's such a disparity between the two labels. I'm still tempted to say let's not just change it.
A: As for the working group label — I don't know; the fact that it's just nine open issues tagged with it, like I said, there could be some reason that it's like that. Okay, so I've got this set of nine open issues. The first one on the list that I linked above is #65023.
A: That's the "unused attribute" error — const function, union — with incremental compilation. It's assigned to Oli right now; I don't know if that's something where Oli self-assigned it.
A: This is a little bit older — I guess it's from last year. Santiago filed it, and filed a workaround.
A: Yeah — so unfortunately it's the kind of thing that's probably hard for us to quickly check, right? If it's an internal attribute — let's see — oh, maybe not, because the bug report is based on building the compiler itself; obviously it's not going to test that right now.
B: I remember more or less what this was about, but yeah — it's impossible now to reproduce this again.
A: My immediate question is whether this bug is one that affects anyone. Is anyone using this outside of the compiler itself? Is it something where fixing the actual bug has value, or is it something where the workaround solves the problem for anyone who actually is using this thing? Oh — it's been whitelisted too.
A: All right, you know what — I'm tempted to say close as fixed, then, personally. Or, you know, close as worked around, if our current hypothesis holds — and let someone else file it if there's some reason that it matters in other contexts. Does that make sense?
C: If you look at the files-changed diff — or, I guess, Aaron asked for one, but we didn't actually do that. Yeah, I don't know — I mean, it seems like maybe we want an issue to track...
C: ...turning this back to normal. It sounds like that's what is supposed to happen — although it also sounds like the whitelist is maybe just supposed to go away; it's just an anti-pattern, I guess.
A: We could close it, all right, yeah. I think there's a broader issue here that I don't feel like we should invest our time in, personally. So yeah, okay — I'm going to say closing as worked around.
A: Okay, so let's close this. All right, next up — let me clearly say the number — is #64291: "hard linking incremental files can be expensive."
A: I guess — yeah, the label usage can be confusing. Clearly we don't have anything to report here, because we didn't look at it, as far as I know. They're just reporting that it takes a long time for incremental to link this stuff — but they're talking about literally hard links on the file system, I think, right? Am I...
C: ...right — I wonder if this, or something similar, is also an issue on Windows. I doubt we use junction points, which I think are the more supported way to do hard links on Windows; we may just be copying the stuff back and forth, and file system performance on Windows is generally bad anyway. So this might not just be Mac-specific — this might be an "anything but Linux" kind of issue.
C: I'll mention you in this comment, which is just to say: maybe this is worth investigating on Windows as well, because there could be a similar kind of issue going on.
C: ...them, because it's pretty rare, from what I've seen, for tools to actually use them. Like, the build tool for .NET, MSBuild — it just copies everything everywhere all the time, and it doesn't really even support hard linking; it does in an experimental way. But the product we work on at work has hundreds of megabytes of output DLLs, and I tried turning that on to see how much of a performance improvement there would be. It was shaving like 40 seconds off a clean build, but it totally broke the tool afterwards — it just straight up doesn't work.
A: So nagisa did point out here — which is similar to my reaction — that your operating system taking 0.2 milliseconds on average to do this hard-linking step seems so excessive that they wondered: was there something else wrong with your system, or maybe with what we're doing, on a gross scale? So there's potentially some investigation warranted here, in terms of figuring out whether other people observe this and whether it's something else that's wrong.
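The 0.2 ms/link figure is easy to sanity-check locally. A minimal sketch, assuming only the standard library — the paths and counts are made up for the experiment, and actual numbers will vary by file system:

```rust
use std::fs;
use std::time::Instant;

// Rough reproduction of the measurement in the issue: how long does one
// fs::hard_link call take on this machine?
fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("hardlink-timing-demo");
    fs::create_dir_all(&dir)?;
    let src = dir.join("src.bin");
    fs::write(&src, vec![0u8; 1024])?;

    let n: u32 = 100;
    let start = Instant::now();
    for i in 0..n {
        let dst = dir.join(format!("link-{i}.bin"));
        let _ = fs::remove_file(&dst); // ignore "not found" on a fresh run
        fs::hard_link(&src, &dst)?;
    }
    let per_link = start.elapsed() / n;
    // On a healthy local file system this is typically well below the
    // 0.2 ms/link reported in the issue.
    println!("avg per hard link: {per_link:?}");

    fs::remove_dir_all(&dir)?;
    Ok(())
}
```

Running this on the reporter's machine versus a reference machine would quickly show whether the cost is in the OS call itself or somewhere in what rustc does around it.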
A: It's a broader problem than just the link time itself, and not incremental-exclusive. Having said that, the big question immediately raised by the incoming question itself is: can incremental compilation produce fewer files?
A: I don't think so — not without changing the architecture a fair amount. I mean, I guess you could imagine some way where, instead of having files, you store the binary data in a single file — but then you're just replacing the file system with a database, right? In some fashion it'd be a lot of work and be pretty questionable; trying to be competitive there seems hard.
A: It's possible the reason that we're doing that — the reason it's written three times — could well be that we're trying to be robust in the face of, you know, a Ctrl-C in the middle of a compilation, and not wanting to have something moved to a place where it's... We should find out why it's written three times, yeah, right.
C: Yeah, I'm just wondering: could we do something simple? Like, what if there was a text file at the root of the incremental directory that basically said, "pretend as if these file system links existed," or something? What if there was a rustc-specific mechanism for doing this kind of thing that said: here's a flat listing of all the files — these ones are in the new folder, these are in the old folder...
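That manifest idea can be sketched in a few lines. The file format and the file/folder names below are invented for illustration — this is just the shape of "a text file that replaces the link table," not a proposal for the actual format:

```rust
use std::collections::HashMap;

// Parse a hypothetical manifest at the root of the incremental directory:
// each line maps a file name to the session folder that actually holds it,
// instead of materializing a hard link per file.
fn parse_manifest(text: &str) -> HashMap<&str, &str> {
    text.lines()
        .filter(|line| !line.trim().is_empty())
        .map(|line| {
            let (file, folder) = line.split_once('=').expect("file = folder lines");
            (file.trim(), folder.trim())
        })
        .collect()
}

fn main() {
    let manifest = "query-cache.bin = s-old-abc123\ndep-graph.bin = s-new-def456\n";
    let map = parse_manifest(manifest);
    // rustc would consult this map instead of the file system's link table.
    assert_eq!(map["dep-graph.bin"], "s-new-def456");
    println!("{map:?}");
}
```

One write of this file replaces N hard-link syscalls, which is plausibly where the savings would come from.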
A: And it's one of those things where my immediate gut reaction is: why would that be faster? But in fact I could readily imagine why that would be faster, potentially, and it would be fine — it should solve the problem fine, I think, because... yeah, it's hard to imagine cases where that wouldn't be fine, right. So, okay, that's an interesting thing to look into.
A: The latter might be — yeah, yeah, okay, that might be potentially really simple. Maybe — I mean, or maybe not; it depends on how deeply we end up using the file system. But you can imagine an abstraction of a virtual file system that we could build on top of, and then switch back and forth between these two bases — the actual files and links versus the text file we're talking about — and that way we can evaluate things.
A: It might be a little bit of architecture to put in. The other question — when I mentioned virtual file systems — there's the point that I know matklad has spent a while on virtual file system stuff for the purposes of IDE support. So it could well be that we'll more broadly want virtual file system stuff in rustc in general.
A: So there's possibly some cross-cutting interest there that we should look into, yeah. Okay — there are potentially interesting things here. Okay, let's look at the next issue. The next one is #55500.
A: That's "incremental compilation is ineffective when building the compiler." I think that's why a lot of us are here right now, so yeah. This is another old bug — it's almost two years old at this point — and it would be interesting to redo the sort of benchmark that Aaron posted. The report says it took 15 minutes after simply running touch on a file; given that nothing changed, they expected that this would be much faster.
A: So I'd be a little careful about just rerunning the benchmark they described and then declaring victory if it takes less than 15 minutes. But there's also a question there about — maybe Aaron didn't realize that they didn't need to do the x.py build, you know? Right — it's a question about what the thing is that we're actually trying to resolve there, because x.py build at that point might have done multiple stages of the rebuild.
A: Also, we've also reorganized — exactly — the build system has gotten better since then, that's right.
A: ...whether that path even exists on main today. So right, right — I see your points, but still, it'd be a good thing to redo. Anyway, the other thing I pointed out at the time — or rather a year or so later — was that we should figure out whether this is actually inherent to the rustc build, or if it's a broader issue with incremental.
A: We don't have deterministic builds — we've been working towards it, but I don't think we have it yet. Or at least, maybe...
C: We do, but it requires really controlled situations or something. So I guess I'm almost inclined to close this, given how old it is. I pretty much only use incremental when building rustc nowadays anyway, and I wouldn't describe it like this — but I also don't build stage two, and I don't think anything has really changed there. Wait...
C: It sounds to me like the complaint here is really just that stage two and the incremental flag don't play together. Maybe it's worth leaving this issue open, but I think the core issue here is that stage two and the incremental flag don't really do what you'd expect: they build a stage one compiler in incremental mode, and that's probably really fast, but then stage two just builds another compiler again — which is probably what 12 of the 15 minutes is doing. So I'm not sure that's even something in our...
A: Yeah — it would probably be a good idea to double-check that touching this file and building a stage one is indeed fast, right. So I think we have two options. I'm going to finish writing my comment first, which is to summarize the hypothesis from before: that we think the issue described here is probably an artifact of the slower stage two build.
A: And you can reopen it if it turns out there's something about that file that means it really is an issue. Okay, next up: "unexpected panic whilst using base64 macro."
A: Oh, sorry — this is #54960, the issue that I just opened up, and it is "unexpected panic whilst using base64 macro from binary_macros crate." And — oh, I've just realized that the way I've been linking these, for some reason it's been doing it in, what do you call it, code mode in the...
A: ...the markdown. Anyway, sorry.
A: Okay — I'm closing this. Closing as — not as fixed, right? I mean...
C: ...it failed to set up an incremental session — well, we don't know why.
A: ...it happened. We do have the backtrace — oh.
A: Yeah — if you encounter it again...
A: ...please reopen, hopefully with info about the pre and post states of your code base.
A: Okay, where are we — time check: we've got seven minutes left. I want to double-check with everyone else. I actually think this has been useful, getting through these, but if people feel like this is not a good way to use the synchronous meeting time, then we can just end the meeting — I don't want to, of course, make everyone sit here while we're doing triage, necessarily. But if nothing else, I like closing bugs.
A: Yeah, all right — so let's keep going, then, with a little bit more of these. The next one I see is #53929: "incremental compilation fails when a generic function uses a private symbol."
A: Well, let's see — something about using config and attributes to cause the symbol to be... wow, this is really in-depth stuff. It's like changing the exported name of various symbols.
A: So there are a couple of questions here that I can ask. One is, you know: how big a problem is this? I imagine it may not be a large group of people that are using this — but for the ones that are using this kind of feature...
A: ...it sounds like this could just readily cause them to say, "I just cannot use incremental compilation at all," which is a bummer, right? And it's in fact the default when you just do cargo build, so that's even more of a bummer — to have this kind of thing where they have to explicitly disable it, and I'm betting that the error they get doesn't tell them that, right?
A: These private symbols are useful for low-level FFI work — e.g., the linker on macOS/iOS will deduplicate selector names only if they have a private symbol name.
A: If it has this attribute, don't use incremental compilation — or, if it would end up breaking for someone and it was the default, maybe issue a warning, or some sort of diagnostic, just an informational thing, like: "hey, you may think you're getting incremental compilation, but you can't, because we don't support this thing yet."
A: It depends — the diagnostic question is separate, right. But the first thing I would wonder is: should we consider that kind of thing here, just in terms of addressing this use case? Or potentially issue the diagnostic — or don't disable incremental compilation, but at least issue a diagnostic saying: you've turned on incremental compilation and you have an export flag; you should be aware that there are problems with certain kinds of export flags, so if you see a linkage issue, try disabling incremental compilation, right?
A: ...without actually hitting anybody. The only people hit would be the ones using export_name where it works with incremental compilation, who then get a diagnostic that's in fact not useful — and we could have ways to disable the lint for those cases, where they could say explicitly, with a lint attribute: "I know about this; this case works."
A: "It's fine, don't issue the diagnostic here," right — yeah, I think that's pretty simple. So the question is: one, we could try to actually find some way to make incremental compilation work with these cases, but I'd be more interested in just producing something to tell somebody about the problem up front, as a short-term thing. You're not...
D: ...sure why — why is this an incremental compilation issue, and not just how we do partitioning, based on my reading of the issue?
C: So yeah — as far as I know, there are only two paths through the partitioning code. There's: did you turn on the one-CGU flag, in which case we just kind of short-circuit, put everything in one CGU, and continue; and then there's the other case, which has the more complex logic for generic and non-generic and merging and all that stuff. So you hit this even if you're not using incremental compilation — unless you're using codegen-units equals one.
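The two paths Wesley describes can be sketched as follows — a toy partitioner, not rustc's real algorithm (which handles generic vs. non-generic items, CGU merging, and incremental hashing); the round-robin distribution stands in for the "complex logic" branch:

```rust
// Toy sketch of the two paths through CGU partitioning.
fn partition(mono_items: Vec<&str>, codegen_units: usize) -> Vec<Vec<&str>> {
    if codegen_units == 1 {
        // Fast path: -C codegen-units=1 puts everything in one CGU.
        return vec![mono_items];
    }
    // Otherwise distribute items across N CGUs. Round-robin here purely
    // for illustration; this branch is where rustc's real complexity
    // (and, per the issue, the private-symbol problem) lives, and it runs
    // whether or not incremental compilation is enabled.
    let mut cgus: Vec<Vec<&str>> = vec![Vec::new(); codegen_units];
    for (i, item) in mono_items.into_iter().enumerate() {
        cgus[i % codegen_units].push(item);
    }
    cgus
}

fn main() {
    let items = vec!["foo", "bar::<u32>", "baz"];
    assert_eq!(partition(items.clone(), 1).len(), 1); // one-CGU fast path
    let many = partition(items, 2);
    assert_eq!(many.len(), 2); // multi-CGU path, no incremental needed
    println!("{many:?}");
}
```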
A: Interesting, okay. But is codegen-units equals N the default? Yeah, I think the...
A: Yeah, okay, that's a really good point. I think the answer here is that there's some further investigation warranted to find out how broad a problem this is — if, indeed, your hypothesis is right, this is just about partitioning in general, and therefore people should be hitting it whenever...
A: I think — okay, so our hour is up, right. I think the right thing here is that we should do nothing right now; I think further investigation is warranted before we do anything. And the first thing I would look into is: what's the current behavior for private symbols in general? This is a very specific test case they gave, but I could imagine other ways to write it that are just not this.
A: Look at private symbols, find out whether there's anything that's connected to just incremental compilation or whether it's something general about partitioning, and then consider the solution space. But given that we haven't seen anybody complaining about this — again, my hypothesis, based on what David said, is that this affects every kind of partitioning — which means, since we haven't seen other complaints...
A: ...I guess maybe everyone using private symbols also then knows: "oh well, then I have to turn on codegen-units equals one," right? They just know enough about the low-level details, plus the way the Rust compiler works, that they don't report it as a bug — they just fix it. Anyway, that's...
A: ...my attitude right now: don't do anything yet. But I suspect that we still might want to do something here, because the user experience sounds very bad when someone tries to do this, hits this problem, and says, "why the heck am I having this problem?" It would be something where I'd be pretty annoyed as a user, personally. Okay — our hour's up. Thank you all for attending; this was really great.
A: I'm really pleased with what we got through today, and I'm hoping to be more effective in the future as a member of this group. Okay — do you all think that the bi-weekly cadence is good? I know that Wesley and I have discussed privately, you know: should we do weekly or bi-weekly?
A: Let's stick with bi-weekly — I think that's great. And this time slot is still good for everybody? Great. Okay, thanks, everyone — bye, thanks.