From YouTube: 2021-05-12 Trait Evaluation Caching (PR #85186)
Description
Niko and Jack Huey walking through how caching of trait evaluation works. The context is explaining PR #85186.
A: So, let's see. There is this assertion failure. I can either walk you through the idea, or we can try to debug this together, or both; I'm fine with either. All right, why don't we go through the high-level idea first and then proceed from there? Maybe debugging will give us a nice way to explore the code in a purposeful fashion.
A: Actually, I'm trying to decide... I guess I'll walk through exactly what's on the branch, and then we'll go from there. I have a different branch where I refactored the code to isolate the stack logic, and I think it's easier to follow, but it might be confusing if I started with that, so I won't.
B: I have that problem sometimes, yeah.

A: I don't know, it turns out every unique substring appears in LLVM somewhere, and often many times.
A: Anyway, okay, so right: trait evaluation is a fairly straightforward process in which we do basically a sort of depth-first search down to try to decide whether a trait might be implemented. It's not entirely clear why it exists in addition to the other mode of evaluating traits, except that Niko didn't know what he was doing when he first implemented this code, and it grew into this structure. But that's all right.

Actually, that's not really true; that's maybe just kind of misleading. The other mode basically keeps a queue of work to do, and if you have to prove that some type T implements some trait, it will figure out which impl you're using, take the where-clauses from that impl, and add them to that queue, right? So that's one distinct step; that's trait selection, or at least I call it trait selection.
A: Selection is picking which impl you're going to use, but it does that without necessarily knowing that the where-clauses are satisfied. The problem is, and this is where the two systems arose, that was the first inspiration for how it should work, and in old Rust, when that was implemented, we didn't really have where-clauses for the most part, so it wasn't that complicated to do. Or we had where-clauses, I guess, but we didn't have coherence.
A: We had this problem that we couldn't pick in the same way, and so we added trait evaluation, which basically tries to figure out: instead of unrolling and pushing things onto the queue, it tries to go down all the way and figure out, are all the where-clauses satisfied, all the way down? Can I just fully prove this out? And it's not always a yes or no; it's sort of a tri-valued result, right? Like, it comes back with...
A: Well, it comes back with an evaluation result, which is either: yes; or probably, but there are some region relationships I couldn't verify because I don't know how to do that; or I couldn't tell one way or the other, or some other variation of "I couldn't tell one way or the other"; or no, it does not hold, there's no such impl that could potentially apply. And "recur" is another thing; that's for cycles and stuff. So right, the code is fairly straightforward, like proving the trait predicate.
A: So we have this stack. I guess that's the other thing: we keep a stack of the contexts in which we had to prove something. When we're working at the fulfillment-context level, that stack is always empty; everything the fulfillment context starts to work on is at the root of the stack. The first thing is finding the impl...
A: ...that applies for a given thing. But when we're doing the evaluation, we can go deeper, where we're processing the where-clause, and then the where-clauses of the where-clause, and so forth. And this is the stack type. Well, actually, this is an iterator; this is the stack type, and it's just a linked list, so it has a reference to the previous element on the stack, and then this is the top of the stack.
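The stack type described here can be sketched as a linked list of borrowed frames, roughly in the spirit of the compiler's trait-obligation stack. The names and fields below are illustrative, not the actual rustc definitions:

```rust
// Illustrative sketch: an evaluation stack kept as a linked list of frames.
// Each frame holds a reference to the previous (older) frame; the newest
// frame is the head, and walking `previous` links reaches the root.
struct StackFrame<'prev> {
    goal: &'static str,                         // what this level is proving
    depth: usize,                               // root is 0; grows as we recurse
    previous: Option<&'prev StackFrame<'prev>>, // link to the older frame
}

impl<'prev> StackFrame<'prev> {
    fn push(&'prev self, goal: &'static str) -> StackFrame<'prev> {
        StackFrame { goal, depth: self.depth + 1, previous: Some(self) }
    }
}

// Collect goals from the top of the stack down to the root.
fn goals_from_top(top: &StackFrame<'_>) -> Vec<&'static str> {
    let mut out = Vec::new();
    let mut cur = Some(top);
    while let Some(frame) = cur {
        out.push(frame.goal);
        cur = frame.previous;
    }
    out
}

fn main() {
    let root = StackFrame { goal: "Foo: Trait", depth: 0, previous: None };
    let bar = root.push("Bar: Trait");
    let top = bar.push("Foo: Trait");
    assert_eq!(goals_from_top(&top), ["Foo: Trait", "Bar: Trait", "Foo: Trait"]);
}
```

Because each frame only borrows its predecessor, the whole stack lives on the call stack of the recursive evaluation, which matches the "push means make a new struct" description below.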
A: Yeah, it's similar to the recursive solver, you know, but different in certain respects that I'm about to go into. But this PR is, among other things, moving the evaluation model closer and closer to how the recursive solver works, to shift gears toward chalk ever so slightly.
A: So right, evaluation starts here at this evaluate_trait_predicate_recursively, which gets called by some other code. It doesn't really start here, but the interesting stuff starts here. The first thing it does is push: here's the old stack, and we're going to push onto the stack, which means making a new struct; if you step through, you'll see it just instantiates one of those instances. We track the depth, and this will be important. We also track this depth-first number, which is familiar to you from chalk, I'm sure; it's just a counter that increments.
A: The difference between them is that when things get popped off the stack, the depth goes down, but the depth-first number never changes; it just keeps going. It's monotonic, which is useful. So then we do some... I'm going to ignore this caching for now, although it's going to turn out to be pretty important, because the whole bug is around caching. Then we look for a cycle, which...
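The depth-versus-depth-first-number distinction can be modeled with a small driver. These are hypothetical names; the sketch only demonstrates the two counters, not the compiler's actual bookkeeping:

```rust
// Sketch: depth vs. depth-first number (DFN) during a recursive walk.
// `depth` mirrors the stack height and shrinks on pop; `next_dfn` is a
// counter that only ever increments, so every evaluation gets a unique,
// monotonically increasing number.
struct Counters {
    depth: usize,
    next_dfn: usize,
    log: Vec<(&'static str, usize, usize)>, // (goal, depth, dfn)
}

impl Counters {
    fn evaluate(&mut self, goal: &'static str, children: &[&'static str]) {
        let dfn = self.next_dfn;
        self.next_dfn += 1; // monotonic: never decremented
        self.log.push((goal, self.depth, dfn));
        self.depth += 1; // push
        for &child in children {
            self.evaluate(child, &[]);
        }
        self.depth -= 1; // pop: depth goes back down, the dfn does not
    }
}

fn main() {
    let mut c = Counters { depth: 0, next_dfn: 0, log: Vec::new() };
    c.evaluate("A", &["B", "C"]);
    // B and C share depth 1 (C reuses the slot B vacated), but their
    // depth-first numbers stay distinct.
    assert_eq!(c.log, vec![("A", 0, 0), ("B", 1, 1), ("C", 1, 2)]);
}
```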
A: ...will also become relevant, but let's ignore it for the moment. And then we do this thing, evaluate_stack, and this is what does the actual work. This is like: okay, I have a stack, and I want to check whether the thing on the top of the stack is true, false, or unknown. I've already looked for cycles, checked the cache, and done all that stuff; just do the actual work. So what does it do? Ignore this stuff...
A: This has to do with coherence, and it basically makes us fail in that case. This is one of the points where we break from chalk and from a more capable theorem prover, because really what we should probably get back here is a vector of candidates. There can be many ways to prove that the trait holds, and it doesn't matter...
A: ...which one we use, as long as we have one. However, we force ourselves to return one, with some horrible heuristics that you're vaguely familiar with and that we will not go into here, because they're not relevant. And so we get back one of these options. Either we get back a selection candidate...
A: That means we found what we think of as the right way to go: we found the impl. Let's take the easy case, where there's exactly one impl that unifies and might apply; information about that impl will be in `c`. If we get back `None`, it means there were ways to go, but we couldn't figure out which one: there were too many impls that might have applied, and we couldn't narrow it down.
A: That could happen if there are inference variables, for example. If we're trying to prove that `?T` implements a trait, that could be any impl, so as long as there are at least two impls we're stuck; or where-clauses, stuff like that. Overflow is overflow. And otherwise, if we get back `Err` here, it means there was no impl that might apply; there was no candidate, we couldn't find one.
A
It's
not
just
you
yeah,
I
don't
know,
we
might
use
question
mark
yeah
this
yeah.
There
are
some
things
that
could
be
improved
for
one
thing
I
noticed
as
we're
walking
through
here
that
also
like
air
is
used
somewhat
differently
right.
Like
for
evaluation,
you
can
have
an
okay
with
a
evaluated
to
error
and
an
error
is
just
used
for
overflow,
but
for
candidate
error
means
no
candidates,
which
is
a
little
confusing
yeah
you're,
not
wrong.
So
nonetheless,
the
happy
path
is.
We
found
a
candidate
we
jump
in
to
evaluate
candidate.
A: What does evaluate_candidate do? The evaluation probe just prevents inference-variable side effects from leaking outside. And what confirm_candidate does is: we figured out which impl applies, and confirm_candidate will unify all the types from the impl with the types of the thing we're trying to prove.
A: Then we get back from this what's called an impl source, which is kind of related to the candidate; it's sort of the information. But the main thing in there that we're interested in is the nested obligations, which are basically the where-clauses. And so then we're going to go recursively evaluate them and say: okay, well, the impl applied, we were able to unify with the impl, everything succeeded; now let's check whether the where-clauses apply. And that's the whole recursive concept.
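As a toy model of that recursive concept: selection finds the matching impl and yields its where-clauses as nested obligations, and evaluation recurses into each one, all the way down. This sketch assumes a single matching impl per goal and ignores cycles, inference, and everything else that makes the real thing hard:

```rust
use std::collections::HashMap;

// Toy model of the selection/evaluation split: "selecting" an impl yields
// its where-clauses (the nested obligations), and evaluation recurses into
// every one of them. Names are illustrative, not rustc's.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Eval { Ok, Err }

struct Impls {
    // goal -> where-clauses of the one impl that matches it
    impls: HashMap<&'static str, Vec<&'static str>>,
}

impl Impls {
    fn evaluate(&self, goal: &str) -> Eval {
        // "Selection": find the impl and pull out its nested obligations.
        let Some(nested) = self.impls.get(goal) else { return Eval::Err };
        // "Evaluation": recursively check every where-clause, all the way down.
        for obligation in nested {
            if self.evaluate(obligation) == Eval::Err {
                return Eval::Err;
            }
        }
        Eval::Ok
    }
}

fn main() {
    let mut impls = HashMap::new();
    impls.insert("Vec<u32>: Clone", vec!["u32: Clone"]); // impl with one where-clause
    impls.insert("u32: Clone", vec![]);                  // leaf impl, no obligations
    let db = Impls { impls };
    assert_eq!(db.evaluate("Vec<u32>: Clone"), Eval::Ok);
    assert_eq!(db.evaluate("String: Copy"), Eval::Err); // no impl: no candidate
}
```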
A: Now, I'm not sure... before, I showed you evaluate_trait_predicate, which is called here, but that's just one case, because there are different kinds of predicates. Most of the others are not that interesting: if, for example, we're forced to prove some region relationship, we can just say, yeah, that seems good.
A: It's good as far as we know. And there's some other stuff here, but it sort of bottoms out; this is the main case. Okay, so jumping back, I want to talk about the caching and the cycles.
A: ...on Mac, but it changes on other platforms. So, check_evaluation_cache: that's not what we're talking about; we're talking about check_evaluation_cycle. So we have the stack, and the main reason we actually have the stack is so that we can... sorry, just having a little thought. Oh right, I remember now: so that we can detect cycles. And what do we do?
A: We iterate; it's very simple. We basically iterate up the stack looking for an entry where we have the same thing to prove: the environment is the same, and the thing we're trying to prove is the same. The "fresh trait ref" is an old version of canonicalization, where inference variables get replaced with an integer ordered by their appearance.
A: It's basically the same as canonicalization; we just have two ways to do it that are similar but different, because this one was done first. That's great! Yes, isn't it? So right, we iterate over the stack and look to see if any of the entries match. If there is one, we extract its depth and we say: okay, we found a cycle. And this is where the assertion is failing, as an aside, or one of the places it might be failing. And then we call update_reached_depth. So what is that?
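The cycle check itself amounts to a scan up the stack for an in-progress entry with the same canonicalized goal. In this sketch, plain strings stand in for the fresh trait refs being compared:

```rust
// Sketch of the cycle check: walk the stack of in-progress goals looking for
// an entry proving the same (canonicalized) goal. If one exists, its index
// is the depth at which the cycle starts.
fn find_cycle_depth(stack: &[&str], goal: &str) -> Option<usize> {
    // Scan from oldest (index 0) to newest; the index is the depth of the match.
    stack.iter().position(|in_progress| *in_progress == goal)
}

fn main() {
    let stack = vec!["Foo<T>: Send", "Bar<T>: Send"];
    // Pushing "Foo<T>: Send" again would close a cycle back to depth 0.
    assert_eq!(find_cycle_depth(&stack, "Foo<T>: Send"), Some(0));
    assert_eq!(find_cycle_depth(&stack, "Baz<T>: Send"), None);
}
```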
A: Up and down are always kind of problematic here, but what it tracks is: what's the oldest thing on the stack that you depended on, right? If there's no cycle, then it's just going to be yourself, basically: "in order to prove that T implements Foo, I had to prove that T implements Foo." That seems fine. But you can end up with a cycle, usually with auto traits; here's an example right here, like you're proving `Foo<T>: Send`.
A: So if this was zero, and that was one, and that's... well, that's two, but it never gets pushed... then this would have a reached depth of zero. And that's useful, because that tells me, especially for the co-inductive cases, that `Bar<T>: Send` is true so long as the things that were higher on the stack, up to the reached depth, are also true. It's kind of conditionally true, or provisionally true, right? So that's what the cycle check basically does: it tracks how high you went.
A: It checks whether everything in that cycle is co-inductive, in which case it's okay, but we update the depth. Otherwise it returns this "evaluated to recur". Either way we're going to stop: either we reached the cycle, which will be treated as a kind of error, or we're considering it okay, but we had to mark things on the stack to indicate that it was only a provisional okay. And so that's where we come to this caching business.
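A minimal sketch of what update_reached_depth is doing, assuming the stack frames and depths described above. The names are illustrative, not the compiler's types:

```rust
use std::cell::Cell;

// Sketch of `update_reached_depth`: every frame starts with its own depth as
// its `reached_depth`, and when a cycle back to depth `d` is found, the
// frames between the top of the stack and `d` lower their reached_depth to
// `d`. A frame whose reached_depth still equals its own depth is
// "self-complete"; anything lower means its result is only provisional.
struct Frame {
    depth: usize,
    reached_depth: Cell<usize>,
}

fn update_reached_depth(stack: &[Frame], cycle_depth: usize) {
    for frame in &stack[cycle_depth..] {
        let new = frame.reached_depth.get().min(cycle_depth);
        frame.reached_depth.set(new);
    }
}

fn main() {
    let stack: Vec<Frame> = (0..3)
        .map(|d| Frame { depth: d, reached_depth: Cell::new(d) })
        .collect();
    // A cycle from the top back to depth 0 (e.g. foo -> bar -> foo):
    update_reached_depth(&stack, 0);
    assert_eq!(stack[1].reached_depth.get(), 0); // bar is now provisional
    assert!(stack[1].reached_depth.get() < stack[1].depth); // not self-complete
}
```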
A: So then, zooming out now: if we were doing this foo/bar/foo case, when we're processing foo the first time, there's no cycle; we go into evaluate_stack. We come in here for bar: no cycle; we go into evaluate_stack. Now we come back to foo: we hit a cycle. We're going to mark bar as having reached an earlier depth and return, and then we're going to come back, and now we're still at the point where we're proving bar.
A: So evaluate_stack will have returned okay, because that cycle was checked and returned okay, so we now think bar is proven. Ignore this for a second; we're going to look here at what depth you reached, and the question is: if you never depended on anything above you in the stack, or rather older than you (I should really stick to "older"): anything older than you on the stack...
A: ...then you're kind of self-complete, right? So you're done. But in our case, where we're processing bar, bar depended on foo, which was older in the stack, and therefore we can't consider it to be completely evaluated. So we use this thing called a provisional cache, and the idea is that, essentially, attached to the stack itself, we have a cache. We have several caches at different layers, depending on how "true" a thing is, but for this particular bug we can just think about two caches. There's the global cache for stuff...
A: So it's true... it's true unless that thing turns out to be false, and we'll track that. We pass in what was our depth-first number, what was the depth that we're dependent on, and what was the result. And now, when you come back: if, for example, `Foo<T>` had to prove `Bar<T>`, then `Bar<T>` would be added as a provisional cache entry because of the cycle. But now suppose that foo then has to prove baz, and baz has to prove bar as well.
A: Okay, now we have another cycle, because if we went and recursively processed bar again, ignoring the caching, we would get a cycle just like before, where bar would depend on foo, and it would mark that bar depends on foo. Let me put in some comments, because there's a visualization in my brain, but it may not be in yours.
A: What I'm imagining... so we basically had... let's replay it. We started out with foo; we pushed bar onto the stack; we pushed foo. We got a cycle, right? Then we marked bar as reached-depth-equals-zero while processing that cycle, and we marked this as okay. Then here we would consider this okay, because all of its children are okay. But then suppose that foo also depends on baz...
A: ...and baz, let's say, depends on bar. Okay, so then what we're going to do... I'm just going to ignore caching for a second; pretend we did everything with no caches at all. Then we would go to bar, and we would say: okay, well, bar depends on foo; just like before, we'll push foo; here's a cycle.
A: Well, my reached depth is zero, because I have a child that reached it. So the way this works in chalk, and the way I would like to think of it, is that your result is not just okay or error; your result comes with the depth that you reached, or something like that. That's not the way we've implemented it here, because it was stapled onto an older system that didn't understand this stuff.
A: What happens instead is that at each point we initialize the reached depth, so it starts out as the depth of the current node, right? And then, when we find the cycle, before we've actually returned, we're going to go back up here and update these to be the minimum of their previous value and the cycle we just encountered.
A: Okay, and then we're going to return, and then here we're going to return, and here, let's say there are no more dependencies, so we return. And now we get up to here, and we can return: our reached depth here started out as zero, and it never changed, because nobody was able to reach past the top of the stack... or the bottom of the stack, rather; the root of the stack. And that's fine; that's how this would work with no caching.
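The no-caching replay can be expressed as a tiny recursive evaluator: cycles are detected by scanning the stack, and each goal reports the oldest depth its subtree reached. Hypothetical names; note that bar gets fully re-evaluated under baz, which is exactly the repeated effort at issue:

```rust
use std::collections::HashMap;

// Sketch replaying foo -> bar -> foo with no caching: a recursive evaluator
// that detects cycles via the stack and returns, for each goal, the oldest
// (minimum) stack depth its evaluation reached.
struct Evaluator {
    deps: HashMap<&'static str, Vec<&'static str>>,
    stack: Vec<&'static str>,
}

impl Evaluator {
    fn evaluate(&mut self, goal: &'static str) -> usize {
        if let Some(depth) = self.stack.iter().position(|g| *g == goal) {
            return depth; // cycle: report how far up the stack we reached
        }
        let my_depth = self.stack.len();
        self.stack.push(goal);
        let mut reached = my_depth; // starts at our own depth
        for child in self.deps.get(goal).cloned().unwrap_or_default() {
            reached = reached.min(self.evaluate(child));
        }
        self.stack.pop();
        reached
    }
}

fn main() {
    let mut e = Evaluator {
        deps: HashMap::from([
            ("foo", vec!["bar", "baz"]),
            ("bar", vec!["foo"]),
            ("baz", vec!["bar"]), // bar is re-evaluated here: repeated work
        ]),
        stack: Vec::new(),
    };
    // The root's reached depth is its own depth (0): nothing escapes the root.
    assert_eq!(e.evaluate("foo"), 0);
}
```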
A: So what do we do instead? The problem is, we repeated some effort here, and that's one problem. Another problem is that we would like to cache all these results, so that if anybody in another function should try to prove the same thing, they don't have to do the work again either.
A: We would like to do both of those things: save ourselves some work while we're doing it, and save future callers some work too. And those are a little bit different. So, the way that we do this... we'll first just talk about saving ourselves work.
A: When we see a cycle... or rather, let's first talk about saving other people work, I guess. So there's a global cache of things that we've proven true, right? And normally, when we get an okay result, we like to put things in that cache as soon as we can, because then everybody else can make use of them.
A: The problem is, we can't put these internal nodes in there: we can't put this foo in there, for example. We did find that it's okay, but that was part of this big cycle, and it's only okay if all the other parts of the cycle turn out to be okay. The same goes for bar, as it happens: because its reached depth was older than itself, it's only provisionally...
A: Well, as it turns out, it's the last thing we have to prove, so it would be okay, but we hadn't figured that out yet either. So what we do is, we have this provisional cache. And if we kind of replay to the point where we proved bar to be true... not for cycles; we don't do it there, although you could, I guess. Anyway, it doesn't matter. Instead of searching the stack...
A: ...if you wanted, you could seed the provisional cache; now that I think about it, that would probably be more efficient. But in any case, what we do instead is: when we see bar, and we see that it's okay, but we see that the depth is older than ourselves, that's this path.
A: We insert this entry into the provisional cache, so we'll say: okay, bar is okay with a reached depth of zero. And let me just walk into the provisional cache for a second; what does that look like? The cache is very simple. It's a... well, that's interesting. Well, anyway, here's a bit of a break from chalk also: the key is the thing we're trying to prove. We don't store the environment, because we assume the entire stack has a single environment. That's stupid.
A: We should fix that; small thing. And the value is this provisional evaluation: what was the result; what was the depth-first number from which I tried to prove it, which is useful later; and what depth did it wind up depending on, that is, what was the oldest thing it depended on. Okay, so we're going to insert that into the cache, and now, when baz comes along and wants to prove bar, it's going to find it in the cache here.
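A sketch of the provisional cache shape just described: keyed by the goal, with each entry carrying the result, the depth-first number, and the reached depth. Field names are loosely modeled on the discussion, not copied from rustc:

```rust
use std::collections::HashMap;

// Sketch of a provisional cache entry: the result, the DFN at which we
// started proving it, and the oldest stack depth the proof depended on.
#[derive(Clone, Copy, PartialEq, Debug)]
struct ProvisionalEvaluation {
    from_dfn: usize,      // depth-first number when we started proving it
    reached_depth: usize, // oldest stack depth the proof depended on
    result_ok: bool,      // simplified stand-in for the evaluation result
}

#[derive(Default)]
struct ProvisionalCache {
    map: HashMap<&'static str, ProvisionalEvaluation>,
}

impl ProvisionalCache {
    fn insert_provisional(&mut self, goal: &'static str, eval: ProvisionalEvaluation) {
        self.map.insert(goal, eval);
    }

    // A hit must behave as if the caller itself reached `reached_depth`,
    // since it is short-circuiting a search that would have done so.
    fn get_provisional(&self, goal: &str) -> Option<ProvisionalEvaluation> {
        self.map.get(goal).copied()
    }
}

fn main() {
    let mut cache = ProvisionalCache::default();
    cache.insert_provisional(
        "Bar<T>: Trait",
        ProvisionalEvaluation { from_dfn: 1, reached_depth: 0, result_ok: true },
    );
    let hit = cache.get_provisional("Bar<T>: Trait").unwrap();
    assert_eq!(hit.reached_depth, 0); // the caller must propagate this upward
}
```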
A: get_provisional is going to... oops... here: it just does a hash-map lookup. It returns a provisional evaluation, and the main thing is: we can return the result, but we have to act as if we reached the depth of that cache value, because effectively we're replaying this search; we're just short-circuiting it to the result. So if we redo this, what would happen with the cache? Just to play it through, and not require you to hold state in your mind...
A: We would go to prove baz; we would say, okay, this has a reached depth of 1 to start. We know that it depends on bar, so we would come to bar, and we would say: cache hit, it's okay, and it has a reached depth of zero. That would cause us to go back up and tweak this to zero. We would then return okay; this would return okay, and then this would get cached too, because it is also an okay result that depends on something older in the stack.
A: As for foo's result: the reached depth of foo is not older than itself, so we've actually found the start of the cycle. It happens to be the root node, but that's not important. And so that means that not only is foo okay, but we know that these provisional cache results are also true. And so now, that's this case.
A
Okay,
the
reached
depth
is
no
older
than
our
depth.
That's
what
that's
saying
it's
greater
than
or
equal
to
so
it's
younger
than
or
equal
to
it.
A: Yeah, that's right. I think, if I were to write this again, I would write "if stack depth is less than or equal to", but yeah, I find greater-than-or-equal-to... my mind wants small things on the left. But anyway, that's how it was written, I don't know, by me, I'm sure, but it made sense at the time. So what does it do? Right: insert_evaluation_cache.
A
That's
the
global
cache,
unlike
before
we
called
insert
provisional
on
this
cache
associated
with
the
stack
we're
now
calling
insert
evaluation
cache
on
the
whole
inference
context.
Well,
the
selection
context,
but
and
what
that's
going
to
do
ignore
this
for
a
second
that's
going
to
go
and
inside
the
inference
context,
it's
going
to
insert
keyed
by
the
parameter
environment
and
the
trait
this
time,
which
is
correct,
the
depth,
node
and
the
result,
and
then
we
actually
have
this
global
cache
too.
A: What's the difference? The evaluation cache is tied to things... basically, if it looks like something that is just true for everyone, like `u32: Send` or something, we'll use the big global cache, and if it has inference variables or generic parameters or something, we'll use the inference-context cache. They sort of both need to exist, but not entirely; I like chalk's version of this better, using canonicalization and stuff. And then this is the other part: on_completion. What does that do?
A: This is what I actually changed in this PR. Mostly what this does is go through everything in the provisional cache that is covered by this cycle and put it in the global cache, because now, like I said, for bar and baz: we not only know that foo is true, we know that the things that were dependent on foo are also true. And here's what changed. In order to show what changed, I have to make a slightly more annoying example, and maybe that will explain how this works better.
A: So this example had only one cycle, but the actual example from the bug is more like "A depends on B depends on C". Well, we don't need to make it that complicated. A depends on B, which depends on A, so now we have a cycle; and it depends on C, which depends on D, which depends on C, so now we have another cycle. And that's the key point: this cycle is...
A: ...sort of contained within this greater cycle, right? And I think in the actual example from the code, A also depends on an E that depends on C. That doesn't really matter, but it could happen. And so, what we used to do in the old code was kind of consider this all one big cycle.
A: We would wind up with, for the cache itself, a single reached depth, which was the minimum of the reached depths of all of the provisional entries in the cache, rather than having a per-entry value. And so we said: everything in this cache becomes true at that depth. And that sounds like an approximation, a less good version of what we do now, but it actually led to incorrect results, because there's one little detail I didn't show you, which is this `.max`: we're not actually storing the result that was cached, this okay result.
A: What we're storing, when we go to the global cache, is... sorry, just having a thought: is there a subtle, weird bug here? I think it's okay; I'll have to think about it, though. What we're actually storing is the...
A: ...the max of that and the result from the main cache. And there's this other little weird thing about failure; let's ignore failure for a second and assume everything is true. The real problem here is that there are two versions of true: there's okay, and there's okay-modulo-regions. So things can be less true but still true, where we say: well, we had some region stuff to prove, and we didn't evaluate that. And so what was happening in the bug is, let's say, to prove E... this is okay modulo regions.
A: B isn't really okay: B is only as true as A is true. So even though we got an okay result when we just looked at B in isolation, that's because we were assuming A was okay. In fact, if we re-ran this result to a fixed point, we would have started with A being okay-modulo-regions.
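The two grades of "true" and the `.max` combining step can be sketched like this. Deriving `Ord` with the fully-proven variant first makes `max` pick the weaker, more caveated result; the variant names follow the discussion, not the compiler's actual evaluation-result type:

```rust
// Sketch of the two grades of "true": a result is downgraded to the weaker
// of itself and the cycle head it depended on, via `.max`.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum EvaluationResult {
    Ok,              // fully proven
    OkModuloRegions, // proven, except for unchecked region relationships
}

fn combine(entry: EvaluationResult, cycle_head: EvaluationResult) -> EvaluationResult {
    // The entry is only as true as the cycle head it depended on.
    entry.max(cycle_head)
}

fn main() {
    use EvaluationResult::*;
    // B depended on A, and A was only OkModuloRegions: B is downgraded.
    assert_eq!(combine(Ok, OkModuloRegions), OkModuloRegions);
    // But a result in a separate strongly connected component (like C below),
    // which never depended on A, should have stayed plain Ok; maxing it with
    // A's result anyway was the incorrect behavior being fixed.
    assert_eq!(combine(Ok, Ok), Ok);
}
```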
A: We're just doing the `.max` here because we know some stuff about our logic system: we know that it doesn't do anything besides propagate truth upward; it doesn't have negation, in particular. So yeah, that's what's actually happening, and that's all fine, except for this part, because we would do that same thing to the result of C, which was also okay... whoa, what just happened?
A
Okay,
somehow
I
hit
something,
and
so
that
means
we
would
c
and
d
were
also
okay.
But
if
you
look
at
those,
even
though
they're
sort
of
amongst
it's
a
sub,
even
though
it's
like
in
this,
it's
a
different
strongly
connected
component,
because
from
c
you
can't
reach
a
so
so
actually
c
isn't
dependent
on
a
and
the
fact
that
a
is
okay
modular
doesn't
affect
our
c
result.
A: It should just be okay. But because we didn't track that for each entry in the cache, we couldn't recognize that, and so we wound up caching okay-modulo-regions. And that was okay, because for the most part that distinction doesn't really matter, until, in the incremental version or for whatever reason, when we restarted and ran, we happened to compute C without starting from A, just starting from C. In that case we would get the proper result, and so we would get an assertion...
A: At that point we haven't looked at E yet; keep that in mind. So we're going to clear out everything with a higher depth-first number than C in the provisional cache, but not things with lower depth-first numbers. So when we get to C... let's say the contents of the provisional cache when we finish with C: we're going to have "B is okay with a reached depth of zero", and we're going to have "D is okay with a reached depth of zero", because in the course of proving C we had to prove D.
A: This is going to have "from dfn 2"... or 3, rather, because this is dfn 0, 1, 2, 3. And so we're going to look for everything with a higher dfn than C's dfn, which is 2. And oh, this reached depth is not zero; it's actually one. And we're going to move it to the global cache, and we're going to max it with the result of C. So we're going to move...
A: The dfn has the advantage of not being reused, so when you're grepping through logs and stuff you don't get weird results; but otherwise I think you could use either one here. Anyway, we use different ones, and it's confusing, but that's what we do. So yeah, one at this point is C, for the depth, right? So yeah. So that's the fix; everything else just works. I don't think I really changed anything else in this branch. The only problem is, we're getting this assertion failure.
A: I don't understand... let me just check what's on my schedule. Oh right, okay, that's all right: I have a phone call to make; other than that, we're okay. It's one of those things where any business that doesn't open before I start working is very hard for me to deal with: my brain gets into Rust, and it doesn't let go until like 7 p.m., and then most businesses are closed. Yeah.
A: So anyway, where were we? Right, the bug. So I don't know what the problem is; let's take a look. It's an assertion failure. The problem is in update_reached_depth, and maybe the assertion is just wrong for some reason. I don't think so. We're asserting that...
A: I think it does hold, right? Because if there's a cycle, then either we're the start of the cycle, so it should be equal to us, or it's older than us; or, if it's newer than us, then it doesn't affect us, so why do we care? It's not actually harmful; it may be that it's just getting called and it's not harmful, but it's weird, and I'm wondering... yeah. So, oh, there's one other thing before we go on to this bug: I just want to mention one thing that the logic does do today.
A: This code should only ever execute in the case where the result we're maxing with is either okay or okay-modulo-regions, because otherwise the cache should have been cleared. I don't actually think that's necessary; I think it would be okay, if the result was an error, to cache the error, because it's not true, and when...
A: So it makes me mildly uncomfortable, but that code in general just makes me mildly uncomfortable. I think that's a good example of why that code is bad. If we had negation, all this logic gets a little more complicated, which is why... well, we don't have negation in chalk either anymore, but the old SLG solver had it, maybe before you started hacking on it; I can't remember.
A: This is why it used to have two things: positive reached depth, effectively, and negative reached depth, because you have to do somewhat different things depending on whether you're saying "B is true if A is true" or "B is true if A is false"; those are different. But we don't have that, because we don't have negation anyway. So let's look at this.
A: Okay, that's weird! Oh, there are some anonymous terminals here or something; very weird.
A: The nice trick that I was using with some success, but that is a little risky, is piping it through ripgrep with `-C 1000` or something, so you find some string that you care about, and then you get a thousand lines before and after that string each time it appears.
A: That would be my guess: it's probably a provisional cache hit. And so what I'm wondering is, could it happen...
A: I think I know what could happen. Maybe this is an argument for switching to dfns, which isn't that much work. I think what could be happening is... let's try to sketch it out. Why are you not... why are you doing that?
A: No, I guess all that stuff was... that's fine, it doesn't matter. There we go.
A: Pretty big... okay, it's not worth it; we'll figure something else out. What we'll do is this.
A: But also I should close this; maybe something's gone... let me... I think I just have to restart VS Code.
A: We'll worry about you later. So here's what I think is happening, even though I can't type to show you.
A: ...if that makes sense, yeah.
A: And that's probably why I made one... that's probably why I had... yeah, that's why I had one depth in the beginning: exactly this transitive thing; I didn't want to deal with it. I remember that now. It's good; we should probably add a test for this, yeah. That is not like... yeah, I know; good luck.
A: I don't know, I mean, it's something like...
A: I actually don't particularly love the structure of this cache, because it's a hash map, which is annoying, because so much of what we want to do is dependent on the depth-first numbers, really. But what we could very easily do is: when we decide we're done with B, and we see that we're going to insert our thing as provisional, we can also sweep everything in the cache with a higher depth-first number and update its reached depth to be the minimum of its value and our reached depth, if that makes sense. You know? Yeah. I'm trying to think whether that's early enough, or whether we can have a problem where you haven't yet left B, and it's already annoying that it's out of date.
A: ...is always something that is still on the stack. In order to have a problem, it would have to be the depth of some node that was popped, right? That's the problem we're having: we popped B from the stack; we didn't complete the whole... the assertion fails when we then go to process D, and we find a provisional cache result for C with depth one. This problem...
A: This actually won't trigger in this version; to make it trigger, the thing has to be a little more convoluted. You need another layer or something to make it depth two, which was annoying. But that's where we have the problem, and the problem is that it's referring to B, but B was popped. But we maintain the invariant if what we do is: when we pop B, and we see that B isn't ready yet, we go and update everybody's depth to B's depth.
A: It must be something lower than B, or else B would have been ready. And so we kind of maintain the invariant that the depth is always referring to some parent on the stack, which might be ourselves, and which will never be D, because if there's some common ancestor, and D is a child of that, then at the point before D was pushed, everything on the stack referred to A or above. Does that make sense? It does; it's true, trust me.
A: ...garbage, in a way, to prevent other things from freaking out; C has to depend on B.
A: I'm trying to think of a way to force it to do an evaluation. It will do it if there are two options that might apply and it needs to narrow them down, so we could do...
A
Oh
you're,
right
that
does
not
work
send,
is
what
I.
A
Stop
being
smart,
all
right,
we'll
try
to
just
fix
the
bug.
What
we
really
need
is
yeah,
so
insert
provisional.
A: Well, that doesn't feel right to me, because it actually does depend on it: like, if C depends on A, to go back to our example... I thought about that, but I think that's wrong, because C really does depend on A here, right? And if we just ignored the depth, we wouldn't realize that there's actually a dependency between processing C and the thing at depth zero, and so we might incorrectly assume that some parent node is complete.
A: Now, my concern is: we walk up the stack, we say, well, that's inside of us, so we don't have to update our reached depth of one, okay. But the problem is, now we might actually cache this result in the global cache when in fact it does depend on A, because C depends on B, which depends on A. So it's actually wrong.
A: I think the right result... I'm trying to decide the most... I think there are a number of points at which you could do this, but the bottom...
A: If we used dfns and stuff, we could poke back up and use union-find; that would be maximally efficient. We would just never forget that. As the end result, we'd have: the dfn for C would be... okay, well, C depended on B, and then we would unify B with A, and then we would say we found C; let's refresh to whatever the transient thing is. So that makes sense. Yeah, wait...
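The union-find idea floated here could look something like this: unify the DFNs within a cycle so that a recorded dependency on a popped node resolves to the still-live cycle head. This is purely a hypothetical sketch of the design being discussed, not anything implemented in the PR:

```rust
// Hypothetical sketch: union-find over DFNs, so that asking "what does C
// depend on?" always answers with the current representative (the cycle
// head), even after intermediate nodes like B have been popped.
struct UnionFind {
    parent: Vec<usize>,
}

impl UnionFind {
    fn new(n: usize) -> Self {
        UnionFind { parent: (0..n).collect() }
    }

    fn find(&mut self, x: usize) -> usize {
        let p = self.parent[x];
        if p == x {
            return x;
        }
        let root = self.find(p);
        self.parent[x] = root; // path compression
        root
    }

    fn union(&mut self, child: usize, head: usize) {
        let (c, h) = (self.find(child), self.find(head));
        self.parent[c] = h;
    }
}

fn main() {
    // DFNs: A = 0, B = 1, C = 2.
    let mut uf = UnionFind::new(3);
    uf.union(1, 0); // the B -> A cycle merges B into A
    // C recorded a dependency on B; resolving it now lands on A, so C is
    // correctly seen as depending on the still-open cycle head.
    assert_eq!(uf.find(1), 0);
    assert_eq!(uf.find(2), 2); // C itself is untouched until it joins a cycle
}
```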
A: It's not hard; it's just annoying. Basically, I think what you would do is, when you insert a provisional entry...
A: It will work. And I do have to drop off now, because I have another call, but I think we've at least covered everything. I wish I had a test for this; it's so annoying. We should add some rustc attribute or something that lets us...
A
No
that
lets
us
do
like
what
we
can
do
in
shock.
Yeah
rusty,
evaluate
predicate
a
my
trait
or
something
you
know.
The
annoying
thing
is,
I
also
couldn't.
I
could
not
convince
that
standalone
test
to
reproduce
for
me
in
in
any
way
partially,
because
I
don't
have
this
like.
If
I
had
this
capability,
I
think
we
could
do
it,
but
I
couldn't
like
trigger
the
evaluates
in
the
right
order
without
turning
off
debug
assertions
and
turning
on
other
things,
yeah.