From YouTube: rustc-chalk integration overview
Description
An overview of the plans for rustc-chalk integration and some brief coverage of how it works in the code itself.
A: So, yeah, I think — okay, now think about what my boxes are going to mean. First of all, there's kind of this interface here — hold on a minute, this thing is not going where I want it. Okay: over here we have chalk, the chalk engine, and over here we have what I'm going to call the trait engine; it's kind of a bad name, maybe "fulfillment context" is better. The idea is that we're doing something on the left-hand side, in rustc, basically — like, for example, doing a type check — and when we're doing that type check we have this data structure, the fulfillment context, that we used to have, and it sort of has two purposes now.
So if we're in chalk mode, what happens is: as we go through the type check, we might find some goal we need to prove — like that some type implements, you know, Debug or whatever — and what we'll do is add it to that list. So there's a list of goals here, and they're just goals that we have yet to prove, basically, and then at any given time we can go through the list and try to prove them. Meanwhile, on the other side, we have the chalk engine, and it's kind of resident there.

Really, it's kind of lazy: each time you ask it for the next answer, it will run off and do as much computation as it needs to figure out the next answer. And the idea is that if you submit a goal twice, what actually happens is that inside the chalk engine we maintain this thing called a table.
For the most part — though it might have some region constraints and so on, too. So if you asked, say, "does Vec<?T> implement Debug?", we would give you back an answer like "yes, if ?T is u32" — that might be the first answer you get back — and then you might ask, well, what's the next one, and you might get "yes, if ?T is i32". Of course there's an infinite stream of answers in that case; hopefully you won't keep polling, and we'll come to that in a second, that's relevant here.

The way that it works, somewhat internally, is that we would canonicalize that goal, so it might be something like "Vec<?0>: Debug" — this might be the table we made for that particular goal — and then, when we find our first answer, we would add it into the table, like "?0 is u32", and then "?0 is i32". As we go through, we always know how many answers there are.
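To make that table idea concrete, here is a minimal toy sketch (illustrative types only, not chalk-engine's real API): a table pairs a canonical goal like `Vec<?0>: Debug` with the answers discovered so far, and callers ask for answers by index.

```rust
// Toy sketch of a chalk-style table; not chalk-engine's actual types.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    U32,
    I32,
    Vec(Box<Ty>),
    Var(usize), // a canonical variable like ?0
}

// An answer is a substitution for the canonical variables of the goal
// (plus, in the real engine, possible region constraints).
#[derive(Debug, Clone)]
struct Answer {
    subst: Vec<Ty>, // subst[i] is the value assigned to ?i
}

struct Table {
    canonical_goal: String, // e.g. "Vec<?0>: Debug"
    answers: Vec<Answer>,   // answers found so far, in order
}

impl Table {
    // Return the nth answer if it has already been computed; in the real
    // engine a miss would trigger more solving work instead of `None`.
    fn answer(&self, n: usize) -> Option<&Answer> {
        self.answers.get(n)
    }
}

fn main() {
    let table = Table {
        canonical_goal: "Vec<?0>: Debug".to_string(),
        answers: vec![
            Answer { subst: vec![Ty::U32] }, // ?0 := u32
            Answer { subst: vec![Ty::I32] }, // ?0 := i32
        ],
    };
    println!("{}: {:?}", table.canonical_goal, table.answer(1));
}
```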
So basically each of these iterators has, internally, an index into the table and an index into the answer, and as long as the table is already populated it's very cheap — but now we're getting a little further into how it works internally. My main point is that when you ask it for the next answer, this table will go off and do work if it needs to. It might say: oh, I have a rule that says, for example, that — well, the rule for vectors is that one possible value for ?0 would be for it to itself be a vector. So in that case we might instantiate it as Vec<?1> and ask whether Vec<?1> implements Debug, and when we canonicalize that, that's the same table. So then we'd have a recursive reference.
Okay, right — so back on the rustc side. Actually, there's a little bit of a shim that goes in between the iterator and rustc itself — this is sort of the chalk shim; I forget where it's implemented, I think it's implemented in the engine itself — so rustc doesn't directly interact with that iterator.

The shim might poll and get "yes, if it's u32", and then it might poll and get "yes, if it's i32", and then we would try to combine those answers into one substitution. Oh well, now we've got "?0 equals ?0" — there's basically no common ground between those two answers, there's no reason to poll for any more, and we're done.
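A hedged sketch of that aggregation step: given two concrete answers for `?0`, compute their common generalization; when there is no common structure we end up with just `?0 = ?0` (we learned nothing) and there is no point polling further. The `anti_unify` helper is my own illustration, not chalk's code.

```rust
// Toy anti-unification: find the most specific type that generalizes both answers.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    U32,
    I32,
    Vec(Box<Ty>),
    Var, // "unknown", i.e. we learned nothing about this position
}

fn anti_unify(a: &Ty, b: &Ty) -> Ty {
    match (a, b) {
        // Identical types: keep them.
        (x, y) if x == y => x.clone(),
        // Same constructor: recurse into the argument.
        (Ty::Vec(x), Ty::Vec(y)) => Ty::Vec(Box::new(anti_unify(x, y))),
        // No common ground: the generalization is just an unknown.
        _ => Ty::Var,
    }
}

fn main() {
    // Answers "?0 := u32" and "?0 := i32" share nothing: the result is ?0 := ?0.
    assert_eq!(anti_unify(&Ty::U32, &Ty::I32), Ty::Var);
    // Answers "Vec<u32>" and "Vec<i32>" still tell us "?0 := Vec<_>".
    assert_eq!(
        anti_unify(
            &Ty::Vec(Box::new(Ty::U32)),
            &Ty::Vec(Box::new(Ty::I32))
        ),
        Ty::Vec(Box::new(Ty::Var))
    );
}
```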
The engine is optimized around delivering what I've been calling a breadth-first search — it's not literally that.
The main point is that it's not depth-first: it tries to give you a variety of answers, if they exist, in order to help you reach this consolidation point. You could imagine that a traditional Prolog engine might keep giving you ever bigger answers — like vectors of vectors of vectors — before it ever went off to tell you, oh, and by the way, it might not be a Vec at all.

That doesn't mean we can't take a long time to find an answer, but that's one of the main things I think we'll want to play with: this search strategy, to try to make sure we're not getting lost, as much as possible. Usually we do a pretty good job. So, right — back to rustc; we'll dig more into the chalk engine later, but yeah.
B: Sometimes we just want confirmation that the implementation exists, and then other times we're maybe asking both that question, but also there are parts of the type that are unknown, and then we're trying to see, basically, whether — given that this type has to implement this trait — the trait system determines that this unknown type must be i32 or something, right?

A: That is right.

B: Right, okay. So the piece that was a little hazy in the description you just gave is this shim that's actually dealing with the granular answers coming out of chalk. I just want to check my understanding: basically, what that shim is doing is trying to determine how much information it can get out of chalk.
A: Right — about the inference variables, and so it will equate them. For example, let's suppose that the values you get back are not u32 and i32 but rather Vec<u32> and, in fact, Vec<i32>. At that point it would probably stop, but it would tell you it's a Vec of something — right? Well, no, that's not quite true, but it can tell you that, and the way it would do that is that it has a way of pushing back into chalk, saying: do you have any answers that would invalidate "Vec of something"? You know what possible strands you might pull on that would give you more answers, and those strands could give you an infinite list of answers, but we can still sometimes confirm that they're all going to be "Vec of something".
It's a bit like what we do today, where we might commit to an impl but not have fully processed the where clauses, and those might potentially go on forever; but because we know there's only one impl, and it's for "Vec of something", we know something already and we can make progress. Right, that makes sense. So, yeah — that's how we handle that case, anyway. And if all the types were known to begin with, then the aggregator's job is pretty easy, because there's only one possible answer, which is the empty substitution.
So it's really a binary question then — do I get an answer or not? Yep. Okay, so I'll add one other — well, I'm noting concerns — one other concern I have with the current design, which I think we can overcome, but it'll take a little tweaking if it becomes a problem. I think, because of things like Sized bounds, it could be very common for us to get back an answer of "I don't know". Yeah, I think that would be by far the most common answer. Knowing only that it's a "Vec of something", we can't tell whether that something is Sized, so we can't say yes. But what we could give back — and I think we will, unless I'm misremembering — is: well, I don't know, but whatever it is, ?0 has to be a Vec of something. So that means the type checker can still make progress and infer that it's a Vec of something, but that goal will stay in our list. Yeah.
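A hedged sketch of that "ambiguous but still informative" outcome: the solver can report that it does not yet know whether the goal holds while still telling the type checker that `?0` must be `Vec<?1>` for some fresh `?1`, and the goal stays pending. The type and field names here are illustrative, not rustc's.

```rust
// Toy sketch of an "ambiguous" query result that still carries information.
#[derive(Debug, Clone)]
enum Ty {
    Vec(Box<Ty>),
    Infer(u32), // a fresh inference variable like ?1
}

#[derive(Debug)]
enum Certainty {
    Proven,
    Ambiguous, // "I don't know yet" -- keep the goal in the pending list
}

#[derive(Debug)]
struct QueryResponse {
    certainty: Certainty,
    // What we learned about the goal's variables even if it isn't proven:
    // here, "?0 := Vec<?1>".
    subst: Vec<Ty>,
}

fn main() {
    let response = QueryResponse {
        certainty: Certainty::Ambiguous,
        subst: vec![Ty::Vec(Box::new(Ty::Infer(1)))],
    };
    // The type checker can unify ?0 with Vec<?1> and make progress,
    // but must re-check the original goal later.
    println!("{:?}", response);
}
```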
It stays there until it's fully proven, and the next time we come in we'll hopefully have more information about the types, and so we'll end up at a different table, because it'll be a different goal. Does that make sense? Yeah, okay.

So, right — this is the basic architecture, and what I thought I would walk through a little bit is where some of the bits of code are. But let me say a little bit more about how the chalk engine integrates with us, because it's important. When we go off to solve one of these tables, the way it works internally — I mentioned already that we have this notion of strands. The table has a list of answers; it also has a list of strands, which are basically kind of like coroutines, so to speak. They're a partial answer that we're midway through solving, where we still have some pending work left to do.
So a strand is basically a list of subgoals you still have to solve, plus some other stuff, before you've found a complete answer. And — let's see, oh yeah — when you first create a table, you'll have no answers, of course, but you'll want to have a lot of strands, and the strands would come from, essentially, all the impls in the program. That's not exactly true — there are some other sources — but it's kind of: what are all the ways I might prove something to be Debug? Let me put them in my list so I can go eliminate them if they don't apply.
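A hedged sketch of that table/strand bookkeeping (names chosen for illustration; chalk-engine's real definitions differ): a table owns the answers found so far plus the strands, i.e. partially-explored ways of producing more answers, each carrying its pending subgoals.

```rust
// Toy sketch of tables and strands; not chalk-engine's actual definitions.
struct Goal(String); // e.g. "Vec<?1>: Debug"

// One answer, rendered as text for simplicity.
struct Answer(String); // e.g. "?0 := u32"

// A strand is a partially-explored way of producing more answers:
// some clause (impl) we selected, plus the subgoals still pending.
struct Strand {
    from_clause: String, // e.g. "impl<T: Debug> Debug for Vec<T>"
    pending_subgoals: Vec<Goal>,
}

struct Table {
    canonical_goal: Goal,
    answers: Vec<Answer>, // grows as strands complete
    strands: Vec<Strand>, // changes as we pull on them
}

fn new_table(goal: &str, candidate_clauses: &[&str]) -> Table {
    Table {
        canonical_goal: Goal(goal.to_string()),
        answers: Vec::new(), // no answers yet when the table is created
        // Initially one strand per potentially-applicable clause
        // (roughly, the impls that might prove the goal).
        strands: candidate_clauses
            .iter()
            .map(|c| Strand {
                from_clause: c.to_string(),
                pending_subgoals: Vec::new(),
            })
            .collect(),
    }
}

fn main() {
    let table = new_table(
        "?0: Debug",
        &["impl Debug for u32", "impl<T: Debug> Debug for Vec<T>"],
    );
    println!("{} strands to explore", table.strands.len());
}
```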
B: Okay, hold on — I'm trying to connect this back to my older understanding of chalk. Back in the old days there was some notion of a program, which was essentially the lowering of the trait impls, and you would ask chalk for answers in reference to that program, yeah. It sounds like what you're saying here is: each time you start looking for answers — or, yeah, each time you start a new table for answers — the first thing you do is dump in the contents of the program as possible strands.
A: It doesn't really know about traits per se, right — it just knows it has a goal to prove. Your job is to give it some superset of the applicable clauses, and actually it doesn't matter: you could give it all the clauses that exist and it would still do the job, it would just be much less efficient, because it's not really clever about it — it'll search them one by one. The hope is that you'll take as much information from the goal as you can to narrow that set down. So, at a minimum, you might look up the trait, but you might also look at the type and say: I already know it's a vector — like, I've hashed my impls by the self type. That's what we do in rustc today — not the full self type, but a little bit of the self type, just enough to narrow it down — and you might do that.
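A hedged illustration of "take as much information from the goal as you can to narrow down the clauses": filter candidate impls by trait, and by the outermost self-type constructor when it is known. This is my own simplification of the idea of hashing impls by self type, not rustc's actual code.

```rust
// Toy clause selection: narrow candidate impls by trait name and,
// when known, by the goal's self-type constructor.
#[derive(Debug, Clone, PartialEq)]
enum TyCtor {
    Vec,
    U32,
    Unknown, // the self type is still an inference variable
}

#[derive(Debug)]
struct ImplClause {
    trait_name: &'static str,
    self_ctor: TyCtor,
}

fn candidate_clauses<'a>(
    impls: &'a [ImplClause],
    goal_trait: &str,
    goal_self_ctor: &TyCtor,
) -> Vec<&'a ImplClause> {
    impls
        .iter()
        .filter(|i| i.trait_name == goal_trait)
        // If the self type is unknown we must keep every impl of the trait;
        // otherwise only impls whose self type could match.
        .filter(|i| *goal_self_ctor == TyCtor::Unknown || i.self_ctor == *goal_self_ctor)
        .collect()
}

fn main() {
    let impls = [
        ImplClause { trait_name: "Debug", self_ctor: TyCtor::U32 },
        ImplClause { trait_name: "Debug", self_ctor: TyCtor::Vec },
        ImplClause { trait_name: "Clone", self_ctor: TyCtor::Vec },
    ];
    // Goal: "Vec<?T>: Debug" -- we already know the self type is a Vec.
    let found = candidate_clauses(&impls, "Debug", &TyCtor::Vec);
    println!("{} candidate impl(s)", found.len());
}
```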
So this plays the same role as the trait code in rustc today. On the rustc side there's, somewhere, an inference context — when you're doing type checking you have an inference context — and you're going to come up with something you need to prove, some goal. In our example — I'll just keep using it — let's call it "?32: Debug", and that ?32, of course, refers into the inference tables here. We're going to submit this goal to the trait engine — I think we also give it a reference to the inference context when we do that — and it's going to canonicalize it and turn it into something chalk can handle, because chalk doesn't know anything about my inference context.
It can do that because we've canonicalized it — taking all the inference variables, pulling them out, and giving them a numbering — and sent it off; then chalk gives us back an answer, and we translate that answer back into the inference context and do the unification. So, as part of processing that goal, we might figure out that ?32 is, say, u32, and we would tell the inference context: hey, unify ?32 and u32. The way that happens is that chalk gives back a substitution. It's been given the request in terms of the canonical numbering, and it'll give back a substitution like "?0 is u32", and then the trait engine will say: okay, well, ?0 was ?32 initially — that was the original name we had for it — so let me unify those two things. That's how we get this.
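A toy version of that round trip (names are illustrative only): replace the inference variables in the goal with canonical numbers, hand the canonical goal off, and map the returned substitution back so we can unify, e.g. ?32 with u32.

```rust
use std::collections::HashMap;

// Toy canonicalization round trip; not rustc's actual code.
fn main() {
    // The type checker's goal mentions inference variable ?32.
    let goal = "?32: Debug";

    // Canonicalize: pull out the inference variables and renumber them 0, 1, ...
    // Here ?32 becomes canonical variable ?0.
    let mut canonical_to_infer: HashMap<usize, &str> = HashMap::new();
    canonical_to_infer.insert(0, "?32");
    let canonical_goal = "?0: Debug";

    // Send `canonical_goal` to chalk; it knows nothing about our inference
    // context. Pretend it answered with the substitution "?0 := u32".
    let _ = (goal, canonical_goal);
    let answer_subst: Vec<(usize, &str)> = vec![(0, "u32")];

    // Translate the answer back: ?0 was originally ?32, so unify ?32 with u32.
    for (canonical_var, value) in answer_subst {
        let original = canonical_to_infer[&canonical_var];
        println!("unify {} with {}", original, value);
    }
}
```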
Right — it has to be something you know for sure, but we don't have to know that the whole thing is proven. Like I said, you only have to know that the only way to prove it would be that. We might still not know whether — well, in this case, because the full type is known, we would mostly just know the answer — but this could be a substitution that itself includes new inference variables, without us knowing the values of those variables.
So that's the role of this aggregator, actually. Its job is to poll enough answers until either we've seen all the answers — so we know when we're done, basically, and if we've seen all the answers, that's good — or... there are basically three possible outcomes. We've seen all the answers, or we've generalized — I guess it's just two — or we've generalized the result to something that none of the possible future answers would invalidate, and the simplest way to do that is to generalize it all the way to "I learned nothing".
In that case — or rather, that's how much I learned: nothing. I guess if I learned nothing, then ?0 could be any type, and whatever future answer we come up with clearly can't invalidate that. But sometimes we might learn something, and — this is what I was saying earlier to Aaron — as we go, we sort of push back into chalk: thanks for the answers we've seen so far, they had these parts in common and these parts were different; and then we push that down and say, okay, look at the answers you've got coming up — do any of them potentially invalidate that? If there are none, then I can just stop; I don't need to poll the rest of those answers, because they're not going to add any new information.
So yeah, I think... yes, okay. So, the chalk engine itself — I guess I'll give a little more context since we're here. It's pretty generic: it's really just the logic of managing these tables and strands. It doesn't know much — it's kind of got some context, we'll see it later — it has a bunch of associated types for everything: what is a goal, what is an environment, what is a type, all that stuff. It does bake in a few notions, like "type", but it's pretty minimal, and even "type" is really just — it has some callbacks, like: hey, unify these two types and tell me what happened; I don't need to know what they're made of.
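A hedged sketch of that "generic engine plus associated types plus callbacks" shape: the engine is parameterized over a context trait that names the goal/environment/type representations and provides a unification hook. chalk-engine's real Context trait is much larger and named differently in detail; this is only the idea.

```rust
// Toy version of the engine/context split; chalk-engine's real trait differs.
trait Context {
    type Goal: Clone;
    type Environment: Clone;
    type Ty: Clone;
    type Substitution;

    // The engine doesn't know what types are made of; it just asks the
    // context to unify two of them and report what happened.
    fn unify(&mut self, a: &Self::Ty, b: &Self::Ty) -> Result<(), String>;
}

// The engine only talks to `C` through the trait above.
struct Engine<C: Context> {
    context: C,
}

impl<C: Context> Engine<C> {
    fn check_equal(&mut self, a: &C::Ty, b: &C::Ty) -> bool {
        self.context.unify(a, b).is_ok()
    }
}

// A trivial context where types are just strings.
struct StringContext;

impl Context for StringContext {
    type Goal = String;
    type Environment = ();
    type Ty = String;
    type Substitution = Vec<(usize, String)>;

    fn unify(&mut self, a: &String, b: &String) -> Result<(), String> {
        if a == b { Ok(()) } else { Err(format!("{} != {}", a, b)) }
    }
}

fn main() {
    let mut engine = Engine { context: StringContext };
    assert!(engine.check_equal(&"u32".to_string(), &"u32".to_string()));
}
```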
Okay, all right — any more questions on the big picture? I think that's the whole picture.
A: And in particular there's a one-to-many relationship — I guess that's the important part: they all share the same inference context, and you can have many fulfillment contexts. That's sort of okay. The only problem is if you're just seeing whether something could be proven, but you don't...

...the forest persists across the whole compilation, so that's a big improvement. The big point is that anybody who has to prove that a vector is Debug goes to the same table, no matter what function they're working in. The only difficulty — I didn't show it, but part of this goal is also the environment, so it's also the where clauses that are in scope, roughly speaking. However, in my measurements I found that 99% of the things would be cache hits, so it should still be pretty good, something like that.
So what we have is a goal, which is stored in uncanonical form; okay — it's going to canonicalize it, send it over, and then get back a response in terms of those canonical variables and translate that back. And part of the reason it's stored in uncanonical form is...

Do you all see a really tiny font and a bunch of code? Should I make it a little bigger? Let's see if I'm actually able to do that — I forget... oh, I don't know, it did work, okay. I switched terminals since the last time I tried this, because last time, I remember, I pressed every key I could think of and none of them were making the font bigger.
I said "trait engine" in my little chart, and that was the fulfillment context — there's actually a trait in rustc called TraitEngine, and it has these operations, like normalizing things, registering a predicate that has to be proven, and then select_all_or_error and select_where_possible. select_all_or_error means "try to prove all the things, and if you fail, that's an error you can report" — hence you get back this error — and select_where_possible means "try to prove all the things"... oh, I see, the only difference is that select_all_or_error gives an error if there are things that can't be proven, whereas select_where_possible just leaves them in the list for later; it's only an error if they were proven to be false, essentially. So this is basically the interface that the type checker needs from the fulfillment context, and we introduced this little shim so that you can write -Zchalk and such. The old-school fulfillment context, with rustc's current trait solver, also implements this same trait, right?
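Roughly the shape of that interface, heavily simplified — the real rustc TraitEngine trait takes inference-context and tcx arguments, and its exact signatures have changed over time, so treat this as a sketch of the idea only.

```rust
// Heavily simplified sketch of the TraitEngine / fulfillment-context interface;
// the real rustc trait has more parameters and different error types.
struct PredicateObligation(String); // something the type checker must prove
struct FulfillmentError(String);

trait TraitEngine {
    // Record a new thing to prove; it goes into the pending list.
    fn register_predicate_obligation(&mut self, obligation: PredicateObligation);

    // Try to prove everything still pending; obligations that merely remain
    // ambiguous stay in the list for later, and only definite failures are errors.
    fn select_where_possible(&mut self) -> Result<(), Vec<FulfillmentError>>;

    // Try to prove everything; anything left unproven is reported as an error.
    fn select_all_or_error(&mut self) -> Result<(), Vec<FulfillmentError>>;
}
```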
That's the old-school one, but the newer one is here, under chalk_fulfill.rs, and I'll start with select_where_possible, because this is that algorithm I talked about. You can see that it maintains a list of obligations that have to be proven, which we iterate through. Here's where we do the canonicalization: we're going to canonicalize the query, and that gives us a canonical goal here. And then — you asked, Tyler... Tyler — you and Taylor confuse me, now that you're both at Google. So, right — this is where the actual query comes in, from rustc's perspective: this evaluate_goal. Here's the actual query definition; we'll come back to that. But you see we're giving it the canonical goal and we're getting back a response, and this response is a Result — it's a canonical query response. The "no solution" case means we knew this is false; otherwise we get back this.
Maybe it's true, or maybe it's just not yet fully proven, and we can ask: is it true, is it fully proven? If so, we're making progress, and we don't have to... then there's this instantiate_query_response_and_region_obligations. What it does is actually interesting — okay, that's a FIXME, but we'll come to that — what it does is take the substitution that came back and unify it into the inference context, unifying the inference variables. And here we say: oh, if it wasn't proven, we'll just leave it in our list for later and come back to it.
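A hedged pseudo-implementation of the chalk-mode fulfillment loop just walked through; the function and type names are approximations of the real chalk_fulfill.rs code, with stub helpers standing in for the canonicalization and query machinery.

```rust
// Approximate shape of the chalk-mode fulfillment loop; not the real code.
enum Certainty {
    Proven,
    Ambiguous,
}

struct QueryResponse {
    certainty: Certainty,
    // plus a substitution and region obligations in the real thing
}

struct Obligation(String);

// Stand-ins for the real canonicalization and query machinery.
fn canonicalize(goal: &Obligation) -> String {
    format!("canonical({})", goal.0)
}
fn evaluate_goal(_canonical_goal: &str) -> Option<QueryResponse> {
    // `None` would correspond to "no solution": the goal is provably false.
    Some(QueryResponse { certainty: Certainty::Ambiguous })
}

fn select_where_possible(pending: &mut Vec<Obligation>) -> Result<(), String> {
    let mut still_pending = Vec::new();
    for obligation in pending.drain(..) {
        let canonical_goal = canonicalize(&obligation);
        match evaluate_goal(&canonical_goal) {
            None => return Err(format!("cannot prove {}", obligation.0)),
            Some(response) => {
                // In the real code we would instantiate the query response here,
                // unifying what we learned back into the inference context.
                match response.certainty {
                    Certainty::Proven => { /* done with this obligation */ }
                    Certainty::Ambiguous => still_pending.push(obligation),
                }
            }
        }
    }
    *pending = still_pending;
    Ok(())
}

fn main() {
    let mut pending = vec![Obligation("?0: Sized".to_string())];
    select_where_possible(&mut pending).unwrap();
    println!("{} obligation(s) still pending", pending.len());
}
```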
This is a bit of a bug — I think we should be instantiating the query response and region obligations in both cases, potentially. In other words, this is that case I mentioned where we might know something about the variables but not yet know that the goal is fully proven. I'm not sure why we're not doing it in both cases — maybe it's just an oversight, or maybe there's some more refactoring this needs — but that's the idea. So far, so good? Okay.
Actually, before we get there, let me jump back to the old code. I wanted to look at register_predicate_obligation, because this is actually interesting. This is where the type checker says: hey, we have something we need to prove. A predicate obligation is the old rustc notion of "a thing to prove", so it's all very rustc-specific.

Yeah, so one of the things here: this predicate obligation includes a parameter environment, which is basically the set of where clauses in scope, and those are represented quite differently in rustc and chalk. So this is a little bit hacky right now: if we're in chalk mode, we keep a DefId, which says what function this parameter environment came from, and then we can construct the chalk environment from that DefId. You see what I mean — so we saw...
Let's see if I can find it — where is it — oh, there it is. Right. So the first thing: evaluate_goal — we've already converted the environment from the rustc notion of an environment to the sort-of-chalk version; we also have to convert the goal, and that's what happens going into evaluate_goal here. Oh, I see — "predicate" was the rustc notion, but the chalk version is called a goal. Also — this is a little confusing, and I'm not sure I have the right terminology for it — when I say "chalk" here, what I mean is that there's a set of types in rustc that represent the chalk-ified versions of traits and so forth; they're not in chalk itself. Rather, when you instantiate the chalk engine, we have to set its associated types for what a goal is and so forth.
It's roughly like that, but it's a little tricky because it references types and so forth, for example. Right, okay — so this code is right now just in librustc/traits — that's where I am here, see, up on this line — and these might look familiar if you've looked at chalk: it has a DomainGoal with various options — we'll get into the details later — and then these are the meta things; so this is, like, Implies.

So, right — we convert into this format, and there's one thing I wanted to highlight. There are some special cases here, for some reason or another that I don't remember, but in the general case we invoke this lower method, and this is exactly the lowering I was talking about, Aaron. It lives in the librustc_traits crate — notice the underscore, this is a different crate; subtle, subtle convention.
In general, the rustc::traits code is, like, the shims needed to bridge into the queries — let's put it that way: the signatures and the query interface code — and the query implementations live in the rustc_traits crate... except for all the old rustc trait-solving code, which all lives in librustc, whereas the newer chalk code — anyway, I don't know. MJ told me that division was helpful; I still think it's a little confusing.

So, this lowering: there's this trait called Lower, and it's basically what it says here — lowering from a rustc construct into a chalk-like type — and it can do a couple of different kinds of lowering. One interesting thing is that lowering into a goal (something to be proven) is sometimes slightly different from lowering into a clause (something always true), so you can implement it twice if necessary. For example, here we say: okay, lower from a predicate into a goal.
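A hedged sketch of that Lower idea: one rustc construct can lower either into a goal (something to prove) or into a program clause (something assumed true), and those two lowerings can differ, so the trait is parameterized by the target type. The names below approximate the rustc_traits code but are not copied from it.

```rust
// Toy sketch of the "Lower" pattern: lowering is parameterized by the
// chalk-side type we are lowering into, so one rustc construct can lower
// differently into a Goal versus a Clause.
struct RustcPredicate(String); // stand-in for a rustc predicate

struct Goal(String);   // something to be proven
struct Clause(String); // something taken as always true

trait Lower<T> {
    fn lower(&self) -> T;
}

impl Lower<Goal> for RustcPredicate {
    fn lower(&self) -> Goal {
        Goal(format!("prove({})", self.0))
    }
}

impl Lower<Clause> for RustcPredicate {
    fn lower(&self) -> Clause {
        Clause(format!("assume({})", self.0))
    }
}

fn main() {
    let pred = RustcPredicate("Vec<T>: Debug".to_string());
    let goal: Goal = pred.lower();
    let clause: Clause = pred.lower();
    println!("{} / {}", goal.0, clause.0);
}
```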
Yes — actually, this is quite inefficient, so the picture I showed you is wrong as presently implemented. I told you that we have a persistent forest — sort of a persistent chalk engine — that lives across everything, but we don't; I forgot, we're actually making a new forest for every query. That's totally inefficient. There's nothing wrong with it, it just forgoes all the caching benefits. I don't think there's any good reason for it to be this way; it was just easier.

So maybe that name is from an older version or something. In any case — oh yeah, it calls forest.solve, and that does the aggregator thing: it creates the iterator, does the aggregation, comes up with a substitution, and then we map that back to the answer that we return as the result of the query. So we are creating a new forest, and we're not going to get as much caching as we'd like; however, we're still getting caching at the level of the rustc query itself, right? So what that means is — the difference is, like...
Right — you never come back up... well, you do, but not to this query, which is what we're going to talk about right now. So, yeah. That's kind of — now we've seen the forest. I guess the last interesting thing, which I noticed but had forgotten about: we've all talked about lazy normalization. If you compare the two implementations of TraitEngine — oops, where's my cursor, there it is — this is the old one; on the right-hand side you see that normalize_projection_type does a bunch of work figuring out what an associated type is and so on. Whereas on the chalk side it doesn't do anything, because it's lazy — the normalization will take place later, the chalk engine will handle it. That's kind of neat.
Mostly it's a whole bunch of types, but there are also some operations on those types, and that's what I wanted to show you. Some of them are really simple, like goal-in-environment — this basically says: given an environment and a goal, glue them together and give me the combined thing. I feel like maybe if we had, I don't know, GATs or something, we could express some of these a little more nicely, but it works — right, exactly. Then it has some questions like: is this a coinductive goal or not?

Right — and jumping back up a little bit, I think — or no, jump back down, sorry, here we go — yeah, this one is another key callback. Most of these are not interesting; these are the most important ones. unify_parameters: this is where we're saying, here are two things, make them equal. You'll note we have a variance in there; I'll come back to that in a second.
For now: it says take two parameters and unify them, and get me back whether that succeeded or failed, or whatever. And because of lazy normalization, what can happen here is that this can succeed but give back region constraints — so, like, today, unifying &'a T with &'b T succeeds only if 'a equals 'b — but it can also succeed with normalization constraints. So if you had a projection on one side and some other type on the other, it succeeds but hands back a normalization constraint to prove.
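A hedged sketch of that unification outcome: success can carry region constraints (like 'a == 'b) and, because normalization is lazy, extra subgoals to prove (like a Normalize goal when a projection meets another type). The type names below are illustrative, not rustc's or chalk's.

```rust
// Toy result type for unification under lazy normalization.
struct RegionConstraint(String); // e.g. "'a == 'b"
struct Subgoal(String);          // e.g. "Normalize(<T as Iterator>::Item -> u32)"

struct UnificationOk {
    region_constraints: Vec<RegionConstraint>,
    // Extra goals the engine must still prove; with lazy normalization,
    // unifying a projection against a type just produces one of these.
    subgoals: Vec<Subgoal>,
}

fn unify_ref_types() -> UnificationOk {
    // Unifying &'a u32 with &'b u32 succeeds, but only if 'a == 'b.
    UnificationOk {
        region_constraints: vec![RegionConstraint("'a == 'b".to_string())],
        subgoals: vec![],
    }
}

fn main() {
    let ok = unify_ref_types();
    println!(
        "{} region constraint(s), {} subgoal(s)",
        ok.region_constraints.len(),
        ok.subgoals.len()
    );
}
```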
But we treat them a little differently, in that normally — maybe this is not right, I don't know if this is actually right, but it's how it's happening now — normally we would process subgoals by having them make new tables and going down, whereas this one does all the work and then gives you back a list of things to prove. Anyway, it doesn't matter, that's in the weeds. The point is: this is the main hook.
Yeah, and what it tries to do is say: okay, if we need to prove that some trait is implemented — if you look at the document in the rustc guide that shows all the chalk lowering rules, you can fairly easily see where such a rule could come from. It could come from lowering the trait, or it could come from lowering some impls — sorry, I mean from implementations of the trait — and, I think, right, this one has to do with implied bounds; just ignore that one for a second. The point is you can go down and see: what are the Rust language constructs that could be lowered into a rule that would satisfy this goal? And then our job, in this code, is to go pull on those. This ties in with IDEs and on-demand processing — and also with specialization, as it happens — because ideally we would only pull on a sort of minimal set of things.
The more minimal you keep that set, the faster you'll be able to get answers back from the RLS and so on, right. And it also ties into specialization, and becomes significant to the language definition: when figuring out whether one impl specializes another requires solving predicates, the set of things we pull on from here could create a cycle that wouldn't otherwise be there, if it's too broad. So we should treat this...
This is, like, a significant callback for the language semantics. Anyway, I think right now what it does — the program clauses for an impl — is basically just iterate through the impls for the trait; I don't exactly remember, I'd have to go look. I was looking into this and didn't quite finish figuring out the interaction with the implied-bounds stuff, but we'll come back to it. This part is significant, though — this is getting to the work that's actually in progress now, yeah.

Sometimes we don't have explicit impls, right — we have things that are built into the compiler, and Sized is an example — so it's our job here to add those rules as needed. One of the challenges is that sometimes, in order to figure out what the rule would be, you'd like to know a little bit about the type in question. If you think about Sized, it's easy to tell in some cases:
a boolean is Sized, that's easy; an array is Sized — it's Sized if the argument is, or, oh, I guess arrays are always Sized, I don't know, whatever. It's sometimes easy to tell, in terms of other types, whether something is Sized, but it's pretty hard to tell whether some random inference variable is Sized. So the set of rules you would need to properly model this — the rules we would want — would basically enumerate all the Sized types as answers.
So if you had to prove — say you have two goals, "T: Debug" and "T: Display" or something — you might start with the Debug one and find: well, what are the values of T that make it Debug? It has inference variables in it, and for each of those concrete answers you can then go and check whether they're Display. So the first goal acts like a producer and the other goals act like filters. Does that make sense?
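A hedged illustration of that producer/filter point: solve the goal expected to have the fewest answers first, then use each concrete answer to filter the rest. The heuristic here (count unresolved variables) is my own simplification, not what the solver actually does.

```rust
// Toy illustration of goal ordering: prefer goals with fewer unknowns,
// so they act as "producers" and the rest act as cheap "filters".
#[derive(Debug)]
struct Goal {
    text: &'static str,
    unresolved_vars: usize, // how many inference variables it still mentions
}

fn order_goals(mut goals: Vec<Goal>) -> Vec<Goal> {
    // A goal like `?T: Sized` (everything is an answer) should go last;
    // a goal like `Vec<u32>: Debug` (at most one answer) should go first.
    goals.sort_by_key(|g| g.unresolved_vars);
    goals
}

fn main() {
    let goals = vec![
        Goal { text: "?T: Sized", unresolved_vars: 1 },
        Goal { text: "Vec<u32>: Debug", unresolved_vars: 0 },
    ];
    for g in order_goals(goals) {
        println!("solve next: {}", g.text);
    }
}
```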
You lost me a bit there — that's a lot of detail; let's just say I'll come back to it, I'm not as deep into that part of the rustc code. Suffice it to say that the order in which we solve goals is very important, and we would prefer to solve goals that have more known types — and therefore fewer answers — first. So something like "?T: Sized" is, like, the worst goal; remember, we never want to solve that goal. We...
B: Okay, yeah, a couple of clarifying questions here. One thing I'm getting a bit confused about: the callback we were looking at was essentially the lowerings — I mean, it's like the creation of strands, right? And so one question I have is: we're treating answer generation lazily, but it seems like we're treating strand generation eagerly.

Well, I guess — the other thing I was wondering: you were initially talking about, okay, when we're generating these strands we have some cases, like Sized, that potentially involve a lot of strands, I think, but then you switched to talking about the order in which we try to solve goals, and that's where I got a bit confused — because again, the strands seemed like something we were eagerly producing, and eagerly producing the set of all, you know, Sized things seems problematic.
A: So there are sort of two problems. I mean, it's definitely true that producing the set of all Sized strands would be a lot. It's not infinite — it would be something like one per struct — but still, a lot.

B: Okay, I think I hadn't quite put all these pieces together in my head. In the old model of chalk I was thinking about, you basically had a preliminary lowering step which gave you the entire contents of the program, basically all at once, and you just queried against that. Here, there's a laziness already, in the sense that you're only doing that lowering based on goals you're trying to prove — so the only time you would ever start enumerating every Sized type is if you actually encountered such a goal.
A: And what we actually do is probably not bad — like, we want to put them off, that's right. The only thing I would say is that we do have to have a strategy; we can't always put it off, right, we need some answer. What we're actually doing here is maybe good — it might even be the right answer. Where is it... well, I don't know that it is, but it comes down to that.

Give me one second. It doesn't have to be yes or no — it can also include "I don't know", essentially — and I think we're currently giving back an answer of that kind, what we'd call "cannot prove". The thing is, it's a little bit sketchy, because there are answers, so it's kind of wrong to say that the only answer is "cannot prove" — that's not really true. Yes, but I think it probably doesn't hurt anybody, as far as I can tell.
I think so. I think there are fancier caching strategies, which we're not using, where it would cause problems. There's stuff like subsumption caching, where the idea is: let's say you've got a goal like "Vec<?T>: Debug" and you go and solve it, and then you have a later goal like "Vec<u32>: Debug". In the chalk strategy those are different tables — they're not going to share work — but there are other strategies where you try to say: oh well, this is a subset of that other question, so why don't I just use the results of that and filter them? Basically. But that would not work if you did it with Sized under this strategy, right, because the Sized table doesn't have all the answers. So I'm just a little nervous. I think it's fine, but we are sort of breaking the model of the solver a little bit this way, and so I would like to...
We should keep our eyes out for where it might lead us into bad assumptions. But I think we're probably going to wind up with some sort of thing that lets it bail out when it's faced with a clearly pointless task, and we're going to try to order our clauses so that we don't get there in the first place, if at all possible — and maybe we can make that bail-out a little more first-class. Yeah, exactly. So, that's...
Oh no, maybe it wasn't right — but, like, one obvious starting task might be to do one of these FIXMEs. Just to say: rather than trying to solve some fundamental refactoring of how we select clauses or whatever, we should start by just copying some of the existing code, to get a feel for what's happening and learn the system. So, in particular, maybe implementing — we have the built-in impls for Sized; maybe we should try to make the built-in impls for Copy or something, and...