Description
This video was recorded during the Bay Area Rust Meetup in San Francisco, CA, USA on June 28, 2018.
https://www.meetup.com/Rust-Bay-Area/events/251073767/
Sunjay Varma presented an overview of his work on Chalk, the new trait-solving implementation in the Rust Compiler. He talked about the concept of coherence in the Rust language, described how it is implemented in rustc today, and then covered how he has implemented it in Chalk using logic programming.
Talk written and presented by:
Sunjay Varma (https://twitter.com/sunjay03)
A
That way we don't make mistakes like... all right, just one quick announcement. The Rust book is now in stores. This is the one that's online, but you can get a paper copy as well. This is the one by Steve and Carol. So yeah, that's the first print copy. I guess Steve had it a few weeks ago, and now it's out for everyone, so you can grab a copy on Amazon or No Starch Press or somewhere, I don't know. You can find it; it's not hard.
B
Awesome. So hi everybody. As Manish said, I'm Sunjay. I actually only started working at Mozilla a couple of weeks ago, and I've had the pleasure of working on this project called Chalk. Chalk is the new trait system implementation in rustc, and my research mainly focuses on modeling coherence in Chalk.
B
If you don't know what coherence is, I'm actually going to talk about it as I go along. I'm going to start this talk by talking a bit about what Chalk actually is, then I'll go into coherence and what that means in the context of the Rust programming language. After that, I'll talk a bit about logic programming and cover how we lower Rust code into logic in Chalk. Finally, we'll talk about two parts of coherence, the orphan check and the overlap check, and I'll finish the talk with a summary and some final thoughts.
B
So the compiler takes your Rust code and runs it through some very cool stuff in the current trait implementation, and then eventually, after a couple more steps, outputs machine code. Chalk is the new trait system implementation. It uses logic programming, and the idea is that we can take the logical rules that the trait system abides by and represent them with logic programming. I'll tell you more about logic programming in a little bit, but for now, let's talk about the part of the Rust trait system
B
that I'm personally focusing on. Coherence is all about making sure that there's only a single implementation of any given trait that applies to any method call at a given time. So it's responsible for a lot of the really cool properties that our trait system has that other languages actually don't have. What that means is that if you call the fav method from the FavoriteColor trait, we want to match that to exactly one implementation of that method.
B
Fundamentally, coherence means that, given a trait and some set of type parameters, there should be exactly one impl that applies. This is important because, if we have this property, then no matter where your trait's methods are called, you can be sure that the behavior will be exactly as you expect. And Rust's coherence guarantees actually apply across crates as well.
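As a minimal sketch of that guarantee (the trait, type, and method names here are made up for illustration, echoing the talk's example), a trait plus a concrete type resolves to exactly one impl, and a second overlapping impl would be rejected at compile time:

```rust
trait FavoriteColor {
    fn fav(&self) -> &'static str;
}

struct Sunjay;

// The one and only impl of FavoriteColor for Sunjay.
impl FavoriteColor for Sunjay {
    fn fav(&self) -> &'static str {
        "blue"
    }
}

// A second impl for the same (trait, type) pair would violate coherence
// and fail to compile with E0119 ("conflicting implementations"):
// impl FavoriteColor for Sunjay { fn fav(&self) -> &'static str { "red" } }

fn main() {
    // Every call site resolves to the single impl above.
    assert_eq!(Sunjay.fav(), "blue");
    println!("ok");
}
```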
B
So even if you add a dependency to your project, it's important for that dependency to never be able to introduce a conflicting implementation of any trait. Coherence enables Rust and Cargo to actually do a lot of really cool things. Rust is able to allow multiple distinct versions of crates in the dependency tree of your project. We avoid what's known as the hashtable problem, where different underlying impls are selected for the same operation in different contexts.
B
You
can
imagine
that
if
a
different
implementation
of
hash
was
chosen
for
insert
versus
when
you
were
getting
something
from
the
hash
table,
that
would
cause
a
lot
of
problems.
So
coherence
makes
it
so
that
we
can
never
split
the
ecosystem
by
having
multiple
incompatible
implementations
of
the
same
types
in
two
different
crates.
Crates
are
able
to
maintain
backwards,
compatibility
and
add
Impuls
without
increasing
their
major
version
number
every
time,
if
some
other
crate
could
implement
methods
for
your
types,
this
just
wouldn't
be
possible.
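A rough sketch of why that matters (the `UserId` key type is my own illustrative name): a hash map is only correct if insert and lookup agree on a single `Hash` impl, which coherence guarantees:

```rust
use std::collections::HashMap;

// Coherence guarantees there is exactly one `Hash`/`Eq` impl for a key type,
// so `insert` and `get` always hash the key the same way. If two crates could
// each supply their own `Hash` impl for the same type, a value inserted under
// one impl might never be found under the other.
#[derive(Hash, PartialEq, Eq)]
struct UserId(u32);

fn main() {
    let mut ages: HashMap<UserId, u8> = HashMap::new();
    ages.insert(UserId(1), 30);
    // Lookup uses the same single impl, so the roundtrip works.
    assert_eq!(ages.get(&UserId(1)), Some(&30));
    println!("ok");
}
```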
B
An often requested feature in Rust is specialization, and the only reason specialization is actually possible in Rust is because coherence makes it so that there's only a single choice for a method's implementation at any given time. If two crates that are compiled separately could have overlapping implementations, then we wouldn't be able to implement specialization. So coherence means that for any given trait, there are either zero or one impls of that trait that apply to a given set of types.
B
Coherence also means that for every impl that could exist, it only exists in one place, and this is key for having a coherent system. In this example, that means that only one crate has the power to define this impl of MyTrait for any type T. If you'd like to learn more about coherence and hear the things I described in much more detail, boats, who's known as withoutboats on GitHub and Twitter, gave a fantastic talk at a meetup last year which covers coherence in detail.
B
The orphan check goes through each impl in the current program and tests to make sure that it abides by the orphan rules. Now, there are actually a lot of different sources about the orphan rules, and I definitely suggest you check some of these out if you're interested in learning more, but when it comes to the Rust compiler and the Rust language in general, the most authoritative source is often actually the source code of the Rust compiler.
B
So
let
me
show
you
some
of
the
files
and
functions
where
the
orphan
rules
are
implemented.
There's
the
orphan
checker,
which
actually
describes
how
rusty
goes
through
each
imple
in
order
to
check
it.
There
are
the
orphan
check
and
orphan
check
trade
raff
function
so
that
actually
do
the
heavy
lifting
of
checking
that
an
impulse,
Attis
fides,
the
orphan
rules
and
well.
We
don't
actually
have
enough
time
to
dive
into
the
code
for
each
of
these
functions.
B
Now, this is actually a whole bunch of words, and it's written to be very precise. You know, it's written to be a very precise, mathematical definition of the orphan rules, and originally, when I was designing this talk, I tried reading them out loud. It turns out that reading math out loud is a terrible way to explain things and it's literally impossible to understand, so we're actually going to skip all of this, and I'm going to explain the orphan rules in a much more visual way that I hope will be easier to understand.
B
So
the
orphan
rules
are
centered
around
impulse
and
imple
describes.
The
implementation
of
a
trait
for
a
given
type
and
impulse
can
have
zero
or
more
type
parameters,
and
the
trait
itself
can
also
have
zero
more
type
parameters.
One
thing
I've
done,
which
is
going
to
come
in
handy
later,
is
I've
labeled,
the
implemented
type
as
p0
and
labeled
the
type
parameters
of
the
trait
as
p1
to
PN.
B
Now,
let's
talk
about
why
I
did
that,
so
the
orphan
rules
start
by
asking
whether
the
trait
we're
implementing
is
defined
in
the
current
crate
or
if
it
comes
from
a
dependency
and
is
thus
an
upstream
trait
if
the
trait
is
locally
defined.
In
the
current
crate,
the
orphan
will
say
that
it's
good
to
go
and
we
can
implement
it
for
any
type
at
all.
But
if
the
trait
is
an
upstream
trade,
then
the
orphan
say
that
we
need
to
check
a
few
more
things.
B
The
first
thing
we
have
to
check
is
that
there's
at
least
one
type
from
p0
to
PN,
that's
locally
defined
in
the
current
crate.
Part
of
the
definition
of
the
orphan
rules
is
that
we
actually
start
searching
from
the
implemented
type
and
then
go
through
each
type
parameter
in
the
trade
in
order
I
labeled
those
types
from
p0
to
P
n.
B
So
it's
easy
for
us
to
remember
to
do
that
once
we
have
a
local
type,
we
check
each
type
before
that
and
make
sure
that
it
doesn't
contain
any
of
the
type
parameters,
T,
UV,
etc.
If
all
this
can
be
verified,
the
imple
is
valid
and
we
can
continue.
But
if
any
of
these
steps
fail,
the
imple
is
considered
an
orphan
imple
and
it's
disallowed
by
the
orphan
rules.
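Those steps can be sketched as a tiny decision procedure. This is my own illustrative model, not Chalk's or rustc's actual code: a trait reference is represented as the list of types P0..Pn, where each type is either local, upstream, or an uncovered type parameter:

```rust
/// Illustrative model of a type as the orphan check sees it.
#[derive(Clone, Copy, PartialEq)]
enum Ty {
    Local,    // defined in the current crate
    Upstream, // defined in a dependency (and fully known)
    Param,    // an uncovered type parameter like T
}

/// Sketch of the orphan rules: a local trait allows any impl; an upstream
/// trait requires some local type among P0..Pn, with every type before the
/// first local one containing no type parameters.
fn orphan_check(trait_is_local: bool, params: &[Ty]) -> bool {
    if trait_is_local {
        return true;
    }
    match params.iter().position(|&t| t == Ty::Local) {
        // Every type before the first local type must not be a bare
        // type parameter.
        Some(i) => params[..i].iter().all(|&t| t != Ty::Param),
        // No local type at all: this is an orphan impl.
        None => false,
    }
}

fn main() {
    // impl UpstreamTrait for LocalType — allowed.
    assert!(orphan_check(false, &[Ty::Local]));
    // impl<T> UpstreamTrait<LocalType> for T — rejected: T comes first.
    assert!(!orphan_check(false, &[Ty::Param, Ty::Local]));
    // impl<T> UpstreamTrait<T> for LocalType — allowed: T is after the local type.
    assert!(orphan_check(false, &[Ty::Local, Ty::Param]));
    println!("ok");
}
```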
B
So if these rules seem kind of strange and arbitrary, it's because they sort of are, and the resources I linked to earlier go into the history and the reasoning behind them, and they talk about why they work towards our goal of having exactly one place where an impl can belong. But for now, just try to keep these rules in mind, because we'll come back to them later and talk about how they get implemented with logic programming in Chalk. The other part of coherence in Rust is the overlap check.
B
This
goes
through
all
pairs
of
impulse
and
checks
for
any
overlap
between
them.
The
tricky
part
about
this
is
that
it.
In
order
to
enforce
coherence,
we
have
to
be
able
to
check
all
possible
impulse
in
the
entire
universe.
In
other
words,
we
need
to
be
able
to
check
any
impulse
that
could
exist
in
any
compatible
world
and
the
reason
that
we're
only
interested
in
impulse
in
some
ver
compatible
worlds
is
because,
if
we
try
to
just
support
everything,
our
rules
would
be
way
too
restrictive.
B
Okay, so that's a lot of information. Let's take a break from coherence and talk about logic programming. So in logic programming, the idea is that, given some facts and some rules about those facts, we can come to conclusions by proving things based on what we know. Take, for example, the sentence "Sunjay loves cake". I'm Sunjay, I love cake, right? This is a fact, and I can write this in emojis, like this. Cake is pretty sugary, so if I enjoy cake, well, it might be reasonable to think that I enjoy candy as well.
B
We
can
write
this
as
a
general
rule
using
emojis
like
this
notice,
how
I've
left
a
placeholder
to
represent
that
this
rule
applies
to
anyone
who
loves
cake.
This
is
a
general
rule
that
says
that
blank
loves
candy.
If
they
love
cake,
we
can
take
this
fact
and
this
rule
and
use
them
to
derive
the
conclusion
that
Sanjay
loves
candy.
B
So this is provable because we can logically derive it from the rule we created, using the fact that I love cake. Chalk's logic programming language is actually based on an extended form of a well-established logic programming language called Prolog. If we wanted to represent the same logic we showed before with emojis, we might write something like this in Prolog. Here, we've represented the same fact that Sunjay loves cake, but we put the loves part on the outside and the Sunjay and cake parts on the inside. It still represents that same sentence.
B
Previously
we
were
using
the
heart
emoji
to
represent
when
someone
loves
something
here,
we
defined
a
predicate
using
the
word
love's
to
represent
the
same
thing
and
after
that
line
that
represents
Sanjay
loves
cake.
We
even
have
the
same
rule
about
candy
that
we
had
before
the
colon
followed
by
a
dash
is
how
you
represent
the
word
if
in
Prolog,
so
just
like
before
this
rule
reads
as
tea
loves
candy.
If
tea
loves
cake,
tea
is
a
variable
that
represents
anything
that
could
be
placed
there.
Now.
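That derivation, the loves fact plus the candy rule yielding a new conclusion, can be sketched in Rust as a tiny forward-chaining step. This is my own toy illustration, not how Chalk's solver actually works:

```rust
use std::collections::HashSet;

/// Apply the rule `loves(T, candy) :- loves(T, cake)` to a set of
/// `loves(who, what)` facts until no new facts can be derived.
fn derive(
    mut facts: HashSet<(&'static str, &'static str)>,
) -> HashSet<(&'static str, &'static str)> {
    loop {
        // Everyone who loves cake also loves candy.
        let new: Vec<_> = facts
            .iter()
            .filter(|&&(_, what)| what == "cake")
            .map(|&(who, _)| (who, "candy"))
            .collect();
        let before = facts.len();
        facts.extend(new);
        if facts.len() == before {
            return facts; // fixpoint reached
        }
    }
}

fn main() {
    let mut facts = HashSet::new();
    facts.insert(("sunjay", "cake")); // fact: loves(sunjay, cake).
    let facts = derive(facts);
    // Conclusion: loves(sunjay, candy) is now provable.
    assert!(facts.contains(&("sunjay", "candy")));
    println!("ok");
}
```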
B
Chalk's logical language is actually even more powerful than Prolog on its own. Chalk supports a number of interesting features, such as forall and exists. Forall means that a given rule is provable for all values of the variable T, whereas exists means that a given rule is provable for at least one value of the variable T. This allows us to express things like "every type T implements MyTrait", or that "there is at least one type T that implements MyTrait". Okay.
B
So
now
that
we
know
some
basic
parts
of
logic
programming,
let's
talk
about
how
chalk
takes
your
rest
program
and
turns
it
into
logic.
I
mentioned
earlier.
That
chalk
is
the
new
trait
system
implementation
and
that
it
uses
logic
programming
to
model
the
rules
of
the
trade
system,
while
lowering
is
the
name
of
the
process
in
which
chalk
takes
each
declaration.
B
Let's
say,
for
example,
that
we're
compiling
a
crate
called
people.
The
people
crate
depends
on
another
crate,
called
favourites
which
defines
a
struct
a
trait
and
implements
that
trait
for
any
type
that
implements
copy.
Both
people
and
favorites
implicitly
depend
on
the
STD
crate,
which
defines
some
traits
and
types
of
its
own.
B
Using
the
favorite
color
trade
from
the
favorites
crate
people
define
some
of
its
own
implementations
of
favorite
color
for
its
own
types.
This
gives
us
the
complete
program
shown
on
the
left.
Now.
What
chalk
wants
to
do?
Is
it
wants
to
lower
this
entire
program?
That
means
all
three
crates
into
a
single
logical
program
made
up
of
facts
and
rules,
there's
a
whole
bunch
of
facts
that
we
could
actually
potentially
get
from
this
program.
B
For
example,
we
could
describe
the
relationship
between
the
crates
and
add
something
like
depends
on
people
favorites
to
talk
about
how
the
people
crate
depends
on
the
favorite,
the
favorites
crate,
or
we
could
add
some
clauses
like
defined
in
Sanjay
people
or
defined
in
Manish
people
to
describe
how
those
two
structs
are
defined
and
the
people
crate.
These
are
hypothetical,
but
my
point
is
that
you
can
lower
a
program
into
any
set
of
facts
that
you
can
derive
from
it.
B
In
reality,
chalk
would
lower
the
program
into
something
that
looks
like
this
I've
filtered
out
some
of
the
facts
that
aren't
relevant
to
what
we're
about
to
talk
about
next,
but
you
can
probably
see
where
most
of
this
information
is
coming
from.
I
mentioned
that
we're
currently
compiling
the
people
crate,
so
the
sunjai
and
Manish
types
are
local
to
the
current
crate.
Chalk
represents
this
by
lowering
these
two
structs
into
two
facts.
Using
the
is
local
predicate.
B
We
lower
the
two
implementations
of
the
favorite
color
trade
by
adding
two
facts
using
the
implemented
predicate
the
taco
type,
which
is
defined
in
the
crate
that
the
current
crate,
the
people
crate
depends
on
is
an
upstream
type.
We
can
lower
it
to
represent
that
with
a
fact
that
uses
the
is
upstream
predicate
notice
that
each
of
these
predicates
represents
a
concept
that
we
can
clearly
see
in
the
code.
B
The
reason
that
any
of
these
exists
at
all
is
because
we
decided
that
they're
important
and
we
decided
that
there's
something
that
we
needed
to
use
in
order
to
model
whatever
we're
modeling,
we
literally
could
have
created
any
predicate
at
all
based
on
what's
in
the
program,
the
final
line
of
this
lower
program
is
probably
the
most
ding
there.
We
create
a
rule
that
describes
that
any
type
T
implements
favorite
color
if
T
also
implements
copy.
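A sketch of the Rust side of that final rule (the trait and type names follow the talk's example, but the method bodies are my own guesses): the blanket impl below is what lowers to "T implements FavoriteColor if T implements Copy":

```rust
trait FavoriteColor {
    fn fav(&self) -> &'static str;
}

// Lowered (roughly) to:
//   forall<T> { Implemented(T: FavoriteColor) :- Implemented(T: Copy) }
impl<T: Copy> FavoriteColor for T {
    fn fav(&self) -> &'static str {
        "green"
    }
}

#[derive(Clone, Copy)]
struct Taco;

fn main() {
    // Taco is Copy, so the rule makes FavoriteColor provable for it.
    assert_eq!(Taco.fav(), "green");
    println!("ok");
}
```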
B
Before we start, though, let's quickly do one more example of the orphan rules to make sure that we've definitely got it. Let's say that we have a crate foo, which defines a trait called MyTrait and a type Foo. We also have the bar crate, which depends on foo and defines its own type Bar, as well as an impl of MyTrait for Bar. The orphan check will look at that impl and try to decide if it's valid under the orphan rules. When we compile bar, MyTrait is considered to be an upstream trait.
B
In this case, the first local type in the bar crate is the Bar type, and once we find that, we need to check every type before it to make sure that those types don't contain any of the type parameters of the impl. The term we're going to use for types that don't contain any of the impl's type parameters is "fully visible", and you'll see why that is in a few minutes. A fully visible type is made up of types that are completely known to us and can't be filled in with some other type later.
B
This
is
all
we
need
in
order
to
check
that
the
simple
is
good
to
go.
One
important
observation
here
is
that
any
types
after
the
first
local
type
don't
actually
matter
at
all
according
to
the
orphan
rules.
That
means
that
we
can
put
any
type
after
bar
at
all,
including
type
parameters,
even
though
we
weren't
allowed
to
include
those
before
bar
in
the
order
of
the
type
parameters.
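That ordering rule can be seen with a real upstream trait. Here `PartialEq<Rhs>` plays the role of the upstream trait, and `Celsius` is a made-up local type standing in for Bar:

```rust
// A local type standing in for `Bar`.
struct Celsius(f64);

// Allowed: the Self type (P0) is local, so a type parameter is fine
// *after* it in the list P0..Pn.
impl<T: Into<f64> + Copy> PartialEq<T> for Celsius {
    fn eq(&self, other: &T) -> bool {
        self.0 == (*other).into()
    }
}

// Rejected by the orphan rules (E0210): the uncovered parameter `T`
// would come *before* the first local type `Celsius`:
// impl<T> PartialEq<Celsius> for T { ... }

fn main() {
    assert!(Celsius(21.0) == 21.0_f64);
    println!("ok");
}
```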
B
So
now,
let's
talk
about
how
this
gets
implemented
in
chalk.
Well,
the
idea
is
that
we
create
a
new
predicate
called
local
impo
allowed,
which
describes
when
a
type
is
allowed
to
implement
a
trade
under
the
orphaned
laws.
Then
we
can
go
through
each
type
or
sorry.
Then
we
can
go
through
each
imple
in
our
program
and
ask
whether
local
imple
allowed
is
provable
for
that
imple.
If
it
is,
then
we
can
move
on.
B
But
if
that
isn't
provable,
we
know
that
the
orphan
rules
have
been
violated
and
that
we
need
to
produce
an
error
here.
The
turtle
type
is
local,
so
we
know
that
we
can
implement
display
for
it,
but
in
the
other
example,
a
vector
of
Turtles
is
not
local
to
the
current
crate,
because
Veck
is
defined
in
the
STD
crate.
That
means
that
we
can't
implement
display
for
a
vector
of
turtles
according
to
the
orphan
rules.
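Concretely (`Turtle` follows the talk's example, but the impl body is mine):

```rust
use std::fmt;

struct Turtle;

// Allowed: Turtle is local to the current crate, so we can implement
// the upstream Display trait for it.
impl fmt::Display for Turtle {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "a turtle")
    }
}

// Rejected (E0117): Vec is defined in std and does not count as local,
// so this impl would violate the orphan rules:
// impl fmt::Display for Vec<Turtle> { ... }

fn main() {
    // `to_string` comes for free from the Display impl.
    assert_eq!(Turtle.to_string(), "a turtle");
    println!("ok");
}
```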
B
Okay,
so
how
do
we
create
local
imploded?
Well,
more
specifically,
we
need
to
ask
what
are
the
lowering
rules
for
a
local
in
Palau
like
how
are
we
going
to
lower
each
trait
that
we
come
across
in
a
rust
program?
Well,
we
can
figure
this
out
by
going
through
each
case
of
the
orphan
rules.
Let's
say
that
we
had
this
trait
with
two
type
parameters
we
know
based
on
which
crate
the
trait
comes
from,
whether
that
trait
is
local
or
if
that
trait
is
up
strength.
B
If
that
trait
is
locally
defined
in
the
current
crate,
we
generate
a
local
in
Palau
drool
that
allows
any
type
to
be
implemented,
for
that
trait.
This
rule
says
that
my
trait
is
allowed
to
be
implemented
for
any
types
at
all.
If
the
trait
is
upstream,
we
need
to
check
those
other
conditions
that
we
went
over
earlier.
We
need
at
least
one
local
type
and
for
every
type
before
it
to
be
fully
visible.
The
tricky
thing
here
is
that
we
need
to
somehow
find
the
first
local
type.
B
Now, these rules are basically an exact translation of the English sentence "find the first local type and make sure that each type before it is fully visible". The problem is that we actually haven't defined any of the main parts of that sentence. Like, when is the IsFullyVisible predicate actually provable? When is IsLocal provable? We aren't actually finished until we talk about IsFullyVisible and IsLocal. So, let's start by talking about IsFullyVisible.
B
This
one
is
pretty
straightforward,
because
we
really
just
want
to
make
sure
that
it
isn't
provable
for
types
that
aren't
defined
in
some
crate
somewhere.
We
want
it
to
be
provable
for
all
the
types
we
can
see
right,
that's
where
visible
comes
from,
but
we
don't
want
it
to
be
provable
for
any
of
those
type
parameters.
B
To
do
that,
we
can
add
a
fact
for
every
struct
in
every
crate
that
uses
the
is
fully
visible
predicate
to
indicate
that
it
is
a
type
that's
fully
visible,
since
type
parameters
won't
have
any
such
fact
is
fully
visible,
won't
be
provable
for
those
type
parameters
we
can
make
sure
that
is
fully
visible,
doesn't
allow
any
type
parameters
at
any
level
by
making
sure
that
we
use,
if
and
checking
that,
each
type
parameter
of
every
structure
is
also
fully
visible.
In
order
for
the
structure
to
be
fully
visible
itself
for
is
local.
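A sketch of that recursion (my own model, not Chalk's actual code): a struct is fully visible only if all of its type arguments are, and a bare type parameter never is:

```rust
/// Illustrative model of a type for the visibility check.
enum Ty {
    /// A struct from some crate, applied to zero or more type arguments.
    Struct(&'static str, Vec<Ty>),
    /// An uncovered type parameter like T.
    Param(&'static str),
}

/// IsFullyVisible: provable for structs whose arguments are all fully
/// visible, never for bare type parameters, at any nesting level.
fn is_fully_visible(ty: &Ty) -> bool {
    match ty {
        Ty::Struct(_, args) => args.iter().all(is_fully_visible),
        Ty::Param(_) => false,
    }
}

fn main() {
    let vec_i32 = Ty::Struct("Vec", vec![Ty::Struct("i32", vec![])]);
    let vec_t = Ty::Struct("Vec", vec![Ty::Param("T")]);
    assert!(is_fully_visible(&vec_i32)); // Vec<i32>: fully visible
    assert!(!is_fully_visible(&vec_t)); // Vec<T>: contains a parameter
    println!("ok");
}
```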
B
we just need to take every struct defined in the current crate and lower it into a fact using the IsLocal predicate. In this case, we don't actually care about any of the type parameters that these types may have, because a type is local regardless of its type parameters. Notice that I've only added an IsLocal fact for Foo, because that's the only type in the current crate, and Bar is an upstream type.
B
So
there's
a
couple
of
things
that
I've
left
out
here,
for
example,
I,
didn't
cover.
How
is
local
interacts
with
fundamental
types?
If
you
don't
know
what
a
fundamental
type
you
can
absolutely
come
up
to
me
afterwards
and
talk
to
me
about
this,
and,
if
you'd
like
to
know
more
about
any
of
this
or
how
it
works.
You
can
also
come
and
ask
me
about
that.
But
all
this
is
actually
fully
implemented
in
chalk.
B
Right
now,
rusty
doesn't
actually
use
chalk
internally,
just
yet,
but
you
can
expect
that
one
day,
your
rust
code
might
be
checked
with
logical
queries
like
the
ones
I've
demonstrated
today.
The
other
part
of
coherence
is
the
overlap
check.
This
is
currently
an
active
area
of
research.
In
my
own
work,
and
just
like
with
the
orphan
check,
my
job
is
to
figure
out
how
to
model
this
using
logic.
The
tricky
part
about
the
orphan
check
is
what
I
mentioned
before.
B
Well, thanks for listening to my talk. We talked about coherence and logic programming, and went over some examples of how we are using logic to model various parts of the Rust trait system. I definitely wouldn't have been able to do this without the help of a lot of great people, so I just wanted to take a moment to thank Niko, boats, and the many other awesome people that have helped me figure all this out. Thank you. Here's my Twitter and a link to these slides, and I'll be around for questions as well.
C
Cool. So I really like the idea of using a logic programming language to model this, and I'm curious: if, let's say, something violates these rules, does it seem feasible, given a violation, to turn it into a human-readable explanation? Like, if I want to understand, you know, why my thing was an orphan, and that could be some complex series of visibility rules and so on. Is that something you all have looked at, or, I mean...
B
We
actually
already
do
this
right,
so
we
have
coherence,
checking
already
implementing
the
rest
compiler.
It's
just
not
implemented
using
logic,
it's
implemented
in
rust,
and
so
we
already
produce
error
messages
like
that.
That
give
you
a
lot
of
details
and
tell
you
exactly
what
went
wrong.
So,
what
we'll
probably
end
up
happening
is
with
this
implementation.
You'll
get
enough
information
back
to
just
know
like
oh
okay.
This
is
what
went
wrong
and
then
we'll
be
able
to
produce
the
same
or
better
error
messages
using
that.
C
Quick follow-up, I guess. I'm just trying to understand: say if you changed the rules or added a new rule, to what extent is that process just done almost automatically by the design of your logic programming language, or would you have to very carefully craft your error messages for each new change to your logic system?
E
I was looking at the coherence definition there, and I am extremely confused by why the IsFullyVisible predicate only covers all types up to the locally found type. Is there some sort of arcane reason for that? Perhaps something to do with, like, what order type inference happens in? I mean, I'd assume this already happens after all of that is done, so why is that even necessary, really, or is this just... yeah.
B
But if you read through some of those blog posts that I talked about, they describe the reasoning behind the rules and why they work the way they do. Basically, the goal is to ensure that every impl can only be implemented in one crate, right? And so we have to have some sort of local type there, because that way you have to use a type in your current crate in order to implement a trait. And then, as for the ordering requirements, there's also some reasoning behind that.
F
B
So this impacts you in a couple of ways. The reason that we're talking about logic programming in general is because it gives us a lot of really cool benefits in the implementation of the trait system. So, right now, and again, I don't know everything about the current implementation of the trait system...
B
You know, it goes into the logic programming engine and then solves. And so, while you as a Rust user won't necessarily see the impact right away, because this is supposed to be a drop-in replacement, what it will do is enable us to do cool things like really fast incremental compilation and all kinds of other things that will eventually benefit the users of the Rust compiler quite a bit.
G
You described Chalk as a logic programming language, but you only talked about it in terms of implementing a type checker. To what extent is Chalk a general-purpose programming language, with things like syntax and compilation, versus just being a part of the Rust compiler? And is there any effort toward turning it into a general-purpose language that anybody can use for any purpose?
B
It's an implementation of an engine that can solve certain queries within a logic programming language, and it's entirely catered towards Rust. So, like, Chalk has its own syntax, which we use just for testing; it's not actually used with Rust code, and it helps us write tests that sort of look like Rust code. And so you could say that the Chalk syntax looks like Rust code, but there really aren't any plans to make Chalk into a general-purpose language or anything like that. It's just designed to be used to re-implement the trait system.
E
B
So, yeah, for sure. Like, Chalk already supports generic associated types; if you don't know what those are, don't worry about it. It already supports a lot of really cool extensions to the trait system, and the idea behind doing Chalk like this is that we can add these sorts of extensions really easily, right? Because it's just a couple more rules, it's a different set of lowering things, and so we can actually extend the trait system very easily and start to add.
F
I've heard that in Prolog the search strategy for, basically, resolving predicates is actually fairly constrained by the specification, because Prolog involves things like side effects and cuts and things like that. Do you know if the Chalk engine is somehow more flexible in the kind of search strategy it can use, and if you can get any kind of optimizations from that?
B
Right
now,
our
solver
isn't
actually
based
off
of
a
Prolog
implementation.
I
think
it
started
that
way.
There's
a
paper
I
forget
the
exact
title:
it's
in
my
backpack
right
there,
it's
I
haven't,
read
it,
but
it's
in
my
backpack
great
thing
and,
and
it
talks
about
something
called
a
hereditary,
harap
formulas,
and
so
there's
there's
a
paper.
It's
called
a
proof
procedure
for
hereditary
Harrop
formulas,
and
so
we
use
a
modified
implementation
of
that.
B
So
we
probably
don't
have
some
of
the
downsides
of
a
regular
Prolog
solver
because
we're
not
using
Prolog
directly,
but
there
are
other
limitations
like
a
lot
of
the
research
that
I
have
to
do.
Next
has
to
do
with
the
impact
of
negative
reasoning,
like
you
know,
negating
a
query
putting
not
in
front
of
it
and
what
that
does
like,
like
what
impact
that
has
on
the
solver
and
what
it
can
solve
cool.
Thank
you
any
other
questions.
H
So, there's currently also an effort to reformulate the borrow checker in terms of logic programming, based on... I don't know if you've seen it. It seems like the Datalog engine that it's using is a less powerful form of logic than what Chalk is using. What's the relationship between those two efforts? I have...