From YouTube: Preliminary shared state code review
Description
We looked over the (initial) results from Aaron Turon's shared state audit, trying to group the various bits of state into rough categories for what needed to be done to gain confidence in them.
B: One thing we found is some concerns around the potential introduction of concurrency bugs. In particular, a lot of what changed in the compiler was taking things that were previously RefCells and making them locks, without really doing a whole lot beyond that simple transformation. Unfortunately, that has the potential to introduce problems, because a lot of structures in the compiler had pretty fine-grained use of RefCell — RefCell at the level of individual fields in a structure — and for sequential code, that's fine! Generally, you've reestablished all your invariants by the time you exit whatever function. But when this is naively translated to locks, you have the potential for atomicity problems, where you get broken invariants: you're locking one field and making an update, then locking a different field and making an update, but other threads can see the in-between state. And then there are also lock ordering concerns: when there are multiple fields that are frequently accessed together, if they're accessed in different orders, then you have the potential for deadlock.
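The two hazards B describes can be sketched in Rust. This is a hypothetical struct, not compiler code: each field got its own lock in the conversion, so a reader can observe the torn state between the writer's two critical sections, and inconsistent lock order between methods is how deadlock creeps in.

```rust
use std::sync::Mutex;

// Hypothetical stand-in for a struct whose fields were each wrapped
// in their own lock during the RefCell -> Lock conversion.
struct Counts {
    errors: Mutex<usize>,
    warnings: Mutex<usize>,
}

impl Counts {
    // Invariant the sequential code relied on: `errors` and `warnings`
    // are updated together. With per-field locks, another thread can
    // observe the state between the two critical sections.
    fn record_error_with_warning(&self) {
        *self.errors.lock().unwrap() += 1;
        // <-- a thread running `totals()` here sees a torn update
        *self.warnings.lock().unwrap() += 1;
    }

    fn totals(&self) -> (usize, usize) {
        // If another method ever locks `warnings` while holding
        // `errors` in the opposite order, that is the deadlock risk.
        let e = *self.errors.lock().unwrap();
        let w = *self.warnings.lock().unwrap();
        (e, w)
    }
}
```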
B: So what I've tried to do is basically go through the whole PR history and essentially review the PRs just at the level of: what is the relevant state? Where are the risks that have been introduced? That's what this document is giving. It's not complete — there are a few larger PRs that we still need to do — but Niko and I thought we had enough to sort of start the next phase. So our goal today, in going through this list, is to decide, for each bit of shared state that's been introduced, essentially what approach we want to take. The high-level options are: either refactor so that the shared state is just no longer necessary — that's kind of the best case — or, if not, then we need to take a closer look at how that shared state is accessed. What are the invariants? What are the lock orders? Do we want to try to get rid of the state, or do we feel like it does need to stick around and therefore needs to be looked at more closely? And then, separately from this meeting, we'll have some way of actually doing the more detailed work on these issues — we'll have to figure out what that looks like based on how things wind up today. Niko, does that pretty much cover things from your perspective?

A: Yeah.
A: Questions from anybody else? I guess the one thing I would add: we talked about this yesterday, Aaron and I, and we tried like two or three approaches for how we should divvy up this work — or do this work, or figure out what's to be done — and we settled on: it's not entirely clear, let's just try something. So I guess we'll start going through the list, and we'll see how it goes. Feel free to make suggestions.
A: We might wind up with something like removing it; or maybe we think it's fine and we just have to sort of document what's happening or not; and other ones we might find, I don't know, that we want to look at more closely, and maybe we can take some notes on what exactly we're curious about. Seem good? Shall we start? Okay, so the first one: ID generation. This one occurs here in the source. What's happening is there's this function make_attr_id that creates an attribute ID. There's a static atomic variable that just exists for all time, and we just add one to it, so it's always a fresh number. It seems pretty safe — I mean, it can overflow, although we assert against that; there's not really much that can go wrong here. So I think this falls into the "it's fine, maybe we want to document it" category.
C: For cases like this, do you need to worry about determinism?
A: I think that's what it comes down to — it depends how they're used. It's a good point. All right, we'll leave that one alone. Next: the error count.
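The pattern A describes — a static atomic counter handing out fresh numbers — can be sketched as follows. The function name and the overflow assertion are assumptions about the shape of the code, not quotes from it:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// A static atomic counter that exists for all time; each call adds
// one, so the result is always a fresh number. (Illustrative sketch,
// not the compiler's actual make_attr_id.)
static NEXT_ATTR_ID: AtomicUsize = AtomicUsize::new(0);

fn make_attr_id() -> usize {
    let id = NEXT_ATTR_ID.fetch_add(1, Ordering::Relaxed);
    // Guard against wrapping back to an already-issued ID.
    assert!(id != usize::MAX, "attribute id counter overflowed");
    id
}
```

The IDs are always unique, but the order in which concurrent threads draw them depends on scheduling — which is exactly C's determinism question.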
A: So this one — really this whole struct. This is one area I looked at yesterday, and it's a good example of the danger, or the potential danger, of just converting a RefCell to a lock. You see that each of these fields is its own atomic sort of section here, so it's possible now to have some method that acquires this lock, makes a change, releases the lock; acquires that lock, makes a change, releases the lock — and whereas before those would have executed atomically, now some other method might be executing that reads in between: it sees the delayed_span_bugs change but not the taught_diagnostics, or whatever. So I think there would be two ways to handle this. One would be to kind of audit —
A: So, TransitiveRelation is general. If you have — I don't know — say you know that four is greater than three, and you know that six is greater than four, then you can sort of conclude: okay, six is greater than three. It's transitive, right. So what does this TransitiveRelation do?
A: It has a little cache. With the transitive relation code, you can add edges into it — somewhere down here, I don't know where, is your add, and it will add the edge in — and then what we do is keep a cache of the closure. While you're just storing, we just keep adding individual edges, and at some point, when you ask it "is this in it?" or "does that imply this other thing?", it will compute the full closure, which takes some work. The main thing is you want to wait to do that until you have all the edges you're going to add; otherwise you're kind of doing more work than you need to. So, I don't know, it's kind of a goofy piece of code. I think we probably could refactor this away. It's also probably not wrong — I don't know that any given TransitiveRelation is ever actually shared across threads.
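The add-edges-eagerly, close-lazily pattern A describes might look like this. It is a deliberately naive sketch with invented types, not the compiler's actual TransitiveRelation:

```rust
use std::cell::RefCell;
use std::collections::HashSet;

// Edges are accumulated cheaply; the transitive closure is computed
// on first query and cached. Adding an edge invalidates the cache,
// which is why you want all edges in before the first query.
struct TransitiveRelation {
    edges: Vec<(u32, u32)>,
    closure: RefCell<Option<HashSet<(u32, u32)>>>,
}

impl TransitiveRelation {
    fn new() -> Self {
        TransitiveRelation { edges: Vec::new(), closure: RefCell::new(None) }
    }

    fn add(&mut self, a: u32, b: u32) {
        self.edges.push((a, b));
        *self.closure.borrow_mut() = None; // invalidate the cached closure
    }

    // "does a reach b?" — computes the full closure on first use.
    fn contains(&self, a: u32, b: u32) -> bool {
        let mut cache = self.closure.borrow_mut();
        let closure = cache.get_or_insert_with(|| {
            let mut set: HashSet<(u32, u32)> = self.edges.iter().copied().collect();
            // Naive fixed point: add (x, z) whenever (x, y) and (y, z) exist.
            loop {
                let new: Vec<(u32, u32)> = set
                    .iter()
                    .flat_map(|&(x, y)| {
                        set.iter()
                            .filter(move |&&(y2, _)| y2 == y)
                            .map(move |&(_, z)| (x, z))
                    })
                    .filter(|p| !set.contains(p))
                    .collect();
                if new.is_empty() {
                    break;
                }
                set.extend(new);
            }
            set
        });
        closure.contains(&(a, b))
    }
}
```

Using A's example: with edges "six > four" and "four > three" recorded, the closure answers that six > three.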
B: Yeah — that's actually a good moment to mention: most of what we've been focusing on is locks or simple atomic fields, but there are a couple of other, more restricted concurrency primitives that got introduced in this work — you know, sort of single-initialization variables, or variables that are only allowed to be mutated by a particular thread, et cetera. So another kind of refactoring we can do is just to try to replace use of locks with something more restricted, I suspect.
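As one illustration of B's "more restricted primitive" idea — assuming a value that is set once and only read afterwards, with an invented field name — std's `OnceLock` is the single-initialization variable in miniature:

```rust
use std::sync::OnceLock;

// Where a field is written once early on and read later, a
// single-initialization cell is strictly more restricted (and easier
// to audit) than a general lock: a second write fails loudly instead
// of silently racing.
struct Limits {
    recursion_limit: OnceLock<usize>,
}

impl Limits {
    fn set_recursion_limit(&self, n: usize) {
        self.recursion_limit
            .set(n)
            .expect("recursion limit set twice");
    }

    fn recursion_limit(&self) -> usize {
        *self.recursion_limit
            .get()
            .expect("recursion limit not yet set")
    }
}
```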
A: I don't know — I don't know for sure, but from doing a quick ripgrep through the code, I don't see any place where this winds up in a query result or in some other shared data structure. It just feels like a refactor; I'm not sure exactly what the refactoring is. I forget why it was done this way in the first place, but I was never very happy with it.
A: In fact, we tend to call that — well, I don't know, I guess it's separate, but I've thought of it in my head as being related to this end-to-end queries work. But this is the Session data structure. It's got a lot of stuff, like what is the target you're compiling for, and some small bits of mutable state, I guess.
A: It might be that we can refactor some of the fields away and things like that. But a lot of this is tied to the main refactor we would like to do, if we can, which is to take shared state and move it into queries — in a lot of cases that's one great way to refactor. And that's going to be hard, because with Session, a lot of this stuff is here precisely because, before we have the query system in place, we do certain amounts of work, and we think it needs to be there.
A: But, I don't know — some of these things... These fields here, for example — type_length_limit, recursion_limit — I think they're probably set early on, in the parser or something, and then used later; set-once seems like a relatively safe pattern. Okay, all right, this one will require a closer look. Let's keep going for the moment. Same with the per-session state.
B: Yeah, I'm trying to remember. My note here is a little sketchy — about it not being clear how these globals are shared with worker threads. Essentially, I wasn't able to easily find, in reviewing the PR, how this context was passed along to the rayon worker threads, and that seemed fishy to me, I think.
B: There's a little nicer way to do that in rayon, right? Yeah — and to be clear, I'm not sure, partly because it was a while ago that I looked at this, whether there's any problem here. I just wasn't able to easily see how the information was making its way to the worker threads, which seemed important. So it sounds like the —
A: I mean, there are two things, right. One of them is how this mechanism works; but whichever way it works, they all end up pointing at the same globals at the end of the day, right? So despite the name, a thread-local here is not, in fact, a thread-local copy of the globals — it's more like using TLS to propagate a reference to the shared copy.
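The TLS-as-plumbing pattern A describes can be sketched like so. The `Globals` contents and names are invented; the point is that each worker installs a pointer to the one shared instance rather than getting its own copy:

```rust
use std::cell::RefCell;
use std::sync::Arc;
use std::thread;

// Invented stand-in for the compiler's shared globals.
struct Globals {
    crate_name: String,
}

thread_local! {
    // "Thread-local" only in the sense that each thread holds its own
    // *reference* — every reference points at the same shared Globals.
    static GLOBALS: RefCell<Option<Arc<Globals>>> = RefCell::new(None);
}

fn with_globals<R>(f: impl FnOnce(&Globals) -> R) -> R {
    GLOBALS.with(|g| {
        let g = g.borrow();
        f(g.as_ref().expect("globals not installed on this thread"))
    })
}

// Spawn workers, install the shared reference on each, and have each
// read through it; returns what every worker observed.
fn run_workers() -> Vec<usize> {
    let globals = Arc::new(Globals { crate_name: "demo".to_string() });
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let globals = Arc::clone(&globals);
            thread::spawn(move || {
                GLOBALS.with(|g| *g.borrow_mut() = Some(globals));
                with_globals(|g| g.crate_name.len())
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```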
A: It's interesting that we have no — never mind. It's actually not that much state that's using these, right? There's just a few fields. No, no — it's not like it's every field there.
A: Okay, so I guess this is probably a case of auditing — it's probably pretty okay, is my guess. So: looking for whether there are sets of fields that are changed together, and whether we want to group them into one lock, or something like that. All right. And also looking at whether there's some — I guess the one other thing that occurred to me that we might consider: you said there were a number of different, less general primitives, like Once, that were introduced, and I don't know how thoroughly we've used those.
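The "group fields that change together under one lock" audit outcome might look like this — the struct and field names are hypothetical, echoing the diagnostics example from earlier:

```rust
use std::sync::Mutex;

// Fields that are always read and written together get folded into
// one struct behind a single lock, restoring the RefCell-era
// atomicity that per-field locks lost.
struct DiagnosticState {
    err_count: usize,
    delayed_bug_count: usize,
}

struct Handler {
    state: Mutex<DiagnosticState>,
}

impl Handler {
    fn record_delayed_bug(&self) {
        // One critical section covers both updates: no thread can
        // observe one counter bumped but not the other.
        let mut s = self.state.lock().unwrap();
        s.err_count += 1;
        s.delayed_bug_count += 1;
    }
}
```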
A: So we have this cache field, and when you ask for the predecessors, we will lazily compute them. Then, if somebody gets a mutable reference to the basic blocks, such that they might have changed, we invalidate and clear the cache. And there's a RefCell around this cache, I guess — so while you are iterating over the set of predecessors, I think if you call basic_blocks_mut you deadlock, or panic, depending: in the case of a lock, you would deadlock, which is not great.
A: This scheme is annoying. Part of the reason — the one other reason it exists, I think — is that we wanted to pass the MIR along between optimization passes, and we didn't want them all to recompute the predecessors separately when nothing had changed. So this cache kind of passes between unrelated bits of code and lets them reuse the work they did. Of course, we —
A: This feels right for some form of refactoring to me. I'm not exactly sure what the best approach would be, but either we could just recompute it — which may not be the end of the world — and just remove this cache altogether; or we could move the cache out somewhere else; or, if nothing else, I wouldn't mind if we just returned an Rc here or something, so that you get a snapshot of what the predecessors were at that time — and if you change the thing, well, so be it. Then you wouldn't have these locks; it wouldn't be as complicated.
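A's Rc-snapshot suggestion could look roughly like this — with simplified stand-in types for the MIR body, and the cache invalidated on mutable access:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Simplified control-flow graph: edges[b] lists the successors of
// block b. Callers get an Rc *snapshot* of the predecessor map, so
// they can keep iterating even if the body is later mutated (which
// merely drops the cached value).
struct Body {
    edges: Vec<Vec<usize>>,
    predecessors: RefCell<Option<Rc<Vec<Vec<usize>>>>>,
}

impl Body {
    fn predecessors(&self) -> Rc<Vec<Vec<usize>>> {
        let mut cache = self.predecessors.borrow_mut();
        if cache.is_none() {
            let mut preds: Vec<Vec<usize>> = vec![Vec::new(); self.edges.len()];
            for (b, succs) in self.edges.iter().enumerate() {
                for &s in succs {
                    preds[s].push(b);
                }
            }
            *cache = Some(Rc::new(preds));
        }
        cache.as_ref().unwrap().clone() // cheap: just bumps the refcount
    }

    fn edges_mut(&mut self) -> &mut Vec<Vec<usize>> {
        // Mutation may change the graph, so drop the cached snapshot.
        *self.predecessors.borrow_mut() = None;
        &mut self.edges
    }
}
```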
A: Yeah, I think these are fine. We can document it, but I'm pretty sure they are indeed atomically accessed bits of state: you come in with one type T, you get the lock, you find out if you've seen it before — if so, return the existing pointer; otherwise you add a new pointer — stuff like that. So it should be okay. I guess this interner set is just like —
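The check-then-insert interning pattern A sketches stays atomic because both steps happen under one lock. A minimal stand-alone version (not rustc's sharded interner):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Take the lock once; within that single critical section either
// return the existing pointer or insert a new one. The "check" and
// the "insert" can never be interleaved by another thread.
struct Interner {
    map: Mutex<HashMap<String, Arc<str>>>,
}

impl Interner {
    fn intern(&self, s: &str) -> Arc<str> {
        let mut map = self.map.lock().unwrap();
        if let Some(existing) = map.get(s) {
            return existing.clone(); // seen before: same pointer back
        }
        let interned: Arc<str> = Arc::from(s);
        map.insert(s.to_string(), interned.clone());
        interned
    }
}
```

Interning the same string twice hands back pointers to the same allocation, which is the whole point of the structure.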
A: So this is good — this is the area where we can move things into queries, a lot of the time. Maybe — skimming through here, I don't immediately see a ton of mutable state yet; it might be that it's been refactored.
A: Yeah — these ones are the most worrisome, but might be challenging. I imagine it's public because Miri uses it — pretty sure it's a Miri thing. I guess you could imagine, at least, that we reference some Miri data structure that's opaque to us and has private fields, and Miri —
A
You
know
how's
the
locking
in
it
stuff,
so
that
it
can
then
like
be
more
contained,
I'm
not
accessible
to
everyone
in
the
whole
world.
That
would
be
nice
yeah.
That's
good,
I
think
that's,
probably
the
the
minimal
strategy
is
at
least
trying
to
get
rid
of
the
public
stuff.
It's
not
only
Mary
Dahl,
so
maybe
I'm
wrong
about
that
whole
theory.
That
sounds
like
an
eerie
type
for
sure.
A: Okay, so let me explain what's happening here. This has got a long history. Maybe we can refactor this — there's actually an issue on refactoring it. Let me see... a "steal" issue... I don't know, it's not in my history, apparently. So the problem is this. Actually — yeah, okay. When we compute the MIR for a function, it goes through a couple of phases, and the idea is: we first build the MIR — that's one query — and then we want to make some changes to it.
A: Then we call another query, which takes that built MIR and mutates it in place to add more information — like the borrow check. Then we patch the MIR, we do optimizations on the MIR — we do all of that in place. And what we wanted was to have some queries for the intermediate stages, so you could say "give me the MIR of this function up to this point" — but we didn't want to clone the MIR in between each of those queries. So, like, one simple way to handle it would have been, for any given function —
A: Let's say there are four queries, and they each return a MIR data structure, maybe in an Arc or something. Then, if the same MIR is just being passed through unchanged, that would be fine — you'd just have four pointers to the same Arc. But if you're going to make changes to it, you would have to deeply clone the MIR, and you'd wind up with, like, four copies of the same function, which would be wasteful both in time and also in memory.
A
So
what
we
did
was
this
kind
of
horrible
heck,
which
is
the
steal
heck
and
what
happens
is
the
result
of
the
query?
In
theory,
query
results
should
be
deeply
immutable,
sort
of
nice
little
data
structures,
but
the
result
of
this
is
an
arc
arc.
Steel,
basically
and
an
art
steel.
A: I'm just going to call them mir_build and mir_optimized, even though I think those names are wrong — mir_optimized and something else. So the idea is: the borrow check, for example, invokes mir_build, because it wants to work on pre-optimized MIR. The other code invokes mir_optimized — sorry, mir_optimized also invokes —
A
Invokes
Muir
build
and
steals
the
result,
so
we
can
make
changes
to
it
so
now
there
would
be
this
problem
if
you
called
me
optimize.
First,
the
Muir
build
would
be
stolen
and
if
you
then
try
to
do
a
borrow
check,
you
would
panic,
because
the
value
you're
trying
to
read
has
been
stolen,
and
that
would
be
so
that
the
heck
is
that
what
we
optimized
does
is
it
calls
the
Baro
check
before
it
steals
the
results
of
it?
A
You
know
that
you've
done
the
Baro
check
and
then
you
can
steal,
and
you
know
that
if
you
later
called
bio
check
again,
you're
just
gonna
get
a
cached
result,
so
that
won't
be
a
pal.
So
it's
kind
of
horrible
and
we
have
a
little
like
issue
exploring
the
different
solutions
you
could
do
to
this,
ranging
from
the
really
like
expansive
of.
A
Maybe
we
should
add
some
notion
of
linear
queries
and
something
something
to
I
guess
we
could
probably
pack
some
more
of
this
stuff
into
one
query
and
carefully
like
just
not
have
this
whole
many
queries
that
return
the
near
and
intermediate
States,
but
pack
it
into
one
query
that
does
all
the
things
and
thus
there
are
no
intermediate
states.
That's
probably
the
better
approach,
that's
the
issue!
Yeah,
it
looks
like
it
so
I
think
we
should
probably
refactor
this
I.
Don't
know.
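The Steal mechanism as described — read freely until one designated consumer takes the value, after which further reads panic — can be modeled minimally like this. The type shape is an assumption based on the discussion, not rustc's exact definition:

```rust
use std::sync::RwLock;

// A query result that is nominally immutable, but whose contents one
// consumer may take by value exactly once. Reading after the value
// has been stolen panics — which is why, above, mir_optimized must
// run the borrow check *before* it steals.
struct Steal<T> {
    value: RwLock<Option<T>>,
}

impl<T> Steal<T> {
    fn new(value: T) -> Self {
        Steal { value: RwLock::new(Some(value)) }
    }

    fn borrow(&self) -> std::sync::RwLockReadGuard<'_, Option<T>> {
        let guard = self.value.read().unwrap();
        assert!(guard.is_some(), "attempted to read stolen value");
        guard
    }

    fn steal(&self) -> T {
        self.value
            .write()
            .unwrap()
            .take()
            .expect("value already stolen")
    }
}
```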
A: Unlike the predecessors case, for example, I don't think you can easily refactor this away. You could keep the same pattern but not hold the lock while you return. I guess what you'd have to do — okay, what you could do is grab the lock only briefly: you can basically move the value out of the lock, so it's not a real lock that you hold the whole time.
A
You
see
what
I
mean
so
that
you're
you're
not
really
holding
a
system.
Look
for
an
unbounded
amount
of
time,
you're!
Just
holding
a
logical
luck.
A
You
basically
implement
Russell
on
top,
but
with
an
atomic
access,
and
then
you
could
have
the
same
pattern
and
I
think
it
would
be
okay.
The
date
like
the
thing
that
could
go
wrong
would
be
if
somebody
was
stealing
while
you
were
reading,
but
that
shouldn't
happen.
Like
that's
a
bug
in
the
structure
of
the
queries
already,
no
matter
what.
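The "logical lock" A sketches — take the value out under a brief lock, work without holding it, put it back — might look like this. The `Slot` type is hypothetical, not compiler code:

```rust
use std::sync::Mutex;

// Instead of holding a real lock for an unbounded stretch of work,
// lock briefly to move the value out, work on it with exclusive
// ownership, then lock briefly again to put it back. Observing `None`
// in between means two users overlapped — a logic bug, not a race.
struct Slot<T> {
    value: Mutex<Option<T>>,
}

impl<T> Slot<T> {
    fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Short critical section: take the value out.
        let mut v = self
            .value
            .lock()
            .unwrap()
            .take()
            .expect("slot already in use (logical-lock violation)");
        let r = f(&mut v); // long-running work happens with no lock held
        // Short critical section: put it back.
        *self.value.lock().unwrap() = Some(v);
        r
    }
}
```

One caveat of the sketch: if `f` panics, the value is lost rather than restored, which a real implementation would want to handle.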
A
Probably
seems
like
an
audit,
but
it's
probably
okay,
kind
of
situation.
This
is
interesting.
A
But
these
are
all
private
fields.
My
guess
is:
we
can
audit
all
the
youth
sites
and
see
that
they're
kind
of
making
sense.
The
main
question
would
be
like
you
know
these
things.
These
are
these
look
like
they
have
to
do
with
one
another,
but
maybe
not
so.
Are
there
locks
on
data
that
has
a
connection?
B: I didn't — yeah, I didn't catch that the first time through, right. I do think Lock, like, degenerates if you compile rustc in sequential mode or whatever. Oh, it's —
A: To use Lock or something seems strange otherwise — surely it's not that one field is accessed in parallel while the rest aren't.
B: So this is — this is kind of a different story. This is not in itself a piece of shared state per se, but rather a variant of typed arenas for use in sharing, and there's a bunch of unsafe code that is pretty poorly documented and looked very suspicious — but I think it's okay. So this is mostly just "this stuff needs to be cleaned up."
A: Do we maybe just —
A
I
mean
it's
the
slow
path
of
interning
but
still
like.
We
want
the
sharted
maps,
no
matter
what
rate,
because
they
tell
us
if
we've
got
a
value
of
them
like
an
equal.
So
already
you
have
in
turn
that
I
value,
but
this
is
more
the
fallback
where
you
try.
You
look
in
the
map.
If
it's
not
there,
then
you
allocated
in
the
arena
and
I
guess
we
probably
do
want
to
yeah
I
guess
you
can
interpret
this
comment
in
two
ways.
A: Would it go away entirely? Yes. Is it worth my time? I think we should not go deeply into these at this moment. Well, I think the answer is probably "audit"; it may be that it yields up a refactoring — I mean, it's probably something that... ah, the jobserver. So, one of the things in general that seems very useful — the jobserver. The jobserver is this thing that — you run cargo, or, I guess not cargo, I —
C: So what this is, is essentially a system-wide-ish queue which you can register interest in. It's essentially a file descriptor which you write bytes into, and then you block on getting a byte from it — at least on Linux; it works differently on Windows. But the general idea is that we don't want to use, like, two thousand threads at the same time.
C: So if a thread wants to do work, in general it'll block on acquiring a token; it'll get the token, go off to work, and then, once it's done, it will release its token back into the jobserver. For the most part this doesn't really affect us, because we acquire all the tokens when we start threads at the beginning, and drop them when we stop the threads at the very end.
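The token protocol C describes behaves like a counting semaphore. This is a sketch of the semantics only — the real jobserver works over inherited pipes/file descriptors precisely so it can coordinate across processes:

```rust
use std::sync::{Condvar, Mutex};

// A fixed pool of tokens, a blocking acquire, and a release that puts
// the token back. Matching C's point below: acquire is the only
// operation — there is no "try" and no "how many are left".
struct TokenPool {
    tokens: Mutex<usize>,
    cond: Condvar,
}

impl TokenPool {
    fn new(n: usize) -> Self {
        TokenPool { tokens: Mutex::new(n), cond: Condvar::new() }
    }

    // Blocks until a token is available.
    fn acquire(&self) {
        let mut n = self.tokens.lock().unwrap();
        while *n == 0 {
            n = self.cond.wait(n).unwrap();
        }
        *n -= 1;
    }

    fn release(&self) {
        *self.tokens.lock().unwrap() += 1;
        self.cond.notify_one();
    }
}
```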
A: Well, we have some choice — like, we could not eagerly acquire tokens for that many threads. Sure, yeah. So, I mean, until — like, with the LLVM —
C: To my knowledge there is no way — the jobserver crate and the underlying primitives don't allow you to, like, ask if there are ten left. You can only do a blocking read on the underlying primitive, right — it's very constrained in what you can do. You can only ask to acquire, and that's an inherently blocking operation; you can't try-acquire or anything like that. So, right. Well —
A
Yeah
so
at
minimum
we
should
document
what
we're
doing,
and
maybe
we
can
do
better
as
well.
I
personally
find
his
code
kind
of
mystifying
in
school,
but
I
remember
complaining
about
this,
but
it's
okay.
Well,
not
this
particular
one
safe
is
sort
of
alright,
but
you
know
it.
It's
not
like.
This
is
sort
of
very
self
documenting
to
me.
What
is
this
module
and
why
does
it
have
four
main
functions.
A: When we actually get around to running them — running the actual lints — we take the whole vector out. So even if we put it back later, they're obviously not meant to run concurrently; that amounts to a poor man's lock. So, I don't know, I think we could document it and live with it. What we would — this is not — we know we don't like the lint system.
A
If
we're
gonna
allow
plug
in
some
mechanism,
which
he
and
I
have
discussed
for
actually
already
kind
of
exists
for
the
plugins
to
sort
of
control,
the
implementation
of
the
query
so
the
way
queries
work
therein
like
a
big
day
structure
of
function,
pointers
under
the
hood,
so
it
would
be
pretty
plausible,
for
example,
for
plugins
to
take
the
existing
the
the
default
query
provider.
The
default
function
you
find
in
there
would
be
the
compilers
one
that
it
returns.
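The function-pointer provider table A describes can be sketched as follows — the query names and signatures here are invented for illustration; a plugin swaps an entry while keeping the default reachable:

```rust
// A table of function pointers through which queries dispatch.
// (Invented single-entry table; the real one has many queries.)
#[derive(Clone, Copy)]
struct Providers {
    type_of: fn(u32) -> String,
}

// The compiler's default provider for the hypothetical `type_of` query.
fn default_type_of(id: u32) -> String {
    format!("ty#{id}")
}

// A plugin-style wrapper: do something extra, then delegate to the
// default provider it captured.
fn traced_type_of(id: u32) -> String {
    let result = default_type_of(id);
    format!("[traced] {result}")
}

fn provider_demo() -> (String, String) {
    let mut providers = Providers { type_of: default_type_of };
    let before = (providers.type_of)(7);
    providers.type_of = traced_type_of; // plugin overrides the entry
    let after = (providers.type_of)(7);
    (before, after)
}
```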
A: That would be good to try to do. We found at least a few small actionable ones — relatively small, I think. The Steal items are mostly around MIR; it seems like the predecessors and Steal, and maybe some of the smaller audit-and-document items. So we could get started and see how it feels. Yeah.