From YouTube: Lang Team Sync: Unsafe Code Guidelines
Description
- Unsafe Code Guidelines team reports on their progress and overall plans [1]
- We discuss PR #58739
The pre-meeting proposal can be found here [1] and the post-meeting notes can be found here [2].
[1]: https://github.com/rust-lang/lang-team/blob/master/working-groups/unsafe-code-guidelines/notes/2019-03-14.md
[2]: https://github.com/rust-lang/lang-team/blob/master/minutes/2019-03-14.md
A
Okay. I know you all see this first, so I have this write-up, and I guess I'll walk through it a little bit, and then feel free to interrupt me or ask questions. I don't know how much everybody knows this.
A
Sorry, that's a small child talking in the background. "That's my ball!" It's adorable.
Okay! So, the unsafe code guidelines group: we've been working for some time now, and we've adopted a sort of different way of operating, where we have a repository. In this repository there is... hold on, I can even open it.
A
There is a kind of draft reference which includes stuff that we have reached some consensus on, and in some cases it includes open questions. So, for example, for structs and tuples: the part that talks about the layout of structs has a disclaimer that it is not really ratified.
A
For example, what we've been trying to do, because there's so much to talk about, is to have at any one time an active area of discussion. This gets proposed by a PR which describes the area. Like, this is Ralf's proposal for validity invariants: it talks about some of the goals and, if you actually look at it, it lays out some of the ways to break the conversation up into threads.
A
Then we discuss on the issues, and when we reach a consensus, somebody writes that into a PR against the reference, and sometimes we'll open follow-up issues for stuff that still needs to be discussed. We have weekly meetings on Zulip, where we mostly look for stuff that has reached consensus, trying to make sure it keeps going. I feel like it's working pretty well. I think the main problems have been, one, we haven't been drawing enough attention to these conversations, and two, we've all been pretty busy.
A
I don't know, that's my perception. So, our general roadmap: we've been focusing on trying to get to a first RFC, and that RFC would cover two things that are kind of interlinked: the layout rules for data structures, and validity invariants. The layout rules would be stuff like... an example of something an unsafe code author might want to know is: can I assume that an Option of an extern "C" fn is compatible with a C function pointer, or not? And similarly, can I assume something...
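The Option-of-function-pointer question can be checked concretely. As a rough illustration (an assertion about what the compiler does today, not by itself a stability guarantee):

```rust
use std::mem::size_of;

fn main() {
    // An extern "C" fn pointer can never be null, so `None` can be
    // represented as the null pointer: the Option adds no tag byte,
    // which is what makes it plausible to pass across an FFI boundary.
    assert_eq!(
        size_of::<Option<extern "C" fn()>>(),
        size_of::<extern "C" fn()>()
    );
}
```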
A
What can I assume about the layout of my structs, the order the fields might come in, and so on? That's not to mention documentation. So, like, in the case of enums, there were some accepted RFCs, and the text of those is kind of imported into the guidelines, saying: when you have a repr(C) enum, it has the following rules. That's one of the few parts of the thing that does not have the disclaimer of not being ratified by an RFC. Validity invariants are more about what has to be true for values of the types.
A
So, for example: when can things be uninitialized, or must references be aligned, stuff like that. Right, so, where we are now: we've kind of gone through a lot of the layout. Although Taylor Cramer got concerned about some of the things, which is fine, the point is we went through and talked about the different aspects of layout, and we're more or less done, I think. We're still mopping up a few lingering PRs, like about array layout. Yeah, I kind of sketched out some of what we said here.
A
I can go through it in depth; I guess I won't at this moment. Currently we're talking about validity invariants, and there we've kind of narrowed down that, indeed, the boring stuff is boring. The idea of the boring stuff is: for things like structs and arrays, it's fairly easy to define what the invariant associated with them is, because it's basically the union of the invariants of their fields. The harder stuff is stuff like integers and references, and there we have a whole lot of options.
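The "boring" rule can be sketched as follows. The checker functions here are hypothetical illustrations of the composition principle, not part of any compiler or Miri API:

```rust
// Validity of a product type is just the conjunction of its fields'
// validity. Hypothetical byte-level checkers, for illustration only.
fn bool_is_valid(byte: u8) -> bool {
    byte == 0 || byte == 1 // bool's invariant: exactly 0 or 1
}

// A struct { a: u32, b: bool } is valid iff each field is valid.
fn pair_is_valid(a: [u8; 4], b: u8) -> bool {
    // u32: any four initialized bytes are a valid value.
    let _any_initialized_bytes_ok = u32::from_ne_bytes(a);
    bool_is_valid(b)
}

fn main() {
    assert!(pair_is_valid([0xFF; 4], 1));
    assert!(!pair_is_valid([0xFF; 4], 2)); // 2 is not a valid bool
}
```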
A
So where we are now is trying to take a lot of the exploration we've done and consolidate it into some kind of write-up or summary that will tell us what the different roads we might go down are. I don't think we're at the point where we feel like we have consensus on very much. Is that right? Well...
G
Right, yeah, yeah. I mean, unions are also among those; they're not as bad as integers and references, but also certainly nothing we have an agreement on. And also, I guess maybe it's clear for everyone, but the reason layout and validity are intertwined is because we exploit validity invariants when doing layout optimizations.
G
So, like, we can't talk about the layout of an Option without talking about the validity invariant of what it contains. The question of, sorry, what the layout of an Option of a union is depends on whether the validity invariant for the union says the things it has to say for layout-optimization convenience and stuff like that. So you can't really talk just about layout without talking about validity invariants, and vice versa. Yes.
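A small illustration of validity feeding layout. The exact sizes below reflect current compiler behavior; only the Option-of-bool case corresponds to a promised optimization:

```rust
use std::mem::size_of;

// A union makes no promise about its bytes, even when its only field
// is a bool, so it offers no "niche" for the Option discriminant.
union OpaqueByte {
    _b: bool,
}

fn main() {
    // bool promises to be 0 or 1, so byte values 2..=255 form a niche
    // that can encode `None` without growing the type.
    assert_eq!(size_of::<Option<bool>>(), 1);
    // No niche available: the Option needs a separate discriminant.
    assert!(size_of::<Option<OpaqueByte>>() > size_of::<OpaqueByte>());
}
```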
A
So, I guess, the things I wanted to raise for feedback: first of all, of course, we can talk about some of the details, and I would love feedback on the technical specifics. But also, I was trying to think of what the higher-level issues are around the process, or constraints that might help us go forward.
A
I know, unfortunately, we didn't get far enough with the validity-invariant write-up before this meeting to lay out: if we think we would value X more, we might take this direction. But it seems like there will be some questions, some kind of trade-offs, around, obviously, how many layout optimizations we can do compared to what amount of code we break. So one question might be backwards compatibility: trying to get a sense for why backwards compatibility matters here. I mean, if there's extant unsafe code in the wild, to what extent can we break it and declare it to be UB? Maybe it works today and maybe it doesn't, I don't know; in some cases it's a common pattern, right. And I think a mitigating factor there, one of the things we've been trying to stick to, is ensuring that we can dynamically check...
A
...most everything we say, especially around validity. Or are those safety invariants? No, it's the validity invariants; we need a different word for them, yeah. So basically that is a strong consideration for us. That is to say, you could imagine invariants that are very hard to dynamically check, that would require you to walk unbounded amounts of memory at runtime, stuff like that.
A
That's right, and we should talk about whether that's okay. That might mean that, in practice, people aren't able to do as strong a dynamic check as we wanted, because they would have to do approximations for performance. On the other hand, there are dynamic checks that we already won't be able to do, probably; like, we would have to randomize struct layout and so on to really test against all the things. Maybe that's okay. And also, in terms of the idea of an RFC...
A
Is there someone who'd be willing to take minutes, or scribe some of what we're saying, just the highlights? Isn't that what the video is for? The video is for detailed minutes; I'm just thinking it might be nice if somebody would take some notes. But yeah, we can just run with it. At minimum, let's try to focus our conversation. So: should we talk about RFCs and what form they should take?
F
That was what I thought about the futures discussions: that I could just point people back at all the futures RFCs and the issues we'd had where we made all those decisions. And it turned out that people didn't want to go read those; they wanted them to be in the RFC, and to discuss them all on that thread again. So...
A
I have not seen such a thing anywhere. Well, we're actually talking about slightly different things, I think. If I may interrupt: I think Taylor's talking about one thing, which is justifications and alternatives, like detailed coverage of how a decision was reached. And there's another, which says: well, when we're talking about the layout of structs, we have to refer to the validity-invariant rules, and that might be a separate RFC, but there is a place you can go to see all those pieces. I think we want both.
A
This also fits with... I would just like us to shift as much as possible to producing these sorts of artifacts: good summaries and detailed accounts of the trade-offs, so that when we come back, we can read them. I'm trying to think how it would fit. I think we should have a reference, and I think we should have links, probably, some way; I don't know how they should exist.
A
One other thing I want to emphasize: I don't want a situation where... I guess this isn't that dangerous, but I'm concerned about a lot of input coming in very late, after a lot of work has been done to produce a coherent whole with a lot of links. I would much rather we draw that attention early. I'm not trying to say we shouldn't do RFCs; I think, if we've done a good job, the RFC should be pretty robust anyway, yeah.
I
One reason documenting the "why" actually is important is that it's not trivial to infer whether the motivation is ABI passing in a function call versus the in-memory representation. These are two different use cases that have different constraints, I think, and thus it's worth documenting which one is intended.
A
I think we should basically have a reference and a justification, which mostly has the shape of an RFC. Look, I'm not entirely sure about that point, because I think these things might evolve, and at any given time we want to be able to go back and look at the combined result as well, perhaps, as the individual decisions. But think about struct layout: let's say we have the first RFC for how you do structs; then, down the line...
A
...we say: okay, we're willing to guarantee the layout in more cases than before. That's another RFC. But I would ultimately like it to be possible to go find the justification for the union of all the things. So: a justification and a reference, and basically have them hyperlink to one another, like with footnotes, essentially saying: yeah, this is why we did this; it was decided here.
A
I think that's probably right, Ralf. I didn't necessarily mean two parallel hierarchies, but I do think you'll want a certain amount of specification and then a certain amount of justification for the decisions. It won't be that you interleave back and forth every other sentence, but they might be within one file, for example, maybe at a per-section level. I don't know.
A
You imagine people will want to... We can also play around with producing multiple outputs that have different amounts of material. We could do that annotated-Linux thing, where you have the source code on one side and the comments on the other. Yeah, I guess people have no idea.
A
That's sort of what we're trying to do live, right. But I was thinking: maybe the answer is, in any case, what is an RFC here? There's not an implementation period. Maybe the RFC should be of something where we proceed and do a final examination; the RFC is that we're starting to do this process, okay.
A
What should we do? An example, which may or may not be one that you find compelling: there's no way presently to say that the layout of a given struct is linear, apart from repr(C), which has other repercussions that you may or may not want. We could identify places where it would be useful to have more options. I don't necessarily think it's the role of this working group to... I don't know, we could imagine opening RFCs about those changes, but it feels a little different.
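As a rough sketch of what repr(C)'s "linear" guarantee buys, the field offsets below follow the C-style rules; without the attribute, the compiler makes no ordering promise:

```rust
use std::mem::size_of;

// repr(C) guarantees declaration order plus C-style padding;
// the default repr is free to reorder fields.
#[repr(C)]
struct Linear {
    a: u8,
    b: u32,
    c: u8,
}

fn main() {
    let x = Linear { a: 0, b: 0, c: 0 };
    let base = &x as *const Linear as usize;
    assert_eq!(&x.a as *const u8 as usize - base, 0);
    assert_eq!(&x.b as *const u32 as usize - base, 4); // padded to u32 align
    assert_eq!(&x.c as *const u8 as usize - base, 8);
    assert_eq!(size_of::<Linear>(), 12); // trailing padding to align 4
}
```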
A
Okay, well, dynamic checking. Oh, Ralf, maybe you want to say how you've been thinking about it; I think we've been taking it as a given. Oh, you know what, the other area was just breaking changes, which is, in my opinion, interconnected with... that's a somewhat weighted terminology, but yes.
G
I guess you're aware, like, this is something I've been working towards with Miri. I mean, besides what was already said about this being useful for programmers, this is also extremely useful for researchers who want to do something with Rust. Because a dynamic checker for UB that exactly characterizes UB is exactly the kind of artifact that you need, as a researcher, to do proper verification of anything. And one of the reasons this is so hard for C...
G
...is that there is no such thing. So there are long-lasting benefits from that. Of course, for a research project it doesn't really matter how efficient this is to implement; it just matters that it can be implemented in principle. And the current situation is that Miri can check some things, but it'll never be able to check everything; in particular, at some point non-determinism comes in, and then...
G
...if the correctness of your program depends on which address the memory allocator picks, it's always going to be guesswork. So there's a lot of value in being able, at least in principle, and in as many cases as possible in practice, to check that small and medium-size-ish programs and test suites actually conform with the UB specification. And that would be...
G
I mean, making this actually feasible is useful for somebody who wants to run their test suite, as opposed to, like, firing up Coq and proving it manually, which most people obviously aren't going to do. For the vast majority of people, the usefulness of this hinges on actually being able to run it practically, and not just in principle. So...
A
A couple of thoughts came to mind while you were talking. First off: a point that I, at least, am still super excited about is doing this eventually in a bigger context, like Valgrind or some other tool; maybe not Valgrind, but something that runs with real C code and so on, on an arbitrary binary. And I think we've been trying to design things so that that will work.
A
But it occurred to me that, actually... I said there's no implementation from this work, and Centril correctly pushed back, but you could push back even harder by saying: actually, there is, which is the dynamic checker. Maybe it doesn't apply as much in the layout section, but it certainly applies to the dynamic verification.
G
I haven't done a study of why Miri is slow. Stacked Borrows is, I mean, it's pretty slow, obviously. Every single reference assignment, like writing x = y where x or y are struct types containing a reference, is all sorts of extra work on these data structures, and it's completely ridiculous. I mean, I was not concerned with performance at all when I did Stacked Borrows.
G
So that's a huge thing. And of course the validity checks mean that every single assignment also walks over the value that gets copied and makes sure that, like, oh, this eight-field struct still consists entirely of booleans, or something. I just checked it, and I'm going to check it again, because I don't know that I already checked it; there's probably lots of low-hanging fruit there. But in terms of some of the things we've been discussing: what if validity checks had to actually recursively follow references, instead of just doing a shallow thing?
A
I was just thinking of a related thing, which I just want to throw in there before I forget: it also might be a means for us to test, or to answer, how important these other invariants are. If we could run the tests in both modes and see how much additional UB we detect in one mode or the other, that just seems like useful data to have, to get a notion of whether making a recursive guarantee on validity matters.
F
It seems unlikely that whether your program is UB or not depends on something that is more than five reference layers deep, right? But it seems very reasonable to imagine that it is one or two deep, right? This seems very hard to reason about. I'm not thinking about this tool as something I care about reasoning about; I think about it in terms of how easy it is to use and to get useful results from, right. Oh, I was thinking...
G
There have been several proposals, but... Yes, there's kind of an expressiveness cliff between the two-phase borrows we have and the ones implemented in Miri, and this wasn't realized when implementing them: that there's actually a meaningful difference in power. At least that's my understanding from talking with you guys: there's actually a meaningful difference in power between what we currently have and what we would have after this PR.
G
I mean, I was actually quite surprised that this is where things stood. But basically I was saying maybe we could try to be more deliberate about this, and that was a week before stabilization, so of course it was way too late to change anything. And I guess currently we are trying to figure out what the best way is to be more deliberate about how we make borrows more powerful, and to understand better what these extra powers are. So...
H
So this is something where, if we continued working at it, there are some spitballed proposals to model this. It's more a question of: we don't yet know how, or whether, we will be able to formally model the current behavior, and in the meantime we have a model for a subset of the current behavior. Is that accurate?
G
The model for the subset of the current behavior is trivial; it's a very trivial extension of the model without two-phase borrows, right: it's less than a 10-line diff in Miri. This extra thing is a huge extension. It's a much bigger step from the restricted two-phase borrows to the full two-phase borrows than from no two-phase borrows to the restricted two-phase borrows, so to speak.
H
So would it be possible to have a model... are there models on the table that are less expansive than what two-phase borrows can do, and are closer to modeling the current behavior, not an expansion of it? Or an alternative model that we can use that is slightly bigger than the current behavior, but which we limit by saying: this is what you're actually allowed to do, even though the model might allow more?
G
I think all of them allow extensions of the current behavior, and I haven't... I mean, this started as kind of a mentorship, and it ended, so I haven't spent a lot of time trying to understand what exactly the envelope of allowed behavior of each of the proposed models is. But I think all of them would allow significantly more than what is currently allowed; I don't know how much. Yeah.
I
There's a fair number of ad-hoc restrictions on the current implementation of two-phase borrows, just because... The summary is: we didn't want to have users depend on the MIR codegen details, so we were trying to come up with a set of restrictions that kept it as close as possible to the original source code, in terms of what you would mentally model in your head as happening here.
A
Sorry, a quick question to make sure I understand: when you say that they all allow significantly more, Ralf, you mean modulo the behavior that two-phase borrows allows today, which includes this one case in which we have a pre-existing borrow? Oh right, I guess we should maybe write that out too. The case in question is: you borrow something, and then, with a two-phase borrow... I guess it would be best represented as a method call.
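The canonical method-call case can be sketched like this; it's the standard motivating example for two-phase borrows, not the specific code under discussion in PR #58739:

```rust
fn main() {
    let mut v: Vec<usize> = vec![1, 2, 3];
    // Naively desugared, this is `Vec::push(&mut v, v.len())`: the shared
    // borrow for `v.len()` overlaps the mutable borrow of the receiver.
    // A two-phase borrow starts out "reserved" (shared-like) and is only
    // activated as mutable when `push` actually executes, so this compiles.
    v.push(v.len());
    assert_eq!(v, [1, 2, 3, 3]);
}
```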
G
They say, you know: you're creating multiple two-phase borrows, and then you activate one of them, but you can't use the other anymore. And then there are open questions about what happens if you have a two-phase borrow inside a two-phase borrow, all the nested stuff, and how activation works on trees, and things. All of these are questions that we don't even really want to answer, but they naturally come up if we want to define something reasonably coherent here. So...
A
The actual proposal at hand, right, is to introduce a future-compatibility warning for this case, which I believe we planned. We already have a number of future-compatibility warnings due to the transition to NLL, where we basically closed a lot of bugs, essentially, and so this would be sort of rolled into that general set of warnings, I think.
B
So I think there are sort of two things. At least when we discussed this originally, it was clear to me that we would treat this as a bug, and it seemed clear to me that there would be some regressions, which seemed inevitable. But I think it's rather easy to fix, and people will just naturally do it over a warning period. It doesn't have to be very, very quick.
B
It could take some time. I think it's entirely feasible to change this, and I think it's important that we do it, because it's an important part of the language. It's not some "oh, we made a mistake in some attribute or something"; it's a fundamental part of the aliasing and dynamic semantics of the language.
H
I do feel like we have crater for a reason, and it shows the tip of the iceberg. We see that this is breaking real existing code, and that shows that there are people who want to use the model that this allows, as opposed to a stricter model that would not allow it. And we have to assume that, if we're seeing breakage on crater, there is potentially more breakage we're not seeing. We have very strong stability guarantees; we don't do regressions in stable. And at the end of the day, I feel like the model...
G
I don't know. The first regression we discovered, the first piece of code we found in the wild that contradicted the model, was code I wrote as part of the implementation of the model. I didn't write that code because I wanted the model to be like that; I wrote it because I was just writing code with references and didn't think much about it. If the compiler had complained, I would have changed the code. So I wouldn't take this as evidence that people want one model or...
H
I certainly think that would, past tense, have been a good thing to do. Perhaps this is an argument that, in the future, when we're introducing changes of this magnitude, we should have a model before letting the change in; I don't know. This was certainly one of the major flagship features of the 2018 edition, and it may well be there was a timing issue here, of maybe we should have, you know, waited until we had a model. But this is in stable Rust at this point.
A
I mean, we certainly have plenty of precedent... so I would say a couple of things. We have a lot of precedent for fixing bugs and trying to migrate people, and we've always taken up the question of stability as meaning stability in practice. But we have also always taken the position, and we have an accepted RFC about it, that there's a need for us to become more and more strict with a model, and a recognition that doing so may sometimes require changes around the edges.
A
I definitely agree with that, and I think we should strongly consider our processes in light of these controversies, because I'm very sympathetic to what Josh is trying to say. But I also think there's a lot of precedent for making these sorts of changes, and it feels like the damage here is not so high, and the gains to me are potentially very large: getting a model that actually gives room for the optimizations.
B
I don't feel I ever consented to having this in stable this long. If I had been aware that there were regressions, I would never have allowed this in stable. This feels to me like there was a discussion that we would regard this as a bug, and we never signed off on this being the real behavior.
H
The only reason I think this is even under consideration at all... If this were anything other than accidental, you know, if we had necessarily intentionally included this particular thing in the model, then I think we wouldn't even have something to discuss: it's a stable regression, closed. The only reason we're actually talking about it at all is the fact that we're not necessarily saying this was a deliberate behavior of NLL, and we're debating the model. So...
F
Maybe it's too much to dive into this conversation now, as part of this more focused discussion, but I think I disagree pretty strongly with your model of stability. I think there are a lot of changes that we make that break people in practice, in more significant ways than things like this, and that are allowed under our stability promises. And for...
H
...I don't know how easy it will always be to fix in practice; we've looked at a lot of toy examples, and we found... I want to be clear on one thing: I'm not trying to suggest an absolute hard line, "we can't ever do this, period." I think what I'm more getting at is: what we seem to have come up with here is a convenient model, and it would require us to break this code.
H
If we're going to consider making a deliberate decision to break that code, then rather than going down that road, I feel we should say: here is another model that fully encompasses this code, and here are the optimizations that model would not allow us to make. Let's actually see both sides of the trade-off, rather than saying: well, we have one model, it seems to work, and it would require us to break this code.
F
But we can't have that conversation while the code is sitting there, and while we're allowing people to write new code that could potentially be broken in the future, right? Because, yeah, the longer we leave this around, the harder it gets for us to make it an error. So if we can make it an error now, and then have the conversation about whether we want to allow this code or not... we don't even ever have to make it a hard error; we could keep it as a future-compatibility lint forever, and then we could decide.
H
So, rather than a future-compatibility lint... part of the issue with lints is that normally we use them to say: hey, there is something wrong with what you've done here. Here, we're just saying: we're having trouble modeling what you've done here. So I think what I'm suggesting is: could we have the lint be something along the lines of "with the model we have, we may not be able to optimize this; here's how you could change it to be more optimizable," and then we start figuring out...
G
But it won't be a missed optimization just in the code that exploits this extra thing. Because it would be a difference in the model, unsafe code could rely on that kind of thing happening, so it would be a missed-optimization opportunity in every single piece of code that uses two-phase borrows. I don't think this lint classifies in any way the code that gets pessimized. So...
H
I think I'm not seeing the urgency to kill this immediately, rather than saying: what is the model that would encompass this, and what is the problem with that model? In the absence of an alternative that we can hand people, I would prefer to just say: well, this works right now, and the compiler is handling it just fine.