From YouTube: 2021-11-22 Cross Team Collaboration Fun Times (CTCFT)
A: So hi everybody, it's November 22nd, and this is the Rust CTCFT meeting. Let me see here, I'm just going to give a brief intro. I wanted to say that this meeting was organized by the CTCFT team, especially Forest and Technus, and Jane Lusby did a lot of work pulling this together. I also wanted to give thanks to Josh Triplett for helping to liaise.
The idea is that we'd like to be talking to people who are putting Rust to use in new, interesting, and especially extreme ways, and to hear how it's going, what kind of problems they're hitting, and what we might be able to make better for them. That, I figure, can then factor into our prioritization process, and so I'm particularly excited to have them.
You know, to have the ability for them to address all the teams, since I think there are probably issues that relate to the language, but also to libraries, and to toolchains, and all kinds of things. So we've asked them to kind of focus on this sort of thing: what might they want, or what might be useful to them, as well as giving us a kind of overview of how Rust for Linux works in general.
So after that, there's going to be the social hour, and we'll handle that as we usually do. Jane, are you here? I'm not sure, but if not, that's all right, which means we'll make a few breakout rooms and people can chat. If you have ideas for specific breakout rooms, please do post them in the Zulip, in the CTCFT stream, and we can accommodate them. Okay.
So I'd like to just take a few minutes, if people have any announcements before we get started, to kind of give an open floor. I have one. Yeah, I don't think Doc Jones is here, so I will give it, which is that Doc Jones and I are planning to host, or organize, the Rustc Reading Club. It's our second attempt, since the first time we kind of underestimated the amount of demand.
So this time we're going to make a smaller list, but I want to encourage especially people here who do know rustc to some extent already, and who might be interested in sort of running their own events, to get in touch with me, because I think the best way for us to scale this up is probably going to be to have more than one flavor and to experiment with doing things differently and so on. So that's my announcement. Does anyone else have any other announcements? If so, feel free to come on camera.
Okay, hearing none, which is good, or which is fine. I will say, the way we usually run these events, and I didn't say it initially, but of course we're abiding by the Rust code of conduct. What I'd ask is that if you're not speaking, keep your camera off and keep yourself muted. But if you do want to speak, you can turn your camera on, and we'll see that and kind of call on you,
if you'd like to ask a question or something like that. If you don't want to turn your camera on, you can post something in the chat, or raise your hand if you can find it. I recently discovered that it's under the reactions menu; I'm not sure if it always was, but it is now, if you want to raise your hand in Zoom. All right, without further ado, I will turn it over to Rust for Linux. Let me stop showing my screen.
B: So, well, the first thing: thank you, Niko and the others, for organizing these meetings and inviting us to be here, because we believe it's great to be able to have a bridge to the different Rust teams, and to speak about the things that we have found lacking or hazardous, and the things that we would like to see in Rust. And perhaps, as Niko said, this is also an introduction.
Also, let me say that we are not the first Rust experts, at least I'm not a Rust expert. So, if you find that you have solutions, please tell us.
Okay, so Rust for Linux, as most of you most likely know or have seen, is trying to add Rust support to the kernel, initially for kernel modules, what we call kernel modules, but probably later going further. We have already seen people motivated enough who want this to go all the way and have Rust even in core parts of the kernel. We believe that Rust offers key improvements over C in this domain, and that's why we want to see it happen.
We use some unstable features; here is the list of features that we currently use. We want to build the kernel without exploiting or abusing unstable Rust.
I mean, we should perhaps not be using them, but it's the way it is. We want to target, for the moment, official releases, full or tagged releases, of the compiler; that's how it works for GCC and Clang, so that's what we do for the moment.
We use nightly features, minimizing them as much as possible, and we hope that in one or two years, as soon as possible, we will be able to build the kernel without unstable features. Here is a list of the features that we currently use.
C: All right, so, can you hear me? Yeah, thanks. So here I'd like to start talking about pinning, which is a sore point for us and the major source of unsafety in the drivers that we have ported to Rust at the moment.
So we are very motivated to see improvements in this area. In this section I'm going to talk about three things: I'm going to talk about the initialization of pinned objects, I'm going to talk about projection of fields of pinned structs, and then there's this other thing, which is not really related and could be addressed separately, but it usually appears in the same context, and that's the definition of lock classes for lock types in the kernel. Just as a bit of context:
the kernel has this debug mode where, at runtime, it tries to prove the correctness of lock ordering, and this is where this comes into play. So the problem we have is that we have a lot of data structures in the kernel that are self-referential, and they need to be initialized before they can be used. So here is what we have at the moment, and this is what this code is showing; mutex is an example of this.
We have a `new` method that creates a new mutex, but it's unsafe, and the safety requirement is that you call `init` before you actually use it; the call to the C function that implements this is here, in its `init` method. This is how C does the lock class creation: it actually defines a macro, defines a class as a static local variable here, and gets the name from stringifying the input.
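The C macro's trick of naming the lock class after the variable carries over to Rust's declarative macros via `stringify!`. A minimal host-side sketch; the `Mutex` type and `class_name` plumbing here are illustrative stand-ins, not the real kernel crate API:

```rust
// Sketch of capturing a lock's variable name for lockdep-style reporting,
// mirroring C's `#define mutex_init(m) __mutex_init(m, #m, &key)`.
// This `Mutex` is a stand-in, not the actual kernel crate type.
pub struct Mutex<T> {
    data: T,
    class_name: &'static str, // used by the (hypothetical) lock validator
}

impl<T> Mutex<T> {
    pub fn new(data: T, class_name: &'static str) -> Self {
        Mutex { data, class_name }
    }
    pub fn class_name(&self) -> &'static str {
        self.class_name
    }
    pub fn lock(&self) -> &T {
        // Real locking elided; only the naming mechanism is sketched here.
        &self.data
    }
}

// The macro stringifies the identifier, so the lock class is named after
// the variable, which is what lockdep reports expect.
macro_rules! mutex_new {
    ($name:ident, $data:expr) => {
        let $name = Mutex::new($data, stringify!($name));
    };
}
```

This mirrors how the kernel's lock validator gets a human-readable name without the driver author spelling it out twice.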
This is an example of actual code at the moment, and you can see how polluted it is to actually create and initialize this struct that has two mutexes in it. We actually have unsafe blocks for the `new` calls; we require a SAFETY annotation, so we have to describe why it is safe, and we basically say that we call `init` below, before the mutex can be used.
We actually have to project these fields into their pinned mutable refs, which is also unsafe, and then we call the macro to initialize them. And when calling these macros, we have to manually specify the name; we can't just stringify `pinned`, for example, because `pinned` is meaningless, and these names are actually used when the lock analysis code in the kernel finds a violation: it uses this string to report the errors to users.
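The field projection being described, going from a pinned struct to a pinned field, is what requires `unsafe` today. A minimal sketch of manual pin projection using `Pin::map_unchecked_mut`; the type names are illustrative:

```rust
use std::pin::Pin;

struct Inner {
    value: u32,
}

struct Outer {
    a: Inner,
    b: Inner,
}

impl Outer {
    // Manual pin projection: callers get a safe Pin<&mut Inner>, but the body
    // needs `unsafe` because the compiler cannot check the structural-pinning
    // rules (e.g. that we never move out of the pinned field).
    fn project_a(self: Pin<&mut Self>) -> Pin<&mut Inner> {
        // SAFETY: `a` is structurally pinned and we never move out of it.
        unsafe { self.map_unchecked_mut(|s| &mut s.a) }
    }
    fn project_b(self: Pin<&mut Self>) -> Pin<&mut Inner> {
        // SAFETY: as above, for `b`.
        unsafe { self.map_unchecked_mut(|s| &mut s.b) }
    }
}
```

Every driver struct with pinned fields repeats this boilerplate, which is why safer projection support is on the wishlist.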
But what we'd like to see is something like this, which is the expected ergonomics for creating things in Rust in general. Of course, there's a lot of magic here, because it takes care of all three things I talked about before. So we don't know if this is possible, but we'd like to get as close as possible to this. This is what we'd like to see in the end.
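One way to approximate those ergonomics today is to hide the two-phase unsafe dance behind a single safe constructor. A host-side sketch, assuming a boxed, pinned allocation (the real kernel code has extra constraints, such as in-place initialization, that this ignores):

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;
use std::ptr;

// A self-referential value: `self_ptr` must point at the struct itself,
// which can only be established once the memory has its final address.
pub struct SelfRef {
    self_ptr: *const SelfRef,
    _pin: PhantomPinned, // opt out of Unpin: the address must never change
}

impl SelfRef {
    // Step 1: create in a not-yet-initialized, still-movable state.
    fn new() -> Self {
        SelfRef { self_ptr: ptr::null(), _pin: PhantomPinned }
    }

    // Step 2: finish initialization once the value is pinned.
    fn init(self: Pin<&mut Self>) {
        // SAFETY: we only write the back-pointer; we do not move the value.
        unsafe {
            let this = self.get_unchecked_mut();
            this.self_ptr = this as *const SelfRef;
        }
    }

    // The safe API callers would use: the two-phase dance is encapsulated.
    pub fn new_pinned() -> Pin<Box<SelfRef>> {
        let mut boxed = Box::pin(SelfRef::new());
        boxed.as_mut().init();
        boxed
    }

    pub fn is_initialized(&self) -> bool {
        ptr::eq(self.self_ptr, self)
    }
}
```

The wished-for language or library support would make this pattern work without each type hand-rolling its own unsafe plumbing.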
B: Yeah, another topic is what we call the modularization of core and alloc. This means modularization, perhaps partitioning core and alloc with some config options, because the kernel doesn't need all the features that they provide. So being able to configure them off is important, at least for correctness, for code size, and perhaps even for performance, and this is very likely useful for other domains, like embedded, safety-critical, etcetera. The list of features that we don't need, some of them at least: floating point.
As you know, upstream Rust already took a config option to basically remove part of the floating-point formatting code. Also big integers, which we will not use; Unicode, the Unicode tables, UTF-8 support, etc.
Also, another popular one is the infallible allocation APIs. We are already using non-global out-of-memory handling, although some of the `try_` methods are still missing.
A nice addition there would also be types that we could in principle use, if we could somehow tweak the implementation or have some way of plugging in some bits of them; we think this is the case, and we'll discuss a bit of this later. There are some types that we would like to implement on our own, to use kernel facilities, for example a mutex that uses the kernel's mutex instead of some other mutex code.
C: All right, so the next topic is the memory model. As you know, the memory model for unsafe code in Rust is not defined at the moment, and it's not a problem for us right now, because we are not currently targeting writing code that relies heavily on the memory model for correctness. What we are doing instead, and this is an example of how we do it, is that we actually use the C implementation at the moment.
So in this example here we have a sequence lock, which is backed by the C implementation, and here on the right we see the arm64 output for this loop here. You see that we have the memory barriers here; it's a load barrier, which we don't really have in Rust, and with LTO enabled, which is what we have here,
we get code that is as efficient as C. So although the memory model is not a concern for us at the moment, I would like to talk about it for the future. As some of you may already know, and some may not, the kernel actually has a different memory model than C. What we'd like to see in Rust... go ahead, Niko, you have your hand up.
A: Can I ask a question? Yeah, actually on the previous slide, just so I understand: you have the sequence lock, spin lock example. First of all, there's only one lock going on, I guess?
C: Yes, exactly, because you can actually have writers and readers concurrently accessing these fields. It's just that if, while a reader is reading, a writer modifies them, we'll realize it based on this "need retry", and then we come back up and try again, and with this we get a consistent view of the two fields.
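The retry protocol just described can be sketched as a tiny sequence lock. The `Ordering` choices and names below are illustrative only, not a reviewed kernel-grade implementation; as the talk notes, the real version depends on barrier subtleties that Rust cannot express directly today:

```rust
use std::sync::atomic::{AtomicU32, AtomicU64, Ordering};

// Minimal sequence-lock sketch (single writer assumed).
pub struct SeqLock {
    seq: AtomicU32,
    a: AtomicU64,
    b: AtomicU64,
}

impl SeqLock {
    pub const fn new() -> Self {
        SeqLock {
            seq: AtomicU32::new(0),
            a: AtomicU64::new(0),
            b: AtomicU64::new(0),
        }
    }

    // Writer: the sequence is odd while the fields are being modified.
    pub fn write(&self, a: u64, b: u64) {
        self.seq.fetch_add(1, Ordering::Acquire); // now odd: write in progress
        self.a.store(a, Ordering::Relaxed);
        self.b.store(b, Ordering::Relaxed);
        self.seq.fetch_add(1, Ordering::Release); // even again: write done
    }

    // Reader: retry until a consistent snapshot of (a, b) is observed.
    pub fn read(&self) -> (u64, u64) {
        loop {
            let start = self.seq.load(Ordering::Acquire);
            if start % 2 != 0 {
                continue; // writer in progress: "need retry"
            }
            let snapshot = (
                self.a.load(Ordering::Relaxed),
                self.b.load(Ordering::Relaxed),
            );
            if self.seq.load(Ordering::Acquire) == start {
                return snapshot; // no writer ran in between
            }
            // sequence changed: go back up and try again
        }
    }
}
```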
C: That's right, nothing prevents you from doing that, but there's actually a sanitizer in the kernel that we can invoke that would catch this at runtime. There's no facility at the moment to actually prevent it, though, so if you have ideas on how we could prevent it, those would be welcome, for us to try to claim that we are better than C even in this sense. But yeah, the idea of the sequence lock is that it's a kernel data structure; it's used in a few places.

A: Understood. Okay, thank you, that's helpful!
C: So, coming back to the future: what we'd like to see is not only for Rust to define a memory model, but for the memory model that it defines to be compatible, or somehow unified, with the Linux one, so that there's no need for kernel code to bolt assembly definitions onto the code and define a new memory model that way.
Another thing is that in C, the kernel... I'm sorry, the compiler doesn't understand address and control dependencies, so they are sort of fragile at the moment in C. But what developers do is use these dependencies to avoid having to use the heavier-weight memory barriers, and then they look at the assembly code.
If we could actually have the compiler understand and honor these chains of dependencies, then we could actually have Rust be better for writing this sort of high-performance synchronization primitives and algorithms for core pieces of the kernel. As an example, due to this fragility, with optimizations potentially breaking things when LTO is enabled, on some architectures like arm64 we actually fall back to a much heavier-weight barrier, an acquire barrier.
Out of the code at one time, congratulations, yeah. All right.
A: Yeah, just to make sure I understand: what you're saying here is that the kernel and C have different memory models, like the natural C memory model? I guess I don't quite understand why.
C: Yep, yes. So the C memory model actually came after the kernel one. In the kernel, the barriers that we have are read and write barriers, for example. But if you look at C, and Rust, since it sort of inherits the C one, there are no read and write barriers; there are acquire and release barriers, which have different semantics. Acquire, as you probably know, is semi-permeable:
certain things can go above and below it. The same goes for the read and write barriers, but they're permeable in different ways. And then things like the sequence lock, for example: since they require a read barrier, they cannot be implemented using the C semantics, because there is no such barrier there. You have to use heavier-weight barriers, like sequential consistency, which are much heavier than needed, so the code in the end wouldn't be as efficient.
Now, in addition to these barriers being different, there are cases where you can exploit these dependencies, control or data dependencies between loads and stores, to get implied barriers, and the kernel does that. The problem is that the optimizer is not required to honor them, because it doesn't know about the dependency that we are trying to exploit. So it may actually break the dependency, and then the processor will reorder things and break things, and LTO exacerbated this
problem by doing these optimizations at link time, so they had to actually use the heavyweight barriers to prevent it from occurring. Okay, does that make sense?
B: The next one is basically a trivial one: the tooling.
For example, one issue was Clippy, where I requested that it could be added officially to the documentation, or supported, for the use case of running it through rustc directly, which is how we use it in the kernel; that was fixed, which was great. build-std is another one that I will talk about later. There is also, for example, Miri: currently it's only supposed to be used through Cargo. You can of course use it
if you know the right incantations, if you reverse engineer it, but ideally it would be officially supported to run without Cargo first, and then perhaps we could give it support for running kernel code. That would be great.
C: Now I'd like to talk about three cases of const usage, const contexts. I'll try to go faster now, because I think we're running behind. The first example of the three that I'd like to show is how device ID tables are generated in the kernel. If you look here, we have a struct which is defined by the bus.
Then we have this macro here that actually puts this table in a section that tools can look up offline, and this is how tools search for the drivers that support a given device: when the device shows up, the kernel can load the module and use it. This is why we wanted to have this in Rust, so that we have the ability to do the same in Rust modules. This is how we do it today.
At the moment we have a macro as well to define this, and what we want here is a const function to create, from this typed version of the ID, the raw one, the C version of the thing. Here is an example of what we could do today, but there's a bunch of reasons why this is not enough for us; I'm not going to go into them, we can talk about them later.
This is another example of what we'd like to see, with the ergonomics that we'd like to see, but there are a few things missing, like the fact that we can't define const functions in traits. And if we do this, where we have a const, and that const is the size of an array, it says it's unconstrained, and there's a circular dependency if we add a bound here.
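The shape being described, a typed ID plus a `const fn` conversion so the whole raw table is computed at compile time, can be sketched like this. The field layout and names are made up for illustration; the real bus-defined struct and linker-section plumbing are omitted:

```rust
// Raw, C-compatible device ID as a bus might define it (illustrative layout).
#[repr(C)]
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct RawDeviceId {
    pub vendor: u32,
    pub device: u32,
}

// Typed, driver-facing ID.
pub struct DeviceId {
    pub vendor: u32,
    pub device: u32,
}

impl DeviceId {
    // A const fn lets the raw table be computed entirely at compile time,
    // so it can be placed in a dedicated section for offline tools to read.
    pub const fn to_raw(self) -> RawDeviceId {
        RawDeviceId { vendor: self.vendor, device: self.device }
    }
}

// The table is a const: no runtime initialization, eligible for
// read-only data. Vendor/device numbers here are arbitrary examples.
pub const ID_TABLE: [RawDeviceId; 2] = [
    DeviceId { vendor: 0x1af4, device: 0x1000 }.to_raw(),
    DeviceId { vendor: 0x1af4, device: 0x1041 }.to_raw(),
];
```

The limitation mentioned in the talk is that `to_raw` cannot today live in a trait as a `const fn`, which is what a generic, bus-independent version would want.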
This is the second thing I'd like to... wait, there's a question there: "Am I right that you're avoiding procedural macros?" Yes, we're trying to avoid procedural macros; we can talk about this later, but we're trying to avoid them.
C: Yes, so the second const-related example is simpler than the previous one, but the idea is that in the kernel we have lots of these vtable-like structs that have a list of function pointers in them, and in this case we also have a pointer to a module, which is basically the library that implements
the functions that we have pointers to here. The way this is used in the kernel is: if you load a module, and that module provides a file, in this case here, and the file is opened, then the refcount on this module is incremented. Then, if you try to unload that module, it fails, because somebody is still using the module, and so you guarantee that the file operations will never jump to code
that was unloaded. And this is how they are used; you see the static const here.
The problem that we have at the moment is that Rust doesn't allow us to define a const and, within that const, have a pointer to non-const data, and this is the error that we get at the moment. So what we'd like to see is some way to do this, to have a const struct. And the reason, I failed to mention this, the reason we want this to be const is that it's a vtable, with function pointers in it.
So we want it to be read-only, such that if there are vulnerabilities in the kernel with arbitrary writes, attackers cannot write to this vtable and then escalate from arbitrary write to arbitrary execution. So we're basically trying to reduce the attack surface in the kernel by keeping that in read-only data.
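For the parts Rust does support today: plain function pointers are allowed in a `const`, so a vtable-like struct can already end up in read-only data; the blocker is specifically the pointer to non-const data (the module pointer). A sketch with illustrative names:

```rust
// A vtable-like struct of function pointers, loosely mirroring the kernel's
// `struct file_operations`. Plain `fn` pointers are allowed in consts.
pub struct FileOps {
    pub read: fn(&[u8]) -> usize,
    pub write: fn(&mut Vec<u8>, &[u8]) -> usize,
}

fn my_read(buf: &[u8]) -> usize {
    buf.len()
}

fn my_write(dst: &mut Vec<u8>, src: &[u8]) -> usize {
    dst.extend_from_slice(src);
    src.len()
}

// This works today: a const vtable of function pointers.
pub const MY_OPS: FileOps = FileOps { read: my_read, write: my_write };

// What does NOT work today is the part the talk complains about: adding a
// field like `module: *const Module` and initializing it with the address
// of a `static` (the C equivalent of `.owner = THIS_MODULE`). Referencing
// a static from a const is rejected ("constants cannot refer to statics").
```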
We have a lot of instances of things like this, where we have a memory block that represents a device, and this memory block has registers for that device; you can write to those positions, and these are of a given size, for example a page, so 4K, and the offsets within that struct are usually known at compile time as well. So if the offsets are known at compile time, we'd like to be able to just access them without having to do checks at runtime, and this is how we accomplish it:
we have this check here, and we emit a build error if the offset is beyond the size that we know about. Then, the way we catch this is that we don't compile in this `build_error` function, and we get a
link error when we try to compile this, which is better than C, because we catch these bugs earlier and we don't produce a binary that has the bug. The problem, though, is that for unoptimized builds we need to keep this `build_error` function in, otherwise we wouldn't be able to build at all, so we don't get
the compile-time error; instead, this panic happens and it's turned into a runtime panic, which we'd like to avoid. The other problem is, and here's an example of the build error that we get at the moment: we get this line number and file name here, which are not really the real source of the problem, and we have this mangled function name here, which is helpful, but it's not as nice as something that points at where the real problem is. So we'd
like to be able to get better errors, and to suggest the use of the `try_` version. This is one thing that I didn't mention: if the offsets are not known at compile time, then we can't do the check at compile time, and we force users to use the `try_` version, which has a runtime check. If we have time at the end, I will talk a little bit more about how we'd like to improve these `try_` versions.
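A host-side sketch of the compile-time-versus-runtime split being described. Instead of the kernel's undefined-symbol `build_error()` trick, this uses a const-generic offset plus a const assertion, which fails the build directly with a readable message when the offset is out of bounds; all names here are illustrative:

```rust
// Compile-time guard: instantiating this associated const with a bad OFFSET
// aborts compilation, instead of surviving into the binary as a runtime panic.
struct BoundsCheck<const OFFSET: usize, const SIZE: usize>;

impl<const OFFSET: usize, const SIZE: usize> BoundsCheck<OFFSET, SIZE> {
    const OK: () = assert!(OFFSET < SIZE, "register offset out of bounds");
}

// Stand-in for a mapped MMIO block of SIZE bytes.
pub struct IoMem<const SIZE: usize> {
    regs: [u8; SIZE],
}

impl<const SIZE: usize> IoMem<SIZE> {
    pub fn new() -> Self {
        IoMem { regs: [0; SIZE] }
    }

    // Offset known at compile time: checked during compilation, no runtime cost.
    pub fn readb<const OFFSET: usize>(&self) -> u8 {
        let () = BoundsCheck::<OFFSET, SIZE>::OK; // forces the const check
        self.regs[OFFSET]
    }

    // Offset only known at runtime: callers must use the checked `try_` form.
    pub fn try_readb(&self, offset: usize) -> Option<u8> {
        self.regs.get(offset).copied()
    }
}
```

Unlike the link-error trick, a failing check here reports the offending instantiation rather than a mangled symbol, which is closer to the diagnostics the talk asks for.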
B: Yeah, another general topic is architecture and GCC support. As you know, we are constrained by LLVM's architecture support and, on top of that, by the support for LLVM in Linux itself. For Rust we currently support a few architectures, with some constraints that may or may not go away with time. But the main point is that GCC support would alleviate this.
There is also bindgen; the question about bindgen is that even if we manage to have full GCC builds, and by full I mean on the C side and also on the Rust side with, for example, rustc_codegen_gcc, we would still have bindgen, which does not have a backend for parsing the code with GCC.
This could be important, especially if, for example, somebody is using a plugin in GCC that, say, reorders the... not the memory, the members of a struct. It's not a big deal, I think, nowadays, because there are discussions about basically removing all support for GCC plugins, but still, it would be an issue.
B: Then there's the target specification. Right now, as everybody will know, users of Rust normally use the built-in targets; most people use the built-in targets, perhaps except embedded projects and so on. The thing is, the kernel normally tweaks targets in one way or another, and may want to switch some options here and there over time. So in general we would like to avoid, even if we could, submitting all the possible combinations of targets that we may need to upstream Rust.
Also, custom targets are not stable and, as far as we were told, are unlikely to become so, because they are too tied to the LLVM configuration that rustc uses, and also not all the target options are available or exposed through that target spec file. For example, the...
B: For the architecture, yeah. So for this, I mean, we don't particularly care whether it's done through files or flags, but in general, ideally, it would be great if rustc could take the same flags, even with the same names, that GCC and Clang have standardized on for all these particular options, and it would also be a nice way to stabilize things piece by piece over time on rustc's side: you could stabilize this flag, then that flag, et cetera.
Another idea, and we are trying to push for it anyway, is a cross-language, cross-toolchain, standard way of specifying the target. So we could even use the files idea from rustc, but try to get the GCC and LLVM folks to accept this way of configuring targets, and then...
Well, it's an idea; we have already told some LLVM and kernel folks about this, and one possibility would be to bring everyone to this mailing list and discuss whether this would be feasible at some point. Of course, this will take a lot of time, but it would help if we end up with a way where we don't have to hack the build system on the kernel side, you know, to generate these.
C: All right, so the next topic in our laundry list of things is the ability to implement our own `Arc`, or, in general, any library types that use some magic from the compiler. We have here a list of reasons why we did that, and the consequences of doing it, but I'm going to skip it because we're short on time. The next topic is some ergonomics things that you may be able to provide us with that would improve the ergonomics. This is an example; you have already seen something similar to this in
the previous slide. People have to implement their methods here for this trait, but not only that: at the moment they also have to do this `declare_file_operations`, where they list the methods that they actually want to have populated in the kernel's `file_operations` struct. The reason we need this is that the kernel actually behaves differently depending on whether the pointer is null or non-null. Because you could imagine that we do something like:
oh, the default implementation mimics the kernel. But we can't always mimic the kernel, for one thing, and the other thing is that they could get out of sync, and we can't always do it anyway. For example, between `read` and `read_iter`: what the kernel does is, if you have one but not the other, the missing one is implemented based on the other, so we couldn't implement defaults that depend on each other, because we'd get into infinite recursion.
Now, this is the hack that we use today, and basically what that macro does is populate something like this. So what we'd like to have from you, and it's also related to const contexts, is to be able to determine, in a const context, whether for a given type a method is implemented by the type itself, or whether it's using the default implementation from the trait.
If we were given something like that, then we could use it to populate the kernel's `file_operations`, or whatever operations table it is, and the ergonomics in the end would be better than now. This next one is related to rust-analyzer. Again, we have an example of a trait and an implementation of the trait. The way we expect people to use this is to come here, write `impl FileOperations for X`, and then run the implement-members assist and let rust-analyzer fill it in.
The problem at the moment is that for this specific case, and for a lot of cases, because this is a pattern that we follow pretty much everywhere, the type it generates is wrong here, so this doesn't compile. That's the first problem. Now, even if we had rust-analyzer fixed, it would expand to this monstrosity here, which scares C programmers; when they look at this they're like: oh, what is this `Self` and these angle brackets? Some people call this a smiley face.
They really don't like this. Now, what we would really like rust-analyzer to do for us is this, and this can be inferred by looking at the types: the wrapper here, you know it's a `Box`, and then you look at the `PointerWrapper` implementation for `Box` and find out that the borrowed type is really `&Self`, a shared reference. So this is what I'd like to see.
We don't know if this is possible or how hard it is, but ideally that's what we'd like to see. A little bit more: if we can't get that, then from the compiler folks what we'd like to see is, in this case here, to be able to avoid the fully qualified syntax, especially because the only bound that `Wrapper` has at the moment is `PointerWrapper`.
So we'd like to see something like this, where we can go straight from `self` to the borrowed type. Again, we don't know how hard it is, but we'd like to see it. And the last thing in this list is lifetimes: in the case of `Ref`, which is our replacement for `Arc` (it's probably not the best name, we can rename it later), since it's a reference,
it takes a lifetime as an argument. What we'd like is for these lifetimes to be elided, similarly to what we see with the ampersand, and then the syntax in the end would be something like this, a `Ref` borrow of self, and we feel this would be easier for C kernel developers to swallow when they're transitioning to Rust.
B: So let's go quickly here as well: build-std. We are using this for the `test` crate, which we use to run the tests that run on the host, not in the kernel or in a booted kernel; so they run on the host, and it currently depends on hardwired invocations of cargo build.
Well, build-std on one side could be improved to solve a few problems, but basically we have a huge hack to use build-std and be able to compile everything, and there are a few gaps there. It's quite useful, but perhaps for us it would be even better to make `test` not depend on `std`, and to somehow plug in the bits that the test crate needs, so we can provide them in the kernel and then run the tests even in kernel space. And related to this: testing.
We really like the approach that Rust took for testing. In the kernel, apart from the host tests that we already support, we would like to run test code in kernel space in a booted kernel, and also, in a booted kernel, run tests from user space against that kernel. These are called KUnit and kselftests, respectively, in the kernel, and it could be...
the syntax could look like this, where you specify a test, but also what kind of test: whether it runs on the host in user space, or in the kernel. The bottom two would run in a booted kernel, for example in QEMU,
allowing us to run kernel code, or user code that is supposed to, for example, test a syscall. Then, next slide, yeah. So there is a lot to discuss here, and we don't want to get into details; there are several ways we can think of doing it, so let's not discuss it now.
A question is how to make it useful for other projects as well, depending on how complex the solution is. Or, if we just output the source code, we could ask the compiler to give us the scaffolding for the tests, and then we build, for example, a kernel module that we execute, etcetera. So we should take this question offline, I guess, and we can perhaps see what we can do. There is another moonshot idea, which is support in rustc itself.
C: All right, so the next topic for us is the quality of codegen, and one thing that we'd like to emphasize here is that we're not so much concerned about speed; we're concerned about the size and the appearance of the code, the simplicity, I suppose, is one way to put it, within reason. So here we have a couple of examples, and we're not going to get into the details, but this is a minimized version of a ScopeGuard.
So the context here is: we go to the developers and we say, instead of having this spaghetti error handling with a bunch of gotos, and in fact some gotos going backwards, you know, in the kernel, you have the `Drop` trait to take care of most things, and then, in the cases where you can't use `Drop`, we have this ScopeGuard thing here. And then we say this is a zero-cost abstraction, it collapses to nothing.
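The ScopeGuard being discussed can be reduced to a few lines on the host; this is an illustrative sketch, not the kernel crate's exact API:

```rust
// Minimal ScopeGuard: runs a cleanup closure on scope exit unless dismissed,
// replacing C's goto-based error unwinding.
pub struct ScopeGuard<F: FnOnce()> {
    cleanup: Option<F>,
}

impl<F: FnOnce()> ScopeGuard<F> {
    pub fn new(cleanup: F) -> Self {
        ScopeGuard { cleanup: Some(cleanup) }
    }
    // Call on the success path, when cleanup must not run.
    pub fn dismiss(mut self) {
        self.cleanup = None;
    }
}

impl<F: FnOnce()> Drop for ScopeGuard<F> {
    fn drop(&mut self) {
        if let Some(f) = self.cleanup.take() {
            f(); // error path: undo the partial work
        }
    }
}
```

In principle this should compile down to straight-line code; the talk's complaint is that, in practice, the generated assembly does not always collapse as cleanly as promised.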
But then, when we go in and look at the output code, to show it to them as evidence, because they actually ask to see this, they say: show me the generated binary. So we go there and we look at it, and we have this huge block here, and we have some no-ops that are implemented as moves and tests and conditional jumps, and this is the simplified version.
So this is what a kernel developer was expecting to see, and this is what we had to show, which is unfortunate when we're trying to make the case that we can deliver zero-cost abstractions. And in some cases we do get them, like here: we get this if we use `unwrap_unchecked`, but if we just use `unwrap`, then we get this other thing here. So it's a bit non-deterministic.
C
This is another example; we don't need to go into the details. It may look convoluted, but this is an implementation of seqlocks using iterators, and again, this is what is generated. If you look at it, we have unconditional branches implemented as conditional branches, and we have no-ops again as tests. And this is the code that we'd like to see, the code we get if we don't use iterators and instead write it linearly.
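[Editor's note: a generic stand-in for the iterator-vs-linear comparison being made; the actual kernel example is not reproduced here. Both functions compute the same thing; the complaint is that the iterator form sometimes compiles to worse code.]

```rust
// Same computation, written two ways. Semantically identical; the talk's
// point is that the iterator chain is what they'd like to write, but its
// generated code sometimes contains redundant tests and branches compared
// to the hand-written linear loop.
fn sum_iter(data: &[u32]) -> u32 {
    data.iter().filter(|&&x| x % 2 == 0).map(|&x| x * 2).sum()
}

fn sum_linear(data: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..data.len() {
        if data[i] % 2 == 0 {
            total += data[i] * 2;
        }
    }
    total
}

fn main() {
    let data = [1, 2, 3, 4, 5, 6];
    assert_eq!(sum_iter(&data), sum_linear(&data)); // both 24
}
```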
C
Another topic we'd like to discuss is padding, and I've seen this huge discussion about padding. But let me just briefly tell you what the requirement is. The requirement is: when we are copying data from the kernel to userspace, or sending it over the network, we actually want the padding areas of structs to be zeroed, or for a pattern to be written.
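[Editor's note: a small sketch of the requirement (our own illustration, not kernel code): write each field into an all-zero buffer so the padding bytes that cross the kernel/userspace boundary are defined rather than leftover memory.]

```rust
#[repr(C)]
#[derive(Clone, Copy)]
struct Sample {
    a: u8, // on typical targets, 3 padding bytes follow before `b`
    b: u32,
}

fn to_bytes_zero_padded(a: u8, b: u32) -> [u8; core::mem::size_of::<Sample>()] {
    // Start from an all-zero buffer so every padding byte is defined...
    let mut buf = [0u8; core::mem::size_of::<Sample>()];
    // ...then copy field by field instead of memcpy'ing the whole struct,
    // which would copy whatever (possibly secret) bytes sit in the padding.
    buf[core::mem::offset_of!(Sample, a)] = a;
    let off = core::mem::offset_of!(Sample, b);
    buf[off..off + 4].copy_from_slice(&b.to_ne_bytes());
    buf
}

fn main() {
    let bytes = to_bytes_zero_padded(0xAB, 0x0102_0304);
    // The padding bytes (indices 1..4 with the usual C layout) are zero,
    // never arbitrary kernel memory.
    assert!(bytes[1..4].iter().all(|&x| x == 0));
}
```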
C
What we don't want is for the contents to be arbitrary, because we may actually have secrets and data that we don't want to expose from the kernel or from other processes; for example, pointers, which would reveal addresses of kernel data structures. Those sorts of things we'd like to avoid. The next thing is the opposite direction of the padding one: it's when we are reading.
C
Of course, we have these types, like bool, that have specific bit patterns that are required, and bit patterns different from those can result in undefined behavior. So we'd like to avoid that, and so what we did was define an unsafe trait, and we implemented it for a bunch of the basic types.
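[Editor's note: a sketch of the kind of unsafe marker trait described. The names here (`AnyBitPattern`, `read_from_bytes`) are assumptions for illustration, not the actual Rust-for-Linux trait.]

```rust
/// Marker for types where *every* bit pattern is a valid value, so reading
/// them out of untrusted bytes cannot cause undefined behavior.
///
/// # Safety
/// Implementers must guarantee all possible bit patterns are valid.
unsafe trait AnyBitPattern: Copy {}

// Fine: all 2^N patterns are valid for the integer types.
unsafe impl AnyBitPattern for u8 {}
unsafe impl AnyBitPattern for u32 {}
unsafe impl AnyBitPattern for i64 {}
// Deliberately NOT implemented for `bool` or `char`: only some bit patterns
// are valid there, so conjuring them from arbitrary bytes would be UB.

fn read_from_bytes<T: AnyBitPattern>(bytes: &[u8]) -> Option<T> {
    if bytes.len() < core::mem::size_of::<T>() {
        return None;
    }
    // Sound because T: AnyBitPattern promises every bit pattern is valid;
    // read_unaligned handles the possibly unaligned source buffer.
    Some(unsafe { core::ptr::read_unaligned(bytes.as_ptr() as *const T) })
}

fn main() {
    let bytes = [0x01, 0x00, 0x00, 0x00, 0xFF];
    let v: u32 = read_from_bytes(&bytes).unwrap();
    assert_eq!(v, u32::from_ne_bytes([0x01, 0x00, 0x00, 0x00]));
}
```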
C
B
As you know, some kernel developers put a value on having a language specification, because the Rust reference isn't.
D
Even normative yet.
C
So here, this is branded types, as the GhostCell paper calls them. We know this is not going to happen overnight, but we'd like to see this eventually. And the idea here (this one is going to be preferred; I have another example next), the idea here is: we have this `try` variant, right, which has a runtime check, because the offset that it is using is not known at compile time; that's why we need the `try` version. However, it is known at init time.
C
So what we'd like to see is that this is initialized once, and the offset is checked once during init, right, and then later on we use it several times. This example comes from the NVMe driver, and it is in the critical path of the NVMe driver, so this is actually a place where these sorts of things matter.
C
So what I'd like to see is that this becomes some branded type that is tied to this `bar` variable here, and the idea is that you can only use this offset for this instance, not other instances, because they can have different offsets and it may not be valid. Here's another example of branded types, and this is a made-up syntax that I have here; the intent is just to say that this is not part of the process type.
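[Editor's note: a sketch of how branded types can be approximated in today's Rust, in the spirit of the GhostCell paper. All names here are made up for illustration; the invariant lifetime `'id` ties a checked offset to the one instance it was validated against.]

```rust
use core::marker::PhantomData;

// An invariant-lifetime "brand": offsets checked against one `Bar` cannot
// be used with a different `Bar`, because their `'id` lifetimes won't unify.
struct Bar<'id> {
    mem: Vec<u8>,
    _brand: PhantomData<fn(&'id ()) -> &'id ()>, // invariant in 'id
}

#[derive(Clone, Copy)]
struct CheckedOffset<'id> {
    off: usize,
    _brand: PhantomData<fn(&'id ()) -> &'id ()>,
}

impl<'id> Bar<'id> {
    /// The runtime bounds check happens once, at "init time".
    fn check(&self, off: usize) -> Option<CheckedOffset<'id>> {
        (off < self.mem.len()).then_some(CheckedOffset { off, _brand: PhantomData })
    }
    /// Hot-path reads need no re-check: the brand guarantees the offset was
    /// validated against *this* instance.
    fn read(&self, off: CheckedOffset<'id>) -> u8 {
        self.mem[off.off] // a real hot path might use get_unchecked here
    }
}

// The higher-ranked closure gives each `Bar` a fresh, unexchangeable brand.
fn with_bar<R>(mem: Vec<u8>, f: impl for<'id> FnOnce(Bar<'id>) -> R) -> R {
    f(Bar { mem, _brand: PhantomData })
}

fn main() {
    let out = with_bar(vec![10, 20, 30], |bar| {
        let off = bar.check(1).unwrap(); // checked once at init
        bar.read(off) + bar.read(off)    // used many times, no re-check
    });
    assert_eq!(out, 40);
}
```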
C
This doesn't have to be repeated everywhere, but the example here is just RCU, which is actually similar to seqlocks in the sense that you can access this thing read-only without having to acquire the spinlock, but if you change it, you acquire the spinlock, right? And the idea here is that to modify it.
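[Editor's note: a rough, illustrative sketch of the seqlock idea mentioned (readers take no lock and retry if a writer raced them; writers serialize on a lock and bump a sequence counter). This is not the kernel's seqlock, and a fully race-free Rust version would need atomic data access; it is shown only to convey the shape.]

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

struct SeqLock<T: Copy> {
    seq: AtomicUsize,      // odd while a write is in progress
    write_lock: Mutex<()>, // the "spinlock" that only writers must take
    data: UnsafeCell<T>,
}

unsafe impl<T: Copy + Send> Sync for SeqLock<T> {}

impl<T: Copy> SeqLock<T> {
    fn new(value: T) -> Self {
        SeqLock {
            seq: AtomicUsize::new(0),
            write_lock: Mutex::new(()),
            data: UnsafeCell::new(value),
        }
    }

    /// Lock-free read: retry if the sequence changed underneath us.
    fn read(&self) -> T {
        loop {
            let start = self.seq.load(Ordering::Acquire);
            if start % 2 == 1 {
                continue; // writer active, retry
            }
            let value = unsafe { *self.data.get() };
            if self.seq.load(Ordering::Acquire) == start {
                return value; // no writer raced us
            }
        }
    }

    /// Writers serialize on the lock and bump the sequence around the write.
    fn write(&self, value: T) {
        let _guard = self.write_lock.lock().unwrap();
        self.seq.fetch_add(1, Ordering::AcqRel); // seq becomes odd
        unsafe { *self.data.get() = value; }
        self.seq.fetch_add(1, Ordering::AcqRel); // seq even again
    }
}

fn main() {
    let lock = SeqLock::new(5u64);
    assert_eq!(lock.read(), 5);
    lock.write(9);
    assert_eq!(lock.read(), 9);
}
```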
B
We are about to end now. There is a bit of a research topic; we wanted to include one of these here, because it would be nice to improve on the state of the art compared to C. So, as you may know, in the kernel there are two main states where code may be executing: the atomic and the sleepable contexts. In this atomic context,
B
you cannot call functions that might sleep, even if they don't actually sleep on that path, because you may end up freezing the kernel entirely, or panicking, etc. So on the C side, currently, programmers are basically tracking this manually, and there is some runtime check that you can enable: in functions that might sleep, you call a function or a macro, might_sleep(), that tags them as such. But the question here is: could Rust provide compile-time checking? And here the example is a case where basically the bottom two would be okay, but the top two would not.
B
If, from a function that is called from an atomic context, you call a sleep function, or a function that might sleep even if it is not always sleeping, we would like to see a compile error there.
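[Editor's note: one hypothetical encoding of this idea in today's Rust, entirely our own sketch and not an existing rustc feature or proposal: functions that might sleep demand a capability token, and atomic sections take the token away, so the bad call fails to compile rather than being caught by a runtime might_sleep() check.]

```rust
use core::marker::PhantomData;

// Zero-sized capability: holding `&mut SleepToken` means "sleeping is allowed".
struct SleepToken {
    _not_send: PhantomData<*mut ()>, // keep it from crossing threads
}

/// Stand-in for a function that might sleep (e.g. a blocking allocation).
fn might_sleep_alloc(_token: &mut SleepToken, n: usize) -> Vec<u8> {
    vec![0u8; n]
}

/// Stand-in for an atomic section: the token is moved in here and held, so
/// the closure cannot also borrow it to call sleeping functions; trying to
/// do so is a compile error (use-of-moved-value).
fn with_atomic_section<R>(token: SleepToken, f: impl FnOnce() -> R) -> (SleepToken, R) {
    let r = f();
    (token, r)
}

fn main() {
    let mut token = SleepToken { _not_send: PhantomData };
    let buf = might_sleep_alloc(&mut token, 4); // fine: sleepable context
    let (mut token, n) = with_atomic_section(token, || {
        // might_sleep_alloc(&mut token, 1); // ERROR: `token` was moved away
        buf.len()
    });
    let buf2 = might_sleep_alloc(&mut token, n); // fine again after the section
    assert_eq!(buf2.len(), 4);
}
```

A real design would need language support (the inference and FFI questions the speakers raise next), but the token shape conveys the compile-time intent.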
B
There are two questions about this. One is inferring versus annotating explicitly whether a function may sleep or not; we could have something like unsafe, but automatically inferred.
B
Let's say we call it unsafe (this comes from our earlier discussion at last year's LPC, but don't take it as meaning we should follow unsafe exactly). The question here is: we could have the compiler check all the calls that are made, and of course, the devil is in the details.
B
What happens with FFI? Whether the function pointer type should carry the mark of whether something might sleep or not, etcetera. But it could be nice to automatically infer whether a function sleeps. That is one side of the equation, and if we do that,
B
the other side is how the atomic context is defined and how we would do it: if you are in an atomic context, check that we don't call any of these sleep functions. But even if we only had the automatic inference of which functions might sleep, and we had it in the documentation automatically, that would be quite an improvement compared to C; even if, of course, the C side could do the same eventually, they don't do it right now. And finally, there are other things that we have put here; we have even more.
B
So we have put a few more here, and that's basically it. So thank you for inviting.
D
Us again.
A
Well, that was really great, yeah. Does anyone have any questions? There was a lot of information there, so I'll give a minute for people to digest, but otherwise I think we will go into the social hour. I don't know if you three are available in the next hour. We should.
A
I had one question. You mentioned wanting to avoid invoking cargo, but finding that some tools, like clippy or miri or whatever, kind of expect you to use them through cargo. I was wondering: is it really that you want to avoid cargo, or is it that you want some way to integrate cargo, or limit what it can do, more easily into the build system?
B
I would say the second one. I don't know; I cannot speak for all the developers. I know some of them for sure would not like to see cargo, but others don't care as long as it works. So, for example, one trivial problem: originally we built the project using cargo, and we had problems, for example, with organizing the files. Even something as simple as, for every single module, because it is a different crate, you would have to put the source folder, etcetera, etcetera.
B
Parallel make is using several cores, and then you also get cargo mixing its output in between, and it may be using more cores than you may want, etcetera. So there are a few things that could be solved, and if everything were solved, we could discuss it. We didn't want to put it here because it was not a priority, in the sense that we already solved it in another way. But if, in the future, it is possible to do it with cargo, we would be fine with it.
B
But again, you know, the kernel build system is quite involved; they have a lot of things there. It uses its own way to track mostly everything, and yes, so it would be.
B
I mean, I don't know. I am not opposed to it, and again, we had cargo, but I think at.
B
For the kernel, we would want to see some of those improvements.
B
Cargo for other projects, or even, for example, for building the user-space tools that the kernel contains. So in the kernel tree we also have user-space tools, right, or scripts, even.
B
They were kind enough to put in the recommendation; they said, yeah, it should be fine and supported. So we would like to see it not just perhaps in documentation, or "just do it in some way", or "find out the commands that cargo is running and then run them yourself", which we can do, for example, for the build-std case.
A
Well, we're at time, so why don't we wrap up; we can move any further discussion into the social hour. I just want to thank all three of you again for that excellent presentation, and I'm going to end the recording. Great job.