Description
Stefan is talking about the threadpool crate and how to synchronize offloaded work with channels, barriers and the brand-new join operation. The talk finishes with a short section about cargo helpers (44:10).
Slides: https://github.com/rust-zurichsee/meetups/blob/master/2017-08-02_threadpool/slides.pdf
More meetups: https://www.meetup.com/de-DE/Rust-Zurich/
I'd say this talk is a special recording for me, because it works on Linux. Completely cool. So welcome to Rust Zürisee. Thank you very much to Liip for having us. The slides will be posted on the GitHub repository; I'm not sure if we will upload the video to this location as well, if git can handle gigabyte files.
The latest news from my team members is that we are going to sell the first batch of tickets later tonight or tomorrow. So if you have Twitter, follow us there, or use some other tracking device of your choosing and follow our blog. It will be at ETH Zurich, so nearby.
The other thing is we have a September meetup, beginning of September, at Coredump, which is in Rapperswil, 35 train minutes from the main station, so easily accessible. The main topics: admin, threadpool and cargo helpers.
Yeah, so admin stuff: we made a Verein on the 10th of July. We founded a new legal entity for Rust-related stuff. It's called Rust Zürisee, because we have a lake here. We are currently looking for more locations, and if we find a company paying a pizza bill or something like that, we could also offer food at the meetup. So if you know somebody, please find us locations; that would be really, really nice.
The threadpool crate creates the worker threads eagerly, which means when you create it and tell it "I would like to have 20 workers", it spawns 20 threads and then waits. We have a job queue which uses a mutex. The reason for that is that this crate builds completely on the standard library, with one exception which I'm going to talk about later, but all the core functionality is from the standard library. So we don't have any fancy work-stealing queues; we have a standard channel, which requires one mutex.
We have a really fancy documentation system. Every crate that is published will be fetched by the docs.rs system, which then builds the documentation for you. Everything is built in a Docker container, which means the amd64 architecture, or x86-64 if you prefer Intel speak. Everything that builds on that platform will have its docs available at docs.rs/crate-name, and when you open this URL you will be redirected to the latest version. You can have many versions of the documentation, compare stuff, and it's handy.
So let's start with a channel. The channel is the magical queue I told you about before; the job queue is just a channel in Rust. We can write it like this: you have a channel returning a tuple with the sending and the receiving end. We can move the receiving end of the channel all over the place, move it to different threads, and we can copy, or clone, the sending end of the channel. So this is a multi-producer, single-consumer channel.
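As a minimal sketch with the standard library (the function name is mine, not from the slides):

```rust
use std::sync::mpsc;
use std::thread;

// Spawn three producers; each clones the sending end and sends its id.
fn collect_ids() -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    for id in 0..3 {
        let tx = tx.clone(); // every producer gets its own Sender
        thread::spawn(move || {
            tx.send(id).unwrap();
        });
    }
    // The single Receiver stays with (or moves to) one consumer.
    let mut received: Vec<i32> = (0..3).map(|_| rx.recv().unwrap()).collect();
    received.sort();
    received
}

fn main() {
    println!("{:?}", collect_ids()); // [0, 1, 2]
}
```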
The only problem with that is, if we have too-small messages, we can flood our system. This problem arises when you have NUMA systems. This laptop is technically a NUMA system; there's only one NUMA group. NUMA stands for non-uniform memory access, a model which allows more cores on one physical machine, and as soon as you start jumping from one group to another you get a performance penalty.
A
So
when
you
have
a
big
self-service
system
like
64
cores
or
more,
this
starts
to
matter
on
the
laptop
you
don't
care.
It
also
allows
you
to
group
tasks.
If
you
have
many
jobs,
you
can
use
a
know,
four
or
five
channels
for
each
group,
one
channel
and
then
start
consuming
partial
results
from
each
of
the
channels.
One caveat: if you use iter(), which means turn the receiving end into an iterator and iterate over everything that comes in, you must drop the initial sender, because the iterator will wait for all senders to go out of scope. If you don't do that, you will hang at some point when everything is done.
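A small sketch of the pattern (my own example, not the slide's code):

```rust
use std::sync::mpsc;
use std::thread;

fn sum_from_workers(n: u64) -> u64 {
    let (tx, rx) = mpsc::channel();
    for _ in 0..n {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(1u64).unwrap();
        });
    }
    // Drop the initial sender: rx.iter() only finishes once every
    // Sender has gone out of scope, so keeping this one would hang.
    drop(tx);
    rx.iter().sum()
}

fn main() {
    println!("{}", sum_from_workers(8)); // prints 8
}
```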
Basically, we have this line here where we use stuff: we use the ThreadPool from the library and we use the channel. Then we define four workers and eight jobs and create the pool. Sorry, this "let" is the declaration: let something be named here, and the compiler infers the type. After that we actually create a pool at this point.
After that we have a for loop and we spawn everything. At this point we create the channel; here we clone the sender and then we move the sender into the closure. I'm not sure if you can see that from back there, but this is a move, a double pipe and then a block, which is the syntax for a closure, or a lambda. Double pipe means no arguments for the closure, and the move statement here says that everything referenced in here that comes from outside will be moved inside, which technically means: copy it, and then don't access it outside. So we create a sender copy of the channel for each thread and immediately send a one, just the number one, and then after the loop here we drop the initial sender. You can say that explicitly with this function call, drop: free this memory here, at this point.
Yes, so what I didn't mention is the assert macro. Macros in Rust are hygienic, which means you don't have this copy/paste stuff from C. You have AST manipulation, so you don't get double execution or other weird stuff within the macro. It's just as if you wrote the expanded code there yourself; the compiler does that.
If we put the barrier inside an Arc, we can do synchronization. An Arc first, or first, a barrier. A barrier is something you call n times, and it will block until the nth call and then reset. So we can use that for, I don't know: if you have a barrier with a thread count of four, four threads can call the barrier more or less simultaneously, and as soon as the last one hits it, all of them continue. So that's the synchronization mechanism of a barrier.

To distribute the barrier into all threads safely, we have to use an Arc. Arc means atomic reference counting. Since we don't have any garbage collection in Rust, we have to make sure we don't lose memory on the way, so we use reference counting, and multi-threaded reference counting requires atomic operations. So we have an Arc, as you see here. The Arc is constructed with the whole barrier inside it, which means the Arc owns the data, and the Arc keeps track of how many references to the data are around.
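A minimal sketch of a barrier shared through an Arc (names are mine):

```rust
use std::sync::{Arc, Barrier};
use std::thread;

fn run_with_barrier(n: usize) -> usize {
    // The Arc owns the Barrier; cloning it only bumps an atomic refcount.
    let barrier = Arc::new(Barrier::new(n));
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let barrier = Arc::clone(&barrier);
            thread::spawn(move || {
                barrier.wait(); // block until all n threads arrive
                i
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    println!("{}", run_with_barrier(4)); // 0 + 1 + 2 + 3 = 6
}
```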
It's really easy to deadlock with this kind of synchronization, and the reason is that this assertion must hold true at all times. Oh, I forgot to change the font; I'm not sure if you have noticed, this is the math style of this equation. It's a really fluffy font, Fira Code is it called. It takes some combinations of symbols, for instance equals and greater-than, and merges them together to form a nice symbol. Makes your code look pretty; very confusing to strangers.
Now we can fill up to the point where we have exactly as many threads running as we wait for; it's capped. If your pool has just one set of tasks to solve, then you can go up to the maximum of the pool size. But usually this would lead to the situation where you have a worker pool with 5,000 workers executing 5,000 jobs at the same time, which will kill your machine.
A
So
that's
not
an
option,
but
as
soon
as
you
have
five
thousand
and
one
jobs
in
the
pool
with
only
5,000
workers,
it
will
that
look
because
the
frets
are
waiting
for
the
last
fret
to
also
wait
on
the
barrier,
but
it
will
never
get
scheduled,
so
it
will
never
be
able
to
reach
the
barrier.
So
your
system
will
just
halt
deadlock.
A
Maybe
it's
clear
when
I
show
you
the
example:
it's
it's
getting
messy
sorry,
so
we
create
pool
and
something
atomic.
This
is
an
atomic.
It
could
also
be
something
else
of
the
system,
maybe
in
network
connection
or
a
file
or
whatever.
Then
you
also
have
this
assertion
which
must
hold
true
or
we
will
deadlock
here.
We
create
the
barrier
and
since
we
want
to
wait
for
all
our
jobs
plus
our
own
fret,
so
we
know
whenever
this
barrier
passes,
we
have
the
result
that
we
are
waiting
for.
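A std-only sketch of that scheme (the talk uses the threadpool crate; here plain threads stand in for the pool, and all names are mine):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Barrier};
use std::thread;

fn run_jobs(workers: usize, jobs: usize) -> usize {
    // The assertion from the slide: more jobs than workers would
    // deadlock, because a queued job could never reach the barrier.
    assert!(jobs <= workers);

    let counter = Arc::new(AtomicUsize::new(0));
    // jobs + 1 parties: every job waits, plus our own thread.
    let barrier = Arc::new(Barrier::new(jobs + 1));

    for _ in 0..jobs {
        let counter = Arc::clone(&counter);
        let barrier = Arc::clone(&barrier);
        thread::spawn(move || {
            counter.fetch_add(1, Ordering::SeqCst);
            barrier.wait();
        });
    }
    // Once this passes, all jobs have finished their work.
    barrier.wait();
    counter.load(Ordering::SeqCst)
}

fn main() {
    println!("{}", run_jobs(4, 4)); // prints 4
}
```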
So this whole messy scheme is just for one thing: we would like to wait here until the whole pool has finished. It's really messy, and the only two options you had before were either do that, or use a channel and know exactly how many results you will receive; but the problem there is, if something panics you have to prepare for that, and lots of other crazy stuff.
I'm sorry, that's how it is: never join your own pool, you will deadlock instantly. So here we have an example, the same as before: we have a pool, we have some jobs for it, and we join. Easy: no more barriers, no more channels. If I have a pool that's working on files which don't overlap, like a thumbnail generator, I just run it and join the pool afterwards, and it will be done.
We have one new use here: we know the thread pool and we have the Arc; now we have AtomicUsize and Ordering. Ordering, if you don't know it, is how relaxed or how strict the memory access model for atomic operations is. In our case we have it very relaxed, which just means: I don't care exactly how or when my atomic write appears, but it must be correct.
In this example, reordering is not much of a deal because you basically have one instruction; but when you have a large program that does lots of stuff, and many threads are accessing many things at the same time, and maybe waiting for one thing or another, there comes a time where ordering is really, really important, because the compiler can decide to reorder your instructions if they're independent enough to do so.
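A minimal sketch of a relaxed atomic counter (my own example, not the slide's code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn relaxed_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // Relaxed: the increment itself is still atomic and is
                    // never lost; we just don't order it against other
                    // memory operations.
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap(); // join gives us a happens-before edge
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", relaxed_count(4, 1000)); // prints 4000
}
```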
There is something called a condition variable in the background, which I will show you later. So if we have many threads adding jobs and joining on stuff, we don't get a race condition at all. I have a test written for that; it's a really nasty one: two pools giving each other jobs and joining on each other, and it works. So, yes.
They spawn, I don't know, some million jobs onto the pool and calculate stuff. It's really fast. Maybe it would be even faster if they grouped, I don't know, segments of 100 jobs with 100 pixels within each job.

So, since you don't have any questions, let's go on to the architecture of the pool. I already hinted at it: this is the complete code for the structure, all the fields that are in the thread pool.
We have a sender, which is cloneable, and we have a reference to the shared data. For those of you who are new: we have this derive attribute up here. Derive means: dear compiler, I'm too lazy to write boilerplate code, just implement a clone operation, and the compiler will complain if it can't. There are also other implementations, like Debug or other fancy stuff like Eq; if they're easy enough for a type to be auto-generated, you can just tell it with derive to do that.
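The shape of such a struct can be sketched like this (field and type names are my guesses, not the crate's actual source):

```rust
use std::sync::mpsc::Sender;
use std::sync::Arc;

// Placeholder for the pool's shared bookkeeping state.
struct SharedData {
    name: Option<String>,
}

// derive(Clone) generates the boilerplate: clone the Sender,
// bump the Arc's reference count.
#[derive(Clone)]
struct Pool {
    jobs: Sender<Box<dyn FnOnce() + Send>>,
    shared: Arc<SharedData>,
}

fn main() {
    let (tx, _rx) = std::sync::mpsc::channel();
    let pool = Pool {
        jobs: tx,
        shared: Arc::new(SharedData { name: None }),
    };
    let clone = pool.clone();
    // Both handles point at the same shared data.
    println!("{}", Arc::strong_count(&clone.shared)); // prints 2
}
```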
Yes, so clone in general means: allocate new memory which either has the same meaning or, in this case, points to the same data. In this case we clone the sender and the shared data, which means when you clone a pool, you get a new entry point for submitting jobs to the pool, joining on it, or changing the amount of worker threads.
Yes, you don't need to wrap an Arc around the thread pool; you just clone it. Also, if you did use an Arc, you'd get the limitation that the stuff it takes is not mutable anymore, and you cannot change things if you only have an immutable reference, so an Arc would not be handy in this case. The shared data is rather simple. For now, let's focus on these three: we have the receiving end, and you only have one receiving end per channel.
That's why it's inside a mutex. We have this empty trigger and the condition variable, which are required for the join operation, and we have some bookkeeping stuff for the queues, and a name. This name is actually quite handy; you can do that in other languages too. You can name your threads, so when one crashes it's not just "a crashed thread, number, I don't know, 157", which is useless; you can name them.
This is a pool for fetching stuff, or calculating thumbnails, or whatever, and then when one of them crashes it tells you: hey, one of these threads crashed. You can do that with any thread; you can create a name for every thread if you like: this is my bubblegum thread, this is my whatever thread. Really handy if you have many of them.
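Thread naming can be sketched with the standard library (the name string is my own):

```rust
use std::thread;

// Named threads show up in panic messages instead of an anonymous id.
fn spawn_named() -> Option<String> {
    thread::Builder::new()
        .name("thumbnail-worker".to_string())
        .spawn(|| thread::current().name().map(String::from))
        .unwrap()
        .join()
        .unwrap()
}

fn main() {
    println!("{:?}", spawn_named()); // Some("thumbnail-worker")
}
```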
Oh, by the way, do you know what this means here, inside the angle brackets?
The reason for that is: usually the mutex in Rust takes data inside it and guards it against misuse, but we just use it as a trigger, so there is no need for any data; we just put the unit type in there, and the compiler will optimize all the safekeeping for actual data away, so it will reduce it to a plain mutex, like you'd write in C. Cool. So if the thread pool is out of work, it will notify the other threads. Maybe I should change the slides for that.
If we have no work, we acquire a lock on the trigger and then notify all the listeners. I think this is called the observer pattern in Java. This is the notifying end; on the other end we have the join, which has two whiles. I'm actually not sure if the outer while is necessary; I haven't found the time to investigate that.
Basically, it's just a safeguard: is there really no more work? Then try to acquire the lock, check again if there's no more work, and now we have the condition variable; you pass in the lock, which is really important. If you have seen Rust code, usually you have an ampersand before you pass in stuff that you get back out, but in this case we actually want to give the lock away, because when we do that, the lock can be unlocked.
A
So
when
it
enters
the
wait
function,
the
mutex
will
be
free
because
otherwise,
the
the
previous
thread,
the
other
fret,
which
will
try
to
lock
it.
This
will
never
return
if
we
don't
unlocked
the
mutex
first.
So
at
this
point,
when
we
are
waiting,
we
give
the
lock
away
and
wait.
This
will
wait
for
quite
some
time.
Sometimes
it
wakes
up
for
no
reason.
That's
the
reason
for
this
wine.
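The mechanism can be sketched with std primitives (a simplified stand-in for the crate's join, with my own names):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct Shared {
    active: AtomicUsize,
    empty_trigger: Mutex<()>, // carries no data, only used as a trigger
    empty_condvar: Condvar,
}

fn join(shared: &Shared) {
    let mut guard = shared.empty_trigger.lock().unwrap();
    // Loop because wait() can wake up spuriously.
    while shared.active.load(Ordering::SeqCst) > 0 {
        // wait() consumes the guard, unlocking the mutex so the
        // notifying side can take it; we get the lock back on wakeup.
        guard = shared.empty_condvar.wait(guard).unwrap();
    }
}

fn run() -> usize {
    let shared = Arc::new(Shared {
        active: AtomicUsize::new(1),
        empty_trigger: Mutex::new(()),
        empty_condvar: Condvar::new(),
    });
    let worker = Arc::clone(&shared);
    thread::spawn(move || {
        // The worker finishes its job, then notifies all joiners.
        worker.active.fetch_sub(1, Ordering::SeqCst);
        let _lock = worker.empty_trigger.lock().unwrap();
        worker.empty_condvar.notify_all();
    });
    join(&shared);
    shared.active.load(Ordering::SeqCst)
}

fn main() {
    println!("{}", run()); // prints 0 once the work is done
}
```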
I prevented it like this: when you submit a new job, before you enter it into the queue, you increase an atomic, this atomic queued counter, by one. So there is a state where, for a really short amount of time, there is one job accounted for that's not yet submitted, and then it takes time for the queue, for the channel, which is the queue, to reach the receiver.
It is also many operations: you start the execute function, give it the reference to your function, increase the counter, and then hand the reference to the function over into the channel, which takes time. Then, when all of this goes through, at the other end there is a mutex to be locked. Maybe it's already done; let's assume it's already done. So the second thread has the mutex, gets the reference to the function out of it, then has to free the mutex, unlock it, and then start the accounting, the bookkeeping.
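In order, the counting protocol looks roughly like this (a sketch with my own names, not the crate's code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::{channel, Receiver, Sender};

struct Counters {
    queued: AtomicUsize,
    active: AtomicUsize,
}

type Job = Box<dyn FnOnce() + Send>;

// Submission side: count first, then enqueue, so a job is never
// *under*-counted (it may briefly be over-counted).
fn execute(counters: &Counters, tx: &Sender<Job>, job: Job) {
    counters.queued.fetch_add(1, Ordering::SeqCst);
    tx.send(job).unwrap();
}

// Worker side: mark active, un-mark queued, run, mark done.
fn work_one(counters: &Counters, rx: &Receiver<Job>) {
    let job = rx.recv().unwrap();
    counters.active.fetch_add(1, Ordering::SeqCst);
    counters.queued.fetch_sub(1, Ordering::SeqCst);
    job();
    counters.active.fetch_sub(1, Ordering::SeqCst);
}

fn demo() -> (usize, usize) {
    let counters = Counters {
        queued: AtomicUsize::new(0),
        active: AtomicUsize::new(0),
    };
    let (tx, rx) = channel();
    execute(&counters, &tx, Box::new(|| {}));
    work_one(&counters, &rx);
    (
        counters.queued.load(Ordering::SeqCst),
        counters.active.load(Ordering::SeqCst),
    )
}

fn main() {
    println!("{:?}", demo()); // (0, 0) after the job has run
}
```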
Add one to "jobs processing", remove one from "jobs queued", and then actually start working on it. Even if it's a no-op, it has to deref the function, which takes more time than fetching two atomics; and then after that it has to do more bookkeeping, then it has to decrease the atomic active count, and after that do some more bookkeeping about whether it's still needed or not. So I think: yes, in theory, but highly, highly unlikely.
We could look at the code; I changed all the load and store operations to sequential consistency so that I don't get reordering issues at some point, because when the compiler thinks "you know, I can reorder that stuff", it gets messy. So I'm pretty sure that sometimes, for short amounts of time, there are one or two jobs too many.
Yes, and I think that's why it may be over-counted; the jobs are never under-counted, and that's what it should prevent. And even if the race condition occurred: let's imagine the pool is empty, and then you join on it and test for all that stuff. It could be that somebody else, while you're testing, is starting to insert a new job.
But then, is it wrong to say the work has completed? Because you will return from the join with the first thread while the other one is still trying to insert a new job. So it's racy, yes; is it wrong? It depends. If any of you has that kind of problem, you could also just use more synchronization, but I think it's sound so far.
Sure, but triggering the empty event and handling the condition variable will take time; plus these guys will have to acquire the lock again, because until they have the lock, this function will not return. So they have the mutex again when they return from the wait, and then the while loop starts over and checks if there is more work. If work has already been submitted at that point, they will just join again, just wait for another round. So that case is handled.
So my favorite is cargo-outdated. It's the first cargo extension I ever installed, because at some point, in the early development of hyper, it was just like: it's broken, why? And then, instead of just randomly updating, which would break stuff even more, you could ask cargo-outdated what's outdated and then explicitly tell this crate to go to that version, and do stuff that's still okay, that does not break your code, instead of just upgrading everything and destroying everything.
Your external crates go into the Cargo.lock file, the file cargo generates as soon as you compile, at least partially, for the first time. It's just bookkeeping: "I used this version to generate the code", and in the normal case, where the compilation succeeds, you will then be locked to a state of dependencies that worked at some point on that platform. I had to learn the hard way that Mac OS X is not your friend when you compile OpenSSL-related stuff.
Then we have cargo-update. If you have used cargo, you know the several default commands like build, test, doc and update; the standard cargo update will update your dependencies. The extension cargo-update adds a subcommand called cargo install-update, which will update every binary you installed with cargo install.
It's a little too silent for my taste sometimes, because when you just execute it, it will grab all your source files and just standardize them; there's a standardized form for Rust code. So I would recommend it: if you have an indentation war going on inside the project, just install rustfmt and be done with it. It's the easy solution for many of these problems.
It's the standard implementation, but it's also worth pointing out that rustfmt has a config file which you can place inside your project, where you can define per project: in this project I would like to have the indentation with two spaces, or with tabs, or with four, or seven and a half, whatever. Cool, so there we are. Any questions, comments?
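For example, a minimal rustfmt.toml placed in the project root (option names have changed across rustfmt versions, so treat these as illustrative):

```toml
# Project-wide formatting settings picked up by rustfmt.
max_width = 100
tab_spaces = 2
hard_tabs = false
```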