From YouTube: RustConf 2019 - Tokio-Trace: Scoped, Structured, Async-Aware Diagnostics by Eliza Weisman
Description
RustConf 2019 - Tokio-Trace: Scoped, Structured, Async-Aware Diagnostics by Eliza Weisman
tokio-trace is a new set of Rust libraries that provide primitives for recording scoped, structured, and async-aware diagnostics. Unlike traditional logging, tokio-trace emits structured diagnostics that model the contextual and causal relationships between events. tokio-trace was designed by the tokio project to solve problems with logging in asynchronous applications, but may be used in any Rust codebase. This talk presents the motivation and influences behind tokio-trace, introduces its core concepts, and demonstrates how it can be used.
Hello, RustConf: how are we feeling tonight? Okay, so I have a couple of corrections that I need to make before we can continue. Some of you may have seen on the program that it says this talk is about tokio-trace. That's not right! This talk is not about tokio-trace: we've renamed the library, and it's now called tracing. The rename just happened after the conference materials and the programs and stuff were printed. So here's my first announcement: it's not called tokio-trace anymore.
Also, it may say in my speaker bio that I have two awful cats. I regret my statement: I only have one awful cat. I have two cats, but only one of them is awful. And I hope all of you got the joke with the title music, right, that this talk is about logging, and that was Kenny Loggins (shout out to Adam, who made that pun first).
I've been writing Rust since 2015. I've been doing it professionally for over two years, and I am a core contributor to the Tokio and Linkerd2 open source projects. Some of you have probably also seen my Twitter, where I make bad programming jokes and post pictures of my cats. Cool. So I guess the question really is: why did I make another logging library, right?
This is supposed to make my slide go... oh, there we go. Okay, so, right. First of all, I don't like to call it logging; I like to call it GNU/logging. Cool, okay. So, in my speaker notes it says "pause for laughter," and I'm really glad that there was laughter. That was a joke, but I really don't like to call it logging; I like to call it in-process tracing. This is a big, very buzzword-sounding name, and we're going to talk a little bit more later about what it actually means. But to talk about the motivations for why I made another not-logging library, I have to ask some questions of the audience. How many of you are using futures, or now async/await? Show of hands? Okay, that's a respectable number. And are you logging in those futures at all? And does this make sense, like, do you get something out of this that's readable?
There's supposed to be a "pause for laughter" here, but I guess you're all just so sad about this. If you're writing any kind of high-performance network application or network library, you need to use some kind of asynchronous programming, but it presents some really unique challenges for diagnostics. When we're executing asynchronously, the execution of the program is multiplexed across different tasks, and when those tasks require something from I/O, or require something from another task, instead of blocking the thread, they yield, and we start executing another task. This is really good, because it lets us use all of our CPU time, saturate the NIC, whatever the bottleneck is. But the problem is that, because of this, we can no longer really rely on events happening in order.
If we're logging in these asynchronous tasks, those log messages come out interleaved, or we see different contexts all logging information that might look very similar, and we can't easily trace what's going on between those log records by looking at the order in which they were displayed. So how do we get good, usable diagnostics from asynchronous systems? I'm going to touch on three things that I see as the big pillars we require if we want diagnostics in asynchronous systems that make sense: we need to capture context and causality, and we want some form of structure.

So, context: when we record that some event has occurred, we don't just need to know where it happened in the source code, which our existing logging tools are very good at. They have information about line numbers, modules, and files, and that's all great, but it doesn't really tell us the runtime context in which some recorded event occurred. For example, if we have some HTTP server that's processing requests, that context might include things like: what client did this request come from? What was the request's HTTP method? What was its path? What were the headers on that request? In synchronous code, so not async, we can infer that context by looking at log messages in order, right? If we see "accepted request from this IP" and then "parsed a request with these headers" right after each other, then we know: oh, this is the IP that sent that request. But in async code, where we're switching between these multiplexed tasks based on readiness of I/O and other resources, we switch between those contexts very rapidly, and we can't really rely on the ordering of log messages to determine context.
Second, we care about causality. There are complex chains of cause and effect in these asynchronous systems. If I have some tasks running in the background, like a DNS lookup or a database connection or something, what caused them to start? What caused those tasks to perform certain work, and so on? We can't rely on log ordering to determine this kind of causality either, so we need to record it explicitly. Finally, it's helpful to have structured diagnostics.
Traditional logging is based on human-readable textual messages. We'd prefer diagnostics, though, that are machine-readable, so that we can interact with them programmatically. Sure, you can programmatically interact with unstructured logs if you use awk or grep, but it's not good. You know, if anyone uses tracing after this talk and you ever have to grep through a log again, you can come to my house and tell me, and I will give you $20.
You can record typed data as typed values, and you can interact with it as typed data. So: tracing. Tracing is a framework for instrumenting Rust programs with contextual, causal, and structured diagnostics. As I said before, it used to be called tokio-trace, but it's not called that anymore. Tracing is part of the Tokio project, so it's maintained by the people who brought you Tokio, but it doesn't require the Tokio runtime: you can use it in synchronous programs, and you can use tracing in asynchronous programs that aren't using Tokio.
In this demo, I'm running a web server, and this web server implements an endpoint where you send HTTP requests whose path is a single ASCII character, and then we implement something called character duplication as a service: we send you a response whose body is that character, duplicated a number of times equal to the received content-length header.
So, you know, this is very important: because web scale, we have to do this as a microservice, and this is absolutely mission-critical, right? But we happen to know that we're experiencing an elevated error rate in this service, so we're looking at these logs, and as you can see it's logging everything that's happening, and it's scrolling really fast. It's really hard to actually figure out what's going on here, even though we're recording all of this information. So we have very rich, verbose output.
Something we can do: in the example server, I've added a little admin endpoint where you can send a POST request to set a new filter that's used to reconfigure which tracing events are recorded. So what we can do is start out with curl -d, and if you're familiar with the env_logger crate, you might recognize this syntax: we're going to look at the load generator only.
Okay, so we've dynamically reconfigured our diagnostics so that we're only looking at the load generator, and here we see that the contexts in which these events are occurring are being recorded, with data about the request in which these errors are occurring. So we have our 400 errors, the 404s, and we have our 500 errors. Looking at the 404s, we see there's this req.path field, and for the 404s it is just a slash, while for the 500 errors it is /z. And we can determine that, okay, the 404s are normal.
The spec for this example app is that you send a path with a single ASCII character. The load generator is failing to do that, so the server is returning a 404: it's behaving correctly. But the 500s are worrying, so we want to look at the diagnostics from the server to figure out what's causing those 500s. And here's a very cool thing we can do that you can't do with traditional, unstructured logging. If we go back to curl and we send a new request, this one is going to be a little fancier. This syntax is a superset of env_logger's filtering syntax, right, where you have these pairs of log targets and verbosity levels (I don't call them log levels, because this isn't logging, right?). So we're going to introduce some new syntax. Now we have these square brackets, and the square brackets indicate that we want to filter on a dynamic context.
We care about a field and its value, and specifically what we want is the field req.path (req for request), and we want it to have the value /z. You'll see I have to escape the quotes here because of my terminal, but all I'm saying here is: I want to see any requests whose path was /z. I want to see the context where that event is being handled, and I want to see all the diagnostics in that context. So I send that request.
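The filter directives in the demo look something like `load_gen[{req.path="/z"}]=trace`: a target, an optional bracketed section selecting a dynamic span context by field and value, and a verbosity level. The real grammar lives in tracing's subscriber machinery and is much richer; as a rough sketch of the shape of these directives only, here is a toy parser (the `Directive` struct and `parse_directive` function are inventions for this illustration, not tracing's API):

```rust
// Toy parser for a filter directive shaped like the one used in the demo:
//   target[{field=value}]=level
// e.g. load_gen[{req.path="/z"}]=trace
// This only illustrates the syntax; the real grammar is richer.

#[derive(Debug, PartialEq)]
struct Directive {
    target: String,
    field: Option<(String, String)>, // (name, value) from the bracketed section
    level: String,
}

fn parse_directive(s: &str) -> Option<Directive> {
    // The verbosity level comes after the final '='.
    let eq = s.rfind('=')?;
    let (head, level) = (&s[..eq], &s[eq + 1..]);

    // An optional `[{field=value}]` section selects a dynamic span context.
    if let Some(open) = head.find("[{") {
        let close = head.rfind("}]")?;
        let inner = &head[open + 2..close];
        let mut kv = inner.splitn(2, '=');
        let name = kv.next()?.to_string();
        let value = kv.next()?.trim_matches('"').to_string();
        Some(Directive {
            target: head[..open].to_string(),
            field: Some((name, value)),
            level: level.to_string(),
        })
    } else {
        Some(Directive {
            target: head.to_string(),
            field: None,
            level: level.to_string(),
        })
    }
}

fn main() {
    let d = parse_directive(r#"load_gen[{req.path="/z"}]=trace"#).unwrap();
    assert_eq!(d.target, "load_gen");
    assert_eq!(d.field, Some(("req.path".into(), "/z".into())));
    assert_eq!(d.level, "trace");
    println!("{:?}", d);
}
```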
Look at that. We're now seeing the entire lifespan of these requests, right? We see where the server receives the request; we see the headers there. Then we see that we're handling the request with a handler, and then we see this error, logged at a very high verbosity level, saying "I don't like this letter," and we're replying with the 500. Okay, so we found the bug. It's not a good bug.
I put it there on purpose. But you see here how we can trace the entire lifespan of this request, and we can look only at requests that match this filter. Even though this server is under really high load and lots of other stuff is going on, we've cut out all of that noise, and we're looking only at the contexts we care about. So that's a demo of the power of this kind of structured diagnostics.
I'm going to go back now to my slides. Is this... are we looking at the right slides? Okay, end demo, all right. So how does this tracing thing actually work, right? You saw a cool demo, everyone was wowed. How do we do this? What's actually happening here? What did I mean when I said in-process tracing? Are any of you familiar with distributed tracing technologies like OpenTracing, OpenTelemetry, Zipkin, or Jaeger?
These are diagnostic tools for distributed systems. Since I only see a couple of hands, I'm going to explain very briefly. These tools are for diagnosing distributed systems: they are designed to let you track contexts as they move between nodes. So you have requests to one server that cause requests to another server, and the way this works is that we propagate some headers, and those headers carry an identifier for a context called a span. And the insight behind the tracing crate (not tracing the concept; sorry, it's a dictionary word, I know we can have collisions with it) is that asynchronous programs are kind of analogous to distributed systems writ small, right? An asynchronous application in Rust has concurrently running tasks that are communicating through message passing. Message passing is asynchronous, and it's fallible, so it's sort of like the network. I mean, the network is worse, right? It's even more asynchronous and even more fallible, but it's a similar concept.
A
The
only
difference
is
that
now
everything
is
running
in
the
same
address
space,
so
we
can
apply
the
same
ideas
that
we
use
for
tracing
in
distributed
systems
to
tracing
in
a
synchronous
systems
running
in
a
single
process.
So
we
introduced
some
core
primitives
in
tracing
and
our
core
primitives
are
called
spans
and
events.
Water
spans
a
span
represents
a
period
of
time
in
the
execution
of
a
program.
It
has
a
time
when
it
starts
and
a
time
when
it
ends.
So it's the time period between those two points. Spans in tracing can also be entered and exited, independently of being created and ending, and we can enter and exit them multiple times. This is how we track asynchronous execution that moves between tasks: a span represents a context, and a task might be executed within that context for a period of time, then we yield, we execute some other task, we yield, we execute some other task, right? So over the lifetime of a task, its span might be entered multiple times, and we can record both how long the task existed and how long it was actively being driven, or executed. We can also separate the mere existence of this thing from the times when we're actually in the context of that work, right? Then, here's how we make a span: we have the span! macro, and you get back this Span object. Then we have an enter method on the span object, and that gives you back an RAII guard.
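The enter-guard pattern just described can be sketched with plain `std`. Everything in this toy is an invention for illustration (tracing's real types are `Span` and the guard returned by `Span::enter`): it keeps a thread-local stack of span names, pushing on enter and popping when the RAII guard is dropped:

```rust
use std::cell::RefCell;

// Thread-local stack of the spans we are currently "inside".
thread_local! {
    static CURRENT: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

// RAII guard: while it is alive, we are considered to be inside the span.
struct Entered;

fn enter(name: &'static str) -> Entered {
    CURRENT.with(|c| c.borrow_mut().push(name));
    Entered
}

impl Drop for Entered {
    fn drop(&mut self) {
        // Dropping the guard exits the span.
        CURRENT.with(|c| {
            c.borrow_mut().pop();
        });
    }
}

fn current_context() -> Vec<&'static str> {
    CURRENT.with(|c| c.borrow().clone())
}

fn main() {
    let _outer = enter("shaving_yaks");
    {
        let _inner = enter("shave"); // scoped to this block
        assert_eq!(current_context(), vec!["shaving_yaks", "shave"]);
    } // `_inner` dropped here: we have exited the "shave" span
    assert_eq!(current_context(), vec!["shaving_yaks"]);
}
```

Because the guard is tied to a scope, you cannot forget to exit the span: leaving the scope exits it for you.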
So you can scope the entry of a span to a stack frame, or to a scope inside a function, and as long as you hold this guard, you are considered to be executing inside this span. When you drop the guard, you exit the span, right? We also have kind of a fun tool, which is the #[instrument] procedural macro.
Events are singular moments in time. An event is basically a log record, but more structured than a log record: we can add structured fields to them, which are key-value pairs, and tracing subscribers can consume those pairs as a subset of Rust's primitive types. So there are some primitives we know about, like strings, integers, and so on, and you can interact with the values as those types, instead of: oh, it's a string, cool, I have a big string with a bunch of stuff in it.
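As a rough illustration of what typed fields buy you, here is a toy event whose values are a small set of typed primitives rather than one big formatted string. The `Value` enum and `Event` struct are inventions for this sketch; tracing's real field API is visitor-based:

```rust
use std::collections::BTreeMap;

// Field values are recorded as a small set of typed primitives,
// so a consumer can match on the type instead of parsing a string.
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Str(String),
    U64(u64),
    Bool(bool),
}

struct Event {
    message: &'static str,
    fields: BTreeMap<&'static str, Value>,
}

fn main() {
    let mut fields = BTreeMap::new();
    fields.insert("yak", Value::U64(3));
    fields.insert("shaved", Value::Bool(true));
    fields.insert("who", Value::Str("eliza".to_string()));
    let event = Event { message: "yak shaved", fields };

    // A consumer interacts with the value as its original type,
    // instead of grepping it back out of formatted text:
    match event.fields.get("yak") {
        Some(Value::U64(n)) => assert_eq!(*n, 3),
        _ => panic!("expected a u64 yak count"),
    }
    println!("{}: {:?}", event.message, event.fields);
}
```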
A subscriber is basically like a logger, right? You know, in the log crate there's a Log trait, and the Log trait has like two methods on it. Subscriber has a few more methods, because it does a lot more than a logger, but it's the same thing in that it's the main extension point for tracing, and it actually collects the data that's emitted. So libraries can provide subscriber implementations that do all kinds of different stuff, like recording metrics, printing logs to standard out, whatever.
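To make the comparison with the `Log` trait concrete, here is a toy trait in the same spirit, with an implementation that just counts events per target. The method names and signatures below are simplified inventions for illustration, not tracing-core's actual `Subscriber` trait:

```rust
use std::collections::HashMap;

// A sketch of the Subscriber idea: like the `Log` trait, but it also
// tracks spans, so it has a few more methods.
trait Subscriber {
    fn enabled(&self, target: &str) -> bool; // filtering hook
    fn new_span(&mut self, name: &'static str) -> u64; // returns a span id
    fn enter(&mut self, span: u64);
    fn exit(&mut self, span: u64);
    fn event(&mut self, target: &str, message: &str);
}

// One possible implementation: count events per target.
#[derive(Default)]
struct Counter {
    next_id: u64,
    events: HashMap<String, u64>,
}

impl Subscriber for Counter {
    fn enabled(&self, _target: &str) -> bool {
        true
    }
    fn new_span(&mut self, _name: &'static str) -> u64 {
        self.next_id += 1;
        self.next_id
    }
    fn enter(&mut self, _span: u64) {}
    fn exit(&mut self, _span: u64) {}
    fn event(&mut self, target: &str, _message: &str) {
        *self.events.entry(target.to_string()).or_insert(0) += 1;
    }
}

fn main() {
    let mut sub = Counter::default();
    let span = sub.new_span("shaving_yaks");
    sub.enter(span);
    sub.event("server", "request received");
    sub.event("server", "request handled");
    sub.exit(span);
    assert_eq!(sub.events["server"], 2);
}
```

The point is the shape: the same stream of span and event notifications can drive a console logger, a metrics recorder, or a profiler, depending on which subscriber consumes it.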
A
So
how
do
we
actually
use
this
thing?
Right?
I'm,
gonna
work
through
a
little
example.
This
example
is
drawn
from
a
very
important
domain,
which
is
yak
shaving.
Some
of
you
may
have
seen
the
version
of
this
talk.
That
I
gave
it
a
Russ
meet-up
in
San
Francisco,
where
I
had
a
slide.
Much
like
this
one
I
have
since
been
informed,
I
told
you
I,
had
a
few
corrections
to
make
I've
since
been
informed
that
the
animal
on
that
slide
was
a
yak
or
was
not
a
yak.
It
was
a
cow
Google
images
mislead
me.
A
This
animal
is
the
yak
I,
have
it
on
good
authority,
so
I
take
feedback
from
anyone
in
the
audience,
and
somebody
told
me
this
is
not
a
yak,
so
I
fixed
it.
Here's
a
real
act,
so
we're
shaving,
some
yaks
right
looping
over
some
yaks
and
shaving
them,
but
let's
say
we're
shaving,
those
yaks
asynchronously
right.
So
this
shaved
yak
function
is
asynchronous.
It's
returning
a
future.
It
could
be
an
async
FN
or
it
could
just
be
a
function
that
returns
a
future.
A
It
doesn't
really
matter
so
we're
looping
over
these
yaks
we're
spawning
a
task
to
shave
that
yak,
so
we
create
this
span
called
Shaving
yaks.
This
is
where
we're
gonna
do
all
the
work
of
Shaving
these
yaks.
We
can
annotate
it
with
whatever
contextual
information
we
want.
Like
say
number
of
you
actually
were
shaving,
and
then
we
enter
that
span.
That
indicates
we're
executing
in
the
context
of
shaving
these
yaks,
and
we
have
this
enter
guard
that
keeps
us
in
the
span
and
then
we
loop
over
the
yaks.
A
You
know,
okay,
so
then,
every
time
we
iterate
that
loop
for
a
new
yak,
we
can
create
a
span
again.
It's
called
shave
here
and
it
records
the
current
yak
and
the
shaving
span
is
a
child
of
this
span
where
we're
shaving
all
the
acts.
So
it
inherits
that
context.
So
this
span
inherits
the
yak
count
from
its
parent
and
it
adds
the
span
that's
or
the
Yak,
that's
currently
being
shaved.
So
we
have
this
instrument
Combinator.
This
attaches
a
span
to
a
future.
A
So
whenever
we
pull
this
future,
that's
shaving
the
yak
we
entered
that
span
when
we
finished
pulling
it,
we
exit
right,
and
we
can
also
put
that
on
a
sink
box.
So
here
now
we're
also
recording
an
event.
After
we
finished
shaving
the
Yak,
we
call
shaved
yak.
We
await
the
result.
We
debug
is
logging
in
the
debug
level.
Do
we
shape
that
yak
turn?
Okay,
we
spawn
this
async
block
the
instrumented
with
the
span
we're
reshaping
that
yak.
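The instrument combinator just described can be sketched in plain `std`: wrap a future so that a span is entered on every poll and exited when the poll returns. The types in this toy version are inventions (the real combinator is tracing's `Instrument`); it just prints enter/exit so the mechanism is visible, and it polls by hand with a no-op waker instead of a runtime:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Wraps a future: the span is "entered" for the duration of each poll.
struct Instrumented<F> {
    inner: F,
    span: &'static str,
}

impl<F: Future + Unpin> Future for Instrumented<F> {
    type Output = F::Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        println!("enter span: {}", this.span); // entered on every poll...
        let result = Pin::new(&mut this.inner).poll(cx);
        println!("exit span: {}", this.span); // ...exited when the poll returns
        result
    }
}

// A future that is pending once before completing, so the span is
// entered and exited twice over its lifetime.
struct Yields(bool);

impl Future for Yields {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        if self.0 {
            Poll::Ready(42)
        } else {
            self.0 = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// Minimal no-op waker so we can poll by hand without a runtime.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Instrumented { inner: Yields(false), span: "shave" };
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Pending);
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(42));
}
```

This is why spans can be entered many times over a task's lifetime: every poll of the instrumented future re-enters the span and exits it again when the task yields.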
A
So
again,
this
debug
message
inherits
the
yak
that
we're
shaving
the
current
yak
and
it
inherits
yak
count
and
so
on.
We
have
this
tree
of
scopes
in
our
program
to
represent
the
context
in
which
we're
executing
finally
okay.
So
we
can
like
match
on
that
and
we
have
either
an
error
or
we
don't
have
an
error.
We record that there was an error, or we record that there wasn't an error, and we record this error as a structured value, so actually as an instance of the standard Error trait from the standard library. We're not just printing it or formatting it to a string: the tracing subscriber can actually go, oh, this is an error, it's a dyn Error. We can downcast it, we can look at its source, we can format it with Debug, we can format it with Display, and when the standard library adds backtraces to errors, we can look at its backtrace, and so on and so forth. Right, it's actually a typed object: a Rust struct, or a Rust trait object, that we can interact with.
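Here is a small self-contained sketch of why that matters: a consumer holding the error as `&(dyn Error + 'static)` can still format it, but it can also downcast back to the concrete type and read its fields as typed data. The `YakError` type and `consume` function are made up for this illustration:

```rust
use std::error::Error;
use std::fmt;

// A concrete error type with a typed field.
#[derive(Debug)]
struct YakError {
    yak: u32,
}

impl fmt::Display for YakError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to shave yak #{}", self.yak)
    }
}

impl Error for YakError {}

// A consumer receiving the error as a trait object, not a formatted string:
fn consume(err: &(dyn Error + 'static)) -> Option<u32> {
    // It can still format it with Display or Debug...
    println!("error: {err}");
    // ...but it can also downcast and interact with it as typed data.
    err.downcast_ref::<YakError>().map(|e| e.yak)
}

fn main() {
    let err = YakError { yak: 3 };
    assert_eq!(consume(&err), Some(3));
}
```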
Again, we don't need to record anything in these individual events about, like, what yak we're shaving, or how many yaks we're shaving, because they inherit that from the span context in which they occur. Great. It also works very nicely with the log crate. One way in which it works nicely is that our macros are a superset of the log crate's macros.
So if this compiles, right, here I'm using log and then I'm logging; if I want to switch to tracing in my application... oh, sorry, do you see the difference? use log, use tracing: same syntax. We have other syntax for doing more things, like creating these structured fields, which log only just now has a concept of, but its macros don't; I'll get into that later. But you have a superset of log's syntax.
A
So
if
you
want
to
migrate,
you
can
just
drop
in
tracing
and
then
you
can
go
back
and
add
more
structured
information,
more
spans
whatever,
but
it's
very
easy
to
adopt,
and
then
you
can
incrementally
roll
it
out.
We
also
have
adapters.
They
let
you
record
log
records
as
tracing
events
and
tracing
events
as
log
records.
So
if
you
have
a
dependency
that
omits
log
records,
you
can
record
them
as
tracing
events
that
are
within
that
span
hierarchy,
even
though
the
dependency
doesn't
know
about
tracing.
Finally, any runtime instrumentation has performance costs, right? You are doing something to record this data that you otherwise would not be doing. You can't get away from that; there's no such thing as a free lunch. But tracing's goal is to ensure that you only pay for what you eat. It's not a free lunch, but you don't have to pay for the whole lunch: you just pay for the parts that you want.
We have tried really, really hard to design these APIs to ensure that there are no costs you pay for things you are not actually using. So what does that mean? First of all, when any instrumentation is disabled, like by the filtering we were doing in the demo, we've made sure that it is basically free: there's like one atomic load and a branch. So it's not free free, but it's close; it's about the same cost as skipping a disabled log line in the log crate.
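The "one atomic load and a branch" fast path can be sketched with a per-callsite cache. This is a simplified invention for illustration; tracing's real callsite registry also re-evaluates these caches when filters are reconfigured, which is omitted here:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Per-callsite cache of whether any subscriber cares about this event.
const UNKNOWN: usize = 0;
const DISABLED: usize = 1;
const ENABLED: usize = 2;

static CALLSITE_INTEREST: AtomicUsize = AtomicUsize::new(UNKNOWN);

fn expensive_filter_check() -> bool {
    // Imagine a regex or filter-directive match here; it only runs the
    // first time (and, in the real thing, again when filters change).
    false
}

fn event(record: impl FnOnce()) {
    match CALLSITE_INTEREST.load(Ordering::Relaxed) {
        DISABLED => (), // the cheap path: one load, one branch
        ENABLED => record(),
        _ => {
            // First hit: evaluate the filter once and cache the answer.
            let enabled = expensive_filter_check();
            CALLSITE_INTEREST.store(
                if enabled { ENABLED } else { DISABLED },
                Ordering::Relaxed,
            );
            if enabled {
                record();
            }
        }
    }
}

fn main() {
    let mut recorded = 0;
    event(|| recorded += 1); // first call evaluates the filter, caches DISABLED
    event(|| recorded += 1); // later calls hit the cached fast path
    assert_eq!(recorded, 0);
    assert_eq!(CALLSITE_INTEREST.load(Ordering::Relaxed), DISABLED);
}
```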
A
We
cache
the
evaluation
of
filters
so
that,
if
you
have
like
a
regex
that
determines
if
something
is
enabled
you
don't
have
to
like
run
the
regex
over
and
over
and
over
for
the
same
thing.
So
we've
optimized
this
a
lot.
We've
also
designed
the
subscriber
API
to
ensure
that
you
don't
pay
cost
by
default.
So,
depending
on
the
use
case,
a
subscriber,
that's
recording,
trace
events
might
do
a
lot
of
different
things.
If you're doing time profiling (and there's a crate that does time profiling on top of tracing), you need to make syscalls to get timestamps. If you're logging with timestamps, you need to make syscalls to get timestamps. But if you don't need those timestamps, you don't need to make that syscall to get that timestamp.
Finally, it's worth noting that we're really trying to bootstrap a whole ecosystem of libraries here. There are the core crates: tracing-core and tracing, which are like the facade layer that you actually depend on, that makes everything work. But on top of that, we have to actually implement these subscribers that have different behaviors. For example, we have a tracing-fmt crate, which is what I was using in the demo, and it implements logging trace events to the console. But there are all kinds of other things you can do, like metrics, or time profiling.
Jon Gjengset has a tracing-timing crate that lets you make histograms of how long a span takes to execute, which I think is very cool. But there's a lot of other really neat stuff we can build that consumes this unified layer of instrumentation to output any kind of diagnostics we want, and I think I've probably only thought of a tiny amount of it. So I'd really like to see what everyone else can come up with.
So here's how you can get involved. It's on crates.io; all of the core crates are. We have a GitHub repository in the Tokio organization that has all of the core tracing crates, and we also have a discussion group on Gitter, and so on and so forth. So please, if you're interested, open a pull request or an issue. If there's something you want to see, I love to get issues, I love feature requests, and if anyone actually wants to write those features, they can do that too.
That would be great. So please check it out, try it out; we would love to hear from you. Here are some links. If you have any questions, reach out to me after class; here's my email address and my Twitter, and you can get the slides from my website, or I think they're on the RustConf website, so you don't have to keep taking pictures like I see people doing. This is the only slide you need to take.