First of all, you're probably all wondering who I am. My name is Eliza Weisman. I'm a systems engineer at Buoyant, here in San Francisco. I've been writing Rust since 2015, and I've been doing it professionally for almost two years now. I contribute to the Tokio, Tower, and Linkerd open-source projects, and some of you have probably seen my wildly popular Twitter account, where I post stupid jokes and pictures of my cats. Okay, great. So you're probably wondering: why did I write another logging library, right?
We already have a pretty robust ecosystem for logging. We have the log crate. We have the env_logger crate. We have the fern crate. We have the slog crate. We have... okay. Well, first of all, I don't like to call it logging; I prefer to call it GNU/logging. That was a joke. What I actually like to call it is in-process tracing, and we'll talk about what that actually means in just a moment.
In an asynchronous system, tasks yield and we start executing another task, and when the I/O that the first task requires becomes ready, we switch back to the original task. Because we do this switching between contexts so rapidly, we can get log messages that are interleaved. We can get situations where we see the same log message a bunch of times, because we have tasks executing on multiple cores. It's just really hard to tell what's actually going on in these asynchronous systems.
So context means that when we record an event that occurred, we need to know not just where in the source code it happened, but what the actual runtime context around the event was. For example, if we have a server that's processing some incoming requests, that context might include: what client did this request come from? What were the request's method, path, headers, and so on?
Second, we want to capture causality: what other events caused some event to occur? So if we have some task running in the background that's driving a DNS resolution or a database connection or something, what caused that background task to start? What request required that DNS lookup? In async systems, again, we can't really rely on log ordering to infer causality, so we need some way of actually recording that data. Finally, we'd really like to have structured diagnostics.
Traditional logging is based on text messages. They're human-readable, but it's much better if our diagnostics are machine-readable, so we can interact with diagnostic data programmatically. You can do that to some extent with textual logs if you use grep, right? If you have some field and you want to see all of the logs where that field is equal to three, you can just grep for three. But you can't express more complex concepts that way. Say I want an inequality, values equal to or greater than three: what can I do? I can't really grep for that. Sure, you could start actually parsing those logs and treating them as typed data, but that's a whole lot of work. It's really better if our diagnostics record typed, structured data from the start, so that it never just becomes text messages that require human intervention to understand. So I'm going to pause the slides for just a moment, and we're going to start with a quick demo which I have prepared ahead of time.
Switching to my terminal. Can everyone see the terminal? No, they do not... okay, just a second. All right, do you see the terminal now? All right. And Harrison wants to join; I've been told that Adam is manually accepting everyone who wants to join the live stream. Okay, so I have an example that I'm going to be running here.
This example is based on the library hyper, which I'm sure some of you have used. It's an HTTP implementation in Rust, and this is based on an example from hyper: an echo server example. It's quite simple. All I've done is take this example and modify it very slightly, adding some tokio-trace diagnostics to it. So let's start running the example.
Is this readable? Okay. So we're already getting something back from our server. I'm going to try to make this second terminal a little smaller, so we can fit a little bit more on screen. We see that we've gotten something back from our server: it's running, it has bound a local address, and it's printed the local address the server is listening on. Here we see that it's localhost, bound to port 3000. So what happens if we actually start interacting with our server and send it a request?
A
Can't
type
today
can't
type
it
all.
Oh,
my
keyboard
is
a
bit
of
an
angle
here,
so
if
I
send
a
request
to
our
server,
we
see
these
Diagnostics
come
back,
and
so
now
we
see
not
only
that
we're
running
a
server,
but
we
have
accepted
some
connection
and
it's
from
this
remote
address
which
is
curl
on
some
ephemeral
port,
and
we
have
a
request
on
that
connection.
The
method
is
get.
The path is slash. So we received this request and we sent back a response, and the body was "try posting data to /echo". Right, so this formatter that we're seeing here has some scopes in it, and these scopes are indented. Everything that's indented here is a child of some scope. So the request is a child of the connection, the response is a child of the request, the connection is a child of the server, and so on. Now, this is probably not the way you would want to format your tokio-trace diagnostics in production, but for the purposes of this demo, I think it gives us a nice visual way of expressing that we have some nested scopes here. So the hyper echo server told me we should post some data to /echo.
So now we see that we have a separate connection and a separate request, and that request has a different method, a different path, a different response kind, and a different response body. We can now see that we have this logical tree of scopes that own each other. If we had some traditional logging, if we were just using the log crate, these lines would all just be individual textual messages, and if we were to send a great deal of requests at the same time...
A
Concurrently,
if
we,
if
all
of
you
were
to
curl
with
this
echo
server,
you
can
do
that.
It's
not
exposed
to
the
internet,
but
if
you
were
to,
we
might
start
serving
some
of
those
requests
concurrently
right,
because
I'm
using
a
multi-threaded,
runtime
I
have
as
many
cores
as
my
macbook
has
to
serve
requests
from.
So
we
might
be
serving
a
whole
lot
of
requests.
We
might
be
writing
out
some
different
echo
bodies
to
each
of
those
requests
and
if
we
were
doing
that,
we
might
see
a
big
massive
logs
here.
Somehow computers are hard... here we go. So I'm going to switch back to my slides. We've seen the demo, and the question we have now is: how does tokio-trace actually work? Unless there are any other questions, but I think this is the one that's probably on everyone's mind right now. We're going to start answering it with: what did I mean earlier when I said "in-process tracing"? Is anyone here familiar with the concept of distributed tracing systems? These are technologies like OpenCensus, OpenTracing, Jaeger, Zipkin.
They're designed around tracking contexts as they move between nodes in a distributed system, so that we can correlate events that occur on one machine with events that occur on another machine. This works by, for example, tagging a request moving through our distributed system with some kind of identifier. We can track that identifier if all of the nodes participating in the service of this request somehow forward it. So we can track that context, and we can correlate: this machine...
A
Did
this
thing
in
service
of
this
request?
And
so
we
we
have
this
way
of
tracing
through
these
distributed
systems.
So
these
tools
are
intended
for
debugging
large-scale
distributed
systems,
but
the
insight
behind
Tokio,
trace
and
I
think
something
that
really
informed
its
design
at
a
basic
level
is
the
idea
that
asynchronous
applications
are
sort
of
like
distributed
systems
writ
small
sorry.
So
you
have
tasks
running
concurrently
that
are
sort
of
like
individual
nodes
in
a
distributed
system,
you're
no
longer
communicating
by
synchronously
calling
functions.
You're now communicating through asynchronous message passing. You have some channels with which these tasks interact, and that's sort of like sending messages on the network, right? It's fallible, it's asynchronous, just like the real network. So your asynchronous application is kind of like a little miniature distributed system running within one address space, and a lot of the ideas in tokio-trace are drawn from this world of distributed tracing, because I think the same concepts apply really nicely to tracing within a single process.
So we have some core primitives in tokio-trace. We're based around modeling the world through the idea of spans and the idea of events. So what are spans? A span represents a period of time during which our program is executing in some context. A span has a beginning and an end, which are points in time, and then it has times when it was entered and times when it was exited. When we enter a span, that means we are executing in that context; when we exit it, we are no longer in that context. We might enter and exit a span multiple times, but eventually it will have started and eventually it will have ended, so it represents a discrete period of time. And here we have a little example of actual code for creating a span: we have the span macro.
The span macro gives you back a span handle, which exposes a whole bunch of methods, and the most important one is that you can now enter this span. The enter function takes a closure or a block, and inside of this closure we execute some code. The code inside of this block happens inside of the span, and then, when it finishes, we have exited the span. Great: spans.
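The "enter takes a closure, and the code inside runs inside the span" behavior can be sketched with a toy, self-contained stand-in. This is not tokio-trace's actual API — the real span handle carries IDs, fields, and a subscriber — it only models the entering/exiting semantics described above:

```rust
// A minimal stand-in for tokio-trace's span-entering semantics.
use std::cell::RefCell;

thread_local! {
    // Stack of currently-entered span names, so nesting can be observed.
    static CURRENT: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Span {
    name: &'static str,
}

impl Span {
    fn new(name: &'static str) -> Self {
        Span { name }
    }

    // Run `f` "inside" the span: push our name for the duration of the call.
    fn enter<T>(&self, f: impl FnOnce() -> T) -> T {
        CURRENT.with(|c| c.borrow_mut().push(self.name));
        let result = f();
        CURRENT.with(|c| c.borrow_mut().pop());
        result
    }
}

fn current_path() -> Vec<&'static str> {
    CURRENT.with(|c| c.borrow().clone())
}

fn main() {
    let server = Span::new("server");
    let conn = Span::new("conn");
    // Inside both closures, the context is server -> conn.
    let path = server.enter(|| conn.enter(|| current_path()));
    assert_eq!(path, vec!["server", "conn"]);
    // After the closures return, we have exited both spans.
    assert!(current_path().is_empty());
}
```

Note that a span can be entered again later: calling `enter` a second time re-pushes the context, which matches the "entered and exited multiple times" description above.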
The other core concept is this idea of an event, and events are much simpler than spans. An event represents a singular moment when something occurred. This is pretty much the same as a log record in more conventional logging. Unlike log records, however, events exist within the context of a trace: an event is located inside of a span and exists within the trace tree, so we can see the context in which it occurred. Similarly to the span macro, we have an event macro.
This one records just a textual message that something happened, and it's at the info level, so it's not extremely verbose, but also not an error or a warning. We inherit basically the same verbosity levels as the log crate, for compatibility reasons; we'll get into that later. So here's an event.
Finally, we have a concept of fields, on both spans and events. Fields are how we record typed, structured data. A field is a key-value pair: it has a name, which is the key, and the name is a Rust identifier. The value is a Rust primitive; we have a subset of primitives that are valid field values. We can also record any fmt::Debug or fmt::Display data as fields, and in the future we might have arbitrary serialization for fields of arbitrary user-defined types. tokio-trace lets us consume these field values as those types, and not just as strings. So let's look at an example. This example comes from the real-world application of yak shaving — see the yak here? Okay, we're going to shave some yaks.
You know, this is a domain model that I think is very widespread; lots of people have to shave yaks. So let's look at how we might shave some yaks. To start out, we create a span. In this span we're shaving yaks, so we call it shaving_yaks, and we annotate it with the number of yaks that we're going to shave.
Anything that occurs within this span is considered to be part of it, part of shaving_yaks. Okay. So then we call this function, shave_yak, and we match on the result, and then we have some events: based on whether that function succeeded or failed, we record one event or a different event. Great. So anything that happens inside of the shave_yak function call is also inside of this span, shave.
The shave span is annotated with the current yak that we're shaving, and because shave is inside of shaving_yaks, everything that happens in this function call is inside of shaving_yaks too. So all of the diagnostics that this function emits record which yak we're shaving, how many yaks we're shaving, and which specific instance of the shave span we were inside of. Because if we're running the same code on n cores, then we have n separate shaving_yaks spans, and we can track which individual context this occurred in.
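The structure of the yak-shaving example can be sketched self-containedly. Instead of the real `span!`/`event!` macros, a thread-local stack of (key, value) fields stands in for the span context, to show how each diagnostic "inherits" the fields of every enclosing span; the field names and the `shave_yak` failure are illustrative, not the talk's actual slide code:

```rust
// Self-contained sketch of nested span fields, shaving_yaks-style.
use std::cell::RefCell;

thread_local! {
    static FIELDS: RefCell<Vec<(&'static str, usize)>> = RefCell::new(Vec::new());
}

// Run `f` with an extra span field in scope, popping it afterwards.
fn in_span<T>(field: (&'static str, usize), f: impl FnOnce() -> T) -> T {
    FIELDS.with(|s| s.borrow_mut().push(field));
    let out = f();
    FIELDS.with(|s| s.borrow_mut().pop());
    out
}

// "Emit an event": snapshot the current context plus a message.
fn event(msg: &str) -> String {
    FIELDS.with(|s| format!("{:?} {}", s.borrow(), msg))
}

// A pretend fallible shave, so both match arms fire.
fn shave_yak(yak: usize) -> Result<(), ()> {
    if yak == 3 { Err(()) } else { Ok(()) }
}

fn shave_all(yaks: usize) -> Vec<String> {
    let mut log = Vec::new();
    // Outer span: shaving_yaks, annotated with the total count.
    in_span(("yaks", yaks), || {
        for yak in 1..=yaks {
            // Inner span: shave, annotated with the current yak.
            in_span(("yak", yak), || {
                match shave_yak(yak) {
                    Ok(()) => log.push(event("shaved")),
                    Err(()) => log.push(event("failed")),
                }
            });
        }
    });
    log
}

fn main() {
    let log = shave_all(3);
    for line in &log {
        println!("{}", line);
    }
    // Every event carries both the outer count and the current yak.
    assert_eq!(log[0], "[(\"yaks\", 3), (\"yak\", 1)] shaved");
    assert_eq!(log[2], "[(\"yaks\", 3), (\"yak\", 3)] failed");
}
```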
Finally, we have a component called a subscriber. There is supposed to be another word on this slide; I believe this is a bug in the presentation software I'm using, because it's supposed to say that subscribers collect trace data, and it just says "collect trace". Sorry about that: technical difficulties, computers don't work. So we have this component called a subscriber, and a subscriber is the component that's responsible for actually collecting the data in a trace. You can think of subscribers as being somewhat similar to loggers in conventional logging systems.
Quite similarly to loggers, this is the extension point where you might implement user-defined behavior. You get to set which subscriber is active at any point in the program, and that subscriber will be the one that collects trace data. It's an interface: we have a trait, Subscriber, that you implement, and now you have a subscriber. So libraries or implementations can provide subscribers with wildly different behavior. You might have a subscriber that just prints traces to standard out, and, as we already saw, there are a million different ways of doing this.
A
Depending
on
how
you
want
to
format
your
traces,
you
might
have
a
subscriber
that
records
metrics
based
on
values.
You
might
have
some
counters
or
you
might
be
doing
some
time-based
profiling,
based
on
actually
time,
stamping
how
long
we
spent
in
given
spans.
You
might
have
a
subscriber
that
sends
events
to
some
external
aggregator
or
some
tracing
system
or
to
Splunk
or
whatever
log
aggregation
system
you
might
be
using.
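The idea of one instrumentation stream with swappable collectors can be sketched with a deliberately tiny stand-in trait. tokio-trace's actual `Subscriber` trait is considerably broader (span IDs, filtering, field recording), so treat the names and shape here as assumptions for illustration:

```rust
// A minimal stand-in for the subscriber idea: the instrumented code is
// identical; only the collector's behavior differs.
trait Subscriber {
    fn enter_span(&mut self, name: &str);
    fn exit_span(&mut self, name: &str);
    fn event(&mut self, message: &str);
}

// One implementation formats with indentation per span depth
// (collected into a Vec instead of stdout so it's testable)...
struct FmtSubscriber {
    depth: usize,
    out: Vec<String>,
}

impl Subscriber for FmtSubscriber {
    fn enter_span(&mut self, name: &str) {
        self.out.push(format!("{}{}:", "  ".repeat(self.depth), name));
        self.depth += 1;
    }
    fn exit_span(&mut self, _name: &str) {
        self.depth -= 1;
    }
    fn event(&mut self, message: &str) {
        self.out.push(format!("{}{}", "  ".repeat(self.depth), message));
    }
}

// ...while another just counts events, metrics-style.
struct CountingSubscriber {
    events: usize,
}

impl Subscriber for CountingSubscriber {
    fn enter_span(&mut self, _: &str) {}
    fn exit_span(&mut self, _: &str) {}
    fn event(&mut self, _: &str) {
        self.events += 1;
    }
}

// The same workload runs against either subscriber.
fn run_workload(s: &mut dyn Subscriber) {
    s.enter_span("server");
    s.enter_span("conn");
    s.event("request received");
    s.exit_span("conn");
    s.exit_span("server");
}

fn main() {
    let mut fmt = FmtSubscriber { depth: 0, out: Vec::new() };
    run_workload(&mut fmt);
    assert_eq!(fmt.out, vec!["server:", "  conn:", "    request received"]);

    let mut count = CountingSubscriber { events: 0 };
    run_workload(&mut count);
    assert_eq!(count.events, 1);
}
```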
So you can have all of these different behaviors, and the subscriber interface is designed to be very broad, so that it's possible to implement all kinds of different behaviors on top of it. So how do we use tokio-trace? We've tried to make it as easy to use, and especially as easy to adopt, as possible. A big part of this is that we want to be compatible with other libraries in the ecosystem that you might already be using, and there are a lot of ways we've maintained this compatibility.
So here are some examples of some of the stuff we can do. It plays nice with futures; in fact, it has been designed to work with futures from the ground up — it's tokio-trace, right? We really want to be able to support debugging asynchronous systems, and futures are the building blocks of asynchronous systems in Rust. So here we have some example code where we have some future, and we're using some combinators to compose some functions onto this future.
We have an and_then that runs if the future is successful, and if both of these are successful, the future completes. We have a map_err that runs if either the future or the and_then block returns an error, and here we're just recording that we're doing something, or that the error happened. Then at the end we have this new combinator, instrument. Sorry — instrument is provided by tokio-trace... it's actually provided by an external crate, built on top of tokio-trace.
The crate is called tokio-trace-futures. Very imaginatively named; I came up with that. tokio-trace-futures gives us this new instrument combinator on futures, and also on streams, and instrument attaches the span to the future. Whenever this future is polled, we enter the span for the duration of the poll: we execute inside the span and then exit it once the poll has ended. This happens as many times as it takes to drive the future to completion.
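What instrument does on each poll can be sketched with a toy future model rather than std::future, to keep it self-contained and small. The `ToyFuture` trait, `NeedsPolls`, and the event log are inventions for this sketch; the real combinator wraps a real future and enters a real span:

```rust
// A toy model of the instrument combinator: the wrapper enters its span
// around every poll, however many polls it takes to finish.
trait ToyFuture {
    type Output;
    fn poll(&mut self) -> Option<Self::Output>; // Some(_) once complete
}

// A future that needs `remaining` pending polls before it completes.
struct NeedsPolls {
    remaining: u32,
}

impl ToyFuture for NeedsPolls {
    type Output = &'static str;
    fn poll(&mut self) -> Option<&'static str> {
        if self.remaining == 0 {
            Some("done")
        } else {
            self.remaining -= 1;
            None
        }
    }
}

// The instrument-style wrapper: a span name attached to a future.
struct Instrumented<F> {
    inner: F,
    span: &'static str,
    log: Vec<String>,
}

impl<F: ToyFuture> ToyFuture for Instrumented<F> {
    type Output = F::Output;
    fn poll(&mut self) -> Option<F::Output> {
        // Enter the span for exactly the duration of this poll.
        self.log.push(format!("enter {}", self.span));
        let result = self.inner.poll();
        self.log.push(format!("exit {}", self.span));
        result
    }
}

fn main() {
    let mut fut = Instrumented {
        inner: NeedsPolls { remaining: 2 },
        span: "request",
        log: Vec::new(),
    };
    // Drive it to completion, like an executor would.
    while fut.poll().is_none() {}
    // Three polls total (two pending, one ready), each wrapped in the span.
    assert_eq!(fut.log.len(), 6);
    assert_eq!(fut.log[0], "enter request");
    assert_eq!(fut.log[5], "exit request");
}
```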
So what that means is, again, everything that happens in any of these combinators, or in the future itself, or in the call to the do_something function, is all inside of that span, and any sub-spans will last for their duration within it. Something else you can do: a lot of you are probably using the log crate, right? It's the pretty standard logging facade in Rust, and it is basic, textual logging; it's not structured logging, yet. So you import the log crate and you use its macros.
You can do something like this: you can say info!, and then you have a formatted message. Great. (Shut up, Adam. I'm going to get to that; I'm going to talk about that later.) So if this compiles, guess what else compiles. What did I do? What just changed? log → tokio_trace, log → tokio_trace, log → tokio_trace. So we have these macros in tokio-trace for instrumentation, and they are a superset of the log crate's macros. Well, some of them are.
We have some other macros too, obviously: log currently has no concept of spans, so our span macro is not a superset of its span macro, because it has none. But we have info, debug, warn, error, and trace macros that support all of the same syntax as the log crate's macros. So if you are using log right now and you want to adopt tokio-trace, what do you have to do? log → tokio_trace, log → tokio_trace, log → tokio_trace.
A
Take
your
trace.
Now,
of
course
you
don't
get
all
of
tokyo
traces
functionality.
If
you
do
this,
but
it
is
the
first
step
and
it
means
that
you
switch
Tokyo
trace
in
place
of
log
and
your
crate
still
builds
so
then
you
can
incrementally
start
to
adopt
some
of
Tokyo
traces
features.
We
can
start
adding
spans
to
our
code.
We
can
start
adding
structured
fields
to
some
of
these
log
messages.
We might replace the foo and the bar in the formatted string with foo and bar fields, but we can do this incrementally; we don't have to break the world and rewrite all of our instrumentation points just to try tokio-trace. Furthermore, we have a lot of other compatibility layers with existing logging infrastructure. We have an adapter that lets you record log messages as trace events, and we have an adapter that lets you record trace events as log messages.
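The log-to-trace direction of that bridging can be sketched self-containedly. The `Log` trait below is a stand-in for the log crate's facade, and `TraceEvents` stands in for "emit a tokio-trace event"; the point is only that the adapter does no output of its own, it forwards each record:

```rust
// A sketch of a log-compatibility adapter: log records become trace events.
trait Log {
    fn log(&mut self, level: &str, target: &str, message: &str);
}

// Stand-in for "emit a tokio-trace event": here we just collect them.
struct TraceEvents {
    events: Vec<String>,
}

impl TraceEvents {
    fn event(&mut self, level: &str, target: &str, message: &str) {
        self.events.push(format!("[{} {}] {}", level, target, message));
    }
}

// The adapter: a logger that turns each log record into a trace event,
// so a dependency using the log facade shows up inside our trace tree.
struct LogToTrace {
    sink: TraceEvents,
}

impl Log for LogToTrace {
    fn log(&mut self, level: &str, target: &str, message: &str) {
        self.sink.event(level, target, message);
    }
}

fn main() {
    let mut logger = LogToTrace { sink: TraceEvents { events: Vec::new() } };
    // A dependency like hyper would reach this through the log facade:
    logger.log("DEBUG", "hyper::proto", "read 128 bytes");
    assert_eq!(logger.sink.events, vec!["[DEBUG hyper::proto] read 128 bytes"]);
}
```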
So if you have some dependency, some library that you use that is using log, you can consume the log records it emits as tokio-trace events in your trace tree. Obviously, the library does not have tracing instrumentation itself, so it's not defining any spans, but its records exist within your spans. I'm actually going to show a demo of this in just a moment.
Similarly, if you want to emit log instrumentation for users of your library, but you also want to use tokio-trace, we have an adapter to go in that direction as well. Finally, I want to take just a moment to talk about performance, because we're Rust programmers here, and performance is important to us. Our performance goal is that you should only pay for what you use.
Any runtime instrumentation has some performance cost, right? If you're collecting any kind of diagnostics, whether it's printf or some extremely fancy metrics or tracing system, you're doing work at runtime that you wouldn't be doing otherwise, so it has a cost. But our goal is to ensure that you don't pay costs that you don't need to pay. What does that mean? The first thing is that we want to make sure that filtering out instrumentation that you don't want to collect doesn't cost anything.
A
It's
nearly
free,
it's
a
single
load,
single
branch,
and
then
we
have
so
under
a
nanosecond,
just
like
the
log
crate
single
branch
to
skip
something
that's
disabled.
Furthermore,
we
can
cache
the
evaluation
of
the
actual
filter
that
determines
if
something
is
disabled,
so
sometimes
you
might
have
a
regex
or
something
that
is
relatively
expensive
to
evaluate
to
determine
if
some
spanner
event
is
enabled-
and
you
want
to
collect
it
in
cases
where
the
filter
will
not
change
the
subscriber.
Has
the
ability
to
tell
us
this
filter
will
not
change.
A
I,
never
want
to
see
this.
We
never
hear
from
it
again.
We
don't
have
to
run
the
regex
every
time
to
get
the
same
result.
Similarly,
if
you
always
want
something,
we
don't
have
to
ask
every
time
to
make
sure
we
always
want
it.
This
is
actually
one
of
the
cases
where
we
outperform
the
log
crate
in
some
of
the
benchmarks.
That
I've
done
is
that
lower
levels
of
verbosity
we
significantly
outperform
the
log
crate,
because
in
some
cases
it
really
eights
filters
where
Tokyo
tray
still
doesn't
have.
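The cached-filter idea can be sketched with a toy memoized check. The real crate's callsite/interest machinery is more involved; this only shows why caching removes the per-call cost once the subscriber promises its verdict will never change:

```rust
// A sketch of caching an expensive enabled() check per callsite.
use std::cell::Cell;

struct CachedFilter {
    // None = not yet evaluated; Some(answer) = cached forever.
    cached: Cell<Option<bool>>,
    evaluations: Cell<u32>,
}

impl CachedFilter {
    fn new() -> Self {
        CachedFilter { cached: Cell::new(None), evaluations: Cell::new(0) }
    }

    // Stand-in for a relatively expensive check, e.g. matching a regex.
    fn expensive_check(&self, target: &str) -> bool {
        self.evaluations.set(self.evaluations.get() + 1);
        target.starts_with("hyper")
    }

    // The hot-path call: a single load and branch when the answer is cached.
    fn enabled(&self, target: &str) -> bool {
        if let Some(answer) = self.cached.get() {
            return answer;
        }
        let answer = self.expensive_check(target);
        // The subscriber has told us this filter will never change, so we
        // may cache the verdict for this callsite permanently.
        self.cached.set(Some(answer));
        answer
    }
}

fn main() {
    let filter = CachedFilter::new();
    for _ in 0..1000 {
        assert!(filter.enabled("hyper::proto"));
    }
    // The expensive check ran once; the other 999 calls were nearly free.
    assert_eq!(filter.evaluations.get(), 1);
}
```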
A
Second
and
I
think
the
more
important
thing
is
that
subscribers
don't
pay
any
costs
by
default.
So
you
have
a
lot
of
wildly
different
use
cases
that
subscriber
implementations
might
fulfill.
You
might
be
doing
some
formatting
like
what
we
saw
before,
where
you
have
these
contexts
that
are
associated
with
your
spans
and
you
want
to
print
those
contexts
every
time
an
event
occurs
in
that
span.
A
If
you're
doing
this,
obviously
you
have
to
do
some
form
of
allocation
to
store
those
formatted
strings
in
right,
but
we
don't
inflict
that
allocation
on
every
subscriber
it's
up
to
the
subscriber
to
choose
to
perform
that
allocation.
If
you
were
writing
directly
to
standard
out-
and
you
are
never
going
to
want
to
write
the
formatted
representation
of
a
span
again,
we
don't
store
it
for
you
by
default.
If you were printing log messages, you might only need to timestamp events, and depending on where your timestamps are coming from, you might have some internal clock, or you might want to make a syscall every time you need a timestamp. Those are all overheads that tokio-trace does not impose on you. You get to choose how you want to generate timestamps, and whether you want timestamps at all; we don't put timestamps on anything.
What we have done is make sure that when you hit an instrumentation point, the subscriber sees it right away: it's a single function call. The time difference is small enough that if you take a timestamp then, your timestamp is fine. That's all we've done; you're responsible for deciding how timestamps are taken. So we've made sure that subscribers don't pay costs that they don't need for their use case. If you want to implement buffering, you can implement buffering. Great. So on and so forth.
So I have a few more demos of some cool stuff that we can do; some of them are pretty cool. I'm going to exit my presentation for a moment and switch back over here to the terminal that I had earlier. Where is my mouse? It is someplace... there we go. Okay, here is my terminal, so we're looking at this hyper...
I just want to be in the terminal where the server is running... I just did that. Okay, so I'm going to kill this guy and make a new one. No, I don't want that. So I'm going to rerun that example, but I'm going to add one thing. Let's say we're running this echo server in production and something goes wrong: our echo server is misbehaving in some way, and we think the bug might be in the protocol implementation. Let's say we want to get all of the logs...
...that hyper has. Remember how I said that we can consume log records from our dependencies as tokio-trace instrumentation, even though our dependencies are not using tokio-trace? I have, again, not modified hyper at all. I have not touched hyper, I promise; I'm pulling it from crates.io just like everyone else. So if I set RUST_LOG=hyper=debug and run the sample server again, it's listening just like before. Now, what happens when I curl localhost on port 3000? Okay, and I'm going to close...
...my second terminal. Look at that. Where did these messages come from? Yeah, they came from hyper. As you can see here, we have different modules in hyper that log these records, and we can see from the indentation that those records are inside of our trace tree. So we can pinpoint when a log line from some dependency happened within the trace tree, within the spans defined by our application.
These are modules, but we have all of the same metadata that the log crate records about all of these spans as well. I'm not printing it here, because I wanted to keep these lines as short as possible, since I wasn't sure how big the demo screen was going to be. Again, this is a special formatter that I wrote just for the demo; it doesn't show all the information that we're recording. I can show you what that looks like in a moment.
And of course, this works for any of my dependencies. That's not what I want... I happen to know that hyper is also using Tokio. So let me just reuse the command I just typed. I'm setting hyper to debug; let's go ahead and set hyper to trace — we just really want to get the firehose here. We want tokio=debug on.
So I've written another library on top of tokio-trace. This is a bit of a yak shave, actually; it's kind of a funny story. It's called tokio-trace-fmt, and tokio-trace-fmt is basically env_logger for tokio-trace. The reason it exists is that I wanted to print spans before targets, and in env_logger you can't put anything before the target.
You can customize the formatting, but the target always comes before anything that you want to put in, and I didn't like that, because I wanted my spans to line up: I wanted to be able to see what spans we were in just based on alignment, and env_logger wouldn't let me do that. So I rewrote it, and we can do filtering with an environment variable, just like env_logger. So I'm going to run a quick example. Actually, I'm just going to run it without any environment variable first.
But this is where it gets really neat. I happen to know that hidden in this example program there is another target: there's not just the default target, which is the module path; there's another logging target called yak_events, which I've hidden in this example specifically for this purpose. So let's say we want to enable some yak events, but we only want to see the yak events that happened inside the shave span. How would we do that?
If you set the environment variable, it'll still work, but now we've added some stuff, and one of the things we've added is this syntax with square braces. In the square braces we put a span name, so we might say shave, which is the name of a span, and now we've set the trace level for that whole span. You might notice that we now have these yak events that we did not have before, and you might also notice that we've filtered out everything that wasn't happening inside the shave span: we're only looking at spans named shave.
We can do that, and we can also have a target filter. So we can go back and say: let's say I only want to see the yak events; I don't care about the other events that are just floating around. So I might say yak_events, and now I'm setting a log directive that is filtering both on a target and on a span. I hit return, and I only get that. Okay, so that's some filtering.
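Put together, the filter directives from this part of the demo look something like the following. The directive syntax is as demonstrated in the talk for tokio-trace-fmt's env filter and may have evolved since, so treat the exact strings as a sketch:

```shell
# Enable all of hyper's log records at the debug level:
RUST_LOG="hyper=debug"

# Enable trace-level events, but only inside spans named `shave`:
RUST_LOG="[shave]=trace"

# Combine a target and a span: only the `yak_events` target, and only
# when it occurs inside a `shave` span:
RUST_LOG="yak_events[shave]=trace"
echo "$RUST_LOG"
```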
Audience question: this will be quick — with the first example, can you make it print the path to the shave spans? Presumably you can get to those shaves by different paths. What do you mean? We filtered it, so you're only printing things that are within the shave span.
So we could also enable all of the parents of the shave span. The filters that I've written for this crate don't support that, but it's definitely something that you could build on top of tokio-trace's filtering API. Okay.
The way we've done it is, we have a logger, and we set that as the logger, and that logger, instead of actually logging, just causes tokio-trace events to occur.
This happens just in the executable. If you use the log crate, you can set a global logger, and in the executable you just set a global logger which is the tokio-trace logger. What that logger does, when it receives log events, is simply emit them as tokio-trace events. It's just a compatibility layer; we've ensured that the APIs are compatible enough that it's easy to have that compatibility layer.
That's a great question. Okay, it doesn't exist yet, but again, one of the big goals of the subscriber API — and, I think, the reason there's some stuff in that interface that surprises some people — is that we want to support that use case. I have some thoughts on how we might do this, but we give you the tools to build it. PRs welcome; otherwise, watch this space, and we might see stuff like that eventually.
To an extent, the main use case that I want to support, and that I've been thinking about the most, is supporting a distributed tracing system where requests are coming in with IDs, so tokio-trace would not be responsible for generating them. tokio-trace also has a concept of a span ID internally, and I think we've determined that you probably would not want to use the distributed system's span...
...ID for your internal IDs. I have done some thinking on how you might correlate an existing span ID in the distributed system with a span ID in tokio-trace. We don't currently have hooks for where you might generate span IDs, but there are places where you could do it quite logically, and you might record them in a number of ways. It's really up to you, and up to the system that you want to interface with.
This is also something where it's important not to enforce one way of doing it. For example, tokio-trace's span IDs are 64 bits, because that's plenty for one address space — but definitely not plenty for a distributed system. I happen to know that all of the popularly used distributed tracing systems have, I think, 128-bit IDs, depending on what the ID identifies. David wants to know if we can share those thoughts; I couldn't see his whole message. I think some of them have been posted on a variety of GitHub issues.
So the subscriber presumably has some local storage that it stores the distributed ID in, and when you see events and emit them back to the distributed system, you use the distributed system's ID. We're working on a downcasting API — oh, sorry — something we're working on, and this isn't in the crate yet but it will be soon, is a downcasting API for the Subscriber trait. So you can get the current subscriber and actually see: is it the OpenCensus one, the Jaeger one, the Zipkin one? And if it is, then you can call methods on it...
...that are not defined in the subscriber interface. Some influence was taken from traits like std::error::Error in Rust: the Error type has a downcasting-based API, and we've actually learned some what-not-to-do lessons from recent changes to that downcasting API. There are problems, but ours should hopefully be usable. We've also wanted to ensure that a subscriber that's composed of multiple sub-subscribers...
A
So if you have a tee subscriber or something, you can expose any of your components, downcast to any of them, and permit callers to invoke methods on your components. The global subscriber is basically a trait object, so the downcasting stuff is not here yet, but we can do it in a future-proof way. Right now all subscribers have to be object-safe, so everything we need to do to implement the fully featured downcasting system doesn't require any breaking changes.
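The sketch below is not the actual tokio-trace API; it's a minimal, hypothetical illustration, using std::any::Any, of how a downcasting hook with a default implementation can be added to an object-safe trait without breaking existing implementors. The names `Subscriber::as_any`, `ZipkinSubscriber`, and `zipkin_endpoint` are all assumptions made up for this example.

```rust
use std::any::Any;

// Minimal stand-in for the subscriber trait; the real trait has many
// more methods. `as_any` has a default body, so adding it later is not
// a breaking change for existing implementations.
trait Subscriber {
    fn record(&self, message: &str);
    fn as_any(&self) -> Option<&dyn Any> {
        None // existing subscribers keep compiling and simply opt out
    }
}

// A concrete subscriber with extra methods not in the trait.
struct ZipkinSubscriber {
    endpoint: String,
}

impl ZipkinSubscriber {
    // Not part of the `Subscriber` interface.
    fn endpoint(&self) -> &str {
        &self.endpoint
    }
}

impl Subscriber for ZipkinSubscriber {
    fn record(&self, _message: &str) {}
    fn as_any(&self) -> Option<&dyn Any> {
        Some(self) // opt into downcasting
    }
}

// Given only a trait object, recover the concrete type and call a
// method the trait doesn't define.
fn zipkin_endpoint(s: &dyn Subscriber) -> Option<&str> {
    s.as_any()?
        .downcast_ref::<ZipkinSubscriber>()
        .map(|z| z.endpoint())
}
```

Because the default `as_any` returns `None`, a composed subscriber (the "tee" mentioned above) could override it to try each of its components in turn.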
A
I have not seen this; this did not happen last time. Okay, here we are, we're back. We've seen all of these slides, we've seen some demos, and I think this dovetails into a lot of the questions I'm getting from folks in the audience. The point is, we are trying to bootstrap an entire ecosystem of libraries here. We just released the core library, tokio-trace-core, on crates.io.
A
Actually, this morning I finally got Carl to push the release button for me, and that's just the beginning of what we can now do. There's a whole lot of neat stuff that we can build using these tools, and I think one of the biggest design goals of Tokio Trace is that it's not a logging system or a tracing system or a metrics system. It's a set of tools for building metrics systems.
A
It's a set of tools for building logging systems; it's a set of tools for building tracing systems. There's really a lot that you can do on top of it, and I'm really excited to see all of the code that people write. I probably haven't even thought of all the things you can do yet, and we've designed all of the API surfaces to be extensible and composable and to permit you to do whatever your use case requires. So how can you get involved? Here are some places to go.
A
That's less stable. All of the leaf crates, the actual subscriber implementations for doing different things that I've written so far, live there, along with some utilities, some compatibility layers, and some neat stuff like procedural macros for the real galaxy-brain folks. So: I love bug reports, I love feature requests, and I especially love pull requests; those are the best. But I'll take what I can get, so please try it out, and if anything doesn't work, let me know.
A
Second, if there's something you want to see in this ecosystem that doesn't exist yet, please share that idea. I would love to see what people write. Even if you don't have the time, or you don't understand all of how something might be implemented, please share the idea and someone will probably pick it up, because odds are someone has the same needs as you.
A
Carl, obviously, is the original author of Tokio, and none of this would have happened without him; he provided a lot of really valuable guidance throughout this process. David Barsky and I had a lot of really great conversations; he helped develop the design of Tokio Trace, and he wrote some proc macros that I was going to show off but didn't (I still can). Ashley is working on an RFC to add structured logging to the log crate.
A
We had several good conversations about how to do that in a way where Tokio Trace's structured logging is compatible with the log crate's structured logging, and he influenced the design of how we actually represent values, which was really valuable. Lucio helped out a lot with the nursery crates, and I think he's here tonight too, so thanks, Lucio. And of course, thank you to everyone who's listening tonight.
A
Thank you to my partner Tristan, who let me try this talk out on them like four times, even though they have no idea what I'm talking about at all. And hopefully, if I give this talk at RustConf, I'll be saying thank you to some of you in the audience right now, too. So, if you have any questions, I'm actually going to sit down and do Q&A again, but before we do that, here is how you can contact me: my email address,
A
my Twitter account. These slides are already posted on my website, or see me after class; I'd love to chat about Tokio Trace, Tokio in general, or Linkerd, which is the other thing I do. I think my employer brought a bunch of shirts, but they might all be gone now; get them while they're still there. We have stickers too. And you might want to take a picture of this slide so that you have all of this information and know where to go to get the rest of the slides.
A
J
F
A
Thanks. I think I just turned this off. Can you hear me now? Okay. I see a question from Ashley, but first I'm going to let everyone in the livestream know right now that when your questions come up, I only see about the first five words of them; they're getting truncated. I asked Adam ahead of time if he would...
D
A
So I asked Adam if he would be a stand-in for everyone who is not with us in the flesh, and he's here now, actually, so why don't you come help me with that? So, Ashley wants to know what the immediate next steps are. The immediate next steps are probably stabilizing the instrumentation API. Right now the core API is stable, but on the tokio-trace proper crate, which is where the macros live, most of the macros are totally stable, though there's a little room for API polish.
A
The goal was specifically to get the core API stable, because we don't want to change it: you need everyone on a compatible version, or else they can't talk. So we tried to minimize the surface area of the code that we don't ever want to change again, because it would be pretty bad if we did. Oh, Adam is here again, okay, great. So for immediate next steps, the instrumentation API should stabilize.
A
We have a Summer of Code project in the works right now, which is what Carl's talking about in the chat: Tokio Console. The idea is to have an out-of-process console that you can connect to a running application that's using Tokio, and consume all kinds of instrumentation data to try and diagnose what's going on at runtime. We're building that using Tokio Trace instrumentation.
A
But it's going to be out of process, so you might be able to get the Tokio console on your machine to talk to some end nodes in a distributed system and see exactly what's happening in the Tokio executor on each of those machines. That's the Summer of Code project that's currently going on. As for other next steps, I think most of the questions are going to be about next steps in some way or another, unless they're about the implementation. Sorry, let's keep answering questions.
H
A
Sure, yes. The subscriber API is really designed around the idea that you might want to support runtime changing of the log level; you might want to support dynamic filtering. So we basically have a two-step process for how subscribers do filtering. There's a register-callsite hook that's called once for every call site, and those are static.
A
So every subscriber sees every call site exactly once, and at that point it gets to say "I want this" or "I never want this", but it also gets to say "I might want this; ask me again when it happens." A call site is the instrumentation point: it's where a span or event lives in the source code. We call that a call site, and the subscriber can say ahead of time:
A
"I don't ever want to see this, never talk to me again," or it can say, "I'm opting into this all the time, always show it to me, I don't want to filter it." But most of the time it's not going to say either of those things, so it has a third option: it can say, "I'm sometimes interested in this." When you say that, your filter will be reevaluated every time we hit that call site, and if you want to support dynamic filtering, that's one way you might do it.
A
Another way you might do it is to replace your subscriber every time its filter changes, which would let you benefit from some of the caching that we're doing. But then switching subscribers would invalidate the registry of the previous subscriber, so it might have a little overhead; you're not going to do that nearly as often as you're going to filter call sites, though.
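The two-step filtering described above can be sketched in plain Rust. This is a simplified, hypothetical model, not the real tokio-trace types: the real crate's `Interest` and `register_callsite` have different signatures, and `Level` here is a made-up stand-in.

```rust
use std::collections::HashMap;

// The three answers a subscriber can give when it first sees a call site.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Interest {
    Never,     // never ask me about this call site again
    Sometimes, // re-evaluate my filter on every hit
    Always,    // always enabled; no per-hit check needed
}

#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
enum Level {
    Debug,
    Info,
    Warn,
}

struct Subscriber {
    max_level: Level,                        // may change at runtime
    registered: HashMap<&'static str, Interest>, // one entry per static call site
}

impl Subscriber {
    // Step one: called exactly once per static call site.
    fn register_callsite(&mut self, name: &'static str) -> Interest {
        // A subscriber that supports runtime level changes can't commit
        // to Always or Never up front, so it opts into re-evaluation.
        let interest = Interest::Sometimes;
        self.registered.insert(name, interest);
        interest
    }

    // Step two: the per-hit check, consulted for `Sometimes` call sites.
    fn enabled(&self, level: Level) -> bool {
        level >= self.max_level
    }
}
```

A subscriber with a fixed, compile-time-style filter could instead return `Always` or `Never` from `register_callsite` and skip the per-hit check entirely, which is where the caching win comes from.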
A
B
Yeah, yeah... and the microphone... so I don't, like, you know, know that much about distributed systems, but one of the things that you mentioned is where you have all these different pieces of software that are all logging, and there's some log aggregators and stuff.
B
A
I think the security concern there is how you would actually be communicating with that aggregator. Obviously, if you want to have encrypted communication with that aggregator, you would have some layer of encryption, and that would live in whatever the actual code for communicating with the aggregator is. Tokio Trace itself doesn't define one method of communicating with an aggregator: it might be over HTTP, it might be over TLS, it might be just writing some bytes into a socket using Henry's homemade encrypted protocol.
A
It might be some blockchain thing that I don't understand, and you can plug any of that into the subscriber interface. Once an event hits the subscriber API, the ball is in your court, so you can do whatever you want with it, and if that means doing some crypto magic, then you can do that; there's nothing stopping you. Did that answer the question? Sorry.
G
Okay, so a thing that I've noticed commonly happens with async programming is that my server is happily running and doing things, and then everything stops: it's gotten into some sort of deadlock. What I really want to do is be able to poke around and find out what its current state is. But if tracing has been disabled, is there still a way I could get a sample of all of the current states of the span stacks and explore the process that way?
A
So actually, what you're describing is basically the use case we have in mind for the Tokio Console project. But again, as I think I mentioned before, the filtering is up to the subscriber implementation. So if you want to permit that kind of use case, your subscriber needs to never flat-out deny all of the instrumentation.
A
If you say you don't want it, you don't want it, so it won't be collected. You might instead still collect some of it without actually outputting it, just recording that it happened; then, when somebody requests "hey, I want to know the entire state of this machine, please dump it," you have something to tell them. Otherwise, you can't really have that.
G
A
Yeah, that was really what I was getting at before: if you say that you don't want to collect any instrumentation, you are actually not collecting it; you can't go back in time and collect it. With that said, you might want to collect a smaller amount of information all the time than what you would collect if something had actually been opted into, just in case you need to respond to some kind of diagnostic request like that. I think that's a totally valid use case, but you would need to...
A
Don't do that, then. For real, though, I would say our structured logging actually helps with this a bit: you're recording typed data, and in the future the data itself will have some more control over how it wants to be recorded. So we have a trait, Value. Right now it's sealed; it might be unsealed soon. You can implement Value for your types, and they get to define how they wish to be recorded.
A
So if you have a type holding sensitive data, you might have a Value implementation that doesn't record the sensitive data, and you might provide that. Even if you're a library, your downstream users get that Value implementation, so they can't accidentally record all of the sensitive data that's stored in your struct or whatever. Right now the trait is sealed because we want to work on getting the API forwards-compatible, but it may be unsealed and available for arbitrary types to implement in the near future.
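The redaction idea can be sketched like this. Note this is not the actual tokio-trace `Value` trait (which is currently sealed and shaped differently); the `Value` trait, `Credentials` type, and its `record` method below are hypothetical illustrations of a type controlling its own recorded representation.

```rust
// Hypothetical trait: a type decides how it is recorded into a field.
trait Value {
    fn record(&self, out: &mut String);
}

struct Credentials {
    username: String,
    password: String, // sensitive: must never reach the logs
}

impl Value for Credentials {
    fn record(&self, out: &mut String) {
        use std::fmt::Write;
        // The type itself decides what is recorded, so downstream users
        // of a library exposing `Credentials` can't accidentally log
        // the password, no matter which subscriber they install.
        let _ = write!(
            out,
            "Credentials {{ username: {:?}, password: <redacted> }}",
            self.username
        );
    }
}
```

Because the implementation travels with the type, the redaction policy is enforced at the library boundary rather than in every application's subscriber configuration.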
K
Quick two-part question: you said earlier that you named it tokio-trace because you wanted to use it in Tokio. Do you mean actually in the library Tokio itself? And that leads to the second question: you explained that you can use the log macros from the normal log crate and consume those in Tokio Trace. Can you do the opposite? If someone is using a regular logger, not a Tokio Trace subscriber, can they still get output if you have a library that's using Tokio Trace?
A
That's something that I would really like to write in the very near future. It's actually kind of the big blocker right now for replacing all of the log instrumentation in Tokio with Tokio Trace: the ability to expose Tokio Trace instrumentation in a backwards-compatible way, as well as consume log instrumentation.
A
We do have an adapter layer that you can use right now that actually logs trace events, but it's something that the application sets up as a Tokio Trace subscriber. In order to have a library actually expose log events, there are some changes to the instrumentation API that are in the pipeline. So that's really the next step to getting Tokio itself instrumented: being able to keep exposing the existing log instrumentation. That's a goal. I'll probably have it done by the end of the week... but that's probably not actually true.
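The adapter direction described above, structured trace events rendered as flat log lines, can be sketched with stand-in types. This is not the real compatibility crate: `Event` and `to_log_line` are simplified, hypothetical shapes invented for this illustration.

```rust
// A simplified stand-in for a structured trace event.
struct Event<'a> {
    target: &'a str,
    level: &'a str,
    // Structured key/value fields, flattened into the log message.
    fields: &'a [(&'a str, &'a str)],
}

// Render an event the way a classic logger would print it, so that an
// application still using a log-style backend sees familiar output.
fn to_log_line(event: &Event) -> String {
    let mut line = format!("{} {}:", event.level, event.target);
    for (key, value) in event.fields {
        line.push_str(&format!(" {}={}", key, value));
    }
    line
}
```

The real adapter does this in reverse order of responsibility: a subscriber receives the structured event from tokio-trace and hands the flattened message to the log backend.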
A
Ashley wants to know what that change entails. It's really just some changes to the macros in the tokio-trace crate, so that we have a feature flag, and if that feature flag is set, we also expose log instrumentation. It might end up being an environment variable instead, but we'll see. So, right.
K
D
A
Could be, if you're collecting it; that's basically the Tokio Trace motto. Did I... can you hear me? Carl says you should use Loom, and actually that's an excellent suggestion. If you haven't heard of Loom, Loom is a concurrency model checker in Rust that Carl has written, and it's already found some bugs in the runtime. It's a permutation fuzzer... I'm being informed it runs all permutations, yeah.
A
Okay, so you have a situation where... yeah, so the Tokio Trace motto applies: it's a subscriber problem. If your subscriber records sufficient information based on the primitives that are exposed to it, you definitely could do that, but it depends on the implementation. If you're recording metrics, say you just have some counters, you obviously can't play back all of the events that have occurred from those counters. But yeah, that's definitely something we could implement. Oh.
J
L
If you have a future, a Tokio future, that says "not ready," and sometime later it comes back ready, Tokio is going to call back through the same call chain, the same span stack. Do you correlate those, the "not ready" one and the "ready" one? Is it the same span, or are they two different spans?
A
It depends on where you put the span macro. If you have a poll implementation where, every time poll is called... well, this is why we have instrument: it's really not sufficient just to put a span macro in your poll implementation. You want to have the same span entered every time the future is polled, which is what the tokio-trace-futures crate does, and which is why we have that instrument combinator.
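The instrument combinator's behavior can be sketched with simplified stand-ins. These are not the real tokio-trace-futures types: `SimpleFuture`, `Span`, and `Instrumented` below are hypothetical, but they show the key property, which is that the same span is entered on every poll and closed only when the future completes.

```rust
// Simplified poll result and future trait, standing in for futures 0.1.
enum Poll<T> {
    Ready(T),
    NotReady,
}

trait SimpleFuture {
    type Output;
    fn poll(&mut self) -> Poll<Self::Output>;
}

// A toy span that just counts how often it was entered.
struct Span {
    enter_count: usize,
    closed: bool,
}

impl Span {
    fn new() -> Self {
        Span { enter_count: 0, closed: false }
    }
}

// The combinator: wraps a future together with one span.
struct Instrumented<F> {
    inner: F,
    span: Span,
}

impl<F: SimpleFuture> SimpleFuture for Instrumented<F> {
    type Output = F::Output;
    fn poll(&mut self) -> Poll<F::Output> {
        // Enter the *same* span on every poll...
        self.span.enter_count += 1;
        let result = self.inner.poll();
        // ...and close it only once the future has completed.
        if matches!(result, Poll::Ready(_)) {
            self.span.closed = true;
        }
        result
    }
}

// A future that becomes ready on its third poll.
struct CountDown(u32);

impl SimpleFuture for CountDown {
    type Output = u32;
    fn poll(&mut self) -> Poll<u32> {
        if self.0 == 0 {
            Poll::Ready(0)
        } else {
            self.0 -= 1;
            Poll::NotReady
        }
    }
}
```

So the "not ready" polls and the final "ready" poll all land inside one span, which represents the logical lifetime of the future rather than any single poll.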
A
If it weren't necessary to preserve the span, then you could very easily just have your future implementation put a span in there. With that said, there's a use case for both. It's sort of like a question David asked me a while ago, when I had first introduced him to the concept of Tokio Trace's spans.
A
He asked me: if I have a for loop, should I put a span around the for loop, or should I put a span in every iteration of the for loop? And the answer is yes. The answer is that it depends on what you are looking at. Do you want to look at the length of time that you spent in that loop, or the length of time that was spent on each iteration of the loop? Those are separate questions.
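The two placements answer different questions, and a plain-std timing analogy makes that concrete. This is not tokio-trace code; `measure_loop` and `expensive_work` are made-up names, with `Instant` timers standing in for spans.

```rust
use std::time::{Duration, Instant};

// Outer "span": the whole loop. Inner "span": one iteration.
// The outer measurement covers all inner ones plus loop overhead.
fn measure_loop(items: &[u64]) -> (Duration, Vec<Duration>) {
    let whole_loop = Instant::now(); // span *around* the for loop
    let mut per_iteration = Vec::new();
    for item in items {
        let iteration = Instant::now(); // span *inside* the for loop
        expensive_work(*item);
        per_iteration.push(iteration.elapsed());
    }
    (whole_loop.elapsed(), per_iteration)
}

fn expensive_work(_item: u64) {
    // stand-in for real work
}
```

With real spans the trade-off is the same: the outer span tells you the total cost of the loop, while per-iteration spans let you spot the one slow pass.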
A
You can put the span macro where you want it and get what you want back. But yeah, the instrument combinator enters the span every time we poll the future, because you want to think about the logical amount of time that we spent in this future while we were driving it to completion, and we only want to close that span once the future has completed. We have some more questions; I think we have at least one more hand raised, so I'm happy to do that question as well.
M
A
None of the futures compatibility is in tokio-trace core; that all lives in external libraries. tokio-trace-futures is the name of the library; I probably should have called it tokio-trace-futures-0.1, because the current implementation was written against futures 0.1. But there's absolutely nothing preventing it from being used with futures 0.2 or futures 0.3, except that it imports the Future type from futures 0.1 right now.
A
The futures compat layer is really quite simple, so porting it to futures 0.3 is probably going to be very easy. I haven't done it yet because futures 0.3 wasn't stable when I wrote the compat layer, but now that the RFC has been merged, it should be quite easy, probably over a weekend, to have a futures 0.3 version of the compat crate as well. Does that answer the question? Cool, yeah.
C
A
My ultimate goal is total world domination. No... I'm glad you asked that question, because it's something I have really wanted to make sure everyone knows: the goal of Tokio Trace is not to compete with or replace the existing logging libraries. It's a part of that ecosystem. There are use cases where you need all of the extra diagnostics that Tokio Trace provides you with.
A
There are use cases where you don't, and for those use cases you could use Tokio Trace, but you don't really need to; log works great in those use cases. The reason that we have a...
A
K
A
Of all... yeah, I want to say thanks to slog, because I looked at slog a lot and it was a big part of the inspiration for Tokio Trace. There isn't compatibility with slog yet; it should be possible, I just haven't written it, since I use log at work rather than slog. I would like to have compatibility with slog, and slog was a big influence on Tokio Trace's design.
A
The pain point I have had working with slog in the past is that it requires explicitly passing context around, which is hard to do in futures-based code, so we don't do that. But you could probably wire Tokio Trace's field API into slog's field API pretty easily and get nice structured slog logs. I just haven't written it yet; somebody else should. I don't wanna... DPC should do it. All right, are there any more questions now? Okay, my...