From YouTube: OpenTracing Monthly Meeting - 2018-05-11
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
B: Yeah, so Loïc and I have been communicating over email for the last couple of weeks or so. He reached out to ask questions around combining distributed traces and that sort of thing with kernel tracing information, and we met up in Copenhagen at KubeCon, I guess last week, and had a bunch of great conversations there. So I invited him to present here. I think the work he's doing is really interesting, and it combines different disciplines of tracing, which is always fun.
B: He's a student at Polytechnique Montréal who knows a lot about the kernel side of things and also the distributed tracing side of things. So I'm really excited to hear the talk. And I should mention he has a hard stop at the hour, so we should all be disciplined about asking questions quickly and so on. But take it away, Loïc.
D: Okay. You can think of kernel tracing as a false friend of the word "tracing": if I tell you that I'm doing tracing, probably all of you will assume I'm doing something related to OpenTracing, but it's actually a bit different. Kernel tracing is an efficient solution if you want to analyze how your systems behave, and it's not entirely new: we have different tracers available for Linux, so probably some of you are familiar with ftrace, or even eBPF or SystemTap.
D: We don't want a big overhead when we collect all these events from the kernel. The very reason is that if we want to reproduce some tricky bugs, we don't want the system behaving differently because we're tracing; this is called the observer effect in physics, but it's the same kind of thing in computer science. There's also a focus on the fact that we can do offline analysis using the trace we just collected.
D: So, taking you through the common workflow of kernel tracing: the first thing people do is insert static tracepoints into the kernel. That's already been done; we have a lot of tracepoints available. We can also do that in user-space applications, but my focus today is on kernel tracing.
D: Then we choose which tracepoints we want to activate at runtime, so we use our preferred tracer to say: I'm going to collect all the sched switches, all the system call entries and exits, or the interrupts. Then the tracer collects the events: each time your kernel hits one of the selected tracepoints, it emits an event, and these events are either written into a file or sent over the network. These trace files can then be analyzed by specialized analysis tools, such as Trace Compass.
D: Okay, so that was for the facts. Now I'd like to analyze what I call a tricky bug with you, so you can see what kernel tracing can do in terms of debugging applications or understanding what goes on in your systems. Here's the situation; I'm going to demonstrate all of it in a moment, but I prefer to explain it first. We have three processes, and two of them, LowPrio1 and HighPrio1, are running on the same CPU; you can see the priorities on the left.
D
By
the
way
we
have
a
third
process
that
is
scheduled
on
a
CPU
zero
hype
rozier.
Oh,
there
is
a
shared
resource
that
both
low
prior
one
and
high
priority
row
want
to
lock
and
you
have
the
control
third
on
the
left,
so
low
prior
one
starts
first,
okay,
then
high
Pro
one
starts
which
means
that
low,
probably
one
gets
preempted
because
it
has
lower
priority
and
then
high
probably
won
high
praise
ero
get
scheduled
in,
and
at
this
moment
it
requests
the
lock.
Ok
and
the
question
is
what
happens?
Next.
D: Is HighPrio0 able to take the resource that is currently held by LowPrio1? The second situation is roughly the same, but in this case HighPrio0 has a lower priority than HighPrio1. So we want to know what happens next, and if we reason about it, we expect that HighPrio0 can get the shared resource back, because it has a higher priority than LowPrio1, which currently holds the resource.
D: So logically, it should be able to get the resource back. Let's find out; there's nothing more important than practice. We have this little example that's been coded, so trust me, it works just like I told you, but if you don't trust me, I can obviously publish the code for it. I hope all of you can see the terminal; I can increase the font size if you want. If someone can't see, you can just shout and I'll try to adjust.
D: Okay, no shouting! So it's been freshly rebuilt. Then I've got a bunch of lines here to initiate the tracing session, which basically means, like I told you, I need to activate the events that are going to be interesting for the analysis; here I'm taking all the kernel events. Then I started tracing, executed my little program, and stopped the tracing right after that.
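For reference, a minimal sketch of what such a session might look like with the LTTng command-line client; the session and program names here are illustrative, not the exact commands from the demo:

```sh
lttng create demo-session           # create a new tracing session
lttng enable-event --kernel --all   # activate every kernel tracepoint
lttng start                         # begin collecting events
./priority-demo                     # run the program under study (illustrative name)
lttng stop                          # stop collecting
lttng destroy                       # tear down the session; trace files remain on disk
```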
D: You can see all of that happening, and if I change directory to the parent one, I can see a directory, created for the trace, that hosts the files written by the tracer. If I want to be convinced, I can use something called Babeltrace, which just dumps all the events to the terminal. So this is what a kernel trace looks like; I know it's not appealing.
D: We have the timestamps on the left, the name of the event here, and then a bunch of key-value pairs, which you could actually relate to OpenTracing tags. So I know, for example, that each of the events happened on a specific CPU, and I have its identifier. Now, if I show you the code: in my main function, almost everything is commented out except for these two runs. The first thing I ran was the first example that I showed you.
D: That's the one where the process HighPrio0 has a higher priority than all of the other processes; and then there's the other case, where HighPrio0 has a higher priority than LowPrio1 but a lower priority than HighPrio1. So this has been run, but if I want to understand what goes on, I'm not going to use Babeltrace, because it's just a dump of the events. What I'm going to use is a trace analyzer.
D: So here is Trace Compass. I already loaded the trace into it and ran the analysis; it just takes a few seconds. You can see that here I have what is called the Control Flow view. It looks very much like what you can find in OpenTracing traces, you know, with spans being just the states of the different processes that you have in the tree on the left side.
D: And then this is my second run, the other situation, where HighPrio0 has a priority in between the other processes'. The colors here tell you that when it's green, the process is running in user space, so it's actually executing code; when it's yellow, it's waiting for something to happen, so it's blocked, either for a mutex to be released or for I/O to be available or something; and when it's orange, it's waiting for CPU.
D: This is where things get really interesting. Here, as I said, we have LowPrio1 executing first, and then HighPrio1 starts; let me go back a bit. This is why HighPrio1 executes first: it preempts the process LowPrio1. But then HighPrio0 comes in and requests the shared resource using that system call, and then HighPrio0 has to wait for the resource to be released.
D: HighPrio1 gets preempted by LowPrio1, even though HighPrio1 has a higher priority than LowPrio1. The reason is that HighPrio0 requested the shared resource, so temporarily LowPrio1 gets the same priority as HighPrio0, which makes it able to preempt HighPrio1. The Linux kernel offers that feature because LowPrio1 has to finish its work in order to release the mutex really fast, so that HighPrio0 can resume its execution. That's what happens here.
D: LowPrio1 releases the mutex, and then HighPrio0 resumes execution. So this is what we expected: when HighPrio0 comes in, the shared resource has to be given back to it, so HighPrio1 gets preempted, LowPrio1 finishes its execution, and everything is fine. However, this is not what happens here.
D: By the way, the back end is decoupled from the UI, which means the analysis can be run from basically anywhere, and you can use any kind of UI to analyze the traces; we have that back end available for everyone. So here we have the same thing: HighPrio0 tries to get the shared resource, so it makes that system call, and then it blocks. But this time HighPrio1 doesn't get preempted by LowPrio1, because LowPrio1 only temporarily gets the same priority as HighPrio0, which is lower than HighPrio1's.
D: Here is the explanation. In the first situation, we had the behavior where LowPrio1 temporarily gets the same priority as HighPrio0 in order to release the shared resource as fast as possible. So LowPrio1 preempts HighPrio1, finishes its execution, releases the lock, and then gets preempted by HighPrio1. That's not a problem, because at this moment HighPrio0 can resume its execution, since it finally has the lock. In the second situation, the boosted priority is still lower than HighPrio1's, so LowPrio1 cannot preempt it, and HighPrio0 ends up waiting much longer.
D: So we have two distinct behaviors, where we thought we should have just one, and this is actually a big advantage of kernel tracing: you can just run tracing, see how your application behaves, and then deduce things that you cannot deduce otherwise, because sometimes the behavior of your application doesn't meet your expectations.
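The priority boosting described above is opt-in for user-space locks on Linux. Here is a minimal sketch, assuming POSIX threads, of how the shared mutex in a demo like this might be created; the names are illustrative, not the demo's actual code:

```c
#include <pthread.h>

/* Illustrative shared lock for a priority-inheritance experiment.
 * With PTHREAD_PRIO_INHERIT, when a higher-priority thread blocks on
 * the mutex, the kernel temporarily boosts the current holder to that
 * thread's priority until it unlocks; with the default protocol, no
 * boosting happens and the high-priority thread can stall. */
static pthread_mutex_t shared_lock;

static int init_shared_lock(void)
{
    pthread_mutexattr_t attr;
    int err = pthread_mutexattr_init(&attr);
    if (err != 0)
        return err;

    err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (err == 0)
        err = pthread_mutex_init(&shared_lock, &attr);

    pthread_mutexattr_destroy(&attr);
    return err;
}

int main(void)
{
    if (init_shared_lock() != 0)
        return 1;
    pthread_mutex_lock(&shared_lock);   /* critical section would go here */
    pthread_mutex_unlock(&shared_lock);
    return 0;
}
```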
D: Now we can analyze what's currently missing in OpenTracing. I hope it's clear to you by now that kernel tracing can provide fine-grained analysis, while OpenTracing focuses on the tasks that are being processed as part of a transaction. So with OpenTracing you can actually detect a few design issues that your application has.
D: For example, if you have a long diagonal of spans, maybe it means you didn't try hard enough to parallelize your application, so that might be a design issue. But in the case at the bottom of the slide, we have a task that is lengthy, and it could be long for several reasons: it could be long because it waits for CPU, it could be long because it waits for a shared resource, or just because that task is supposed to be longer than the other ones.
D: Usually the culprit is a mutex, but it can also be the CPU that's contended. We'd also like to deal with other bottlenecks, for example ones that come from the interaction between your transactions and the actual machines on which they run, and we'd also like to be able to perform an analysis of multiple transactions at the same time, if that makes sense, because transactions are not independent: they can fight over some mutexes; they have interactions.
D: So we have to understand that in order to understand why a given transaction is currently blocked, and we'd also like to understand how our transactions interact with the host system. Basically, what we want is the best of both worlds: the view that OpenTracing provides, in terms of aggregating all the information and presenting it as a single trace, as a logical transaction really, plus a single extra layer that comes from kernel analysis and is supposed to tell us what is going on underneath.
D: Another thing is that the kernel is actually able to understand how your threads get preempted, and it describes everything as being part of threads, but OpenTracing focuses much more on tasks, and tasks can be executed not on threads but on things like goroutines, which is another kind of abstraction. So we have to take that into account as well.
D: We also have to find a way to synchronize the traces, because OpenTracing doesn't have the same timestamp precision as LTTng; since it's focused on causality, there isn't the same need for precision. So we need to synchronize the traces in order to make joint analysis possible afterwards. The other thing is that we'd like to keep the same workflow we have when just monitoring a system: what we usually do is have some kind of dashboard that is able to tell us, "hey, something is wrong here."
D: So we'd like to keep the overhead as low as possible for this to be a really useful tool. Okay, so that's it for my presentation. I tried to make sure to leave time for discussion, questions, and ideas, but if you want to reach out, you can just ping me on Gmail and I'd be happy to discuss with you. So thank you.
D: So if you want to do that, you need to propagate that information from the OpenTracing traces into the kernel, and the only way you can pass information from user space to the kernel is by executing a syscall; there's no other way of doing it. It basically means that each time you want to create a span, you need to transfer control to the kernel with a system call, and usually you want to avoid that overhead, because it involves a lot of context switching each time you create a span.
D: If we can do that just in user space instead, by finding a neat way of synchronizing the traces, it can lead to less overhead. But on the other hand, the syscall approach is totally feasible: there was a paper by Google recently saying they did it with fake syscalls. They just take a harmless syscall, like gettid, and pass the span ID as part of the arguments.
D: So of course the system call fails, because the arguments are not consistent, but at least in your kernel trace you have one event corresponding to "a span has been created," and it's on that physical CPU, which means it was triggered by that specific process, which is what you want. I think we should try to avoid that, but it can be a good idea to start with, because it's probably the easiest thing to do, and I know that Raja has students working on that.
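A minimal sketch of that fake-syscall trick in C; the choice of syscall and the option value here are illustrative assumptions, not necessarily what the paper used:

```c
#include <stdint.h>
#include <sys/prctl.h>

/* Illustrative span marker: prctl() with a bogus option fails fast with
 * EINVAL, but the kernel's syscall-entry tracepoint still records the
 * arguments, so the span ID lands in the kernel trace on the CPU and
 * process that created the span. */
static void mark_span_start(uint64_t span_id)
{
    prctl(0x53504e53 /* arbitrary invalid option */, (unsigned long)span_id, 0, 0, 0);
}

int main(void)
{
    mark_span_start(42); /* hypothetical span ID */
    return 0;
}
```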
A: Well, I have a question that you don't need to fully answer, but I was just curious, you know, playing devil's advocate: how much does getting the timing to line up precisely actually matter? Is there a good-enough form of this where, if you avoid syscalls, the timing roughly lines up, and that would still be sufficient in the majority of cases to diagnose what's going on?
A: Like, if your OpenTracing implementation, say your scope manager, which is the part of OpenTracing that knows when contexts are being switched around, is recording something out in user land at some level of granularity that's presumably coarser than what the kernel is doing, you can still kind of staple them back together out of band. It wouldn't be as precise, but is that good enough for most use cases?
D: I don't think it is. If I take you back to Trace Compass, to give you a sense of how many things can happen in just a few microseconds down in the kernel: this system call, for example, takes roughly ten microseconds, and you can have shorter ones than that. So basically, in just a few microseconds you can have several sched switches.
D: So if your OpenTracing trace tells you that a span was created at a timestamp that is only precise to the millisecond, and then you have the kernel trace, you're not going to be able to say precisely that the span was created while that process was running on the CPU, because you could have a sched switch just right before or right after that.
D: That would basically make the analysis invalid. So I think we should have some kind of explicit synchronization between the traces. As Ben said, we could do that through the kernel, but it can also be done by instrumenting OpenTracing tracers using another kind of tracer, like LTTng. This is something I did just as an experiment: I instrumented Jaeger so as to have information in LTTng traces about when a span is created and when the span is stopped.
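A minimal sketch of what such instrumentation might look like from C, assuming LTTng-UST's tracef() helper; the hook names are illustrative, and the actual Jaeger experiment may well have used full tracepoint definitions instead:

```c
#include <inttypes.h>
#include <stdint.h>
#include <lttng/tracef.h>   /* link with -llttng-ust */

/* Hypothetical hooks a tracer could call on span lifecycle events.
 * tracef() emits an lttng_ust_tracef:event into the user-space trace,
 * which can then be correlated with kernel events collected alongside. */
void on_span_start(const char *operation, uint64_t span_id)
{
    tracef("span_start: op=%s id=%" PRIu64, operation, span_id);
}

void on_span_finish(uint64_t span_id)
{
    tracef("span_finish: id=%" PRIu64, span_id);
}
```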
B: I know Loïc had a hard stop, so I didn't want to just add a comment and waste his time with it, but one thing we talked about in person in Copenhagen is where a lot of the kernel tracing ends up. I think you can think of it as a way to decorate traces with a lot of extra detail, which is fine, and I think that's all well and good, but a lot of the most powerful applications of it have to do with...
B: ...having, you know, a credible way to do an analysis of how different transactions interact with each other, because the kernel is the best place to see those sorts of contention situations; that's some of what his examples focused on. So in my mind, part of the power of that stuff, and this is just a general comment, is about using the kernel as a way to understand interactions between transactions without having to do any kind of special instrumentation in the source code. And if you can find a way to make that cheap, so that you can see that these two different transactions depend on the same file descriptor or mutex or whatever, that's really profound, I mean an incredibly powerful thing. I have no idea about the actual overhead of doing that, but that's the thing you'd get that you couldn't get before, more than anything else.
A: One other comment: I can't help but think this is an issue that really requires, well, maybe doesn't require, but it seems like a language-level thing, right? If you're operating in a language that doesn't think about this, and then on top of that you're doing some kind of user-level context switching, it's going to be really difficult, on top of that whole sandwich, to come back in and efficiently staple all this stuff together.
B: I have to say, I appreciated that he was talking about OpenTracing, but to do this properly right now, you kind of need to hard-code some understanding of the in-memory representations of things; even if we decide to add a bunch of accessors to span context or something like that, you're going to have to get pretty down and dirty to make this stuff work. That's, I mean, that's why his repo is called Jaeger, etc., which I think is totally fine.
B: But, I mean, I don't know what other people think, but I don't really see this as being an OpenTracing project; I think this is a tracing project. I don't think of it as something that benefits from a shared instrumentation library so much as, you know... I think you need to understand in-memory representations and things like that. Yeah.
A: The part that is hopefully getting to a sort of v1, a testable v1, not a final v1, is the propagation headers, specifically traceparent and tracestate, which are everything you would need to glue multiple different tracing systems together and be able to correlate them, so that you could propagate a trace from one tracing system into another, and then on the back end hopefully be able to export data from one of those systems into the other, so you can get a complete trace.
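The on-the-wire format was still being finalized at the time; as later standardized in W3C Trace Context, a traceparent header carries a version, a trace ID, a parent span ID, and flags, for example (values are illustrative):

```
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```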
A
So
that's
an
issue
that
it
would
be
nice
to
massage
over
and
go
from
end
to
end
to
one
to
one,
and
maybe
even
more
importantly,
is
if
you
kind
of
define
some
kind
of
trace
data
format.
Could
we
use
that
as
a
vehicle
for
moving
forwards,
with
a
more
semantic
definition
of
the
content
of
that
data,
so
basically
kind
of
the
work
on
standardizing
tags
and
open
tracing?
Could
we
do
that
work
in
a
slightly
broader
fashion?
A: The main takeaway there was to do a review of existing trace data formats and a compare-and-contrast of what's currently out there, to see if some easy subset emerges from that, and maybe use that as a basis for going forward. So that's the next step on that project. In general, there was some discussion, and I had this feeling, though I wasn't quite there for the end, and I understand other people had this feeling too, that we should really be meeting more frequently, in more focused working-group sessions.
A: This was called the trace context working group, but really it was a more general distributed tracing meeting, and would it be better if we just had more frequent meetings focused on solving specific problems in trace context, so that we could get it over the finish line? Hopefully that will occur, and I do wonder something similar about OpenTracing as well.
F: There was a meeting at KubeCon for the CloudEvents working group; that's related to serverless, and they're also thinking about implementing some sort of context, very similar to trace context. I think they might also join, or at least take a look at the spec and see if there's something there they need that isn't covered yet; and if they don't talk to you about that aspect, it's probably a good idea to talk to them.
A: Great. I'll try to find links to the other talks we gave. One of them was just a sort of Q&A session that I think was helpful, and that's another format I think is useful. I don't know if we'd get scheduled Zooms, but maybe more like office hours or something; people have questions, and they often find it easier to ask them that way.
A: In Node.js we did this sort of office-hours thing at a certain inflection point where there was a big influx of new users, and that was kind of helpful: just letting people know when they could log in together to a Zoom meeting, and core members of the project would be available to answer questions. That might be a good action item to start soon.
B: Yeah, just in terms of the conference in general: I think I was supposed to give one talk and ended up sort of giving two and a half, because Donald Trump was about to leave the country, so I was filling in for some of her stuff. But it was good; I felt there was a really nice reception. I tried to give a talk that isn't really even about OpenTracing, a similar talk to the type of things in Erica's post, about just trying to, you know, untangle OpenTracing and the open-source tracing systems in general, and that was really well received; I think people got a lot out of it. I think we should continue to do that and clarify the positioning of the various projects in the space. Both the intro and expert sessions on OpenTracing were well attended, and there's a lot of interest in everything.

In terms of just being out on the show floor: KubeCon is pretty biased towards people who understand CNCF stuff, so it's not like this is a random sample of the population walking around Copenhagen, but it was pretty obvious that they all understood what tracing was, at least at a basic level, and a lot of them are actually applying it within their organizations: companies like Uber and so forth, but also people from MasterCard and giant German banks and things like that. So it was interesting to see that kind of proliferation.
B: Not really, although I did think about this for the next one of these; I don't know how to accomplish it, but we had these salons before, and they gave us these half-hour slots. I really would have preferred to have had the thing that you ran, which was basically a Q&A with a quick 15-minute presentation ahead of time; I would have preferred to have done a really long Q&A.
B: You know, it would have been great to have a one-hour Q&A or something like that. I wonder if we can make that happen, but I felt that Q&A session was maybe more valuable than any of the presentations, because the questions that came from the audience were very good, and it just felt more interesting to me. So I'd like to try to get the CNCF people to give us that sort of setup.
A
But
yeah
that's
this
for
us
to
bother
the
the
CNC
F
about.
We
also
sometimes
try
to
run
workshops
there
and
night
I
wonder
if,
like
we
should
focus
more
on
these
like
Q&A
sessions,
because
the
workshops
really
do
seem
to
conflict
with
the
kind
of
you
know
thing.
These
conferences
tend
to
want
to
provide
space
for
and
time
for,
not
to
mention
conference
Wi-Fi,
and
you
know
tutorials
or
bad
combo.
A: The person who has been working on it for us is the same person who redid the Jaeger site, so that's very exciting. I'll send an announcement out once it's in a state where people can start pushing documentation by making PRs against a branch on the opentracing.io GitHub repo. So that's almost there; it'll hopefully land next week.
A
So
that's
a
thing
that's
coming
and
to
then
help
kind
of
push
everything
out
the
door
you'd
like
to
do
a
docking
thawne.
So
a
pull
is
gonna.
Go
out
today
around
potential
times
for
having
this
so
be
on
the
lookout
for
that,
but
we're
thinking
in
the
month
of
June
we'll
do
this
and
the
idea
behind
a
docking
on
is
we
you
know
want
to
get
enough
rough
stuff
out
there
that
there's
sort
of
a
trellis
that
you
know.
A: ...we can grow the rest of the documentation on, and then have a big push to get everyone around at the same time: all the experts, all the people who work on the different languages, combined with people who know how to write docs and edit and clean them up, and see if we can do a sort of big, day-long push to clean things up and get everything out the door. So that'll be happening in June. People who want to get involved in that: we're organizing it, please...