From YouTube: OpenTracing Monthly Meeting - 2018-04-06
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A: Great, that looks like a good quorum of people. Hello everyone, welcome to Friday, welcome to the OTSC call — great to see all your lovely faces. We have a pretty cool call today: we've got the Couchbase team here in force, and Mike and Michael are going to present the great work they've been doing integrating OpenTracing directly into their libraries and products.
D: All right, I think I can just share my screen. Okay — you can see it? Perfect. So thanks everyone for joining, and everyone who is watching the recording in the future. A slight correction: I'll be presenting this alone. Mike had his hands full with other stuff, but he is around for any .NET and other related questions.
D: The theme will be tracing timeouts, then what we've done with adopting OpenTracing — the good and the challenging pieces — then a live demo, then a call to action, and then we can do some questions. All right, so what is Couchbase, for those who haven't heard of it before? Basically, Couchbase is a distributed, document-oriented database focused on scalability and performance.
D: So that's the one-sentence version of what Couchbase is, and it has all kinds of properties like auto-sharding and a flexible data model, but for the purpose of today's call, what's important is that it has a memory-first architecture and everything inside the database itself is done asynchronously.
D: Now we are adding analytics, so you can see there are more and more components being added to the system — which, you can immediately imagine, is not going to make it easier to troubleshoot performance issues. But just so you get an idea, this is where we are coming from: basically from being a distributed, managed cache, and then adding functionality towards document-oriented workloads, analytics, full-text search and so forth. And one thing to call out here is that one interesting property of Couchbase is that it supports what we call multi-dimensional scaling.
D: So you can basically enable every kind of service — be it key-value, be it full-text search, be it querying — on every node in the cluster, but you can also choose to run only individual services on each node. So you can say: okay, in a 50-node cluster, on two nodes run the query service, on some others do the indexing, and then I have a couple of other nodes where I store the data with the KV service. And the important piece here is that on the client side, our SDKs are actually intelligent.
D: They don't just take your data and dump it onto a remote socket, which is mostly what happens with relational databases; the SDK plays an integral role as part of the distributed system. They get cluster information in near-real-time — basically an up-to-date view of the topology — and when a request comes in they decide where to dispatch it, including handling certain retry scenarios when you are rebalancing the cluster. Rebalancing means you can add and remove nodes on the fly without downtime.
D: So that's another challenge that the SDKs handle: making sure the data gets to the right places at every point in time without user disruption. And here's an example of how we do write operations. Say you create a document in your, let's say, Java application and you call the upsert method — it is serialized over the wire to the server and then it lands in the managed cache.
D: Once it's there, the managed cache will basically acknowledge the write to the application itself, and then it will asynchronously send the operation to the replication queue, asynchronously to disk, and asynchronously to the secondary indexing engine. So all of these steps happen asynchronously, and obviously they can also hop across the cluster: the replication queue will eventually send the operations to the replicas on other machines. So even if you just perform a single operation, from an SDK point of view many different spots in the distributed system are actually affected.
D: And when we come to the point where something is slow, we need to figure out which places in the distributed system had to be touched and where the slowness comes from. The other operations are similar, so I didn't draw them out, but if you have questions on specific functionality of Couchbase, just let me know and we can cover that later. So, with that basic knowledge in mind of how Couchbase works and operates: why is this hard?
D: What's the big challenge? The thing we've come across over the years — I've been with Couchbase for many years now, handling our support escalations — is that the biggest challenge users and customers are running into is timeouts. And everyone who has looked at timeouts and worked with them has tried to figure them out.
D: The problem is with the timeout exception itself. It tells you: well, something was slower than expected — the deadline you gave it as a timeout value. It took longer than this timeout, this deadline, but it doesn't tell you exactly what went wrong, what was slower, and so it's very hard to troubleshoot. The next step, most of the time, is to go look at the logs and figure things out.
D: If you can see something there, you go fetch information from the server to see if something is slow on that side, and so forth. So it's a very iterative, exploratory process, but also sometimes very time-consuming, making it slower to get from "something went wrong" to detecting what went wrong and exactly how to fix it — which is the whole purpose of the response time observability work. So, common causes: obviously there are three players, or three components, in our distributed system. There is the app server — and there can be many of them.
D: Then we have the network, and then we have the cluster of Couchbase nodes, and each of them can have several causes. If you look at the application server: at the very bottom we have the networking card, and on top of that you have the operating system with potentially many different layers of virtualization — Docker, Kubernetes, whatever — which doesn't make it easier to troubleshoot what's going on, and we have seen weird bugs at the OS level. I don't want to go into the details, but—
D: We have had issues there as well. And then obviously you have the runtime: if you develop a Java application you have the application server, at least your JDK and garbage collection — all the fun things that you have to troubleshoot — and inside the runtime you have the application itself, where something might hang or there are logic problems, and inside the application you have the SDK, which can also have bugs; no code is perfect. So you have all these causes on the application side.
D: And then if we go down the layers we get to the network, where you have firewalls, switches, load balancers, proxies — all potentially causing latencies, maybe intermittently, maybe spontaneously. In fact, we have seen firewalls dropping packets; firewalls basically holding sockets open, telling neither the server nor the client that the socket got closed, with operations going into the void.
D: All these fun things happen on the network, and with shared networks on EC2 and other cloud providers it's even trickier. And once we have the application and the network handled, we can look at the Couchbase server cluster. There, everything about the OS layer applies as well, but instead of the application server we have our Couchbase nodes, where each individual service can cause slowness — for example, if you fetch a document, the disk can be slow.
D: So what did we need? First, vendor neutrality: we are not in the business of application performance monitoring, and we are not providing tracing implementations, but for us it's important that we plug into as many tracing implementations as possible. We don't want to enforce anything specific onto our users. And it's also important that, at least by default, it has a small footprint: if we bring in more dependencies, there are potentially clashes with other application dependencies.
D: The more packages we bring into the system, the bigger the chance that specific customers have trouble deploying it, and so forth — so we're looking for a minimal footprint. It needs to be supported across all the SDK languages that we support; I've put all those logos here on the slide — Java, Go, .NET, C, PHP, Python and so forth. We officially maintain a large array of SDKs, and every feature that we roll out—
D: —we need to provide in all of those languages, so that if you are coming from Java and switching over to .NET, you feel right at home; you want to have the same functionality. In large ecosystems in enterprises you have different teams running different languages, but if they settle on the same distributed tracing engine for the whole system, or you have different microservices in different languages, you want the same functionality available everywhere. And then, it should be actively developed.
D: It shouldn't be something that we adopt when it's already dead and there is no momentum behind it anymore. So that's why we settled on OpenTracing: it's vendor-neutral, it supports all the languages that we need for our SDKs, and — this is very important for us — it's an API only. No other decisions are made for you; we don't want to enforce any specific network transfer or protocol decisions onto our users.
D: We don't want to enforce implementation details that users may want to override or customize in their system. And it's a big part of the CNCF, so it has a certain momentum — it's not some small, one-off solution where users end up building customized code anyway. It basically brings all this backing with it. It's also a moving target, and that's a little challenge for us, but it means we can influence the moving target.
D: We can participate in OpenTracing and move it forward. What I mean by moving target is that it's still evolving: not every aspect is set in stone. It's a living thing, which is a good thing, but it also means that we need to keep track of the changes and, with each incremental SDK version, bump the OpenTracing dependency to make sure we are tracking it appropriately.
D: It's a lightweight dependency, and the reason is we want to make it plug-and-play, zero friction. The idea is that we ship with a default tracer called the threshold logging tracer. It's a tracer that is enabled by default and aggregates slow spans on a per-service basis — for example key-value, N1QL query, all the services we provide — and logs them at an interval. We have set specific thresholds, but you can tune them; they are set at, for example, any KV operation that takes longer than 500 milliseconds.
D: Those are aggregated and then logged at a 10-second interval, and then, for example, the top 10 slowest operations are fed into the log with additional information. With the information that we provide, timeout correlation suddenly becomes possible: the timeout exception shown here actually changes from a simple timeout exception without context to a timeout exception which gives you an identifier that you can use for looking at the logs. We provide the operation ID and the local and remote sockets.
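The aggregation scheme just described — over-threshold spans collected per service, top N dumped every interval — can be sketched roughly like this. This is a toy illustration of the idea, not the Couchbase SDK's actual implementation; all names are made up:

```python
import heapq
import json
from collections import defaultdict

class ThresholdLogReporter:
    """Toy sketch: keep only spans slower than a per-service threshold,
    then periodically emit the top-N slowest per service."""

    def __init__(self, thresholds_us, sample_size=10):
        self.thresholds_us = thresholds_us          # e.g. {"kv": 500_000}
        self.sample_size = sample_size
        self._slow = defaultdict(list)              # service -> [(us, op_id)]

    def report(self, service, operation_id, duration_us):
        # Operations under the threshold are dropped, so normal traffic
        # never bloats the log.
        if duration_us >= self.thresholds_us.get(service, float("inf")):
            self._slow[service].append((duration_us, operation_id))

    def flush(self):
        # Called on an interval (every 10 seconds in the talk's defaults).
        out = {svc: [{"operation_id": op, "duration_us": us}
                     for us, op in heapq.nlargest(self.sample_size, entries)]
               for svc, entries in self._slow.items()}
        self._slow.clear()
        return out

reporter = ThresholdLogReporter({"kv": 500_000})
reporter.report("kv", "0x1a2b", 750_000)   # over threshold: kept
reporter.report("kv", "0x1a2c", 100_000)   # under threshold: dropped
print(json.dumps(reporter.flush()))
```

The real tracer additionally attaches things like the local/remote socket addresses and the configured timeout to each entry, which is what makes the log output correlatable with the timeout exception.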
D: We also show the timeout that was set, so we suddenly enable the user to look at this information and correlate it with the log output from the threshold logging reporter, which is part of the tracer. Every 10 seconds it will dump out the top 10 — or however many you configure — slow operations and give you the same information plus, in addition, the timings for specific parts of the process. Because let's say you have a timeout at 500 milliseconds, but the operation hasn't returned—
D: The dispatch time is in there, which basically combines the network time and the server time, and then, if the service supports it, it also gives you the server-side processing time in microseconds. For example, if you store a document or retrieve it from our KV engine, as part of the response it will tell you how long the operation took on the server. So by looking at all those different timing spans you suddenly get immediate insight into the different timings of the system.
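As a concrete illustration of that breakdown: because the dispatch span covers both the wire and the server work, a server-reported processing time lets you attribute the remainder to the network. The function and field names below are invented for this sketch:

```python
def estimate_network_us(dispatch_us, server_us):
    """Dispatch time = network time + server time, so when the service
    reports its own processing time, the difference approximates the
    time spent on the wire (clamped at zero for safety)."""
    return max(dispatch_us - server_us, 0)

# Hypothetical numbers in microseconds: a 9700 us dispatch with only
# 49 us of server-side work points at the network (or the client).
print(estimate_network_us(9700, 49))  # -> 9651
```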
D: As another example, this is the demo setup that you'll see in a bit, but I wanted to include some screenshots in here as well. By passing a tracer into our client you can configure whatever you want — you can just use our default Couchbase threshold logging tracer.
D: You can use Jaeger, you can use LightStep, you can use whatever you want, and the way you plug in your tracer is just the way you do things in general with the Couchbase Java SDK, or any other of our SDKs: there's an environment where you give it a tracer instance and we will use it. So just by changing the environment — giving it another tracer instance — you suddenly go from our built-in, zero-friction logging engine to a distributed tracing engine.
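The pattern being described is plain dependency injection: the SDK environment accepts any OpenTracing-compatible tracer and falls back to the built-in threshold logging one. A minimal sketch in Python (the demo itself uses the Java SDK; the class names here are stand-ins, not real SDK types):

```python
class ThresholdLoggingTracer:
    """Stand-in for the built-in default tracer."""
    name = "threshold-logging"

    def start_span(self, operation):
        return {"tracer": self.name, "operation": operation}

class JaegerStandInTracer:
    """Stand-in for any OpenTracing implementation (Jaeger, LightStep, ...)."""
    name = "jaeger"

    def start_span(self, operation):
        return {"tracer": self.name, "operation": operation}

class Environment:
    def __init__(self, tracer=None):
        # Zero friction by default; swap in any compatible tracer instance.
        self.tracer = tracer if tracer is not None else ThresholdLoggingTracer()

# Default: the built-in logging tracer.
env = Environment()
print(env.tracer.start_span("get")["tracer"])    # -> threshold-logging

# One changed line: full distributed tracing instead.
env = Environment(tracer=JaegerStandInTracer())
print(env.tracer.start_span("get")["tracer"])    # -> jaeger
```

Keeping the swap to a single constructor argument is what makes the "zero friction by default, distributed tracing on demand" story work across all the SDK languages.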
D: At the same time, you can see how long each step took, and you can see all the spans: what previously, in the logging tracer, were just fields in the JSON, you can see as actual spans here — how long the dispatch to the server took, how long the response decoding took, and so forth. That's all out of the box in the system. And then another quick example: here's a N1QL query where you can see we are also adding tags to the spans, so you get the full context.
D: Okay — [audience: "this is awesome, though."] — Thank you. So, you'll like the demo. I have just a Couchbase node running locally. This is our UI — nothing fancy. We have two buckets: one is our travel-sample bucket, which has airports, airlines — just some sample data that you can use for querying, and we'll use it — and then I have a Jaeger instance running locally, and I'll show you how that works. So here we have our code.
D: At the top we have the imports for the Couchbase tracing pieces and the Jaeger tracer. Other than that, we connect to localhost, give it our credentials, open my bucket, and then we perform a document fetch. Then we replace the document we just fetched, and then we run a little query — select distinct type from travel-sample — which will give out all the distinct types that are in the bucket.
D: Then we just sleep a little bit, so both the Couchbase tracer and the Jaeger tracer get a chance to send their data to the remote system. The way we set things up for the Couchbase tracer: I've modified the reporter a little bit — I lowered the threshold to one microsecond to make sure that every operation actually gets logged, and I increased the sample size from the top ten to a higher number. You don't need to do that, but you can see how to configure it. I'm also setting pretty-printing to true.
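In spirit, that demo tuning looks like the following — hypothetical setter names, shown in Python for brevity; the real knobs live on the Java SDK's tracer and reporter configuration:

```python
class ReporterConfig:
    """Sketch of the three demo tweaks: log everything, keep a larger
    sample, and pretty-print the output."""

    def __init__(self):
        self.kv_threshold_us = 500_000   # production default from the talk
        self.sample_size = 10            # top-10 slowest per interval
        self.pretty = False

    def with_kv_threshold_us(self, us):
        self.kv_threshold_us = us
        return self

    def with_sample_size(self, n):
        self.sample_size = n
        return self

    def with_pretty(self, pretty):
        self.pretty = pretty
        return self

cfg = (ReporterConfig()
       .with_kv_threshold_us(1)   # 1 us threshold: every operation is logged
       .with_sample_size(100)     # keep more than the default top 10
       .with_pretty(True))        # human-readable output for the demo
print(cfg.kv_threshold_us, cfg.sample_size, cfg.pretty)  # -> 1 100 True
```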
D: So if you're running something that takes log files and ships them to another system — for example, if you don't have a distributed tracing engine right now — you can still make use of this JSON blob and feed it into another system to analyze later, or our support staff can just grep for this stuff and look at the things that are slow. Then we configure the Jaeger tracer: we point it at localhost and give it some parameters. A pretty simple setup — let's run this first.
D: What you can see is that we've performed those three operations — the get, the replace and the N1QL query — and they all show up here in our log. So we have the get request and this identifier. It's maybe not that important to you right now, but what this thing does: once we connect to the server, during the handshake process we pass this ID to the server, so the server — which also has a threshold log of some sort — will log this ID as well.
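In other words, the grep-able key is the pair of connection ID (exchanged at handshake) and operation ID, written to both logs. A trivial sketch of the idea, with invented IDs:

```python
def correlation_key(connection_id, operation_id):
    """Both the SDK and the server log these two IDs, so together they
    uniquely identify one operation across client and server logs."""
    return f"{connection_id}/{operation_id}"

# The same key shows up in the client-side threshold log...
client_side = correlation_key("0x5f3a9c", "0x1")
# ...and in the server-side slow-operation log.
server_side = correlation_key("0x5f3a9c", "0x1")
print(client_side == server_side)  # -> True
```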
D: So with this ID plus the operation ID you can uniquely identify any operation in the system, on the server side too. Even if you do not feed this into some distributed tracing engine, you still get the chance of a much better correlation of what is slow in the system. And we see the timing: the whole get took around 9.7 milliseconds, while actually fetching the document on the server, excluding network time, took 49 microseconds. Then we have the replace operation and the N1QL operation.
D: Here, when we find the traces, you see those spans in Jaeger without doing anything. Click on the N1QL span and you can see the dispatch-to-server time, and in the metadata you have all the tags — the query that was executed, the operation ID — basically everything you've seen in the logging tracer output is now stored in Jaeger. And the other thing is that, depending on what kind of operation you run: if it's a mutation, we add a span for the encoding part.
D: If it's a fetch operation, we add a span for the decoding part, because we've seen in the past that if you have huge documents, JSON encoding and decoding can take a long time — you would immediately see that here as well. All those things are in place, so I think I'm pretty good on my twenty-five to thirty minutes. Oh — one more thing I forgot, the call to action, before we go into questions: our OpenTracing support is currently in Developer Preview.
D: We are planning on a beta at the end of next week — two weeks from now — and then once Couchbase Server 5.5 ships, in a couple of weeks or months, something like that, this thing will become GA. So our call to action right now is: we are looking for feedback in all kinds of languages — it doesn't matter if you are doing Java, .NET, whatever — I would really like your feedback on the implementation, on the API, and on where we can do better with our response time observability.
D: We have a concept called SDK RFCs: every feature we develop for our SDKs we put into an RFC where we discuss it, and these are open, which makes this possible. I put the link to the draft in here — take a look and put in questions and remarks if you have them. The other thing I wanted to point out: we have a blog at blog.couchbase.com, and we're currently working on a series of blog posts on this topic, so watch that space — there is more to come.
D: If you're watching this in the future, you can go there right now, since the posts will be published by then. With that: thank you very much for spending the twenty-five to thirty minutes with me, and thanks for the opportunity to show what we've been doing for the last couple of months. With that I'll open it up for questions — and please, Mike and the others from the team on the call, chime in as well. If there are any questions, please jump in. Thank you.
B: Awesome talk, thanks so much. I do have one quick question — well, a comment first, I guess. I just said this is awesome, but: this is awesome. It's really exciting to see this, and it reminds me a lot of Bigtable — at Google, Bigtable had pretty thick clients that did a lot of important logic and were involved in the same sorts of optimizations you're doing, and I recall that tracing in those clients was essential for the same reasons that you've outlined.
D: Basically, working with the support team, I can tell you that in general timeouts are maybe number one or two on the list, so it's a big point. And one of the reasons I didn't mention is that, especially on our KV operations, we have a default timeout of two and a half seconds, and some users set it even lower.
D: Then, if something is slow, you can retry, you can do whatever you want — we give you back the control — but the average user is not used to handling timeouts, especially combined with asynchronous operations. The Java SDK is asynchronous as well, so handling asynchronous retries and so forth: you need all of that for running a scalable distributed system, but it's just not something the average developer is immediately used to.
E: That makes sense, yeah. Just to mention a couple of quick things: one of the challenges, I guess, is that Couchbase is Apache 2 open source and there's an Enterprise subscription and all that stuff. So we certainly have those commercial customers, but we also have lots of two- and three-node deployments. In those cases, sometimes this becomes an issue for them, but they're not always running at the kind of full tilt that others, like LinkedIn, are.
A: And just to clarify, part of the value prop here is not just that you've given instrumentation to your customers, but that you wrote this instrumentation with some playbooks in mind, right? So you actually have playbooks you're going to give your customers — or otherwise, when they come to support, it's integrated with the hooks and trace points you put into the client. Yeah?
E: Absolutely. One — I don't think I can reference who they are — but one user is actually a member of the CNCF and OpenTracing, and they're probably going to implement their own tracer; they have some specific needs. And then, of course, there are going to be plenty of commercial products and projects that you can plug into completely.
A: Yeah, my one comment on that: there's been a discussion about automated tracing versus manual tracing, which I think is maybe the wrong way to slice it. Going forward I would like to talk about two kinds of automated tracing: there's dynamic tracing, which is the traditional agent-based thing, and then there's pre-provided tracing from the service provider, and they're kind of mutually exclusive — because the point of this instrumentation is that the service provider—
D: You know, as you said, among many other things: we as a service provider just know from history where the pain points are, so we can provide very narrow and focused instrumentation at all the usual pain points. With some generic agent-based tracing you basically don't have this insight, because you would have to learn, for every library out there, how it is implemented and what the usage patterns are, and so forth.
E: Yeah, and what we have here is pretty modest — it doesn't do a whole lot — but at the same time it gives you a fair amount of insight very easily, which was kind of our goal. Mike Goldsmith in particular and I had to spend a lot of time thinking about it: making sure that we can run this out of the box, not spam the log, and still get useful information.
E: At the moment it's only on the client. However — and Michael showed this a little bit — we do actually grab certain statistics that are returned in the responses and put those in. That was one of the important things for us, because frequently people will see these things and wonder. We saw the one issue Michael slightly referenced: an issue where it was SSD wear-leveling, and it would affect nodes fairly randomly.
A: All right, well, we can certainly continue this conversation on Gitter, and this video will get posted up on the internet, so we can start sharing it around — because I thought that was a great presentation, personally. But moving on: someone put down an update about happenings in the larger tracing community — I think that was me.
B: There may be — I mean, I may have consolidated that with the item a few lines down, around the conversation that we had on Tuesday. I think since the last OTSC call there was a Gartner report that came out about microservices and APM, which ended up basically saying that the enterprise software market — which is in effect what Gartner studies — is moving towards increased adoption of explicit white-box instrumentation, and it mentioned OpenTracing by name a couple of times, which is cool to see and, I think, reflects reality.
B: That was met with a number of blog posts from other folks, with varying levels of positivity about OpenTracing and instrumentation in general. That was not causally related to the blog post that Erica Arnold put up a few weeks ago, but it's topically related to it — the one that described the different aspects of tracing, which I think Ted put on the agenda.
B: But in general there seems to be a growing need within the larger tracing community to describe the different aspects of tracing, to name them, to specify which problem is which, and to make sure that people who are trying to solve problem A recognize that problems B, C and D are still important. So I think that's the basic narrative I would lay over this whole thing; there are certainly other takes on it.
B: I think everyone agreed that it was very important for there to be a standard — well, a lowercase-s standard — API for describing transactions that was separate from anything else, which is OpenTracing's charter. And then AppDynamics and Dynatrace, and to a certain extent New Relic — maybe to a lesser extent New Relic — have certain things they'd like to express at different levels of application complexity: they'd like to separate span management from higher-level concerns like describing HTTP requests, database calls, and things like that.
B: So we had about a day-long workshop to talk about how that would work, and I felt like it was quite productive and there's a lot of alignment. So I just wanted to say that we're going to continue to have that conversation. If people are interested, you should ping me or Ted or whomever — you're certainly welcome to participate in it — but I just wanted to let people know that that's happening.
B: I don't like that there seem to be several different small conversations going on, with people on different continents, about tracing, and I wish everyone could just be in one conversation. So this is just my attempt to broadcast that this other conversation is happening and to welcome anyone who wants to take part in it — but in general I thought it was very positive and there's a lot of alignment.
A: Yeah, I would second that, and say that conversation kind of dovetails with the W3C working group that is mostly focused on the Trace Context wire protocol. That wire-protocol conversation is important. It hasn't been super directly related to OpenTracing, in the sense that we're wire-protocol agnostic, but it is obviously important to the members of the OpenTracing community. Where these two things start to get tied together more closely is on the other side, with a sort of data export format.
A: If you're going to use a standard wire protocol to tie multiple tracing systems together — with, you know, unified correlation IDs across tracing systems — you're still going to have the problem that you have to get the data out of one of these tracing systems and into the other one so that you can analyze it. So a standardized data format is sort of the other half of that puzzle, and once you're defining the data format, you're getting into something that relates much more deeply to the kind of instrumentation you're doing.
A: The model of that format ideally should line up with the model that the API is thinking about, and then, more concretely, the tags and keys and values that the data format is using to describe things like HTTP calls — or any other higher-level concept — really need to match what the instrumentation library is doing. So I think that's an area where the W3C tracing working group and OpenTracing, those projects, need to gel up, because there is a lot of overlap there.
A: So, other happenings, going down the list on the agenda: there was an inaugural Austin meetup. This is awesome — I believe this is the first official OpenTracing meetup that has occurred, at least in the U.S., to my knowledge. There was a meetup group formed in Austin, with mostly people at, I believe, HomeAway and Under Armour kind of holding it down — sorry if I left someone out. It consisted of a number of talks: I gave a talk, Eduardo from HomeAway gave a talk, and there was a panel on tracing.
A: My big takeaway was that this is really useful to people. Application developers don't like instrumenting the third-party software that they're given — they don't like doing it themselves. They would prefer that it come with something out of the box, like what Couchbase has created, that they can just plug into. If it's a third-party plugin, that's great; if it's first-party and comes with a playbook and deeper information about what those trace points are trying to measure, that's way better for them.
A: Okey-dokey — we've got ten minutes left and we've gone through basically everything. Someone put "tracing in four parts, talk summary from Ted," so I guess I will talk briefly about that. We've already covered some of this, I think: Erica wrote a great post called "Tracing, tracing, tracing," and I gave a talk on a similar topic down in Austin that I would like to turn into a blog post, pointing out — and I think people have been receiving it well — that there are four parts of tracing. I put a link to the particular slide—
A: —that I think shows this: you've got a tracing API that you're using to instrument your code; there's a wire protocol that is standardized to talk between these systems; there's a data protocol for sending things between analysis systems; and then there's the analysis system itself. So talking about it in those terms — the API, the wire protocol, the data protocol, and the analysis system — is a nice way to break it down, because different people are focused on those different components.
A: So, depending on your role in this cloud ecosystem, you might find one of these things way more useful than the others. For example, if you work on cloud infrastructure at Google or Amazon and you're providing black-box services to people, you really care about the wire protocol and the data protocol, because there's no way for them to install—
A: —you know, a Jaeger tracing client in S3 for you, if that's what you're using. So the API layer is not very useful to people who come from that background, and because internally, at places like Google, they tend to write all the software from scratch in-house, something like an agnostic API isn't too useful for them internally either. So people from that background tend to really focus on these protocol-level things, whereas people who are not currently trying to glue together—
A
multiple tracing systems are less likely to look at this wire protocol and data protocol stuff and be like, "that's my big pain point; without a standard data protocol, how am I supposed to get my information out of Jaeger?" And the answer is like, "well, I don't know, I just put it all together and there it is, so it's fine." So there is a bit of people in the community
A
I think sometimes talking past each other, because they're just feeling a different part of the elephant, and I would like to get that sort of, like, cleared up. Maybe it's like getting some common language around that stuff. So on that front, I'm gonna try to turn this into a blog post to kind of follow up with what Erica wrote. So, I don't think we can say this enough.
C
B
C
Add an item, because at last month's meeting there was an action item that I was gonna change my PR to do trace context header detection on basic tracer for Go, to start with a first step suggested by Yuri: that the inbound trace ID in the trace context header would be stored as a correlation, but not used for the basic tracer's own trace, and then the second phase could be a separate PR that actually upgraded basic tracer to 128-bit.
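[Editor's note: a minimal Go sketch of that first step is below. It parses a W3C trace context `traceparent` header and keeps the inbound trace ID only as a correlation tag, while the tracer would continue minting its own trace ID. The type names and the `w3c.traceparent` tag name are invented for illustration and are not part of basictracer-go.]

```go
package main

import (
	"fmt"
	"strings"
)

// Traceparent holds the fields of a W3C trace context "traceparent" header,
// which has the shape
//   version "-" 32-hex-digit trace-id "-" 16-hex-digit parent-id "-" 2-hex-digit flags
// e.g. "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01".
type Traceparent struct {
	Version, TraceID, ParentID, Flags string
}

// ParseTraceparent does a length-based sanity check only; a production
// parser would also validate that every field is lowercase hex.
func ParseTraceparent(header string) (Traceparent, error) {
	parts := strings.Split(header, "-")
	if len(parts) != 4 || len(parts[0]) != 2 || len(parts[1]) != 32 ||
		len(parts[2]) != 16 || len(parts[3]) != 2 {
		return Traceparent{}, fmt.Errorf("malformed traceparent: %q", header)
	}
	return Traceparent{parts[0], parts[1], parts[2], parts[3]}, nil
}

// CorrelationTags implements the phase-one idea from the discussion: record
// the inbound W3C trace ID as a tag without adopting it as the span's own
// trace ID. A malformed header is simply ignored.
func CorrelationTags(header string) map[string]string {
	tags := map[string]string{}
	if tp, err := ParseTraceparent(header); err == nil {
		tags["w3c.traceparent"] = tp.TraceID
	}
	return tags
}

func main() {
	tags := CorrelationTags("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
	fmt.Println(tags["w3c.traceparent"])
}
```

Phase two, switching the tracer's native IDs to 128 bits so the inbound ID could be adopted directly, would then be a separate, larger change.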
C
So I am also going to those W3C meetings, and I have wanted to have something to show for both this group and that group. And, you know... well, I guess I should just put this on the agenda. But what do you think is the method of correlating this W3C trace context trace ID with an OpenTracing basic tracer trace ID? Like, is it just a tag with the name "trace context," and that's it, or is it something fancier than that? What does it mean to have basic compliance?
B
C
D
C
B
C
I got the impression that's what Yuri was suggesting last month, when he said, "well, why don't you start out with just that," in the notes from last month, and I said, "yeah, okay, I could split it up," because I had gone into a deeper integration where I was just gonna change basic tracer to do 128-bit trace IDs, handling it natively, the sampling bit and everything. It's
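[Editor's note: basictracer-go's trace IDs are 64-bit, while a W3C traceparent trace ID is 128-bit. One way the deeper integration mentioned above could be modeled is a high/low pair of 64-bit words, as sketched below; this `TraceID` type is an assumption for illustration, not the library's actual type.]

```go
package main

import (
	"fmt"
	"strconv"
)

// TraceID models a 128-bit trace ID as two 64-bit halves, so existing
// 64-bit code paths could keep using Low while W3C interop uses both.
type TraceID struct {
	High, Low uint64
}

// String renders the ID as the 32 lowercase hex digits a traceparent
// header expects.
func (t TraceID) String() string {
	return fmt.Sprintf("%016x%016x", t.High, t.Low)
}

// TraceIDFromHex parses 32 hex digits into the high/low pair.
func TraceIDFromHex(s string) (TraceID, error) {
	if len(s) != 32 {
		return TraceID{}, fmt.Errorf("want 32 hex digits, got %d", len(s))
	}
	high, err := strconv.ParseUint(s[:16], 16, 64)
	if err != nil {
		return TraceID{}, err
	}
	low, err := strconv.ParseUint(s[16:], 16, 64)
	if err != nil {
		return TraceID{}, err
	}
	return TraceID{High: high, Low: low}, nil
}

func main() {
	id, err := TraceIDFromHex("4bf92f3577b34da6a3ce929d0e0e4736")
	if err != nil {
		panic(err)
	}
	fmt.Println(id)
}
```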
B
kind of a, you know, a punt. But another thing, and I'm not attached to this at all: I could imagine basic tracer's options that it uses at startup time could include some kind of designation as to which propagation format it's intended to use. I mean, that X-OT-Span-Context thing, which was added sort of on a whim, I think has actually caused people a lot of anger, which is hilarious to me. It was like, it's like...
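[Editor's note: the startup-time designation floated above might look like the sketch below. The option and function names are hypothetical, not basictracer-go's actual API; the `X-OT-Span-Context` header name comes from the discussion itself, and `traceparent` is the W3C trace context header.]

```go
package main

import "fmt"

// PropagationFormat designates which header format a tracer should speak.
type PropagationFormat int

const (
	FormatBasictracer     PropagationFormat = iota // X-OT-Span-Context style
	FormatW3CTraceContext                          // traceparent / tracestate
)

// TracerOptions is a stand-in for the options struct a tracer reads at
// startup time, extended with the propagation designation.
type TracerOptions struct {
	Propagation PropagationFormat
}

// ContextHeader reports which HTTP header the configured format uses to
// carry the span context.
func ContextHeader(o TracerOptions) string {
	switch o.Propagation {
	case FormatW3CTraceContext:
		return "traceparent"
	default:
		return "X-OT-Span-Context"
	}
}

func main() {
	opts := TracerOptions{Propagation: FormatW3CTraceContext}
	fmt.Println(ContextHeader(opts))
}
```

Making the format an explicit option would avoid baking any single header convention into the tracer, which is the complaint raised about X-OT-Span-Context.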
B
A
C
B
C
B
C
I think the original... yeah, okay, I think I'll try that then, because that's certainly easier than doing it the weird way. I think the original idea was: oh, let's say you go from Dynatrace to X-Ray and back again, and you want to see the trace ID that X-Ray put on the thing, assuming everyone's using W3C trace context, and that implementation would add, like, a good example of how that might work. That scenario is a lot more complicated than what... but that was the idea, I think, for the reference application, the demo.
B
I think it's a great idea, and I think that approach would... I mean, that scenario you just outlined is sort of the, you know, graduate-course-level version. Maybe we're talking about a high-school-level version. Yeah, that's...
F
C
A
All right, I certainly think there's value in just this simpler task of just trying to parse that header, like trying to actually implement that standard. This is gonna get into W3C business, but one concern I have on that front is there's been lots and lots of talk about optimization, and I feel like, in some sense, some of that talk has been focused too much on, like, the kind of optimization you would get out of building your own custom HTTP client.