From YouTube: CNCF Webinar: The What's and Why's of Tracing
Description
Tracing can be very powerful. It gives the ability to connect the customer experience to the backend services several hops away. This comes down to what information is in your traces. There isn't one standard set of tags to add for EVERY application. It comes down to how traces are used and what matters to your organization. During this webinar, we will discuss the need for tracing, dive into the why (and what) you would want to trace with distributed tracing, and delve into the OpenTelemetry specs and architecture to see how we can tailor (or tag) our traces.
A
All right, I want to thank everyone who's joining us. Welcome to today's CNCF webinar, The What's and Why's of Distributed Tracing. I am Libby Schultz and I'll be moderating today's webinar. We'd like to welcome our presenter today, Dave McAllister, Senior Technical Evangelist at Splunk. A couple of housekeeping items before we get started: during the webinar you're not able to talk as an attendee. There's a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. Not the chat; be sure you're dropping it in the Q&A box. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants and presenters.
B
Thanks, I appreciate it. Hi everyone, I am Dave McAllister. I work for Splunk as a technical evangelist, and at Splunk we have this saying that every person is made up of a million points of data. So let me share just three real quick ones with you. I'm owned by three cats, so I'm very good at being ignored. I spent 10 years as a soccer ref, so I'm used to people disagreeing with me.

In fact, I could pretty much guarantee that at any given moment at least half of the players on the field would disagree with any call I made. And finally, I'm married, so I have a witness that I don't read minds. Please make use of the Q&A box that was just pointed out to you so that we can get your questions answered as well. I'm going to be talking about tracing, but I'm going to start a little bit by talking about this concept of observability.
Observability has come into play recently, and it's really obvious that data is the driving factor for it. As part of that, you'll find that there are lots of pieces. There are the three classes of data that we talk about: metrics, traces, and logs. But imagine, for instance, if you never got all the data, or you missed the ephemeral data that was inside of here, or your data showed you something you couldn't drill into, or you only saw the good stuff or the bad stuff. Why is that really important?

Well, part of it is that our world is changing. Our application space is changing, and we are now into a microservices style of architecture, where our services are independent, usually loosely coupled, and can expand or contract as needed. And so, while certain things have become easier, CI/CD has become easier in some ways.

Simple testing has become easier. Testing, on the other hand, is where we started getting synthetics and started looking even at passive monitoring and RUM. But nonetheless, we now have a lot of moving parts, and when we throw this into a cloud, we can also throw this into a hybrid cloud, where part of it lives inside an existing data center as well as in a public cloud. Life becomes a little challenging, and that's part of what happens when we deal with this.
So each of these pieces coming into the front end from your HTTP request may touch the cart service, may touch all of these different places, and so each of these pieces gets touched in each transaction. Now, the nice thing is that our tools tend to be smart enough to be able to map the transactions for us, but when we get into tracing, we start looking at what that means inside of here. So take that simple little step-through that I just did.

For instance, the front end call overall took about one and three-quarter seconds, of which slightly over one second was in the checkout service, which then breaks down into each of these pieces. This is where tracing really excels: showing us what's happening inside of the application on a service-by-service basis for a request of interest. And when we start looking into this a little bit more, we can start digging into that.
So we can think of it this way: the trace is all of it grouped together, and the span is each individual portion, or any particular portion that we choose to measure. So those are the regions in here, and that goes across time; these are all time-series based. So it's the same functionality, the same application we're looking at.
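To make that trace and span relationship concrete, here is a minimal sketch using the OpenTelemetry Python SDK; the service and span names ("frontend-request", "checkout", "charge-card") are placeholders for illustration, not taken from the talk.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints finished spans to the console.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("frontend")

# One trace: the outer span groups the child spans, each child being a
# portion of the request we chose to measure.
with tracer.start_as_current_span("frontend-request"):
    with tracer.start_as_current_span("checkout"):
        with tracer.start_as_current_span("charge-card"):
            pass  # the work for this portion of the request would happen here
```

Each with block becomes a span, and all of them share one trace ID; that shared ID is what lets a backend draw the kind of service-by-service waterfall described above.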
We see that the CPU is now pegged at 100 percent, and it's been that way for over five minutes, and that starts leading to some questions that are necessary to understand. The CPU at 100 percent may not necessarily mean that there's something wrong. There may not be an error, but there can be a delay; there can be communications problems. But, for instance, we do notice that at the same point in time we had a new code push: the front-end version changed. Microservices change all the time.

We know of companies that push a hundred different versions of different microservices on a daily basis, and I suspect some of y'all may even be exceeding those numbers. But now we can see that we've seen an event that's caused a potential problem, and what the actual functionality is showing is that we've actually seen a service-not-available showing up at the far end of this thing: HTTP 503 error codes that are showing up in the shipping service.
So, shorthand: observability is not the microscope, it's the clarity of the slide under the microscope. It's what the data can tell you. In our old world, with black-box monitoring, or even far enough back where we would go into the data center and make sure that all the little servers had their lights blinking at us, that was enough. Now we need to get more information and more details than that, and to do that, we need to be able to collect data.

Our data collection is also beginning to change a little bit because of the nature of our underlying structures. We now need standards-based agents; we can't live with data that's coming in on a proprietary basis or going out on a proprietary basis. We need cloud integration so that it understands how to deal with the public clouds, the private clouds, even our legacy data centers. And we need the code to be auto-instrumented.
There's been a huge amount of discussion about whether you should hand-code everything into an application, but again, think about how often services can be rolled out across this incredibly large environment now. If every time we changed something we had to rebuild manual instrumentation within our code base, we'd have a bit of a headache over time. Likewise, we want to support all the developer frameworks that make sense.

We want any code at any time, and honestly, as we expand the scale, we don't want to have any limits on how we can slice and dice this data to find out what we're interested in. Now, we've seen a lot of this before: metrics is a pretty well understood issue, and logs have been around forever.
Data coming in is really about the telemetry, and the telemetry verticals quite often live in many places. I mentioned, for instance, these three classes of data, sometimes called the three pillars: tracing, metrics, and logs. But we now want those for every language of interest. We want canonical implementations.

So the wire formats, the trace context, the data structures themselves are all standard, so we can interoperate between them. Think of these as the layers of what's going on, of the activity, for each of the verticals. Each of these verticals may have a different approach, but each of these verticals must have each of these layers.
Metrics is a close follower to that, and logs, which we'll get into a little bit more later, are becoming the next direction. So OpenTelemetry not only supports all of the telemetry verticals, but it also supports all three classes in one environment. And it's nice to note that OpenTelemetry is the second most active project in the CNCF today, only behind Kubernetes itself.
Tracing is incredibly powerful, but it does take a certain amount of effort on your part to make it useful to you. When we start looking at this, we'd start by asking what problems we are trying to solve, and by the way, this is a very small subset of potential answers. In the example I showed earlier, we're trying to solve a performance issue, things running slow, or we're trying to solve an error issue.

Why am I seeing errors being reported from a service? When we start looking at this, we want to make sure that we are responding as quickly as possible. At the same point in time, the mean time to resolution and the mean time to detection need to be as quick as possible. Errors need to be solved as soon as possible.
However, when we look at the current methods that prevailed beforehand, what we found was that there were some gaps that showed up, and I'm sure that you can add to the gaps yourself with additional ones. It was difficult to tie what was observed to what is currently running, and we were having to collect data in different formats.

So the metric format would be coming in on one wire or one format model, while the traces would be coming in on a different one, and the logs would be coming in on yet another different one. And even if you think you're fine and have no gaps, you may need to start looking at new things: you're developing new services, you're developing new code, and there's new functionality coming into play here.
The next question is to start looking at which teams are going to make use of tracing. Not all teams necessarily make use of tracing in the same ways. Tracing can be heavily used in dev environments, for instance; tracing can be a benefit to SRE environments. Tracing may not be as important to your network management, but it's still useful inside of here.
The nature of our infrastructure is changing, our environments are changing, and our customers' expectations are all changing. So I want to step a little bit into the architecture of OpenTelemetry itself, and my apologies, I've not updated this for the absolute latest set of meetings, and I'm sure we're going to hear a lot more about it.

How do we deal with this functionality? There's the collector: where are we getting the information, and how is it coming in? There are the client libraries that it comes in through. We're in beta for traces and metrics and incubating for logging, but the auto-instrumentation pieces are beginning to evolve and are in fact already there.
So when we start looking at these pieces, it's easy to start the breakdown, so let's take a little look. This is an OpenTelemetry reference architecture. An application on a host can talk to a collector, the OTel Collector, or the agent; that agent can send the information, either traces or metrics, off to the backends of choice, whatever they are.
This is agnostic; it doesn't really care what you're talking to on the back end. So the OTel library that can be part of your application can automatically talk to the agent to send this information, and the OTel Collector can likewise bring in all that information and talk back to those various pieces. So the applications can now be bringing data in from multiple sources in multiple places to multiple endpoints, and this is a very powerful architecture. This is not a one-size-fits-everybody; this is sized to fit you correctly.
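As a rough sketch of that application-to-agent hop, this is how the Python SDK is typically pointed at a locally running collector or agent over OTLP. The endpoint is an assumption (4317 is the conventional OTLP/gRPC port), and the exporter package name can vary by SDK version.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# The common gRPC OTLP exporter; the package path may differ by version/distribution.
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
# Batch finished spans and send them to the collector/agent on this host.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)
```

The backend behind the collector can then be swapped out without touching the application code, which is the point being made here.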
This is important because tracing is fairly data intensive. There's a lot of information flowing through your tracing environments, and you really need to keep as much of it as you possibly can, so you can answer questions that you didn't even think were going to be questions when you started.
So on the tracing side, which is the most advanced, there's W3C Trace Context (or B3); inside of that, the concept is to get the context of what's going on here.
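As a small, hedged illustration of carrying that context between services with the Python API: the dict below stands in for outgoing HTTP headers, and the traceparent header name comes from the W3C Trace Context specification.

```python
from opentelemetry import trace
from opentelemetry.propagate import inject
from opentelemetry.sdk.trace import TracerProvider

trace.set_tracer_provider(TracerProvider())  # a real SDK so spans get valid IDs
tracer = trace.get_tracer("frontend")

with tracer.start_as_current_span("call-downstream"):
    headers = {}
    inject(headers)  # adds the W3C 'traceparent' header for the current span
    # Send the request with these headers. The receiving service calls
    # opentelemetry.propagate.extract(incoming_headers) and starts its spans
    # in that context, so both sides join the same trace.
    print(headers)
```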
Then there's the tracer functionality: a series of calls where the spans make up the trace. A trace is a series of spans; whether it's a single span or multiple spans is part of the underlying structure here, and the span can tell us additional pieces of information.

What kind of communication is going on; what its attributes are, its key-value pairs, its tags or its metadata; events that have happened, for instance, in the example earlier I showed that a service deployed, and that would be an event that happened inside of here; and then links that we can drill off to. We can also look at sampling functionality.
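As a minimal sketch, here is how those pieces look on a span in the Python API; the attribute keys and the deploy event are illustrative values, not ones from the talk.

```python
from opentelemetry import trace

tracer = trace.get_tracer("shipping")

with tracer.start_as_current_span("ship-order") as span:
    # Attributes: key-value pairs (tags/metadata) describing this span.
    span.set_attribute("order.id", "12345")
    span.set_attribute("customer.tier", "gold")
    # Events: timestamped happenings recorded inside the span, e.g. a deploy marker.
    span.add_event("service.deploy", {"version": "v2"})
```

Links to other spans, by contrast, are supplied when a span is created, via the links argument to the span-creation call.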
We can look at span functionality, and we can export the data anywhere, into any functional wire format that we want, via the OpenTelemetry protocol itself, Jaeger, Zipkin, Prometheus; it doesn't matter. Those are the advantages we start seeing when we bring this tracing information into play here.

There is something that's very important to understand when we start really getting into this, and that is this thing called conventions, semantic conventions. In OpenTelemetry, spans can be created in as many places as you want to, and sometimes there can be a little bit of an overload in the amount of granularity that happens, but that's an issue that you need to understand.
B
What's
providing
the
best
set
of
information
for
you
here
when
we
start
looking
at
some
well
well-known
and
heavily
used
protocols
like
http
or
database
calls,
we
start
trying
to
unify
the
structure
of
how
those
things
get
reported
and
we
can
think
of
this
as
semantic
conventions.
The
conventions
are.
This
is
the
standard
way
of
doing
this
and
therefore
the
open,
telemetry
product
will
default
to
this
methodology
here.
So
things
like
http,
we'll
see
a
method,
we'll
see
a
status
quo,
we'll
see
an
error
code
that
comes
out
as
part
of
the
http
semantic
conventions.
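For illustration, an HTTP server span following those conventions typically carries attribute keys like the ones below. This is a sketch; the keys shown are the long-standing http.* convention names, and later semantic-convention releases have been renaming some of them.

```python
from opentelemetry import trace

tracer = trace.get_tracer("frontend")

# A server-side span for an incoming request, tagged with the HTTP
# semantic-convention attribute keys rather than ad-hoc names.
with tracer.start_as_current_span("GET /cart", kind=trace.SpanKind.SERVER) as span:
    span.set_attribute("http.method", "GET")
    span.set_attribute("http.route", "/cart")
    span.set_attribute("http.status_code", 200)
```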
We'll also see databases; databases, no matter where they live, have common pieces inside of the database functionality. Messaging systems, whether they're MQTT or Kafka, you pick it, you've got it there. Or even function as a service, showing the triggering functionality for the various serverless environments that are out there.

The metric space is similar, but because metrics themselves are different from tracing, there's a separate semantic space here. Context means it wants to span and correlate, so we want to be able to bring the information together and quickly grasp the point of activity, whether it's over an aggregation of a period of time or a specific point in time; context gives us that, in terms of spans. Then there's the ability to record a measurement, and the measurements themselves can be raw: I want to see a single value of a measure.
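A hedged sketch of recording such measurements with the Python metrics API follows; the instrument names and attributes are made up for illustration, and the metrics API stabilized after this talk, so the exact setup may differ from what was available at the time.

```python
from opentelemetry import metrics

meter = metrics.get_meter("checkout")

# A counter aggregates over time; a histogram keeps the distribution of
# individual raw measurements.
request_counter = meter.create_counter("checkout.requests")
latency_histogram = meter.create_histogram("checkout.duration", unit="s")

request_counter.add(1, {"http.status_code": 200})
latency_histogram.record(0.274, {"http.status_code": 200})
```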
The time that's involved here is the time of the measurement, as well as the time of the aggregation that's involved. And so when we start looking at those pieces, it becomes incredibly important to understand that metrics driven from the tracing data themselves allow us to look at time as a grouping, as well as dig into any specific point. OpenTelemetry is incredibly good at this and at providing a common basis, again, for all of it. Then there's the resource SDK, which lets us describe the entity producing the telemetry.
B
So
that's
the
representation,
and
in
that
we
started
looking
at
things
like.
Where
is
this
running?
That's
the
environment
thinks.
Is
it
running
on
a
host?
Is
it
running
inside
of
a
container
what
defines
the
computing
environment
and
that?
How
did
it
get
deployed?
Did
it
get
pushed
by
kubernetes
they
get
pushed
by?
B
Is
it
a
manual
process
and
then
what
makes
up
that
compute
unit?
So
again,
you
can
go
from
the
kubernetes
world
to
pods
to
sorry
worker
nodes,
to
pods,
to
containers
to
process,
and
we
need
to
understand
how
to
drill
into
each
of
those
pieces
as
well
so
the
resource
the
semantic
conventions
here
are
that
all
of
these
things
get
defined
when
we're
bringing
in
the
tracing
now
or
not
to
you
is
a
call
that
you
have
to
make
in
your
environment
if
you're
not
using
kubernetes.
For instance, you probably don't care about the Kubernetes side, but you probably still care about where it's running and what the running environment looks like at any given moment. That gives us the ability to start tying together the application, the communications, and the infrastructure into one picture. Unique inside of this is that we're starting with tracing, so we're starting by looking at the application, the user experience, and then driving down to what's affecting that user experience, without necessarily having to change our environments or change our structures or change our underlying language.
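A minimal sketch of declaring that resource information in the Python SDK follows; the specific attribute values are placeholders, and the Kubernetes key only makes sense if you actually run there.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

# Resource attributes describe the entity producing the telemetry:
# what service it is, where it runs, and how it was deployed.
resource = Resource.create({
    "service.name": "cart",
    "service.version": "1.4.2",
    "host.name": "node-7",
    "k8s.pod.name": "cart-5d9f7c-abcde",  # only relevant on Kubernetes
})

trace.set_tracer_provider(TracerProvider(resource=resource))
```

Every span emitted through this provider then carries that resource description, which is what lets a backend tie the application view back to the infrastructure view.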
Most recently, logs have come into play, and logs are definitely in an early incubating stage. But it's really kind of important to know this: it must be possible to map from existing log formats to this data model. So when we do this, we've got to make sure that we understand the logging structures that come into play, as well as new information that may be coming in. So it's got to be semantically meaningful.

It's got to be understandable and relate to the underlying structure here, and we have to be able to map between log formats. Log formats should be able to come in as one and be converted and go out as another, just like we can, with the collector, bring in data from a wire format of one form and push it out as a different one. We need to be able to do these conversions so that we can continue to support things that exist today as well as things that will exist in the future.
So this log translation, the simple model, is that log format A should be able to come into this and go out as log format B, and log format B should be no worse than a reasonable direct translation done by someone sitting down and looking at log A and log B. You should also be able to go from log A to log B and back to log A without losing semantics, capability, or significant meaning.
We do look at three types of logs. First, system formats: honestly, these are the logs and events that our operating system puts out to us, and we really don't have a lot of control; we can't change the formats, we can't really affect what information is included. So those are very stable environments, and we know what they look like. But we also have third-party applications; here you can think of standard web servers, Apache, nginx. We may have some control over what that information includes.

So while there are other forms, each of these categories is considered as part of the log specification: we're defining what they will look like and how they'll get transmitted, for system logs as well as for common third-party applications and for applications that we ourselves are writing. So from very stable, to flexible, to a no-limits model, we're looking at all of those capabilities for our logs. When we get into logs, there are really two sets of things that fit inside of here. There's a series of top-level named fields.
These are fields we expect are going to occur every single time: we're going to see a timestamp, we're probably going to see the body of the log record that's involved, and we're going to see the source of the log record. We've defined these top-level fields so that we can again unify our language, and our language becomes part of the overall structure inside of here, because we have a unified language.
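To give a feel for those top-level fields, here is a purely illustrative sketch of reshaping an existing application log line into the kind of record the log data model describes. The field names follow the data model's top-level names; the parsing and values are made up.

```python
import time

def to_otel_log_record(line: str, source: str) -> dict:
    """Illustrative only: reshape a plain log line into the data model's
    top-level fields (timestamp, severity, body, resource, attributes)."""
    severity, _, message = line.partition(" ")
    return {
        "Timestamp": time.time_ns(),           # when the event was observed
        "SeverityText": severity,              # e.g. "ERROR", "INFO"
        "Body": message,                       # the original log message
        "Resource": {"service.name": source},  # the entity that produced it
        "Attributes": {"log.source.format": "plaintext"},
    }

record = to_otel_log_record("ERROR payment declined for order 12345", "shipping")
```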
So, touching on the collector a little bit, I mentioned this before. The collector really is an easy way of getting data in, and it's vendor agnostic: you receive, process, and export data. It has decent default configurations, though you might want to tweak them. It supports popular protocols, and it's highly performant, even under higher loads.

It keeps running, and it itself is observable, so you can see what's going on with the collector. Since it's a single code base, it can be deployed as an agent or as a collector, and it supports all three data classes. That allows us to offload from the application, with no changes to the application, things like compression and encryption, as well as allowing us to have that common vocabulary.
B
The
semantics
coming
into
play
here
and
since
it's
language,
agnostic
changes
are
easy,
and
so
we
can
simply
put
the
implementation
into
there
and
then
deal
with
it
in
that
there
it's
vendor
agnostic,
it's
extensible
and
you
can
find
functionality
for
both
sides.
By
looking
at
the
collector,
you
can
see
things
that
are
in
the
core,
as
well
as
things
that
are
in
the
community.
B
This
makes
a
great.
We
don't
want
data
lock-in
and
so
by
making
use
of
the
collector.
Our
traces
are
not
going
to
be
locked
in
if
our
traces
aren't
locked
in
the
rest
of
our
data
is
also
not
locked
in
so
the
architecture
comes
into
play
that
goes
in
the
receiver.
Side
can
be
wire
formats,
it
could
be
jaeger,
it
could
be
otlp,
it
can
come
in
from
a
pre-atheist
viewpoint
and
it
can
be
exported.
Similarly,
you
can
choose
how
you
get
it
out
here.
Inside
of
that,
you
can
have
multiple
classes
of
processing.
B
You
can
do
a
batch
processing.
You
can
tell
it
to
retry.
If
you
didn't
see
the
data,
you
can
go
into
streaming
models
here
and
you
can
build
as
many
processes
as
you
want,
so
you
can
actually
deal
with
multiple
functionality,
so
otlp
could
go
through
and
go
out
as
jaeger
or
it
could
come
in
and
go
through
a
totally
different
thing
and
go
out
as
prometheus
and
otlp.
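The collector's own pipelines are configured in its configuration file rather than in application code, but as a rough SDK-level analogue of the same fan-out idea, you can attach several span processors, each with its own exporter; the endpoint and exporter choices below are assumptions for illustration.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
# Every finished span goes to both processors: one batch-exports it over
# OTLP to a collector, the other prints it locally for debugging.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```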
It's your decision how you're exporting your data through this. Now, the nice thing is that, while we have lots of libraries that are coming into play here, we also have the ability to start automatically, and this is the Java example. For instance, it instruments known libraries with no code changes, it's a runtime environment, it adheres to those semantic conventions that I mentioned, it can easily be configured, and it can coexist if you have something already instrumented. There is a warning that does need to come out here.
So keep that in mind, but starting with Java: go grab the Java environment, drop it into play, and you automatically start getting the trace information that OpenTelemetry can give you. There will be additional functionality coming out here. The rest of the client libraries are moving to beta. We have a number of them that are rock solid; the tracing environments are pretty much rock solid because they came out of existing production-quality functionality. We're looking at auto-tracing functionality for the rest of the languages that are capable of that.
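The example on the slide is the Java agent; as a comparable illustration of the same idea in Python (an assumption on my part, not something shown in the talk), a known library can be instrumented without touching the calling code, provided the separately published instrumentation package is installed.

```python
import requests
from opentelemetry.instrumentation.requests import RequestsInstrumentor

# Patch the requests library once at startup; every outgoing HTTP call then
# produces a client span carrying the HTTP semantic-convention attributes.
RequestsInstrumentor().instrument()

requests.get("http://localhost:8080/cart")  # traced with no changes to this call
```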
B
We
want
to
add
initial
log
support
as
soon
as
we
can.
I
don't
know
if
we're
going
to
make
this
year,
but
we're
pushing
on
getting
the
log
support
in
here.
We
need
all
three
of
the
classes
of
data
to
go
in
here
and
then
we
are
working
on
in
documentation
improvements
by
the
way
we
just
pushed
a
whole
new
series
of
documentation
improvements.
So why are you even doing this? OK, we talked about the problems you're trying to solve, and I want to bring this back a little bit here. Imagine you're getting paged for an issue, and when that page comes in you go through a series of steps. What questions do you need to ask yourself about what's going on? How do you filter out the noise so that you make sure the data you're looking at actually applies to the situation?

Then, react rapidly to things that are too slow, keeping in mind how we determine impact so that we can respond to and resolve issues as fast as possible. So you may not want to trace everything, you may not want to span everything, but you may want to end up with what are called the necessary services showing up here. So we're going back to: why do you want to trace these things? Are you tracing for user happiness? Are you tracing to determine the underlying cause?
B
Are
you
tracing
to
make
sure
communications
go
wrong?
It's
the
easiest
place
to
start
is
with
the
service
boundaries,
not
the
services
themselves,
but
the
endpoints
between
service
calls
as
well
as
calls
to
third
parties.
These
inferred
services
that
are
calling,
so
you
may
not
be
able
to
instrument
your
database,
but
the
open
telemetry
can
see
that
you
called
the
database
and
measured
that
functionality.
Keep that in mind when you're looking at what's going on underneath here, so that you don't go into this overload condition where you spend all your time trying to break out the noise or to eliminate what's going wrong. And you want the teams that are going to make use of this to do it as close to free as you can get it. So looking at tracing should not be a matter of going to spend three weeks in a class to learn what tracing's about; tracing should be pretty intuitive from that viewpoint and should be as close to being just something you get.
Finally, special interest groups: there's a ton of them out there as well. You can find a list of them on GitHub under OpenTelemetry. And then, finally, feel free to submit a PR; there's a label for good first issues if you want to get in or help out, or if you wanted to join the project as well. And with that, I'd like to thank you for your time today, and I will turn it back over. Thanks again.
A
All right, well, thanks so much for a great presentation. Since we have no questions, we can go ahead and wrap. Thank you, everyone, for joining us today. The webinar recording and slides will be online later today, and we're looking forward to seeing you at a future CNCF webinar. So have a good one and thanks so much. Thank you.