From YouTube: Fundamentals of OpenTelemetry
A: And welcome, everyone. I'd like to thank everyone for joining us. This is today's CNCF webinar, Fundamentals of OpenTelemetry. I'm Libby Schultz; I'll be moderating today's webinar, and we'd like to welcome our presenter, Ted Young, Director of Developer Education at Lightstep. A few housekeeping items before we get started: during the webinar you are not able to talk as an attendee. There's a Q&A box at the bottom of your screen, so feel free to drop your questions there; we'll get to as many as we can at the end.
B
Hey
glad
to
meet
you
all
and
hi
john
and
hi
jen
thanks
for
responding
cool,
you
just
heard
about
it
at
aws,
re
invent
yeah,
that's
we're
doing
a
lot
of
work
with
amazon,
actually
on
lambda
and
other
things
to
make
that
work
well
with
open
telemetry.
So
that's
exciting
times.
B
So
this
is
a
talk.
This
is,
we've
only
got
an
hour,
so
I'm
gonna
move
through
it
very
fast.
B
I
believe
you
will
get
a
recording
of
this
afterwards,
so
you
can
watch
it
again
and
I
have
some
code
that
you
can
walk
away
with
and
let
me
give
you
some
of
that
code,
just
real
quick,
so
you
can
find
me
at
github.com
tedsuo
and
if
you
go
here,
I
have
several
pinned
repositories
that
are
hotel,
go
basics,
python,
basics,
node,
basics
and
java
basics,
and
I
try
to
keep
these
up
to
date.
I
got
to
update
the
go
one,
but
these
are
up
to
date.
B
Repos
we'll
walk
through
this
code
a
little
bit
later,
so
this
is
a
great
place
just
to
look
at
all
the
basic
open
telemetry
patterns
and
you
can
kind
of
copy
paste
out
of
them.
So
it's
maybe
a
handy
resource
to
walk
away
with.
So
I
just
want
to
throw
that
link
out
right
now,
but
let's
get
into
the
basics,
so
we're
just
going
to
cover
open
telemetry
from
top
to
bottom.
B
Give
you
the
basic,
architectural
understanding
you
need
to
understand
what
the
project
is
and
what
it
does
and
then
we're
going
to
look
at
some
of
this
code
in
we
can
look
at
node,
java
or
python.
Just
do
a
shout
out
into
the
chat
as
to
which
language
you'd
prefer
to
look
at
and
we'll
look
at
the
one
that
gets
the
most
votes.
B
So
let's
get
started.
So
what
is
open
telemetry
the
best
place
to
start
here
is
with
the
name.
Telemetry
observability
is
is
a
big
space,
but
telemetry
is
an
important
part
of
it.
The
term
telemetry
means
generating
and
sending
signals
from
some
kind
of
a
remote
device
to
something
that
can
analyze
that
remote
device,
but
the
emphasis
is
collecting
and
sending.
B
So
that's
the
portion
of
this
observability
world
that
open
telemetry
inhabits
it's
the
generation
and
sending
of
the
data,
and
we
actually
see
this
as
a
form
of
standardization.
B
We
see
us
as
actually
developing
sort
of
a
language
that
a
shared
language
that
everyone
can
speak
about,
describing
what
their
services
are
doing
and
doing
it
from
the
perspective
of
distributed
transactions,
so
lots
of
services
connected
together.
So
that's
the
basic
idea
behind
the
project
is
to
standardize
how
we
describe
our
software
describes
itself.
B
There
are
some
big
architectural
pieces
that
are
worth
going
over
when
you
land
in
open
telemetry.
There
are
three
kinds
of
components
that
you're
going
to
interact
with,
so
let's
just
go
through
those
now
so
imagine
this
is
your
service
right
here
whoop,
these
slides,
keep
doing
that
use
this
mouse.
So
imagine
this
this
green
thing
here
is
your
service.
When
you
wanna
install
open
telemetry
in
your
service,
you
have
to
install
the
open,
telemetry
client.
B
We
call
it
an
sdk
because
it's
a
framework
it
contains
lots
of
hooks
and
plug-ins
so
that
you
can
customize
it
to
work
with
a
variety
of
different
back-ends
work,
exporting
different
formats
using
different
sampling,
algorithms,
there's
a
lot
of
flexibility
when
it
comes
to
how
these
systems
generate
data
and
how
other
systems
want
to
consume
that
data.
So
the
sdk
is
this
flexible
framework
that
allows
you
to
configure
that
and
set
all
that
up.
B
However,
you
only
want
to
be
touching
this
sdk
when
you're
doing
your
setup
and
configuration,
because
this
sdk
contains
all
of
the
dependencies
needed
to
actually
run
this
telemetry
operation
within
your
program.
Within
your
application
code
and
within
shared
libraries,
you
don't
want
to
directly
reference
the
sdk.
B
Instead,
you
want
to
reference
the
open,
telemetry
api,
so
the
api
is
an
instrumentation
api.
This
is
what
you
use
to
actually
talk
to
open
telemetry.
So
in
your
program,
you'll
have
your
frameworks,
your
web
services,
your
network,
clients,
http
sql
whatnot.
All
of
these
shared
libraries
that
sort
of
manage
your
code
and
manage
your
network
connections
all
of
those
get
connected
into
open,
telemetry
through
plugins
or
through
native
instrumentation.
B
That
talks
to
this
api
and
the
reason
why
we
have
this
separation
is
just
clean
separation
of
concerns.
We
don't
want
all
of
these
packages
having
to
take
a
dependency
on
all
of
the
sdk's
dependencies.
You
get
into
sort
of
transitive
dependency
hell
when
you
do
that.
So
instead
we
have
a
clean
separation
of
concerns.
B
If
you
just
use
the
api
package,
all
it
contains
are
the
interfaces
and
constants
needed
to
talk
to
open
telemetry,
and
that
package
has
no
dependencies,
or
at
least
very
few,
so
you
can
reliably
take
on
a
dependency
to
the
open,
telemetry
api
and
have
faith
that
that's
not
going
to
create
dependency
conflicts
for
you
or
create
composition
issues
when
you
take
multiple
libraries
that
are
all
instrumented
with
open,
telemetry
and
put
them
together.
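To make that separation concrete, here's a minimal sketch of library code that depends only on the API package. It assumes the Node @opentelemetry/api package, and the library name is just a placeholder; if the application never installs an SDK, the API hands back no-ops and the code still runs.

```js
// Library code: depends only on the API package, never the SDK.
const { trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('my-shared-library');

function doWork() {
  // With an SDK installed by the application this records a real span;
  // without one, startSpan returns a no-op span and this still runs fine.
  const span = tracer.startSpan('do-work');
  try {
    // ... actual work ...
  } finally {
    span.end();
  }
}
```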
B: The next architectural component is where you send the data. These SDKs have plug-ins called exporters that can export in a variety of formats, but what we suggest is sending that data to another OpenTelemetry component called a collector.
B
So
what
the
collector
is
is
it's
sort
of
a
data
pipelining
tool,
it's
a
separate
service,
it's
written
in,
go.
You
can
run
it
as
a
sidecar.
You
can
run
it
in
your
local
network.
You
can
run
them
in
a
tiered
fashion,
but
the
whole
point
of
the
collector
is
that's
where
you
want
to
put
a
lot
of
your
configuration.
B: That's our native format, OTLP, and it is, I think, one of the only formats in the world that supports all of these different signals: within the same format you can receive traces, metrics, and eventually logs. But of course, pre-existing systems don't necessarily consume OTLP (that's changing), so you can also export in Zipkin, Prometheus, AWS X-Ray, etc., etc.
B
Lots
and
lots
of
those
exporter
plugins
are
put
into
the
collector.
Likewise,
the
collector
can
receive
data
in
all
these
different
formats
and
then
translate
it
into
a
different
format.
On
the
way
out.
You
can
t
your
data
off
to
different
endpoints,
so
you
could
export
your
traces
in
zipkin
format
to
one
back
end
for
tracing
and
then
export
your
metrics
and
prometheus
format
to
another
back
end.
So
all
of
that
data
pipelining
work,
you
do
in
the
collector
and
my
suggestion
is
leaving
the
sdks
in
as
default
a
format
as
possible.
B
Speaking
native
otlp
not
doing
a
lot
of
configuration
there
and
instead
moving
all
the
configuration
out
to
a
collector
and
that's
just
nice
for
operations,
because
it
means
when
you
want
to
make
changes
to
how
your
telemetry
pipeline
works.
You're
not
having
to
go
in
and
reconfigure
recompile
and
redeploy
your
application
services.
You
just
go
in
and
change
the
configurations
and
the
deployments
of
your
collectors.
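As a rough sketch of what that collector-side configuration can look like (illustrative only, not a drop-in file; component names and options vary across collector builds), here is a pipeline that receives OTLP from the SDKs and tees traces and metrics off to different backends:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  zipkin:
    endpoint: "http://zipkin-backend:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```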
B
So
from
an
operational
standpoint,
it's
a
little
bit
nicer
to
be
able
to
to
change
how
this
this
observability
pipeline
works
without
having
to
roll
your
application
services.
So
that's
the
primary
use
for
the
collector.
B
That's
outside
the
bounds
of
the
project
and
that's
why
we
call
it
open
telemetry
instead
of
open
kitchen
sink
observability
and
the
reason
for
that
is
we
we
don't
see
analysis
and
storage
or
any
of
those
things
as
things
that
are
really
standardizable
right
now.
B
What
is
standardizable
is
describing
what
systems
are
doing
and
being
able
to
transmit
that
system
that
that
data
to
somewhere,
where
you
can
analyze
it,
but
the
actual
analysis
you're
doing
that's,
not
really
standard.
That's
actually
where
we
want
to
see
a
sort
of
green
field
of
observability
ideation
happening
with
open
telemetry
around
it's
now
much
easier
to
build
some
kind
of
back
end
or
analysis
tool,
because
you
don't
have
to
build
this
whole
ecosystem
of
instrumentation
and
other
stuff
right.
B
You
get
all
of
that
stuff
for
free
now,
and
your
only
question
is:
what
am
I
going
to
do
with
that
data
and
so
we're
hoping
to
see
a
lot
more
interesting
tools
show
up
that
analyze.
B: There's one final architectural piece that's worth knowing about, which is how the project and community is structured to keep all this code working, self-similar across all the different languages, and compatible with each other. We operate through a specification process: OpenTelemetry has a cross-language specification.
B
So
everyone
debates
the
changes
in
the
otps.
We
do
experiments
and
prototypes.
If
we
think
it's
a
good
idea,
it
then
gets
baked
into
the
spec
once
it's
in
the
spec,
then
the
maintainers
of
different
language
implementations
implement
it.
So
that's
how,
as
a
group,
we
come
together
across
language
communities
and
hash
out
something
that
we
think
is
going
to
work
for
everyone,
and
then
we
all
go
back
and
implement
that
thing.
Once
we've
decided
what
we're
going
to
do,
and
so
that's
how
we
keep
all
this
code
kind
of
rolling.
B: I want to move on now to some core concepts. We've talked about the architectural components that you'll encounter, but there are a couple of core concepts about how OpenTelemetry thinks about the world, the sort of core pain we're trying to solve with distributed tracing. If you can understand these core concepts, everything else about OpenTelemetry will make a lot of sense.
B: The best way to do that is to look at a distributed transaction. Transactions are the way that OpenTelemetry looks at the world: we look at the world as services, which we call resources, that are connected to each other with transactions, which we call traces. As one example of how everything is a distributed system, let's look at something really simple: you have a mobile or web client, and all it does is upload a photo along with a caption. So you have an endpoint that just uploads a photo and a caption.
B
Well,
what
would
be
involved
in
that
realistically
in
a
production
service?
Well,
you'd
have
your
client
right,
and
that
would
talk
to
a
server,
but
in
reality
we
know
it's
not
just
one
server.
It's
got
to
talk
to
the
most
basic
setup
I
can
think
of
realistically
would
be
talking
to
a
reverse
proxy.
That
then
calls
out
to
some
kind
of
authentication
service.
B
If
you
come
back
authenticated,
it
say
uploads
the
photo
to
some
local
scratch
disk
then
calls
an
application
service
with
the
location
of
that
uploaded
photo
and
the
caption
the
app
service
makes
thumbnails
or
otherwise
chops
the
the
photo
up
and
uploads
it
to
permanent
cloud
storage
like
let's
say,
s3
and
then
once
it
does,
that
it
takes
the
urls
for
the
photos
it
uploaded,
plus
the
caption
and
it
stores
them
in
a
database
which
is
represented
by
a
data
service
that
sits
in
front
of
a
combination
of
say,
my
sequel
and
reddish
for
caching.
B
So
this
is
about
as
simple
as
I
see
it.
Getting
in
the
real
world
like
this
is
a
sort
of
classic
lamp
stack
architecture
which
is
kind
of
my
point.
Every
system
is
a
distributed
system.
You
don't
have
to
be
building
some
kind
of
big,
crazy
database
to
say,
you're,
working
on
a
distributed
system
or
to
have
issues
with
trying
to
collect
data
and
observe
your
system.
B
So
when
we
think
about
this
from
the
perspective
of
distributed
tracing,
we
like
to
look
at
a
diagram
like
this.
That
sort
of
is
a
service
diagram
showing
how
everything
is
connected,
but
the
other
way
we
try
to
look
at
this
is
from
the
perspective
of
which
operations
talk
to
which
other
operations
and
how
long
did
they
take.
B
So
we
want
to
take
this
transaction
and
we
want
to
turn
it
into
a
graph
describing
how
all
of
these
operations
related
to
each
other
in
time
and
also
the
causal
connection
between
these
operations,
and
so
to
do
that
we
tend
to
make
these
trace
graphs
that
look
like
this.
So
you'll
find
some
version
of
this
in
almost
any
tracing
program
that
you
encounter
and
the
way
this
works
is.
These
are
color
coded
the
same
as
the
services
here,
so
this
is
describing
the
same
transaction.
B
Each
operation
is
represented
here
as
what
we
call
a
span
like
it's
a
span
of
time,
so
you
have.
The
operation
start
the
length
of
the
operation
and
then,
when
the
operation
ends,
and
then
you
have
these
arrows
that
represent
network
calls.
So
here
the
client
made
a
network
call
to
the
reverse
proxy,
which
then
called
the
auth
server.
Then
it
uploaded
something
locally.
This
is
local,
so
it's
still
blue
and
then
it
talked
to
an
app
server
that
talked
to
you
know.
S3
talk
to
data
service,
yada
yada
finally
returns.
B
So
this
is
a
graph
of
these
operations,
so
you
can
see
how
they
were
connected,
which
ones
were
happening
sequentially
with
each
other
and
how
much
time
was
spent
in
each,
and
this
is
a
really
useful
way
to
to
get
a
lot
of
information
about
what
your
system
is
doing
at
a
glance.
B
Latency
can
be
a
tricky
thing
to
figure
out
if
your
data
is
not
in
a
graph
because,
for
example,
this
client
span
takes
the
most
amount
of
time
in
this
transaction
right.
It's
a
synchronous
transaction,
it's
waiting
for
everything
to
be
done,
but
that
doesn't
mean
if
you
wanted
to
to
optimize
this
transaction.
B
You
would
come
to
the
client
and
try
to
make
that
thing
faster,
because,
of
course,
this
client
is
spending
most
of
its
time,
just
waiting
for
other
servers
to
actually
do
something,
and
so
what
you
want
to
actually
do
is
be
able
to
find
what
we
call
the
critical
path,
all
of
the
components
that
contributed
to
the
overall
latency
of
the
transaction-
and
this
is
helpful
because
it
points
you
to
where
you
should
actually
be
spending
your
effort.
B
If
you
want
to
optimize
something
so,
for
example,
here
we
can
see
both
where
work
is
happening.
The
gray
bits
are
where
computers
are
simply
waiting
for
other
operations
to
complete,
and
we
can
also
see
how
long
things
took.
So
in
this
example,
let's
say
you
were
looking
at
this
data
service
and
you
were
thinking.
B
I
want
to
optimize
this
data
service.
If
you
looked
at
the
transaction
from
this
perspective,
you
could
quickly
see
that
optimizing.
This
data
service
wouldn't
be
very
helpful
because
it
only
contributes
a
tiny
bit
of
the
overall
latency.
Really,
if
you
wanted
to
optimize
this,
the
only
place
you
could
realistically
go
would
be
to
where
we're
uploading.
These
photos
right.
The
time
spent,
uploading
and
processing
the
photos
dominates
this
transaction.
B
So
that's
really
helpful.
When
you're
trying
to
you,
can't
optimize
everything
in
your
system
and
when
you're
trying
to
figure
out
how
things
are
slow
that
can
honestly
be
harder
to
sort
out
than
why
are
things
broken
because
when
things
are
broken,
you
at
least
have
an
error
that
starts
to
sort
of
breadcrumb
for
you
to
look
at
when
things
are
simply
slow,
but
not
failing.
You
don't
really
have
that
bread
crumb
to
start
with.
B
So
if
all
of
your
data
describing
your
system
and
your
transactions
is
put
into
a
graph
like
this
and
organized
with
all
that
data
attached
to
it
properly,
then
not
only
does
it
become
easier
to
see,
you
can
actually
automate
that
analysis
and
write
heuristics
that
find
this
information
out
for
you.
So
that's
a
huge
time.
Saver
and
really.
The
point
I
want
to
get
across
with
distributed
tracing
is
not
that
you're
necessarily
going
to
be
doing
something
different
with
it
than
you
did
with
logs
and
metrics.
It's
just
that.
B
It
saves
you
a
lot
of
time
and
rather
than
spending
all
of
your
time,
looking
for
this
data
and
collecting
it
by
doing
searches
and
filters
in
your
logs
and
finding
one
id
and
then
finding
another
one
and
kind
of
patching
it
together,
you
can
save
a
lot
of
that
time
using
a
distributed
tracing
tool.
So
that's
going
to
be
a
theme.
I
I
kind
of
harp
on
through
the
rest
of
this
talk.
B
So
it
may
be
the
case
that
the
alert
you
got
was
a
500
on
the
client
saying
that
there
was
a
problem,
but
you
know
when
you
look
at
that
client
500
error.
If
that's
the
error
message
you're,
starting
with
how
do
you
know
what?
What
caused
the
error?
You
need
to
quickly
find
the
root
of
that
error
in
the
transaction.
You
know:
did
it
come
from
the
auth
server?
Was
it
some
problem
happening
in
s3?
B
Was
it
some
problem
with
your
data
service
like
it
could
be
any
of
these
things
and
if
you've
got
that
error,
that's
great,
but
if
you're
starting
from
somewhere
else
like
you've
noticed
something
else
was
broken.
B
You
want
to
be
able
to
track
it
back
very
quickly
and
again,
having
all
of
this
data
in
a
graph
allows
you
to
do
that
automatically,
rather
than
having
to
do
a
bunch
of
filtering
and
searching
just
to
find
that
data,
and
the
last
thing
we
want
to
look
at
in
distributed,
traces
are
logs,
we
call
logs
trace
events
in
open,
telemetry,
we'll
also
be
adding
a
logging
traditional
logging
facility
in
the
future
to
deal
with
traditional
logs,
but
really
trace.
Events
are
where
it's
at.
B
This
is
just
regular
logging,
but
these
trace
events
are
again
they're
put
into
this
graph,
so
you
can
think
of
each
one
of
these
things
as
a
log.
That's
just
you
know,
an
event
with
a
message
and
a
timestamp
and
some
structured
data
attached
to
it,
just
like
structured
logging.
B
Only
when
you
make
that
log,
it's
not
just
happening
out
there,
just
in
the
void
that
log
is
happening
within
an
operation
which
is
itself
within
a
transaction,
so
you're
situating
these
logs
in
the
kind
of
context
you
need
to
be
able
to
find
all
the
other
relevant
logs
once
you've
found
one
log
that
you're
interested
in
this
was
ultimately
the
thing
that
had
me
pulling
my
hair
out
traditionally
as
a
developer,
which
is
you
have
all
the
logs,
but
as
your
system
scales
up
finding
and
filtering
down
to
just
the
logs,
you
want
to
be
able
to
analyze
a
particular
transaction
or
event
or
problem
just
starts
to
take
more
and
more
work
as
your
system
grows
and
grows.
B
You
have
more
and
more
data
to
to
search
through
there's,
more
and
more
services
that
were
not
touched
than
touched
right.
If
you
have
one
app
server
and
one
one
data
server,
you
can
grip
around
in
that
and
find
the
logs
that
you
want.
But
when
you
have
50
app
servers
and
50
data
servers
which
server
do
you
even
look
at
and
you
can
put
all
of
these
log
data
into
a
database
and
index
it.
But
what
would
be
the
index
that
you
would
use
to
find
all
of
the
logs
in
that
transaction?
B
You
would
need
to
have
some
kind
of
transaction
id
that
every
log
had
attached
to
it
so
that
you
could
search
by
transaction
id,
and
that
would
give
you
all
the
logs
in
the
transaction,
and
that
is
what
ultimately,
you
get
out
of
distributed,
tracing
a
way
to
pass
around
these
transaction
ids
and
operation
ids
and
attach
them
to
every
event
so
that
you
can
index
this
data
properly,
and
that
is
the
core
thing
behind
distributed
tracing.
B
So
if
you
can
get
your
head
wrapped
around
context,
propagation
everything
that's
going
on
in
distributed,
tracing
or
open
telemetry
will
make
lots
and
lots
of
sense
so
just
to
explain
context
propagation
from
open,
telemetry's
point
of
view.
Imagine
you've
got
two
servers
they're
talking
to
each
other,
so
you've
got
some
chain
of
operations
and
service
a
and
then
a
network
call
to
service
b
and
then
some
more
operations,
and
you
want
to
be
able
to
connect
all
these
things
up
with
some
ids
well.
B
In
order
to
do
that,
you're
going
to
have
to
pass
those
ids
around
and
there's
two
parts
of
that
one
is
tracking
the
work
as
it
flows
within
a
process.
The
flow
of
execution
from
operation
to
operation.
You
need
to
be
able
to
follow
your
execution
path
in
your
code
automatically
and
open
telemetry
provides
that
via
what
we
refer
to
as
a
context
object
if
you're
a
go
programmer.
B
This
idea
is
probably
already
very
familiar
to
you
because
context
is
a
first-class
citizen
and
go
it's
and
it's
passed
around
manually.
But
this
is
a
facility
you
can
build
in
most
languages
these
days,
other
languages
offer
things
like
thread
locals
or
other
places.
In
the
background
where
you
can
store
this
data,
so
you
don't
have
to
pass
it
around
as
a
parameter,
so
open
telemetry
either
uses
the
native
context
object
in
a
language.
If
there
is
one
and
when
there
isn't
one,
we
provide
our
own
implementation
of
this
long
run.
B
We
would
really
encourage
every
language
runtime
to
come
up
with
this
concept
and
add
it
to
it,
so
that
we're
not
doing
this
in
userland,
but
here
we
are
so
that
gets
you
your
data
being
passed
around
in
your
program.
There's
just
this
context,
object!
That's
like
a
bag.
You
can
put
things
in
it.
You
can
pull
things
out
of
it
and
it's
just
always
available
to
your
code
as
you
go.
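As a small illustration of that bag, here's how it looks through the @opentelemetry/api context primitives in Node (a sketch; the key name is just a placeholder):

```js
const { context, createContextKey } = require('@opentelemetry/api');

const key = createContextKey('example-key');

// Contexts are immutable: setValue returns a new context.
const ctx = context.active().setValue(key, 'some value');

// Run a function with that context active; inside the callback,
// context.active() returns ctx, so the value follows the execution path.
context.with(ctx, () => {
  console.log(context.active().getValue(key)); // 'some value'
});
```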
B
When
you
go
from
service
a
to
service
b,
though,
you've
got
to
propagate
that
context.
That
context
has
to
somehow
jump
across
the
wire
and
we
refer
to
that
process
as
propagation.
So
you
take
this
context,
object
and
you
need
to
serialize
it
into
your
request
and
then
deserialize.
It
on
the
other
side,
and
we
refer
to
that
process
as
injection,
so
you
inject
a
context
into
a
request
which
means,
taking
this
context
objects
and
in
http,
turning
it
into
a
set
of
headers
for
other
formats.
It's
just
whatever
metadata
facility.
B
You
have
right,
so
kafka
messages
have
metadata.
You
would
put
it
there
in
http,
it's
headers,
so
we'll
just
focus
on
that
and
then
on
the
downstream
service.
That
service
looks
to
extract
that
data
and
once
it,
if
it
finds
those
headers,
it
will
extract
them
and
make
a
new
context
object.
That
then
continues
to
follow
the
flow
of
your
execution.
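Here's a sketch of what inject and extract look like through the @opentelemetry/api propagation interface (this assumes a propagator has been registered, which the SDK does by default):

```js
const { context, propagation } = require('@opentelemetry/api');

// Sending side: serialize the active context into HTTP headers.
const headers = {};
propagation.inject(context.active(), headers);
// headers now carries e.g. a 'traceparent' entry to send with the request.

// Receiving side: deserialize incoming headers into a new context,
// then continue the flow of execution under that context.
function handleRequest(incomingHeaders) {
  const ctx = propagation.extract(context.active(), incomingHeaders);
  context.with(ctx, () => {
    // Spans created here join the caller's trace.
  });
}
```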
B
So
this
fundamental
subsystem
of
context
propagation
underlies
everything
we
do
in
open,
telemetry
as
an
end
user.
When
you're
interacting
with
the
api
generally
as
an
application
developer,
you
won't
be
directly
interacting
with
the
context
or
propagation
if
you're
in
go
you'll
be
interacting
with
context
object,
but
for
other
languages.
It's
something
happening
in
the
background,
but
it's
important
to
understand
that
this
is
what's
going
on,
because
you
can
have
problems.
B
For
example,
if
service
a
and
service
b
are
misconfigured,
so
service
a
is
injecting
one
set
of
headers,
but
service
b
is
looking
for
another
set
of
headers.
Then
you'll
have
a
broken
trace
right.
They
won't
connect
up.
So
when
you
counter
those
issues,
understanding
that
this
is
what's
happening
under
the
hood
can
help
you
kind
of
debug
a
reason
about
your
situation,
but
luckily
managing
all
of
this
injection
and
extraction
and
context
propagation
is
handled
by
open
telemetry
through
library
and
framework
plugins.
B
So
as
long
as
you've
installed
plugins
for
the
parts
of
your
code
that
manage
your
application.
So,
like
your
web
framework,
anything
that
receives
a
network
request
like
a
web
server,
anything
that
sends
a
network
request
like
an
http
client.
B
As
long
as
all
of
those
things
are
instrumented
then
context
propagation
should
be
happening
automatically,
also,
if
you're
using
a
userland
scheduler
like
so
in
python,
if
you're
using
g
event
or
some
kind
of
co-routine
thing
that's
happening
on
top
of
the
language
and
doing
scheduling
there.
That
thing
would
need
an
open,
telemetry
plug-in
as
well
or
native
open
telemetry,
so
that
it
understands
how
to
manage
these
context
objects
as
it's
switching
context,
but
again
as
an
application
developer.
B
This
is
all
set
up
for
you
and
it
just
sort
of
feels
like
magic.
All
you
do
is
say,
hey
give
me
the
current
span,
wherever
that
thing
is,
and
it
just
kind
of
appears
for
you,
but
you
know,
like
all
magic,
sometimes
that
magic
can
go
away.
You
gotta
understand,
what's
going
on,
so
that's
the
basics
and
just
to
make
it
a
little
more
concrete.
B
These
headers
are
themselves
the
things
we're
trying
to
standardize.
So
let's
look
at
these
standard
headers
that
we're
trying
to
create,
because
I
think
that
makes
it
super
clear
how
basic
this
thing
is.
So
we've
been
working
with
the
w3c
to
create
official
tracing
headers
for
http
there's
one
set
of
them.
That's
well
underway
to
becoming
a
standard
at
this
point
and
that's
called
trace
context.
B
So
these
are
two
headers
trace
parent
and
trace
state
trace
parent.
Sorry,
my
mouse
is
a
little
funky
here.
Traceparent
here
has
two
ids
the
trace
id:
that's
that
overall
transaction
id
that's
attached
to
every
event,
and
then
it
has
a
span
id
and
this
represents
the
parent
operation.
B
So
when
you
make
your
own
operation,
your
own
span,
you
get
a
new
id
and
you
become
a
child
of
this
parent
operation,
and
so
this
is
how
we
form
a
graph
out
of
your
data.
So
you've
got
the
id
of
the
overall
graph
and
then
the
span
id
is
like
the
id
of
the
nodes
in
the
graph,
and
then
these
nodes
are
connected
up
in
what
we
call
a
parent-child
relationship.
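For reference, a traceparent header in the W3C format is four dash-separated fields: version, trace ID, parent span ID, and trace flags. A sample value (the IDs are just example hex) looks like:

```
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```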
B
So
it's
just
an
acyclic
graph,
nothing
crazy
happening
there,
there's
some
other
information
like
a
sampling
flag.
This
trace
state
thing
is
internal
bits.
You
don't
need
to
worry
about
any
of
that.
The
other
set
of
headers
that
will
become
interesting
potentially
in
the
future,
are
what
we
refer
to
as
baggage
and
baggage
is
just
taking
this
context
propagation
system
and
giving
it
to
you
to
do
whatever
you
would
like
to
do
with
it.
B
So
we're
using
it
to
do
this
tracing
stuff,
but
with
baggage
you
can
just
put
any
key
value
pair
into
the
baggage
and
then
pull
it
off
downstream.
Obviously,
there's
some
overhead
with
doing
this,
but
again,
if
you're
a
go
programmer,
think
about
taking
your
go
contacts,
and
now
you
have
a
distributed
context.
B: You can grab it once upstream, add the project ID to your baggage, and then it will propagate down and become available downstream by just pulling it off of baggage. That can be useful for a variety of reasons. The main thing we want to do is not just find these individual transactions, right? You don't need just transaction ID and span ID.
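A sketch of that round trip with the @opentelemetry/api baggage helpers (the project.id key is just an illustrative name):

```js
const { context, propagation } = require('@opentelemetry/api');

// Upstream: put a key/value pair into the baggage once.
const baggage = propagation.createBaggage({
  'project.id': { value: '123' },
});
const ctx = propagation.setBaggage(context.active(), baggage);

context.with(ctx, () => {
  // Outgoing calls made here carry the baggage across the wire.
});

// Downstream: pull it back off of the active context.
function readProjectId() {
  const bag = propagation.getBaggage(context.active());
  const entry = bag && bag.getEntry('project.id');
  return entry ? entry.value : undefined;
}
```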
B
We
also
want
to
correlate
these
transactions
in
the
aggregate,
so
by
default,
open
telemetry
provides
attributes
onto
every
of
these
every
one
of
these
operations
that
are
standardized
so
all
of
the
standard
stuff,
like
http
network,
calls
message
queues
the
resources
you're
interacting
with
like
host
name,
kubernetes,
pod,
etc,
etc,
etc.
We
have
standardized
key
value
pairs
that
represent
all
of
this
data
and
we
call
those
semantic
conventions
and
all
the
instrumentation
you
use
adds
all
of
that
for
you,
so
you're
automatically
instrumenting
this
stuff.
So
you
can
tell
when
there's
errors
happening.
B
Is
this
error?
Does
this
era
correlate
with
a
particular
route?
Does
it
correlate
with
a
particular
host
or
service
type,
and
that's
where
adding
your
own
application
ids
into
as
attributes
becomes
helpful?
So
if
you
added
something
like
a
project
id
imagine,
you
are
seeing
an
error,
and
then
your
system
told
you
hey.
This
error
actually
correlates
very
highly
with
just
a
small
handful
of
project
ids.
In
other
words,
this
error
is
not
happening
everywhere.
It's
just
happening
in
these
five
projects.
B
That
would
probably
tell
you
a
lot
as
a
developer
as
far
as
where
you
need
to
look
or
would
at
least
certainly
rule
out
a
lot
of
potential
issues,
knowing
some
information
like
that,
so
having
this
data
structured
enough
in
a
graph
that
that
kind
of
graph
analysis
can
go
on
and
you
can
actually
get
real
correlations
handed
to
you
by
an
automated
process.
That's
a
huge
time
saver,
it's
a
huge
time
saver,
but
you
can't
do
that
stuff
unless
you
have
this
underlying
mechanism
to
pass
this
data
around
okay.
B
That
is
the
overview
that
is
conceptually
all
you
need
to
know
about
open
telemetry.
At
this
point,
I'm
going
to
switch
over
to
just
looking
at
some
code.
I
did
see
a
request
for
go.
Unfortunately,
my
go
code's
a
little
out
of
date,
but
the
go
code's
pretty
obvious
compared
to
node
and
everything
else.
So
let's
have
a
look
at
some
node
code.
B
So
if
I
go
here
into
hotel,
node
basics,
you
can
see
it's
fairly.
B: It's fairly, well, perhaps voluminously, commented code, but that's just to make it all super easy to copy-paste out of. Here we're just looking at a basic Express server that has one route, called hello, that serves up "hello world," and then we've got a client here that just makes a loop of like 200 requests, and then we install OpenTelemetry into this. There's some copy-paste code here for getting OpenTelemetry set up in Node. It's a little funky, because right now you have to load OpenTelemetry before you require anything else.
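The exact setup packages have shifted over time, so treat this as a rough sketch of the pattern rather than the code in the repo: a tracing file you load first (for example, node -r ./tracing.js app.js) that wires up auto-instrumentation and an OTLP exporter pointed at a collector:

```js
// tracing.js -- must load before anything else is required.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-grpc');

const sdk = new NodeSDK({
  serviceName: 'hello-server',
  // Speak native OTLP to a local collector; configure everything else there.
  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4317' }),
  // Auto-instruments http, express, and friends so context propagates for free.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```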
B
Every
language
has
some
little
issue
with
that
when
it
comes
to
doing
this
kind
of
automated
code
injection.
But
what
you
get
for
your
trouble.
Is
you
get
all
of
these
instrumentation
plugins
installed
for
you
automatically
so,
for
example,
in
this
client
you'll
notice,
there's
no
instrumentation
at
all
we're
just
using
http
get,
but
nevertheless,
you'll
still
get
a
span
out
of
this
connected
to
the
server.
That's
rich
with
information,
because
http
is
being
auto
instrumented
for
you
under
the
hood.
B
Likewise,
on
the
server
we
have
express,
instrumentation,
so
express
automatically
gets
instrumented
for
you
and
what
that
means
is
in
your
express
handler.
You
automatically
have
a
span
available
to
you,
because
this
operation
is
already
being
recorded
and
so
the
best
practice
there
is
just
to
grab
that
span
and
start
adding
data
to
it.
B
So
the
way
you
do
that
is
you
make
a
tracer
when
you
make
a
tracer
in
node,
you
do
it
at
the
package
level,
because
you
want
to
name
it
after
your
package
and
what
this
does
is
it
lets?
You
know
every
piece
of
data
or
every
span,
that's
created
by
this
tracer
gets
an
attribute
associated
with
it
as
to
where
it
came
from.
So
you
can
look
at
your
instrumentation
source
for
every
span.
B
So
that's
a
very
helpful
thing
when
you're
trying
to
debug
okay,
I'm
seeing
the
span
data,
but
where
did
it
come
from
like
what
package
produced
it?
That's
what
you
get
out
of
these
name
tracers!
So
you
make
one
of
those
per
package
super
easy
copy
paste
and
then,
with
that
tracer
you
can
just
say,
get
current
span
anywhere
and
if
you're
within
a
transaction
and
the
spans
and
it's
already
rolling,
then
you'll
get
a
span
back
otherwise
you'll
get
a
no
op
and
then
from
that
you
can
set
attributes.
B
So
setting
an
attribute
are
the
indices
I
was
talking
about
earlier,
so
here
we're
adding
project
id
so
that
we
can
index
this
fan
with
project
id
123
and
look
it
up
later
and
then
you
can
also
add
events.
This
is
just
logging
and
you
can
see
it's
just
nice,
straightforward,
structured
logging.
You
have
your
message
and
then
you
have
a
dictionary
nested
dictionary
of
data
that
you
can
attach.
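Pulled together, the pattern looks roughly like this (a sketch against @opentelemetry/api; app is the Express app from the demo, and the attribute and event names are just examples):

```js
const { trace, context } = require('@opentelemetry/api');

app.get('/hello', (req, res) => {
  // Grab the span the Express instrumentation already started for this request.
  const span = trace.getSpan(context.active());
  if (span) {
    // Attributes are the indices you'll correlate on later.
    span.setAttribute('project.id', '123');
    // Events are structured logs attached to this operation.
    span.addEvent('saying hello', { route: req.path });
  }
  res.send('Hello World');
});
```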
B
So
you
can
attach
attributes
to
spans.
You
can
attach
attributes
to
events,
and
you
can
also
attach
attributes
to
your
service
itself.
So
service
attributes
are
things
like
the
service
name.
The
version
of
the
service
is
a
very
useful
attribute,
because
that
allows
you
to
see
regressions
systems
like
lightstep
and
other
distributed.
Tracing
systems
are
building
the
ability
to
automatically
detect
regressions
across
versions
and
rollouts,
because
you
can
see
hey.
I've
got
two
services
with
the
same
service
name
but
different
service
versions.
B
What
is
the
performance
characteristics
between
version,
1.22
and
1.23?
This
is
enough
data
to
start
automating.
A
lot
of
that
analysis
going
back
into
our
code
here
and
just
you
know
we're
just
doing
a
quick
kind
of
guide
through
this.
So
hopefully
this
this
makes
sense.
Based
on
the
description
I
have
before
sorry,
if
it's
a
little
all
over
the
place,
but
this
basic
pattern
of
getting
the
current
span
setting
attributes
on
it,
adding
events
99,
that's
all
you're
doing
with
tracing
in
open
telemetry.
B
You
don't
really
need
to
be
making
child
spans
or
managing
spans
yourself
and
your
application
code.
Ideally,
the
the
span
management
can
happen
in
some
centralized
place,
like
you
know
your
application
framework
or
whatnot,
but,
of
course,
making
child
spans
and
attaching
them
to
the
current
spans.
B
You
know
a
common
thing
that
you
may
want
to
do
if
you
are
trying
to
carve
out
a
sub
operation
or
you're
trying
to
instrument
like
say
your
own
libraries
or
your
your
own
in-house
application
framework.
So
here's
how
you
make
spams,
so
all
you
do
is
say
tracer
start
span
now.
If
you
already
have
a
current
span
available,
then
that
span
automatically
just
becomes
the
child
of
the
current
span.
So
start
span
will
get
you
a
child
span
of
the
span
that
was
returned
from
here.
B
So
that's
how
you
make
a
span,
but
you
then
want
to
set
this
span
as
active
right,
so
that
this
get
current
span.
Pattern
will
work.
So
once
you
make
a
span,
you
create
a
closure.
This
is
called
with
span
in
most
languages.
Have
a
closure
option
like
this
and
then
within
this
closure?
Child
span
is
now
the
active
span.
So
within
this
closure
here
get
current
span
will
now
return
child
span
instead
of
the
span
you
get
back
up
here.
B: Last but not least, you have to end the span.
B
So
when
you
start
a
span
well
way
back
up
here
when
you
start
a
span,
this
is
telling
you
this
starts
the
timer,
so
you
get
a
time
stamp
for
when
this
fan
started
and
then,
when
a
span
is
ended,
you
get
a
time
stamp
for
the
end,
so
you
know
the
duration
and
then
ending
the
span
triggers
it
to
be
sent,
sent
off
for
collection
and
exported.
B
So
this
is
one
gotcha
with
doing
spam
management
yourself.
You
have
to
end
your
spans,
otherwise
you'll
have
a
leak.
This
is
the
one
stateful
piece
of
open
telemetry
that
you
have
to
manage,
because
you've
got
these
operations
that
have
start
and
ends.
You
do
have
to
make
the
start
and
ends
lined
up,
but
that's
really
it
creating
a
span
is
simply
starting
it
setting
is
active
with
with
span
and
then
ending
the
span
and
those
are
the
basic
patterns.
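Here's that whole lifecycle in one place, sketched against @opentelemetry/api (the tracer and span names are placeholders):

```js
const { trace, context } = require('@opentelemetry/api');

const tracer = trace.getTracer('my-package');

function processUpload() {
  // Starts the timer; any currently active span becomes the parent.
  const childSpan = tracer.startSpan('process-upload');

  // Set it active inside a closure, so getSpan(context.active())
  // returns childSpan for anything called from here.
  context.with(trace.setSpan(context.active(), childSpan), () => {
    try {
      // ... sub-operation work ...
    } finally {
      // Always end the span: this records the end timestamp and
      // hands the span off for collection and export.
      childSpan.end();
    }
  });
}
```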
B
There's
one
extra
thing
I
want
to
point
out,
which
is
how
do
you
mark
a
span
as
an
error?
So
we
mentioned
you
know.
The
duration
of
the
span
will
give
you
the
latency
of
the
events
and
attributes
lets
you
look
it
up,
but
you
still
have
error
budgets.
You
still
don't
want
to
know
if
something's
an
error
and
so
to
do
that.
B
There's
two
portions
one
is
recording
an
exception.
So
if
you
have
an
exception
of
any
kind,
those
are
recorded
as
events,
but
rather
than
saying
add
event.
You
know
we
want
to
just
make
sure
they're
they're
formatted
correctly,
so
we
have
a
helper
function,
called
record
exception,
so
record
exception,
you
pass
it
an
error,
exception
and
it'll
just
add
it
as
an
event
to
your
span,
properly
formatted,
but
that
won't
make
your
span
count
as
an
error,
and
that's
just
because
not
all
exceptions
are
errors
right.
B
You
may
have
an
exception
and
handle
it
and
move
on
and
you
still
want
to
record
the
exception.
But
it's
not
that
the
overall
operation
failed.
Conversely,
you
might
have
an
operation
that
fails,
but
there's
no
exception
associated
with
that
failure.
So
somehow
for
some
other
reason
you
know
it's
going
to
fail
so
to
mark
failure
and
error.
Every
span
has
a
status.
B
So,
if
you
want
to
say
this
operation
has
failed,
you
just
set
the
status
code
to
error.
This
is
cleaned
up
a
little
bit.
We
used
to
have
a
lot
of
different
status
codes
and
we
decided
we
didn't
really
like
having
a
lot
of
status
codes.
Now
we
change
just
one
status
code
called
error,
so
this
has
to
get
bumped
up
a
little
bit,
but
that's
how
this
works
so
again.
Super
basic
pattern.
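Sketched out with @opentelemetry/api, the two pieces look like this (riskyOperation is a placeholder, and it assumes there's an active span to decorate):

```js
const { trace, context, SpanStatusCode } = require('@opentelemetry/api');

const span = trace.getSpan(context.active());
try {
  riskyOperation();
} catch (err) {
  // Record the exception as a properly formatted event...
  span.recordException(err);
  // ...and, separately, mark the whole operation as failed.
  span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
  throw err;
}
```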
B
If
it's
an
error,
you
set
the
status
as
an
error
and
we
think
about
errors
at
the
operational
level,
not
at
the
individual
event
level.
So
you
can
have
exceptions
as
events,
but
it's
operations
that
are
successful
or
failing
and
that's
the
sort
of
more
coarsely
grained
thing
you
want
to
look
at
right
because
you
want
to
be
looking
at
success
or
failure
per
route.
B
That's
literally
all
of
the
surface
area
of
open
telemetry.
You
need
to
know
in
order
to
get
started
with
distributed
tracing
as
an
application
developer,
there's
more
service
area.
If
you
want
to
get
into
writing
your
own
instrumentation
like
inject
extract
and
this
context
stuff
under
the
hood,
you
don't
have
to
worry
about
any
of
that.
B: So we've got about 10 minutes left at this point. I'm happy to open it up to questions at this time, so post your questions now. If we don't get questions, I've got a little more material we can walk through.
B
Oh,
hey,
I
missed
one.
There
we
go.
Is
there
a
best
practice
collection
that
could
help
to
start
using
open
telemetry
correctly?
Yes,
so
I
would
suggest
again
these
repos
that
I've
made
so
this
one
we're
looking
at
here
is
node
basics.
B
So
that's
a
useful
resource,
there's
also
the
core
documentation.
If
you
go
to
open
telemetry,
there's
core,
I
just
do
my
own
horn.
I
have
my
own
documentation,
I'm
maintaining
here.
So
if
you
check
this
out,
we've
got
getting
started
guides
that
cover
this
exact
same
information
here.
So
you
know:
here's
how
you
configure
go,
here's,
how
spans
work
and
go.
I've
got
to
do
another
pass
on
getting
this
up
to
date,
but
this
is
maybe
another
resource
for
you
to
check
out.
B
There's
also,
you
know
open
telemetry.
I
o
obviously
we're
trying
to
add
more
core
documentation
here
as
well,
so
this
is
also
getting
getting
flushed
out
and
then,
if
you
look
in
the
actual
github
repos
for
these
different
implementations,
you'll
see
a
lot
of
more
useful
stuff
there.
So
actually,
if
we
go
to
open
the
pjs,
they've
got
a
lot
of
good
stuff.
B
So
js
has
got
a
lot
of
good
like
getting
started
stuff
going
on
here.
So
so
those
are
the
the
places
I
would
suggest.
Looking
if
you're
looking
to
get
started,
hopefully
that's
the
kind
of
getting
started
stuff
you
were
looking
for.
B
I
do
have
some
other
best
practices
we
can
go
over
with
the
remaining
minutes.
We
don't
have
more
questions,
but
I
see
one
more
question.
The
api
for
open
telemetry
is
somehow
related
to
the
open
tracing
api.
Yes,
good
question,
so
open
telemetry
is
the
the
v
2.0
of
open,
tracing
and
open
census.
So
the
goal
here
is
to
create
a
shared
standard
for
describing
how
systems
work
and
the
problem
was.
We
didn't
have
one
standard.
We
had
two
standards.
B
We
had
two
very,
very
similar
approaches
to
solving
the
exact
same
problem
come
out
at
a
similar
time
frame,
open
tracing
came
out,
and
shortly
after
that,
open
telemetry
came
out
and
it
just
seemed
like
we
would
not
be
able
to
to
achieve
our
goal
of
getting
everyone
together
to
agree
on
some
standard
way
of
describing
this
stuff.
If
there
were
two
competing
projects
and
they
were
so
similar,
there
was
a
sort
of
best
of
both
worlds
opportunity
to
be
made
by
combining
them
together.
B
So
the
open,
telemetry
api
is
very
similar
to
the
open,
tracing
and
open
census
apis,
because,
ultimately,
all
of
these
things
are
based
off
of
a
system
called
dapper
that
was
written
at
google
years
ago,
actually
by
my
friend
ben
siegelman
who's.
Now
the
ceo
of
lightstep,
where
I
work
so
that's
how
far
back
we
go
with
this
stuff,
but
it
basically
started
with
dapper.
B
The
dap
was
written
as
a
paper.
The
first
open
source
implementation
of
that
paper,
I
believe,
was
zipkin
and
then
open,
tracing
and
open
census
came
out
also
based
on
the
dapper
model,
and
now
that's
all
been
combined
into
open
telemetry.
B: There's an OpenTracing bridge in every language, and so you just turn the OpenTracing bridge, or shim, on, and all of your OpenTracing data will interact with your OpenTelemetry data. They mix together, so you can kind of progressively migrate from one to the other.
B
Oh
well
and
there's
another
question:
is
it
backwards
compatible
with
open
senses?
Yes,
that
is
the
plan
that
is
more
on,
like
the
google
side
of
the
fence,
I
come
from
the
open
tracing
side
of
the
fence,
so
I've
not
been
working
on
the
open
census
bridge,
but
I
have
been
told
by
the
open
census
cool
that
is
an
intention.
They
they
plan
to
make
happen
before
before
it
becomes
stable,
so
you'll
have
some
migration
path
or
backwards.
B
B: So you'll have some migration path, backwards compatibility, or some other way to bridge with OpenCensus, is what I understand; I come from the OpenTracing side, and we've already got that bridge working. Cool.
B
We've
got
a
couple
more
minutes,
I'll
be
waiting
for
questions
to
come
back
in,
but
in
the
meantime
we
can
maybe
go
over
some
best
practices
that
what
people
were
asking
for.
B
So
one
best
practice
you
might
hit
is
like
how
many
spans
a
common
thing
I
see
people
do
is
a
span,
looks
like
a
closure,
and
you
know
it
looks
like
you
maybe
want
to
make
a
stack
trace,
so
shouldn't
every
function
be
wrapped
in
a
span.
Wouldn't
that
be
the
best,
and
the
answer
is
no-
that
actually
wouldn't
be
so
great
to
have
a
bunch
of
tiny
little
spans.
B
You
have
several
different
like
scopes
or
sizes
in
your
transaction
right
like
these
overall
transaction
there's
every
process,
that's
a
hop
in
the
transaction
and
then
within
your
process.
There's
code
based
transition
transitions
right,
you'll
start
with
code
executing
from
one
code
base
like
say,
express,
and
then
it
goes
from
express
into
your
application
code.
So
that's
another
code
base
and
then
maybe
from
there
it
goes
into
a
mongodb
client
and
that's
another
code
base,
and
so
you
at
least
want
to
have
one
span
per
code
base.
B
If
that
makes
sense
so
that
you
can
at
least
tell
which
library
is
producing
that
data
or
that
latency
and
then,
underneath
that
you
have
functions,
those
are
too
small
spans
kind
of
fit
somewhere
in
the
middle.
Here
you
want
to
have
as
a
rule
of
thumb,
let's
say
one
to
three
spans
per
library:
interaction
in
your
trace,
but
that's
just
a
rough
rule.
B
The
point
is
you:
don't
want
to
have
a
lot
of
tiny
spans
and
the
reason
for
that
is
one
just
overhead
spans
are
more
expensive
than
events,
so
you
have
more
overhead
if
you
have
lots
of
tiny
spans,
as
you
saw
there's
more
codes
to
juggle
spans
right,
you
have
to
start
and
end
them
and
set
them
as
current.
That
kind
of
messes
up
your
application
code
to
go.
Do
a
lot
of
that
but
rio.
The
important
thing
is
when
you're
indexing
these
things.
B
I
find
it's
generally
better
to
have
lots
of
indices
on
fewer
spans
than
to
have
lots
of
spans
with
few
indices,
because
what
you're
ultimately
trying
to
do
is
correlate
across
these
indices
and
it's
just
easier
to
keep
track
of
that
all
if
you
have
fewer
spans.
B
So
there's
just
some
some
practical
limitations
there
and
that's
why
I
encourage
people
to
mostly
when
you
get
started,
stick
with
the
spans
that
are
getting
generated
by
the
instrumentation
provided
by
open
telemetry
for
your
frameworks
and
libraries
and
try
just
decorating
those
spams
with
application
code
and
eventually
you'll
hit
a
point
where
okay,
you,
you
want
a
more
fine-grained
information,
but
you
should
do
that
later.
You
don't
want
to
start
with
that.
B
So
that's
everything
I
just
said
and
yeah
you
want
to
really
centralize
this.
This
span
management
somewhere
to
my
mind,
span
management
is
really
related
to
context
switching
and
these,
like
kind
of
underlying
things
that
go
on,
which
is
why
I
say
I
wish
the
runtimes
manage
this,
but
we
manage
it
in
like
framework
code
and
other
centralized
places
in
order
to
to
keep
the
spams
glued
to
your
code
and
not
have
to
mess
with
it
all
in
your
application
code.
B
I
see
I
got
a
question
here,
so
open
telemetry
goal
is
not
to
collect
all
logs,
but
the
portion
of
logs.
We
want
to
be
attached
to
traces.
Yes,
so
longer
term,
we
are
going
to
add
a
logging
facility
to
open
telemetry.
That's
an
experimental
feature
right
now,
but
in
the
long
run
these
logs
are
more
valuable.
B
When
there's
more
context,
I
think
that's
the
main
point:
it's
possible.
You
might
have
some
logs
hanging
around
that
aren't
part
of
a
transaction,
but
if
those
logs
are
part
of
a
transaction
of
course,
you
want
to
have
these
trace
ids
and
span
ids
attached
to
them.
In
fact,
the
logging
work
we're
doing
right
now
is
just
to
sort
of
go
back
and
forth.
B
So
if
you're,
using
something
like
log
for
j
or
winston
or
some
popular
logger,
we
want
to
create
plugins
so
that
that
data
will
just
automatically
every
time
you
make
a
call
to
log
for
j.
It
grabs
the
current
span
and
writes
the
logging
data
out
there.
B
If
you're
trying
to
put
your
logs
into
your
traces,
that's
a
great
way
to
get
a
lot
of
existing
production
logs
into
your
tracing
system
and
then
the
other
thing
is
to
go
the
other
direction,
which
is
if
you've
got
tracing
set
up,
but
you
already
have
say
a
logging
tool
where
you're
looking
at
the
stuff
and
indexing
it
you
can
have
every
time
you
make
a
log
call
for
that
thing
to
check
if
there's
a
current
span
and
if
there
is
one
add
the
trace
id
and
the
span
id
to
your
log.
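That direction is easy to sketch with @opentelemetry/api (span.spanContext() is the current name for reading the IDs; older releases spelled it differently):

```js
const { trace, context } = require('@opentelemetry/api');

function logWithTraceContext(message, fields = {}) {
  const span = trace.getSpan(context.active());
  if (span) {
    // Stamp the log with the IDs so a logging tool can index on them.
    const { traceId, spanId } = span.spanContext();
    fields.trace_id = traceId;
    fields.span_id = spanId;
  }
  console.log(JSON.stringify({ message, ...fields }));
}
```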
B
So
then
you
can
go
do
tracing
in
your
logging
software,
which
is
you
know,
maybe
a
little
crude,
but
it's
better
than
nothing
right
like
if
you've
got
a
logging
system
that
can
make
indices
for
you
having
that
trace
id
indicy
and
that
logging
system
is
gonna,
be
super
valuable
right.
That's
a
whole
bunch
of
filtering
and
nonsense
that
you
just
avoid
you
just
find
one
log.
You
see
the
trace
id
and
then
just
do
a
search
for
that.
Now
you
have
exactly
all
the
logs
that
were
part
of
that
transaction.
B
So
that's
sort
of
the
back
and
forth
we're
doing
with
logs
right
now
longer
term,
maybe
some
more
deeper
integration,
but
we
have
to
figure
out
what
that
means.
We're
fairly
certain
the
world
doesn't
need
another
logging
api
just
to
have
one.
So
we
want
to
figure
out
where
the
value
is
there
before
we
do
anything
on
that
front.
B
Okay,
so
we're
at
time
it
sounds
like
these
slides
are
going
to
get
uploaded
as
well.
So
you
can
actually
just
walk
through
this
other
stuff
that
we
talked
about
today.
There's
some
handy
hints
in
here
about
how
to
roll
this
out
in
your
organization,
and
that's
all
I
got.
I
hope
you
enjoyed
it
and
I
hope
you
enjoy
the
rest
of
your
day.
I
you
can
always
give
me
feedback
on
twitter
by
the
way
I'm
at
ted
suo
on
twitter.
B
So
if
you
want
to
follow
me,
I
regularly
produce
these
kinds
of
walkthroughs,
so
I
should
actually
should
actually
promote
my
blog,
I'm
really
terrible
about
promoting
this
thing,
but
if
you're
still
on
the
call,
I
do
a
lot
of
blogging
about
learning
how
to
do
this
stuff.
So
you'll
be
seeing
regular
updates
a
couple
updates
a
week
on
average
coming
out
to
this
in
the
new
year.
So
follow
me
there
all.
A: All right, thanks so much, Ted, for a great presentation. That's all we have time for, and thanks, everybody, for joining us. We'll see you at another CNCF webinar soon. Bye, all!