From YouTube: OpenTelemetry Show & Share
Description
This is a recording of a session between OpenTelemetry project maintainers and the GitLab Monitor Stage team members on what OpenTelemetry is and does, and how GitLab and the community, in general, can contribute to OpenTelemetry.
A: Good day, everybody. We're here today to learn about OpenTelemetry, and for that we have two guest speakers in Ted and Matt. Ted is part of the Governance Committee for the OpenTelemetry project; he also works for a super cool observability company called Lightstep. Matt, who I used to work with at New Relic, is the development lead for the Ruby special interest group in OpenTelemetry; he also works at Lightstep, along with Ted. Today these two gentlemen are going to tell us a little bit more about what OpenTelemetry is and what it does, give a demo, and answer some questions at the end. From a tactical perspective, there is an agenda doc that you can add questions to, and we can get through them in the session or asynchronously after the call ends. With that, I'll turn it over to Ted.
B: All right, welcome everyone. This is going to be just kind of a loose talk on OpenTelemetry; we'll have Q&A at the end, but if you do have a burning question, feel free to just interrupt, no problem at all. Just to get started: intros. This is still us, that hasn't changed. But let's talk about OpenTelemetry, and before you talk about OpenTelemetry, you've got to talk about observability, and before you talk about observability, you've got to talk about your system and how it looks.
B: So, if you want to understand what's actually happening inside of this iceberg, you need to have observability. You need to have some way of taking the insides and putting them on the outside: data that you're getting from your system about how it's working, transmitted to some back-end where you can make sense of it.
B: We call that telemetry data. As software becomes more and more complicated, as we're seeing more and more components interacting with each other, and as the total lines of code grow over time, we're really big advocates that telemetry needs to become built-in. It can't just be something that's applied after the fact, and it certainly can't be something that's applied automatically after the fact. The humans who are writing and maintaining this software really need to be thinking about observing the software and how that works.
B: So this might mean distributed tracing, it might mean metrics, it might mean just old-fashioned logs: all of these different kinds of data streams. We think of that collectively as telemetry, and we believe it should be one integrated system, because all of these different types of observability tools need to share the same context. They all need to be contextualized by your software in order for a back-end to be able to make correlations between all of these different streams.
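That shared context is what actually travels between services: OpenTelemetry propagates it over the wire in the W3C `traceparent` header. As a minimal illustration (plain Ruby with hypothetical helper names, not the OpenTelemetry API itself), the header carries a trace ID, a parent span ID, and sampling flags:

```ruby
require 'securerandom'

# Build a version-00 W3C traceparent header: "00-<trace_id>-<span_id>-<flags>".
def build_traceparent(trace_id, span_id, sampled: true)
  "00-#{trace_id}-#{span_id}-#{sampled ? '01' : '00'}"
end

# Parse a traceparent header back into its fields; returns nil for
# anything that is not a well-formed version-00 header.
def parse_traceparent(header)
  version, trace_id, span_id, flags = header.to_s.split('-')
  return nil unless version == '00' && flags &&
                    trace_id&.length == 32 && span_id&.length == 16
  { trace_id: trace_id, span_id: span_id, sampled: (flags.to_i(16) & 1) == 1 }
end

# Each service parses the incoming header and emits telemetry tagged with
# the same trace ID, which is what lets a back-end correlate logs,
# metrics, and spans produced by different processes.
header = build_traceparent(SecureRandom.hex(16), SecureRandom.hex(8))
```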
B: So, if you've been following the world of observability for a while, this may sound familiar. OpenTelemetry is not the first project to try to tackle this; there's a lot of prior art here. Specifically, there were two projects trying to standardize how to do this in cloud native software. One was the OpenTracing project.
B: That's kind of my background: starting and maintaining that world. Then there's the OpenCensus project, which came mostly out of Google, with Microsoft jumping on board after a bit. And you've got, of course, the standard xkcd trope about how standards proliferate, which gets thrown up at me on Twitter once a week. This was something we really wanted to get ahead of; we didn't like that.
B: There were actually two competing standards that were almost identical, so the collective developer community behind both projects decided that in this case it really would be better to merge the projects. Maybe the world can have a lot of different SQL databases and that's okay, but multiple different ways of observing your system? We didn't want to be adding to that problem, so we bit the bullet and merged the projects. You can really think of OpenTelemetry as the next major release of both OpenTracing and OpenCensus.
B
So
it's
backwards
compatible
with
these
existing
projects,
but
it
is
a
new
system
that
kind
of
synthesized
everything
that
we
learned
from
these
prior
systems,
and
so
this
was
probably
one
of
my
favorite
tox
headlines,
open,
telemetry
and
xkcd
97
success
story.
I
think
this
is
a
one
time
we're
actually
able
to
reduce
the
number
of
standards
by
merging
two
of
them
together
so
and
just
to
reiterate
why
observability
matters
why
it
matters
to
your
bottom
line
into
your
system.
B: You've got the API code which supports that, which is these sort of metal trellises standing up on top of this shipping container labeled Ruby, which is sitting on top of this pile of tires representing a bunch of databases we found on the Internet. Kind of the life of a developer or an operator is to provide this story. We all know that queues and databases and back-end systems and all this stuff talking to each other, it's never fine. It's never perfect. There's always something going on.
B
There's
always
an
operational
struggle
to
be
had
here,
but
the
goal
is
to
hide
all
of
that
from
the
end
user
right
and
that
really
is
it's
not
just
a
fact
of
life,
but
it's
actually
the
business
value
being
provided
so
I,
don't
know
everything
about
git
lab,
but
if
I
were
to
describe
get
labs
value
proposition,
it
would
probably
look
like
this
slide.
So
you've
got
a
bunch
of
gitlab
end
users
represented
here
by
giant
pilot
kittens.
B
These
kittens
are
trying
to
playfully
interact
with
all
of
these
fine
rainbows
being
provided
by
git
lab
and
that's
where
the
kittens
want
they
want
to
play
with
the
rainbows.
Meanwhile,
the
get
lab
staff
represented
by
grumpy
cat
down
at
the
bottom
is
running
around
trying
to
put
out
all
the
various
fires
in
order
to
maintain
this
experience.
For
these
happy
end
users,
so
that's
that's
really
just
before
we
get
some,
how
it
works.
I
really
want
to
emphasize
that.
B
Ok,
so
how
does
it
actually
work?
If
you
look
at
open
telemetry,
it's
it's
sort
of
a
kitchen
sink.
We
want
to
provide
everything.
You
need
to
stand
up.
A
complete
telemetry
system
and
I
want
to
be
specific
when
I
say
telemetry.
This
doesn't
include
any
form
of
back-end
analysis.
Just
to
be
clear.
We
see
the
telemetry
is
the
generation
of
this
data
and
then
the
sending
of
this
data
to
some
back-end
and
all
of
the
pipeline
pieces
in
between,
but
we
don't
want
to
be
in
the
data
analysis.
Business.
B: That's not really an area for standardization. There are lots and lots of systems that analyze observability data and provide you insights, and we think that's where all the competition should happen. This is just a standard telemetry system that all of those different backends can consume and make use of. So, to drill into the architectural overview:
B
The
first
piece
of
open
telemetry
that
end
users
interact
with
are
the
API
packages.
So,
if
you're,
an
application,
developer,
writing
application
code
or
your
framework
developer,
maintaining
your
framework
really
any
kind
of
open
source
library
that
end
users
are
going
to
install
and
make
use
of
all
of
those
end
users.
If
they
want
to
instrument
their
code,
they
can
pull
in
the
open,
telemetry
api's,
and
this
is
a
little
in
the
weeds
but
there's
a
critical
distinction
here,
which
is
when
you
pull
these
API
packages
in
you're,
not
playing
in
a
huge
dependency
chain.
B
So
there's
nothing
in
these
API
packages
that
describe
how
things
work,
you're,
not
hauling
in
a
whole
bunch
of
code,
and
so
this
makes
the
API
packages
fairly
safe
to
embed
in
shared
libraries.
So
if
you're
writing
a
framework
or
say
an
HTTP
client
library
that
a
bunch
of
people
are
going
to
use
it's
safe
to
embed
open
telemetry
directly
into
that,
the
application
developers
still
going
to
be
able
to
make
a
lot
of
choices.
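The reason that embedding is safe can be sketched in a few lines of plain Ruby. This is a hypothetical illustration of the API/SDK split, not the actual OpenTelemetry Ruby gems: the API ships only a no-op implementation, so a library that instruments itself costs almost nothing until an application wires in a real SDK.

```ruby
# Hypothetical sketch of the API/SDK split (not the real gems).
module TelemetryAPI
  # The default tracer records nothing and has no dependencies.
  class NoopTracer
    def in_span(_name)
      yield
    end
  end

  @tracer = NoopTracer.new
  class << self
    # At startup, an SDK replaces this with a real recording tracer.
    attr_accessor :tracer
  end
end

# A shared library can instrument unconditionally against the API:
def fetch_user(id)
  TelemetryAPI.tracer.in_span('fetch_user') { { id: id } }
end
```

Because the no-op path does nothing but yield, shared libraries never force their users into a particular SDK or exporter.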
B: So if we look at this now from the perspective of the operator or the application developer: when they start their application up, they interact with the OpenTelemetry SDK. We think of the SDK as everything needed to implement these APIs. This is where, at startup, you would be configuring things like exporters. Am I sending this data to Prometheus? Am I sending it to Jaeger?
B
What
kind
of
header
propagation
am
I
going
to
use,
or
there
are
some
special
plugins
that
I
want
to
install
here?
All
of
that
is
exposed
through
SDK
packages
that
the
active
can
use
to
configure
how
their
particular
service
wants
to
interact
with
this
data,
but
because
we
think
of
this
as
a
standard.
We
also
don't
want
everyone
to
be
stuck
with
our
version
of
an
SDK.
So
to
be
clear,
you
can
actually
plug
anything
into
these
api's,
so
we
come
with
a
kitchen
sink
SDK
framework.
B
That's
got
a
lot
of
bells
and
whistles
and
plug
ability.
But
if
you
don't
like
that
particular
thing,
you
are
free
to
bring
your
own
alternate
implementation,
and
so
a
good
example
of
what
an
alternate
implementation
might
be
would
be,
for
example,
a
C++
implementation.
So
maybe
you
don't
want
a
bunch
of
say
you're
using
Python
rather
than
writing
an
SDK
and
Python,
maybe
you're
fine,
taking
on
a
C++
dependency.
So
you
want
to
make
a
super
performant,
C++
implementation
and
then
plug
that
in
that
should
be
possible
with
open
telemetry.
B
So
the
design
really
is
pluggable
and
mix-and-match.
So
no
one
who's
using
one
part
of
open
telemetry
suddenly
has
to
halt
in
all
of
the
other
pieces,
even
though
we
do
think
all
of
these
pieces
pretty
good.
This
is
part
of
us
thinking
about
this
in
the
long
term
as
a
standard,
that's
going
to
be
around
for
a
long
time.
B
There's
82,
plus
contributing
companies,
I
think
the
slides
out
of
date,
maybe
over
a
hundred
different
contributor
contributing
companies.
This
point:
it's
about
a
hundred
plus
developers
active
per
month
contributing
the
code.
Maybe
three
hundred
plus
total
developers
contributed
so
far,
there's
about
11
languages
that
are
in
various
stages
of
development.
As
far
as
open
telemetry
support
is
concerned,
and
if
we
look
at
where
it's
at
today,
we
actually
just
launched
the
beta,
so
we
launched
it
at
the
end
of
March.
B
The
first
languages
to
go
into
beta
are
Go
Java,
Python,
various
flavors
of
JavaScript
and
also
Erlang,
so
there's
a
bunch
of
other
languages
in
flight,
but
these
are
the
ones
that
we
are
saying
have
all
the
Specht
out
features
implemented
and
they
support
our
native
export
format,
plus
Prometheus
and
Jaeger,
so
that
the
kind
of
canonical
CN,
CF
backends
that
would
consume
telemetry
data
can
work.
With
this
thing
we
also
are
porting
over
a
lot
of
instrumentation
for
common
libraries
and
frameworks.
B
So
for
these
core
languages
you
should
already
see
a
lot
of
instrumentation
support
and
for
languages
that
support
it.
Aka
not
go.
We
also
supply
some
form
of
automatic
installation,
so
this
is
just
being
able
to
download
and
run
open
telemetry
and
have
it
automatically
match
all
of
the
instrumentation
libraries
we're
providing
with
their
target
libraries
and
frameworks.
So
in
Java
this
is
called
a
Java
agent,
but
we
have
flavors
of
auto
installers
for
Python
and
nodejs
as
well,
and
also
ruby,
which
Matt's
going
to
talk
about,
and
here
at
light
step.
B: Obviously, we've been holding down the fort on OpenTelemetry for a very long time. We offer exporters for most languages; all the languages in beta, we have an exporter for. We also offer support for the OpenTelemetry Collector, which you can think of as the routing layer for sending this telemetry data to your backends of choice.
B: That's simple when you're small, but once you start doing telemetry at scale, you kind of end up with a routing tier where you want to be buffering this data, teeing it off, and dealing with all of your various network routing and data plane issues. We're also trying to document OpenTelemetry, so the docs site has an OpenTelemetry section, and we want to write a lot of good docs there. I would suggest checking that out.
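The "teeing off" role of that routing tier can be pictured with a toy fan-out stage. These are hypothetical plain-Ruby classes for illustration only; the real Collector is a separate service with its own configuration:

```ruby
# Toy fan-out stage: forwards every batch of spans to each configured
# backend. Hypothetical classes, not the actual Collector.
class FanOutExporter
  def initialize(*exporters)
    @exporters = exporters
  end

  def export(spans)
    @exporters.each { |exporter| exporter.export(spans) }
  end
end

# Stand-in backend that just remembers what it was sent.
class MemoryExporter
  attr_reader :received

  def initialize
    @received = []
  end

  def export(spans)
    @received.concat(spans)
  end
end
```

Switching or adding a backend then becomes a change in the routing tier rather than a code change in every instrumented service.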
B: If you're interested in getting involved in the project, we want feedback on our docs as well as our code. In the future we have lots of big ideas for really cool things that we could build on top of OpenTelemetry (don't get me started about formal proofs and testing), but for now we really want to focus on getting OpenTelemetry to a stable general release before we start exploring all of that cool stuff. So that's the current state of the project.
B: If you do want to try it today, you would be an early adopter, but I think that's a great time to actually get involved. There may be some missing pieces, mostly maybe some missing pieces of instrumentation, or maybe a plugin for a system you're using, but all the core features should be there.
B
If
you
are
writing
a
lot
of
manual,
instrumentation
I
just
want
to
have
a
note,
there's
a
potential.
We
may
break
api
compatibility
if
we
really
encounter
a
serious
reason
for
doing
so,
but
we
keep
that
to
a
minimum.
This
project
actually
has
backwards.
Compatibility
and
API
stability
is
one
of
its
primary
goals,
but
because
this
is
an
early
beta,
this
would
be
the
one
time
where
you
might
encounter
breaking
chains
in
the
API
past
early
beta.
B
B
A: I'll ask the first question, Ted. So, if and when OpenTelemetry is successful, meaning developers using whatever frameworks can kind of just ship their telemetry up to their back-end of choice: you mentioned that where you think vendors should be competing is the platform by which to interpret and process and make meaning out of that data. My question is about the users that are already using some sort of vendor-centric agent to collect that information.
B: Here's all the keys and values that represent an HTTP request. But as far as what you'd want to do with that data, that's where it's still the Wild West. If you wanted to gain access to a new kind of analysis, you don't want to have to go re-instrumenting your system, or worse, somehow having to double-instrument your system, in order to gain access to multiple different analysis features.
B: It should be enough to simply have one telemetry system pumping data out of your application and then teeing that off to multiple different backends. The way I see that evolving is that the game really is to come up with better, more useful analysis for people who are trying to observe their system. If you can come up with some cool tool or some cool new way of analyzing the data coming out of a system, what's really powerful is that you don't then have to go reinvent all of this telemetry stuff.
B: So I think this will actually open the door to a lot more possibility. Usually there's kind of a barrier to entry here: if you want to build some cool tool, say a root cause analysis tool, or a tool that tries to identify interesting shapes in the data that tend to correlate with bad behavior in a system, you can just focus on solving that one analysis problem. You don't then have to go rebuild this entire stack.
B: Likewise, if you're an end user and you've already fully instrumented with OpenTelemetry, it looks very likely now that almost any major vendor is going to support OpenTelemetry data, directly or at least through a processor that you can plug in to convert it to that system's native format. This means it'll be very, very easy to switch backends. And if you're happy with your current back end, you could gain a second one.
D: Josh here. Yeah, thanks, and I appreciate the time. I had a question on the implementation of OpenTracing, or OpenTelemetry. We here at GitLab have instrumented GitLab itself, the application, for OpenTracing; you can see some of our documentation there. One of the challenges has been that the plug-in model means we actually have to compile in support for each individual back-end, and so we don't necessarily know what tracing solution our customers want to utilize to debug and understand how their own GitLab instances are performing.
B: Yes, we do have a solution for that, and that is the OpenTelemetry protocol, OTLP. OpenTelemetry is inventing its own native protocol that will include all of the data this system generates (metrics, tracing, resources, everything that OpenTelemetry generates) in one single protocol that will be on by default. So the default installation will be sending OpenTelemetry data to a standard local port, and sending the new W3C headers as far as wire propagation is concerned.
B
So
if
everyone
can
consolidate
on
using
the
new
official
headers
for
tracing
and
puts
a
sidecar
or
a
back-end
system
sitting
at
that
that
standard
port
that
can
accept
this
data,
then
they
don't
really
have
to
go
in
and
modify
the
code
in
their
applications
in
order
to
pick
a
vendor
and
the
the
component
within
open
telemetry.
That
deals
with
a
lot
of
this
is
the
collector.
So
the
idea
with
the
collector
is
it
accepts
open,
telemetry
data.
B
B: It can actually accept a bunch of different kinds of data; it has receivers for Jaeger and Prometheus and other systems as well. The idea is that's the place where you would be doing all of this configuration. By default all the applications are just kind of spewing this out, and then you would have your service mesh sidecar, or some other collector deployment, collecting this data up. It would then just be on the operator to make a runtime decision around those collector deployments if they wanted to switch backends. Yeah.
E: Are you seeing this OpenTelemetry status slide? Yeah, I think you are.
E: Cool. So, as Ted mentioned, there are five languages in beta, and you may have noticed Ruby is not on that list, so we're not officially part of the beta. We've had a small but good group of contributors, and we got a slightly late start, which is one of the reasons we're not in that round. But nevertheless, we did release a tracing-only beta in early April, and tracing-only is a pretty significant part of the OpenTelemetry implementation.
E: I have this diagram here, which kind of shows you all the components. Really, the big thing we're missing is this meter section; we have something for all the rest of these boxes, though some of that needs to be fleshed out a little bit more. Let me run through what exactly those things are. Our tracing-only beta consists of 12 packages, so if you check out RubyGems, you'll see these things.
E
So
there's
the
open,
telemetry
API,
the
SDK,
there's
a
Yeager
exporter
and
then
the
rest
of
these
are
instrumentation
adapters.
So
those
are
the
packages
that
are
part
of
of
the
tracing
only
beta
so
for
context
propagation.
We
currently
support
propagating
context
in
the
w3c
trace
context,
format
and
correlations
using
w3c
correlation
context.
E
So
maybe
ignore
that
I
saw
that
all
together,
but
we
are
propagating
a
thing.
According
to
the
editors
draft
of
that
spec,
that's
a
whole
new
story
and
I
guess
it
kind
of
worms.
I
think
I
pointed
out
that
there
was
a
Yeager
export
package,
so
you
can
for
years
fans
to
Jaeger
or
any
back-end
that
accepts
Jaeger
format.
E
We
do
have
an
instrumentation,
auto
installation
mechanism.
So,
like
I,
don't
know,
this
is
a
fairly
interesting
thing
in
that,
if
you're
familiar
with
open
tracing
you'll
know
that
I
don't
know
it's
kind
of
hard
to
get
off
the
ground
with
instrumentation,
you
have
to
go
foraging
across
the
internet
and
find
all
the
things
that
you
want
and
kind
of
manually.
Add
them
all
and
they
all
kind
of
have
different
ways
to
set
things
up.
E
But
you
know
the
nice
thing
is
that
those
are
all
independent
packages
like
they
can
be
shared
between
many
SDKs
so
like
it
wasn't
all
bad
if
you've
used
like
New
Relic
or
data
dog,
they
ship
a
ton
of
instrumentation
like
with
their
tracer
with
they're
tracing
clients.
So
it's
really
easy
for
them
to
just
kind
of
install
that
stuff
and
know
how
it
works
for
open
telemetry.
As
Ted
mentioned
like
you,
can
have
alternative
SDK
implementations
in
this
instrumentation.
We
really
wanted
to
be
like
a
shared
commodity
that
any
SDK
can
consume.
E
So
we
kind
of
had
to
approach
the
problem
slightly
differently
to
be
able
to
have
this
situation,
where
you
can
have
this,
you
know
shared
instrumentation
but
having
easily
installable
bye-bye
all
SDKs,
but
we
had
that
it
works
pretty
well
and
then
right
now
we
have
I
think
this
is
nine
if
I
count
them
instrumentation
packages
and
these
have
all
been
ported
over
from
data
dog
data
dog
has
donated
their
instrumentation
to
open
telemetry.
So.
E: I can just show you the code; this is all it does. It has a route where it just says, hey, I'm running on this host and port, and then it has this other endpoint where it'll try to connect to a service on the same host, but the next port. You can do this to kind of connect a couple of applications together.
E: You should get a really simple trace. So, as advertised, you get a really simple trace: you have two spans, and one span is coming from the Rack instrumentation, which is kind of sitting at the boundary, and another span is coming from the Sinatra instrumentation for that route. It is extremely not interesting.
E: This is kind of the setup. You need to configure the SDK, and I think Ted covered that you can export to multiple kinds of backends; to do that, the export pipeline is a combination of a span processor and an exporter. You have this simple span processor, which sends spans out as they complete, and there's a batch span processor, which you'll probably want to use in production; it'll batch spans up and send them in batches.
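The difference between the two processor styles can be sketched in plain Ruby. These are illustrative classes only; the real implementations ship in the SDK, and the real batch processor also flushes on a timer and bounds its queue:

```ruby
# Simple: export each span the moment it finishes.
class SimpleSpanProcessor
  def initialize(exporter)
    @exporter = exporter
  end

  def on_finish(span)
    @exporter.export([span])
  end
end

# Batch: buffer finished spans and export them in groups, which is
# usually what you want in production to cut per-span overhead.
class BatchSpanProcessor
  def initialize(exporter, max_batch_size: 3)
    @exporter = exporter
    @max_batch_size = max_batch_size
    @buffer = []
  end

  def on_finish(span)
    @buffer << span
    force_flush if @buffer.size >= @max_batch_size
  end

  def force_flush
    return if @buffer.empty?
    @exporter.export(@buffer)
    @buffer = []
  end
end
```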
E: But this is how you can send things to multiple backends. Right now I have a console exporter, so it will write them to standard out, but then I have this Jaeger exporter, and you could add more here if you wanted to export to the backend of your choice. That's kind of how you do it. Then I have this use_all, and this has to do with the instrumentation auto-installation.
E
So
so,
if
you
notice
in
this
app
it's
like,
we
don't
actually
have
any
instrumentation,
you
don't
see
any
code
generating
spans.
That's
all
coming
from
the
auto
instrumentation
use
all
means
install
all
instrumentation.
That's
my
gem
file
right
now.
The
gem
file
looks
like
this
there's
a
few
more
packages
than
you
want
to
see
here.
To
be
honest,
though,
like
right
now,
I'm,
including
individually,
the
net
HTTP,
Sinatra
and
rack
adapters,
but
soon
that
will
not
be
necessary.
There's
a
PR
for
this.
E
This
open
telemetry
adapters,
all
it's
kind
of
just
a
gem
that
references
all
available
instrumentation,
so
that
will
probably
be
ready
kind
of
by
the
end
of
the
week,
but
but
for
now
this
is
what
you
actually
have
to
do.
So
I
want
to
misrepresent
things,
but
that's
what
goes
into
that
super
lame,
trace.
I
think
we
can
make
it
a
little
bit
more
interesting
though
so
I
guess
you
will
see
that
these
fans
also
made
it
to
the
console.
E: The client is creating a span, an outer span; then you have a span for the HTTP request, and then you have a span on the server side where it receives the request. This is kind of showing context propagation happening between two services. And if we take a quick look at that client app, it's super simple; it just keeps looping.
E: But again, you didn't really have to do anything to propagate context here; the context propagation was done for you by the Net::HTTP instrumentation. And just to make things slightly more interesting, I do have this Node.js server as well, so let me spin this up. My Ruby server was always trying to connect to the next port, but there was nothing there, so it was just giving up. Now, if we come over and check out our traces, you can see we have three services in the mix.
E: I kind of rambled a little bit about this opentelemetry-adapters-all, and I expect your gem file to look basically like this. You're probably going to have three packages: the SDK; adapters-all, which is going to be all of the available instrumentation (someday, in the not-too-distant future, that should be everything that's in dd-trace-rb, so it should be a pretty big set of instrumentation); and then whatever exporters you want. So here we're exporting to Jaeger.
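A sketch of that expected Gemfile, using the beta-era package names mentioned in the talk (they may differ in later releases, so treat the exact gem names as illustrative):

```ruby
# Hypothetical Gemfile for the setup described above.
source 'https://rubygems.org'

gem 'opentelemetry-sdk'              # the SDK
gem 'opentelemetry-adapters-all'     # all available instrumentation
gem 'opentelemetry-exporters-jaeger' # plus whatever exporters you want
```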
E
It
would
be
nice
if
everybody
was
exporting
to
light
step,
or
at
least
had
that
as
an
option,
but
I
need
to
write
the
light
step
exporter.
So
it's
not
an
option
right
now.
Shame
on
me,
but
what
we're
building
a
lot
of
stuff
so
but
yeah.
This
is
kind
of
a
setup
that
I
expect
that
you're
gonna
see
and
then
yeah
likely.
This
is
just
how
things
are
going
to
be
to
work,
but
we
see
I
I
do
have
something
that
see.
E: When it comes to wiring up instrumentation, you have options. You have this use_all option, where "all" is subjective: "all" is all of the instrumentation adapters you have in your gem file. So if you picked three adapters and just put them into your gem file, it's going to be those three adapters; if you use opentelemetry-adapters-all, it's going to be possibly 20 adapters. You don't have to have the underlying library present; it will install kind of the intersection of the instrumentation plus the libraries that you have. I think 99% of people are going to have that use case, and they're really going to love it. But there is this trade-off between the amount of instrumentation you have, meaning the amount of visibility you have into your app, and performance. So sometimes you want to pare down the instrumentation: you don't want everything and the kitchen sink, you want to be a little bit more selective.
E
So
there
is
this
way
to
be
more
selective
and
instead
of
saying
use,
all
you
can
just
say:
use
individual
adapters.
So
you
could,
you
can
do
you
can
kind
of
selectively
install
instrumentation
in
a
couple
of
ways?
I
guess
you
could
you
can
still
use
used
all
but
have
like
your
own
package
or
you
a
smaller
amount
of
gems
in
your
jump
file
or
you
can
pull
in
open,
telemetry
adapters
all
and
then
just
like,
take
and
pull
from
that
big
pool
using
just
the
use
command.
Oh
thank.
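Putting the two options side by side, the configure block would look roughly like this. This is a sketch assuming the beta-era `OpenTelemetry::SDK.configure` interface and adapter names described in the talk, so treat the exact constants as illustrative:

```ruby
require 'opentelemetry/sdk'

OpenTelemetry::SDK.configure do |c|
  # Option 1: install every adapter found in your Gemfile.
  c.use_all

  # Option 2: be selective and name adapters individually instead:
  # c.use 'OpenTelemetry::Adapters::Sinatra'
  # c.use 'OpenTelemetry::Adapters::Rack'
end
```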
E: There are actually some PRs out there for Sidekiq instrumentation, and there is somebody who has started working on Rails instrumentation, so Rails will soon be there. If you do spin up this beta on your Rails app right now, you will be a little bit sad, because you will probably just get an outer span from your Rack instrumentation and spans for HTTP requests, but all your middleware and database calls will be invisible with our current set of instrumentation.
E: Yeah, I don't know what the plans are for Sentry; Ted might be able to speak to whether Sentry is going to be an officially supported thing.
B: I haven't directly spoken to them. For stuff like that, where there's an existing system, ideally they would maintain their own exporters and plugins and things like that. And short of Sentry doing that work, of course, it's open source software; any end user can write a Sentry exporter and add it to the registry.
B
So
that's
part
of
the
reason
why
we
want
to
have
this
registry
is
so
that
if
someone
writes
one
of
these
things,
it's
not
that
it
has
to
live
in
any
particular
official
repo
or
something
we
want
to
have
one
place
where
it
can
all
just
still
get
listed,
regardless
of
who's.
Currently
maintaining
it
yeah.
E
And
that's
one
of
the
great
things
behind
the
design
of
open
telemetry
is
that
you
kind
of
have
these
these
export
pipelines
and
they're
very
freeform
they're
very
easy
to
implement
if
well,
I.
Guess
it's
very
easy
to
get
a
handle
on
the
data
and,
depending
on
the
backend
like
it,
should
be
fairly
easy
to
get
the
data
there
and
yeah
I
probably
should
have
showed
some
of
the
JavaScript
stuff,
but
this
is
kind
of
the
setup
for
for
the
Jas
stuff.
E
It
looks
very
similar
to
what
we
had
for
Ruby,
where
you
kind
of
configure
your
export
pipeline.
So
yeah,
just
like
it's
my
responsibility
to
write
this
light
stuff
exporter
for
for
Ruby
there
duck
there
actually
exists
one
four
node
right
now,
like
yeah
the
century,
folks
should
problem-
or
it
would
be
great.
The
century
folks
started
to
chip
in
and
write
some,
but
given
that
this
is
open
source,
anybody
can
write
it.
Oh.
A
I
will
thank
you
and
I
will
wrap
up
this
meeting
with
both
kind
of
a
leaving
thought
for
the
gate,
lab
team
and
one
last
question
for
Ted
and
Matt.
So
we're
open
to
the
limit
tree
is
today
they're
actively
looking
for
participants
in
health
and
Gilad
being
full
ruby
experts.
There
may
be
a
natural
way
for
you
to
participate.
I
would
say
going
forward.
A: OpenTelemetry is super critical for GitLab and GitLab users, because it is highly likely that our customers are going to be using OpenTelemetry to instrument their applications, and we can make their lives easier by potentially offering a control plane for configuring instrumentation. We can imagine this is a huge concern for enterprises, where they want to consolidate how they do this type of stuff.
A: This is something that you'll hear from big companies over and over again, so it is in part our responsibility as a monitoring team to understand the space, and I think there's no better way to understand than by actually participating and contributing to the code. I know Ted and Matt are constantly recruiting more people to join them in their mission, and if you have any interest at all, this would be a really good way to beef up your knowledge and up your skills.
B: Yeah, and there's also the specification repo; we didn't talk too much about the spec, but the way we coordinate this across languages is that we have a specification written in Markdown, and we have another repo called the OTEP repo, which is like the OpenTelemetry version of RFCs. So two ways to get involved: one is, of course, to plug into a language working group and help out there. I think the Ruby group would definitely love more contributors.
B: We would love to see more people kicking the tires on Rails; I think that's really critical, so helping out getting Rails and Sidekiq instrumented would be great. But if there's stuff around the design of the overall project and project direction, that's also open, and the way that pipeline works is by filing an OTEP: there's a template in the OTEP repo, and you submit a pull request to that repo.
B: Once it's all added to the spec, we then do a spec release, and then all the different working groups try to update their implementations to match the new spec. That's not a super complicated process, and I'm sure we could more formally specify things in the future, but this has mostly been working for us as a way to keep things in sync across all the different projects.
E: Yeah, I'll just add a little bit to that: OpenTelemetry Ruby, at least, is all on GitHub, so we have a pile of issues. If you're interested in contributing, you can look at the issues and just comment on one if one looks good to you, and I will assign it to you. As far as trying to collaborate or ask things asynchronously, you can reach us over GitHub issues, PRs, or on Gitter.