From YouTube: 2022-07-21 meeting
OpenTelemetry Prometheus WG
Myself
Right, we'll give people one or two more minutes to join, and then we'll get started. Francis, quick question: this meeting is recorded by default and posted on YouTube. Given that it's user feedback from Shopify, are you comfortable with that, or should we disable that?
B
I think that's fine. We're not going to be talking about Shopify-confidential things here, so yeah, it's cool.
D
All right, why don't we kick this off? Welcome, everyone. This is the usual end user working group, but we're using it this week as an end user feedback session with Shopify. So we have summit attendees from Shopify, and we have Francis, who's joined us as well. And I think, Francis, you mentioned you brought some co-workers too.
B
Yeah, so: Andrew Hayworth, Gabriel, and Robert. I work on OpenTelemetry Ruby with Andrew and Robert, and Gabriel runs our collection infrastructure team.
D
So this is really exciting for both reasons, because you can speak to both the contributor experience and the end user experience. Obviously your feedback is going to be slightly different from that of end users who come in less familiar with OpenTelemetry and get hung up on perhaps more basic things. You have some of the most expertise in the world with OpenTelemetry, particularly OpenTelemetry Ruby.
D
So your feedback will understandably be different, but I think it's still really valuable for us to gather. The context for everyone here: starting last month, which I think was the first session, with this being the second, the OpenTelemetry governance committee and Angie's working group have been trying to gather feedback about primarily the end user experience, and secondarily the contributor experience, of the project, so we can continue to make it better. So this is very exciting. I'll be hosting, along with Elsa Schar, who's been part of this process as well; I might lean on you perhaps more than normal, just because I wasn't at the first session, so there may be certain parts of the structure that I'm missing. But why don't we get started with you describing, for Shopify, in just very basic detail, your production environment: is it mostly containerized, is it mostly VMs? And then the different OpenTelemetry components you're using, and what you're using them for.
B
Sure. So it's mostly containers. We do have VMs running, I want to say, MySQL, but the rest of the environment is pretty much all containerized. We're running in GCP across a bunch of different regions. Let's see: we're using OpenTelemetry Ruby, obviously, and OpenTelemetry Go. We were using OpenTelemetry Rust a little bit, but we removed that maybe, I don't know, six months or so ago.
B
And OpenTelemetry JS as well. Not client-side, but that is the direction we want to go, so we're just using it for Node at the moment. And then we're using the OpenTelemetry Collector really heavily; Gabriel can speak to that. Our data ends up in BigQuery and also in Grafana's Tempo, which we're running on premises. And I'm trying to think what else is relevant there.
B
We have a regional collection infrastructure. We've experimented with a lot of different architectures for how we arrange our collectors, our collection pipeline. Where we've ended up is some Envoys in the application clusters that just forward to a regional cluster that runs all our collectors, and then past that point we're reworking some things. But basically, layers of collectors is what it amounts to, yeah.
B
For a couple of reasons. One is that we have different exporter backends; amongst other things, we're extracting metrics from traces. At one point we had three different backends, because we were using a vendor as well, and that meant we had three pipelines within the collector deployment, which becomes really hard to manage in terms of: what if exporting to one backend fails? Do you end up resending from the client, backing up in the client and then resending everything to all backends?
B
So we really want to isolate each of the backends into their own collectors and decouple things, possibly, or most likely, by sticking a queue in the middle, whether that's Pub/Sub or Kafka or whatever, in between the kind of frontier of collectors and all the collectors that are actually exporting to the backends.
D
I've seen a few firms of Shopify's size or slightly larger where, yeah, they're sending basically everything into a giant Kafka queue and then retrieving it from there. Yeah, cool. So then we can start to dive into your actual experience of the project as an end user; we'll talk about the contributor experience a bit later. But as an end user, what has your experience with OpenTelemetry been like? Does it solve the problems that you expected it to solve?
B
If I back up to what we used to do: we had all our own instrumentation, so we wrote an instrumentation based around OpenTracing. We had our own handcrafted collection pipeline, and all our own exporters for every backend we wanted to support. So from that perspective, switching to OpenTelemetry has been enormously successful for us, because it means we don't need to maintain instrumentation libraries for every language. Because, you know, somebody wants to try Rust...
B
Dependency hell, right, yes; the usual sort of things that projects run into. So it's been successful from that point of view: we don't need to maintain all these different libraries. Although we are maintaining the OpenTelemetry Ruby one, we don't need to maintain one for JS; people can just pick that up and run with it.
B
Having a collector infrastructure where vendors are providing exporters has been really helpful for us as well, because it allows us to switch vendors really easily, and it gives us a very clear point, and a clear model, for building new exporters for our own customer requirements. We've also been able to leverage interactions with people in the community who are not part of, you know, the core collector developer group. For example, there's another company...
B
I can't remember the name, but they built a Pub/Sub exporter for the collector. And so we were able to have side-channel conversations with them about what they were doing, to see whether what we were doing applied, or whether we could come to some common exporter model for Pub/Sub. So yeah, all of that's been enormously useful. I think we've gotten a lot of value out of it.
D
When we think about challenges you've had with OpenTelemetry: you mentioned dependency hell, and you also mentioned earlier that you were using Rust and moved away from it. I can probably suspect why, but I do want you to speak to some of the things that, as an end user, either desperately need attention or could be improved.
B
So Rust was interesting, because we had one team that was using Rust and really wanted trace instrumentation, and so they had done that. But their project didn't have a whole lot of developers on it, and they fell behind on the newer versions; OpenTelemetry required newer versions of Tokio, and...
B
Yeah, so they just had problems actually moving up to the newer version of Tokio, due to other dependencies that they had and limited resources. So they ended up just yanking OpenTelemetry, because that was one more dependency that they didn't want to have to worry about.
B
We will probably revisit that pretty soon. One of the things that's going to be important for us is interop between Rust and Ruby within a process: we ultimately need some kind of shared context between the OpenTelemetry Ruby SDK and the OpenTelemetry Rust SDK, and we haven't thought through what that's going to look like yet. But that's going to be an interesting challenge when we have part of the stack written in Rust and part written in Ruby.
B
So a challenge for us is that we have our own context propagation format at the moment, and that means we need to write propagators for every language that we support. We are working towards moving to the W3C traceparent propagation format, but we're not there yet. So that's been a minor headache for us, needing to write that propagator for every language.
B
Yeah, so one of the things that we did: we have a terrible solution to propagation and to being able to validate that a request is coming from a trusted source, just from a tracing perspective. We were using Google's X-Cloud-Trace-Context header, I think it was, and then we added our own header just to mark: hey, this request is coming from Shopify, so...
B
The
trace
right-
and
we
did
that
because
we
had
third
parties
that
were
calling
into
shopify
who
were
also
using.
You
know,
stackdriver
trace
and
they
were
using
the
same
propagation
header.
And
if
we
just
blindly
trusted
xcloud
trace
context,
then
we'd
continue
a
trace
and
we'd
end
up
with
broken
traces.
B
So that's why we added this validation step. It's a terrible solution, but we didn't have a better one, and we still don't have a better one, which is why we haven't moved to traceparent yet.
B
Probably using the trace state or something, exactly, yeah. So we need to figure out the details of that.
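As a rough illustration of the pattern described here (the header names below are hypothetical and the parsing is simplified, so this is a sketch rather than Shopify's actual format), a custom TextMap propagator in OpenTelemetry Ruby can refuse to extract incoming trace context unless a trusted marker header is present:

```ruby
require 'opentelemetry'

class TrustedHeaderPropagator
  TRACE_HEADER = 'x-cloud-trace-context' # Google's header, as mentioned above
  TRUST_HEADER = 'x-internal-trace'      # hypothetical "this came from us" marker

  def extract(carrier, context: OpenTelemetry::Context.current,
              getter: OpenTelemetry::Context::Propagation.text_map_getter)
    # Without the trust marker, ignore incoming trace context entirely, so a
    # third party reusing the same header can't splice broken traces into ours.
    return context unless getter.get(carrier, TRUST_HEADER)

    header = getter.get(carrier, TRACE_HEADER)
    return context unless header

    # X-Cloud-Trace-Context is roughly "TRACE_ID/SPAN_ID;o=OPTIONS",
    # with a hex trace id and a decimal span id.
    trace_id_hex, rest = header.split('/', 2)
    span_id_decimal = rest.to_s.split(';').first
    return context if trace_id_hex.nil? || span_id_decimal.nil?

    span_context = OpenTelemetry::Trace::SpanContext.new(
      trace_id: [trace_id_hex].pack('H*'),
      span_id: [format('%016x', span_id_decimal.to_i)].pack('H*'),
      remote: true
    )
    span = OpenTelemetry::Trace.non_recording_span(span_context)
    OpenTelemetry::Trace.context_with_span(span, parent_context: context)
  end

  def inject(carrier, context: OpenTelemetry::Context.current,
             setter: OpenTelemetry::Context::Propagation.text_map_setter)
    span_context = OpenTelemetry::Trace.current_span(context).context
    return unless span_context.valid?

    setter.set(carrier, TRACE_HEADER,
               "#{span_context.hex_trace_id}/#{span_context.hex_span_id.to_i(16)};o=1")
    setter.set(carrier, TRUST_HEADER, '1')
  end

  def fields
    [TRACE_HEADER, TRUST_HEADER]
  end
end
```

In the Ruby SDK, a propagator like this would be registered in place of, or composed with, the default W3C traceparent propagator.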
B
A related point for us is that we are using nginx, and nginx doesn't have great OpenTelemetry support at the moment. So we have somebody who's not on this call who's working on that currently; the previous person who was working on it left the company, but I don't think that was because of the pain of integrating OpenTelemetry with nginx.
B
But
who
knows
so
yeah,
it's
it's
an
ongoing
project
for
us.
We
we
actually
want
to
have
support
for
open
telemetry
in
lua,
which
is
not
one
of
the
sort
of
official
open,
telemetry
implementations.
So
that's
another
thing:
that's
a
bit
of
a
pain
point
for
us
right.
We
have
some
some
languages
we're
using,
which
are
not
officially
supported
by
open
telemetry.
Yet,
okay.
B
Yeah, tons of challenges on the collection side. Gabriel, do you want to chime in here on the kind of maintenance overhead of staying up to date with the collector?
C
Yeah, this, I think, is the biggest thing we're seeing. We have a lot of custom modules, so custom exporters, receivers, processors, and there's a lot of velocity in the collector project; the core and contrib repos change rapidly and often. We want to stay up to date with the latest collector versions, partly just not to fall behind, but also to be able to use the latest features from the spec and whatnot.
C
But
sometimes
it's
significant
for
factors
to
our
modules
and,
like
maybe
a
more
recent
example,
is
we're
we're
trying
to
add,
like
exemplar,
support
to
spamatrix
processor
and
like
just
in
the
process
of
having
prs
like
our
prs,
like
we
constantly
be
rebazing
and
like
there's
a
lot
of
conflicts
with
the
repo
and
it's
yeah,
it's
it
takes
a
lot
of
time
and
I
don't
think
we
have
a
good
solution
right
now
for
staying
up
to
date,
like
in
an
automatic
fashion.
C
So
it's
it's
it's
quite
a
task
when
we
do
have
to
upgrade,
and
usually
we
do
like
every
two
ish
versions.
That's
that's
the
biggest
thing
that
we're
kind
of
seeing
on
that.
On
that
end,.
E
Yeah, so, as a collector contributor: we've been trying to keep the API stable for the past few versions, also in preparation for v1; you know, after v1 we have to be backwards compatible no matter what. So we are trying to build practices, even processes, to say: how do we detect backwards incompatibility?
E
How
do
we
move
forward
without
breaking
other
consumers
of
that
code
and
so
on
and
so
forth?
So
it
is
both
at
api
levels,
open
parametric
core
api
and
as
a
user
of
the
collective,
so
config
interfaces,
for
instance
right
so
users
are
configuring,
attributes
and
properties
should
not
just
disappear,
and
the
collector
should
never
break
before
migrating
from
from
a
minor
version
to
another
one
just
because
they
a
property
was
renamed
so
those
things
they
should
not
happen
since
a
couple
of
versions.
E
Until
now-
or
I
don't
know
four
versions
until
now,
but
we
are
not
perfect
and
we
are
not
there
yet.
So
if
you
are
still
facing
that
kind
of
problem
like
backwards
incompatibility,
then
do
let
us
know-
and
you
know
if
you
prefer,
you
can
ping
me
directly,
because
we
have
this.
A
No, I was just going to say: I think the last collector upgrade we did was from 0.49 to 0.53, so we haven't, you know, tried since. If it has improved in the last several versions, we may not have seen that yet, because our last one was a little painful as well. But it's encouraging to hear that stability is a focus now; we really do want to stay up to date as much as possible.
E
Yeah
yeah:
this
is
a
pain
for
downstream
distributions,
so
we
are
aware
of
a
few
distributions,
some
vendors
that
are
part
of
the
of
the
collector
project.
They
they
express
those
concerns
as
well,
but
and-
and
we
had
this
feeling
that
other
people
might
have
this-
this
pain
as
well,
but
just
we
just
didn't
hear
from
you,
you
know
we.
We
had
no,
no
channels
with
with
you
folks
to
hear
what
are
your
concerns
now,
it's
good
that
you
are
expressing
that,
so
we
have
even.
B
Previously
we
had
a
strong
channel
into
the
collector
developers
at
splunk,
and
so
we
were
able
to
provide
that
feedback,
and
one
of
the
things
we
managed
to
do
is
at
least
get
the
the
metrics
that
are
generated
by
the
collector
to
be
declared
stable.
So
that
was
really
useful
for
us.
It
meant
that
we
didn't
need
to
rework
our
dashboards
every
time
we
upgraded
so
yeah.
We
we
lost
that
back
channel
and
we
haven't
found
a
new
one.
So.
D
Any other feedback? We've covered the collector, we've covered integrations like nginx, and we've talked about Rust and Ruby. Any other challenges with OpenTelemetry? Just to try and elicit more: one that we often hear is that the collector can be challenging to configure, with a lot of YAML. Is that something you've had challenges with, or do you actually find it pretty easy?
B
So, from my perspective, the trace portion of the spec was really easy to understand and really straightforward to implement, and it's pretty easy to go back and refer to things and say: did the spec really say this? Is this the way we're supposed to implement it? The implementation process has been fairly straightforward.
B
We have been trying to tackle metrics recently in OpenTelemetry Ruby. The metrics portion of the spec is a lot more dense and contains a lot more theory and statistics, unsurprisingly, and it's just a lot harder to tease out: okay, what's the expected implementation shape for this? Referring to the Go implementation used to be a good kind of guideline; that hasn't been the case for metrics, because, I think...
B
So
yeah
that
that's
been
challenging,
having
metrics
available
in
all
languages
would
be
really
nice
from
an
end
user
perspective
and
that's
that
doesn't
seem
to
be
there
yet
I'm
not
actually
sure
which
languages
have
a
working
metrics
implementation.
Yet
the
the
other
thing
that
I've
been
working
on
recently
is
trying
to
implement
the
consistent
probability
sampling
spec,
which
is
marked
as
experimental.
B
So
I'm
not
entirely
sure
where
that's
supposed
to
go,
I'm
assuming
a
separate
experimental
sdk
package,
but
it
wasn't
super
clear,
but
that's
also
another
kind
of
dense
speck
and
it's
a
little
bit
hard
to
tease
apart.
Are
we
sampling
locally?
We
are
sampling
locally.
I
think
we've
gotten
rid
of
all
our
collector
sampling,
gabriel,
can
confirm.
Maybe.
C
No, I think there's still some; we have it for domain services, but there are still some very noisy services that we're sampling.
B
Okay, yeah. So, for the most part, we're sampling locally, although the sampling is a little bit different for us right now. We haven't rolled out the consistent probability sampler; we're wrapping up that PR in the next day or two. But the thing that we've implemented, that Robert has implemented, actually, is this notion of a verbosity sampler. So we end up getting...
B
We
had
to
make
a
patch
to
the
sdk
to
make
this
work,
but
we
end
up
getting
the
server
and
http
client
spans
for
every
request,
and
then
internal
spans
are
sampled
out.
Most
of
the
time
we
just
retain.
You
know
10
to
1
of
the
internal
spans.
So
if
a
request
is
chosen
to
be
sampled,
verbosely
yeah,
it's
a
good
question.
So
if
a
request
is
chosen
to
be
sampled
for
mostly,
then
all
the
internal
spans
will
be
kept
for
that
request.
B
If we choose not to sample verbosely, then we'll just retain the client and server spans. The way we avoid breakage is by patching start_span to basically return the parent span instead of returning a... I'm trying to remember the term, but there's a thing where you wrap a span context in an API span instead of an SDK span.
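A minimal sketch of that idea (not Shopify's actual patch; the 1-in-10 ratio and class name are illustrative) as an OpenTelemetry Ruby SDK sampler: server and client spans are always sampled, while internal spans are kept only for traces chosen as verbose:

```ruby
require 'opentelemetry/sdk'

class VerbositySampler
  SAMPLERS = OpenTelemetry::SDK::Trace::Samplers

  def initialize(verbose_ratio: 0.1)
    # Precompute the trace-id threshold below which a trace counts as verbose.
    @upper_bound = (verbose_ratio * (2**64)).to_i
  end

  def should_sample?(trace_id:, parent_context:, links:, name:, kind:, attributes:)
    tracestate = OpenTelemetry::Trace.current_span(parent_context).context.tracestate
    decision =
      if kind == :internal && !verbose?(trace_id)
        # Drop internal spans on terse traces; server/client spans always pass.
        SAMPLERS::Decision::DROP
      else
        SAMPLERS::Decision::RECORD_AND_SAMPLE
      end
    SAMPLERS::Result.new(decision: decision, tracestate: tracestate)
  end

  def description
    'VerbositySampler'
  end

  private

  # Derive the verbose/terse choice from the trace id, so every span in a
  # trace makes the same decision without extra coordination.
  def verbose?(trace_id)
    trace_id.unpack1('Q>') < @upper_bound
  end
end
```

Deciding from the trace id keeps the choice consistent across a whole trace; dropping mid-trace internal spans is also exactly what forces the start_span parent-wrapping patch described above, so child spans still see a valid sampled parent.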
C
Yeah, sure. You mentioned that you picked OTel in part because you wanted some degree of vendor compatibility and to not need to rewrite things. So are you currently configuring things just to write directly to Tempo, or have you done any experiments with looking at other backends?
B
Okay, so, yeah: we used to use Splunk and BigQuery; originally we used Stackdriver and BigQuery. We have migrated through a bunch of vendors; at one point we had, I think, five different backends. You guys were like...
B
So
pre-open
telemetry
we
had
an
experiment
at
one
point
where
we
were
exporting
to
you
know:
zipkin,
tracing
and
splunk,
I
think
or
omniscient
at
the
time.
So.
B
And then, since we've been on OpenTelemetry using the collector, we've typically had two to three backends at any one time. Our backends right now are ultimately BigQuery and Tempo, but...
B
Because
of
the
experimentation
we're
doing,
we
actually
have
like
pub
sub
pub
sub
light,
because
we're
experimenting
with
both
of
those
a
second
tier
which
takes
pub
sub
and
exports
to
bigquery,
and
then
we
also
have
export
to
tempo,
as
well
as
the
span
metrics
processor.
That's
then
getting
scraped
by
prometheus,
so
yeah
many
backends.
D
So
going
back
to
the
contributor
experience,
so
we
covered
some
of
the
the
challenges
you've
had
with
flight
implementing
the
metric
spec
in
ruby
other.
Is
there
any
other
contributor
feedback
that
you
want
to
give
either
about
like
more
technical
things
like
the
spec
and
implementing
it
or
about
the
community
itself
and
how
it's
structured
and
how
people
meet
and
converse.
B
Yeah, so one of the challenges for us is that we've tended to keep our contributions fairly local, around OpenTelemetry Ruby, and we only occasionally, you know, go and look at the spec and see whether anything new has popped up. Andrew set up a GitHub project where he tracks all the spec changes that we need to come into compliance with, but we haven't been super active about going back to the spec and offering suggestions or feedback on things that are not clear. I think, to some extent, I've lost track of the right process for doing that. We did have some early interactions where we wanted to introduce job-related, like background-job-related, semantic conventions, which are especially meaningful for Ruby services, where background jobs are used heavily and we've got systems like Sidekiq and Resque. And we were redirected into: well...
B
That
kind
of
sounds
like
messaging.
So
maybe
you
should
be
going
down
that
path
and
using
the
messaging
semantic
conventions,
but
it's
not
a
really
good
fit
for
how
background
jobs
work.
So
it
we
haven't
revisited
that,
because
we
felt
like
the
rest
of
the
community
didn't
care
about.
You
know
ruby
or
slightly
more
ruby,
specific
things.
A
Yeah,
I
I
am
struggling
to
remember
exactly
which
discussion
this
was.
I
think
it
was
something
about
http,
client
context,
propagation,
possibly
in
something
about
collapsing
instrumentations
that
are
just
thin
wrappers
around
http
clients.
Things
like
that.
It's
one
that
comes
to
mind.
It's
an
area
that
actually
the
the
ruby
sig
has
extensive
experience
with,
due
to
the
way
that
a
lot
of
ruby
gems
are
built
on
top
of
common
http
foundations,
and
things
like
that.
A
We
found
it.
No,
I
wouldn't
say
hard
or
unwelcoming
to
participate.
Necessarily,
I
personally
didn't
have
time
to
go
and
participate,
and
I
think
that's
probably
been
my
biggest
challenge.
Is
we
have
a
lot
of
experience
to
share
there
for
auto
instrumentation
in
general,
the
ruby
sdk
like
we
actually
have
an
extensive
set
of
auto
instrumentation,
because
in
ruby
the
classes
are
made
up
and
the
methods
don't
matter
and
it
use
everything's
open
internally.
So
it's
actually
very
easy
to
patch,
so
we're
able
to
actually
get
a
lot
of
instrumentation.
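A toy example of that point (the client class and attribute names below are made up): because Ruby classes are open, prepending a module is enough to wrap any gem's method in a span, which is roughly the shape Ruby auto-instrumentation takes:

```ruby
require 'opentelemetry/sdk'

# Hypothetical HTTP client gem we want to trace without touching its source.
class AcmeClient
  def fetch(path)
    # ... real network call elided ...
  end
end

module AcmeClientTracing
  def fetch(path)
    tracer = OpenTelemetry.tracer_provider.tracer('acme-instrumentation', '0.1.0')
    # Wrap the original method in a client span; `super` runs the unpatched code.
    tracer.in_span('AcmeClient#fetch',
                   attributes: { 'acme.path' => path },
                   kind: :client) do
      super
    end
  end
end

# Because everything is open, prepending the module is all it takes.
AcmeClient.prepend(AcmeClientTracing)
```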
A
We have a lot of thoughts there, but we do find it difficult sometimes to participate in the process, just because we don't have a lot of time individually. None of us, with Robert maybe being the exception here, do OpenTelemetry as our full-time job; we have other concerns to balance internally. So it is sometimes difficult to participate, and we do think we have a unique perspective there; Ruby is something of an oddball language in some ways.
A
So
we
know
that
not
everything
is
going
to
be
applicable,
but
I
think
we
found
it
a
little
challenging
sometimes,
but
I
wouldn't
say
unwelcoming
or
anything.
Just
I
think
if
I
were
to
have
my
druthers
some
of
the
spec
changes
that
might
be
relevant
to
us.
It
would
be
nice
if
people
reached
out
to
us
and
asked
our
opinion.
Tigran
did
that
on
one
change
recently
and
I
forget
which
one
it
was
and
I
found
it
really
really
nice.
B
That's been wonderful, yeah; interactions with Tigran have been fantastic. He's reached out to all the SIGs and said: hey, there's this proposal coming down, it's going to look like this. The change from instrumentation library to scope, I think it was: pushing an issue into every SIG's repo to say, hey, is this going to work okay for your language? Are you going to be able to make this change? That kind of thing has been really, really nice.
A
Yeah, I've been trying to think about how to put this into words; I have a sort of unshaped idea that I think is interesting. At the moment we're experiencing, or sorry, I'm experiencing... we are experimenting with end user metric collection, sort of in the browser: real user monitoring, things like that. We're testing the waters there, and we're running a project internally.
A
It's
kind
of
interesting
and
I
have
a
newer
person
who
doesn't
know
much
about
open
telemetry
and
I
spent
a
good
two
hours
on
the
phone
with
him
yesterday,
going
over
the
different
parts
of
the
spec
and
the
protos
and
showing
him
sort
of
here's.
A
Here's
what
this
means
and
here's
here's
how
you
can
translate
this
one
domain
concept.
You
already
know
to
open
telemetry.
I
found
that
an
interesting
experience
and
I
think
it
might
be
one
of
the
shortcomings
of
our
documentation.
Perhaps
liz!
That's
a
good
question.
I
don't
know
actually
what
educational
resources
would
be
helpful
there.
I
had
pointed
him
towards
doc
site
at
one
point,
but
there
was
still
gaps
from
that
gosh.
I
almost
want
to
say,
like
a
coursera
of
course,
or
something
you
can-
click
through.
A
You
know
with
like
slides,
and
things
like
that
might
be
interesting.
So
that
was
one
aspect
that
I
thought
was
interesting.
You
didn't
really
understand.
Okay,
you
know.
I
know
we
want
to
collect
metrics
in
this
way
and
we
want
to
extend
one
of
our
internal
processors
but
like
how
do
I
take
this
object,
this
proto
and
do
something
useful
with
it?
What
does
it
really
mean
the
oh?
What
concepts
were
we
translating
from
just
other
concepts?
The
way
that
you
know
we
were?
A
We
were
thinking
of
metrics,
so
we
were
thinking
about
the
way
that
other
vendors
describe
similar
concepts
and
like
what?
What
does
a
message
from
a
client
look
like
what
is
it
going
to
contain?
How
is
it
going
to
be
structured
things
like
that?
The
other
thing
that
I
thought
that
I've
personally
experienced
when
trying
to
talk
about
open
telemetry
with
people
is
it's
hard
sometimes
to
convey
the
value
proposition.
A
I
have
to
do
a
lot
of
storytelling
and
a
lot
of
picture
painting
and
waxing
poetic
about
things
that
open
telemetry
can't
actually
really
do
yet,
but
that
could
be
unlocked
if
we
adopted
it
and
we
really
got
on
board
with
the
semantic
conventions
and
sort
of
really
really
bought
into
it.
I
can
paint
a
good
picture
there.
I
think
actually,
the
demo
web
store
project
is
very
useful
in
this
regard.
It
can
kind
of
show
how
some
of
that
vision
is
supposed
to
look,
but
a
lot
of
times.
A
I
find
myself,
people
will
say
well,
why
should
we
migrate
our
internal
application
to
open
telemetry
native
instrumentation
and
I
sort
of
have
to
say
well,
the
benefits
are
theoretical,
but
they
will
come.
I
promise
you
that,
but
it's
sort
of
very
meta
point.
Sometimes,
though,
why
should
we
do
this?
Instead
of
sticking
with
open
tracing
with
which
works
or
open
census
which
works
and
is
already
here?
Why
should
I
migrate?
A
I
don't
have
a
lot
of
specific
things
to
offer,
though
I'm
excited
for
the
demo
web
store
project,
because
I
do
think
that
that's
a
very
good
vehicle
to
show
it
off
being
able
to
actually
show
people
how
that
instrumentation
hooks
together,
how
you
can
utilize
conventions
to
do
more
interesting
and
advanced
things
that
that's
big
that
makes
people
excited,
and
then
I
don't
have
to
like
paint
theoretical
pictures
as
much
anymore
and
that's
kind
of
nice,
because
I've
gotten
good
at
I've
got
a
like
a
pattern.
B
Yeah, the value proposition for semantic conventions can be difficult to explain to people. I've completely internalized it; you know, I really appreciate semantic conventions and I'm pushing them strongly internally, but it's very easy to see people just drifting away from them, because they don't know what value they bring.
B
And
you
know
I
did
some
analysis
recently
on
our
bigquery
tables,
where
we
had
1200
attributes,
600
of
which
were
empty
because
they'd
been
used
at
one
point
and
then
somebody
didn't
use
them
anymore,
and
now
you
had
just
this,
you
know
garbage
column
sitting
there
and
the
amount
of
duplication.
There
was
pretty
staggering.
B
You
know
people
just
using
a
slightly
different
name
for
a
resource
or
people
dynamically
generating
attribute
names.
It's
always
a
fun
one,
especially
if
they
add,
like
the
retry
level,
as
a
suffix
on
an
attribute
name.
So
that
gets
that
gets
pretty
interesting.
We
have
a
an
attempt.
Internally,
that's
been
going
on
most
of
the
year
now
to
try
to
lock
down
a
schema
that
is
basically
open,
telemetry
semantic
conventions,
plus
a
small
amount
of
additional
stuff
that
makes
sense
to
shopify.
B
But
that's
it's
been
hard
to
think
about
like.
Where
do
you
actually
enforce
that?
Can
you
enforce
it
at
you
know
ci
time,
can
you
enforce
it
at
in
the
collector?
What
do
you
do
when
people
violate
it?
That
sort
of
thing?
How
do
you
avoid
redundant
work
between,
say,
a
processor,
that's
trying
to
enforce
semantic
inventions
and
then
an
exporter
which
is
also
enforcing
some
schema,
so
yeah
interesting
technical
challenges
around
semantic
inventions.
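One possible shape for that enforcement question, purely as a sketch with a made-up allowlist, is a span processor that flags off-schema attributes at the SDK level (the same check could equally live in CI or in a collector processor, as discussed above):

```ruby
require 'opentelemetry/sdk'

class SchemaLintProcessor
  # Agreed attribute keys: OTel semantic conventions plus company extras.
  # This allowlist is illustrative, not a real schema.
  ALLOWED_KEYS = %w[http.method http.status_code db.system shopify.shop_id].freeze

  def on_start(span, parent_context); end

  def on_finish(span)
    unknown = (span.to_span_data.attributes || {}).keys - ALLOWED_KEYS
    return if unknown.empty?

    # Flag rather than mutate: the span has already ended at this point.
    OpenTelemetry.logger.warn(
      "span #{span.name} has off-schema attributes: #{unknown.join(', ')}"
    )
  end

  def force_flush(timeout: nil)
    OpenTelemetry::SDK::Trace::Export::SUCCESS
  end

  def shutdown(timeout: nil)
    OpenTelemetry::SDK::Trace::Export::SUCCESS
  end
end

# Registered alongside the usual batch processor:
# OpenTelemetry::SDK.configure { |c| c.add_span_processor(SchemaLintProcessor.new) }
```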
B
Yes, a stable release of the auto-instrumentation libraries. That's been another interesting thing, because we had this early thing where Datadog contributed their instrumentation libraries to the project, and a lot of our early instrumentation was basically, you know, lift-and-shift from the Datadog repo.
B
We
have
things
in
there
that
people
may
or
may
not
depend
on
that,
aren't
actually
aligned
with
the
semantic
inventions
in
open
telemetry,
it's
just
stuff
that
was
in
datadog's
instrumentation.
So
it's
in
our
instrumentation
as
well.
B
Yeah
yeah,
particularly
ruby
because
we
jumped
on
that
bandwagon
early.
So
a
lot
of
the
newer
instrumentation
that
we've
written
adheres
to
the
semantic
conventions
because
it
was
written
from
scratch,
but
anything
that
we
picked
up
from
datadog
earlier
on
has
some
things
in
it.
B
That
are
a
little
bit
weird
and
you
look
at
it
and
you
say
well:
why
is
that
there
and
it's
there
because
it
was
in
datadog's
instrumentation,
so
that
makes
moving
towards
stable
instrumentation
a
little
bit
harder
because
you
don't
know,
are
you
breaking
people's
workflow
by
removing
something?
And
if
you
do
that,
then
you
need
to
go
back
to
the
kind
of
spec.
A
Yeah, okay. I think, on that point, the 1.0 release of instrumentation:
A
I
think
I
can
safely
say
that
in
the
ruby
sig
we
have
not
prioritized
that
specifically
because
we
know
that
reaching
1.0
and
declaring
it
stable
would
impose
a
lot
of
restrictions
on
how
we
can
change
it,
and
I
believe
it.
I
don't
remember
all
the
details,
francis
you
might
remember
more
here,
but
I
remember
we
looked
at
them
and
said:
okay,
maybe
we
just
won't
rush
on
this.
A
You
know
that
looks
difficult
and
in
the
sig
call
yesterday,
I
remember
discussing
with
people
that
instead
we
would
prefer
to
push
instrumentation
upstream
actually,
rather
than
push
towards
1.04
instrumentation,
which
aligns
with
some
of
the
goals
of
open
telemetry
in
other
ways.
Anyways,
it's
not
really
bad
per
se,
but
we
felt
that
that
might
actually
be
a
more
fruitful
use
of
our
time
in
some
respects.
Yeah.
B
In
a
lot
of
respects,
we'd
rather
say
you
know,
there's
a
redis,
ruby
library.
We
would
much
prefer
if
that
library
had
native
open,
telemetry
instrumentation
rather
than
us,
certainly
yeah,
providing
auto
instrumentation
right
there.
B
Sorry,
there
was
a
thing
you
mentioned
there:
oh
yeah,
so
one
of
the
one
of
the
things
that
seems
common
in
vendor
instrumentation
libraries
is
callbacks,
so
the
ability
for
users
to
tap
into
the
instrumentation
in
some
way
add
additional
attributes
based
on
http
response
or
whatever,
and
we
have
folks
who
come
into
the
open,
telemetry,
ruby,
sig
asking
for
those
sorts
of
things
to
be
added,
and
it's
something
that
I
want
to
push
back
on,
because
when
we
actually
report
things
and
we
have
the
instrumentation
library
or
the
scope,
you
know
version
and
name
it
points
at
our
instrumentation.
B
It
says
you
know
we're
responsible
for
emitting
these
attributes,
but
if
we
have
callbacks
there,
then
it's
somebody
else,
potentially
who's,
adding
things
that
aren't
compliant
with
spec
or
just
additional
information.
If,
if
their
callbacks
are
broken,
if
their
callbacks
are
not
well
written,
then
if
it
crashes,
it
still
appears
to
be
the
fault
of
open,
telemetry,
ruby
authors
and
that's
a
bit
of
a
concern
for
me.
So
I
would
really
want
to
say
I've.
I've
had
people
come
into
open,
telemetry,
ruby,
saying
other
sigs
have
added
these
callbacks.
B
So
if
we
open
it
up
for
other
sdk
vendors
to
add
their
their
callbacks
to
our
instrumentation
libraries,
it
it
feels
like
we're
not
going
to
be
able
to
achieve
those
goals.
So
I
don't
know
I
guess,
I'm
I'm
looking
for
a
clear
statement
about
what
is
expected
from
open,
telemetry,
auto
instrumentation
libraries.
D
Then I think we are done. Any other questions from other people on the call for Shopify? Walter?
D
Then I think that we are done, if there's nothing else coming through. So thank you very much for your time. We've just started doing these; we started last month with GitHub and now yourselves, and this is really, really valuable for us as a community. And of course you have a very unique perspective as both end users and contributors.
D
So
this
will
go
up
to
the
governance
committee.
It'll
be
circulated
amongst
the
sigs.
It's
just
really
really
helpful
and
we'll
try
and
improve
things
across
the
community.