From YouTube: 2021-07-20 meeting
A: Are you, are you, like, back in the pool, or did you win? Like, did you beat the Civics? Or do you not have to do it anymore?
A: It was Liz and Austin, I think, your co-worker. He works at Lightstep, yeah.
G: Hey, I'm just finishing a snack, so I'm not gonna have you guys watch me chew.
B: Well, I hear that America runs on Dunkin', Dunkin' Donuts' official coffee.
F: Oh really? I'm now in northern Idaho. Oh cool, well.
E: I moved one block over, and, like, I'm in the panhandle, I'm in Coeur d'Alene. It's actually a really amazing place. They call it the Inland Pacific Northwest, and, like, northern Idaho is still on Pacific time, whereas, like, you know, southern and eastern Idaho is on Mountain time. So wow, that's cool; it's kind of like the greater Spokane area, I guess.
B: I'm going to say that it's, what is it, the gravy fries? That's what I'm going to... no.
B: All right, a rice and chicken bowl, and then, if you go to any of these restaurants, that, like, you know, they'll charge you like $12.99 as soon as you hit the end, for rice and chicken, so...
E: So, we're at the usual time, we know what Robert's eating, so go ahead and probably jump into the spec SIG, give that update, and then talk about things more relevant to OpenTelemetry Ruby.
E: As always, your comments are welcome; like, rein me in if I get too far off track and we'll get to it. There is a sampling SIG. It has had two meetings so far, and Josh MacDonald has kind of, like, typed up a really great summary, I guess, of where things are there. I was kind of hoping to show up to some of the meetings, but I have yet to show up, but...
E: Basically, I think the trace ID ratio sampler is the preferred sampler, because it has the quality that it allows independent span sampling without having to propagate the sampling probability.
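For reference, a minimal sketch of the two samplers being contrasted here, using the OpenTelemetry Ruby SDK's samplers module; the 0.1 ratio is an illustrative value, not something from the meeting:

```ruby
require 'opentelemetry/sdk'

samplers = OpenTelemetry::SDK::Trace::Samplers

# Sample ~10% of traces based on the trace ID; every service can make the
# same decision independently, so the probability never has to be propagated.
ratio_sampler = samplers.trace_id_ratio_based(0.1)

# By contrast, a parent-based sampler defers to the caller's decision and
# only consults the ratio sampler for root spans.
parent_sampler = samplers.parent_based(root: ratio_sampler)

# Typically wired up via OTEL_TRACES_SAMPLER=traceidratio and
# OTEL_TRACES_SAMPLER_ARG=0.1, or by passing the sampler to the TracerProvider.
```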
E: But you would still like to have some of this information on your tracing back end. I think that's, like, the lacking part, so...
E: And that the sampling policy should be attached: if it's, like, a static thing, it could be a resource attribute; if it is more dynamic, then you would attach it, like, on a per-span basis.
E: And incomplete traces are a problem. I think, like, there's always a problem with collection and dropped spans, even without sampling in the equation. But as soon as sampling is in the mix, it becomes much, much harder to detect an incomplete trace, and, depending on your sampling strategy, like, it could contribute to these incomplete traces. So I think there has at least been a discussion as to how you can detect these problems, so he lists out four ways to detect an incomplete trace. A lot of these and...
E: Start off assuming that collection is perfect. So if collection is perfect and you use a parent-based sampler, you will get all of your spans. This is because you respect the sampling decision of your parent.
E: Leaf... and then Census would apparently count the expected number of child spans, and I assume this would end up as a span attribute, so that the back end could kind of cross-reference this later.
C: Yeah, yeah, we had implemented this early on in OpenTelemetry Ruby, and then it was removed, because of exactly the reason laid out here: you don't always know how many child contexts are going to be created.
E: Yeah, yeah, I remember there being some challenges around it that led to it, I think, not being implemented everywhere, and ultimately removed, but...
H: I think the... I haven't attended the sampling SIG, but I also see rumblings about people trying to implement samplers in the collector, so that it could...
H: They implemented parts of that in the collector, and then they had routing by trace ID, yep, and then you get into, like, how does that then communicate up the sampling decision, and, yeah, I'm not, on my team, I'm not the person who understands sampling, so I will follow along. That's the trick, Rob: nobody does, nobody knows. Yeah, but when my teammate hauls out a textbook on math, I'm like, oh okay, this is gonna be a fun ride.
E: Yeah, no, I think we're talking about, it's probably some tail-based sampling in the collector, yeah. This is not in the summary. I think that is super interesting stuff; I would like to keep up on what's going on there too, but, well, it'll be interesting, good. I just, yeah. I imagine that this sampling will occur at multiple levels, because I think, while most people will use an OTel collector, I think there is still kind of this thought that some people...
H: ...will want to use something else. So it's more infra that people have to run, but I think OpenTelemetry, as an umbrella project, maybe can talk about: at a certain complexity, would you want your sampling to happen... at a certain complexity you need to shift it into something that's not the client, or madness ensues, and...
A: Datadog's just sort of going ahead and doing our own thing; we're just gonna start providing, like, samplers that shove in attributes that include the probability rate, just so that, you know, there are some clients that can't emit 100% of trace data, for obvious reasons, so yeah. It would be... I think Josh MacDonald says, I think it's all really good points, but I worry that trying to get a full thing passed here, like, merged here, might end up taking longer than just having, like, a very small...
A: It's like, why are we, like, why is it, why are we moving, like, from sandbox to incubation if, like, we don't have key components, like, included? It's a little bit of, like, let's just be honest about where things are, and it seems like there's a side of things that doesn't want to face those facts, which is fine, but then, like, yeah. I just, I don't know. I think it's fine to go slow, but it's also a bit weird to then, like, go fast in the marketing area.
E: I think there are definitely some improvements that could be made with process. I think if we want to look back, we could say some mistakes were made, but...
E: Yeah, I think the... the one reason for optimism here is that this sampling SIG did kind of, like, spin back up. I think that shows that people have realized the necessity of actually getting this thing out there. So I think that's one reason why OTEP 148 has just kind of been hanging out for as long as it has, but now that there's a group of people getting serious about this, I'm hopeful that this will end up in some spec sooner than later.
E: Yes, the metrics update. I think they are happy with the API. I think they wanted to mark, like, the SDK as being stable, or being experimental, so, yeah. They wanted it to be marked as experimental by the end of the month, but it's seeming like there is more work to be done there.
E: So I think that label of experimental is, like, they actually kind of have a rough draft they're kind of happy with, but I think right now it's like they're still building the rough draft, is how I would say that. So views are one thing that they're still kind of working on; the histogram, which we talked about last week...
E: ...is still being debated. And then kind of the other thing that was talked about, that I don't think was really captured in these notes, was just, like, understanding some interestingness with the export pipeline and, like, things that they want to do there, in terms of, like, for tracing, we have kind of some default exporters that are not OpenTelemetry.
E: So what if you wanted to export in, say, the Prometheus format, or statsd? Those will likely both be options, and things that we would have to provide exporters for. And they're talking about even crazier things, like wanting to have different reporting intervals for metrics for, like, different... for some parts of your export pipeline. So if you're exporting to something, like, more local, you could, like, export things on...
E: You're... a better verb, thank you. Yeah, that is kind of happening. But, like, you know, all this data is aggregating up in process, I guess, and eventually I think you send what you have with a timestamp. I think that's kind of, like, the act of exporting a metric: it's, like, here's what I know about this thing at this time, right, so.
H: So then it's a matter of telling your exporter, how often do you want to export data? Not, what's the interval between my metrics collections, right? I think, like, yeah, I'm asking questions about what that conversation was, since whoever was taking notes didn't write this part down. I'm curious about what the... yeah.
E: Yeah, and I think the complication there was, like, if you have, like, multiple exporters, I guess, and you wanted to have, like, slightly different, I guess, export intervals for them, how that might work.
C: Okay, so Prometheus is going to be based on scraping, unless you're doing Prometheus remote write, right? So Prometheus itself is going to be determining, like, when you compute those metrics, right, yeah. So the exporter is effectively going to be scraped, and during that scraping it will call all the asynchronous... I think they're calling them observers, or they used to call them observers... it will call all the asynchronous metric computation things, and then it will get all these synchronous ones that have been rolled up, right, by the metrics processor.
C: I guess. And it will then export that snapshot. For other things like OpenTelemetry, which is push based, that's where there'll be this processor, effectively, that has an interval, and it will, like, call all those asynchronous observers and then get all the synchronous, sort of, rolled-up values, and push them out.
C: The Prometheus ones are going to be, like, the exporter will have to be called into, right. It's like, for most exporters... oh, sorry, for push-based exporters, they'll be like the OTLP trace exporter right now, where you have this batching interval, like a batch span processor; there'll be able to be, like, a batch metrics processor, and then you'll be sending data to these dumb exporters.
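To make the scrape-versus-push distinction concrete, here is a rough, purely illustrative sketch; these class names are invented and are not the OpenTelemetry Ruby metrics SDK, which was still being designed at the time of this meeting:

```ruby
# Hypothetical illustration of the two models being described.
class PullExporter
  def initialize(metric_readers)
    @metric_readers = metric_readers
  end

  # Called when Prometheus scrapes /metrics: the scrape itself triggers the
  # asynchronous observers and snapshots the synchronous, rolled-up values.
  def scrape
    @metric_readers.map(&:collect)
  end
end

class PushExporter
  def initialize(metric_readers, exporter, interval_seconds: 60)
    @metric_readers = metric_readers
    @exporter = exporter
    @interval = interval_seconds
  end

  # A push pipeline owns its own timer: every interval it collects the same
  # kind of snapshot and sends it out (e.g. over OTLP).
  def start
    Thread.new do
      loop do
        sleep @interval
        @exporter.export(@metric_readers.map(&:collect))
      end
    end
  end
end
```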
E: I think this is an iteration on a doc that probably showed up a while ago. I'm not positive that that is the case, but I think there was something very similar to this that came out at the beginning of the year, and I think, you know, there were basically... when tracing was 1.0, I think there was this recognition that there were, like, these four areas that needed improvement, and one of them was...
E: I feel like I'm missing at least one here, but there was kind of a document like this for each one of those areas, so.
E: Basically, it sounded to me like they were going to kind of spin up, at least, like, temporarily, some SIGs around, like, certain areas in OpenTelemetry. Like, maybe, like, messaging: we need to kind of have, like, messaging attributes and kind of conventions for instrumentation defined, so they would have a group of OpenTelemetry experts, a group of subject matter experts, who may or may not be OpenTelemetry experts, kind of from the actual domain, and then some of the people who actually work...
E: This was covered just, like, very briefly, and I think it was just kind of, like, a summary of, like, we think this process will work. So, like, I'm not 100% sure that everything that I said is correct, but that's the impression that I got from the light discussion that we had, and kind of this part of the document was...
E: Yeah, I think that that would be awesome. I think the other thing that kind of came up with this document is that, like, I think some people have been meeting on the side to kind of come up with that process, and I think there was a request that these meetings need to be public and on the public calendar. I think it was recognized that... I forget, yeah.
E: This thing was just briefly mentioned, not discussed heavily, but the mention was, like: this is hard, there's no resolution, please read. And I feel like we have been through this conversation about a billion times, and...
E: Span status and HTTP status, and when a span should be marked as an error. And basically, I think the summary of this is that Nikita was asking that...
E: So I think, as I read through this issue, a lot of people were saying this makes sense from the client's perspective, but from the server perspective, the server is, like, responding 400 in response to, like, bad requests from the client. So it's like, this is not an error from the server side; it's like the server is notifying the client that the stuff it's sending is not okay. So it definitely should not be, like, an error if you are a server.
E: There has been a lot of bike-shedding about this. I think that makes sense to me. I'll at least say that, like, I'm on board with saying that these servers should not be marking these as errors, but clients probably should.
E: But, I don't know, long ago there was an OTEP on this, and there was this idea of having a status source, so that you could set a status on a span and set the source on the span, and then back ends can do some reasoning to figure out, like, is it really an error or not?
G: So this one's kind of interesting, because it's kind of related to something Tim's been working on for GraphQL. He started working on, kind of, capturing errors for GraphQL requests.
H: It sort of... it gets into the philosophy of what does it mean to error, from whose context, and, like, the old HTTP status codes and what do they mean: 400 is "you messed up" and 500 is "I messed up", from the server's perspective. So marking a span on the server side because somebody made a bad query... it isn't, like, your problem, so why would it have a failed span, right, but...
E: It depends on... if you, I feel like, if you are the owner of the GraphQL server and, like, you know, if you want to be woken up about this; and maybe another team actually owns the client perspective, and I guess it depends on who should be notified, right. And I think, you know, to take that just a little bit further and dehumanize it a little bit, it's kind of like, if you want to, like, pinpoint the problem in your system.
H: But is there something else on the span that would tell you that users are receiving 400s? The HTTP status code, is it there on the span? So you could still find out that your servers are correct, but requests are coming back with 400s, without going to the span status to make that determination, right?
G: Correct. So that's one of the issues, like, that GraphQL typically has, is that you don't have, like... historically, like, tools don't really surface issues with, like, the GraphQL endpoints, because it's, like, 200s.
H: It seems like there's almost a... because GraphQL is a query language over HTTP, there's almost a whole, like, semantic convention for GraphQL called for, where the HTTP status might be fine on this GraphQL query, but the query didn't succeed, and so: GraphQL error status, something, yeah.
B: And keeping a lot of the information, say, like, in the event logs... you know, we're unable to really query the event logs that easily for, like, trying to build up monitors or alerts. So we go ahead and add additional attributes to the span itself when we see errors in the payload.
H: Right, I mean, there's always you opting in to, like, any GraphQL instrumentation. You could choose to set the span status yourself, instead of the auto-instrumentation declaring that the span status is an error. You, like, the app developer maintaining this server-side GraphQL query endpoint, can choose to set that span status, right, yeah.
E: Yeah, I guess I will mention, like, this is being... this is a topic that's being debated as we speak, so, like, if you do have opinions or want to participate in this discussion here, feel free to.
E: And I do think this was the last thing on the agenda. It was... so it sounds like we at least have a PR to go over for OTel Ruby.
I: Cool, the second one from the top. I think Rob has basically hit the nail on the head with his description: GraphQL does not follow the HTTP conventions for, like, 400s and 500s, and they have their own semantic definition of what an error is, and I linked some of the stuff from the spec, yeah.
G: Right, so I wonder if, like, there's a middle ground there that, like, makes sense, that's, like, still valuable to, like, the people using this. Like, because I know that that issue that we've just kind of come off of, like, it's talking about where the error should lie, and, like, I do agree that it makes sense that, like, if a client makes a bad request, that's probably an error for the client, and, like, the server's... like, GraphQL is not failing; it's saying you submitted a bad query and here's your error response.
G: Right, like, I was able to successfully respond and do all the work, but as an operator of the server, do we want to completely omit any information about that happening? Like, can we record that these errors happened without saying that the request failed on the server side? Like, what is a sensible compromise there? And I think, at the end of it, it just, like, kind of goes out...
E: Just to clarify, because I'm definitely not a GraphQL expert, and I was having trouble reading all those things on the previous page (they kind of scrolled far to the right), but is the idea that GraphQL will return you an HTTP status of 200 with this as, kind of, like, the body when something has gone wrong? So you don't really know by HTTP status code that there was a problem.
G: Basically, you get the payload of the information you requested, or, if there's a return type, for, like, a mutation, like a POST. But in the case of a malformed query, or anything else like that, you get a result, but it's basically an array of errors, like what you're looking at right now. So that would be the response of a malformed query. So you'd still get 200 OK, but then you'd get this, like, this noise.
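For readers without the screenshot, the shape being described looks roughly like this: HTTP 200, with the failure carried only in the body's errors array. The values here are illustrative, not taken from the PR:

```ruby
# HTTP/1.1 200 OK, but the GraphQL layer reports the failure in the body:
response_body = {
  'data'   => nil,
  'errors' => [
    {
      'message'   => "Field 'foo' doesn't exist on type 'Query'",
      'locations' => [{ 'line' => 1, 'column' => 9 }]
    }
  ]
}
```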
H: But if we pull up the sample trace visualization, which was the other screenshot on this PR, these are GraphQL spans, basically, not HTTP spans.
G: I forget what the action that's used for it is, but you basically have a GraphQL controller, and then under there... so it's, like, your Rack span, and then under that it's nested. So this is kind of like your typical server response, is what you'd be looking at. So in this picture, imagine the root is, like, your Rack span.
H: Okay, I see, on the one span that's highlighted here, the graphql validate span has a tag, error: true. I imagine that there's no spec for GraphQL, like, semantic conventions in OpenTelemetry for GraphQL, right?
H: I'm wondering if the error state could be reflected in these, like, GraphQL-namespaced spans, and then the HTTP spans generated by Rack and the controller level sort of follow the HTTP spec, and you would find out that the GraphQL errored in the GraphQL spans, versus up at the HTTP level, right? And I am hand-waving; all this is, like, thinking out loud, not with a whole lot of GraphQL experience. Does that... so.
E: I think your suggestion there, Rob, is kind of what was going through my mind, and it kind of, like, dovetails with that last document, of, like, the semantic conventions process. But it sounds to me like we are kind of in this situation where, like, we're trying to use... I feel like we do this a lot when writing instrumentation and looking at the semantic conventions; it's like you kind of try to find, like, the nearest convention that, like, applies to your situation. So for GraphQL...
E: It's like we're kind of, like, staring at the HTTP status code because, like, it's the thing that's defined. But maybe for GraphQL, if we could locate one of these subject matter experts, and not make them sit through a meaningless meeting for too long, they might be able to come back and be, like: you know what, actually, these should have, like, dedicated codes, and we should call them, you know, these names, and they are different than the HTTP status code.
H: But I could see, any time you're making a GraphQL span and it contains logs that are errors, the span itself gets a status of error. Like, in the semantics of GraphQL, this graphql validate action would be considered a failure because it has errors, and so therefore the span status for graphql validate is error; but HTTP, like, four levels up, the HTTP status is, like, 400, it's not my problem, so that HTTP span status would be okay, because...
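A hedged sketch of the pattern Rob describes, with the GraphQL-level span carrying the error while the enclosing HTTP span stays OK, using the public OpenTelemetry Ruby API; the span and event names are illustrative, not the actual GraphQL instrumentation gem:

```ruby
tracer = OpenTelemetry.tracer_provider.tracer('graphql-example')

tracer.in_span('graphql.validate') do |span|
  errors = validate(query) # hypothetical helper returning an array of messages
  unless errors.empty?
    errors.each do |message|
      # Each GraphQL error becomes a span event...
      span.add_event('graphql.error', attributes: { 'message' => message })
    end
    # ...and the GraphQL span itself is marked as failed.
    span.status = OpenTelemetry::Trace::Status.error('GraphQL validation failed')
  end
end
# The Rack/HTTP span a few levels up can still carry http.status_code = 200
# with an OK span status.
```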
A: Sort of. I think that makes sense to me. I generally don't care if it's opt-in; I think it's fine, anything's fine, it's, like, my very lazy opinion. Then, you know, like, because we can always change it if people complain, so, like, let's...
A: Generally speaking, I think our Node team at Datadog has hit a similar... not issue, it's more just, like, complaints: some users like it the way it is, and others want sort of what's being suggested here, and they just allow... I think what Node's doing is allowing arbitrary procs that users can pass in, on both the execute, the parse, and the validate spans in GraphQL, and then you can define your own semantics for what you want to mark as an error, and, I mean, for Ruby it's pretty...
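A sketch of what that proc/hook idea could look like in Ruby; this configuration hook is hypothetical and is not an existing option of the OpenTelemetry Ruby GraphQL instrumentation:

```ruby
# Hypothetical: the user supplies a proc deciding whether a GraphQL result
# should mark the span as an error, instead of the instrumentation hard-coding it.
mark_errors = lambda do |span, result|
  errors = result['errors'] || []
  unless errors.empty?
    span.status = OpenTelemetry::Trace::Status.error("#{errors.size} GraphQL error(s)")
  end
end

# The instrumentation would then call something like
#   mark_errors.call(span, graphql_result)
# after execute/parse/validate, letting each app define its own semantics.
```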
A: But so that's one thought: if we're trying to be, like, incredibly agnostic and, like, hands-off, and just say, like, we want to give people the hooks... I'm a big fan of allowing people hooks, arbitrary hooks. So, like, that's one thought, but I generally am fine if we want to have more, like, if we want to just say, like, you can flip a toggle that says, like, let's mark these as errors or not, yeah.
A: Like, I don't know what the right level of abstraction to provide is, because at the same time you can also just say: well, you can just use the API and do it yourself. So, but yeah, I generally find hooks to be a nice middle ground, or, like, whatever... procs, I guess, is the Ruby way of doing it, or lambdas, but yeah. Just, just...
A: I'll try to review the proposal in more depth, but that's one thing that came to mind, because I remember the Node folks were getting similar complaints with the 200s, and then people, customers, are like, well, those aren't... like, I do need to mark those as errors; they're not 200s to me.
A: But yeah, I'll have to think about it. It would be nice to know.
C: So it's not using HTTP as designed, like, it's just a transport protocol, right; it doesn't interpret or modify the response code. Instead, it encapsulates its own error-reporting mechanism.
C: And I think this has been a problem for a while. I mean, some back-end systems want to just look at client spans and server spans and count metrics based on those client spans and server spans, because that's a really easy thing to do; you don't need to look at a full trace to just count things. But in order to do that, you need to be able to propagate the errors to, if you like, the outermost server or client span.
A: And one small note, just from looking offhand: should we be naming the event, should it have semantic significance, like, should it be called exception, or error, the event name itself? Or is it okay, like, is graphql error the appropriate...? My understanding was... okay, I should comment on the... I guess I need to comment up here. My understanding was there's some semantic significance to some event names, which might be a good way to signal, like, this.
A: Like, hey, it's an error, but it doesn't belong in the HTTP thing.
C: Yeah, I mean, there should be some namespacing there. I can't remember what we add for the error events, but it's probably namespaced in some way, with, you know, a dot separation.
A: My only other question is, like: what's the... like, let's say we've merged, let's say we, you know, take this proposal as is, which I think is a good proposal. It's incrementally helpful for some folks and doesn't affect existing users. Like, we're not beholden to... you know, we can have breaking changes with it, right? Because our instrumentation is pre-1.0 and won't be marked as stable soon.
A: Cool, okay. Well, oh, we'll comment... just the same, with the same speed that I'll do the issues that are assigned to me, but I'll comment, yeah.
G: I think, like, kind of making sure that, like, the names of the events follow whatever conventions are available for surfacing errors, and then making sure that this functionality is opt-in and not on by default, is probably a good starting point, because at least it puts us in a position to get feedback from people who decide to use it. I'd like to see how it fares internally at Shopify; like, I'd like to enable it and see what the reaction is.
G: If it goes unnoticed, then maybe it's not that valuable; if it becomes noisy and confusing people, once again, that might not be that valuable; or people will like it, and someone will realize that, you know, Shopify could say, maybe the documentation on some endpoint isn't very good, because, like, 50% of the queries to this, the GraphQL endpoint, are failing, right? And, like, it can provide different insights.
G: But we don't... like, it's hard to get a sense of how good or bad this really is until we allow people to use it. And the spec discussion right before we switched over to this actually did provide a pretty good perspective that I honestly hadn't really thought about: it was, like, if we're gonna mark this as an error, it probably makes sense to be on the client in this situation, and, like, the GraphQL Ruby gem has support for being both the client and the server.
G: So it's like, maybe this patch moves over to kind of the client-side behavior. But once again, I don't think that should necessarily block too much of this; it's just further improvements, further thoughts there.
C: Yeah, I guess there's a question about: is this an exception at this point, and should we be adopting the semantic conventions for exceptions? Or is it just, you know, validate returned false and we then mark it appropriately? The... so, the other question is whether there are other errors that can occur in GraphQL processing that represent server-side errors, where we need to mark things accordingly.
H: Does this taste the most like a database query failing? So, like, if you were to pass PostgreSQL a bad SQL query, what would Postgres return? And maybe those are better semantic conventions than HTTP calls for these internal GraphQL spans, like, "I'm validating that this is a query I can run." Is there something in the db namespace that might be equivalent?
G: Yeah, I think some people have cautiously referred to GraphQL as an RPC. Going off of that a little bit: there's also, with GraphQL, they have kind of this weird, not weird, but a different, rate-limiting bucket cost associated with GraphQL queries. Like, you can actually submit your query and see how much this cost against my limit, and that information does get returned.
G: So if you exceed your kind of... your bucket limit, or you try to write a query that is too expensive and the operator has deemed it unacceptable, like, you've requested everything and you've graphed out as far as you can, you want every single result possible... you know, the operator is going to say no, right.
G: Yeah, I was just wondering if that's, like, something else we would want to potentially surface there, and it's, like, how often are people performing queries that are too expensive and we're returning results saying no for that, right? Like, there's a lot of valuable insights that could be surfaced there, but I don't know if it's appropriate for us to be doing it at this level, and I think the answer is maybe.
G: I don't believe so. I've only ever seen it from, like, the client side, like, working against Shopify's API before I worked there. It's like, who's playing around with the GraphQL API, and it's like, okay, try to perform this query... no, it's too expensive! You get a response, you get your 200 OK, like, everything's okay, we were able to respond, but your query is way too expensive; try again. Okay.
G: Just because it's a little bit open-ended right now, just to confirm: is everyone comfortable with the idea of, like, us cleaning up the event naming, putting this behind opt-in configuration or a configuration flag, and then that's potentially, like, a good first step? Is that gonna cause anyone pain?
B: Well, I was going to say that I'm looking at, like, the JavaScript implementation, and they use record exception specifically for when they have errors, and they stringify the body.
B: It might be helpful to have a conversation with the folks there, you know, or bring them in and get some perspective from other folks who have dealt with this already, so we have some sort of symmetry in how it's implemented. Yeah.
B: I pinged Robert... I can't ever muster... he's asleep right now, but to see if he can comment on the PR and give some insight into, like, what he thinks about it. But outside of that, I lean towards symmetry, and we've already got, like, prior art with JavaScript, so I lean towards there. I tried to look to see if there were other implementations in other languages, but didn't find any; but then again, my search foo might suck, or whatever.
H: And in a quick scan, looking at both the RPC semantic conventions and the exception semantic conventions: the exceptions are recorded as an event. So maybe these log entries that we're putting on for the validation errors should be exception events, and then the validate span itself is more like an RPC call with an error message.
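Sketching how the exception semantic convention maps onto the Ruby API: record_exception attaches a spec-defined "exception" event to the span. The rescue class and the validate! helper are illustrative, not the actual instrumentation:

```ruby
tracer.in_span('graphql.validate') do |span|
  begin
    validate!(query) # hypothetical helper that raises on an invalid query
  rescue StandardError => e
    # Adds an "exception" event carrying exception.type/message/stacktrace.
    span.record_exception(e)
    span.status = OpenTelemetry::Trace::Status.error(e.message)
  end
end
```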
G: Yeah, and I do agree that we should try to be consistent with the other specs. So if JavaScript's done something... as long as, if it's done in a way that doesn't make sense to us, we don't want to just blindly copy it either, but if it's sensible and not much different than what we already have, then I'd say, like, yeah: why not be consistent with them?
G: We're already a bit over. I didn't have anything brought up; I just thought I'd mention that I have been working on the Rails instrumentation. I've been shuffling some stuff around, trying to get us to a point, like, a position, where we're much more consistent with, like, Rails itself: like, it's a gem that includes gems, and those gems have Railties, and that's what does the setup.
G: So I have a PR that's not ready for review, but it's moving a bunch of stuff around; it already moved some of the Action View stuff around that Andrew had done, so there's more to come. But it's going to be a pretty big change, and I'll probably call upon some people to maybe help test it in their little environments, to see how it goes and make sure everything's good before we actually do a release for it, in case the size of it broke everything for everyone, because it's been working decently well so far.
H: I'm continuing to get familiar with this complicated code base, so I'm... but I can test things, happy to help with that.
B: So I'll add, like, two points of victory. One is that the GitHub monolith, the Rails application, is on Rails 7 and is currently running OpenTelemetry; we've ripped out all of the OpenTracing and replaced it with the OTel Ruby SDKs. Now, we still have a lot of manual instrumentation in there, so we're not using a lot of the auto-instrumentation libraries, but definitely using Rack and Faraday right now, which is, so far, so good.
B: We have an app that's running Ruby 3 right now, and it looks like, when the OTLP exporter encounters OpenSSL errors, they're not getting caught, so they're being raised out. So the person, Parker, opened up an issue in the repo; I'll see if I can get to it, but essentially there might be some class of errors that we're missing.
B: Okay, and in those cases it appears to say something about decryption failing, but it is more likely that the socket closed without a response, right, and that's probably just, you know, like, the OpenSSL error is just masking what the error is. But, you know, that's pretty much it. Thank you for all the amazing work, y'all.
B: Lucky, lucky for me, it's only happened three times in the past day, and it's not happening in the monolith; it happens in a smaller app using Ruby 3. So I don't know if those things are all related to each other, but...
G: We've opened up the floodgates for, like... in talking about, like, GitHub flexing that they have their monolith migrated to OpenTelemetry already, just showing off without any humility: we have not, but we have about 40% of Shopify migrated to OpenTelemetry at this point. So we're getting there; we're not quite at 100% of apps yet, but we're getting there, but, just, like, that... it's not, like, the monolith.
G: We've seen a few little things that we've had to add, you know, these one-off rescues. So I think that's probably what you're going to do: in the OTLP exporter, just add another rescue line for that class of error and decide how to handle it. We had to add a few from when we did the, like, initial floodgates; there's lots of little encoding errors here and there, these little edge cases. You know, anything that's processing a shipping label that might raise an error is going to have some crazy characters, right?
B: So, yeah, I think what ended up happening to us, at least, and I'm sorry, I don't mean to hold everybody hostage here... my experience was that, like, once I turned it on, you know, at 100% scale, because we were only doing it, like, on our canary environment, I wasn't seeing a lot of the messages, but all of a sudden, like, all of the warning messages about "this key is a symbol", or, you know, "it was not a valid value for this key", billions of log events just started, like, spewing out, and...
G: So what I'd suggest you try doing for that: the way we're handling that, and we're not forcing anyone to do it because we're trying to make the rollout smooth, but we have our migration guide in our repo, and we added a helper that just sets the OpenTelemetry error handler to raise. And you, say, in your test environment, so, like, Rails' test env, just call this, our wrapper gem dot, like, raising handler, and so in CI...
G: Because, like, these logs are, like, at the error level, but if it's getting to the error handling, these are, like, actionable errors; they're not, like, things people can potentially ignore. The only catch there, and the other week I added that specific error type we were talking about, with the backtrace, is the weird stuff like rake db:migrate.
G: You know, that boots up Rails, and if it doesn't have anywhere to export to, your handler explodes and you basically break people's, like, Rails tools. So what we're doing is, like, we're gonna add the exception handler and we're gonna just kind of ignore "failed to export" in that case.
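A minimal sketch of the kind of helper being described, built on OpenTelemetry Ruby's error_handler hook; the helper name and the carve-out for export failures are illustrative, not the actual Shopify wrapper gem:

```ruby
# Hypothetical test/CI helper: make OpenTelemetry's internal errors raise
# instead of just logging, so bad attribute values fail the build.
def use_raising_error_handler!
  OpenTelemetry.error_handler = lambda do |exception: nil, message: nil|
    raise(exception || RuntimeError.new(message))
  end
end

# In production, and for things like `rake db:migrate` booting Rails with
# nowhere to export to, you would keep the default logging handler or
# deliberately swallow "failed to export" errors instead.
```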
B: Yep, copy that. I'll take all of those things under advisement, but, like, basically, for our test suite I have it disabled. So, unfortunately, I'm not using our actual configured global SDK for our test environment; I've done what we do for the auto-instrumentation test suite, which is to have essentially an SDK tracer with the in-memory span exporter, with no configuration whatsoever. It's just kind of like: if you want to write tests, you want to write tests around the spans, the internal spans you're writing yourself, right?
G: Configure... all of our, like... our wrapper uses the Railtie for Rails apps; we pass in the Rails environment, and we select on a few things. So, like, if it's production or staging, we use, like, our OTLP exporter; if it's dev, we check for the presence of the Jaeger exporter gem and we'll use the Jaeger one, so they can do local tracing; otherwise, we use in-memory, and then tests...
G: We use in-memory, but we default recording to false, and then we surface a little, like, block yield. So it's, like, if I want to explicitly test this span, I wrap it in, like, "with recording test spans", and then they have access to the finished spans. So you basically get it as close to production as you possibly can; the only thing that changes based on the environment is whether the resource detector runs or what exporter you're using, and that way, like...
The
only
thing
that
changes
based
on
the
environment
is
whether
the
resource
detector
runs
or
what
exporter
you're
using
and
that
way
like.
G
G
Conflicts,
I
don't
want
to
find
out
in
production,
there's
like
that.
Never
ending
battle
that
you
we
want
to
make
sure
that
people
don't
associate
tracing
with
breaking
their
apps
so,
like
sometimes
people
come
in
and
they're
like.
Oh,
my
vcr
cassettes.
Don't
work
because,
there's
being
a
trace
header
added,
can
I
disable
tracing
for
tests
and
I
say
well,
of
course
you
can,
but
then
I
actually
say
it's
like.
No,
you
can't
you
can't
disable
it
do
this.
Instead.
B: That... right now, you know, like, we... you mentioned we have a very similar setup, so we have, like, a little wrapper that does that, that uses the initializers to... oh sorry, the Railtie to initialize everything for us. However, for the monolith I'm not using that, right; I'm just kind of, like, building things myself, for many strange reasons and many complexities in the monolith.
B: Yeah, yeah, nothing works exactly the way that I want it to, but this is actually the issue I was just referring to, that Parker had reported this morning; he started seeing these yesterday, and this is pretty much it. Based on the stack trace there, it looks like he's using the Net::HTTP instrumentation, which happens to be instrumenting the OTLP exporter underneath the hood, and so, for whatever reason, he gets...
B: ...this OpenSSL error, which ends up raising outside of the exporter and being reported up to the application outside of it, and it doesn't look like, you know, OpenSSL errors are included in any of these rescues that are occurring. So... and please go ahead, Mr. Kidd.
B: But I think what's happening, if you look at the stack trace, is it runs through untraced.
B: It's on the sysread_nonblock for OpenSSL that this is coming out of, and this is coming out of Ruby 3.0. And I don't know, again, if this is only surfacing itself because this is a Ruby 3.0 app; I've not seen it in any other app that we've deployed, which all run 2.6 or 2.7 right now. What was...
B: The specific error is saying that there's a bad MAC or it can't decrypt the message, and, you know, that doesn't necessarily mean that there's a problem. I...
B: Yeah, so totally, like... I don't know if there's a situation where, like, the server and the client no longer match, or, you know... or, sorry, whatever that might be. I mean, it could, again, also be that it got an empty response when expecting a response to be present, and that also masks that error, right; it's represented in the same way, but it doesn't say, like, "no bytes received" or something like that as an error.
B: We could say, make kind of, like, the "eat all the exceptions" at the bottom, you know, avoiding things like SystemExit and out-of-memory errors and whatnot, and then just have, like, a rescue Exception down there, I think.
G: It'll probably get pretty busy, and, like, the actual exporter, like, we already got a few; I don't see the harm in adding another one, and if we decide to just eventually switch to the big net, that could happen. But, like, for now, let's make sure we understand the things we're rescuing and why they're happening, and do it piecemeal like this. I think that's, like, the more thorough approach; even if it's a bit more verbose, I think it's what we want. That's, like, kind of a collective group, right?
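Paraphrasing the fix under discussion (not the actual exporter source), the piecemeal approach amounts to adding OpenSSL::SSL::SSLError to the classes the OTLP exporter already rescues around its HTTP request:

```ruby
begin
  response = http.request(request)
rescue Net::OpenTimeout, Net::ReadTimeout,
       OpenSSL::SSL::SSLError => e   # the newly observed failure class
  # Route through the configured error handler rather than raising into the app.
  OpenTelemetry.handle_error(exception: e, message: 'OTLP export failed')
  OpenTelemetry::SDK::Trace::Export::FAILURE
end
```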
B: Yes, absolutely, because sometimes, with these errors that occur, we want to try to track them down, so if we can...
H: ...see them, that's nice. What... I'm thoroughly unfamiliar with this set of code... that backoff method, that is, the retry backoff. How does OpenTelemetry tell me that I'm getting OpenSSL errors? Like, if I'm in a situation where I'm not getting traces showing up in my back end because of an SSL error, how do I know that there's...
G: Yes, on line 239 there's the metrics reporter. This is actually something we, Francis, added, and it was very much driven by, like, parity with our own internal implementation. So the OTLP exporter, the first build of this was built internally at Shopify, and then we brought it into, like, the official one and made some obvious improvements to fit the spec. But before I even joined tracing on our team, what they had was: they have statsd just emitting metrics based off of...
G: ...whether we're successful or we're failing; like, we capture request duration. And so what we have here is, like, a metrics reporter interface, and you can just supply your own, with whatever underlying metrics tool you want to use, so, like, just giving some transparency, like, in Shopify...
G: So I can see the health of, like, the instrumentation. So, like, we have an instrumentation team, that's what me and Tim are working on, and this is, like, really, really critical for us, because we want to know if our instrumentation is working. There's a whole separate, you know, instrumentation, or, like, metrics, for our transport pipeline, but for our slice of the world, like, these metrics reporters are critical, because we don't know if things are failing otherwise.
B: And I'd say we depended on actually our vendor, Lightstep, to tell us that information, because that was included in the Lightstep SDK. Whenever we would send out trace information, they would include, like, hey, a buffer, like, this one batch, this buffer of a number of spans, were dropped, so they reported that to us through statsd to their agent. So when I saw that this was included in the client, I was like, hell...
B: Yes, this is awesome, because it gives me the granularity into what specific client, what service, is the one that's failing, and so you graph these out and you're, like, oh shoot, I need to tune the hell out of these attribute span limits or whatever, I'd like...
G: So the... where is the interface?
H: I saw some mentions in, like, the RDoc above, the OpenTelemetry SDK trace export module's metrics reporter, there's...
B: Robert, how do you feel about us just adding a couple of classes, one for Datadog, that's... the one for statsd-instrument, because that's basically what I wrote... should we just...
G: No, you don't need the monkey patch, you just implement it, and then you supply it to your batch span processor, and you can import it.
B: Dude, what it looks like to me, the reason why the metrics reporter extends self is it's treating it as if it were a singleton, and treating the class as an object. So, whenever we're initializing it, it's not allocating, you know, multiple objects every time an SDK is created. It has this default, kind of, no-op function, and...
H: I could picture, as the OpenTelemetry metrics spec stabilizes, we'd have an implementation of, like, "send it to an OTel metrics endpoint".
G: Right, but obviously, like, when this was implemented, that was, right, not close, right, right. So I imagine that this will probably shift or change at some point, but for now it's really, really simple, it works really well, and you just have a concrete implementation of this, or just anything that responds to those three methods; you pass it in and it'll do its thing.
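A hedged sketch of a statsd-backed reporter: the three method names here are recalled from the exporter's metrics reporter interface and should be checked against the current source, and the statsd client calls assume a Datadog-dogstatsd-style API:

```ruby
# Assumed interface: add_to_counter / record_value / observe_value.
class StatsdMetricsReporter
  def initialize(statsd)
    @statsd = statsd
  end

  def add_to_counter(metric, increment: 1, labels: {})
    @statsd.count(metric, increment, tags: format_tags(labels))
  end

  def record_value(metric, value:, labels: {})
    @statsd.distribution(metric, value, tags: format_tags(labels))
  end

  def observe_value(metric, value:, labels: {})
    @statsd.gauge(metric, value, tags: format_tags(labels))
  end

  private

  def format_tags(labels)
    labels.map { |k, v| "#{k}:#{v}" }
  end
end

# Supplied to the exporter (keyword name assumed from the discussion above):
# OpenTelemetry::Exporter::OTLP::Exporter.new(
#   metrics_reporter: StatsdMetricsReporter.new(statsd_client)
# )
```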
E: You said... okay, I remembered: earlier I was trying to rattle off, like, the four areas that needed improvement that were acknowledged kind of back when tracing was GA'ing. Now I remember the fourth; the fourth was diagnostics, and I think this is exactly kind of, if that group ever does materialize, and I haven't heard anything about them lately, bringing our experience with this metrics exporter, to kind of have a way to trace the...
B: ...in there, be like, hey, Java people, this is how you usually do it. All right, listen, you are all awesome. Thank you for, what is it, indulging me for the last, like, half an hour of this day, and I know that Robert's, like, fine, because he's like, I had rice and chicken right before this, so I'm not...
B: I'm good to go. But for the rest of us who did not eat lunch yet, I hope you all have a great afternoon. I'm...