From YouTube: 2021-07-27 meeting
C
Yes, we certainly do not skimp on swag and graphic design stuff.
B
Hello, everyone, it's about the usual time. We have almost everybody, I think; others may trickle in, but I guess we'll go ahead and get started with the usual agenda unless anybody has any other suggestions. So we'll start through with the spec SIG update and look at any issues and PRs with our repo, burning questions. It looks like we also now have this happy reports section, so just anything for any of these categories.
B
There are some changes to the metrics protocol that have been proposed. There wasn't really any technical discussion on these, other than a call to look at them. So I think it's mainly for people who have been involved in the metrics work: just a call to look at things here. I'm not going to try to give a summary of any of them, because I probably wouldn't do them any justice.
B
So, probability sampling. I know we talked about this a little bit. OTEP 148 has been around for quite some time. Pretty sure this is the one that basically says we need to propagate the adjusted count for things, ultimately.
E
Yeah, this turned into... they ended up extracting the part of it about the propagation, and this turned into just: we need to record attributes on the span indicating the things that went into the sampling decision.
E
Wrong one: this is 148. You want to look at 168, sorry.
B
I have participated in this distributed tracing working group in the past and was around for a lot of these original sampling decisions, and it's a touchy topic. But I think ultimately the discussions in that group went something along the lines of: sampling is a very broad topic, and it's very hard to get consensus between different vendors and systems on how to do it. So pretty much the only thing that was in common was having at least a bit.
B
That said that you have been sampled, or "sample this" or "do not sample this". That was about as far as anybody was willing to commit to on the sampling up front. For anything else, the recommendation was: well, there is trace state, and if you have your own specific requirements around sampling, you, as the owner of these telemetry systems, can totally ignore anything in the traceparent regarding sampling and implement something else based on stuff in trace state.
B
One thing I can say from having worked with trace state a little bit is that as soon as you start adding custom entries in there, it does get a little messy. It's not a lossless piece of data: if a trace hops through enough systems, you can lose things. And programmers are bad at their jobs and have a lot of bugs, so you can lose things that way too. Not totally true; maybe we're not bad at our jobs, but there's just too much to think about, so bugs happen.
B
Anyhow, I think this proposal is somewhat interesting, actually, because it's not just proposing to modify traceparent to propagate the sampling probability; it's coming up with a very clever way, I guess, to encode a sampling probability in a very small number of bytes, which I think is one of the other things.
B
That was a problem when we were talking about trying to propagate probabilities before: typically these end up being floating point numbers, and unless you put some heavy limits on the precision, those can get pretty long and just add a lot.
B
But the proposal is to make this last thing a log count, and the log count would be, I guess, a negative exponent: it would be two to the negative of whatever is in the log count. So they would all be power-of-two probabilities. Here we go: if you had a log count of two, it would be 1-in-4 sampling, and if you had a log count of ten, it would be 1-in-1024.
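The power-of-two scheme described above can be sketched in a few lines (the helper names here are illustrative, not from the OTEP itself): a small non-negative integer, the log count, stands in for a sampling probability of 2^-c, and the adjusted count each sampled span represents is 2^c.

```ruby
# Hedged sketch of the power-of-two encoding discussed above; the
# helper names are made up for illustration, not the OTEP's API.

# A log count c encodes the sampling probability 2**-c.
def probability_from_log_count(log_count)
  2.0**-log_count
end

# Each sampled span then stands in for 2**c original spans.
def adjusted_count_from_log_count(log_count)
  2**log_count
end

puts probability_from_log_count(2)     # => 0.25 (1-in-4 sampling)
puts adjusted_count_from_log_count(10) # => 1024 (1-in-1024 sampling)
```

Because the probability is always an exact power of two, the log count fits in a single small integer, which is what keeps the on-the-wire encoding so compact.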
B
I think my gut feeling is that, even if the group is open to this, it might take a little while before it would be ready to use. So there will probably be a trace state implementation of this, at least for some point in time.
B
I have also had these exact conversations as to where the right place for that is. I think the sentiment, that anything the tracing system needs belongs in trace state, is kind of this mantra that ends up getting repeated every time you ask these questions. But I do think, and I was bringing up this point, that when trace state was originally conceived...
B
It was this idea for how a trace could pass through multiple vendors: when it leaves one vendor, goes to other vendors, and comes back to one it had been in before, you have this breadcrumb where you can pick up the pieces of when it was last in your system. You know: okay, it left the system, we went to these other ones, and if these other systems had some APIs, we could ask for the spans, or somehow go after the spans.
B
That was kind of a later wish list. But I was bringing up that this predates OpenTelemetry, and with OpenTelemetry you now have the situation where OpenTelemetry wants to use trace state and you have vendors who also want to use trace state.
B
It gets kind of weird if you start looking at the rules of trace state. If you modify an entry in trace state, you can move your entry to the front, but if you haven't changed anything it should stay where it is. So I was making the argument that some of this OpenTelemetry stuff, if it's actually kind of static, falls off the end of trace state, while the stuff that is getting updated stays at the front.
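As a rough illustration of that rule (plain Ruby, not OpenTelemetry's Tracestate API): setting your own key re-inserts it at the front, while entries you never touch keep their relative order and drift toward the end, where they are the first to be truncated when the header gets too long.

```ruby
# Minimal sketch of the W3C tracestate update rule described above:
# a modified or newly set entry moves to the front of the list;
# untouched entries keep their relative order behind it.
def set_tracestate_entry(header, key, value)
  entries = header.split(',').map(&:strip).reject(&:empty?)
  entries.reject! { |e| e.start_with?("#{key}=") } # drop the old entry, if any
  (["#{key}=#{value}"] + entries).join(',')        # re-insert at the front
end

ts = 'otel=static,vendor_a=rv:1'
ts = set_tracestate_entry(ts, 'vendor_a', 'rv:2')
puts ts # => "vendor_a=rv:2,otel=static" (the static otel entry slides back)
```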
E
Yeah, from my recollection, trace state's API is not perfectly defined either. In particular, it kind of punted on any internals; it just said there's this raw string that you can pass through. The other problem here is that this feels like overreach on the part of OpenTelemetry.
E
This is really something that should be defined by W3C, not by OpenTelemetry. The problem I have with this OTEP is that it punts on the thing that's really important, which is how you modify the OpenTelemetry API to accommodate this additional piece of information, and it instead focuses on the bit that is out of its scope, which is how you modify the W3C traceparent spec to accommodate this extra field.
E
So if we created a hypothetical trace format and said, let's use this as our working exemplar: how do we modify the API to pass around this information, make it available to samplers, and all that sort of thing? My two cents are: this is happening the wrong way around, and we're focused on the wrong part of the problem in the wrong organization.
B
Yeah, I think there are definitely some fair points here. There is some interplay between these groups, or at least historically there has been, so I think it's good to have a proposal for the W3C group to at least think about and consider.
B
All right, less interesting, but nevertheless something that has created a bit of a bike shed: adding the distro name and version to resource. Most people, or most users, will probably end up wrapping...
B
Wrapping OpenTelemetry and having their own wrapper layer, a distribution. It'll be OpenTelemetry at its core, but you'll probably add some niceties around configuring it for your company or business use case, and probably pull in a curated list of instrumentation. And when you do this, you would probably like to know what a service is using.
B
Is it using your distro, or is it using vanilla OTel? So the suggestion was to add attributes for the distro name and distro version. That way you would leave the telemetry SDK language and version alone, so you can still see what it's based on. I think the bike shed was starting to occur in that there was definitely a camp that felt these were redundant, duplicating information on the wire.
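The suggestion amounts to something like the following sketch (plain Ruby; the `telemetry.distro.*` keys mirror the proposal under discussion and are not settled semantic conventions, and the distro name and versions are made up):

```ruby
# Hedged sketch: the distro merges its own identifying attributes into
# the resource without touching the telemetry.sdk.* entries, so both
# the distro and the SDK it wraps stay visible.
def with_distro_attributes(resource_attrs)
  resource_attrs.merge(
    'telemetry.distro.name'    => 'acme-otel-ruby', # hypothetical distro
    'telemetry.distro.version' => '1.2.3'
  )
end

attrs = with_distro_attributes(
  'telemetry.sdk.name'     => 'opentelemetry',
  'telemetry.sdk.language' => 'ruby',
  'telemetry.sdk.version'  => '1.0.0.rc2'
)
```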
B
That camp's view: in your SDK you should just overwrite the existing attributes, and the language is the only...
B
I think it is the name, so you just overwrite the name and the version with your names and versions. And you, being the owner of the wrapper, would know... theoretically, you would know the underlying OTel version and name that it was based on.
F
I'd flip that: as a vendor who has made a distribution, the users of my distribution don't necessarily... they would have to go hunt down what the inner nougaty center of my distribution would be. So we found it convenient to duplicate that, with the understanding that it's increased size on the wire.
B
Yeah, I mean, you could have a loose dependency on, say, a minor version. You could be picking up different patch versions and not want to have to release a version of your wrapper for every patch, for example. Personally, I think this is not a huge deal, but I think there is at least this...
B
I guess at least a recognition that we keep plastering on new attributes, and should maybe have some discussions and thoughts around them.
D
I think this will be useful in a year or two, when the sales cycle comes back around again and people are moving between vendors. They'll want to understand where they're coming from and what existing capabilities they might have in the distribution they've already got installed. Right now it's fine, because no one's hopping around yet, but I don't know, I think this will be useful in the future, though probably not a huge deal right now.
D
In my mental model right now, if you were to use the AWS distro of the OpenTelemetry collector, for example, that doesn't come packaged with a lot of processors and receivers and exporters that you might otherwise be asked to toggle on or off. So if there was some place every vendor knew to look, to see, okay, where's the...
D
What's the prior art here that I'm dealing with? Maybe as a sales engineer or a field engineer who's asking to move over an implementation, it would help to know where they're coming from. I think it's fine whether you duplicate it, make a separate field, or overwrite the field; just having somewhere that's designated, I think, has a lot of importance and will prove helpful. So, yeah, I guess I'll comment if I care. Cool.
B
Yeah, there was a brief update from the metrics SIG. The metrics SIG has made this transition now from having metric labels, which were string-to-string key-value pairs, to having them as attributes, which are the same data structure and format as attributes on a span. I think the underlying wish there is that you can bundle up the same attributes and attach them to a span or metrics freely, without having to convert your attributes to labels or vice versa.
B
So I think they have made that switch, but it sounds like the collector has not yet made that switch. So there's a disconnect, and I think they might have to.
B
Just a brief update from the instrumentation SIG. I think we talked about this document a little bit last time; it's a proposal for how to bring instrumentation up to a stable state. In general they've iterated a little bit on the document, but there's not a whole lot else to report. There was definitely a reminder that this SIG is meeting during the spec SIG Asia-Pacific time slot, which is 4 p.m. Pacific on Tuesdays.
B
There have been a handful of issues and a PR around attribute limits. I know we talked about this before; I know something has changed, but I don't know exactly what. It's kind of been an ongoing bike shed.
B
Yeah, this is ongoing, and I don't exactly remember where it was, but it does look like it's getting a little more granular on whether it's on the span or the span event. Looks like it was like this before.
D
Is there a rough number they have here? I think 5,000? 50,000?
B
And I think the main takeaway from this was just specifying that the limit should be configurable and how to configure it, but not giving one out of the box.
B
I know a lot of vendors have support for this, and there is an OpenTelemetry JS SIG; it does cover Node and browser. There is definitely a lot of expertise for Node over there.
B
The browser stuff is not bad, but I feel like it gets left behind a little bit. So I think having some people to champion the browser use cases is a good thing. There have been some discussions from some engineers at Splunk, and this is coming from Alolita at AWS, about...
B
Having a data model for real user monitoring. This is new, so I haven't been able to read through it; it's mentioned at the end of the agenda. But I have heard people asking questions about this.
B
I think there is interest in this. In her summary she did mention they were introducing a new event type, and I don't know what that means. I don't know if this is something that would be modeled using a RUM event instead of a span, or if it...
D
If you're curious about it: it came out 16 hours ago, but there are some co-workers of mine who are interested, and it's very similar to the Datadog browser SDK's model and our approach toward RUM, which I think is better than essentially having OpenTelemetry JS hacked into being a way to emit events. So, yeah, I think it's probably the right way to do the front...
D
End stuff. It'll be interesting to see whether Datadog pushes to be compatible with it or contributes to this, or whether other vendors have opinions. I don't know how other vendors do it, but I think this is considered the...
E
Yeah, I think Netflix has built something similar, and they ended up modifying their data model on the back end for tracing so it could accommodate this kind of event stream as well as traces, so spans and so forth. I believe Honeycomb knows a thing or two about event streams as well, and probably has opinions. Anyway, it's Honeycomb.
D
It'll be cool; I'm excited to see this. I think it's an unsolved problem with distributed tracing right now. It feels like people sort of try to push it onto their browsers, and either webpack gets in the way or... yeah. It would be really nice to have some consensus here around ways to emit things from a browser; that feels like a really common use case.
E
The two problems are: how do you get these events from an essentially untrusted source into your telemetry back end, and then the other problem is that the event model in browsers is not a really good fit for spans. I mean, you could probably just have one uber-span and attach all these timed events, but that also seems weird.
E
So instead, if you could natively represent it and then tie it together with traces, like back-end traces, in some way, that seems like a reasonable approach.
B
Yeah, I think this is a good thing for OpenTelemetry. Most vendors that I'm aware of have had separate approaches for Node-based JavaScript and browser-based JavaScript, so I think this is being informed by that; I think this is the reason why you end up with this as you get into it. So we'll see what the reception is and how this ends up playing out in the OTel world.
A
I think one of the interesting things about the event model is that probably a lot of it is going to end up getting handled at the framework layer: people are writing instrumentations for React, Angular, and Vue. Because, yeah, as you said, Francis, it's weird to try to make it adapt to spans.
A
My client is actually trying to do both. They would love this, which is obviously very early, and they are also going to do more of the one-request-at-a-time, originating-in-the-front-end sort of tracing.
E
One quick comment about the attribute limit update: amusingly enough, we already have almost exactly what's required for the length limit, because we copied it from the Java SDK.
E
All we need to do is remove a RUBY underscore from the environment variable name. I need to go and look at the proposal here, but we have some additional constraint that you can't set a limit less than 32, again probably just taken from the Java SDK. I don't know whether that is still part of the proposal here.
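For concreteness, a sketch of what that lookup might look like (hedged: the env var names and the clamp-to-32 behavior are assumptions based on the discussion above, not the settled proposal):

```ruby
# Hedged sketch of the attribute value length limit lookup. The RUBY_
# prefix is the legacy name mentioned above, and the 32-character floor
# is the constraint copied from the Java SDK; both are assumptions.
MIN_LENGTH_LIMIT = 32

def attribute_length_limit(env = ENV)
  raw = env['OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT'] ||
        env['OTEL_RUBY_ATTRIBUTE_VALUE_LENGTH_LIMIT'] # underscore pending removal
  return nil if raw.nil?                # no limit configured
  [Integer(raw), MIN_LENGTH_LIMIT].max  # never below 32; clamping is a guess
end
```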
G
I don't have a particularly interesting one: it's just a code shuffle. I could probably have some people look at it, if anyone's feeling generous, and maybe just do some manual testing with it in a different app; I'm going to try to do it in a couple. It's the third PR from the top, the refactor split on Action Pack. So this does quite a few things; I'll show you and update the title.
G
The initial pass of the Rails instrumentation, as we know, just did the patch for ActionController::Metal, so that it would rename the request endpoint from the Rack span. What this does is split it out into its own Action Pack instrumentation gem. It also includes Action View and Active Record. So once these changes are brought in, the next release will change the Rails instrumentation quite a bit: by default, people will be getting Active Record, Action View, and Action Pack, whereas previously it was just Action Pack.
G
So it's just a lot of code shuffling, moving some tests around to get them working. I left some to-dos on the PR, like adding examples, and I'm doing some manual testing. Some follow-up stuff that'll be good is splitting out the Active Support notifications work that Andrew did. I just stuffed that into Action View, so that Action View wouldn't depend on the Rails instrumentation; I wanted to remove that dependency. None of the sub-instrumentation gems should require the Rails instrumentation.
G
I'd like to beef up the Rails instrumentation tests, not trying to do all the edge cases, but doing your typical happy path: set up an example app with a controller that creates a user and renders a view, and make sure that all these sub-instrumentations actually generate the spans that we expect them to. Then the Action View or Action Pack instrumentation gems could test more of the different scenarios that we can run into, so those would be a little more verbose.
G
I think the Rails one is just kind of a simple, big integration test. So that's the long-winded version of what I'm trying to do here. It's not really any new code, it's just code moved around, but it is definitely going to be a breaking change, because some of the configuration options have moved around.
G
I do need to introduce an option for saying "I want Rails instrumentation, but I explicitly want to disable Active Record", or something like that. I have to look at where the right place to put that is. But yeah, this is mostly a shuffle. There'll be a couple of smaller PRs to follow up on it, but I don't want this one to get any longer than it already is.
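The option might end up looking something like this configuration sketch (hedged: `c.use` with per-instrumentation options is the existing Ruby SDK convention, but whether the Rails gem exposes exactly this shape is one of the open to-dos mentioned above):

```ruby
# Hypothetical configuration sketch; not a committed interface.
OpenTelemetry::SDK.configure do |c|
  c.use 'OpenTelemetry::Instrumentation::Rails'
  # Opt out of one of the bundled sub-instrumentations:
  c.use 'OpenTelemetry::Instrumentation::ActiveRecord', enabled: false
end
```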
B
Cool. So basically the ActionController instrumentation is packaged in the Action Pack gem, just to mirror where that stuff lives in Rails, and...
G
It all makes sense, yeah. I just figured if someone's looking at this, and they say they have the Rails instrumentation and they're having issues with Active Record, they should be able to go into the Active Record instrumentation gem, and even there all the patches should mirror the structure of the Active Record gem, right? I just want to make traversing and digging into it, if we have to later, as straightforward as possible.
G
A lot of this is forward-looking ease of maintenance, because it was already starting to get a little bit tangled from my initial pass, and I just want to undo that as soon as possible.
G
Yeah, because the boot stuff can be weird. Anyone who's done any of these instrumentations has run into some form of boot issue, and I think that's really what we're looking for, because the behavior hasn't changed, but the way it's initialized is slightly different. So what I'm looking for is: in a typical Rails app where someone has an initializer for OpenTelemetry, does it apply the instrumentation patches? That's basically all I'm really looking for.
G
I think it looks good to go. I want to thank Ariel for getting Robert from the graphql gem to look at it. It's nice. I think last time we discussed it...
B
So is the TL;DR that the client will get errors on their spans for invalid queries, but the server wouldn't?
G
And it's specifically associated with a validation step that occurs when it's processing the request. If the validation fails, it captures the information related to it, so in your query or schema or whatever: on this line, this failed because it's malformed, and you'll have that information surfaced.
G
Okay, yeah, that's really good to remember.
G
So I think, then, what we have here doesn't work, which is okay, but I guess Tim has the task of digging into the client side of it and instrumenting the client portions of the response, because the graphql gem can be used either in your server to handle the requests, or as the client to perform the request. So this is going to have to move.
G
I guess I don't know what, because graphql supports... there's an official path for adding instrumentation, and I don't know what that looks like from the client perspective of the gem.
F
I'm trying to think at a meta level: if I'm the person using the gem as the client, and as the client I have formed the query, I would want to know that the query I formed in the client didn't pass validation, as opposed to the person running the server, if they are separate people. It puts the error closer to the code that is being instrumented, that errored; really, the flaw is in the construction of the query, not in the server processing it.
G
Yeah, I kind of carry the perspective that a lot of the stuff I see internally is our stuff talking to our stuff. So I'm like, well, if a lot of people are having trouble with a single endpoint, I'd potentially want the operator to know. But I don't really want to fight that side of the argument too much, because I do think it makes sense to be on the client.
G
I think there's just a bit of value in the server operator having some visibility into how many people are failing. Shopify, for example, serves a pretty large GraphQL API for partner developers, and if one endpoint is failing constantly because of malformed requests, that could be a pretty good clue that the documentation isn't very clear, or something like that.
E
I think you want events, or some kind of count that you can look at, but you don't necessarily want to flag it as an error, right? Or if you do want to flag it as an error, that might be optional.
F
Like, maybe on the server side you could flag that this request had a validation error, but the particulars of the error are not your server's fault; it's just "I returned the user a validation error", which is something that can be counted and queried, versus putting the details of the error on the client span.
F
I do see your point, though: knowing what the errors are, if you can aggregate them and everybody makes the same mistake, we have a documentation update to make. Is there a "why not both" solution here?
E
Yeah, so the interesting thing there is that the client-side span should have status equals error. The server-side span probably should not, though you might be able to make an argument for a configuration option to let you set it to error. But you should have an event server-side as well, that you can aggregate in some way.
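That split might look roughly like this (a minimal stand-in `Span` struct keeps the sketch self-contained; the real calls would be the OpenTelemetry span's `add_event` and status setter, and the event name here is illustrative, not a settled convention):

```ruby
# Hedged sketch: validation failures always produce an aggregatable
# span event, but only the client-side span gets flagged as an error.
Span = Struct.new(:events, :status) do
  def add_event(name, attributes: {})
    events << [name, attributes]
  end
end

def record_validation_failure(span, message, client:)
  # Always emit a countable event with the details...
  span.add_event('graphql.validation.error', attributes: { 'message' => message })
  # ...but only mark the span itself as errored on the client side.
  span.status = :error if client
end

server_span = Span.new([], :unset)
client_span = Span.new([], :unset)
record_validation_failure(server_span, 'unknown field "foo"', client: false)
record_validation_failure(client_span, 'unknown field "foo"', client: true)
```

The server operator can then aggregate the events to spot, say, one endpoint failing validation constantly, without every malformed client query showing up as a server error.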
G
Yeah, thinking about it and letting it settle: removing the span status, I think that is a pretty reasonable point. Because, who knows, maybe when these validations fail the request performs slowly, and you don't see it if there's no transparency there. So having the events: okay, this one was a lot slower; oh look, there's a bunch of validation errors, maybe that's what's failing. So, yeah.
D
Would there be any interest in an option where people could pass the range of statuses they would like, whatever client or server, the range of statuses that they want to treat as theirs? I've seen it; it can be passed through configuration in a few ways.
G
I don't know. So I think, for this one: if we remove the span status...
G
We can probably merge it the way it is, but I think it's still worthwhile following up from the client side on actually setting, potentially, an error span status on that one, and instrumenting. I don't even think we really have much in the way of instrumenting from the client's perspective; I don't think we have anything at all. So that's probably somewhere we can sneak it onto a to-do.
G
And then I think we've got another one. Unless anyone else wants to add something on this one, I think we can look at another one of Tim's, regarding the pruning of invalid links, right at the top there, just to see what's going on. Because this is, as far as I know, the last spec compliance issue to be closed before we can get... I think it was Carlos, to look at this again, this being our repo.
G
There are a few that have been open for a while, and I don't know if there's much value right now in spending a few minutes going back to them, because I don't think there's been much movement on any of them. If anyone knows otherwise, or there's something that should draw our attention, please speak up, but I think the ones you covered are the ones that have had the most recent movement.
G
So I guess once the prune-links PR gets brought in, I'm going to contact Carlos, however we reached out to him last time, to look at it. I think we are in a good position to push for a proper 1.0.
E
Yeah, I mean, let's do another release candidate. So get this PR in, do another release candidate, and then point him at that release candidate. I think that's the cleanest thing to do.
G
Tangentially related to the release candidate: we've been seeing adoption pick up internally a lot, and we haven't run into any actual real issues with it, which is pretty good. It's a good test of what we've been doing here.
G
So I don't know, it's been really nice. Whenever you roll out something new, you expect to see a bunch of things crawl out of the woodwork, and most of the stuff that we've seen has been around our own internal configuration, or someone trying to find parity with the old gem and just asking questions. So it's been doing really well, as far as I'm concerned. I think we're at 40 percent of our org using it now.
C
Are we at happy reports, though? I have a mixed report to report. Right now I'm on RC1, and I've got to get ourselves upgraded soon, but we're dropping a little under 50 percent of our spans.
C
And something that is perplexing is that our span buffer utilization, for example, shows up as zero, almost, like 0.5 sometimes, and our error rates for the batch export failures very rarely come through. So I am not exactly sure; I haven't done any digging so far. But it's unclear to me if there's something else that would be causing spans to get dropped.
G
Do you know... I know you haven't dug into it yet, but with our internal implementation, one of our highest-traffic applications would see a lot of dropped spans when we initially cut over to our internal OTLP exporter, because it has the same defaults as the one that you're using from OpenTelemetry.
G
They had a lot of bursty traffic. They had some weird, janky tracing code in, so it would actually buffer up: they basically had their own buffer sitting on top of ours, and then they'd let out these big blasts of spans, and they had such a high drop rate initially. I'd be surprised if one of your applications was doing that, but if you are seeing really high drop rates, it could be really bursty workloads.
C
So, pretty much, I've been experimenting with multiple settings. What the metrics reveal is that we're dropping spans; what they don't reveal is that the buffer, for example the buffer size, is getting overrun, based on the metrics-reported data, because we report that through statsd and I'm looking at the data that way. I've tried doing things like, hey...
C
Let me experiment here: let's double the batch size, doing some back-of-the-envelope calculations, increasing the reporter interval frequency. And it seems to cap out evenly, and it's pretty steady no matter what buffer sizes I'm applying right now, tweaking those three parameters specifically: the buffer size, the batch size, as well as the reporting interval. I forgot what the name of that variable was.
G
Because I know, for the one... I'm going to use a real number from the example high-traffic app: we went to 64k and we reduced the schedule delay to three seconds, and they're not dropping anymore.
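For reference, those numbers map onto the batch span processor knobs, which can be set via the spec-defined environment variables; a sketch matching the figures quoted above (the env var names are the OTel spec's BSP settings, and the values here are just the ones from this discussion, not recommendations):

```ruby
# Hedged config sketch: a 64k span queue and a 3-second schedule delay
# for the batch span processor. These must be set before the SDK is
# configured; values are the ones quoted above, not general advice.
ENV['OTEL_BSP_MAX_QUEUE_SIZE']        = '65536' # spec default is 2048
ENV['OTEL_BSP_SCHEDULE_DELAY']        = '3000'  # milliseconds; default 5000
ENV['OTEL_BSP_MAX_EXPORT_BATCH_SIZE'] = '512'   # spec default, for context
```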
C
So what did the buffer utilization metric look like when that was happening? Did it reveal that that was the problem?
E
Okay, yeah, I remember; I think it did. There was a weirdness in our case where this wasn't happening uniformly, so it depended on a particular request that went down a weird path and generated it.
E
Right, because, yeah, there should be a reason tag of buffer-full, for example, if we're dropping for that reason.
C
Yeah, so I do not see the reason tag coming up. But you know what, I've got a split Riley's waiting for me. See y'all in the future. Cool, yeah.
E
Yeah, my, what do they call it, monotonic PR: it just needs tests. You know, how hard could it be? I just need to write the tests for that, and then I think that's good to go. If anybody feels like staring at the code, that'd be welcome. I think the code is perfect; I mean, obviously, I wrote it, but yeah.