From YouTube: 2021-09-22 meeting
A
Yeah, I have a couple of topics, but I will let other people go first.
D
A question about the suppression that we've been discussing the last few meetings, and I feel like it's a dumb question, so bear with me. A lot of this discussion is around suppression: suppressing spans, and the different approaches taken to doing it.
D
There are various merits, but what's just not super clear to me is why we're taking this approach of making these decisions at run time instead of pushing it further back. So I guess the example would be: say I have an application and there are a lot of repeated client spans, and a lot of the client spans, for example, are not creating a lot of value for myself as an application owner.
A
Yeah, so I can answer. The reason is, once we've collected those spans, if we just drop one of them, we will lose the causation, right? So if A calls B and B calls C (span A, span B, span C), and we drop B, then we don't know what the relationship between A and C is. So we cannot drop spans after they are created without breaking the trace.
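The causation-breaking that A describes can be sketched with a toy span model. This is illustrative only: the `Span` struct, ids, and names here are invented for the example, not the OpenTelemetry API.

```ruby
# Toy span model: each span records the id of its parent.
Span = Struct.new(:id, :parent_id, :name)

spans = [
  Span.new(1, nil, "A"),  # A calls B...
  Span.new(2, 1,   "B"),  # ...B calls C
  Span.new(3, 2,   "C"),
]

# Drop B after the spans were created...
kept = spans.reject { |s| s.name == "B" }

# ...and C's parent_id (2) now matches no surviving span, so the causal
# chain from A down to C is lost.
orphans = kept.select { |s| s.parent_id && kept.none? { |p| p.id == s.parent_id } }
puts orphans.map(&:name).inspect  # => ["C"]
```

Suppressing before creation avoids this, because a suppressed instrumentation never allocates a span id for children to point at.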
D
Right, sorry. So what I mean is: I have, say, multiple levels of client instrumentation and they're being duplicated. So I have, like, three client spans and they're all from A to B, and let's say they're all doing the same thing. I'm not sure if we discussed explicitly how it impacts propagation, but let's say they could all inject the correct propagation headers into the network request, so you'd be able to continue the trace.
D
Why wouldn't I just omit the instrumentation for any one of those three replicated spans? I would still have the continued trace, but I would not have the noise.
D
Yeah, well, I guess I'm thinking from the Ruby side; I'm mostly familiar with the Ruby stuff. So the example that comes to mind is: we've looked at instrumenting something like Elasticsearch, and the Elasticsearch client gem in the Ruby ecosystem is basically just a thin wrapper around a networking library, and we already have instrumentation for that networking library.
D
So if we instrument Elasticsearch and this networking library as clients, you'll have these repeated spans that carry the same information. Elasticsearch might have a little bit more information specific to the type of call that's being made, but effectively they're doing the same thing. So, as an application owner, if I include the instrumentation for the networking library as well as Elasticsearch, I'll get these duplicated spans; or I could decide not to have the instrumentation for the underlying networking library and rely on Elasticsearch to do it, or vice versa.
A
But it's an interesting discussion whether the same client should have multiple layers. I don't have an answer, but we should agree that there are cases where it's useful: let's say it's not Elasticsearch but something more complex, where there's a composite call, with multiple HTTP calls, maybe some IO calls, something else that happens underneath. So you would want to have the higher-level span that describes this library, and the HTTP calls underneath.
D
Right, but then, thinking back to the examples, I believe it was the OTEP you wrote: through the example it shows what look like these nested clients, and the idea is that there's a lot of replicated information. I guess I just don't fully understand why.
E
It already exists, and at least for some cases we were able to get much cleaner spans. And for people that use it, for example, they send it to Jaeger, so they don't have a sophisticated back-end system which can clean the spans after they receive them. So that's one reason: it's very convenient to use. And another reason to use it, I think, is, for example, we give out a distribution of OpenTelemetry and we just install many plugins, and we don't know what's actually going to run.
D
That's interesting, yeah. I've been thinking about it a lot over the past few weeks, attending these meetings and just trying to make sense of it, and one of the things that I've been a little bit hesitant towards is adding all these runtime checks: I'm concerned about the performance implication of having to check all these things.
D
Dynamically, like, again, another example, just trying to create a real-world example that maybe can relate my point a bit better: someone wanted to contribute a Facebook API instrumentation library. So it's kind of a higher-level, logical client, but there's an underlying library that it's using to actually make the network calls, and the original approach that the author took was to instrument it as if it was its own client span.
D
So then we'd end up in that problem where you have these duplicated, nested client spans. And, while I know this isn't a documented approach (I mentioned it, I think, last week or the week prior), we've used a mechanism to enrich downstream spans.
D
I think that makes a lot of sense, so it makes sense to push for that. But I'm just trying to think about what the appropriate approach is, and I've always been fairly resistant towards putting in a lot of runtime checks, because then the application owners kind of take on that performance hit, right? Even if it is minor. But Ruby's slow, so be careful.
A
Yeah, if we agree that layers can be useful (and I think, at least from my standpoint, they are very useful), then, inevitably, whether you instrument or not, users can create their own client spans to wrap multiple calls together. So it's inevitable that we have layers; it's just a question of how we allow the suppression.
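The runtime suppression under discussion can be sketched with a minimal thread-local flag. This is a hedged illustration, not the proposed spec API: the key name and the `client_span` helper are invented for the example.

```ruby
SUPPRESS_KEY = :__demo_suppress_client

# Record a client span unless an enclosing client instrumentation has
# already marked the current context as suppressed.
def client_span(name, spans)
  if Thread.current[SUPPRESS_KEY]
    yield  # nested client: skip span creation, still run the work
  else
    begin
      Thread.current[SUPPRESS_KEY] = true
      spans << name
      yield
    ensure
      Thread.current[SUPPRESS_KEY] = nil
    end
  end
end

spans = []
client_span("elasticsearch.search", spans) do  # higher-level, logical client
  client_span("http.request", spans) { :ok }   # nested HTTP client, suppressed
end
puts spans.inspect  # => ["elasticsearch.search"]
```

The cost D worries about is visible here: every client span creation pays one context read, whether or not anything is suppressed.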
D
Yeah, no, I think there are definitely cases where it makes sense to have those layers. But thinking of what you're saying: at the end of the day, I think an application owner could write a duplicate client span, and you'd want some way to suppress that. But I guess I would turn it around and ask whether them having these duplicate spans is actually an issue for them, or for the backend that they're sending it to.
D
Yeah, I think it might be that I'm running into different paradigms in different languages, because we're not seeing that problem in the Ruby ecosystem. Some applications that I'm responsible for, unfortunately, are using multiple different HTTP libraries, but the spans are as close to the actual (I guess the language we're using is) physical connection as possible.
D
So even though they're using multiple different HTTP libraries, they still only generate the single client span, and even if they're not working directly with that lower-level code, like in the Facebook case, say, it doesn't matter, because we don't duplicate it there. So there is some control. But, like I said when I first asked this question, I think I'm just missing something that's more obvious in other languages, just because we haven't been seeing it in the Ruby ecosystem.
C
I have a feeling that it's particularly problematic for auto-instrumentation, as I mentioned, which we have in Java; that's sort of why it's been our focus. I'm curious if Ruby also has any plans for auto-instrumentation where it just sort of monkey-patches all the libraries it can find, because in that case you might start running into this problem where the underlying...
D
That's, that's like the state of it right now. It's all auto-instrumentation: we have auto-instrumentation for all the HTTP libraries, and we have a package that just pulls them all in. I haven't seen this problem internally yet. I think there's one case where we accepted a duplicate client span, for the built-in Ruby HTTP library, and that's subject to change: there's the request span, and then we have a sub-span for the connection, but there was some debate.
D
We debated whether or not that actually should just be an event, right? But that's, I think, one of the only cases of that. Otherwise, like I said, when people are coming in and pushing for, say, Elasticsearch or the Facebook gem, we're saying: hey, you don't actually need to duplicate this, because, assuming you want to instrument your network request, you can actually just enrich that span by using this mechanism. And typically anyone who is using, whether it's the Facebook gem or an HTTP library, is going to include the instrumentation for both; but at the very least, service owners are including the networking library instrumentation that they're using, because typically people want the distributed tracing aspect.
D
So to get that, you just use the auto-instrumentation for your networking library, and that's done through monkey-patching. But maybe it's because we're a smaller ecosystem and we've had a pretty...
C
Yeah, it sounds like a different approach than what we take with the Java agent, so that might just be a good thing to understand better across the languages. In the Java agent we don't have this concept of enriching spans, so that seems like one big difference between the two languages that I'd like to understand.
A
Robert, can you present some time how you enrich downstream spans? I think this is a cool feature and I'm curious how it can be done. Maybe we can think about how it would fit into the OpenTelemetry trace API eventually.
D
I think at this point you should be seeing just the opentelemetry-ruby repository. This is what I pulled up: the instrumentation for the Net::HTTP library. This library is built right into the Ruby language. There are third-party HTTP libraries, but we've pushed this idea across to all of them.
D
So this Koala is the Facebook API gem. They're patching the call here, and all it's doing is wrapping, at a higher level, this client context with attributes. So you can pass in this peer service, set to facebook, because typically back ends like this, so that they can infer third-party services that aren't owned by the service owners, right? You can see the Koala attributes, all the parts specific to this gem.
D
As for its underlying networking library, I don't remember if it's Net::HTTP or another one, but the concept holds true there: when it does make the network call, because it's within that context, it'll just extract these attributes and merge them in. And that's literally all we do to accomplish this. We do that for enriching server spans as well. So Rack is like the Ruby HTTP server spec that's used; typically any Ruby web server will use Rack.
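The enrichment mechanism described here can be sketched roughly like this, using a thread-local as a stand-in for context. This is not the actual opentelemetry-ruby API; the key name and both helpers are invented for illustration.

```ruby
ATTRS_KEY = :__demo_client_attrs

# Higher-level wrapper (the Koala-style patch): stash attributes in the
# current context for the duration of the call.
def with_client_attributes(attrs)
  prev = Thread.current[ATTRS_KEY]
  Thread.current[ATTRS_KEY] = (prev || {}).merge(attrs)
  yield
ensure
  Thread.current[ATTRS_KEY] = prev
end

# Lower-level HTTP instrumentation: merge whatever the wrapper stashed into
# the one span it creates, instead of emitting a second client span.
def http_span_attributes(base)
  base.merge(Thread.current[ATTRS_KEY] || {})
end

result = with_client_attributes("peer.service" => "facebook") do
  http_span_attributes("http.method" => "GET")
end
puts result  # => {"http.method"=>"GET", "peer.service"=>"facebook"}
```

The point is that only the HTTP instrumentation ever creates the span; the higher layer only contributes attributes.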
D
So when we generate a Rack span, we set the current Rack span in the context, and then down the line, say in the Ruby on Rails framework, it can check in one of the controller patches whether there's a Rack span set, and if there is, it can rename the span, or append attributes, or do whatever you want, right? So we've taken that approach to reduce duplication, and so far it's been working well for us at Shopify; it's had good results.
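The Rack-span-in-context pattern just described can be sketched as follows. This is a hedged toy, not the real opentelemetry-instrumentation-rack code: the `ServerSpan` struct, key name, and `handle_action` helper are made up.

```ruby
RACK_SPAN_KEY = :__demo_rack_span

# Toy server span; the real one would come from the Rack instrumentation.
ServerSpan = Struct.new(:name, :attributes)

def with_rack_span(span)
  Thread.current[RACK_SPAN_KEY] = span
  yield
ensure
  Thread.current[RACK_SPAN_KEY] = nil
end

# What a Rails controller patch might do down the line: rename the existing
# server span instead of creating a duplicate one.
def handle_action(controller, action)
  if (span = Thread.current[RACK_SPAN_KEY])
    span.name = "#{controller}##{action}"
  end
end

span = ServerSpan.new("HTTP GET", {})
with_rack_span(span) { handle_action("UsersController", "show") }
puts span.name  # => "UsersController#show"
```

One span per request, progressively refined as lower layers learn more about it.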
D
Different teams can make use of this approach at the application layer. So if someone knows that they have their own hand-written client for some sort of external service...
D
...they'll use this to enrich the underlying HTTP library's spans, because they don't have access to those spans and they may want to set the peer service tag. We make a lot of use of the peer service tag at Shopify, to get kind of inferred SLOs with services that we don't own, and so they use this quite heavily.
A
Yeah, it's kind of cool. I'd just say it is a different thing rather than suppression. I think in many cases, when you indeed have a thin client, you may want to augment the HTTP spans with something else. And I think in reality what more complex clients want is a layer that groups retries, and I would argue that retries are different spans, and probably many other things.
D
I've been working under the influence of someone who has some pretty strong opinions on this stuff, opinions that have been working well for us so far, and his approach has always been to have your span as close to the physical network request as possible. I guess you could argue that you're potentially losing some timing information from a higher, like, say, at this...
D
Yeah, I was just saying that the approach we've taken is putting our spans as close to the physical requests as possible. That's the approach, but that doesn't prevent having higher-level internal spans. I think it's just, when it comes to the client, we've been trying to push it as far out to the edge of the boundary as possible, for client spans specifically.
D
Yeah, and I think that's maybe where the misalignment of ideas is coming from. For me, I would think, as an application owner: if someone wants the layers, they add that instrumentation and they can use it, but if they don't want the layers, they don't include that instrumentation. Because if they don't see value, say someone saw no value in this Facebook API instrumentation and it was generating a span, they just wouldn't include that instrumentation. So: suppression through just not instrumenting things, instead of having runtime checks.
E
Okay, but would this approach work with ORMs and database drivers? Because I see how it can work very well for HTTP, but there are so many other cases, so this context would have to be shared between all the instrumentation libraries. If you have an ORM, which is just setting something on the context, and then you have, like, 10 SQL drivers underneath it, they would all have to respect this mechanism, right?
D
I think maybe that's again where we're fortunate in our ecosystem: typically a Ruby application, for the most part, is using either Postgres or MySQL, one of the three, and there's a database driver instrumentation. It's not common that we see applications using some combination of them; it's usually just a single one. So I think in that case, maybe the problems that arise are what you're describing; we just haven't experienced them yet, so that's a little bit different internally.
D
At Shopify, for MySQL, we'll add the peer service tag, but we just do it right in the instrumentation configuration. We don't do that higher-level enriching, because we haven't had a use case for it. So I don't know if that answers your question, but I think it's just that we're not seeing applications that have many database drivers.
E
This is something that's very interesting. I experimented with it in Node, and I found suppression to be very easy, but I agree that this can also solve at least a big part of the issues that we're facing, and it's an interesting mechanism.
D
This is a bit interesting. We've been (I say "we"; I'll take responsibility for this: I've been) instrumenting our ORM for Rails a bit differently than how it's been done in the past. Historically, looking at a lot of the observability vendors, when they instrument Ruby on Rails (the ORM is called Active Record), usually what they do is hook into an instrumentation notification.
D
But what that generates is just a very, very close wrapper around the database query. And because we have instrumentation for the official MySQL gem and Postgres, if you're using the MySQL instrumentation and you're also using Active Record's instrumentation, historically you would just get two spans that were, I'd say, equivalent, right? The timing might differ by a hair, because, you know, Active Record sits ever so slightly above...
D
...the database call, and I do mean ever so slightly. So what we, what I, started looking at doing is instrumenting the ORM, but not as just a call directly to the database. If you have things that are running as validators against your models, or you have callbacks set up on, say, a save hook, we're looking at adding the ability to capture timing around that. Because in a Rails application, especially an older one, callback usage was really heavily encouraged, but it's the source of a lot of performance issues. And if you just have those low-level database span timings, you don't really get any insight into why your save call is slow. You just see: oh, you know, I did a write to the database, it was really quick, I guess there's nothing wrong with my ORM save call. That's very misleading, because chances are you might have some callback that's taking five seconds. So I'm looking at actually instrumenting the ORM...
A
Okay, while we are waiting for Robert to come back, I kind of want to address some things we discussed last time and the open questions we had, unless somebody else wants to go. Oh, Robert is back.
D
Sorry about that; I seem to have a lot of trouble with Zoom, so I'll just try to say it again really quickly. It's treating Active Record, the ORM for Rails, a bit differently than the database; it's a different concern. Historically, Active Record instrumentation would just do a thin wrap around the database calls. We have instrumentation for MySQL, so you already have those spans that capture all that information. So if you did it the old way, you'd get the same information twice, and we didn't see any value in that. So we're trying to instrument the ORM itself: if you have validators or callbacks on, say, a save hook or an update or anything like that, we're looking to actually instrument that. Because if you have some bad ORM code, right now you'd look and you'd see: oh, my database writes are fast, but I have no insight into why my call is actually slow, so you just assume your code is fine.
D
You look elsewhere, but really, a lot of the time we have, say, an old callback that might be doing too much work synchronously, and it's just really stalling the time until you actually get to that write. So we're (and I keep saying "we"; I'm being really opinionated on this) saying: let's instrument the ORM as a whole separate concept from the actual database itself, so you can get two separate kinds of insight. So we don't enrich database spans with the ORM; we just treat them as two separate things.
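The value of the separate ORM span can be shown with a toy timing model, where [name, duration] pairs stand in for spans. The span names and the `timed` helper are invented for the example; this is not Active Record instrumentation.

```ruby
# Record [name, duration-in-seconds] pairs as a stand-in for spans.
def timed(name, log)
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  log << [name, Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0]
  result
end

log = []
timed("orm.save", log) do
  timed("orm.callbacks", log) { sleep 0.05  }  # the hidden synchronous work
  timed("db.insert", log)     { sleep 0.001 }  # the actual (fast) write
end

db = log.find { |(n, _)| n == "db.insert" }
cb = log.find { |(n, _)| n == "orm.callbacks" }
# The database write is fast; the ORM-level spans expose the slow callback.
puts cb[1] > db[1]  # => true
```

With only the `db.insert` span, the 50 ms of callback time would be invisible, which is exactly the misleading "my save is fine" reading described above.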
F
Yeah, sorry, I missed the beginning of this, but I'm definitely in favor of adding additional information like that. Like you mentioned, the ORM is a framework that encapsulates a lot more than just the database calls. And so the question, in my mind, kind of comes down to: are we looking at the overhead of a span per hook, for example per lifecycle hook, or are we talking about these just being events on a single span? That's usually where it gets into the weeds for me.
D
Yeah, right. Like, today we actually disabled some of our instrumentation of the ORM, because it was capturing information around instantiating (taking your record out of the database and creating a Ruby object), and we were capturing those spans, but they're incredibly verbose. I think we saw a trace today that had like 25 million of those spans. That doesn't scale very well, so it might be better suited as an event, or as something that is toggleable in some sort of scenario.
A
And I think, with events and verbosity: basically, everything that has verbosity is probably a log. You just write the log, it gets correlated to your spans, and you can pick the verbosity.
F
Yeah, I think it's helpful. People do often show up in various forums wanting to have verbosity levels on spans, but I think it's actually problematic to do that, because it changes the trace structure, and you have a lot of tools that do interesting jobs around trace analysis (especially thinking into the future), trying to identify what's different between traces, for example. At any rate, I'm a little suspicious of having tools that heavily manipulate the structure of the data.
F
So when you're tweaking your verbosity, you're not suddenly changing the structure of the spans. Otherwise you've got more detail, but now you can't compare these traces to the traces where you had the problem, because you changed the structure of the traces, and so those tools are now not working; or, potentially, the data that was all logged on one span is now spread out as attributes across multiple spans.
D
I guess the root of my concern is the suppression being done as a runtime check. I'm just a little bit resistant, or hesitant, towards it, and I'm thinking, I don't know, I suspect that I am just continuing to miss something. It's just: if you want to suppress something, why not just not instrument it? But I guess there are some cases where you don't have control over...
D
...what's instrumented in your system. I'm coming from a position where I feel that I have infinite control over what's instrumented in the applications I oversee. So if something's being noisy, I'm not looking for a sampling or suppression mechanism; I'm deciding either whether we should keep that instrumentation, or whether the instrumentation was designed properly, because I've written a lot of instrumentation for the Ruby...
B
Sure, I mean, yeah. So, Robert, I think you put it well that you have infinite control: you are managing your app, you control what goes in it, you control the instrumentation, you control all these things. And what we see in the Java ecosystem is lots of customers, lots of users, who just want to throw the agent (the Java agent, the auto-instrumentation) at their app, and then, if there's too much or too little, they just want...
B
...you know, configuration settings. A lot of these users are operators, ops folks, not even dev folks, so they're not even allowed to go in and touch their application code.
B
So that's why they need these runtime configuration settings. And a couple of other thoughts. One reason why I think we see this suppression issue in the Java ecosystem is, well, first, there's just a ton of instrumentation: so far we have about 100 instrumentations, and growing quickly. And there are also a lot of abstraction layers. So, for example, Spring: Spring has a lot of these abstraction layers...
B
...for things like messaging. So we instrument the low-level Kafka libraries, but we also instrument the Spring messaging library, because a customer might be using another messaging system that we don't already instrument. So it both covers the case where they're using some other messaging system and, at that higher level (for messaging in particular), it gives us more control over parenting things correctly, which is sometimes tricky in the lower-level messaging instrumentations.
B
Oh, and then the last thought I had was related to the performance concern about the runtime check you mentioned. From the code you showed, you're putting the attributes into the context and pushing them down, and then the lower-level instrumentation has to, you know, check, pull those attributes out, and put them in. At least how we've implemented the suppression (Ludmila's implemented the suppression prototype in Java), it's basically the same perf-wise.
D
Yeah, that's a really fair point. And just touching on a couple of things, like that last point, the performance thing: I haven't actually run an extensive profile against OpenTelemetry Ruby in production. I've done some micro-benchmarks; I haven't done anything too in-depth yet. So it's probably not fair to assume it'll be bad. It's just kind...
B
You should always assume it will be bad and run benchmarks. Yeah, we found a bunch of interesting things. People have been running a lot of benchmarks the last couple of months in Java, and we found some things that surprised us.
D
Yeah, so that's one of my concerns, because, just in general, in my org, if we roll out something bad, we get the blame from all these service owners, and there are lots of them, and I don't like being the bad person. But to your other point, about using the Java instrumentation agent: I'm assuming that's kind of like an auto-instrumenter that just pulls in a bunch of stuff. Is that correct?
B
Yeah, so it's a package that has all of the instrumentations in it, and then at run time, when the JVM is loading libraries, we inspect the libraries it's loading and dynamically apply the instrumentation that's relevant.
D
Okay, so we have something similar in Ruby, so there's some common ground there. We have our instrumentation-all package, which is very creatively named. But if you're using that (and this is probably the point of these meetings), we've baked into the common component of code the ability to disable any instrumentation by name, either explicitly through code or through an environment variable. So we had, for example, a team who didn't want this particular piece of instrumentation.
D
They add the special environment variable name and set it to false, and so, even if they didn't have the ability to modify the code, they could still selectively shut stuff off, which I think is important. I think that still satisfies my point of: just don't include the things you don't want, or don't turn on the things you don't want.
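A toggle like the one described could look roughly like this. The variable-naming scheme below is an assumption for illustration, not necessarily opentelemetry-ruby's actual convention.

```ruby
# Hypothetical naming scheme: OTEL_RUBY_INSTRUMENTATION_<NAME>_ENABLED,
# defaulting to enabled when the variable is unset.
def instrumentation_enabled?(name, env = ENV)
  env.fetch("OTEL_RUBY_INSTRUMENTATION_#{name.upcase}_ENABLED", "true") != "false"
end

env = { "OTEL_RUBY_INSTRUMENTATION_KOALA_ENABLED" => "false" }
puts instrumentation_enabled?("koala", env)     # => false
puts instrumentation_enabled?("net_http", env)  # => true
```

Because it is read at boot, an operator can turn instrumentation off without touching application code, which is the case B raises for the Java agent.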
D
So if you don't have the means of modifying the code you're managing, you still have your environment variables to toggle things on and off. And anyone, feel free to say if I'm rambling or beating a dead horse at this point.
F
I have a couple of points. I think everything everyone just said is pretty reasonable. One point, just to reach back to when you were talking about ORM instrumentation:
F
I don't think what Trask is talking about is totally disabling ORM instrumentation, because it is, like you say, different than database instrumentation. It's just that, for that particular span...
F
...the span for the actual database query, it's a question of which library should be performing that check. So, just to clarify, that's just an aside: you would never fully disable ORM instrumentation just because you installed the MySQL instrumentation.
F
There is an interesting question of whether the MySQL instrumentation might somehow have better data than the ORM instrumentation, because it might have access to database-specific stuff. I'm not sure if that's a thing y'all have investigated over in Java land; I'd be curious about it.
B
That's the thing: currently, our ORM instrumentation emits internal spans instead of client spans.
B
In our distro I disable it completely, because really it's the SQL queries that are generally the most useful. But sometimes the ORM is cool.
F
Yeah, I mean, the degree to which ORMs wrap things, like frameworks do, is really useful, because they allow you to identify places where application logic is actually creating a problem, right? Where the database query was fast, it got you those thousand items really fast because it was well indexed, and then the developer loops over them one at a time and does something with each one, and that's actually where your bottleneck is. ORM instrumentation, or framework instrumentation of some kind, is usually the way you get to find those problems without having to go in and sprinkle all of your application code with lots and lots of instrumentation.
A
I have one thing I want to bring up in the last few minutes, so I'm sorry for dropping a bomb, but basically I'm thinking about how we can get some structure from these meetings and from the instrumentation SIG. It sounds a bit orthogonal to the trace spec.
A
So it's built on top of the trace spec, but it's something else. And I'm kind of interested, since we have defined approvers and maintainers for other SIGs:
A
Should we do this for the instrumentation SIG as well, and should we find some leaders here? Like, Ted, when you're not here, we are not really moving forward; we're discussing stuff and we may be building consensus within a small group, but basically we need somebody with some permissions in this group to make it move forward, even if you're not there. It could be you, and then we would formally know that it's you, right? Maybe we can just create some organization around this thing.
F
Yeah, and apologies for being late today; my cat got into a fight and I needed to do some vet stuff. But yeah, no, I agree. I would love to see the semantic conventions actually reorganized in the spec. Right now they're kind of sprinkled through all the different areas within the spec, and it would be nice to go in and reorganize them to have their own section, where they're organized by subject, and then in each subject...
F
...we have all the conventions for that subject. That would make it very easy to use code owners and other tools to add spec approvers, for example, that are specific to semantic conventions. So that would give us the ability to move forward at the spec level once we get to the point where we're actually putting this stuff into the spec and changing it. I think that would be very useful if there are other places where people are getting stuck literally on permissions stuff.
A
Okay, cool. Then basically what we need is to reorganize the spec in a way that lets us have the owners and the approvers there. And let's see, maybe next time we can spend a bit more time discussing how this should work, what else needs to happen, and how we will elect people there; or not elect, but whatever the process is, assign them.
F
I think that's great. There was a thing on this agenda (I think the person who posted it had to drop, it looks like) about separating out technologies from core conventions.
F
I wasn't sure if they were proposing removing it versus just organizing this stuff better. We definitely can't remove the technology-specific stuff, but we could organize it in such a way that for each semantic group we have a folder, like http, and then you have the general HTTP; or db, for example: you have the general DB stuff and then, for each specific implementation...
F
Not only that: I actually want to expand those sections to not just have the additional fields...
F
You
would
want
to
add
for
those
specific
technologies
like
cassandra,
but
also
expand
it
to
explain
how
the
generic
fields
should
be
implemented
for
that
technology,
because
we
want
this
stuff
standardized
across
all
the
different
client
instrumentation,
like
all
the
different
db
instrumentation
we're
going
to
maintain
across
all
the
different
languages
and
stuff
so
really
spelling
that
out
and
then
also
adding
in
like
configuration
options
as
we're
starting
to
develop
configuration
options
in
all
of
our
instrumentation
and
like
putting
those
there
as
a
way
of
saying
for
the
non-language
specific
stuff,
like
for
kafka,
clients
like
here's,
the
the
options
we're
currently
giving
people
and
trying
to
get
that
standardized
as
well
like
as
much
as
we
can,
so
that
when
people
come
to
instrument
implement
more
of
these
things,
they
have
this
very
clear,
guide
and
they're
not
having
to
to
guess
like
there's
a
lot
of
guesswork.
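To make the "generic fields plus technology-specific guidance" idea concrete, here is a small Python sketch of the attributes a Cassandra client span might carry. The attribute names follow the OpenTelemetry database semantic conventions, but the helper function itself is invented for illustration:

```python
# Sketch: specializing the generic db.* semantic conventions for one
# technology (Cassandra). The helper is illustrative, not a real API.

def cassandra_span_attributes(keyspace, statement, operation, host, port,
                              consistency_level=None):
    """Build the attribute dict for a Cassandra client span."""
    attrs = {
        # Generic db.* fields, with their Cassandra-specific meanings:
        "db.system": "cassandra",   # fixed identifier for the technology
        "db.name": keyspace,        # for Cassandra, this is the keyspace
        "db.statement": statement,  # the CQL text being executed
        "db.operation": operation,  # e.g. "SELECT"
        "net.peer.name": host,
        "net.peer.port": port,
    }
    # Technology-specific additional field:
    if consistency_level is not None:
        attrs["db.cassandra.consistency_level"] = consistency_level
    return attrs

attrs = cassandra_span_attributes(
    keyspace="users",
    statement="SELECT * FROM users WHERE id = ?",
    operation="SELECT",
    host="cass01",
    port=9042,
    consistency_level="QUORUM",
)
```

The point is that "db.name means the keyspace for Cassandra" is exactly the kind of per-technology guidance the spec would spell out, so every language's instrumentation fills the generic fields the same way.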
F
When people go to maintain this instrumentation right now, we want to use the spec to just eliminate all of that guesswork, and also ensure that there's a high, consistent quality bar in instrumentation, because if you don't have that, it gets really lumpy. And I think the goal of OpenTelemetry specifically is to move past that kind of lumpiness that tends to happen in these instrumentation ecosystems.
F
Okay, well, I'll see if I can get that kicked off later this week. I think I'm trying to free up cycles to actively work on this stuff, not just help run some meetings on it. But we know you're working on this span suppression OTEP, right? I think that's really critical.
A
Yeah, I do. I will update the OTEP. I've done the experiments, and I'm kind of looking for confirmation that it's what we want to solve and that this is the right way to solve it. But let me update the OTEP, and maybe we can discuss it, summarize it, next time.
F
There is the otel-instrumentation Slack channel. If people wouldn't mind watching that channel and participating there, not just as a place to ask questions: as PRs and issues come up, post them there, and if you're not getting any movement or response, use it as a place to flag that and say, hey, I really need feedback on this thing. That would give us a healthy place to centralize where we're looking, instead of having to keep track of all of this stuff across all the different repos. I find keeping track of all those GitHub issues for a particular subject can be kind of annoying.
F
Wait, I'll post a link. Has anyone had trouble getting into the CNCF Slack instance?
G
I also had a quick question related to the HTTP conventions. There was a GitHub issue about having a separate meeting, but it looks like we decided to stick to these two, and actually there is a PR with some description of the scope, of which topics we want to address before we can consider the HTTP conventions specification kind of stable.
F
If it's more of a big vision change to how that stuff should work, an OTEP is good, but I think using this meeting for HTTP discussion is helpful, because we found that most of the issues we're talking about for HTTP are actually generic issues, like the span suppression stuff, span structure, having multiple client spans, and all of these other issues. But if you have stuff that's specific to the data or the attributes being emitted, yeah, it would be great to know what you're thinking.
G
Okay, yeah. So actually what I'm thinking about is to agree on the scope: what exactly we can do, or should do, to say that we now have the HTTP specification stabilized, you know, a 1.0 version. Currently it's experimental, so which exact topics do we need to address before we can say that it's a 1.0?
G
So currently this scope is there, but it would be good to have some opinions from folks, just to understand: in the document that's already submitted as a PR, do we have all the stuff captured, or do we need to add something else, or remove something from it, or maybe set some different priorities? So that's the current goal that I have in mind.
F
Yeah, no, that's awesome. That's a great first step, just identifying what people think is missing or problematic about it and getting all of that written down somewhere. We do have some old docs on this stuff I can try to dig up, but a Google Doc might be a good place to compile all of that information so everyone can edit it, since it's not exactly a PR, and issues are a difficult way to build up a doc like that.
G
Oh, I see. So actually, yeah, I tried to follow the same model as Johannes did for the messaging semantic conventions. Basically that's like an OTEP, an .md file which just describes the overall scope and the points. So that's...
G
Right, right. Okay, so yeah, it would be good to identify some approach where everyone can just go there, at least be aware of all of this stuff that exists, and then it will be possible to move forward.
F
Yeah, yeah, in Slack. I think you posted that there, but yeah, please repost it and I'll start trying to dig through that. Great, thank you. All right, cool beans.