From YouTube: 2021-06-29 meeting
A
But hopefully we'll have folks joining us soon. So right before this, I don't know, did I send you a link to the meeting agenda?
A
Oh, the meeting agenda is like a public Google Doc, and it's quiet right now.
A
Well, let me introduce you both to a colleague of ours, Kevin, who has been, like, our bleeding edge early adopter and nightly release updater.
A
And so I invited them to join us, so you could see, you know, how the sausage is made.
B
Nice to meet you, thanks for the invite.
A
Yeah, good to meet you. I was gonna let them introduce themselves, but that's Matt Balner, and he's from Lightstep, and Francis is from Shopify, and Robert, who's joining, is also from...
E
Yeah, I've missed coming to the SIG. I've been gone for a couple of weeks. I both moved and my company was acquired, so, like, I think the announcement of the acquisition kind of precedes the acquisition to some degree, and then you end up getting this, like, week of onboarding when it's kind of like you've got a whole new job.
C
Oh yeah, that's welcome to the life of being acquired by a much larger company. I've been through it a few times, and that's how it goes. There's training and just lots of little things to learn, and it's not all bad at all, but there's definitely a lot of friction, actually.
E
I can volunteer to try to recap the spec SIG, and I don't know, my personal goal is to keep that recap to, like, 10 minutes or something, and if we keep it to 20, that'll be mildly successful. But has that been working for people? Do people like the spec SIG? "Yes, yes, we've been missing it since you've been gone." All right, great, yeah. I wasn't sure if it was a pain for everybody and we were just doing it, or if it was actually valuable. So.
E
Reminder: there is, like, an afternoon Asia-Pacific time slot for this meeting. Nobody ever goes, I never go, but if for some reason that time slot works for you, feel free to attend.
E
So, for OpenTelemetry Go: they were an early adopter of kind of the first metrics API, so somebody was asking about the status on the readme. It's kind of been paused, I think, as they figure out how to adopt kind of v2 of the metrics API.
C
Yeah, this came up. Somebody actually implemented this for the Ruby SDK, I believe. Yes, awesome. I remember, I thought this seemed very familiar.
E
Long time no see. At any rate, I think this has been, like, a long-standing question.
E
This is specifically about values, and I think the answer to the question is: there's no reason to have hard limits, but there should be a way to configure a limit if you need one. So basically, yeah, there are two parts: you should be able to configure a limit, and I think the default limits are infinite for the length. Anyways, it seems like there's also...
E
I
guess
the
other
limit,
which
is
the
number
of
attributes,
looks
like
we
default
to
to
128
or
that's
what
this
pr
is
suggesting,
and
these
would
be
configurable,
so
you
can
tune
them
to
your
use
cases.
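The limits being discussed can be sketched roughly like this: a configurable attribute-count limit (the 128 default the PR suggests) plus an optional value-length limit that is unlimited by default. The class and method names here are hypothetical, not the actual OpenTelemetry Ruby API:

```ruby
# Illustrative attribute-limit enforcement; names are hypothetical, not the
# shipped SDK API.
class AttributeLimits
  def initialize(count_limit: 128, length_limit: nil)
    @count_limit = count_limit    # max number of attributes kept
    @length_limit = length_limit  # max string value length; nil = unlimited
  end

  # Returns a copy of +attributes+ with both limits applied: surplus keys
  # are dropped past the count limit, and string values are truncated when
  # a length limit is configured.
  def apply(attributes)
    limited = attributes.first(@count_limit).to_h
    return limited unless @length_limit

    limited.transform_values { |v| v.is_a?(String) ? v[0, @length_limit] : v }
  end
end
```

With the defaults (128 attributes, unlimited length) most payloads pass through untouched; tightening either knob lets you tune for your use case.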
F
We implemented this a long time ago, when we were working from the Java implementation, and we refactored some stuff. Like, we were implementing the correct environment variables, but we had the code in the wrong place, so we refactored that to move it to the right place two weeks ago.
E
This is a well-written issue, kind of describing the current state.
E
So it looks like the proposal is to adopt OpenHistogram. I think there have been a number of different histogram implementations that have been proposed as potential implementations, so, like, DDSketch, circllhist, and I feel like there's another famous one that I'm missing right now, have all been.
E
If you're a histogram aficionado and are interested in this sort of stuff, I think they're interesting issues, so feel free to read. I always read these to learn more about histograms, because I am not an aficionado.
E
All
right
this
one,
it
starts
delving
into
the
I
don't
know.
I
think,
there's
some
agreement
that
we
need
something
like
this.
It
starts
delving
into
the
weird
a
little
bit,
though
my
is
what
I
was
thinking.
Basically,
we
in
open
telemetry,
we
kind
of
have
this
notion
of
a
resource.
So
if
you
look
at
otlp
you
kind
of
have
resource
and
then
resource
spans
and
metrics
are
kind
of
the
same.
E
The resource has, like, a number of attributes that are basically things that you could essentially copy onto all that telemetry, but you don't want to have to copy them onto all that telemetry as you export it.
E
So it kind of has that nested relationship, and when you come into the metrics world and you start looking at all the resource attributes, if you have a key that can have, like, many values, you end up having a lot of cardinality in your metrics system. So in your time-series database, when you want to start to group things, they become hard to group when you have an attribute that has many, many values, or infinitely many values.
E
So I think that is the backdrop to this: when you have many key-value pairs for your resources and spans, it's a little bit less of a situation. I think it could still be a problem depending on what your backend is trying to do, but for metrics this does come to the forefront a little bit more. And we didn't really talk about this fully; I think the big ask was just: can people read about this?
E
So we can talk about it next time, but I think the proposal here is that there would be a way to say that some of these resource attributes are identifying attributes. I guess you kind of want to partition your resource attributes into identifying attributes and descriptive attributes, and the identifying attributes you would always want to take and attach to metric data, and those would hopefully be of a reasonable cardinality.
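A minimal sketch of the partition being proposed, assuming an illustrative list of identifying keys (the real proposal would define these elsewhere):

```ruby
# Hypothetical partition of resource attributes into identifying attributes
# (always attached to metric data) and descriptive attributes. The key list
# is an assumption for illustration.
IDENTIFYING_KEYS = %w[service.name service.namespace service.instance.id].freeze

# Returns [identifying, descriptive] as two hashes.
def partition_resource_attributes(attributes)
  attributes.partition { |k, _| IDENTIFYING_KEYS.include?(k) }.map(&:to_h)
end
```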
E
And then I think the way this would play out in practice would be to have some kind of helper to properly assign these, and probably to properly retrieve the group you're interested in.
F
Yeah, I mean, it's not crazy. Basically, you need to take the resource attributes and turn them into labels on metrics, and because Prometheus doesn't have a notion of resource attributes separate from metric labels, you end up having to just flatten them into the metric labels in some way. But you end up with high cardinality on some of these things, and maybe you just don't care about some of them.
F
So we encounter this a lot where, like, we're using Datadog, and the Datadog agent implicitly adds a whole bunch of effectively resource attributes, like infrastructure-identifying things, as labels on metrics, and that can blow up cardinality, particularly if you have other high-cardinality labels.
F
The
last
option
here
yuri
also
calls
this
out.
The
last
option
here
is
a
reasonable
approach
that,
like
you,
just
provide
a
function
that
takes
that
label
set
and
maps
it
to
the
labels
that
you
want
implicitly
removing
any
high
cardinality
stuff
that
you
don't
want
right.
So
you
just
yeah,
it's
pretty
straightforward.
I
think
it's
a
good
idea.
F
I
would
probably
like,
if
I
was
implementing
this
with
the
collector.
I
would
do
that
in
the
collector,
like
I'd,
be
stripping
out
resource
attributes
in
the
in
the
collector
in
the
metrics
pipeline,
but
I
know
there's
a
lot
of
people
who
are
not
running
the
collector,
and
in
that
case
you
know,
having
a
hook
in
the
sdk
that
lets
you
filter
resource
attributes
is,
is
a
good
thing.
F
It's probably more efficient to do it as a filter higher up where you can, because the resource attributes should not be changing through the life of the program.
F
It's
just
something
you
said
at
configuration
time,
it'd
probably
be
more
efficient
to
just
run
the
filter
function
once
and
take
that
resulting
set
of
attributes
and
like
shove,
it
into
every
metric
you
emit,
but
I
mean,
arguably,
even
if
you
were
forced
to
do
this
at
the
export
level,
you
could
still
implement
it
reasonably
efficiently
because
you
know
as
long
as
you
get
the
resource
attributes
as
a
separate
chunk,
which
you
will
like
you'll,
be
handed
a
resource.
So
you
can
just
say:
hey
that's
a
resource.
I've
already
seen.
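The idea sketched above (run a user-supplied filter once over the immutable resource attributes, then reuse the result for every exported metric) might look something like this; the deny list and function name are assumptions, not a real SDK hook:

```ruby
# Hypothetical once-per-process resource-attribute filter. The deny list is
# illustrative; a real hook would be user-configurable.
HIGH_CARDINALITY_KEYS = %w[process.pid container.id].freeze

def filter_resource_attributes(resource_attributes)
  resource_attributes.reject { |k, _| HIGH_CARDINALITY_KEYS.include?(k) }
end

# Because resource attributes should not change over the life of the
# program, the filter can run once at configuration time and the result
# merged into each metric's labels at export.
```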
E
Yeah,
no,
I
think
that
makes
sense
after
after
I
asked
the
question
you
started
answering
it
like
there's
this
whole
kind
of
it's
kind
of
like
zombie
debate
about
like
how
immutable
your
resources
actually
should
be,
and
the
answer
is,
they
should
be
immutable
and
unchangeable
after
some
certain
point
so
running.
The
filter
at
that
point
is
probably
the
right
thing
to
do.
E
Yeah,
I
just
think
the
filter
makes
sense.
I
feel
like
talking
about
making
some
of
these
identifying
attributes
and
descriptive
attributes
seemed
like
it
could
get
confusing,
but
it
seems
like
there
are
some
reasonable
solutions
to
the
problem.
E
If it's something that interests you, please attend. I'm mildly interested; I might try to attend, as long as my schedule allows.
E
These 8 a.m. meetings are usually extra-credit ones for me, so I can usually get away for those.
E
The instrumentation SIG is also going to be starting back up. All of these, there's, like, kind of GA.
E
The
I
don't
know
the
aspirational
ga
happened
earlier
this
year.
I
feel,
like
everybody
is
still
kind
of
getting
there,
but
then
there
was
kind
of
like
ga,
but
we
have
like
these
four
groups
that
are
going
to
kind
of
improve
some
other
area
of
the
thing
that
we
that
we
know
it
is
going
to
need
some
improvement
even
from
like
the
ga
level.
So
like
sampling
was
one
and
it
kind
of
got
postponed
and
then
the
instrumentation
sig
was
won
and
it
also
kind
of
got
postponed.
E
I
think
the
instrumentation
stick
is
especially
important
because
none
of
the
instrumentation
in
in
any
of
the
ecosystems
are
considered
stable.
Yet
so
I
think
that's
one
thing
that
people
would
like
to
figure
out
what
that
means,
but
I
think
the
other
thing
is
for
ga.
We
kind
of
just
have
like
this
really
low
level
tracing
api.
So,
like
you
can
get
everything
that
you
need
done,
but
there's
usually
a
number
of
steps
for
there's
a
few
more
steps
that
you
might
like
for
some
common
things
you
might
want
like.
E
You know, a helper method for an HTTP span, where you just, like, pass it what you know about the span, and it has, you know, first-class arguments for the things that you need, at least, so you don't have to go, like, reading the semantic conventions and everything to figure out exactly what you should put on a span.
F
So
that
was
the
approach
that
we
were
proposing
that
you'd
have
this
helper.
That
would
then
spit
out
attributes
for
you.
The
proposal
that
seems
to
be
coming
out
of
the
spec
sig
at
the
moment
is
more
a
callback
based
approach
where
you
kind
of
give
it
an
object
representing
a
request,
and
then
it
calls
back
into
some
like
thing
that
you've
provided
asking
you
know:
can
you
give
me
the
http
method
out
of
this
request
object?
F
Can
you
give
me
the
path
out
of
this
request,
object
and
so
forth,
and
that
seems
I
know
andrews
provided
feedback
on
on
that
proposal,
but
from
my
perspective,
that
just
seems
much
less
efficient.
It's
extremely
high
level,
but
it's
just
really
hard
to
implement
efficiently,
as
opposed
to
something
that's
just
like
an
interface,
and
it
has
a
set
of
required
and
optional
parameters
and
those
map
directly
to
attributes
and
like
the
the
function
just
spits
out
and
attributes
hash
for
you.
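The "interface with required and optional parameters that spits out an attributes hash" shape might look like the following sketch; the helper itself is hypothetical, though the attribute keys follow the HTTP semantic conventions:

```ruby
# Hypothetical helper: required and optional keyword arguments map directly
# to semantic-convention attribute keys, and the return value is a plain
# attributes hash ready to put on a span.
def http_span_attributes(method:, url:, status_code: nil)
  attrs = { 'http.method' => method, 'http.url' => url }
  attrs['http.status_code'] = status_code if status_code
  attrs
end
```

Compared with the callback-based design, this costs one method call and one hash, with no per-attribute getter dispatch back into user code.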
E
So yeah, I think this is valid, and I think this is probably one of the reasons why this SIG should spin up: to be able to kind of have some of these debates on some of these things. So I think that is an issue; it sounds like this should be discussed a little more. I know some other things, I guess, that people are starting to run into...
E
...this SIG will talk about as well. Now that some SIGs have metrics in the mix, like, in addition to spans, there seems to be a lot of, like, duplication of work going on, where...
E
People would like to record, like, an HTTP request duration metric or something, and when doing so, the metrics code will, you know, record a start timestamp and then, when the request ends, kind of compute a duration and record that information. And you also have the span that actually did all that exact same stuff, so, like...
F
Yeah, and then you'll end up wanting the span to be an exemplar for the metric, so you need to make sure that you order things correctly, so that you've got your metrics middleware, for example, after the tracing middleware. So yeah, it's messy, and if you buy into the notion that you should just instrument things one way, it certainly seems redundant.
E
Yeah-
and
I
think
these
are
all
like
things
that
should
be
discussed,
because
I
think
I
think
you
could
generate
metrics
off
a
lot
of
your
spans
and.
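A trivial sketch of that direction, assuming a simplified span shape: a finished span already carries the two timestamps the metrics code would otherwise record separately, so a duration measurement can be derived from it instead of timing the request twice.

```ruby
# Simplified stand-in for a finished span; the real SDK span carries much
# more, but start and end timestamps are all a duration metric needs.
FinishedSpan = Struct.new(:name, :start_time, :end_time, keyword_init: true)

# Derive the request-duration measurement from the span instead of timing
# the request a second time in metrics middleware.
def duration_seconds(span)
  span.end_time - span.start_time
end
```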
F
There are probably some vendors that do that. Nothing comes to mind, but I think there might be one or two, maybe three, vendors out there that have some experience with this.
E
But yeah, there are issues with all these approaches, so I do think that that's what will be important and exciting, and hopefully it will help provide some guidance and...
E
This didn't get discussed all that much, but I know, Andrew, you've been playing around with the logs a little bit, so I'm not sure if you have found the kind of current, I guess, set of attributes that you can use for, like, spans and metrics to be insufficient for your logs yet.
C
No,
actually,
we
we
haven't
touched
it
much
in
the
past
week
or
so,
but
I
actually
think
that
allowing
literally
anything
as
value
types
for
logs
is
actually
not
fantastic,
I
would
prefer
it
was
actually
similar
to
spans
and
metrics
and
said
no
there's,
actually
a
defined
set
of
value
types
that
are
allowable,
because
now
we
have
to
decide.
Well
what
do
we
do
with
all
these
value
types?
It
puts
the
decision
on
us,
which
is
fine.
You
know
we
can.
C
We
can
make
that
for
ourselves,
but
I
like
structure-
and
I
personally
like
when
somebody
else-
has
decided
these
types
of
things
for
me
and
then
I
can
point
to
the
standard
and
say
well
no.
This
is
why
we
do
it
unless
you
have
a
good
reason
and
then,
of
course,
we'll
look
into
it.
So
I
I
don't
think
it's
yeah.
No,
I
kind
of
wish
it
was
that
there
was
no
any
value
type,
at
least
for
logs.
C
Well, I think the key bit there is that you don't use the API to generate a log; there is no API for logs. Actually, the current design, insofar as I understand it, is entirely so that it can plug into existing logging libraries and do something.
C
Structure
them
in
a
defined
way
so
to
apply
structure
to
maybe.
F
Yeah,
I
think
the
I
think
there's
an
argument
to
be
made
for
logs
outside
of
a
transactional
context,
so
if
you're
not
actually
like
in
a
request
like
if
you
can't
clearly
identify
a
unit
of
work,
so
this
would
be
things
like
startup
shutdown,
things
that
happen.
You
know
in
between
requests.
For
example,
I
think
there's
some
some
value
in
logs
there
there's
obviously
value
in
logs
from
infrastructure
components
that
are
just
like
doing
things
that
may
not
be
again
transactional,
but
instead.
F
Right, and you know, yes, you could structure that as a trace, and it's not an unreasonable thing to do, and, you know, if you've spent any time maintaining, you know, Java virtual machines, for example, that might be a logical thing to do. But if you're coming from a "I'm starting up this Unix process, and it's doing some initialization and printing out some status information, and then it starts doing its work" world, that bit at the start you probably just think of as "I'm just logging stuff," right?
F
But, like, the other one I would flag is, like, job workers that are sitting there doing, like, a busy loop, hitting a bunch of queues to see: is there any work to do? And then pulling work off, and then the work becomes a transaction, but the stuff where it's just spinning, you know, maybe you want to... yeah, maybe it's...
H
But
to
the
original
question,
I
also
don't
understand
why
you
would
want
to
like,
if
you
don't
know
what
it
is
turn
it
into
a
string
instead
of
in
any
type
in
this
otherwise
you're,
just
layering
you're,
allowing
things
to
remain
ambiguous
and
like
the
part
of
the
purpose
of
the
spec.
As
I
understand
it
is
at
its
like
high
level
is
to
reduce
ambiguity.
Yeah.
F
We're
already
hitting
our
problem:
we've
hit
a
problem
a
few
times.
We
tried
fixing
it
a
few
times
and
it
still
keeps
cropping
up,
which
is
encoding
issues
in
strings
like
string
values
or
you
know
somebody
passes
in
a
value
that
is
a
string,
but
it's
actually
just
like
a
binary
blob
or
something
right.
Rob
probably
has
more
specifics
because
he's
been
dealing
with
this
more
frequently
than
I
have,
but
it's
just
one
of
those
annoying
things
that
keeps
coming
back
to
bite
us.
F
I
I
don't
think
we
can
tighten
things
up
to
the
point
where
we
say
we'll
only
accept
string
values
that
are
properly
encoded
utf-8,
but
maybe
we
could.
I
don't
know.
B
Because,
like
you,
don't
want
to
encode
everything
all
the
time,
but
also
you
don't
want
stuff
blowing
up
while
you're
yeah
and
like
our
recent
issue
that
he's
referencing
is
like.
We
noticed
some
encoding
issues
in
the
otp
exporter
and
they
were
coming
because
we
were
recording
an
exception
and
it
would
record
the
exception,
the
error
and
that
it
would
try
to
encode
the
exception
attributes
that
we
recorded,
which
would
then
blow
up
in
the
exporter.
B
So
we
added
error
handling
and
we
added
encoding
and
the
recording
exception,
but
there's
a
team.
That's
using
a
logger
that
then
again
tries
to
encode
it
in
json,
which
then
blows
up
in
their
logger.
So
it's
like
yeah,
it's
just.
It
seems
like
you're.
Just
it's
a
never-ending
kind
of
thing,
and
it's
like
how
do
you
do
it
without
making
this
awful
to
interact
with.
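One defensive option along these lines (an assumption, not what the SDK does today) is to scrub string attribute values on the way in, replacing invalid byte sequences instead of letting an exporter or downstream logger raise:

```ruby
# Replace invalid byte sequences with U+FFFD so later encoders (OTLP,
# JSON loggers) don't blow up deep in an export path. Non-strings pass
# through untouched.
def scrub_utf8(value)
  return value unless value.is_a?(String)

  value.dup.force_encoding(Encoding::UTF_8).scrub("\uFFFD")
end
```

The trade-off discussed above still applies: you pay the scrubbing cost on every value, rather than only on the occasional bad one.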
E
On the repo, yeah, I see that we had a recent release.
F
Yeah
robert
did
an
rc2
release
and
then
the
instrumentation
all
gem
we
needed
to
re-release.
We
just
had
some
problems
in
the
release
there,
so
those
the
the
two
recent
releases.
F
So there's an RFC that is coming from Datadog. I feel very uncomfortable about it, and it feels kind of dirty to me. It's basically: when we changed the implementation of the context API... prior to that, we weren't hiding the key, the context key, for the current span, and so Datadog, in their profiling...
F
Implementation
was
actually
reaching
in
from
another
thread,
to
find
out
every
thread's
current
trace
context
so
that
they
could
attach
labels
for
like
the
trace
id
and
span
id
to
their
profile
that
they
were
capturing
for
that
thread
the
or
their
sample
that
they're
capturing.
So
they
we
changed
the
api
and
we
hid
the
key
because
we
didn't
know
that
they
were
doing
this
and
that
meant
that
they
couldn't
actually
access
that
information
anymore
without
kind
of
reaching
in
and
doing
dirty
things
so
they're
asking.
F
Can
we
change
our
api
to
allow
querying
the
current
context
for
an
arbitrary
thread
defaulting
to
the
current
thread?
This,
like
bothers
me
on
a
lot
of
levels
like
if
I
was
inside
the
implementation
of
mri.
So
if
I
was
inside
the
vm
implementation,
I'd
be
perfectly
fine
with
this,
because
I
would
pause
all
the
threads
and
I
go
and
inspect
their
context
totally.
F
Okay,
but
at
the
ruby
level,
if
you
read
a
lot
of
the
documentation
here,
yes,
it
is
possible
to
query
another
threads
thread
locals
like
it's
not
prohibited,
but
all
the
documentation
is
written.
F
Assuming
that
a
thread
is
accessing
its
own
thread,
local
variables-
and
there
is,
there-
was
actually
a
bug
in
the
ruby
26
implementation
that
with
segfault,
if
you
access
to
certain
type
of
threads
thread
locals
from
another
thread,
specifically
a
process
waiter
and
that
like
puma,
actually
has
a
hacky
workaround
in
some
of
their
code
to
deal
with
this.
They
are
trying
to
access
all
the
threads
sorry
they're,
trying
to
access
a
particular
thread,
local
for
all
the
threads
and
it
blows
up
on
process
waiter.
F
If
you
happen
to
have
a
process
waiter
so
and
it
only
blows
up
in
a
certain
version
like
2
6
up
to
some
patch
level.
So
it
was
fixed
years
ago
in
ruby
26,
but
it
only
exploded
recently
in
puma
and
was
fixed
earlier
this
year
in
puma.
F
So
it
is
one
of
those
bugs
that
can
like
sit
around
for
a
while
and
if
somebody's
using
an
old
version
of
ruby
and
production
could
bite
them
in
a
weird
way.
So
I
just
this
is
just
feels
weird
and
dirty
to
me
and
prone
to
error
and
prone
to
support
issues,
because
ultimately
naive
people
looking
at
the
stack,
will
say
well.
This
blew
up
in
open,
telemetry,
ruby
right,
even
though
it
was
called
by
the
profiler
like
datadog's
profiler.
It's
it's
blowing
up
in
ruby.
So
that's
my
concern.
C
It's
funny,
it's
actually
all
my
fault
and
I
didn't
realize
what
was
happening
when
the
context
refactoring
prs
were
merged
or
I
would
have
said
something
because
this
whole
thing,
it's
literally
my
fault.
We
were.
We
were
looking
at
at
using
trying
to
correlate
like
a
profile
from
the
data
log
profiler
with
open,
telemetry
traces,
and
I
like
yellowed,
some
monkey
patches
up
for
them
to
say,
like
hey,
you
could
do
it
with
the
datadog
profiler
like
this,
and
yes,
this
literally
all
traces
back
to
me
monkeying
around
inside
libraries.
C
Right,
it
is
a
little
bit
dicey
to
do
that
and
I
think,
if
I
think
it's
okay
to
say
that,
like
you
know,
absence
another
good
way
of
doing
this,
it
makes
sense
to
do
it
like
this,
but
like
the
datadog
profiler,
could
monkey
patch
open,
telemetry
and
say
that
that's
how
they
can
do
it?
It
would
be
nice
if
there
was
a
way
to
store
some
kind
of
global
map
of
threat,
ids,
possibly
to
traces
that
may
be
in
progress
at
any
given
time.
C
Definitely
like
concurrency
issues
there
for
sure
that
would
at
least
make
monkey
patches
or
or
other
people
who
might
want
to
do
similar
things
a
little
bit
less
scary.
One
other
use
for
this.
That
came
up
when
I
started
working
on
some
logging
stuff.
A
couple
of
weeks
ago
I
wrote
I
had
a
proof
of
concept,
integration
with
symantec
logger,
and
so
it
would
grab
the
trace
id.
C
You
know
and
correlate
with
logs,
and
that
was
great,
but
semantic
logger
uses
a
threaded
logging
design,
so
you
actually
have
to
you
know
always
include
a
patch
that
says.
Okay,
you
know,
there's
expensive.
Color
gives
you
like
a
hook
basically
to
say
do
this
before
you
log,
and
so
I
use
that
to
always
grab
the
trace
id,
because
that
runs
in
the
current
calling
thread,
but
like
the
actual
log,
munging
and
processing
and
stuff
that
semantic
logger
does
behind
the
scenes,
is
on
a
different
background
thread.
C
So
you
can't
grab
current
trace
that
way.
However,
if
we
did
have
some
sort
of
mapping
between
traces
in
progress
and
threat
ids,
I
can
imagine
ways
that
that
could
be
simplified.
I
don't
know
if
it's
actually
going
to
work
or
if
it
would
be
a
good
idea,
but
I
can
at
least
sort
of
see
other
worlds
where
we
might
want
to
know
what's
going
on
in
the
open
telemetry
world
for
a
given
thread
but
like
in
a
way
that
like
doesn't
require
you
to
reach
into
the
thread
locals.
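The "global map of thread ids to in-progress context" idea could be sketched as below. This is a thought experiment only, with the concurrency concern addressed by a mutex; all names are hypothetical.

```ruby
# Hypothetical registry mapping threads to their current trace context, as
# an alternative to reaching into another thread's thread locals. A mutex
# guards the map because writers and readers run on different threads.
class CurrentContextRegistry
  def initialize
    @lock = Mutex.new
    @by_thread = {}
  end

  # Called by the owning thread when it makes a context current.
  def set(context, thread: Thread.current)
    @lock.synchronize { @by_thread[thread.object_id] = context }
  end

  # Safe to call from any thread (e.g. a profiler or logging thread).
  def get(thread)
    @lock.synchronize { @by_thread[thread.object_id] }
  end

  # Called by the owning thread when the context is detached.
  def clear(thread: Thread.current)
    @lock.synchronize { @by_thread.delete(thread.object_id) }
  end
end
```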
A
This
seems
like
a
conversation
that
we
should
bring
up
as
part
of
the
profiling
what's
up,
because
it
should
be
if,
if
we're
going
to
start
adding
things
specifically
for
the
sdk,
I
think
that
discussion
needs
to
happen
there
I
mean
all
these
vms
are
going
to
have
pretty
similar
models.
I
would
think
yeah.
You
know
ruby
being
a
little
bit
different
in
that.
A
Like
I'm
looking
at
this
right
now,
I
think
this
is
using
fiber
local
variables
versus
thread
local,
not
that
not
that
it
makes
a
difference
for
this
conversation,
but
I
I
I
don't
know
that
we
want
to
experiment,
I'm
I
I'm
I'm
an
agreement
here.
We
don't
want
to
experiment
on
our
side
with
this,
without
digging
more.
F
...of threading. Sorry, of tracing. And then logging is also somewhat independent, but in practice, in logs, you want access to the current trace context so that you can log the trace id and span id, and likewise in metrics: if you want to support exemplars, then you need the current trace id, at least for that. So, yeah.
E
It sounds to me like we're almost all in agreement that this is probably not a thing that we want to do, and that, for Datadog's purpose, they can...
E
They
can
do
something
nasty
themselves
in
order
to
to
read
a
thread
logo
from
another
thread
and
get
around
this.
It
does
seem
like
there
is
this
larger
question
around
like?
Maybe
what
what
can
we
provide
as
like
a
diagnostics,
api
or
something
that
might
kind
of
feed
more
into
this
profiling
stuff?
E
And
I
guess
those
are
interesting
things
to
think
about
and
yeah
it
seems
like
if
we
could
have
some
things
that
fall
into
that
realm,
that
are
kind
of
like
diagnostic
type
things.
E
That
would
be
a
good
way
to
put
something
like
this,
but
it
seems
like
this
particular
thing
is
just
like
kind
of
a
dangerous
and
scary
thing.
In
general,
it's
like
you,
can
maybe
kind
of
get
away
with
it
because
of
how
mri
handles
things,
but
it's
like
you
can't
probably
reliably
get
away
with
it
in
every
implementation.
C
And
it's
also
worth
noting
too,
as
it
stands
for
trace
and
profile
correlation
to
work
with
the
datadog
profiler
they
have
to
monkey
patch
batch
band
processor
anyways.
So
it's
not
like.
I
don't
think
it's
actually
that
bad
to
push
back
and
say
like
well.
You
know
add
this
to
the
monkey
patches
and
it'll.
Be
okay,
like
it's
a
small
targeted
monkey
patch
that
they'll
have
to
make
here
and
and
it's
okay
in
that
in
that
instance,
for
them
to
carry
that.
C
I
think
and
I'll
be
happy
to
follow
up
with
them
too,
since,
like
I
said
this
all
traces
back
to
my
handy
work,
if
you
go
through
the
commit
logs
far
enough,
so
I'll
talk
I'll
mention
it
a
little
bit
too
cool
yeah
thanks.
F
Yeah,
if
they're
really
truly
stuck,
which
they
shouldn't
be
because
they
can
just
monkey
patch,
but
if
they're
really
truly
stuck
worst
case,
we
could
consider
adding
another
method
here,
which
is
you
know,
I'm
really
like
I'm
dirty,
and
I
really
want
some
other
threads
context.
Please
right.
Do
they
a
completely
obnoxiously
named
method
that
clearly
points
to
the
fact
that
you're
abusing
the
api
yeah
context.
H
I
think
it's
reasonable,
even
like
speaking
with
a
vendor
hat
on
it's
reasonable
for
the
the
main
project
to
not
take
on
so
much
risk
until
it
like
maybe,
is
proven-
and
I
don't
know
to
continue
the
marshall
theme
smaller
blast
radius,
if
it's
in
a
third-party
distribution
or
something
and
then
if
it's
demonstrated
as
useful,
it's
something
that
can
get
brought
into
the
core
project.
Eventually
yeah.
E
Yeah,
let's
jump
back
to
other
pr's,
I
will
mention
that
I
do
have
a
hard
stop
at
10.
I
might
even
stop
like
two
minutes
before.
H
But
I'll
do
a
a
quick,
quick
touch
point
since
andrew
and
I
are
both
on
there
andrew
I've.
I
had
that
error
that
I
mentioned
on
the
action
view
pr
in
one
project,
but
not
in
another.
So
I
think
I
maybe
need
to
like
have
a
little
separate
place
and
show
my
work
and
have
it
be
a
clean
demonstration,
because
it
could
have
been
something
that
I
was
doing
wrong
in
that
project.
So
yeah.
C
We've run into this with the one that Robert opened for another part of Rails, where it depended on how you initialized it, and we never really tracked it down precisely. So I think that would actually be really, really helpful, because I think...
F
Cool. Do we have any other pull requests or issues? The X-Ray-compliant ids: I've provided a bunch of feedback there.
F
This
is
interesting
because
x-rays
really
weird
and
requires
unusual
trace
id,
so
you
actually
need
to
replace
the
trace
id
generator.
This
was
actually
the
entire
reason
that
id
generator
like
the
id
generator
interface
was
created,
was
just
for
aws
x-ray,
so
not
super
clear
where
you
should
put
it,
but
realistically
you're
only
going
to
be
using
aws
x-ray
propagation,
if
you're
using
aws,
aws
x-ray
id
generation,
so
sticking
them
in
together
seems
reasonable.
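For reference, X-Ray's constraint is that the first four bytes of the 16-byte trace id encode the epoch timestamp in seconds, which is why a pluggable id generator is needed at all. A rough sketch, not the shipped implementation:

```ruby
require 'securerandom'

# Hypothetical X-Ray-style id generator: the leading 8 hex characters of
# the 32-character trace id are the current epoch seconds, the rest are
# random. Span ids stay the usual 8 random bytes.
class XRayIdGenerator
  def generate_trace_id
    format('%08x', Time.now.to_i) + SecureRandom.hex(12)
  end

  def generate_span_id
    SecureRandom.hex(8)
  end
end
```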
E
I guess the can-of-worms comment that I shouldn't go into is: I feel like this could somehow, like, apply to B3 and other systems where you can have either 64- or 128-bit ids. Like, right now we do some zero padding to make sure that things are the right lengths, and it just all gets very weird, and I feel like...
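The zero padding mentioned is just widening a 64-bit hex id to the 128-bit form, e.g.:

```ruby
# Left-pad a hex trace id to 32 hex characters (128 bits); a 64-bit B3 id
# arrives as 16 characters. Already-full-width ids pass through unchanged.
def pad_trace_id(hex_id)
  hex_id.rjust(32, '0')
end
```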
F
What
yeah
it
turns
out?
This
is
broken
in
a
lot
of
places,
including
the
collector,
so
yeah
there's.
I
don't
know
if
this
whole
thing
was
not
well
thought
out,
but.
C
Is
there
anything
for
1.0
that
we
need
to
discuss
first,
like
anything
else
that
needs
to
be
picked
up
or
work
that
needs
to
be
doled
out?
Sorry,
all.
F
Just now, okay: there are a lot of documentation-related things. There are two spec compliance things that, I think, have already been claimed. Eric claimed one of them, which was just to go and check that we don't store baggage in context when we have empty data; if he doesn't get around to that, I can always pick it up and take a look. There may be no change required.
F
It's
just
a
matter
of
like
convincing
ourselves
that
that's
true
and
the
other
one
which
andrew
has
a
pr
out
for
is
making
no
alt
text
map
propagator
private.
B
It looks like you added support for a separate list of exporters in the environment variable. That one doesn't look to me like a spec compliance issue; that one looks pretty straightforward. I was wondering...
F
So, cool, a lot of the other milestone issues are documentation things.
C
Yeah,
I
am
I'm
working
on
a
grpc,
auto
instrumentation,
to
try
and
see,
since
we
wanted
documented
examples
for
that,
it
turns
out
that
it's
actually
not
really
easy
to
do
it.
On,
like
I
ran
into
problems,
but
making
headway
on
that
so
hopefully
that'll
be
up
this
week.
Oh
that'd
be
amazing.
F
Kind
of
vaguely
related-
I
don't
know
if
we
want
to
discuss
this
rob,
but
the
protobuf
thing.
How
do
you
feel
about
that.
B
Prior to this, from what I'm gathering from talking to people within, like, our internal company: Francis and I have discussed a few times, like, getting rid of this as a dependency, and so I'm going to be, with Francis's guidance, exploring getting rid of it in favor of our own library, and looking at using... what was it that crossed raspberry?
F
So
doing
something
in
rust,
basically
a
little
rust
plug-in
to
do
the
encoding
just
so
we
can
get
the
protobuf
gem
out
of
the
way
the
recurring
issues
for
protobuf
over
a
long
period
like
many
years.
It
has
held
us
back
from
upgrading
to
new
versions
of
ruby
because
they
take
a
while
to
add
support
for
new
versions
of
mri
and
then
more
recently,
we've
had
these
memory
consumption
issues
which
have
been
ongoing.
F
Now,
for
I
think
quite
a
few
months
and
we've
tried
newer
versions,
but
they
have
just
different
versions
of
the
same
problem.
So
we've
chosen
to
pin
the
version
of
protobuf,
but
now
we're
encountering
issues
where.
F
And we have people who are now saying: well, you know, we can't back away from this, so we need to just turn off tracing, right? Which is also a frustrating result. So, yeah, we're just thinking: let's try to fix this narrow little issue in a different way.
B
Then
it
very
much
becomes
under
our
control
too
right
so
like
if
there's
an
issue
with
our
encoding
library,
then
at
least
like
we're
in
full
control
fixing
it
as
well
right.
So
it
has
like
the
two-sided
part
of
it
that
we
get
the
maintenance
of
it,
but
also
we
can
maintain
it
more
directly
and
yeah.
So
that's
something
that
I'm
going
to
be
like
spending
time.
H
F
Two options: one is to take the span data and encode it as protobuf in the OTLP exporter, so we just replace that portion of the OTLP exporter with some native code. The alternative is that we actually write the entire OTLP exporter in Rust. Yeah, you could do it in C or something, but Rust is the new hot, safe native... yeah, yeah.
H
F
Yeah, so that's a valid concern, and we should think about that a little bit more. People are used to installing native binaries with a C compiler, they're used to C extensions, but Rust...
F
H
F
I mean, it's becoming more common in the Ruby ecosystem to have Rust native extensions, but for sure it isn't the norm. It's an exciting way to accelerate.
F
Beautifully, but I don't know how well it performs with MRI.
C
F
B
That's what I was thinking. I don't think we should necessarily rip google-protobuf away from everyone; maybe leave the existing implementation as it is, and this would be another example of something that could potentially live in contrib. I don't know if it would have to be the official version of it. Yeah, well, maybe it would end up.
F
So I do know that... I'm trying to think of the memcached one. What's everybody's favorite memcache gem? Dalli. So Dalli has the hash ring computation piece, which is computationally expensive. They have it written in Ruby, but then they have an optional native extension that performs that same computation in C. So it's possible to distribute things in such a way that this thing is an option we...
C
Could say. It sounds a little bit like the multi_json world too: it'll pick the fastest one it can. That would be at least a good approach, I think, independent of GitHub's problems, because we can fix those. As long as there's a fallback so people don't have to install Rust in order to install the Ruby project, I think it would be a good idea. Yeah, I agree.
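The optional-native-extension pattern that Dalli and multi_json follow can be sketched roughly like this. All gem and class names below are hypothetical, for illustration only, not real OpenTelemetry Ruby code: try to load a native encoder gem, and fall back to the pure-Ruby path when it isn't installed, so nobody is forced to install a Rust toolchain.

```ruby
# Hypothetical sketch of the fallback pattern under discussion; the gem
# and class names are illustrative, not the actual exporter code.
module OTLPEncoding
  # Pure-Ruby encoder: always available, no compiler or Rust needed.
  class RubyEncoder
    def encode(span_data)
      # ... serialize span_data via google-protobuf here ...
      :encoded_with_ruby
    end
  end

  # Pick the fastest encoder that can actually be loaded, multi_json-style.
  def self.best_encoder
    require 'otlp_native_encoder' # hypothetical Rust native-extension gem
    NativeEncoder.new             # would be defined by the native gem
  rescue LoadError
    RubyEncoder.new               # graceful fallback to pure Ruby
  end
end
```

On a machine without the native gem, `OTLPEncoding.best_encoder` silently returns the pure-Ruby encoder, which matches the "fallback so people don't have to install Rust" requirement.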
F
Cool. We haven't started on this yet; it was just something that came up again yesterday and irritated us all over again, so I'm just getting it out there. Cool, that's all I had. Was there anything else folks wanted?
A
A few things; two more items. We're over time, so I don't want to drag this out. One is, I ran into a problem where we're using OTLP headers, and per the spec it says, hey:
A
This should be key-value pairs separated by an equal sign, with each key-value pair separated by a comma. But really what it's saying is that it should be the W3C context format, so a value might have an equal sign in it, and that should be okay. What I found was that the golang exporter, when it's being configured, will read a value and CGI-unescape it: it'll take that query-string value and unescape it. So it's assuming that the environment variable is escaped.
A
And, no, the spec does not say that. So I'm feeling like opening an issue against the golang SDK to point this out. Ours, however, uses a CSV parse, where it's separating by comma and then equal signs. So, go ahead.
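The parsing difference being described, splitting each pair on the first `=` only so that values containing `=` survive intact, can be sketched like this. This is a simplified illustration of the idea, not the actual SDK code:

```ruby
# Minimal sketch: split the OTEL_EXPORTER_OTLP_HEADERS value on commas,
# then split each pair on the FIRST '=' only, so values that themselves
# contain '=' (e.g. base64-ish tokens) are preserved. No CGI unescaping.
def parse_otlp_headers(raw)
  raw.split(',').each_with_object({}) do |pair, headers|
    key, value = pair.split('=', 2) # limit of 2 keeps '=' inside the value
    next if key.nil? || value.nil?  # skip malformed fragments
    headers[key.strip] = value.strip
  end
end

parse_otlp_headers('api-key=abc=,team=obs')
# => {"api-key"=>"abc=", "team"=>"obs"}
```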
C
A
Yeah, but internally it's using this internal instance of the tracer, so it's not referencing the global tracer. And so I'm wondering: will there be an issue where we might have a local tracer and a local tracer provider for that auto-instrumentation, and will that cause a problem for those folks?
A
F
It's technically possible, and I believe, if I'm not mistaken, the spec allows you to have multiple tracer providers, but you can only have one global one. It's a bit of a messy thing, though, because your tracer is tied to the tracer provider, or it has to be tied to the tracer provider...
F
...that provided it. So if you have auto-instrumentation and you grab a tracer at initialization and hold on to that, then if you have multiple things calling into it, like if you have that instrumentation used in multiple paths, then it's just...
F
A
This was a suggestion that was offered up, and I don't think it was a requirement for us to have removed that reader, or removed that access through the tracer. I'm wondering if we can either revert that, or make that paired tracer provider and tracer available through the instrumentation base.
C
So, just to clarify, Ariel: what bug are we trying to fix here? Or is it just... because I think, looking at instrumentation base with the ActiveJob stuff, the tracer that is configured, the instance tracer, comes from the global tracer provider once it's installed. So in practice, at least, I don't think it poses an issue.
B
The idea with supporting multiple tracer providers: we support multiple tracer providers in the sense that, yes, you can do it, but the idea around it was that if you initialize an additional tracer provider, you're very much in the weeds, and it's on you; this is for you to manage. You've decided that you don't want to use all the defaults and you just need a separate tracer provider, for whatever reason. That may be valid, but you are now going to control that and manage it on your own.
A
I would like the ability to say: I want to configure this tracer with these specific auto-instrumentations enabled, just for this tracer, or for this tracer provider as it generates other tracers. That's me leaning more towards... because I lean away from global stuff. But allowing us to inject the tracer and/or tracer provider, or at least keep that reference from the tracer itself to, I guess, its birth parent tracer provider, or...
F
What I'd like to see is example code for using multiple tracer providers in a real-world... not a super real-world scenario, but if we can think of, like, here's a framework where we might want to support multiple tracer providers, then we can start going through: okay, how would we refactor the code to support this scenario?
F
It's a little harder to think about completely abstracted from any real-world scenario. Yeah, yeah, immediately.
A
In my head, I'm thinking words that make Rubyists tremble, you know. But anyway, thank you for giving me 17 extra minutes of your day, Kevin. I hope that you had fun here, and I hope that you come back.