From YouTube: 2021-07-01 meeting
Description: No description was provided for this meeting.
B: Oh my — my super cheap Chinese camera that I bought died, so I'm back to using the laptop one, which — I don't know, I'll just blur the background so my office doesn't distract everyone. I don't know.
C: All right, I'll move this down. OpenTelemetry Java release next week — next week would be...
B: But we do have — so we have the new annotation that'll be in the next one, 1.4. Is that what we're releasing? I don't even remember anymore. But I was wondering whether — whether 1.4 of the agent is going to have that annotation ready for the release the week after that.
D: Just take a look at the generics code — yeah, it's much more complex than you would expect, and it's definitely much more complex than just calling Span.current().setAttribute in your code. Yeah.
C: Does? Okay.
B: Cool. And the other — the other point is the next bullet, which I need to talk to Anuraag about tonight: we don't want to release with the new AWS sampler, because that's being moved over to contrib, and we don't want to release it, have people try to use it, and then have it disappear.
C: Makes sense. I will — I can look at this in the next few days and give you feedback by, like, Monday on whether we're good to go.
B: Yeah, so probably what we should plan on is next week, so the eighth or — yeah, whatever — yeah, the eighth is the right date. We can review whether we need to hold off, and we could pull the annotations out of 1.4 and wait for 1.5.
E: [inaudible] — because, you know, I work at Google, so we invent names for everything and reinvent them. But how — how are you planning to have consistent compute for performance regression testing?
E: Okay — if we need to get financing for consistent compute, I started having some discussions around that, to try to figure out what that looks like, because I think that's going to be a consistent problem across all the languages.
A: Preferably somewhere which can be used as a GitHub Actions agent, or runner — what was the term? I don't remember. That's as specific an answer as I can give right now.
C: Yeah, I was just curious if anybody knows — I know some smaller cloud providers provide, like, dedicated boxes. I don't — I was wondering.
E: What we could try to do with the regression testing is kind of a "how fast is the CPU right now for us" sampler-type thing, and then we do crazy statistics to guess whether or not we have a real regression, or whether it's a the-cloud-provider's-VM-is-starved regression.
A: So yeah, the first — the first idea, which we're certainly going to use in phase one, let's say, is that we just run the two tests that we want to compare back to back, on the same machine, on the same CI agent — so we don't compare today's run with tomorrow's run. We just compare one run with version X and the next run with version X minus one, today, right now, on the same CI run, the same CI job.
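The back-to-back idea described here can be sketched roughly as follows. This is a minimal illustration, not the project's actual benchmark harness; all class and method names are made up for the sketch:

```java
// Sketch of the "phase one" scheme: time the two variants back to back in the
// same process on the same machine, and only compare within that pair --
// never against a run from a different day or a different CI agent.
public class BackToBack {

    /** Times a single run of a workload in nanoseconds. */
    static long timeNanos(Runnable workload) {
        long start = System.nanoTime();
        workload.run();
        return System.nanoTime() - start;
    }

    /** Relative overhead of the candidate vs. the baseline: 0.2 means 20% slower. */
    static double relativeOverhead(double candidateNanos, double baselineNanos) {
        return candidateNanos / baselineNanos - 1.0;
    }

    public static void main(String[] args) {
        // Stand-in workloads; in reality these would be version X-1 and version X.
        Runnable workload = () -> {
            long s = 0;
            for (int i = 0; i < 1_000_000; i++) s += i;
        };
        long baseline = timeNanos(workload);
        long candidate = timeNanos(workload);
        System.out.printf("overhead = %.2f%%%n", 100 * relativeOverhead(candidate, baseline));
    }
}
```

The point of the pairing is that machine-to-machine and day-to-day noise cancels out of the ratio, since both measurements see the same hardware at (nearly) the same moment.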
C: It's still — it's still very valid. I've run these, even with and without the agent, in the cloud, and I mean I've managed, you know, to get reasonably consistent results even with that kind of a test.
C: I spin up like 10, to 20, to 50 VMs in the cloud, normalize across all of those results, and run it with and without, back to back, like 10 to 20 times. It takes — there's still a lot of variance in these kinds of performance tests, even on — even on a dedicated box. I tried.
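One way the "normalize across all of those results" step can work is to take one with/without ratio per VM and then aggregate with a robust statistic such as the median, so a few starved VMs don't poison the answer. This is a hedged sketch of that shape, not the speaker's actual methodology:

```java
import java.util.Arrays;

public class VmNormalize {

    /** Per-VM overhead ratio: time with the agent divided by time without it. */
    static double ratio(double withAgentMs, double withoutAgentMs) {
        return withAgentMs / withoutAgentMs;
    }

    /** Median is far more robust than the mean to a few noisy-neighbor VMs. */
    static double median(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        // One with/without pair per VM; one VM is badly starved.
        double[] perVmRatios = {
            ratio(110, 100), ratio(112, 100), ratio(109, 100),
            ratio(180, 100), // starved VM -- the median shrugs this off
            ratio(111, 100)
        };
        System.out.println("median overhead ratio = " + median(perVmRatios));
    }
}
```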
C: I bought a box several years back — it's sitting here — to try to get consistent results, and even on the — even on my dedicated box, the run-to-run variance was a lot —
B: — higher than I wanted. At New Relic we had an — we had an engineer whose entire job for, like, a year and a half was basically trying out different pieces of hardware, to figure out what he could use that had the least variance on instrumentation performance testing, and it was very, very difficult to get anything that worked consistently. There's just so much variability that can happen.
C: It does take, like, nearly that many. Like — I think I probably run about a thousand times, because I run, you know, 20 to 50 VMs, times about 10 — with 10 back to back, on and off with the agent, about 10 times.
E: Yeah, so there are two PRs basically ready for review. The first one adds the — it switches the data model in the metrics package to use attributes, which was a change that came into the data model a while ago. It — it preserves the existing API with labels. It's had two reviews; I don't know what — what the normal thing is, but — oh, it's already merged, never mind.
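The "preserve the existing API with labels" move is essentially an adapter: the old string-only labels entry point stays, and underneath it converts to the newer typed-attributes model. A toy illustration of that shape — the real opentelemetry-java types are much richer; plain maps are stand-ins here:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LabelsToAttributes {

    /**
     * Old-style labels were string->string; the newer attributes model allows
     * typed values (here modeled as Object). Keeping the labels entry point
     * and converting underneath lets existing callers keep compiling.
     */
    static Map<String, Object> toAttributes(Map<String, String> labels) {
        Map<String, Object> attributes = new LinkedHashMap<>();
        labels.forEach(attributes::put); // every label value becomes a string attribute
        return attributes;
    }

    public static void main(String[] args) {
        Map<String, String> labels = Map.of("http.method", "GET");
        System.out.println(toAttributes(labels));
    }
}
```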
E: Just — I was, I was hoping to get another review there. Okay, cool — thank you, yeah! So then there's a — there's a follow-on PR that — now that that one's merged, I'll rebase and follow up — that one basically wires exemplars from the data model all the way through to the exporters, so it adds them into the OTLP and Prometheus exporters.
E: There's an interesting differential in Prometheus exemplars versus OpenTelemetry that I don't — I'm basically looking for someone who might be more comfortable with Prometheus to review that. That's my ask.
E: I am more than happy to review any Java code you have in — in exchange, like, not a problem, I'll trade some time there; I'll take a look at that other PR listed as well. But if someone could take a look at this, that'd be awesome. If no one here is comfortable, I'll see if I can find someone more familiar with Prometheus. But the — the TL;DR is: the way Prometheus does
E: exemplars doesn't line up with how OpenTelemetry's data package does exemplars, and I think we have the best kind of compromise here, but yeah. That's — so that's what that one is. And then I've been — I'm trying to get views back into the new API/SDK before I send that for review. That will probably be a big-bang PR based on this one. If anyone has ideas for how to fragment that out — that is pull request
E: 333 — I think it's four threes, yeah. If anyone has ideas on how to fragment that out better, I'm more than happy to — to do that. But right now we're using that PR as a prototype to validate whether or not the new spec is implementable, and so I'm trying to get views in quickly, before the spec PR is merged, to make sure that that spec PR is implementable.
B: Yeah, I would just say I don't know of anyone on this call who is familiar with Prometheus at all, so I don't know. Well — I mean, maybe — I don't know if there's anyone else in the metrics SIG; we could ask in a couple hours if there's —
E: I think I could ping — I think — Jonathan, what's his last name? I've — I've been — yeah, I've been [inaudible], yeah.
E: This — I — you mean for the — the SDK and views? Yeah, I've been working on the SDK exemplar spec; it's just not — I haven't published a PR yet. The spec for the data-model part of exemplars is all — is already written. And this — yeah, just noticed that. So, in terms of how the exemplars get into the data package, we still have to define that, but this — that should have — no, that shouldn't break anything that is input for this PR. Okay.
E: Yep — and into Prometheus, yeah. So it's just the stable part of the spec, to the exporters, nothing else. Okay, cool, yeah! I did not want to submit anything that wasn't stable for, like, formal review yet, but that one is stable. So.
E: You know, the worst thing that I do, that you can yell at me for, is I try to use lambdas all the time — because in Scala you can lambda any class that has one single abstract method, and it just doesn't — doesn't do it for me in Java; it yells at me.
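For anyone following along: a Java lambda can only stand in for an interface with a single abstract method (a "functional interface"); unlike Scala, it will not do this for abstract classes, which is the yelling being described. A quick stdlib example:

```java
import java.util.Arrays;
import java.util.Comparator;

public class SamLambda {

    // Comparator is an interface with one abstract method, compare(a, b),
    // so a lambda works here:
    static final Comparator<String> BY_LENGTH =
        (a, b) -> Integer.compare(a.length(), b.length());

    public static void main(String[] args) {
        String[] words = {"otel", "sdk", "metrics"};
        Arrays.sort(words, BY_LENGTH);
        System.out.println(Arrays.toString(words)); // shortest first

        // By contrast, an abstract class with one abstract method still needs
        // a subclass or anonymous class in Java:
        //
        //   abstract class Recorder { abstract void record(long value); }
        //   Recorder r = value -> {};   // does not compile
    }
}
```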
E: That was — everyone in Scala was trying to write Haskell. No, man, not me — at least, you can go look at my code. In any case, there's enough — the loud vocal minority is trying to do that. Sure, okay. So, in terms of — in terms of the rest of the metrics prototype work: if — if you have any ideas for things that can get submitted — I'm trying to, again, only submit things that are stabilized, and the rest of it is kind of prototype-y.
E: I really don't want to throw this big-bang PR on everybody, but I'm afraid that might be the only option. So I'm just looking for more code review — more comfort from this group of people as to whether or not this is the right direction. Yeah. The next step is probably —
E: — it breaks the hell out of the SDK. Just the — just renaming the API classes: they're not — it's not straight-up renames. There are implementation changes that have to accompany it, at least for histograms, and, like, DoubleValueRecorder and ValueRecorder and stuff. I mean, I can — I can try to tease it apart, but there were some — there are some interesting behavioral changes I can go into. I can — I —
E: I should update my notes with specifics, but there were things in the current SDK that were somewhat experimental, and things in the new SDK that change — that break our somewhat-experimental stuff. So, like, the labels processor needs to just get gutted totally and — and reworked; there's this measurement-processor thing that has to sit in the middle, between everything; and then the — the most important thing, though, for the renames, is that gauge, sum, and histogram are — are now kind of split.
E: And so, if I implement those with attributes, I have to either write, in the SDK, a layer that converts attributes back to — to labels for the entire SDK implementation, or I have to rewrite the entire SDK implementation for attributes and then delete it for, like, new changes, right — and all the rewirings within the SDK.
E: — tease it apart from everything else. I do realize it's tangled in with aggregators. Like — yeah, you can't — you can't touch instruments without touching aggregators; you can't touch aggregators without touching the views; you can't touch views without touching the label processor. It's — it's — it's a big ball of... together.
E: Okay — so PR 333 has a working SDK that passes all tests, yeah. But it's also — it's also 13,000 lines and impossible to review, and it drops views. I'm hoping, when we merge the other two PRs, I can get it down to something more manageable. But yeah — I hear you, I hear you. I'm just —
B: — thinking that it would be — I think it would be really cool to kind of have that steel thread, all the way from beginning through, without having to rewrite everything all at once. But maybe that's more work than three-three-three-three-three — however many threes are in there. I don't know. I mean, that's what I did when I took a run at rewriting the SDK nine months ago.
B: That's what I did: I basically said I'm just going to implement a counter and get a counter wired through all the way, just to prove out that we knew what we were doing and that it was all going to work — especially with needing the — the processor, whatever the new processor thing is called — measurement processing — yeah, the measurement processor — with needing that layer in there. It might be worth just, like, separating out counter and seeing —
E: — if we can get it all the way through. Well, again, 333 has it all the way through. So if I'm going to just do counter, I'm just going to gut 333 and take the counter pieces. Does it have — does it have the measurement processor in it, though? I didn't think it did yet. Yeah, yeah, yeah — oh, all right, somewhere in those 13,000 lines there's a —
E: — there's a measurement processor and a default measurement processor, yeah. The way — the way I did measurement processor, we still have to talk to the SIG about, but it has a measurement processor in it. It has — I'm actually working on getting views, as of that PR, implemented kind of similar to what — what was there before, but more matching the — the spirit of the new PR.
E
If,
if
you're
familiar
with
what
riley
wants
to
do,
he
wants
views
to
be
100
declarative,
whereas
before
we
had
interfaces-
and
I
think
I've
proven
that
I
don't
think
we
can
go
declarative
but
we'll
see
we'll
see
we're
close.
I
can.
I
can
show
you
what
I'm
doing
there.
We
can
always
yeah
no.
E: That's effectively what I did, yeah. But I — anyway, the — the point being: this is the — the SDK, as spec'd, as close as possible, implemented — including a measurement processor with a default measurement processor. There's — there's a split between aggregators and instruments now, and there's a hole in place for views; I'm trying to get those implemented and proved out. We have all the components we need to actually have a fully working SDK with all the stuff that's coming down the pike. It should work, it should build.
E
It
should
test
everything
if
you
see
any
bugs
in
there.
Let
me
know,
but
what
I
ran
into
effectively
was
because
aggregators
and
instruments
are
tangled
right
now.
I
really
I
could
take
some
time
to
untangle
them
in
the
raw
sdk.
The
way
I've
done
in
this
in
this
pr,
and
that
might
be
worth
it,
but
I
they're
tangled
in
with
attributes
they're
tangled
in
with
instruments.
E
So
if
you
have,
if
you
have,
if
you
have
ideas
for
other
ways,
we
could
fragment
this
out.
I
can
think
of
architectural
changes
like
the
fragmenting
of
aggregator
from
storage
that
I
did.
I
could
put
that
as
a
pr.
B
Yeah
that
just
I
mean
it
sounds
like,
and
I
agree
with
you
that
there
would
be
value
we
we
have
there's
definitely
value
in
disentangling
like
we.
I
think
we
want
to
do
that
as
much
as
we
can.
I
is
that
also
in
three
three
three
three
at
least
yeah.
So
if
you
think
that's
something
that
you
could
break
out
into
a
separate
pr,
that
would
be
awesome
because
that
sounds
like
it
would
be.
E
Yeah,
okay,
let
me
so
so
step
number
one
is
I'll,
rebase
333
and
see
if
it's
still
ginormous,
I
assume
it
still
works
if
it's
still
possible,
yeah
and
I'll
try
to
come
up
with
more
ideas
for
how
to
disentangle
it.
It's
just
you
know
it's
a
ball.
B
Yep
yeah,
no,
I
know
I
wish
bogdan
had
some
time
to
work
with
you
on
it
as
well.
Since
he's
the
original
author
of
ball.
B: We have — so this is also where, I guess, it's worth mentioning: we have a new approver in OpenTelemetry Java — Jack Berg is now an approver. I've already — I'm attempting to sic him on some metrics stuff. So, you know, if he's going to be an approver, he's got to do the work — put the work in. So.
C: Let's see — I put this on: I started getting questions about whether we are Java 17 ready, and I think this question is going to come up a lot in the next couple months, before Java 17 is released. So I started poking around a little bit; I know Nikita had poked around previously.
A: 16 introduced that — it hides JDK internals by default, and that broke — not us, but that broke the majority of libraries that we instrument, at least the earlier versions of them. So that's the actual — in my opinion, the trickiest question. For example, if we claim that we support a library starting from version X, but it doesn't work on Java 16 until version X.50 —
A: — then do we start splitting all our instrumentations, like before-16 and after-16, where, like, the instrumentation is almost the same but we test it differently, somehow? I don't know.
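If the instrumentations did end up split by JDK level, the gate itself would be cheap: `Runtime.version().feature()` (available since Java 10) gives the feature-release number. A hypothetical sketch of the before-16/after-16 switch being debated — the variant names are invented for illustration:

```java
public class Jdk16Gate {

    /** Picks which instrumentation variant applies for a given feature release. */
    static String variantFor(int featureVersion) {
        // Java 16 turned on strong encapsulation of JDK internals by default,
        // so library versions that reach into them need different handling.
        return featureVersion >= 16 ? "post-16" : "pre-16";
    }

    public static void main(String[] args) {
        int running = Runtime.version().feature();
        System.out.println("JVM feature " + running + " -> " + variantFor(running));
    }
}
```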
C: It's pretty special, yeah. So I updated our smoke tests to 17-EA, and so there —
C: Yeah, GitHub Actions supports 17-EA. Oh, there are some — there are some Docker images for 17-EA, but —
C: — one of which is Spock, which Nikita was mentioning last —
C: I was trying — so, in our distro we don't — we don't use Spock, but when I was trying in the — in the OpenTelemetry repo, I was getting some weird problems with Armeria calling some — some Groovy classes, which it doesn't call.
A: Yeah, but I mean, Spock is now at least stable — 2.0 — so we can try to migrate. We tried — we tried to migrate, actually, yeah — thank you, Jason. So we tried to migrate to Spock, to like some RC, last year, in November or December.
C: So, Nikita, I agree that there's going to be a lot of trickiness around all the instrumented libraries and their level of support. What I'd like to do is get to that problem — like, if we can get our testing infra working, you know, even on — you know, say there's 10 instrumentations that do support Java 17 —
A
Yes,
so
as
the
first
milestone,
I
think
if
we
upgrade
well
probably
it's
still
a
good
idea
to
upgrade
to
upgrade
spock
slash
groovy.
Then
we
can
just
try
around
latest
dependencies
may
we
may
have
better
luck
with
them
than
with
with
baseline
or
base
tests.
C: Yeah. And I started to update the smoke tests — it shouldn't be too hard to get them running on 16 and 17. So that's what I started doing yesterday: just updating the images to support 16, and running those. Let's see if the latest run is passing — four still in progress, okay, so we'll see. But yeah, those — those should be less of a problem, and they'll be good, I think, at least to give us some level of confidence that the agent itself basically works on 16 and 17.
G: You were saying, in your smoke tests on the Microsoft side, 17-EA works fine? Yeah — cool, good, good news.
A
Yeah,
I
just
essentially
thought
told
you
that
you
are
going
to
provide
performance
regression
tests
for
us.
G: It's on the radar, yeah.
D: And by the way — with my last PR in the java-agent extension API merged today, I think I finished most of the changes that I planned to do in this module. So you can take a look at it and see if there's anything else that you want to add, that you want to change, or whatever — because, from my point of view, I think it's pretty much okay right now. I wouldn't call it stable — I wouldn't say that's a reason to declare it stable and remove the alpha thing from the name — but.
C: It's a good question: are these the three that were — did I break this out right, Matthias?
D: You mentioned that we might just kind of match those two together. I think — I think those three modules are probably the only ones that need to be included as dependencies when you're developing some kind of instrumentation. So these three are most of the problem that we have.
C: And so this is the one that you were mentioning — you've been doing a lot of work in this one lately. Nice.
D: Yeah — but SpanExporterFactory is deprecated, which is what I couldn't do — what I couldn't do for MetricExporterFactory, because there is no equivalent in the SDK, and there I left them as they were; I just didn't touch them.
E: But we are providing a prototype, so it's probably okay to break — but let me just check and see if we're using it right now.
B: I mean, code could be written — I don't — we haven't done it yet, because — because we're rewriting the entire metrics SDK, so I don't feel like there's a lot of value in throwing a whole bunch of half-baked SPIs in there until we actually have, you know, solid interfaces and — and even a plan for how we're going to support multiple ex— multiple metrics exporters. Like, that's something we have to be able to do, and I think it's going to radically change the way all this stuff is configured.
B
So
I
haven't
prioritized
it
at
all,
because
I
just
don't
know
what
shape
yeah.
B: There's an SPI in the — in the SDK autoconfiguration extension for all that now — okay — which the agent uses as well.
B: — one. But I mean, we — Splunk did just hire another tech writer to help out with a lot of this stuff, so hopefully we can get them involved with — with open source —
E: — as well. So, regarding MetricExporterFactory — just so — just, just so I'm clear: Google is using it in an alpha release of auto-instrumentation; someone's depending on this and opening bugs, but we actually aren't providing official support for it yet. So, if you really, really, really have to gut it, that's okay — but there is someone trying to use —
C: — it. We can mark it deprecated, just to be clear that it is going away eventually.
C: Nikita — so, to your question about how we get to stability: does the extension API being stable, while other APIs are not stable — does that help you?
C: Yeah, I — I think that's a good idea. I can open an issue, at least, to sort of track progress, and list, like, packages and maybe classes, and we can modify the issue over there — kind of, I don't know, try — try something. I do like the idea of tracking progress towards stability, somehow.
B
Yeah,
that
was
me
sorry.
I
didn't
put
my
name
on
it.
I
was
just
curious
that
we
have
a
semantic
convention
around
content,
length
and
response
body
size,
and
it's
not
it's
basically
not
being
used
anywhere
in
instrumentation.
I
was
just
curious
if
there
was
a
reason
or
just
something
that
hadn't
been
written
and
yeah,
I
just
noticed
that
they
were
like
for
okay,
hdp
client.
No,
nothing
captures
content,
length,
header
and
puts
it
into
a
span
attribute.
G: The extractor — so, when you say — I mean, yes, the instrumentation can manually, like, say "okay, what's the content length?", but that's an explicit call that the instrumentation has to do, right? That's what I'm saying we should be doing.
C: Yeah — so yes, it probably would make sense to convert it to the new Instrumenter API and do that at the same time. The only — only interesting history I know with content length is on the server, for server spans.
B: — didn't go well, yeah. That sound— that sounds like it could lead to all sorts of weird complications. I was more thinking about this just super simple: on an HTTP response, pulling that header. It's not going to capture everything, but it will capture something, which seems better than nothing.
B
I
mean
not
everybody,
not
everything
sets
content
like,
especially
if
it's
more
streaming
stuff
or
if
it
doesn't
know
or
there's
compression,
there's
all
sorts
of
crazy
stuff
or
content
like
isn't
set,
but
it
feels
like
in
the
cases
where
it
is
set,
it's
I
would
say
almost
free
to
pull
that
header
and
use
it
so
cool.
Well,
if
I
have
some
time,
I
may
try
to
see
if
like
what
it
takes
to
convert
the
okay
http
instrumentation
over
to
the
instrumenter
cool.
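The "super simple" capture being discussed might look roughly like this: parse Content-Length defensively and record nothing when it's absent or unparsable (streaming, chunked, compression). This is a hedged sketch with plain maps standing in for the real HTTP response and span types:

```java
import java.util.Map;
import java.util.OptionalLong;

public class ContentLengthCapture {

    /** Returns the Content-Length value, or empty when missing or malformed. */
    static OptionalLong contentLength(Map<String, String> responseHeaders) {
        String raw = responseHeaders.get("Content-Length");
        if (raw == null) {
            return OptionalLong.empty(); // streaming/chunked responses often omit it
        }
        try {
            return OptionalLong.of(Long.parseLong(raw.trim()));
        } catch (NumberFormatException e) {
            return OptionalLong.empty(); // better no attribute than a bogus one
        }
    }

    public static void main(String[] args) {
        Map<String, String> headers = Map.of("Content-Length", "348");
        // In real instrumentation the value would become a span attribute
        // (the semantic convention mentioned above); here we just print it.
        contentLength(headers).ifPresent(len -> System.out.println("length=" + len));
    }
}
```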
C
We
will
see
all
right
y'all.
We
hit
time
five
minutes
overtime.
Sorry
about
that
good
to
see
you
all
yep
thanks.
Everyone
see
ya.