From YouTube: 2023-03-10 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
B: Leaving tonight, flying to Australia, so yeah.
B: Thank you. So yeah, I'll be out for the next two weeks. I think this was the last meeting on our initial six-week schedule.
B: Let me... I wanted to update you on the ECS discussions. We discussed ECS with the GC/TC, and there's an agreement that if we can prove out a good migration story for users of these breaking changes, and that ECS would encompass those specific HTTP ones...
B: ...then people are on board for doing that: doing all of those renames and breaking changes.
B: One other thing: they wanted us to do it in one other language, in a non-Java-agent language like Python. That could potentially mean we have to maintain two sets of instrumentations, or people can use the old instrumentation for a while so they don't have to upgrade, and we would security-patch those or something if needed. And then the other biggest thing is the Collector: proving out the collector translations.
B: So I think, if we look at the breaking changes we're talking about, a lot of them are just renames, and some of them are conditional renames, which we don't have schema transformations for yet.
A: We can do it in the collector, I think, because there could be a pipeline of processors.
A: It doesn't mean it's easy for users, but we can prototype it, we can prove it works, and we can add work items to the collector to make it easier, or add new transformation rules or something.
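(Editor's note: the rename-style transformation A and B are discussing could be prototyped as a very small processor step. The sketch below is hypothetical: the function names and the rename table are made up for illustration, and this is not the actual collector schema processor.)

```go
package main

import "fmt"

// renames maps old attribute keys to new ones for one schema-version step.
// These two entries are illustrative examples, not the real semantic-convention changes.
var renames = map[string]string{
	"http.method":      "http.request.method",
	"http.status_code": "http.response.status_code",
}

// ApplyRenames returns a copy of attrs with every renamed key rewritten.
// Keys not in the table pass through untouched, so the step is safe to run
// on telemetry that contains attributes it knows nothing about.
func ApplyRenames(attrs map[string]string) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if nk, ok := renames[k]; ok {
			k = nk
		}
		out[k] = v
	}
	return out
}

func main() {
	attrs := map[string]string{"http.method": "GET", "net.peer.name": "example.com"}
	fmt.Println(ApplyRenames(attrs))
}
```

A conditional rename, the harder case mentioned above, would add a predicate on the attribute set before rewriting the key, but the shape of the pass stays the same.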
C: So I think the concerns from the GC, no, the TC, well, specifically the other Josh...
C: The processing overhead of schema URL is of prime concern. We need to prove out that the kind of processing they expect schema URL to have is negligible, within some n percent, say less than 10%, of just general telemetry collection. Then we're probably okay. But let's say this processing starts taking up a significant amount of your ingestion CPU and RAM; well, that means schema URL isn't viable.
C: I'm saying in general there's a concern about the weight of the processing that schema URL takes. So if we add schema URL transformations, that increases the amount of CPU and memory required to collect telemetry in the pipeline.
A: I see. So it's the question: assuming we have the old version on the application, how much additional resources would it take on the collector to do all the transformation? But essentially it's also a question of how much we're ready to invest into optimizing that flow, because of course I'm sure it's optimizable. So it's more like a target, and how much resources we can put into it.
C: Well, I think there's proving it's viable and then proving it's worthwhile. Step one is: let's just make sure we can do it. Step two: there's a concern that even if we do it, it's too much overhead anyway. And then there are also proposals on the floor to just allow semantic conventions to have different version numbers in the spec, so we could actually have breaking changes come out by having a major version number.
C: Anyway, I just wanted to make sure you have context on the full scope of that discussion, because it's not good enough that we just prove that it works. We also want to show the overhead and possibly have some benchmarks.
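(Editor's note: the benchmark C is asking for could start as small as timing a rename pass against a plain copy baseline and reporting the relative overhead. This is only a sketch; the rename rule, the data shape, and the iteration count are all invented here, and a real measurement would use the collector's actual schema processor and Go's testing.B harness.)

```go
package main

import (
	"fmt"
	"time"
)

// renameStep rewrites one hypothetical old-style key while copying the map.
func renameStep(attrs map[string]string) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if k == "http.method" { // illustrative rename only
			k = "http.request.method"
		}
		out[k] = v
	}
	return out
}

// passThrough is the baseline: the same copy with no schema logic at all.
func passThrough(attrs map[string]string) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		out[k] = v
	}
	return out
}

// timeIt reports how long f takes over n iterations on a small attribute set.
func timeIt(n int, f func(map[string]string) map[string]string) time.Duration {
	attrs := map[string]string{"http.method": "GET", "net.peer.name": "example.com"}
	start := time.Now()
	for i := 0; i < n; i++ {
		f(attrs)
	}
	return time.Since(start)
}

func main() {
	const n = 100000
	base := timeIt(n, passThrough)
	renamed := timeIt(n, renameStep)
	// The interesting figure is the relative overhead, per the "less than n percent" framing.
	fmt.Printf("baseline=%v rename=%v ratio=%.2fx\n", base, renamed, float64(renamed)/float64(base))
}
```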
D: Is perf a blocker? Over time this perf concern should go away as people move to a newer version, right? If they keep using the old version, then having them pay some perf cost is actually a nice thing, because that's the only motivation for why they would ever move to the newer version. So if we optimize that to be even faster than moving to the new version, then we did a bad job. Maybe we should add some artificial slowness transformation just to motivate people, right?
C: I hear what you're saying. However, let's say the overhead turns out to be 20% of what you were paying without a transformation, versus just a simple attribute read, in terms of RAM usage and CPU usage. I think that's actually kind of unacceptable. We're in a world where people want us to take zero RAM, zero CPU, zero disk, and have a hundred percent uptime. That's the reality of observability, right? And so I think we're going to have pressure to remain low.
C: So two things, really. It's possible the Collector needs to optimize for this use case and it's bad at it today. That would be a really big deal, but the collector already does enrichment, so I don't think that's going to be the case. I think we're totally fine. All I'm asking for is a benchmark to alleviate the concern I heard, because I think that will address it.
D: It's more that the concern should come from having that benchmark done against something real, so that we address a real problem and we know what the effect is. For example, if the concern came from someone whose expectation is that it should add zero cost, then maybe we can save ourselves some effort, because whatever we do is never going to hit that goal, right?
A: I think there are two good reasons to do this work. The first one: I have my concerns about whether, with this stabilization, we should allow schema transformations at all. I don't think we are ready, but...
A: Yeah, doing this exercise would at least help me understand whether we are ready, and how far we are from being ready. The second part is: I agree, Riley, with your argument, that if schemas were something that came up ten years ago, or maybe a year ago, we would be in a position now to support them.
A: Also, okay, say we don't care about the overhead. But I think once we switch attributes, we will be in a period, a couple of months or maybe six months or maybe even more, where everyone hits it, and if it doesn't work, they will be very pissed off. We would break someone, and it would be another Datadog, but for the whole industry, which would be a terrible time for OpenTelemetry.
C: Yeah, the other thing I want to call out that I'm planning to help with here: because I don't think it's good enough for just the collector, I was planning to work with Trask and the Java instrumentation folks to get an implementation in the Java agent, because I think it'd be nice.
A: As long as we don't rename many attributes after stability, I can accept schemas, but I question the necessity of renaming them. What are we gaining by renaming attributes after stability?
C: Well, yeah, that's totally fair. Anyway, in terms of this: I was actually planning to do it as the same kind of schema processor you'd run in the collector, one that can run generically in the OTel SDK as a processor or something, and have that as a contrib for Java. Then once we prove we can do it both in an SDK via existing hooks and in the agent, I think we're good.
B: Sounds like a plan of not-small things, but for a good purpose. So yeah, we just met with ECS this morning, and there were a couple of things; I think Riley already posted back on that OTEP, so that feels like it's moving forward. Good job, Riley.
C: Yeah, did you read the... I put together a paper on this. Oh, you didn't see it? Well, it's probably because it's hidden with a bunch of other things I did. Let me see... stability proposal... I don't think that's the right one.
C: So fundamentally, the issue is a query-time expectation of users. When you create, say, a log-based metric or a trace-based metric, or you define an alert, you're aggregating data and you're defining: here are the attributes to preserve. With metrics it's the opposite: it preserves all attributes by default unless you say get rid of these, so you have to explicitly aggregate, right?
C: So did I put the right link in the notes? Let me just put a link there. Oh yeah, great. It's from this thing which I never finished, but if you look here: stability around allowing metric attribute changes. So effectively, the specification today specifically excludes adding attributes to metrics, so you cannot add any attributes. I'm proposing that we open up this restriction, but the key here is that we can't change the time series. If you scroll down a little bit, effectively...
C: What happens is: if I have an alert threshold and a set of attributes and I split my time series, then for that third case, attribute 1 equals a and attribute 2 equals c, that yellow graph could actually fragment in a way where I no longer alert, because I've divided the time series, and because of the way queries are written I'm not aggregating away these new attributes that I'm not aware of. By default, I'm just grabbing all the time series for a metric.
C: So I'm now looking at this kind of thing: I have an alert where I had old time series, I add a new attribute, and now it's fragmented. So I actually should be alerting, but I'm not, because when I first defined my alert I didn't have all these new time series, but now I do. So the issue with adding time series and attributes is actually around alerts and alert thresholds, and it's because by default the predominant query languages give you all time series.
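(Editor's note: the fragmentation failure mode C describes can be shown in a toy model. The series keys, values, and threshold below are invented for illustration; this is not PromQL, just the shape of the problem: a per-series alert fires before an attribute is added, and goes silent afterward even though the total signal is unchanged.)

```go
package main

import "fmt"

// metric is a set of time series for one metric, keyed by their label set.
type metric map[string]float64

// anyAbove mimics the default query-language behavior: every series for the
// metric is returned, and the alert fires if any single series crosses the threshold.
func anyAbove(m metric, threshold float64) bool {
	for _, v := range m {
		if v > threshold {
			return true
		}
	}
	return false
}

func main() {
	const threshold = 8.0

	// Before: one series carries the whole signal, so the alert fires.
	before := metric{`route="/login"`: 10}
	fmt.Println("before:", anyAbove(before, threshold)) // true

	// After: a new "region" attribute fragments the series. The total is
	// still 10, but no single series crosses the threshold: the alert is silent.
	after := metric{
		`route="/login",region="a"`: 6,
		`route="/login",region="b"`: 4,
	}
	fmt.Println("after:", anyAbove(after, threshold)) // false
}
```

Aggregating away unknown attributes before comparing against the threshold would restore the old behavior, which is exactly the explicit group-by discipline discussed later in the meeting.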
B: To talk about this example here: I would have expected this to be x, y, x, y, and that only attribute 3 would vary across these. Oh.
B: So that's interesting. I mean, with the alerting tools that I've used in the past, you say: I want to alert on, say, HTTP server duration. I want to alert when my login route's ninety-fifth percentile exceeds five seconds, and behind the scenes it will aggregate away the others.
C: If you by default aggregate away everything, then you don't have to deal with this instability problem that we found. So again, this relates to what I do internally at Google, one of my three job functions.
C: We own kind of the readability and semantic conventions internally for metrics for Google Cloud Platform, and this is one of those big no-no's where we've effectively broken customers repeatedly by adding attributes that break their alerts. Because by default, unless your query language explicitly does that group-by, and maybe that's what Microsoft does, it's very easy to write a metric query that doesn't do this and for that to be the right thing to do.
C: For example, if you are limiting what you group by, in some metric languages that means the query won't automatically apply to every single possible node, or pod, that you're running. So if every individual pod has a target identity in Prometheus, and I'm grouping latency and explicitly say I just want these labels, I have to know that I'm including different targets and different HTTP endpoints or whatever, and explicitly do that.
C: And while we could promote a best practice of always explicitly specifying your time series, it's not common today. So if you're doing anything in a Prometheus database or a similar database, you're likely to run into this issue. The reason why I'm promoting this for OpenTelemetry is just the prevalence of Prometheus, and the fact that this would be a constant source of friction for them.
B: Does Prometheus say anything about this? Do they say anything about stability when adding metric attributes?
C: You mean, do they consider it a breaking change or not? I know it's a breaking change because of our Kubernetes work; I don't know if they say anything specifically about that, I'd have to look. I know that they don't for adding attributes that change the number of time series; I'd have to look, I don't know if they call that out or not. But there's a fun story here: this is the histogram percentile thing. If you look at that, it's going to grab a whole bunch of time series and get rid of the le label, which is the histogram buckets, but it's going to preserve all the other time series.
B: Okay, so I think I understand now. That's hard, because it doesn't match my experience. But if...
B: Yeah, and if we could point to something... well, this doc helps a lot and talks about it specifically. I think the PromQL thing is the most convincing argument, and if we can cite anything from Prometheus about that, it would help others.
C: Yeah, I tried to take a 100% PromQL view in that document: what happens in PromQL, how the queries look, how people write queries, what the best practices are in PromQL. Right now I think we could make the case to encourage people to always write their queries to explicitly include which attributes are aggregated away and which are not, so that they don't care about new attributes, but that's just not common today. That's not what people are doing.
B: Let's see. We're going to be breaking again when we move to ECS, so I was trying to think whether it makes sense to still go ahead with this now, or wait. I don't think this is used anywhere except messaging today, so I'm not opposed to merging it now, just to clean up our story, and then it'll be more...
B: We could adopt it in HTTP... well, I don't know, so yeah.
B: My thought is, if we wanted to, it seems reasonable for this to be the default view, and we could add a net.layer6.protocol if we needed to.
A: Yeah, it's also common for us to have a logical attribute, like net.peer.name, which can mean anything depending on the level you operate on, and we can reuse it. And it's easier for backends to say: okay, this is my protocol, whatever it is, and then have something more specific to tell what it is.
A: It would identify the best candidate, and I'm talking about the future, so I don't know. But my proposal would be to say: okay, it can be TCP, but then there is something else, an attribute specific to that flow-level instrumentation, to explain: oh okay, it's the transport protocol we're talking about there.
B: Cool. And the other one, I mean, we need more reviews on this, and I pinged the folks who had commented earlier but hadn't approved. This one is maybe a slightly more important attribute that we're breaking, but I think this is what we want to do even with the ECS merger. Is that right, Ludmila?
A: Yes, because even ECS has the problem with the client IP attribute. So if we decide to do this, it won't change with the ECS merge.
B: Cool. So I do think we should get more reviews, and hopefully see whether the concerns of the folks who commented are addressed. But I think having this before we do ECS would also help us, because this is one of the breaking changes we want to ask them to make.
C: How much are we pushing back on ECS conventions, too, where we feel like we have something better? I know the, what is it, the server/client distinction versus net.peer is something where we're planning to break ours in favor of what they have, and I just want to make sure that's because we feel like what they have is better, or is it something we should push back on?
B: That was the one that I raised in this morning's meeting with Alex, the one that I'm the least confident on. These three, I basically agree; I like all of their stuff better. This one, I don't know. I actually personally like the client/server naming; it gives names to that, and it feels a little more concrete. But I don't think we're as 100% sure on this one.
A: The thing that I'm not really happy about is that apparently I forgot that they're using client/server, and I thought they were using source/destination. So they have four namespaces, or even more, to describe something, and it took us quite a while to understand what we should use, and I already forgot. So the feedback that I have is: it's confusing.
B: That's great feedback, yeah. So maybe, for example, when we're pulling everything over, we don't pull over source and destination, and only pull over client/server.
E: Quickly, before we get to it, I just wanted to get up to speed. It looks like we decided to merge ECS at some point, but I wanted to know what it actually takes for us to reach HTTP semantic convention stability. Do we want to pass on it or still go with it? This part is not really clear to me.
B: We will keep this working group very active and accomplishing, pushing hard on that. But yes, there's a bar: proving that we can transition our OpenTelemetry users over these big breaking changes.
C: To be fair, I think that bar exists today, given some of the breaking changes we're making anyway. I don't see the ECS part adding a ton of work, besides the fun of having yet another voice and another opinion, and then us having to resolve all our opinions with that new opinion.
B: Well, I'm specifically thinking of the prototyping work we need to do in Java and the Collector. I could see that being a non-trivial amount of work.
B: Sorry, in Java, I could commit to it and go, like...
C: Sorry, I'm derailing the whole thing. This last one we talked about in the group, but I don't remember what came of it. I don't think there was ever a resolution to it, right? I don't know how you feel about this, Riley, if you're still on the call, but this one's starting to annoy me because...
B: Ludmila, if you have a minute, we could... do you have a minute? Yeah, definitely.
F: I think this might need a little update after the prototype that I pushed yesterday, to make sure it doesn't conflict with the thing that I proposed in the OkHttp prototype. But yeah, it's pretty much like the previous PR, without all the controversial stuff, and it just clarifies, at least it's meant to clarify, the two ways of instrumenting HTTP clients: the low-level instrumentation, where you have a separate span for each resend attempt, or the high-level instrumentation, where you just create one span for everything.
F: There are some MUSTs and MUST NOTs here, and this might be a bit too strong, but these are just material for discussion.
A: Yeah, I'll take a look. I'm sorry, I forgot. I wonder about your findings from the prototype; I'm super curious about them.
F: Well, I only tried one case so far; I still have to try actually introducing these changes. In particular, the resend spec changes would probably be the most fun, because they are the most different for every other HTTP client.
F: But for HTTP I tried to implement the resend spec correctly; it's possible to do, and as far as my investigations show, it should be possible to implement it in at least several HTTP clients, like Apache HTTP, for example. And the wrapper span that we talked about two weeks ago: I actually implemented that...
F: Exactly. In any normal scenario, meaning there isn't any connection error or any sort of network-layer error, it just adds more clutter and doesn't add more value, to me at least. So I propose another way: if there is an exception, some sort of error, and there hasn't been any attempt made to send a real HTTP request, we can create a sort of encompassing HTTP span for that case.
F: It just contains everything that the hypothetical resend attempt would have had, like the HTTP URL and the HTTP method and stuff like that, plus the exception that happened.
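(Editor's note: the decision rule F is proposing, synthesize an encompassing client span only when an error occurred before any real send attempt was made, could look roughly like the sketch below. The types and function names are hypothetical stand-ins, not the actual Java instrumentation or the OTel API.)

```go
package main

import "fmt"

// span is a stand-in for a real tracing span, with only the fields relevant here.
type span struct {
	Name  string
	Attrs map[string]string
	Err   string
}

// finishOperation decides whether a synthetic encompassing span is needed.
// attempts are the real HTTP send-attempt spans; errMsg is an error raised
// outside any attempt (for example, a failure during connection setup).
func finishOperation(attempts []span, errMsg, url, method string) []span {
	if len(attempts) > 0 || errMsg == "" {
		// Either a real attempt exists (any error is already recorded on it),
		// or nothing went wrong: an extra span would only add clutter.
		return attempts
	}
	// No attempt was ever made but something failed: fake one encompassing
	// span carrying what the hypothetical send attempt would have had.
	return []span{{
		Name:  method,
		Attrs: map[string]string{"http.url": url, "http.method": method},
		Err:   errMsg,
	}}
}

func main() {
	// Normal case: one real attempt, no synthetic span is added.
	ok := finishOperation([]span{{Name: "GET"}}, "", "https://example.com", "GET")
	fmt.Println(len(ok)) // 1

	// Failure before any attempt: one synthetic span with the error attached.
	failed := finishOperation(nil, "dns lookup failed", "https://example.com", "GET")
	fmt.Println(failed[0].Err) // dns lookup failed
}
```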
A: So if things were good and there were HTTP requests sent, we don't create an encompassing span. Yeah.
A: Otherwise, we fake one and put all the attributes there, including the exception.
F: Yeah, and it pretty much matches how our current HTTP client instrumentations work, except the part where we don't implement the resend spec. Normally, if any sort of error happened before the actual request took place, you would still get an HTTP client span without the status code, without any sort of response information, just with an error.
F: Yeah, you will have the real HTTP span, and if it's an IOException or anything that happens during the HTTP send attempt, it will be captured as part of that span.
F: So the only real problem is when it happens totally after everything, like when you have some sort of smart HTTP client that tries to convert JSON to objects or stuff like that; then those kinds of errors might leak out. But I think it's probably fine if they're captured as part of some parent span, because they are not specifically HTTP-related; they're usually more of a utility function of the client library. So I'm okay with that.
A: Yeah, I agree. Here we try to stay within what we can do in the HTTP layer, and if we need another layer, that's not, and shouldn't be, an HTTP concern.
F: Oh, good to hear that. So I'll take a look at this PR on Monday and see if there's anything I need to update to make sure that the proposed way of handling this actually works and matches the spec, but in general I think it should be okay.