From YouTube: 2021-10-27 meeting
C
It was nice; it was good to see some OpenTelemetry people in person. It was very sparsely attended as far as KubeCons go, which is not surprising.

C
Yeah, yeah, it seemed like they were doing a good job with that. I got the impression the virtual conference was well attended, so yeah.
B
Yeah, I knew a few Sydney people that did attend KubeCon, and they were all virtual; and they were dead throughout the working days, but they did attend, yeah.

D
Like a virtual conference; but I'm glad there are a lot of attendees.
E
And Ted, thanks for adding that timeline, because I think I've also been making some changes and I'll…

F
So I added this to the semantic conventions, as an item, for retries and redirects. Do you want to start with the timeline first, or we can spend some time on it?
F
Okay, so I will share my screen then, just to go through it. So last time, last time we discussed…

F
Let me put the share button here. So last time we discussed different approaches for retries and redirects, and today I actually have some follow-up about this: how we can actually do it. So in the case of retries, last time we decided that it might be beneficial if we can have all these retries as physical spans and link them together, so we can analyze them on the server side or in some analysis tool.
F
So basically, there are four HTTP requests. The way it's currently implemented, I do have these links to the previous tries: the last one has this link to the previous one, which also has the link to the previous one, and this one has a link to the first one. And also, for retries (not for the initial try), I added this new attribute here.

F
Retry count: basically here we have retry count one, and here we have retry count three. So those are the changes for retries. But before we actually go to the discussion, I would also like to show you how it's possible to use the same kind of technique for redirects.
F
So basically, that's really similar. We do have, like, three redirects here: the initial request redirects to this one, to this one, and to this one. And the technique is the same: we do have this reference to the previous one. So that's really similar to the previous approach, and we don't have any additional attributes here.

F
So basically, that's the demo. In terms of the changes I was thinking about to the specification: we also need to make some changes to the YAML files and so on, but the main thing is here.

F
I wanted to add, like, a new paragraph here, a new section, saying that for each subsequent retry a new span must be created. Or, yeah, probably I need to refresh this to the latest version; okay, so I can show you here. Yeah: for each subsequent retry we must create a span, basically, for each physical try, and those should be linked with the previous one using a span link; and this retry count…
F
…attribute should be added to each retry, as the ordinal number of the request. And for redirects it's basically the same: for each redirect a new span must be created, and it should be linked with the previous one. So that's the scope of changes for retries and redirects I'm thinking of, and I would be happy to hear your feedback.

F
So if you do have some retries, that attribute basically starts to be added here; I can show it right here. For the first one we don't have any attribute added, but for the subsequent ones we do have it.
C
Yeah, that looks, that looks freaking great, Dennis. And just to clarify: there's no wrapper span now, right, in this case? Yes…

F
There are no wrapper spans. So, like, in this case I do have this kind of top-level span, which I created just manually, right, and in this case I see all these spans under the same trace. But in the case where there is no top-level span (let's say we are just initiating the trace), it will be just separate traces, and I also do have this example here. So this is the first trace, this is the second, this is the third, and this is the last one, which is successful.

F
So in this case it will be just separate traces, but still we do have our links; they should be somewhere here. We do have…
F
Yeah, we do have some references to previous traces. So basically it's the same thing, but there is no top-level span; still, it will be possible to distinguish, or identify, that we definitely have a retry, just because we do have these trace links (spans linked from other traces), and we do have this attribute here.

F
Yes, amazing, great. So, also, I have a couple of questions on top of this. Actually, we just had a discussion about this internally in Microsoft, and we came up with some additional suggestions; maybe we can also discuss this real quick. So, in terms of, you know, the analysis side…
F
It might be interesting to know how many retries there were. For example, I would like to have some metrics in terms of how many retries I have overall. So here, for this, we can actually add another required attribute: it can be kind of a try count, or just a request count, and basically this one will be added by default.

F
So in case we have retries, we will have this try count as, say, five or seven or three, whatever it is; if you don't have any retry, it will be just one. And the same stuff can be applied to redirects, so maybe in this case it will be better to call it request count.

F
So this basically allows us to identify how many redirects happened.
C
Thank you, great, yeah. I think this is getting into the next territory, which is metrics, right? You're talking about, you know... now we need to start defining what metrics we're going to be emitting, in addition to just trace attributes.

F
Okay, sounds good. So I will also add this, and maybe I will go with request count, just to make it general for both retries and redirects, to the pull request, and I will submit this pull request, if you guys don't mind.
C
Do we need both, though? Just curiosity: if we add that, which I think is good, do we also need retry count? Because it seems like duplication. So, just in terms of cleanliness, you've got a new attribute that's consistently always there, and retry count is going to be…

F
…part of it, correct, correct. Yeah, that's also good feedback. So in case we do have this request count, maybe it will be enough to cover all the cases.
F
So in this case... probably we still want to, and this makes me want to, like, keep this retry count, just because we were thinking about it for the last try; but actually we can do this for every try, so it can be there not only for the last one.

C
Right; you would have to do that by looking at the link structure, essentially, right? That'd be the way you'd do it.
C
I see, I see what you're saying. You want to know, yeah, what the final... should there be something on the last request that is like a summary of some kind?

H
I think it can be useful, because, I mean, a question that I think is definitely useful is: you make a query and you ask, here, I want to get a list of all HTTP spans that failed, but I'm not interested in those failed retry spans.
F
Yeah, and in this case probably we want to keep both of them. So with the retry count we can see these transient errors, and this request count we can actually only add to the last one; it can be, like, an indicator that this was the last one, and if it failed, basically the whole retry, the whole chain, failed.

F
The thing is (sorry, sorry for interrupting, I just wanted to say) that since we don't have any kind of logical span for redirects, right, it might be useful to know how many redirects were executed via this request count. So if we want to have a general solution for retries and redirects: for retries it's kind of more or less duplicating stuff, but for redirects it makes sense to have it as a count, maybe.
C
I see. So you're saying, and this might be true (this is, I think, where we want to talk to users), like: what metric is useful? If you just have a request count metric that doesn't distinguish between retries and redirects, is that actually annoying? Like, is what you want a metric for retry count specifically, where you don't want to have redirects mixed up in there, right? In terms of just the kind of…

C
…that you're trying to create. That would be a case to say: actually, what we just want is retry count and redirect count.
B
For at least Atlassian's purposes, we only track the number of failed requests and the number of failed responses, so that we can definitively track SLOs for each of our services and their health. Including the number of tries in that metric might not make sense, and it might just blow up the data vendor that we're trying to use, right.

C
I think that's where the addition of a boolean comes in, right: like, when something is finally giving up. We have two choices here. One is adding a new kind of status code.
C
We used to have a lot of status codes, right, and we dropped them because we felt like we were just inheriting a, I won't say arbitrary, list from Google, but a little arbitrary; so that felt good. One option is to start adding back in different kinds of status codes. So if you have errors and you just want to track errors, you could add a status code…

C
…that was something like a transient error, right: some other error code to represent the retries, so that you have a much simpler way of tracking, you know, true errors or final errors. Right: these retries are marking themselves as an error, because it is an error, but it's, it's... it's a…
C
I don't know what you'd want to label it: like, non-final, or some way of describing the error as "this failed, but I am retrying it". That seems like a general-purpose thing, not an HTTP-specific error, but an error that means: I'm going to try this again.

C
Basically, you could just filter out all of these transient errors. The other approach would be to add, like, an extra boolean on the final one that's like "last request: true", you know, or "truly giving up: true"; but now, every time, in all of your dashboards, you'd have to be like…
C
So you could see using that for all of these things. But the way OpenTelemetry tries to talk about errors and exceptions is the status code, and the reason we do that is that it is much more efficient and makes life a lot simpler for tracing backends; it's the only piece of data they have, in other words. There's this extra added expense in having to parse and scan the attributes.

C
You can do a much more efficient initial pass if all you're looking at is status codes, which are an enumerated list; and tracing systems can use that to trigger a lot of, say, collection behavior, modifying the sampling and things like that. And it becomes a more complicated query when it needs to always be digging into the attributes to decide what it's going to do.
C
I don't know; I personally kind of feel like the status code might be the way to do this. But I'm just kind of thinking this through for the first time here. I think that is how traditionally this would be done in, like, OpenCensus, for example.

H
I mean, to do that, just throwing an additional idea in there: with the status codes, I think we currently actually have three, like OK, ERROR and UNSET; and I was just wondering, for those retry spans, why don't we use UNSET for those spans, and only the last span in the cascade is either OK or ERROR?
C
So, UNSET is the default. The OK status code is actually an override, so instrumentation never sets a span to OK.

C
The OK status code is a way for the operator to send a message to the backend saying: explicitly suppress any errors you might be raising on this span. So it's a way of drowning out noise that you could, say, configure in your collector. So it's a little bit different.

C
My instinct is that that would be the right way to do this, because it gives us a way to filter these things out regardless of the type of span: you don't have to dig in to figure out what kind of span this is; you just know these are transient errors, and I'm going…
C
We should probably also have, like, suggestions for what backends should do with this information; but I think by default a transient error would mean not adjusting your sampling, for example. So when you're doing tail-based sampling in the collector based on the status code, you have a way of saying: I want to have one tail-based sampling algorithm for final errors and another one for transient errors.

H
I mean, I have to say I like that, this transient error status code, because, I mean, Dennis here already did the Jaeger UI mock-up, and, yeah, you basically see the traces and there's one error; but I think for maybe 95 percent of the users, if there's one HTTP retry, that will not count as an error for them; that is just kind of by design. So they might actually be confused by seeing a trace with an error there, just because there was an HTTP retry. So I think transient errors…
C
I think we should raise this as an OTEP. When we hit these kinds of decisions, it's good to bubble them out of the group and have the technical committee and other people look at it. It would be kind of a big deal to add another status code back in, but my instinct is that that is what people would want, over having to do a compound query all the time in order to filter these things out.

C
Yeah, so, yeah: an OTel status. And let me just…

H
And just to add my vote: I think if that would be modeled by the overall span status, instead of kind of HTTP attributes, that would be nice, because then that mechanism could also be used for other semantic conventions, not just HTTP. Because that's not just an HTTP thing: the retry count attribute models the retry behavior, but the other part would be basically HTTP-independent, and the same mechanism we could use in messaging too, or in other areas where it's needed.

C
Yeah. The third option, also just to throw it out there, is that these things just don't count as errors: retries don't count as an error; only the final failure counts as an error.
C
So, for reference, just doing a quick look (I'll post it into the chat here): it looks like the status code gRPC used is called UNAVAILABLE.

C
No, no, we do not have to do that; but I just wanted to verify that we did have this status code in our old scheme, and we dropped it with the idea that we would add them back as it turned out we wanted them. And I think we have now possibly hit the scenario where we have a good reason to add one back in, with a better name; I think transient is a better name for this, personally.

F
Sure, thank you for the feedback. Yeah, I hope it will be possible to come up with some PRs for the next meeting.
C
Yeah, I do think, if we get this stuff settled, the next step is to start thinking about metrics. This is also, you know, related to some of the work Anuraag has done in Java around having an Instrumenter object.

C
I don't know if people on this call have looked at that thing, but it's a nice encapsulation of all of this stuff. So rather than, when you're trying to instrument something like an HTTP client, putting in all the span information and then going back in and putting in, like, all the metrics counts and calls and everything, you have an object that is, like, an HTTP client instrumenter, and all you do is feed it…

C
…the information that it wants, and call start and end on it, basically. And then, when end is called, in addition to recording the span, it also submits all of the metrics that you're supposed to emit, according to whatever metrics we define. So that makes life a lot easier for contrib maintainers, because rather than having to keep track of all this stuff by having, like, a document open somewhere, we just have some kind of object that they use. That makes it so that it's always correct with respect to whatever the latest version of this spec is, and their job is only to feed it the information that's required for it to be able to do its job.
A
I have a couple of questions about it, if I may. So it sounds like there is this feeling that we don't want to re-instrument everything for metrics, right? How widespread is this belief? I'm just curious how many people I want to convince that it's a good approach.

C
This is definitely my experience from all of the stuff that has been instrumented so far: it has all come out very lumpy. For example, we have things that are labeled as optional, and they're labeled as optional because it's not always feasible for that information to be available (for example, some of that came out of, I think, HTTP clients and servers being mushed together into the same convention, but whatever). But some people are like: my approach is, I will be maximal and record everything I can record.
C
Others go: oh, it says it's optional, so I'm not gonna bother. And just in general, the quality of our instrumentation is very, very uneven, and with the addition of metrics that gets even worse, because now there's that much more stuff you have to deal with; and the process of writing instrumentation, where in order to get it correct you have to have the specification open in a window, read that, and juggle a bunch of constants and kind of mush it all together…

C
Better to just say: here is a structured piece of data, rather than just unstructured attributes; here's, like, a struct or some structured object that uses the language's features, so that your code completion and everything else can…

C
…tell you what data you're supposed to feed into it, and then that thing produces the correct instrumentation on the other end. And I think there are a couple of different approaches to how to write those helper functions; I think the one Anuraag made is a very good approach for Java in particular. Other people have looked at it and been like: this feels very Java-ish.
C
I don't know if I would do it quite like this in Go or another language, but the basic idea is to not have to keep track of what you need to produce, and only keep track of what information you have to give it. Another way of putting it: because what you're supposed to produce is all in the spec, there are only two ways to do it, correctly and incorrectly, and "correctly" is known. So why…

C
Why leave wiggle room for people? The only way they can "improve" upon it, all they can do, is mess it up. And if we version the spec, say, add additional metrics or things like that that we want to produce…

B
It's interesting you mention this, because what we're planning to do at Atlassian is tracing first, but taking the tracing data and metricizing it. So having standard attributes and stuff like that means that clients are sending very high-cardinality, high-fidelity data, and then we're reducing it down into meaningful, like, pure metrics that we can just throw on dashboards, and it's the same for everyone. So it's interesting that this is coming up, because this is the same pivotal point we're at internally.
C
Yep, yeah. And it is conceivable that we come up with some metrics that have labels that are different from what we put on the span as attributes, but personally that seems a little silly to me. It seems like, if it's information we're counting as a metric... we might have span attributes that are too high-cardinality to make a good metric.

C
But anything that makes a good metric seems to me to be fine as a span attribute, and that also leaves the door open for producing all of these metrics later in the pipeline, in the collector, by deriving metrics from the span attributes. For efficiency's sake, in the actual instrumentation, we're going to bake in, you know, actual metric objects that emit these things, just because that's more efficient for the stuff we know we want to produce. But something Anuraag found with the Instrumenter object is, if you just give this thing the set of attributes off of your HTTP object…

F
Basically, it's an additional layer between the tracer and the meter provider. In metrics we also have these observable, or asynchronous, metrics, which can be collected, aggregated, and then sent out. So basically you have this layer which collects all this different kind of data; it can then produce traces and these observable metrics, and logs as well.
A
But for us it's... well, I love the Instrumenter API and I would love to use it in our libraries. The problem is that it has to have the same stability guarantees as the OpenTelemetry API, yes; and even though it can be a separate library, unless there is a spec on it, it's impossible to use. Or it can stay as a common component in contrib, which is also possible, but probably it's useful beyond that.

C
Yeah, this thing needs to be part of the core specification, once we figure out what we want it to look like; because, yeah, like you say, once this thing is out there, this is the thing we're going to push everyone to use, and the people in particular who are going to be using it are library authors who want to do native instrumentation.
C
This is going to make OpenTelemetry really attractive to people, to just bake this stuff into their library, because suddenly it's really easy to do. But yeah, absolutely, this is then just as critical, and I think it should follow the same life cycle as every other core package, where it starts out experimental and then it becomes a stable package and has to be fully backwards compatible at that point; yeah, just like any other part of the OpenTelemetry API.

I
That's an interesting point about converting traces to metrics in the collector. Has there been any, like, thought or discussion around that?
C
I think there is a processor for doing this; maybe not quite yet, because metrics is still very new, but yeah, 100 percent, this is a thing you should be able to do in the collector.

C
To me, being able to do this efficiently is invaluable, because having to write code and redeploy your applications to change the kind of metrics that you are getting out of them is freaking obnoxious; and as an operator, not having to bug the developers to get some information and set up a dashboard seems like it'll be a hugely helpful feature.
C
Yeah, yes. And this relates to the kind of instrumentation we're talking about too: if we don't provide functionality like that, what will happen is configuration hell, where everyone's gonna go into, like, every instrumentation library…

C
…we have and start pecking at them to provide a whole bunch of optional features and ways to attach additional attributes and stuff like that; and I feel like there's a general vibe that that approach sucks, that it's just really hard to manage, and it would be better to say: the instrumentation you get out of libraries and other stuff is just what you get, and you mess with it on the backend.

C
Yeah, and I think again the trade-off is, you know, you won't get that data if you don't have a collector, and it'll be, I imagine, always marginally more expensive to do it that way than to hard-code a metric; but for library instrumentation, I think it'll be a super common way to do it.
H
I mean, the other drawback of this is that when you have sampling enabled, currently you don't get accurate counts, like for throughput and the like. I think that is something the sampling folks are working on; but yes, in the current state you basically create metrics from sampled spans, which might be useful in some cases, but not in every case. This…

C
…is something that Josh MacDonald has thought explicitly about. So with the trace-ratio and priority sampling that we're talking about baking in, those algorithms would work with being able to get at least a, maybe not 100 percent accurate, but a known resolution on your metrics.

C
I think the one issue right now is actually sending that data in OTLP. I don't know if that's been solved yet, but I think one of the issues is that that kind of sampling information is not sent to the collector, so the collector doesn't actually know anything about it.
C
But I predict sampling is, like, the world's biggest footgun, personally, and I'm a big advocate of saying that the end users should not be going around mucking with sampling; it should be our job to make sure that the sampling stuff OpenTelemetry ships with has a reasonable way of still letting you produce accurate metrics. So again, you know, your resolution is going to be lower, but, you know, still accurate, I would think.

G
We're talking a bit about adding something like the Instrumenter API to the spec as well, as I mentioned, for the stability; but I think the pushback I got on my OTEP was that the patterns for instrumentation are so different across the languages that it might not be appropriate to have, like, a cross-language one.
C
I think something this group should do, as part of defining these conventions, is go implement this stuff in several languages. It's not sufficient for us to just define it as conventions in the spec; I think we need to actually implement this stuff several times over, in different languages, with different libraries, just to actually make sure you can do it; for example, that we're not saying you should put information on these things that you can't get, or whatever. But as part of this work…

C
…that's the other thing: just like anything else, I think the spec process generally works best when we have prototypes in, like, three different languages; that usually does a great job of informing what should go into the spec around this stuff. But I will say, yeah, even if it's not in the spec, API stability is somewhat independent: you know, you can tag something stable, even if it's some language-specific thing in a utilities package.
A
So let's say we go and instrument the .NET HttpClient, and it does put some attributes; but the release cycle, the release cadence, of HttpClient in .NET is quite different from OpenTelemetry's, so being able to override the implementation of this attribute extractor is super useful. And then, basically, this thing belongs in the OpenTelemetry API.

A
Everyone would need to use this; and if we say, okay, there is an OpenTelemetry instrumentation API in Java that implements it, and then we move stuff around, it will create a lot of problems around versioning, like dependency version conflict issues and stuff like that. So it's better we identify these common patterns and move them to the API at once, rather than let it be implemented differently everywhere first.
C
Yeah, I agree. Also (and it's not a knock on any of the maintainers) I'm always a bit nervous about stuff that hasn't gone through a wider spec process; even though it's slower, it gets more eyes and opinions on it, and otherwise it's easier to get yourself stuck.

C
It's sort of just like a tracer, right? Right now we have this concept of a tracer, and that tracer has information on it, right, about, like, the package that you're instrumenting and stuff like that, and then you start spans off of that tracer.
C
These instrumenters are like that, but you're basically configuring one with: this is the information I expect to come into this; these are all the metrics I want you to make, for example. And then every span you start off of that, you know…

C
…instrumented tracer expects you to give it a certain amount of data, and then, when you call end, it just produces all of those metrics as well. And from experimenting, I think it just feels like a much cleaner way to write instrumentation; because if you don't do that, what happens is, like, now, for every line of code... that instrumentation code is bulky, piling logs, traces, metrics…

C
…all of this into your application code, and it just feels like noise; and if you can take a bunch of that and turn it into something declarative up at the top of your file, it just feels cleaner. And when you're trying to audit this stuff later, it's actually much easier to see what's happening, because it's clustered into a single place.
C
So yeah, I would really encourage people to play around: have a look at the thing Anuraag made, and play around with something like that in your language of choice. Just pick something in contrib in your language, now that the metrics API is out, and start seeing what it feels like to try to…

C
…actually, you know, get one of these pieces of instrumentation correct, and then see what would help you clean it up; because I guarantee, if you start trying to do this, you're gonna instantly wish that you had something like this, because it's really annoying right now.

C
Okay, we're at the bottom of the hour.
C
This project probably gets dinged the most for not having great documentation and not having, like, a comprehensive, publicly visible roadmap of all the work that we're doing; so we're trying to fix that by creating this document and then converting it into a more fleshed-out status page on the website. And since the work we're doing here is important, we want to have a section in there about the semantic conventions; and the thing everyone always wants to know with any of these things is: great…

C
…when is it going to be stable? So I wonder what a reasonable timeline might be. I think we have two things we're working on right now, which are the HTTP and messaging semantics; but just for the HTTP stuff…

C
…what are people thinking? Does Q4 seem reasonable, that we will get this all declared stable by end of year?
F
To be honest, I don't think so. We have this OTEP that's kind of in progress of being reviewed, and it outlines some areas; so HTTP retries and redirects are…

F
…just, like, one point there. And probably we'll also finalize the stuff related to sampling, like which attributes we need to put before sampling or after sampling; but we do have a lot of them. And, for example, this status: which way can we provide for end users to actually declare it, should it be a status code like failed, or error, or transient error?

F
So that's a really similar discussion to the one we had today. And another area that we have there is HTTP/2, gRPC, maybe WebSockets, or some related areas; those are not really covered yet, so we didn't start any discussion on them, unfortunately. But what we definitely can do this year is outline what exactly we want to do for version 1.0, and that's basically what the OTEP is about.

F
So I believe, like, half of this, or some pieces of it, can be finalized, but not all of it.
A
I also wonder: let's say we have it written; let's say we have it tomorrow, there's a magic wand. Then there is some process, right? We need to make sure that three, or maybe four, languages (or maybe two, we should define it) implement it, it's visible, and we're happy with the outcome, right? And then we depend on all the six language SIGs to adopt this, right?

C
So what we're trying to focus on is: when is the spec going to deliver something stable? We're saying, for example, we're delivering metrics in Q4, because the spec is going into feature freeze, you know, next week, and then we're going to be stable, with some betas in different languages; so Q4 seems realistic there. But as far as when metrics will be declared stable in C++, you know, we don't want to put that in the roadmap for us; I think the maintainers of those language SIGs can write a roadmap for when they believe they're going to have the stable things in the spec implemented in their language.

C
So for us it's just: do we think we can have this stuff finalized in the spec? Which would definitely include having, you know, implementations in a couple of languages, but not necessarily…
C
…that we've gone through and updated every single piece of instrumentation and found contrib maintainers and all that. Given all of that, does Q1 sound reasonable? End of Q1?

F
Well, yeah, that's something that we need to focus on, basically, and yeah: once we have this outlined, and basically agreed that we have this scope that we want to focus on, then it's basically just a matter of contribution from different parties, from different community members, to make it final, right? So I really believe it will be possible to do so next quarter.

F
Yeah, exactly, that's the OTEP I'm kind of trying to push sometimes, but yeah; that's exactly the one which actually outlined, or defined, the overall scope. Great.
H
Q4 this year is, I think, also the target for messaging in the OTEP that got merged; like I said, the target is to get it stable this year, but I think actually Q1 next year is realistic. Because this year there's still practically only one and a half months left (December is kind of only half a month), and at the current pace I don't think you're gonna get this into the spec in one and a half months for both messaging and HTTP; but I think Q1 for messaging…

H
Maybe it's wishful thinking, but I'm pretty confident we'll be able to do that, yeah.

F
Sorry, yeah, just: once we have this scope established, then we can actually encourage people from the community to contribute, to get it done next quarter.
C
Thank you, yeah, great, yeah.

C
My experience is also that Q4... so in the US, Thanksgiving obliterates a week out of November, and people are off starting the second half of December, and before that there are often end-of-year deadlines and stuff at the companies people work at, so people's attention often gets pulled away. So in general I'm always skeptical about saying stuff is going to get delivered in Q4, unless it looks like it's deliverable at the beginning of November, because things just get really slow and scattered after that, and it becomes hard to get approvals and things. So Q1 sounds reasonable to me.

C
Great, awesome. Okay, we're over time; this feels like really good progress, though. Thank you for doing all this work, Dennis; it really feels like we've got a concrete solution to all of this basic HTTP stuff at this point, at least HTTP client spans.