From YouTube: 2021-11-10 meeting
A
Who wrote that? How can I... ah, nice. Ted wrote that he will be in, and Sean will not make it? Okay.
A
This is the Google Calendar thing, oh interesting! So if you look into the OpenTelemetry community repo.
D
And, by the way, the European folks will join, because they had a transition to winter time last week. Oh nice, so they basically transitioned exactly one week before we do here.
A
Yeah, since you're here, can we talk sampling and metrics over traces? Yeah.
A
So maybe I can give an introduction for the other folks, if they're interested. We had this conversation about some request length attributes, and... I'm trying to open it. The question there was: can we remove those? Basically, what Anurag helped us discover is that OpenTelemetry currently seems to be moving in the direction of deriving metrics from traces, even when there is sampling. And it seems to be Joshua MacDonald's current decision, because he is responsible for sampling. I'm going to share the screen.
A
I can see it, perfect. Yeah, sorry, yeah! So, basically, if we have a requirement from OpenTelemetry that we should be able to populate metrics from traces, then it means we should stamp all the attributes that represent metrics onto spans, including perhaps request content size and response content size.
D
I think now there is this big push towards creating all metrics from traces, and I think the main reason is just that there is no proper metrics support in OpenTelemetry yet. That's the only thing people can do to get some metrics: derive them on the backend side. But I think that, at least for the basic metrics, in the long run those will be created separately from traces.
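A minimal sketch of what that backend-side derivation could look like, in plain Python. The span shape and attribute names here are illustrative, not any real backend's API:

```python
from collections import defaultdict

def derive_http_metrics(spans):
    """Aggregate finished spans into request-count and total-duration
    metrics, grouped by a few span attributes (backend-side derivation)."""
    counts = defaultdict(int)
    duration_ms = defaultdict(float)
    for span in spans:
        # Group by the dimensions the metric conventions would share
        # with the trace conventions (illustrative attribute names).
        key = (span["attributes"].get("http.method"),
               span["attributes"].get("http.status_code"))
        counts[key] += 1
        duration_ms[key] += span["end_ms"] - span["start_ms"]
    return counts, duration_ms

spans = [
    {"attributes": {"http.method": "GET", "http.status_code": 200},
     "start_ms": 0.0, "end_ms": 12.5},
    {"attributes": {"http.method": "GET", "http.status_code": 200},
     "start_ms": 5.0, "end_ms": 20.0},
    {"attributes": {"http.method": "POST", "http.status_code": 500},
     "start_ms": 1.0, "end_ms": 3.0},
]
counts, durations = derive_http_metrics(spans)
print(counts[("GET", 200)])  # 2
```

Note that this only works if every span actually carries the attributes the metric needs, which is the crux of the attribute discussion below.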
E
I mean, it's mostly from this sampling discussion, where we're adding the new specs and stuff to be able to propagate the sampling probability through the trace, and as far as I know, the only use case for that is to be able to create metrics from the trace. So it seemed like a lot of work if it's going to be deprecated when the metrics SDK comes out. So my impression has been that that's still an important use case, but that's just based on my observation.
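The adjusted-count idea behind that sampling work can be sketched as follows. Assuming each sampled span carries the probability it was sampled with, weighting it by 1/p estimates the unsampled totals; the data shape here is illustrative, not the actual spec encoding:

```python
def estimated_request_count(sampled_spans):
    """Estimate the true request count from probability-sampled spans.
    Each span carries the probability p with which it was sampled; its
    'adjusted count' 1/p estimates how many real requests it stands for."""
    return sum(1.0 / span["sampling_probability"] for span in sampled_spans)

# Two spans kept at p=0.25 and one kept at p=1.0 represent ~9 requests.
spans = [
    {"sampling_probability": 0.25},
    {"sampling_probability": 0.25},
    {"sampling_probability": 1.0},
]
print(estimated_request_count(spans))  # 9.0
```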
E
Yeah, but, for example, I can imagine a backend that supports ingesting traces but doesn't support ingesting metrics. It can still provide metrics based on those traces. So that's the use case I guessed could be happening here: the backend, rather than supporting metrics natively, would like to be able to generate them from the traces, because it already got those.
E
We just copied it in the code, so it sort of entered in as part of the implementation detail of what we were calling the X-Ray sampler. So that's sort of okay.
A
Yeah, then I think, as for the Azure Monitor folks: since I'm not on Azure Monitor, I don't know how to advocate for you, and I think somebody should join, and we should have this discussion about the sampling and the approach for metrics.
A
I think there's a spec meeting on Tuesday, right? Oh, I think that...
D
I think this should be discussed at the spec meeting on Tuesday, or maybe it's even better to start a discussion in the sampling meeting on Thursday mornings, because I think you will actually get more sampling specialists in the sampling meeting.
A
Yeah, and here you're suggesting to split out the rate limiting sampler, right?
D
I would just like to add one more point to the sampling discussion and the sampling probability stuff. I think that extracting the metrics on the backend side is one possibility, but I think in the end we will definitely have semantic conventions for metrics that are independent from spans, because there are just some situations where you want to have metrics but you don't want to have traces and spans. Maybe just for performance reasons you will not turn tracing on, but you will turn metrics on. And I think those use cases are not even that rare.
D
I think... yeah, it's a good question. I mean, I can only say how New Relic did it, and they did it so that metrics and spans were two different things. You had your library, and the library created spans if you turned tracing on, and it created metrics if you turned metrics on. The metrics were not derived from spans but were created on their own, and you could turn both individually on and off.
D
But I know there are lots of different approaches out there, and there might be pros and cons to all of those. Still, I definitely think that we will end up having semantic conventions for metrics that are independent of spans, because I think that's a pretty important use case.
C
Yeah, I didn't even think of the issue that Ludmila was showing, that in one of the metrics you want to consider the response length and request input length. So those kinds of semantics would have to be defined for what kind of metrics to generate, and to me it sounds like we'd rather do it in one place instead of duplicating that logic.
E
We could also phrase it, though, as two different ways of getting similar data: either someone only turns on metrics, or someone only turns on traces and, statistically, such traces can give an approximate version of what the metrics are. So that's why traces and metrics are supposed to have similar dimensions.
E
It's sort of like: if my backend only supports traces but offers a metrics feature, then I need the traces to have all the metrics conventions on them, so that it can give me all my metrics. So that could be one issue a user runs into, depending on what their backend supports, if it only supports ingesting traces. Of course, maybe we don't expect that to happen much in practice, and it's not a big deal, but that might be one problem my user has.
D
Yeah, I think that is definitely an important use case for deriving the metrics from traces on the backend side, because I think many backends have trouble correlating or associating traces with metrics. If your client sends traces and metrics on separate streams to the backend, many backends have problems associating and correlating those together, and when you derive the metrics from traces on the backend side, that's much easier.
A
And the problem I have with these particular attributes, as span attributes, is that they are optional, right? So, for example, I own some HTTP instrumentations and I don't populate them, because they are optional, which means that the metrics cannot be calculated, right? If I knew that, okay, I put these attributes and metrics will be calculated from them, that seems super useful, like correlating the request size to outcome and duration. I think this is super useful, but then for metrics these become required attributes, from my point of view. And if we make them optional, it seems we won't get them anywhere, unless we define conventions for metrics and mark them as required there. I'm not sure if that makes sense; it's just my concern around those specific attributes.
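To illustrate that concern, a small sketch of a metric computed from an optional span attribute: spans that omit the attribute silently drop out of the metric. The attribute name follows the HTTP conventions, but the span shape is illustrative:

```python
def average_request_size(spans):
    """Compute an average request size from span attributes. Because the
    size attribute is optional, spans that omit it simply drop out of the
    metric, so the result reflects only the instrumented subset."""
    sizes = [s["attributes"]["http.request_content_length"]
             for s in spans
             if "http.request_content_length" in s["attributes"]]
    if not sizes:
        return None  # the metric cannot be calculated at all
    return sum(sizes) / len(sizes)

spans = [
    {"attributes": {"http.request_content_length": 100}},
    {"attributes": {}},  # instrumentation chose not to set the optional attribute
    {"attributes": {"http.request_content_length": 300}},
]
print(average_request_size(spans))  # 200.0, computed from only 2 of 3 spans
```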
D
Yeah, and I also want to add another point regarding those metrics that you see here, like request content length, request content length uncompressed, and so on. I think this idea to create metrics from traces with this adjusted count works pretty well for throughput metrics, like when you want to calculate your throughput. But when it comes to those properties on HTTP requests, I'm not sure how statistically accurate the approximations are that you calculate from traces.
D
So, personally, I would say I would not trust a metric for HTTP response length that is calculated based on sampled data, right?
D
Yes, metrics are not sampled. No, the metric data is not sampled, so when you create metrics, I would assume that you have the unsampled data. I mean, it's going to be aggregated, of course; that is the equivalent of sampling in the metric space. It's going to be aggregated, but it's not going to be sampled.
A
This is one of the questions in this group: how do we even approach this? Even though we are not coming up with conventions yet, it seems a lot of decisions depend on it. So there is a choice: okay, spans to metrics, and we all know the problems with this approach. And then there is the idea that Anurag is pursuing, the instrumentation API, and I think it's super important to figure it out. Basically, the instrumentation libraries can call this API, and the implementation of this API can report both metrics and spans. Right, and then we can hide all the rules and conventions inside the instrumentation API. Well, not all of them, but many of them. There is also the choice that libraries can just use the metrics API directly, which also makes sense; it's just a bit harder and a bit more involved.
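A sketch of the instrumentation-API idea being discussed; the names here are entirely hypothetical, not an existing OpenTelemetry API. The library makes one call per request, and the implementation fans it out to both signals, so the attribute rules live in one place instead of in every library:

```python
class HttpInstrumentation:
    """Hypothetical instrumentation API: the instrumented library reports
    one event, and the implementation emits a span and/or metric records
    depending on which signals are enabled."""

    def __init__(self, tracer=None, meter=None):
        self.tracer = tracer  # span sink; None means tracing is off
        self.meter = meter    # metric sink; None means metrics are off

    def record_request(self, method, status_code, duration_s, request_size=None):
        attrs = {"http.method": method, "http.status_code": status_code}
        if self.tracer is not None:
            self.tracer.append(("span", duration_s, dict(attrs)))
        if self.meter is not None:
            self.meter.append(("duration_histogram", duration_s, dict(attrs)))
            # The implementation, not each library, decides which
            # attributes a metric requires.
            if request_size is not None:
                self.meter.append(("size_histogram", request_size, dict(attrs)))

traces, metrics = [], []
api = HttpInstrumentation(tracer=traces, meter=metrics)
api.record_request("GET", 200, duration_s=0.012, request_size=512)
print(len(traces), len(metrics))  # 1 2
```

Either signal can be turned off independently by passing `None`, which matches the "turn both individually on and off" model described earlier.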
D
There are process metrics, and there are function-as-a-service metrics.
D
So there are already those proposals there, and I think, with the metrics API being stable and the stable SDKs going out soon, I expect that there will be demand for stable semantic conventions for metrics. I also remember people pushing in that direction and asking to not just work on traces, but also work towards semantic conventions for metrics.
D
And also because of the notorious ease with which experimental semantic conventions are embraced and merged.
A
Yeah, cool. Then it sounds like we would need to discuss this problem in the sampling meeting and the spec meeting, yeah. I guess we will have some other call with the Azure Monitor folks tomorrow; we can chat there about how we approach it from our side. It's good to know we have a common problem to solve.
B
We should, like... I probably should start thinking about both of these areas at the same time.
D
Yeah, I also have an opinion about that, and I think we should, for now, definitely focus on tracing to a great extent, because I think we are pretty close to getting the tracing stuff stabilized. For the metric semantic conventions, I think there are lots of open questions that are not even specific to this domain, but that are specific to metrics in general, and I actually expect that those semantic conventions can only be stable once there is some actual experience with OpenTelemetry metrics, some hands-on experience with how the semantic conventions work out. Because I think for metrics, those semantic conventions are much more delicate than for traces: for metrics, if you add or remove an attribute, or declare an attribute as required or not required, it might actually break compatibility with the previous metrics.
D
So
I
think,
in
terms
of
like
backwards
compatibility,
the
metrics
and
many
conventions
are
much
much
more
complicated
than
the
tracing
stuff
and
I've
seen
some
discussions.
Also
regarding
this
going
on
that
are
not
even
particular
to
actually
your
messaging,
but
to
like
metrics
as
a
whole
and
actually
expect
that
it
will
take
some
time
till
there
will
be
a
consistent
and
stable
approach.
D
I can try to dig up some related issues. I think there were some issues where you can get an idea about those discussions.
C
Yeah, that'd be helpful. I think I see the point, and you're right: we should just stick with our focus of stabilizing tracing semantics first, and then hopefully that will help with the messaging metric semantics. If we start doing both at the same time, it would be hard to know how we progress.
C
To Ludmila's point, right, even if you don't think about messaging: the way you started, Ludmila, was, do I even collect all these attributes, right? If all that's needed is for metrics, then why collect all this information in tracing? Or maybe you showed some issue related to that.
A
Yeah, it just feels like there is a bunch of different discussions, including this one, that boil down to the question of how we collect metrics. And if we think about the non-HTTP issues that block semantic convention stabilization, I think the instrumentation API, or at least most of it, is a blocking thing. We cannot move any convention forward without figuring out how it's supposed to work, right? So this is some general thing.
E
I don't know if I'm actually that clear on when, or maybe even why, we would make a distinction for attributes based on whether someone wants to filter on them or not, because for any information about a request, just having it available in a trace view can still be useful, right? So is there a reason, is there a guideline, for deciding when we would want to keep attributes versus not, based on this sort of criteria?
A
Yeah, this is a good question. My idea was that optional attributes are those that provide smaller value, right? They're not essential, maybe not every backend supports them, and they also increase the user's bill, like for Azure...
E
I've actually been looking at this a little bit. I've been working a bit on the collector side lately, and they're talking about revamping the processing pipeline and have some ideas, for example, having an actual well-defined format for defining transformations of telemetry data. And, of course, in the back of my mind, I feel as if this is going to apply to SDKs as well. So maybe once this sort of transformation config format is set, then we'd expect instrumentation to read that config and only populate what's been configured.
E
I
think,
like.
I
don't
think,
that's
a
breaking
change.
That
would
probably
happen
in
the
future,
but
maybe
in
the
beginning,
like
the
default
is
to
try
to
populate
everything
and
then
yes,
we'd,
have
sort
of
a
config
file
to
trim
it
down,
and
vendors
could
also
distribute
their
recommended
config
for
transforming
the
data.
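A sketch of how such a transformation config might trim attributes on the SDK side; the config format and keys here are invented for illustration, since no such format exists yet:

```python
# Hypothetical transformation config: which attributes to keep per signal.
CONFIG = {
    "span_attributes_keep": {"http.method", "http.status_code"},
}

def trim_attributes(attributes, config):
    """Populate everything by default, then trim to the configured set,
    as a vendor-distributed config might instruct the SDK to do."""
    keep = config["span_attributes_keep"]
    return {k: v for k, v in attributes.items() if k in keep}

full = {
    "http.method": "GET",
    "http.status_code": 200,
    "http.request_content_length": 1024,  # dropped by this config
}
print(sorted(trim_attributes(full, CONFIG)))  # ['http.method', 'http.status_code']
```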
A
So they're basically optional on the consumption side, yeah.
D
I actually expect, like you said, Ludmila, for the HTTP metrics, or for metric semantic conventions in general, that it might be much more efficient to have a smaller set of required attributes than to have this big pool of optional attributes.