From YouTube: 2022-03-23 meeting
B
Sure. So, unfortunately, I'm not sure the pull request I linked is actually very useful, so I'm just going to talk it through, because I'm planning to update it. I'll talk through the problem I'm trying to solve and the current idea I have, and if you have any thoughts or anything, I'm curious to hear them. So the problem I'm trying to solve is that if you use an OTel SDK and set the instrumentation library fields, then we just drop that information, and that's bad in two ways.
B
The equivalent in Prometheus is the prefix on the metric. So if you're using, say, the Go SDK, you give it a namespace when you make the metric. In the PR I did link the OpenMetrics description of what the namespace should be, but basically it should be a single-word prefix that isn't globally unique (so no URLs or anything like that), but unique enough that it's unlikely to collide with any other libraries being used.
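The namespace behavior described here, a one-word prefix joined onto the metric name with underscores, can be sketched like this. It is a simplified, illustrative reimplementation of what client_golang's `prometheus.BuildFQName` does, not the library function itself:

```go
package main

import (
	"fmt"
	"strings"
)

// buildFQName joins a namespace, subsystem, and name with underscores,
// skipping empty parts. This mirrors the behavior of
// prometheus.BuildFQName from client_golang, reimplemented here for
// illustration only.
func buildFQName(namespace, subsystem, name string) string {
	if name == "" {
		return ""
	}
	parts := []string{}
	for _, p := range []string{namespace, subsystem, name} {
		if p != "" {
			parts = append(parts, p)
		}
	}
	return strings.Join(parts, "_")
}

func main() {
	// A single-word namespace makes collisions with other libraries
	// unlikely without the prefix having to be globally unique.
	fmt.Println(buildFQName("grpc", "server", "handled_total"))
	// → grpc_server_handled_total
	fmt.Println(buildFQName("myapp", "", "requests_total"))
	// → myapp_requests_total
}
```

With a namespace like `grpc`, another library's otherwise identical `server_handled_total` no longer clashes, which is exactly the property the short-name proposal wants to carry over.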
B
We could add a field to instrumentation library that is basically the equivalent of the Prometheus namespace in the description. I've referred to it as the short name, but it's a one-word field that we could use as a prefix, and then, if it were present, we would add it as a prefix to the metrics when we're doing Prometheus exporting.
A
Yeah, so I'm following the issue here. My concern is: will we be able to get instrumentation authors to reliably provide a short name? I'm not sure they're reliably providing a version right now, and the only reason they're providing an instrumentation library name is that it's required to get a meter or a tracer.
B
And I guess it also definitely doesn't do any harm, right? If they don't set it, then we keep the same behavior that we have today, which is that you don't get your instrumentation library version if you use Prometheus exporters.
C
Okay, sorry, I was having weird audio problems and had to restart Zoom, like, three times. I think this is actually something we discussed many, many meetings ago, which is basically the same problem, just without the version number and whatnot. The basic answer is: yeah, either have the clash and just crash at startup, or have the short-name thing which you always apply, and an info metric for the rest sounds fine.
B
It's because the nesting is that per endpoint there's a single resource, so we translate that to target_info. But then there are multiple instrumentation libraries, so my thinking would be that it would be named the instrumentation library name, underscore, library_info, or something like that.
C
Not, you know, what instrumentation library it happened to use, because you could change instrumentation library, but that doesn't change the metric: it still has the same semantics, it just happens to be implemented differently. Sorry.
A
I'm not sure if that makes it hard to find that information, though, if you don't know what that short name is in the first place. If you want to get all of the instrumentation libraries that you have, having a consistent name for the metric and then using labels to provide the rest of the information might be better. I can see both ways being useful.
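The two shapes being weighed, a per-library info metric versus one consistently named metric carrying the library in labels, might render like this in exposition format. The metric names (`..._library_info`, `otel_scope_info`) and the label set are illustrative assumptions, not settled spec:

```go
package main

import "fmt"

// Option 1 (per-library): the short name goes into the metric name
// itself, so each instrumentation library gets its own info metric.
func perLibraryInfo(shortName string) string {
	return fmt.Sprintf("%s_library_info 1", shortName)
}

// Option 2 (consistent name): one fixed metric name, with the library
// identified via labels, which makes "list all instrumentation
// libraries I have" a simple query over a single metric name.
func labeledScopeInfo(name, version string) string {
	return fmt.Sprintf("otel_scope_info{name=%q,version=%q} 1", name, version)
}

func main() {
	fmt.Println(perLibraryInfo("grpc"))
	fmt.Println(labeledScopeInfo("go.opentelemetry.io/contrib/grpc", "0.30.0"))
}
```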
A
Cool, thanks. One question I would have is: do we need a fallback mechanism for defining a short name if none is provided? I'm thinking we can probably get reasonable avoidance of collisions by taking part of a hash: you know, hash the name and take the first eight characters or something like that. It's not going to be pretty, but it may serve a purpose.
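A minimal sketch of the hash fallback suggested here; the choice of SHA-256 is an assumption on my part, since the discussion doesn't name a specific hash:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// fallbackShortName derives a short prefix from a full instrumentation
// library name when no explicit short name was provided: hash the full
// name and keep the first eight hex characters. Deterministic and ugly,
// but collision-resistant enough in practice.
func fallbackShortName(libraryName string) string {
	sum := sha256.Sum256([]byte(libraryName))
	return hex.EncodeToString(sum[:])[:8]
}

func main() {
	name := "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	fmt.Println(fallbackShortName(name)) // an 8-character hex prefix
}
```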
A
Not
necessarily,
though,
like
in
open,
telemetry
you're
going
to
have
metrics
that
follow
us
made
a
convention
like
http
request
count
and
the
instrumentation
library
will
tell
you
was
it.
You
know
the
the
http
server
library
was
the
http
client
library
which
client
library
was
it.
None
of
that
goes
into
a
metric
name.
B
Right, that's the potential issue. In Go it would work out well, at least with our built-in instrumentation, because the names end in things like "grpc" or "http" or something like that. But I don't know if that's actually a good generalization that we should rely on.
C
I don't think there's anything that's going to win here, but I guess as long as you crash at compile or start time when you've got a clash, you're okay. But then that's adding extra work for the user.
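The "crash at start time on a clash" option amounts to a duplicate-name check at registration; a minimal sketch, with all type and function names hypothetical:

```go
package main

import "fmt"

// registry rejects a second registration of the same final metric name,
// so a name clash surfaces at startup rather than silently merging
// series from different instrumentation libraries.
type registry struct {
	names map[string]string // metric name -> owning library
}

func newRegistry() *registry {
	return &registry{names: make(map[string]string)}
}

func (r *registry) register(metricName, library string) error {
	if owner, ok := r.names[metricName]; ok {
		return fmt.Errorf("metric %q from %q clashes with existing registration from %q",
			metricName, library, owner)
	}
	r.names[metricName] = library
	return nil
}

func main() {
	r := newRegistry()
	if err := r.register("http_request_count", "serverlib"); err != nil {
		panic(err)
	}
	// A second library exporting the same name fails fast at startup.
	if err := r.register("http_request_count", "clientlib"); err != nil {
		fmt.Println("startup error:", err)
	}
}
```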
A
We don't, because if you're using the OpenTelemetry APIs and SDKs, it's not a clash: they're distinguishable by the instrumentation library, or the instrumentation scope, right? This is only a concern when you are attempting to convert from OTLP to Prometheus, or when you're trying to do Prometheus export within an SDK.
A
All right, thank you. Moving on: Grace, did you want to talk about your PR for bucket and quantile staleness?
A
Looks like you're on top of things. It looks like you didn't end up making the changes that David proposed about simply not including the bucket counts. I think that's fine; it'll be a little bit of additional information, or a little bit of additional data, that gets carried along, and it should be safer for exporters who are depending on the bucket counts and bucket bounds being similar in size.
D
Okay, yep: we can always take a look and see how all the exporters are handling it and make the change too.
A
Okay, that's the end of the agenda that we have. Gotham, I see you're here, and I had a question about the PR that you had made upstream regarding the metric metadata cache and getting that into a context. I thought you had commented there that that wasn't quite sufficient for what we needed. Can you give us a bit more information on that?
E
Yeah, I can. I actually have an open PR here, one second, that basically fixes the issue. We also needed the target information, not just the metadata cache, so now I also have another upstream PR to add the target into the context, which is the first to-do item in my description.
E
I've added all the tests, and once things are merged upstream I'll fix the conflicts and ping you folks for a review, but I think it's good to go. I've also tested it by running everything locally.
C
Just one random thought: locking here might be fun, because you've got code that's not the scrape code that's now accessing those data structures.
E
Oh yeah, okay, I'll have to go check that, yep.
C
Now, the metadata cache can change, yeah. It's just a thing that needs to be checked, because probably the simplest rule is: hey, during the append you're allowed to read this, but afterwards don't touch this stuff. Who knows; I guess anything other than that would probably be very messy, but, you know, the code needs to be checked.
A
Basically, our receiver implementation is an appender that builds up a representation and then, on Commit, emits it down the rest of the pipeline.
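The receiver shape described here, an appender that buffers a scrape and only hands it on when Commit is called, follows the Append/Commit/Rollback contract of Prometheus's `storage.Appender`. This is a minimal in-memory sketch with illustrative names, not the actual receiver code:

```go
package main

import "fmt"

type sample struct {
	metric string
	value  float64
}

// appender buffers samples during a scrape and only hands them to the
// rest of the pipeline when Commit is called. Rollback discards a
// partial scrape, matching the storage.Appender contract.
type appender struct {
	pending []sample
	emit    func([]sample) // next pipeline stage
}

func (a *appender) Append(metric string, value float64) {
	a.pending = append(a.pending, sample{metric, value})
}

func (a *appender) Commit() {
	a.emit(a.pending)
	a.pending = nil
}

func (a *appender) Rollback() {
	a.pending = nil // drop the partial scrape
}

func main() {
	var emitted []sample
	a := &appender{emit: func(s []sample) { emitted = append(emitted, s...) }}
	a.Append("http_request_count", 42)
	a.Append("http_request_duration_seconds_sum", 1.5)
	a.Commit()
	fmt.Println(len(emitted)) // 2
}
```

The locking concern raised above is about exactly this window: code outside the scrape loop should not be reading the metadata cache once Commit has returned.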
A
Okay, if there are no other agenda items, I think we can wrap up for the week.
C
Just a quick one to mention: I made some cleanup fixes to the OpenMetrics spec. Nothing that changes anything; there were just a few places where, over the last year, people pointed out things that were ambiguous, or where the spec differed from the code, or a bit that I noticed based on comments. That's all cleaned up now; it shouldn't make the slightest difference to you.
A
But do you have a PR or a release note, so people can locate it?
A
All right, with that, I think we are done. Thank you all for coming, and thanks for all of your work over the week. We will see you all next week.