From YouTube: 2022-05-17 meeting
A
Hey, morning everybody. I think it's only nine of us; maybe people are busy at KubeCon. Yesterday's maintainers meeting was cancelled for that reason, so let's wait one more minute to see whether more people show up. We can discuss a few items in the meantime. Please add yourself to the agenda, and please add any important items.
A
Okay, we have 12 people now, so I think we can start. Probably this will be a short call, so let's go ahead. Thank you for coming. The first item is from Lyudmila, and it's a PR. You may remember that we discussed in the last weeks how to change the requirement level for semantic conventions. She went through the existing semantic conventions and updated the levels from required, optional, recommended, and so on. So it's a PR that needs review.
A
Okay, the next one is... oh, there's one over there. One second, let me open that. Second PR... oh, actually I see two PRs for that. Okay, so both need review. No, never mind. One of them, the second one, sorry, is about categorizing required versus optional HTTP attributes. We discussed this topic recently.
B
Yeah, so we had a PR merged by Josh specifying exponential histogram aggregations, and I'm curious about what remains before exponential histograms can be marked as a stable part of the metrics SDK and data model.
C
Hi Jack, I guess that's a good question. I am currently working on the implementation in Go. So I feel like the question is: how many implementations have we demonstrated? And I don't know the answer. I actually don't know the status of the Java SDK's implementation either, so I can't really answer this one, but I think we just need to see it out there in a few SDKs. That would be my answer.
B
We have it in the Java SDK, but our metrics SDK in Java is stable while exponential histograms are not, so you kind of have to jump through some hoops to use them.
C
Well, I have no intention to slow down calling that stable, but I haven't seen any of the implementations in play yet, and I'm still working on the one in Go.
C
That sounds good to me. I'm also working on the vendor support this week at Lightstep, because we never actually added the ingest handler for it, given that we weren't calling it stable and there weren't any implementations out there being used. So within a week or two we should have a much stronger story here; Lightstep is very interested in getting that working.
D
In OpenTelemetry .NET we don't have it yet; we had the initial discussion. There are some questions around the handling of non-finite numbers, especially considering the sum. We report a sum from the histogram, and people can use the API to report normal double numbers, but in the end, if you add them up to a very large number, the sum can become non-finite, like positive infinity.
D
So there are some corner cases where we're trying to figure out how to handle that properly. It shouldn't be a blocker, though. From what I heard on the last call, I think folks are trying to focus on getting the initial stable release before taking on exponential histograms, and I feel that's a reasonable choice: ship the initial subset of the feature as the initial stable release and then work on the rest.
D
Yeah, I'm not sure how we handle that in the OpenTelemetry Collector, because all the aggregation logic in the SDK will encounter the same thing in the Collector, and I think it would be nice if we could have consistency there: whether people send the data directly via OTLP, or send it through other paths that eventually reach the Collector and then export OTLP, the data should be the same.
B
Riley, are you talking about something different than just number-overflow issues? Those can occur for counters and up-down counters as well, and for explicit bucket histograms. Is the problem you're talking about somehow unique to exponential histograms, or is it specific to...
D
It's specific to double numbers, and it's not just about counters or histograms, though I think for histograms it's more tricky. For a counter you can imagine that if you keep adding huge double numbers, eventually the value goes to positive infinity, and that's probably fine. But for a histogram: suppose we decide we're not going to record the count for a non-finite value; if you give a NaN or positive infinity, we're going to ignore it, just drop the number on the floor.
D
Then how do we handle that? If we do that, it means we're going to ignore non-finite numbers for both count and sum. But the problem is that doesn't remove the possibility that, even if you report only normal numbers, you can still get a non-finite sum. So that doesn't seem consistent. Maybe we'll just document it and tell people this is a caveat you need to know, but it seems there might be a better answer.
C
But what you're saying is that it's normal, if you add enough normal numbers, to get to an infinity. So when we're aggregating these data points, whether they're counters, sums, or histograms, we should expect to have to handle an aggregation producing a sum that's infinite even when none of the inputs were problematic. It would be okay with me to just say that when you aggregate, you do whatever the floating-point hardware says to do. That would be okay with me, but we would probably want to write that down.
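The overflow scenario under discussion can be sketched in a few lines of Python (float64 semantics, the same IEEE 754 doubles the SDKs use): every input to the aggregation is a finite, normal double, yet the accumulated sum still reaches positive infinity.

```python
import math

# Each recorded value is a finite, well-formed double -- no NaN, no Inf.
total = 0.0
for _ in range(10):
    total += 1e308  # finite, but near the top of the float64 range (~1.8e308)

# The aggregated sum overflows to +inf even though no input was problematic.
print(math.isinf(total))  # True
```

This is why filtering non-finite *inputs* alone cannot guarantee a finite sum; the overflow is produced by the aggregation itself.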
C
Yeah, I would propose that we say the behavior is defined to be whatever the floating-point sum is in that case, so that you can see infinities in the data stream. But it should result from a very large sum being accumulated, and that should be a signal to the consumer: we have to find a way to reset metrics.
C
If
that's
happening
and
by
the
way
the
problem
is
going
to
start
much
sooner,
you
know
the
the
floating
point
you
can't
increment
by
one
up
around
two
to
the
fifty
to
the
fifty
fifth
power
right
so
like
at
some
point,
you're
gonna,
start
incrementing
and
like
the
sum
is
not
gonna
change.
You're
gonna
have
problems
much
sooner
than
this
problem.
I
think.
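That precision cliff is easy to demonstrate. (The exact cutoff for float64 is 2**53, where the 53-bit significand runs out; the "fifty-fifth power" in the discussion is approximate.)

```python
# float64 has a 53-bit significand, so integers are exact only up to 2**53.
x = float(2**53)

# Adding 1 is silently lost: the nearest representable double is x itself.
print(x + 1 == x)  # True

# Adding 2 still works at this magnitude, since even integers remain representable.
print(x + 2 == x)  # False
```

So a double-typed sum stops registering unit increments long before it overflows to infinity.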
C
There's a PR, and it's worth noting that we're going to run into a little bit of a problem here, because the data type we have in OTLP has a floating-point sum. So even if you're using integers for a histogram, we have no way to give you an integer sum. But when it's a counter, we do, and that's part of the reason that counters exist in integer form: we wanted to count to 2 to the 64th, because if you're counting bits on a router switch, it happens pretty fast.
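A quick illustration of why the integer counter form matters here: a float64 sum cannot represent every integer up to 2**64, so a double-typed counter would silently round near the top of that range, while an integer counter stays exact.

```python
# The largest uint64 value is 2**64 - 1.
max_u64 = 2**64 - 1

# In float64 it rounds up to 2**64 -- the exact count is unrepresentable.
print(float(max_u64) == float(2**64))  # True

# As exact integers the two values are of course distinct.
print(max_u64 == 2**64)  # False
```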
A
So those prototypes are Java and Go. Okay, the next one is about units, a units question. George, are you around?
E
Yeah, I'm here.
E
Actually, while we're on the topic of exponential histograms and guidance around that topic, I wanted to ask whether we should provide a guidance document on when to use them. I guess that's far out in the future, but when exponential histograms are available in all SDKs, we should probably provide some guidance on when to use exponential histograms and when to use explicit bucket histograms. Even though I understand that exponential histograms would probably cover most of what explicit bucket histograms can do, there might still be use cases where people actually want to see the bucket boundaries that they explicitly set in whatever backend they're using.
E
So I guess it might be a good idea to have that guidance in some way, shape, or form, but that's far out, and it was just something that occurred to me. Anyway, talking about the units: I looked at the OpenTelemetry specification; I linked it there.
E
It
says
instruments
for
utilization
metrics
are
that
are
dimensionless
should
use
the
default
unit,
one
right,
so
I
guess
all
the
ratios
that
we
get
in
terms
of
I
don't
know
memory
used
or
something
would
be
ingested
with
a
one
which
is
fine,
but
I
guess,
as
a
backend,
if
you
receive
a
one,
can
you
automatically
know
that
this
is
a
ratio
or
can
you?
I
guess
you
cannot
assume
that
it?
It
always
is
a
ratio
because
you
can
also
send
the
ones
for
counters,
for
example.
D
All right, so let me ask a question: how would people normally do that in Prometheus? Do they put something in the name following a certain convention, or do they use an explicit format?
D
My
current
position-
I
haven't
spent
time
thinking
about
this,
but
my
current
position
is:
we
shouldn't
be
worse
than
promises
and
if
promises
is
doing
something,
maybe
the
either
way
is
just
to
follow
what
it
is
doing.
I
I
feel
trying
to
enforce
something
might
be
hard.
So
let
me
give
you
one
example:
sometimes
people
talk
about
the
cpu
utilization
and
they
use
percentage
number.
D
But
if
you
look
at
the
definition,
I
I
think
in
windows
due
to
the
historical
reason
it's
based
on
the
core,
so
you
can
have
a
four
core
machine
and
end
it
up
with
400
percent
of
the
cpu
use.
I
figure
like
if
we
try
to
define
that
it's
more
complex
either
we
can
ignore
the
path
and
just
to
enforce.
The
percentage
number
must
be
between
zero
to
a
hundred
percent.
Then
we
cause
other
problems
or
we
can
basically
say
this
is
just
percentage
and
we
don't
define
that
you
can
do
whatever
you
want.
E
Okay, maybe just to add another layer of complexity here: I think we defined the units in terms of the UCUM units, and in UCUM, percent itself is actually defined. So you can ingest a percentage by using a percent sign as the unit, meaning a value between 0 and 100, or you can ingest a ratio, which I would assume to be between 0 and 1, or maybe I'm wrong.
E
Maybe
you,
I
don't
know
actually
have
ratios
that
are
two
to
one
and
then
you
ingest
a
value
that
is
actually
above
one,
which
mathematically
is
fine
right,
but
in
terms
of
percentages
it
doesn't
make
much
sense,
and
then
you
have
this
this
unity,
as
it's
called
this
this
one
that
basically
conveys
that
this
could
be
a
ratio,
but
it
doesn't
have
to
be
and
yeah.
I
just
I
just
stumbled
across
it
and
wondered
if
there
is
something
that
we
need
to
do
or
want
to
do
about
it.
C
You know, I think the way they write it is that one is just the default, so anything that has no units can be called one, and then inside the curly brackets you're supposed to give a descriptor for what's called a pseudo-unit. When you study this enough, at the high level a pseudo-unit is meant to behave like a unit but not be a unit, so you're supposed to use it wisely and be careful. And I think that's the way I ended up thinking about this.
C
But
if
a
unit,
if
a
unit
string
is
correct
for
up
down
counter,
it
should
have
pseudo-units
that
look
like
a
plural
noun
like
what
are
you
counting
that
goes
in
your
in
your
in
your
curly
braces?
And
the
thing
is,
I
don't
want
to
start
writing
rules
to
detect
what
is
a
plural
noun,
because
language
is
hard
and
and
pseudo
units
are
not
going
to
be
like
automatically
interpreted,
which
is
why
I
like
that.
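For illustration, the unit forms discussed so far look roughly like this in UCUM style. The metric names below are hypothetical examples, not normative semantic-convention entries, and the classification rule is only a sketch of the "curly braces mean annotation" convention, not a real parser.

```python
# Hypothetical metric-name -> unit-string pairs covering the cases discussed:
# "1" (dimensionless default), "%" (a defined UCUM unit), and "{...}"
# curly-brace annotations (pseudo-units) that must not be interpreted as units.
example_units = {
    "system.cpu.utilization": "1",       # dimensionless ratio, default unit
    "room.humidity": "%",                # percent, usually 0..100
    "messaging.requests": "{requests}",  # pseudo-unit: a plural-noun annotation
    "packet.rate": "{packets}/s",        # annotation combined with a real unit
}

# Sketch: an annotation is anything wrapped in curly braces.
for name, unit in example_units.items():
    kind = "annotation" if unit.startswith("{") else "unit"
    print(f"{name}: {unit} ({kind})")
```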
C
It
looks
more
like
an
uptime
counter
than
a
gauge
and
then
in
in
the
kafka
prs
that
carlos
has
been
merging.
Recently
we
had,
we
were
looking
at
a
pseudo
units
that
were
rates
so
they
were
well.
They
were
pseudo
units
were
a
count
and
it
was
being
expressed
as
a
rate,
so
it
was
a
pseudo
unit.
Count
per
second
was
one
of
the
units,
and-
and
that
told
me
well-
I
you
know
like
here's
a
gauge
because
it
doesn't
match
my
pattern,
which
is
just
just
pseudo
units
counts.
C
But
then
we
came
to
ratios
that
were
compression
rates.
Now
compression
rates
can
be
higher
than
one.
If
you,
if
you,
if
you're
doing
bad
compression,
are
often
lower
than
one
but
it's
different
than
utilization,
and
I
actually
just
want
to
point
out
that
cpu
utilization
is
anything.
That's
utilization
as
a
function
of
time
in
a
metric
system
is
going
to
be
a
special
case.
E
I guess it's more of a display question later on, a UI question: if you know it's a percentage, there are probably ways of displaying it nicely, knowing that it is a percentage, say stacked bar charts, if you want to go there. But if you don't have that information, then you can't just assume that it's a ratio or not; as Josh already mentioned, it's a very hard guessing game.
C
It's the sum that you want to see aggregated, and that's when your stacked bar charts work, because the area is meaningful, because it's a count. Whereas if you sum a bunch of temperatures, I don't think you ever want a bar chart there. So now, when it comes to utilization: is it a gauge or is it an up-down counter? The thing is, it can be defined as both; it's a number between zero and one.
C
It's
a
probabilistic
count,
though,
as
well
and
and
so
you
can
call
it
a
fractional
count
or
you
can
call
it
a
fraction
and
I
think
both
work
and
it
gets
down
to.
What
do
you
want
the
interpretation
to
be
when
you
graph
it-
and
I
think
that's
where
I
like-
to
hold
on
to
this
distinction
between
up
down
counter
and
gauge?
E
I
think
what
I
what
I
had
in
front
of
my
inner
eye-
I
guess,
is
maybe
not
a
stacked
bar
chart,
but
a
scaled
bar
chart
that
like,
for
example,
ct
cpu
utilization.
E
You
would
probably
want
to
scale
it
to
100
to
see
basically
how
much
you
have
left
or
to
to
put
it
into
a
relation
in
some
sort
of
way.
But,
as
you
said,
utilization
is
it's
it's
its
own.
C
And
there's
a
there's,
a
different
semantic
dimension:
we've
defined
for
when
it's
not
a
utilization
as
a
function
of
time.
So
if
it's
like
usage
and
limit
and
utilization
is
defined
as
usage
divided
by
limit.
So
potentially,
if
you
saw
usage
and
limit,
you
might
plot
it
one
way
and
if
the
utilization
is
defined
as
those
two
and
possibly
you
could
infer,
I'm
not
sure.
D
Yeah
and
but
another
thing
I
I
suggest
that
we
consider
if
we
try
to
restrict
what
what
can
be
called
a
percentage,
then
maybe
a
lot
of
existing
things.
People
just
by
their
instinct,
would
feel
these
are
percentage.
Actually
it's
pronounced.
It's
not,
or
maybe
you
start
with
something,
for
example,
even
the
like
the
humidity
level.
It
can
be
over
saturated,
so
it
can
go
beyond
a
hundred
percent,
and
maybe
you
start
by
making
that
a
percentage
and
later
you
learn
some
physics
and
you
realize
you're
wrong.
What
are
you
going
to
do?
E
Okay, so I guess the more or less easy way out is to just leave it as it is, and I hope, however...
C
We don't have a good story for translation to and from Prometheus at this point in time; it's been discussed very recently in Slack. OpenMetrics has a convention to append a unit string, so with a lot more strictness you could absolutely define translation into Prometheus and back out again, where we put on the units, but there's got to be a lot more spec work done to do that, and I'm not sure anyone's excited by it. So if you're measuring in milliseconds, are you going to output milliseconds, or...
C
Actually,
there
is
some
spec
about
using
the
standard
base
unit.
I
think-
and
so
should
you
change
that
into
seconds.
I
don't
want
to
get
into
units
conversion
at
all,
but
but
there's
some
question
being
asked
about
how
to
get
to
and
from
prometheus
without
losing
units
and
right
now
there's
nowhere
to
put
the
units
other
than
in
the
metric
name.
D
Here
is
so
so,
do
we
enforce
people
to
use
unit
one
for
percentage
or
this?
This
is
more
like
a
suggestion.
We
don't
enforce
that,
because
if
we
don't
enforce
that,
I
I
feel
it
might
be
better.
We
can
actually
like
explore
that
in
semantic
convention,
like
hey,
we
have
this
cpu
utilization.
We
have
the
room
humidity
level,
and
this
is
the
recommended
like
unit
and
and
because
the
semantic
convention
is
still
experimental.
E
Actually,
I
just
checked
and
the
the
specification
actually
links
to
an
open
issue
that
says,
with
the
it's
it's
the
specification
issue,
number
705.
It
says
standardization
of
units
needed.
I
guess
this
is
exactly
what
what
george
just
said
is.
D
Yeah
and
I
I
feel
we
have
a
dependency
on
semantic
convention
like
we
can,
we
cannot
just
try
to
like
like
learn
like
two
examples
and
and
decide.
Maybe
we
need
to
explore
more
and
have
more
implementation
and
later
we
can
decide.
But
as
long
as
I
I
I
think,
when
people
see
unit
as
one,
they
can
use
the
most
conservative
approach
and
if
they
see
more
information
they
can.
We
can
try
to
be
smarter
and
give
people
a
better
experience,
and
we
know
that
experimental
we
can.
F
Yes, I'm here, hello. I just wanted to mention this: I've had this PR open for a while. It has three approvals and it's been discussed. I just need one more spec approver's approval before it can be merged, so I wanted to bring it to everyone's attention. If someone could please take a look at it...
A
Yeah, I reviewed that earlier and it makes sense; I just approved it as well. I think the values are fine. Technically we have enough reviews now, but we need more eyes in case somebody's curious about this. But if I remember correctly, this was already approved by the client instrumentation group, right?
F
Yes, it has been. I am part of the client instrumentation group, and I think Nev has approved it; he participates.
A
Yeah, I would say then: let's try to merge it soon. I don't know how that works with KubeCon, since not everybody is there. Anyway, let's try to get some more reviews. The reviews that you have right now are enough, I think; it's just an addition to the resources, so it's fine. Let's wait a couple more days, and if nothing else comes up, we will merge it this week.