From YouTube: 2022-04-27 meeting
B
Okay, I think we can get started. I see Goutham has already got some great items on the agenda. If anybody else has anything to discuss, please add it to the list and we'll talk about it. Goutham, do you want to go ahead and kick us off with the discussion of the memory issue?
C
It's not a memory leak, as in the memory doesn't keep on growing, but there's a significant increase, like a 25% increase in memory, with the change that I made for the OpenTelemetry Collector. The change is that we pass the target info and the metadata cache in the context, so the collector can then look up the metadata and track target labels.
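To make the pattern being described a bit more concrete, here is a minimal, hypothetical Go sketch of carrying a metadata cache in a context.Context so downstream collector code can look it up. The names (metadataCache, ContextWithMetadata, MetadataFromContext) are illustrative only, not the actual Prometheus or Collector identifiers.

```go
// Hypothetical sketch of the pattern discussed above: attach a metadata
// cache (and, by extension, target info) to a context so code further down
// the pipeline can look up metadata and target labels.
package scrapectx

import "context"

type metadataCache map[string]string // illustrative: metric name -> metadata

type ctxKey struct{}

// ContextWithMetadata attaches the cache to ctx.
func ContextWithMetadata(ctx context.Context, mc metadataCache) context.Context {
	return context.WithValue(ctx, ctxKey{}, mc)
}

// MetadataFromContext retrieves the cache; ok is false if none was attached.
func MetadataFromContext(ctx context.Context) (metadataCache, bool) {
	mc, ok := ctx.Value(ctxKey{}).(metadataCache)
	return mc, ok
}
```

The memory discussion that follows is about such a context being retained somewhere longer than expected, which keeps everything stored in it alive as well.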
C
I've been trying to debug this. It only shows up in a high-churn environment after three hours; it doesn't have anything to do with the compactions.
C
For some reason it consistently shows up after every three hours, and I found a workaround for it after like a week of experimentation. I gave up on figuring out where this context is being held that causes this extra memory issue. Now that I have a workaround, I realized I could slowly replace the older context to see where it's being held, so I'm going to first get the workaround merged and then fix the underlying issue.
C
If that makes sense. So you will see a PR to OpenTelemetry, probably sometime next week, that fixes this issue, but it only shows up in high-churn, high-throughput environments after a few hours.
C
There will be a changed version of Prometheus, and we will just need to update Prometheus, but nothing else will change on the OpenTelemetry side.
B
Okay, cool, yeah. That's very much of interest to us here at AWS, because we're looking to release our ADOT, potentially including the latest release, the 0.49 release, of OTel. But if we can force an update of Prometheus to work around that, that would be wonderful. So it seems like we can. Cool.
C
Yes, the workaround will make it next week, because the problem is that iterating through changes is super slow; I have to wait like four hours to figure out if a change is valid or not. So, instead of fixing the root cause, the original bug, I'm going to merge the workaround into Prometheus first and then update the Prometheus dependency, and maybe next week, if I'm able to figure out the root cause, I'll then update Prometheus again with the actual fix. If that makes sense. Okay.
C
It looks like excessive memory usage, as in, when we are running this benchmark the memory usage is around 29 GB for the first three hours for both the previous Prometheus release and the buggy Prometheus. After three hours there's a GC, and the previous Prometheus comes back down to 29 GB, but the buggy Prometheus stays at 39 GB. So there's like a 10 GB increase, and it continues staying at that 10 GB increase for the next five or six hours, and that's where the benchmarks kind of stopped. So essentially I would say it's a permanent increase, and for some reason something is holding on to this context, and this context is being garbage collected way later, which is causing this. So yeah, it's not a leak, but basically a significant increase, like a 25 to 30 percent increase in memory.
B
Okay. The next item is sparse histograms and exponential histograms.
C
Yep, I had the opportunity to sync with Björn regarding the state of Prometheus sparse histograms, and the PoC is super close to being done, and most of the details are nailed down. So I was looking into what it takes to convert from Prometheus sparse histograms into OTel exponential histograms and vice versa.
C
The two incompatibilities I found: basically, Prometheus has the concept of a zero bucket size, and if a value is smaller than the zero bucket size, we just put the value in the zero bucket, while in OpenTelemetry you have an explicit zero value and zero value count.
C
There are provisions, like the OpenTelemetry spec says the instrumentation library can basically say, if the value is too small, it's a zero value, so it all kind of works out. But if you go from Prometheus to OTel to Prometheus, you lose the zero value bucket size, which, to be honest, sounds like it's okay to do; it's not that bad.
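A minimal sketch of the lossy round trip being described, under the assumption that the Prometheus side carries both a zero-bucket threshold and a count while the OTel exponential histogram side carries only a zero count. The struct and function names here are illustrative, not the real library types.

```go
// Hypothetical sketch: converting the zero bucket between the two models
// keeps the count but drops the Prometheus zero-bucket size on the way back.
package histconv

type promZeroBucket struct {
	Threshold float64 // values with |v| <= Threshold land in the zero bucket
	Count     uint64
}

type otelZero struct {
	Count uint64 // no threshold field in the data model discussed here
}

// promToOtel keeps the count but discards the threshold.
func promToOtel(z promZeroBucket) otelZero {
	return otelZero{Count: z.Count}
}

// otelToProm has to pick some default threshold, so after a
// Prometheus -> OTel -> Prometheus round trip the original zero bucket
// size is lost.
func otelToProm(z otelZero, defaultThreshold float64) promZeroBucket {
	return promZeroBucket{Threshold: defaultThreshold, Count: z.Count}
}
```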
C
What I found that is difficult to convert is when the values match the bucket boundaries exactly, because the bucket boundaries in OTel and Prometheus are mismatched: OTel is lower-bound inclusive, while Prometheus is upper-bound inclusive.
C
So I was curious, like, you know, how can we achieve compatibility between both histogram formats?
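A small runnable sketch of the mismatch being discussed, assuming scale/schema 0 (base 2) and the conventions as described above: OTel buckets taken as lower-inclusive, Prometheus buckets as upper-inclusive. The function names are illustrative, not actual library code.

```go
// Hypothetical sketch: a sample that lands exactly on a power-of-two
// boundary is bucketed differently under the two inclusivity conventions.
package main

import (
	"fmt"
	"math"
)

// otelIndexLowerInclusive: lower-inclusive buckets [2^i, 2^(i+1)).
func otelIndexLowerInclusive(v float64) int {
	return int(math.Floor(math.Log2(v)))
}

// promIndexUpperInclusive: upper-inclusive buckets (2^(i-1), 2^i].
func promIndexUpperInclusive(v float64) int {
	return int(math.Ceil(math.Log2(v)))
}

func main() {
	v := 2.0 // exactly on a bucket boundary
	o := otelIndexLowerInclusive(v)
	p := promIndexUpperInclusive(v)
	fmt.Printf("lower-inclusive: %g falls in [%g, %g)\n", v, math.Pow(2, float64(o)), math.Pow(2, float64(o+1)))
	fmt.Printf("upper-inclusive: %g falls in (%g, %g]\n", v, math.Pow(2, float64(p-1)), math.Pow(2, float64(p)))
}
```

For a value of exactly 2, the lower-inclusive convention puts it in [2, 4) while the upper-inclusive convention puts it in (1, 2], which is why values that match boundaries exactly cannot be converted without a potential off-by-one-bucket error.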
B
C
So essentially, this has always been the case with Prometheus, and it has always been okay. Björn was not super sure about the entire thing, but he was more or less confident about the bucket boundaries, because it allows us to be compatible with the existing histogram buckets, like the existing histograms, and we wanted to keep that compatibility. And one of the reasons OTel picked lower-bound inclusive is because of some floating point efficiencies, like you save an instruction if you do this.
C
B
Okay, can you maybe mention this in the OTel metrics channel on CNCF Slack and tag Josh MacDonald? I think he would be the person to talk to regarding this. My understanding is that the exponential histogram support in the spec is not finalized, but the data model is, so I don't know if we'll be able to change this.
C
So the proto won't change, because it's a start and an end, but we just need to switch around the inclusivity, which could be a data model change. I'm not sure. Again, I'll ping Josh and give you an update in the next meeting.
E
E
C
So essentially, the bucket boundaries are the same; you can't change the bucket boundaries, because they're generated from the exponential scheme. But given a bucket with a count of 500, you cannot say, oh, 50 of them are on the edge, because you don't know what the actual samples are.
B
Right, so it would introduce a potential error in converting between the two formats, and the question then would be: is that potential error acceptable?
C
C
A
Or
if
you're
measuring
discrete
values
and
such
we
spent
large
part
of
2020
discussing
precisely
those
kind
of
things-
and
we
came
to
the
conclusion
that
even
like
having
slight
incompatibilities-
is
still
really
bad
for
the
wider
ecosystem,
which
is
large
part.
Why
why
the
prometheus
histograms
follow
the
existing
model
of
of
the
older
parameters
histograms
to
precisely
avoid
this
type
of
issue?
B
Okay, yeah. I think this is a good issue to raise to the broader OTel metrics community and potentially also discuss at the next spec meeting, if there's something to suggest.
B
Okay, and then the last item we have on the agenda: KubeCon EU. I will be attending.
C
B
Yeah, I think that would be good. I believe Ted Young may also be trying to organize some OpenTelemetry meetup or something like that. I mean, I know that there's a couple of sessions; we have a meet-the-maintainers session on the schedule, and I believe there's a room that we have for some number of hours to do with as we will, which I don't know if we've got a concrete plan for yet, but I'm sure we'll figure that out. There will be a number of people from OTel there.
B
C
B
I don't believe we have anything else, so that's it. I think we can wrap up, and I'll look forward to seeing you guys in two weeks.