From YouTube: 2022-11-09 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
A
All right, welcome everyone to the Prometheus working group. Looks like we have a few things on the agenda today. It's five after now, so why don't we get started. I'm not sure who added these, but...
B
I put the second one there. I don't know... happy to mention that it came from a spec call last week.
B
So basically, yeah, basically there's an issue there, and there was this thing about adding one more option for the temporality preference, and the thing is how this would impact Prometheus interoperability, you know, with OpenTelemetry. I don't know; well, I thought he would be here to discuss that. That was the hesitation from that group, you know, because now we have delta... sorry, we have cumulative, delta preferred, and they could be stateless.
B
That was part of the question. It most likely wouldn't be the new default, but at the same time we would recommend it to users, because in theory, hopefully, this would reduce the memory footprint, because basically, you know, the synchronous instruments could use delta and the asynchronous ones would use cumulative.
B
So the thing is, how can we recommend that? And even if, you know, it wouldn't even be the default, but if we recommend it, how well will it play with Prometheus? Or maybe you can let me know whether it's really important. Maybe it's not that bad, but we'd recommend it for some users while we keep the cumulative default.
C
All right, sorry I'm late. I just put something on the agenda and then didn't actually put time into preparing myself to talk about it. So I am not prepared to talk about the thing I wanted to talk about. That's what I'm here to say.
C
No, I feel like I'm falling behind on lots of open threads of correspondence, unfortunately, so I'm sorry. But as long as we're here and it's on the agenda, I'm not just walking into a meeting and starting to talk.
C
The plan I was thinking through, and put this placeholder into the notes about, is basically: OpenTelemetry seems to be held to a higher standard than Prometheus. Where, you know, if you are running out of memory, you should do something, and I think the Prometheus answer is just "suck it up, stop doing that." And I've been happy with that answer myself, as a vendor that supports delta temporality.
C
What I've been toying with in my head is basically: you would create a new parameter, something like a sort of soft limit. So maybe you say a thousand time series is my soft limit; any time I'm over a thousand time series, I'm going to try to start shedding them, and the idea would be that...
C
So the idea is that you can begin to change your start time when you introduce new time series, and when you're going to get rid of an old time series. I was thinking to propose that, first of all, the exporter is going to remember the last time it was read, and this will work best if you have just a single scraper scraping you. As a Prometheus, the idea is to report "no data present," using our no-data-present flag in OpenTelemetry, or a NaN value.
C
Wait... yeah, I'm just trying to find a way for a scraper to receive information saying: this is the last time you're going to see this stale value; I'm going to drop it from memory the next time. I just realized there's not really a way to represent that.
A
I don't even think you need to do that. I think it just disappears from the exposition.
A
I don't think there's a problem from the Prometheus server side of things. For example, I used to maintain cAdvisor, right, and container series drop from that endpoint frequently. Okay, and I assume that anyone running that in a cluster would need to have... there must be a mechanism to prevent the Prometheus server from exploding, yeah, with container IDs and stuff cycling, right.
C
Okay, so that's a good point.
C
So I guess the simplest thing we could do is just have each SDK have a limit, and when you reach that limit, go with LRU or something like that to start evicting things. Is that roughly what people imagine?
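A minimal sketch of the per-SDK limit with LRU eviction described above. The class, its names, and the eviction policy are hypothetical illustrations, not the actual OpenTelemetry SDK API; it just shows the shape of the idea: once tracked series exceed a limit, the least-recently-updated series is shed.

```python
from collections import OrderedDict

class LRUSeriesStore:
    """Hypothetical sketch: a bounded store of time series where
    exceeding `limit` evicts the least-recently-updated series."""

    def __init__(self, limit):
        self.limit = limit
        self.series = OrderedDict()  # label tuple -> cumulative value

    def record(self, labels, value):
        key = tuple(sorted(labels.items()))
        if key in self.series:
            self.series.move_to_end(key)       # mark as recently used
            self.series[key] += value
        else:
            self.series[key] = value
        while len(self.series) > self.limit:
            self.series.popitem(last=False)    # evict the coldest series

store = LRUSeriesStore(limit=2)
store.record({"path": "/a"}, 1)
store.record({"path": "/b"}, 1)
store.record({"path": "/c"}, 1)   # "/a" is evicted here
print(len(store.series))          # -> 2
```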
A
There was a... I had the opportunity to talk to Bartek, who's a Prometheus client maintainer, at KubeCon, and one of the things we ended up talking about was some proposals that they're looking at, which is not quite this, but essentially that...
A
When you define the metric, you can limit the... or you can define a function that describes how to convert observed labels to actual labels, meaning basically you could use it to say: okay, I want to keep these five values, and then for anything else I want to put the value of this label as "unknown" or something, as a way to sort of limit the cardinality of any particular label. Yeah.
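The observed-to-actual label mapping described above can be sketched like this. The function names and the allow-list are made up for illustration; the point is only that unrecognized label values collapse into a single "unknown" bucket, capping the label's cardinality.

```python
def cap_label(allowed, value):
    """Map an observed label value to an actual label value,
    collapsing everything outside an allow-list into 'unknown'."""
    return value if value in allowed else "unknown"

# Hypothetical allow-list: keep these five values, bucket the rest.
ALLOWED_PATHS = {"/home", "/login", "/search", "/cart", "/checkout"}

def observe(labels):
    # Rewrite labels before they become time series.
    return {k: cap_label(ALLOWED_PATHS, v) if k == "path" else v
            for k, v in labels.items()}

print(observe({"path": "/home"}))        # kept as-is
print(observe({"path": "/user/12345"}))  # bucketed to 'unknown'
```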
C
And would that be close to having a view with a filter, for the OTel SDK? I think so, yeah. And then the problem we'll get back to is that... I mean, I'm familiar with how the OTel Go SDK today implements the filter, and it keeps infinite memory.
C
So
we
get
back
to
the
same
problem
because
it's
got
a
cache
of
of
filtered
attribute,
unfiltered
attributes
to
filtered
attributes,
and
we
don't
know
how
to
clear
that
either,
but
that
actually,
that
that's
that's,
do
that's
solvable,
that's
just
a
cache,
so
we
can
always
recompute
the
filtered
attributes
see
that
again.
This
is
like
there's
nothing
to
fix
here.
Problem
like
like
there's
another
answer
for
your
cardinality
problem
is
configure
a
view
and
make
sure
that
you're,
otherwise
you
don't
run
a
memory.
Well,
I.
A
A
It's like, if I say this gauge has value 50, and then... with Prometheus I can implement Collect and Describe, and that's how I can have series disappear over time. I think part of the issue is that there's no way... even from OTel's perspective, there's no way to tell something that it should be forgotten on a Prometheus endpoint.
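The pull-style Collect pattern mentioned above can be sketched in a few lines of plain Python (this is not the real Prometheus client API, just the shape of it): the exporter asks a callback for samples at scrape time, so any series the callback stops returning simply vanishes from the exposition.

```python
# Application-owned state; entries can come and go freely.
current = {"/a": 3.0, "/b": 7.0}

def collect():
    # Called at scrape time; only series yielded here are exposed.
    for path, value in current.items():
        yield ("http_inflight", {"path": path}, value)

def scrape():
    return list(collect())

print(len(scrape()))   # both series exposed
del current["/b"]      # the app forgets one series...
print(len(scrape()))   # ...and it disappears from the next scrape
```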
C
Yeah, the ways I know of... I mean, if an asynchronous callback just simply forgets to report a value, it's still on, and I think that's okay, but that doesn't help with synchronous state, which is where the problem really lies. And I guess we've...
C
It's on the instrument: you give it a label set, and it goes and flushes it out of memory. That's what users coming to OTel kind of expect, and I'm trying to remember all the reasons why we didn't put that in there.
C
I,
don't
have
a
good
answer,
I
feel
like
I
used
to
I
think
it
starts
out
with
like
what
is
the
semantics
of
that
supposed
to
be,
and
the
answer
is
not
doesn't
have
semantics.
It
just
has
forget
this,
which
doesn't
really
mean
much
I,
think
could
call
it
an
advisory
thing.
It
doesn't
have
any
meaning.
It's
just
meant
to
help
you
limit
memory,
but
I,
don't
know.
There's
something
bothering
me
about
this.
C
I don't have anything more to say about this; for some reason I feel like... So what would be the... Does anybody see a problem with having a delete call? I can't remember; I'll have to think about this.
C
Yeah, I can't remember. I feel like we resisted this for a long time, and I can't remember why at this point. Yeah, okay, so there's just sort of a proposal here: that we open up a delete method for synchronous instruments, and the effect of it is to...
C
C
And, like all things, I feel like there's a connection with our problem about synchronous gauges. One of the things I wanted to specify for the synchronous gauge is that it has a characteristic derived from temporality, which says: if you are a delta temporality reporter and you have a synchronous gauge and you stop reporting, it disappears. If you are a cumulative reporter and you have a synchronous gauge and you stop reporting, it stays forever. So the effect of delete is...
C
C
The original issue that was filed was about having a consistent mechanism to deal with overflow. Adding delete is not necessarily going to help the user who has a cardinality problem, and who has no way of remembering what they have to delete. So there's some question as to whether users will even be happy if we added delete for time series.
C
What do we say about this? We say something like: we discussed it and couldn't find a reason not to just add delete-time-series, which is effectively the same as Prometheus, and the effects of which are this. This is an API change; it's a big deal to make this change at the very last minute. That's part of why I think we don't want it. But I don't think... because again, I have a solution with delta temporality; delta temporality solves all my problems here. Is this about...
C
I guess the reason we're talking about this in a Prometheus working group is that it feels like a problem that impacts Prometheus users, which we haven't addressed with the OTel SDK, because we don't have a delete, right. But we do have this asynchronous callback mechanism, and you can forget things there. But I guess the other reason is: we want to imagine a scenario where there's an OTel Collector with a cumulative Prometheus exporter, and there's an OTel SDK possibly sending deltas to get its own memory under control.
A
So if we wanted to match something like that, which we definitely don't have to, the equivalent would be to say the maximum cardinality that we'll accept. The weird thing is, you could have things split into multiple batches, but this would be like a time-windowed limit on the number of series, right. It's hard to implement with push. That's something that mirrors this, yeah.
A
Yes, right, which isn't...
A
I could use metric_relabel_configs to drop particular dimensions, for example, or to take a particular label...
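For reference, a sketch of what that looks like in a Prometheus scrape configuration. The job name, target, and the `container_id` / `debug_.*` patterns are made-up examples; only the `metric_relabel_configs` mechanism itself is real.

```yaml
scrape_configs:
  - job_name: example
    static_configs:
      - targets: ["localhost:9464"]
    metric_relabel_configs:
      # Drop a high-cardinality label from every series
      - action: labeldrop
        regex: container_id
      # Drop whole series whose name matches a pattern
      - source_labels: [__name__]
        regex: "debug_.*"
        action: drop
```

Relabeling runs after the scrape but before ingestion, so it limits what the server stores, not what the target exposes.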
C
Well, I'm leaning towards the position that we should do what you just described. I like this. This is a way of saying: we have a limit. When... well, someone's going to come in and say, "well, I want a per-library or per-instrumentation-scope limit," right. We haven't even dealt with the questions that come up, like: what if you have a scope with too many series itself? Like, I've got a scope that has 100,000 series in it, and the actual data is 60 megabytes.
C
Don't
do
that,
so
we
you
know
immediately
people
there
were
saying
well,
you
could
solve
that
for
us
by
splitting
it
and
the
problem
is
it's
all
one
scope,
because
it's
a
legacy:
migration
like
we,
we
have
one
scope
for
all
of
our
old
metrics,
and
so
there
was
nowhere,
obviously
to
split
it
and
I
told
them
not
to
do
it.
C
We finally didn't do it, but in this proposal that we're working through, it sounds like we would want to put a limit on the number of time series per scope, not a byte limit, because that's per export. We haven't talked about, in the OTel world, how to split an export right now, and I don't know what people would say if we started talking about it. I feel like, from a performance perspective, what Lightstep asked me to do was very reasonable.
C
That feels like progress. I've been feeling this guilt about not catching up with OTel. I would be happy to summarize the conversation we just had in the issue that I linked in the meeting notes.
B
I was briefly explaining, yeah, that this came from a discussion on the spec call last week. Actually, David was asking, first, whether this stateless option would be the new default, which I don't think is the case; I don't think it could become the default. And second, whether this could be more oriented towards the TSDB case or the Collector as well, like as an attempt, you know, to massage data.
C
So today, as I understand it, there's code in the Prometheus remote write exporter to receive deltas and build out cumulative state. So if the OTel Collector truly does that, the way I believe it does from reading a few code reviews, then stateless should work for the user, so that they can configure things so that they can push their cardinality problem downstream.
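The delta-to-cumulative accumulation being described can be sketched as follows. This is a hypothetical illustration of the technique, not the actual collector exporter code: the stateful side keeps a running sum per series, so a stateless SDK can emit deltas and push its cardinality problem downstream.

```python
class DeltaToCumulative:
    """Hypothetical sketch of delta-to-cumulative conversion:
    remember a running sum per series key and expose it as the
    cumulative value."""

    def __init__(self):
        self._sums = {}

    def consume(self, series_key, delta):
        self._sums[series_key] = self._sums.get(series_key, 0) + delta
        return self._sums[series_key]   # cumulative value to expose

conv = DeltaToCumulative()
print(conv.consume("requests{path=/a}", 3))  # -> 3
print(conv.consume("requests{path=/a}", 2))  # -> 5
```

Note the memory trade-off the speakers raise: the converter now holds one entry per active series, so the cardinality problem moves here rather than disappearing.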
A
We don't have the mechanism of async instruments, where something no longer being sent becomes a signal for us to drop stuff from in-memory state. So it's maybe a little bit harder for a Prometheus exporter in the Collector to manage memory in that way than it would be for a Prometheus exporter in the SDK. So, funny...
C
Trying to remember why this matters, but right, so do we have to imagine a situation where you're doing the delta-to-cumulative conversion and then pushing it again, and...
A
I actually don't think this is a problem for Prometheus remote write, because it's also push. This is only a problem for the Prometheus exporter. I've been trying to dig in and remember how it handles this. Okay, so it has a pre-configured metric expiration timeout of five minutes, which is adjustable.
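The expiration behavior described above can be sketched like this. The class and its interface are hypothetical; only the idea comes from the discussion: series not updated within the timeout (five minutes in the exporter being discussed) are dropped at collection time, which bounds memory without an explicit delete.

```python
import time

class ExpiringStore:
    """Hypothetical sketch: drop series whose last update is older
    than `timeout` seconds when collecting."""

    def __init__(self, timeout=300.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self._points = {}   # key -> (value, last_update)

    def update(self, key, value):
        self._points[key] = (value, self.clock())

    def collect(self):
        now = self.clock()
        # Prune anything that has gone stale, then expose the rest.
        self._points = {k: (v, t) for k, (v, t) in self._points.items()
                        if now - t <= self.timeout}
        return {k: v for k, (v, t) in self._points.items()}

# Drive it with a fake clock so the expiry is visible immediately.
fake_now = [0.0]
store = ExpiringStore(timeout=300.0, clock=lambda: fake_now[0])
store.update("a", 1.0)
fake_now[0] = 301.0
print(store.collect())  # -> {} ; "a" expired after five minutes
```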
C
Well, okay, we've found a lot of solutions here. Good group. Sorry. So yeah, I think what we're saying is that there's not a problem from Prometheus's perspective, or there's a precedent already, which says: just, you know, configure a timeout, and if something's not updating, then you can forget it after a while. Because I'm...
C
So
that
may
be
true:
I
I
forget
where
the
state
in
a
Prometheus
exporter
is,
and
yes
it
would
be
nice
and
in
some
sense
it
would
be
nice
to
have
a
generic
Delta
accumulative,
but
it
just
seems
to
cost
twice
as
much.
If
you
don't
like
share
the
state
with
your
exporter,
probably
what's
happening.
C
Let's call it just an investigation, because I don't really know. During the meeting I said I wouldn't stake my life on it. I feel like I think that does something, and I don't really know what it does. It would be good for me, at least, to know the differences between those.
C
Anyway, Carlos, my impression is that not many people are against stateless; it's just that there's not very many people who are excited about it. But if we can tell people, "you can run a collector, you can use stateless temporality, and you can still use Prometheus," I think that's compelling enough. So I'd like to be able to say that without waving my hands, so I think, for me, I'd like to do a little bit of research.
B
I can support that; if you need anything, reviews, okay, just let me know.
C
C
I have to figure out how it's working. David asked the right question, and I don't know the right answer. My understanding also is that there's a little bit of a question of: why does somebody want to run the Prometheus exporter from the Collector instead of the Prometheus remote write exporter? Like, I don't get it exactly, and I don't know, so I'm confused about it.
C
Yeah, that sounds familiar to me, right: there's no metadata. You don't know if it's a counter or a gauge until you consult your database. Yeah, okay, so that's why people might be using the Prometheus exporter. Okay, I don't know. I feel like I know what I need to do, as far as I can follow up on both of these. Carlos, I will... this is my corner; I can handle it. Great, thank you so much. Cool, all right, I'm gonna get to work, everybody.