From YouTube: 2020-12-11 meeting
A
Hello everyone. As you can see, Andrew is projecting. Please add yourself to the attendees list while we wait.
A
I am usually the person who does a lot of the talking in this meeting, or at least tries to lead a discussion, and I missed the meeting that happened, I believe, a week and a half or two ago between OpenMetrics and OpenTelemetry. So I feel unprepared to lead us into this discussion, but I'm very interested to have it. Is anyone here who was present at that previous meeting? Who would like to lead us off?
C
Sorry, go ahead, Richie. That's okay! So.
E
We had the meeting last week. It was pretty good. I think we had two main topics. Basically, one is that we all agreed it would make sense for us to just try and really work together, so that there is a first-class experience for the Prometheus ecosystem from OpenTelemetry.
E
The other thing is that, in particular, Tyler wanted us to basically go through a few design decisions of why Prometheus, Cortex, and Thanos did certain things in a certain way, and what the underlying learnings were, or why we did this. Which is why I agreed to try and bring a few people from the Prometheus team in once again, which succeeded. So yeah, that's a super short summary. Tyler, did I forget anything?
C
He was asking a lot of questions, specifically around the things that are currently incompatible between OTLP and the OpenMetrics protocol, and I think a little bit around the clients as well, trying to suss out if there is a possibility for compatibility going forward. I think we were left in a state of wondering whether that compatibility is possible or not, and wanting to get more opinions about it. I definitely see a lot more faces today.
C
So yeah, maybe to just jump right in: I think we could probably just explore. I don't know if anybody on this call has really strong opinions as to whether the two projects can or cannot be merged, or should or should not be merged for some particular reason. I think the general consensus from the last meeting was that, as Richie was pointing out, we should try to explore that.
C
I
I
don't
know
if
anybody
knows
of
any
immediate
incompatibilities
that
maybe
we
can
try
to
just
jump
into
and
discuss
to
see.
If
we
can
talk
about
like
how
those
can
be
addressed,
I
guess
is
the
way
to
start
this.
I
have.
B
Well, I have a question. It's not really a conflict or a problem. The question is: if we decide to attempt some sort of merger, or closer relationship, or whatever that might be, do we want to explore changes to the OpenTelemetry metrics APIs to more closely match the OpenMetrics model?
B
I think what you say is probably true, but I think there's another consideration, and that is that we should be considering our end users first and foremost when designing our APIs. We've already gotten so many questions on the APIs, like "where's my gauge", and if we don't do some sort of alignment, we're going to continue to get those questions. Maybe that's okay, but it is a consideration. I like to try to think about the actual end users first and foremost, rather than us as maintainers.
G
From what reading I've done in your repositories, you already have a gauge and a counter in a few different APIs; they're just called different things depending on how you access them. Other libraries have those as well, also called different things. And I think that as long as OpenTelemetry can produce valid, useful OpenMetrics, we're good. But then there's a secondary question: there are things like info and so on, and should those also be in the API, so people can take full advantage of the metrics?
A
So
so
we've
we've
had
some
discussion
about
how
we
think
that
the
info
concept
of
openmetrics
equates
with
the
resource
concept
of
open
telemetry.
That's
one
of
the
conversations
we
might
have.
I
have
a
few
issues
that
I
wanted
to
raise,
but
I'd
love
it
if
people
other
people
wanted
to
start.
This
conversation.
H
Can
I
ask
a
fundamental
question
since
I
don't
have
any
background
on
this
like?
Why
did
you
initially
never
considered
prometheus
as
the
data
model
or
as
the
api
like?
What
was
the
main
reason
that
you
know
things
diverged
that
might.
A
Maybe
I'd
like
to
try
and
answer
that,
so
one
of
the
ways
I
entered
this
project
was
as
a
tracing
person
who
had
worked
on
tracing
apis
and
the
open
tracing
project.
The
concept
there
was
that
we're
going
to
try
and
create
a
semantic
definition
for
this
api
called
tracing
that
we
wanted
to
design,
and
so
the
concept
of
an
interface
that
is
purely
defined
in
terms
of
semantics
was
born
for
tracing
and
open
telemetry
had
to
adopt
that
in
order
to
merge
with
open
census.
A
One
of
the
things
that
came
up
is
that
the
word
gauge
is
actually
used
differently
in
a
prometheus
and
a
statsd
type
of
ecosystem,
and
what
we
have
found
actually
is
I
find
myself
internally
talking
with
people
at
lightstep,
using
the
term
cumulative
gauge
versus
true
gauge,
and
this.
This
sort
of
this
is
associated
with
the
distinction
we're
trying
to
make
between
the
up
down
some
up
down
some
observer
and
the
value
observer,
and
it's
taken
a
while.
A
It
has
to
do
with
the
way
we
subdivide
sums,
so
one
library
might
be
recording
heap
broken
down
by
processor
like
region
or
something
like
that,
like
you,
can
just
subdivide
a
sum
and
as
long
as
you
remember,
to
sum
them
all
back
up
at
the
end,
it's
the
same,
so
one
library
can
be
running
with
two
labels
and
one
library
can
be
running
with
three
labels
and
if
they're
sums
they'll
be
summed
correctly
and
if
they're
values
they'll
be
averaged
correctly.
But
if
you
mix
those
two,
we
get
a
difference.
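The aggregation asymmetry A describes can be sketched in a few lines of Python. The metric samples and label names here are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical samples of one sum metric, reported with three labels.
samples = [
    ({"host": "a", "proc": "1", "region": "us"}, 10.0),
    ({"host": "a", "proc": "2", "region": "us"}, 30.0),
    ({"host": "b", "proc": "1", "region": "us"}, 20.0),
]

def drop_label(samples, label):
    """Re-aggregate onto a smaller label set by SUMMING, as a sum allows."""
    out = defaultdict(float)
    for labels, value in samples:
        key = tuple(sorted((k, v) for k, v in labels.items() if k != label))
        out[key] += value
    return dict(out)

# Summing is closed under label removal: totals agree either way.
two_labels = drop_label(samples, "proc")
assert sum(two_labels.values()) == sum(v for _, v in samples)

# Averaging is NOT: the mean over the 3 fine-grained series (20.0) differs
# from the mean over the 2 coarser series (30.0), so mixing label sets
# and averaging "gauge" data gives inconsistent answers.
fine_avg = sum(v for _, v in samples) / len(samples)       # 20.0
coarse_avg = sum(two_labels.values()) / len(two_labels)    # 30.0
```

This is why the semantics (sum versus point-in-time value) have to be known before labels can be dropped safely.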
A
So that's one of the things that we found. Why would we try to do that? Well, one of the things that comes up, and I think is a big distinction between statsd and Prometheus, is statelessness. Statsd libraries have always been stateless, and if you want to have a gauge, there's a question about what the semantics of the operations on the gauge are.
A
We
have
this
up
down
counter
instrument
in
light
in
open
telemetry,
which
is
similar
to
the
gauge
operation
in
prometheus,
which
has
increment
decrement,
but
not
similar
to
the
gauge
in
previous.
That
has
a
set
operation,
and
if
we're
going
to
be
stateless,
we
need
to
be
able
to
encode
those
increment
decrements
as
messages
on
the
wire,
not
as
state
that
we
keep
in
memory.
A
So there are a lot of differences here that I think we're finding, because we want statelessness some of the time, and we want to accommodate those statsd users. That's sort of how this started. The only other thing I'm aware of is the start-time question, which has been a point of contention, and I'm not sure where we sit right now. That's all I had to say.
G
Fundamentally, a gauge is a snapshot of state. It can go up and down, keeping it very simple, and there are basically two ways you can do that at the API level: you can do a callback, or you can do some form of increment/decrement/set. From an OpenMetrics standpoint, those are both a gauge.
G
However,
if
you
look
at
something
like
drop
wizard
and
what
they
call
a
counter
is
a
gauge
what
they
call
a
gate.
What's
the
gauge
with
the
incremental
decrement,
whereas
their
gauge
is
the
gauge
with
callback
in
prometheus,
the
gauge
is
the
one
with
increment
and
decrement
and
depending
which
client
library
uses.
There's
you
have
to
use
a
custom
collector
or
that,
but
it's
a
distinction
between
semantics,
because
it
is
a
gauge
semantically
versus
what
apis
are
offered
to
the
users
and
that's
kind
of
where
the
distinction
is.
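The two API styles G contrasts can be sketched with a toy gauge class. This is illustrative only and is not the API of any real client library:

```python
class Gauge:
    """A toy gauge supporting both API styles discussed above."""

    def __init__(self):
        self._value = 0.0
        self._callback = None

    # Style 1: imperative increment/decrement/set
    # (what Dropwizard calls a "counter"; Prometheus calls it a gauge).
    def inc(self, amount=1.0):
        self._value += amount

    def dec(self, amount=1.0):
        self._value -= amount

    def set(self, value):
        self._value = value

    # Style 2: a callback sampled at collection time
    # (Dropwizard's "gauge"; a custom collector in some Prometheus clients).
    def set_callback(self, fn):
        self._callback = fn

    def collect(self):
        """Both styles yield the same thing semantically: a point-in-time value."""
        return self._callback() if self._callback else self._value

inflight = Gauge()
inflight.inc(); inflight.inc(); inflight.dec()   # imperative style

queue = ["a", "b", "c"]
depth = Gauge()
depth.set_callback(lambda: len(queue))           # callback style
```

On the wire both collect to an ordinary gauge sample, which is G's point: the split is in the offered API, not in the exposition.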
A
Yeah, that sounds okay to me. I've been talking about this internally at Lightstep. You have an up-down counter that gets converted, using state, into a cumulative, right? So after all those pluses and minuses, you're going to have a bunch of gauge values. Now, if I have two dimensions in my gauge set, let's say my averages are around 100 when using two labels, but using three labels my average is around 10, because that third label just broke it down into more different sums, so there are more gauges. Now, if I'm averaging my gauges at the end of the day, the users using two labels and the users using three labels are producing different types of data, and we can't average those meaningfully.
A
That's
that's
what
that's
what
this
is
getting
at.
If
you
have
to
sum
the
the
values
with
three
labels
down
into
two
labels
and
then
average
them
to
be
correct,
but
you're
right,
we
can
turn
those
back
into
gauges
and
it's
something
of
a
philosophical
debate
as
to
whether
you
care
to
preserve
the
information
that
this
was
once
a
sum.
That's
supposed
to
be
added
together
with
its
peers
versus
this,
was
once
an
individual
measurement.
That
can't
be
subdivided
in
any
way.
G
I'm
not
sure,
there's
a
meaningful
distinction
there
to
be
honest
and
that's
kind
of
what
the
instrumentation
labels
go
with,
which
comes
to
the
second
point.
I
think
you
were
saying
if
you
have
the
case,
you
have
some
metric
and
in
two
different
systems
it
has
a
different
set
of
labels,
a
different
set
of
instrumentation
labels.
G
In
openmetrics
terms,
those
are
two
different
metrics
they're,
not
the
same
metric
because
they
have
different
labels.
They've
got
very
different
semantics
now
it
could
happen
that
one
of
them
is
just
a
degenerate
form
of
the
other,
and
one
label
is
constant,
possibly
even
empty,
but
fundamentally
if
they
have
that
extra
label,
adding
a
label
is
a
breaking
change
on
the
metric,
because
no
one
might
have
expected
dash.
A
So
this
is
where
open
census
kind
of
opened
the
box,
and
I
and
I
working
with
bogdan,
just
sort
of
carried
it
forward.
The
idea
that
you
can
change
your
metric
labels
at
runtime
and
configure
a
view
to
output,
say
fewer
of
those
labels
meant
we
needed
to
know
semantically
whether
we
were
supposed
to
be
adding
those
things
or
say
averaging
those
things
in
order
to
configure
automatic
reduction
on
the
right
path.
A
We
wanted
to
know
more
information,
you're
saying-
and
I
agree
once
you
get
done
with
your
right
path
and
you're,
just
writing
to
storage.
It's
fine
to
call
it
a
gauge
at
that
point,
but
it
might
be
dangerous
to
mix
those
two
together
with
different
label
sets
and
you're
and
you're
in
the
system
that
you've
described
it's
it's
doing
it
correctly.
It's
treating
them
the
separate
metrics,
which
is
okay
with
me,
yeah.
G
So that's a rabbit hole. Let's sidestep it, because that's a complex design thing and it all requires user input. But at the end of the day, on the wire you're going to provide a gauge with some labels, and if downstream, in whatever monitoring system, people want to combine things that have different meanings because they're coming from different systems, that's just not going to work, or at least not trivially.
A
One of the goals we had was to have these semantic conventions so that everyone can write the same metrics and hopefully have them have the same meaning. This is the rabbit hole that we're in, though. I think we can talk about it.
I
So one of the things I'm seeing here is that you're concentrating a lot on these gauge values. For the most part (I'm in the site reliability engineering landscape) I'm almost never actually using gauges in production for any useful work, because between the sample points in my time series database I have no idea what a gauge is doing. Something like 99.9% of my data is not a gauge. It's a counter, a cumulative monotonic counter, because I can't lose the data between one event and the next event; I need to know the delta between those. In order to do that, I'm almost always keeping a small single float64 in my code to track that cumulative counter over time. So I have to have some state somewhere to keep track of that cumulative counter, whether I'm doing it as a simple counter of the number of requests, or a floating-point counter of seconds, or a complex counter like a histogram bucket series. So going back and forth on this gauge thing is mostly a waste of time, because it's not useful data.
A
Say I have some data going into a buffer and some data coming out of a buffer, or a queue. It's one physical queue, but logically speaking, I've got three label dimensions and I'm putting it in with my three label dimensions, and that means at any given time I have a snapshot of it.
A
That's the idea: you can still be stateless after the end of your flush. You can make a high-performance statsd library, many people have, and OpenTelemetry would have roughly the same performance. So I think, if you boil this conversation about OpenMetrics versus OpenTelemetry down, one of the things we really, really want from the OpenTelemetry wire format is the ability to have counters that you don't have to keep cumulatively in memory forever, because it lets you use higher cardinality.
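A stateless delta-counter pipeline of the kind A describes can be sketched as follows. The class and method names are hypothetical, for illustration only: counts accumulate between flushes, and each flush emits deltas and clears the map, so memory is bounded by the label sets active in one interval rather than over the process lifetime.

```python
from collections import defaultdict

class DeltaCounter:
    """Accumulates increments between flushes; keeps no long-lived state."""

    def __init__(self):
        self._pending = defaultdict(float)

    def add(self, labels: tuple, value: float):
        self._pending[labels] += value

    def flush(self) -> dict:
        """Emit the deltas accumulated since the last flush, then forget them."""
        out = dict(self._pending)
        self._pending.clear()
        return out

c = DeltaCounter()
c.add(("path=/", "code=200"), 1)
c.add(("path=/", "code=200"), 1)
first = c.flush()    # {("path=/", "code=200"): 2.0}

c.add(("path=/", "code=500"), 1)
second = c.flush()   # only the new interval's label sets are present
```

A cumulative exporter, by contrast, would have to keep every label set it has ever seen in `_pending` forever, which is exactly the memory cost being discussed.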
E
I'm not even sure if this is about making changes to OTel or not. One of the main motivations of being here is basically the question of what design decisions Prometheus made, for what reason, what hard lessons we have learned, and what others might learn from that. Not to try and force anyone to do anything, because I think that's definitely out of scope for anyone who's not core OpenTelemetry.
E
The
one
thing
which
I'm
still
unclear
about
is
that
even
with
deltas,
you
don't
have
like.
The
state
still
exists,
especially
when,
when
you
need
to
track
which
ingestor
last
got,
which
delta
successfully
and
you
need
to
track
this
on
both
the
instrumentation
layer
and
on
the
injector
layer.
And
if
you
have
any
horizontal
scraper
or
any
middleware,
which
is
acting
as
a
message
bus
or
what
have
you
you?
You
need
to
rebuild
this
type
of
state
a
third
time
and
that's
one
of
the
core
things.
E
Why
why
we
do
have
absolute
counters,
because
you
get
rid
of
all
this
extra
state,
which
is
basically
distributed
in
small
little
pieces
to
to
to
be
picked
back
up
later
from
from
all
the
places
at
once,
instead
opting
to
having
only
one
one
counter
to
carry,
and
so,
for
example,
with
the
open
metrics
head
on
counters,
do
not
go
away,
which
is
usually
not
a
problem
cloud
native
deployments,
because
you
have
you
have
short-lived
processes
and
they
only
live
even
shorter.
Microservices
nanoservice
is
serverless,
it's
just
shorter,
shorter,
shorter.
E
This is the level at which Tyler actually wanted to discuss things: what lessons did we learn the hard way, or maybe the easy way, and what could be used from there. Not to discuss whether it's possible to drop one label from a three-label set and have two labels with a different sum and a different average, because you can do this in the pipeline, you can do it in the ingester. It doesn't matter, as long as you transmit that state, or that information, somehow.
C
Yeah
thanks
rich,
that's
I'd
like
to
yeah.
Hopefully
we
can
pull
back
a
little
bit
because
I
think,
at
the
end
of
the
day,
like
the
answer
that
I
hope
we
can
solve.
This
is
the
end
user.
C
You
know
right
now,
there's
there's
this
question
as
to
like
somebody,
who's
looking
to
implement
or
instrument
some
new
telemetry
system
into
their
their
code
base
and
there's
a
conflict
here.
Do
they
you
know
is:
is
open
telemetry.
The
right
solution
is
openmetrics.
The
right
solution,
obviously
they're,
not
a
one
for
one
like
they're,
trying
to
solve
for
problems,
but
there
is
overlap,
and
I
think
it's
the
overlap
that
we're
trying
to
address
at
this
point
in
time
and
understanding
you
know,
like
you
know
my
goal
here
is
that
there's
compatibility.
C
I
think
that
that's
always
been
the
goal
of
open
telemetry
and,
I
think
open
you
know.
Rich
has
been
in
a
lot
of
our
meetings.
We
want
to
make
sure
that
metrics
is
also
like.
We
have
a
working
relationship
that
we
are
going
to
be
compatible,
and
so
that
is,
I
think,
and
I
don't
think
that's
going
away
and
we
want
the
end
user
to
to
have
support
for
both
of
them.
The
question
is
just:
do
we
want
to
support
both
of
them
in
the
long
term?
C
So
when
the
the
user
is
getting
data
out
of
an
open,
telemetry
system
and
optometry
instrument
system,
can
it
be
in
open
metrics?
How
does
that
path?
Look,
and
you
know,
is
there
such
thing
as
the
otlp,
the
open,
telemetry
language
protocol,
that
you
know
a
user
is
going
to
need
to
support
as
well
or
is
that
not
something
that's
the
case.
E
And
just
to
be
clear,
most
of
what
I
saw
in
open
telemetry
can
easily
be
mapped
into
into
the
wire
format.
So,
for
example,
those
host
centric
labels
or-
or
I
forgot
the
name
but
host
central
information,
which
you
mandate
and
then
code
in
open
telemetry
that
doesn't
matter
for
open,
metrics
or
prometheus,
like
we
don't
prescribe
it,
because
we
find
it's
useful
not
to
do
this
for
a
variety
of
reasons.
E
So I think, from the compatibility point of view, it's a lot more important to talk about, for example, whether histograms are flipped upside down, such that you can't really do math between the two and you can't really translate from one format to the other. Or, if you must support dots in OpenTelemetry metric names, how to recast those safely into underscores.
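As a concrete illustration of that last point, a minimal name-sanitizing step might look like the following. This is a sketch, not the official mapping specified by either project:

```python
import re

def to_prometheus_name(otel_name: str) -> str:
    """Recast a dotted OpenTelemetry-style metric name into a
    Prometheus/OpenMetrics-legal one: replace characters outside
    [a-zA-Z0-9_:] with underscores, and guard against a leading digit."""
    name = re.sub(r"[^a-zA-Z0-9_:]", "_", otel_name)
    if name and name[0].isdigit():
        name = "_" + name
    return name

to_prometheus_name("http.server.duration")   # -> "http_server_duration"
```

The word "safely" is doing real work here: `http.server.duration` and `http_server_duration` collide after this mapping, which is exactly the kind of case the two projects have to agree on.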
D
I'm still missing something; it's unclear to me. It feels like an open-ended conversation. What's the actual question? Are we trying to decide whether we could merge wire output and support wire output? Tyler, you've mentioned the long term: could the project live without OTLP metrics exposition? Is that the driving factor? It would help to have some framework for this.
C
Yeah,
I
know
sorry,
that's
a
that's
a
good
question,
I'm
probably
not
as
prepared
for
this
conversation
as
I
should
be,
but
yeah.
I
think
that's
that's
a
really
good
way.
I
think
that
that's
how
we
ended
it
last
time
is,
you
know-
and
we
talked
a
little
bit
about
this
at
the
last
metric
specification
meeting
is,
you
know,
is
open
metrics,
a
a
wire
format
that
is
digestible
by
the
consumer
and
it's
the
last
mile
delivery
of
some
sort
of
data
for
for
metrics
in
that
format.
C
Or
can
it
just
be
used
as
the
wire
format
between
the
the
open,
telemetry
componentry,
the
open,
telemetry
instrumentation
and
say
the
open
telemetry
collector?
Is
it
a
replacement
for
that?
It
was
kind
of
the
open
question?
Could
open
telemetry
take
the
space
of
the
metrics
portion
of
the
otlp
right
now,
maybe
even
more
so
there's
some
more
open
standards
for
the
otlp
to
be
replaced
with
say
on
the
tracing
side
of
things,
but
that
maybe
we
don't
focus
too
much
on
that
side
of
that
question.
C
Just
the
open
metric
side
of
things
and
replacement
for
the
metrics
portion
and
from
the
last
open
our
our
last
meeting
last
week,
it
sounded
like
that
may
not
be
a
possibility,
and
maybe
we
can
kind
of
discuss
that.
I
think
that
specific
idea
that
we
replace
our
version
of
metrics
structure
in
in
the
protobuf
that
we
currently
have
with
the
openmetrics
structures.
H
Like
one
of
the
every
other
day
problem,
is
you
know,
a
lot
of
people
will
never
be
moving
off
of
you
know
prometheus
so,
and
they
want
to
like
in
the
at
least
in
the
collection
export,
but
they
want
to
be
able
to
rely
on
one
thing
that
actually
works
for
all
and
like
when
they're
looking
at
open,
telemetry
right
now
they
see
the
prometheus
support,
so
they
assume
that
it's
going
to
work
and
it's
going
to
work
well,
and
I
think,
that's
kind
of
also
creating
some
confusion
like
we
have
to
like
have
a
decision.
H
How
much
of
support
there
will
be
to
kind
of
like
avoid
that
you
know
expectation
from
the
project,
and
you
know
like
all
these
conversations,
that
we
started
around
scraping
like
how
to
you
know
the
prometus,
the
remote
right
rider
and
everything
like
we
have
to
just
kind
of
like,
I
think,
have
a
some
sort
of
like
a
goal.
What
is
what
we
want
to
do
with
prometheus,
because
it's
kind
of
given
the
wrong
impression
right
now.
I
think.
A
Yeah
I
refined
that
question
one
one.
So
I
think
openmetrics
is
a
great
protocol
for
text.
Human-Readable,
pull-based
metrics
full
stop!
That's
what
I
think.
I'm
questioning
whether
the
openmetrics
protocol,
as
in
protobuf
format,
not
about
text
but
about
like
we're
sending
grpc
right
now,
is
suitable
as
a
push
export
protocol,
especially
for
the
collector
case.
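For reference, the text exposition format being praised here looks like this. This is a hand-written sample, not taken from any real target:

```text
# TYPE http_requests counter
# HELP http_requests Total HTTP requests served.
http_requests_total{method="GET",code="200"} 1027
http_requests_total{method="GET",code="500"} 3
# TYPE queue_depth gauge
queue_depth{queue="ingest"} 42
# TYPE build info
build_info{version="1.2.3",revision="abc123"} 1
# EOF
```

Note the OpenMetrics-specific details: the `_total` suffix on counter samples, the info type that the group maps to OpenTelemetry resources, and the mandatory `# EOF` trailer.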
A
Whereas
we
know
prometheus
uses
the
remote
write,
export
protocol
and
and
that's
the
confusion
that
jana
is
referring
to
really
is
that
we
don't
see
how
to
use
open
metrics
for
pushing
and
aggregating
metrics
data
on
an
ingest
path.
We
see
how
to
use
it
as
the
last
mile
and
from
because
we
want
to
make
sure
that
open
telemetry
can
push
deltas,
which
prometheus
is
never
going
to
accept.
A
We
can
configure
open,
telemetry
libraries
to
be
prometheus
compatible
and
we've
definitely
stated
that
as
the
default,
but
we
want
to
make
sure
that
you
can
configure
your
otel
libraries
to
output
deltas,
meaning
you're,
going
to
have
this
data.
That's
not
compatible
with
prometheus
and
that's.
What
yana
is
really
also
referring
to
is
that
we
don't
know
how
to
handle
data
in
the
collector,
because
these
are
we're
expecting
to
have
scalable
pools
of
collectors.
Running
and
and
prometheus
is
hard
to
make
work
in
an
open,
metrics
world
like
that,
I.
G
I don't think that those goals are incompatible, because, on one hand, whether you are push or pull, OpenMetrics works for both, because that's how this protocol for text is designed. And separately, if you want to push deltas, that is something else you can do as well. Those are both allowed, because OpenMetrics only prescribes what happens in OpenMetrics.
A
And to me, again, that makes a lot of sense as a last mile. As a pull-based protocol you don't need deltas; you shouldn't have them, they're meaningless. But as soon as we start pushing, we can't use OpenMetrics without adding deltas. And I hear the pushback: this is hard to implement, we've never done this, Prometheus says you can't do that. But you can't ignore the statsd world, and you can't ignore Datadog's dominance of metrics in the commercial industry.
A
And there's also an open PR about idempotency keys. This is a standard concept: preventing replays is all you need to do, and there are large-scale systems in the world that support deltas. This is a solved problem. One thing you could do is look at the Google Monarch paper.
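The replay-prevention idea behind idempotency keys can be sketched in a few lines. This shape is hypothetical and the PR mentioned here may define it differently: each delta batch carries a key, and the ingester applies a batch only if that key hasn't been seen before.

```python
class DedupingIngester:
    """Accepts delta batches at-least-once but applies them exactly once,
    using an idempotency key per batch."""

    def __init__(self):
        self.totals = {}
        self._seen = set()

    def ingest(self, batch_key: str, deltas: dict) -> bool:
        if batch_key in self._seen:
            return False            # replayed send: drop it
        self._seen.add(batch_key)
        for series, value in deltas.items():
            self.totals[series] = self.totals.get(series, 0.0) + value
        return True

ing = DedupingIngester()
ing.ingest("batch-1", {"http_requests_total": 5})
ing.ingest("batch-1", {"http_requests_total": 5})   # retried send, ignored
```

This is why deltas plus retries don't have to double-count: the sender can retransmit freely and the receiver stays correct.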
D
So
hold
on
why
we,
I
I'm
still
a
little
bit
cut
like,
is
if
okay,
so
that
argument
makes
sense
if,
if
if
what
we're
talking
about
here
is
like
otlp
having
nothing
but
open
metrics
for
every
piece
of
the
for
every
piece
of
communication,
but
is
is,
is
that
bullish
like
is
that?
Is
that
something
that
I
guess
I
mean,
and
even
so
like
it
could
be
used
to
to
encode
like
there's,
no
reason
why
it
couldn't
be
used,
like,
I
suppose,
with
different
semantics,
but
use
the
same
protocol.
D
Possibly
like
is
this:
where
do
you
want
to
draw
the
line
on
like,
I
don't
think
it?
I
don't
think
it's
useful
to
to
like
talk
about
push
and
pull,
and,
and
all
that,
like
you
know,
I
think
that
we
should
be
just
talking
about.
Where
does
how
does
this
impact
the
open
telemetry
project
and
and
its
desire
to
support
open
metrics.
A
But I think, as far as the ecosystem we're building for pushing and gathering telemetry data, which includes trace data and log data, there's a pretty strong reason just to continue using OTLP. That would say that we're going to input OpenMetrics data, but we're hoping that platforms that consume metrics data will take OTLP in, and later we can talk about support for deltas, because users with higher cardinality would prefer to send deltas.
E
We are mirroring a lot of discussions with Google back in the day within OpenMetrics, and Sunir especially was pushing hard for push, which is why push is, in my opinion, a first-class citizen within OpenMetrics.
E
We
did
have
extensive
discussions
around
around,
especially
for
for
high
cardinality
settings
to
simply
allow
dropping
certain
counters
as
in
like.
Currently,
I
don't
even
think
we
have
a
must.
We
have
a
should
that
they
should
not
go
away,
but
I
do
not
think
that
we
have
a
must
in
there
which
enables
cardinality
cuts
away
from
all
the
state
problems.
A
It just happens to be reset every single time. Would that work in a Prometheus system? I think I've asked you that in the past, Richie, and you said "I don't think so". If Prometheus servers are going to be upgraded to do true OpenMetrics support, then deltas will be assumed to be supported, because you can reset your start time on every single report and it should just work. So maybe you do support deltas. I've asked my team, where the server came out of Google, about this degeneracy case.
A
Deltas
and
keyless
may
look
the
same,
but
very
often
in
your
ngspats.
You
need
to
know
what
the
user
is
trying
to
do
like
you're
going
to
have
to
do
like
a
read
before
a
write
in
order
to
decide
whether
it's
correct.
If
it's
one
of
these
reset
every
time
cumulatives,
I
don't
think
you
truly
want
reset
every
time.
Cumulatives,
that's
why
we
explicitly
declare
what's
a
delta
or
a
cumulative.
It
says
my
intention
is
to
not
reset
this.
E
Sorry, I'm looking at the spec as of right now: you may reset to zero, and you must update the timestamp. That's part of my point, that OpenMetrics is pretty malleable about what you can transmit. But you will make certain ingesters explode, and will have a lot of unhappy users, if you basically shove deltas into an absolute-counter system, basically building an event-logging system within a metrics engine.
E
That's a conversation which you will be having with your customers, and I will be very happy not to be part of that conversation. But from the raw point of view of the wire format...
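The reset rule just quoted ("you may reset to zero, and you must update the timestamp") is why consumers compute counter increases reset-aware. A minimal sketch of that logic, Prometheus-style but simplified (real implementations also use the timestamps, not just the values):

```python
def total_increase(samples):
    """Total increase of a cumulative counter over a series of scraped
    values, treating any decrease as a counter reset to zero."""
    increase = 0.0
    prev = None
    for value in samples:
        if prev is None:
            prev = value
            continue
        if value < prev:
            # Counter reset: the new value IS the increase since the reset.
            increase += value
        else:
            increase += value - prev
        prev = value
    return increase

total_increase([0, 5, 7, 2, 4])   # 5 + 2 + 2 + 2 = 11.0
```

The flip side, as E notes, is that a delta producer encoding each report as a reset-to-zero looks to such a consumer exactly like a pathologically resetting cumulative, which is what makes ingesters unhappy.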
A
That's a decision we haven't made; we've definitely set the default to be cumulative, which means that Prometheus will be supported. The reason we had to do that is that, if you're sending OTLP to a collector in a scalable pool of collectors, and that collector is then going to export Prometheus remote write or some other form, we need to be cumulative. So we definitely set that as the default behavior. But if you want to be stateless, and you have a system that supports it, and there are systems that support it, you configure stateless deltas.
A
So maybe you've dropped something; you could use sequence counters. But you know, as with tracing, one of the OpenTelemetry goals was to separate the API from the SDK. We separated the API from the SDK so that someone can come in with a completely new idea and write an SDK that does it. So if you had, say, a development-time-only SDK that just wrote a log of all your events, that's one way to do this, and that SDK has no state in it; it just writes to a log.
E
I was just trying to explain, from the Prometheus point of view, why the idea of losing data was basically made to go away with absolutes, except for when you have a reset and you don't scrape at the right time. But that's a lot less than just dropping stuff under overload or something. Especially during load spikes you would like to see what happened, and that's precisely when you start losing your deltas.
G
May I suggest we step back a second from design decisions and trade-offs? If a user chooses that they want OpenMetrics exposition, is the statefulness that is required for deltas to be accumulated a problem?
C
As
I
understand
it,
currently,
no,
in
fact,
that
was
a
big
part
of
our
design,
was
to
make
sure
that
was
compatible
with
that
format.
A
big
part
of
the
design
was
also,
you
know.
I
think
a
large
part
of
this
discussion
is
the
fact
that
we
have
a
lot
of
different
users
that
want
to
handle
this
in
different
ways,
and
so
one
of
the
ways
that
you
just
described
is
something
that
we
tried
to
design
for.
So
I
think
that
is
compatible.
G
Okay,
well
that
that
sounds
good,
because
then
it
sounds.
Yeah
may
need
some
minor
details
like
hey.
How
do
we
map
the
different
types
of
gauge
encounters
and
so
on
seems
to
be
more
to
focus
and
hey?
Does
an
info
type
need
to
be
added
like
resources,
as
you
call
them,
I
think,
and
how
that
mapping
works
is
probably
more
the
question,
because
if
a
user
says
hey,
I'm
okay
with
these
trade-offs,
then
they're,
okay,
with
those
trade-offs.
A
Yeah, I think I agree on that. And we have been having a lot of discussion about histograms and sketches, if that's something we could talk about. The one irritating outstanding problem we have, coming from OpenCensus versus Prometheus, is the use of less-than-or-equal in your bucket boundaries versus greater-than-or-equal. We've talked about it, and I wish Bogdan were on this call, because I wasn't involved.
G
Yeah, that's also something we discussed in OpenMetrics. I don't think it's practical to solve; I think you kind of just have to live with what happens to equality at that exact point.
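The boundary disagreement is easy to see concretely. A small sketch of the two conventions (upper-inclusive `le` buckets as in Prometheus/OpenMetrics versus upper-exclusive, lower-inclusive bounds as in the OpenCensus lineage) shows that only observations landing exactly on a boundary are assigned differently:

```python
import bisect

bounds = [0.1, 0.5, 1.0]   # bucket boundaries; the final bucket is +Inf

def bucket_le(x):
    """Upper-inclusive: x goes in the first bucket with x <= bound."""
    return bisect.bisect_left(bounds, x)

def bucket_lt(x):
    """Upper-exclusive: x goes in the first bucket with x < bound."""
    return bisect.bisect_right(bounds, x)

bucket_le(0.5)   # -> 1  (counted in the le="0.5" bucket)
bucket_lt(0.5)   # -> 2  (counted in the next bucket up)
bucket_le(0.3)   # -> 1  (non-boundary values agree under both conventions)
```

As G says, this is usually something you live with: for real-valued measurements, exact boundary hits are rare, but it does make bucket counts from the two conventions formally incomparable.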
A
And then we've been discussing circllhist, DDSketch, and so on: various different implementations of the histogram. I know that there are some choices we could make that would just not work in a Prometheus world, and that's why circllhist is my favorite choice. But it's not clear that we can just make one choice, so we've got a protocol under development, and I know UK is on the call, that is very similar to the OpenMetrics one, but it's got one extra field to support circllhist. We could talk about that now, if he's still on the call.
G
I don't know what a circllhist is, but I think this is the case for making it user-configurable, because you basically have two options for how you convert it, as I see it: either you push it out as a quantile and just calculate it, or you put it in histogram buckets, which does require them to be fixed in advance. So those are the choices, and I think that has to be user-configured, realistically.
D
Is
there
any
reason
why
the
prometheus
client
library
can't
just
be
used
for
aggregation
here
like
I'd?
It
sounds
like,
like
I
mean
all,
this
would
kind
of
not
be
really
need
to
be
discussed
if,
if,
if
you
just
use
the
default
implementation
of
the
prometheus
client
library
as
an
aggregator
in
process,.
A
We've been saying all along that our defaults should be Prometheus-friendly out of the box, and we haven't decided on histograms yet, because there's a strong push to get variable boundaries. But we can make variable boundaries that are aligned on decimals that humans can read very easily; that's what circllhist is. But there's intellectual property, and I can't necessarily push it, so there's a problem there. We'd like to say the defaults are Prometheus-friendly, but we'd also like to say the defaults are variable boundaries.
G
In OpenMetrics today, there's not much you can do: basically ten buckets, choose wisely. However, for future versions of OpenMetrics we are looking at ways to do this better, to have certain set boundaries and handle them much more efficiently. So it's practical, but this is a hard problem, as you saw yourself; there are many different approaches.
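The "ten buckets, choose wisely" model can be sketched in a few lines. This is a toy illustration in plain Python, not any real client library's API:

```python
import bisect

class FixedBucketHistogram:
    """Toy fixed-boundary histogram: the boundaries must be chosen up
    front, which is exactly the pain point discussed above."""

    def __init__(self, boundaries):
        self.boundaries = sorted(boundaries)
        # One count per bucket, plus a final +Inf overflow bucket.
        self.counts = [0] * (len(self.boundaries) + 1)
        self.sum = 0.0

    def observe(self, value):
        # Prometheus buckets use "le" (less-or-equal) semantics, so a
        # value equal to a boundary lands in that boundary's bucket.
        self.counts[bisect.bisect_left(self.boundaries, value)] += 1
        self.sum += value
```

Any observation above the largest boundary falls into the overflow bucket, so resolution beyond the chosen boundaries is lost forever; that is why the boundaries have to be chosen wisely.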
A
Yeah, and I think what we've been imagining is a case where we may come up with a default that uses fixed boundaries and is relatively fine-grained, and then interpolates back down to a relatively coarse number of fixed boundaries. There are still new proposals being pushed: Google's got something I personally find exciting there, and there's this notion of UDDSketch that was mentioned last week, also really exciting. So there's room for improvement.
A
As well, yes. And I think that's the requirement that we're going to put before any of this code gets merged: that we can demonstrate a collector pipeline with an exporter for Prometheus that re-buckets, because that's probably going to be necessary.
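A re-bucketing step like the one described here, merging fine-grained buckets down to coarser ones, might look like this. It's a minimal sketch, assuming every coarse boundary is also a fine boundary so no interpolation is needed; it is not actual collector code:

```python
def rebucket(fine_bounds, fine_counts, coarse_bounds):
    """Collapse per-bucket counts over fine_bounds into coarse_bounds.

    fine_bounds: sorted upper boundaries; fine_counts has one extra
    trailing entry for the +Inf overflow bucket. Every coarse boundary
    must also appear among the fine boundaries.
    """
    assert set(coarse_bounds) <= set(fine_bounds)
    coarse_counts = [0] * (len(coarse_bounds) + 1)
    i = 0
    for bound, count in zip(fine_bounds + [float("inf")], fine_counts):
        # Advance to the first coarse bucket that can hold this bound.
        while i < len(coarse_bounds) and bound > coarse_bounds[i]:
            i += 1
        coarse_counts[i] += count
    return coarse_counts
```

When the coarse boundaries are not a subset of the fine ones, the exporter would have to interpolate within a bucket, which loses accuracy; keeping the fine default aligned with common coarse boundaries avoids that.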
I
Just to note, we actually do have new histogram work being done. It's not in the OpenMetrics 1.0 spec, but it will be in the next release.
E
Basically efficient, high-resolution histograms. Of course, this is one of the usual pain points in Prometheus land, and we are fully aware of it. Björn Rabenstein, whom most of you on this call probably know, is leading this. He hasn't had time to finish the design doc yet; I did see it early, and it was super interesting.
E
I can ask him if he's willing to share a preprint, as it were, because that is basically what Prometheus, and perhaps Cortex, Thanos and OpenMetrics, will implement anyway, because this is a pain point within the wider ecosystem.
G
Yeah, but that's still a little bit off. Anyway, we still have OpenMetrics 1.0, which doesn't have any of this. But from a Prometheus client library standpoint, this is how a summary is implemented, and it comes out like that. Right now I have a pull request switching things over in Java to whatever the standard one is, I've forgotten the name, but that's just an implementation detail, and on the wire it just ends up being a quantile.
A
Yeah. So, for example, in the OTel SDK we don't know how yet; we don't know how to decide whether you should output summaries or histograms. We need to choose a default, and then, OpenCensus called this "views", you should be able to choose on a per-instrument basis, and we don't have that yet either. This is really one of the big unsolved questions for the OTel SDK: we just don't know how to set up that choice.
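The per-instrument choice described here, which OpenCensus called "views", can be as simple as a lookup from instrument name to aggregation. The names below are invented for illustration and are not the OTel SDK API:

```python
# Hypothetical view registry: pick an aggregation per instrument,
# falling back to a single default when nothing is configured.
DEFAULT_AGGREGATION = "explicit_bucket_histogram"

VIEWS = {
    "http.server.duration": "explicit_bucket_histogram",
    "queue.depth": "last_value",
}

def aggregation_for(instrument_name):
    """Return the configured aggregation for an instrument."""
    return VIEWS.get(instrument_name, DEFAULT_AGGREGATION)
```

The open question in the discussion is precisely what this registry's default should be and how users express the per-instrument overrides.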
C
Maybe. So yeah, I think there are a lot of different opinions on that one, especially with the sketch histogram stuff. But I am kind of interested, Rich: you talked about this next revision of OpenMetrics including some sort of high-resolution histograms, and maybe a draft design doc; I know Josh has said that uk is on the call. It'd be cool if we could get a copy of that and run it by some of the people who have been thinking a lot about our sketch implementation in OTLP.
C
To make sure that we have, again, that compatibility going forward would be really ideal.
E
I was just trying to find him in the internal Slack, to poke him about whether he's around and when he thinks he can share something, just to speed things up on that level, which is why I spaced out. So could you repeat the last half? I was basically looking for an answer preemptively.
D
Sorry, go ahead. I do want to ask, though: if all the raw events are going from OpenTelemetry to the OpenTelemetry collector, it still feels like this is an OpenTelemetry OTLP thing, not something for when you choose to export to Prometheus; I don't think you need to feed all the raw data into whatever aggregation protocol.
A
So that's why, in the metrics SDK, we formalized this design with an accumulator and a processor, where, you know, the accumulator is fairly standard and high-performance, and the processor is where you decide whether to be delta or cumulative, for example. And then an exporter is just dumb: "convert me into protocol bytes". That's why the configuration discussion is happening for OTel, not for the exporters.
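That split of responsibilities might be sketched like this; a hypothetical illustration of the division of labor, not the real SDK interfaces:

```python
class Accumulator:
    """Records raw measurements cheaply for the current interval."""

    def __init__(self):
        self._current = 0

    def record(self, value):
        self._current += value

    def collect(self):
        # Hand the interval's total to the processor and reset.
        delta, self._current = self._current, 0
        return delta


class Processor:
    """Decides temporality: pass deltas through, or keep a running
    total to report cumulative values."""

    def __init__(self, temporality="cumulative"):
        self.temporality = temporality
        self._total = 0

    def process(self, delta):
        if self.temporality == "delta":
            return delta
        self._total += delta
        return self._total


def export(value):
    # The "dumb" exporter: just turn the processed value into bytes.
    return str(value).encode()
```

The exporter never sees raw events, which is why the delta-versus-cumulative choice belongs to the processor's configuration rather than to each exporter.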
D
Yeah, but, like a circllhist, all of the HDR histogram projects out there have many little design decisions around certain bucketing things.
A
Same. And I guess our goal is to let you choose whichever histogram is the most sensible for you, and if the Prometheus/OpenMetrics project thinks we should standardize on that implementation, I'm all ears. I have no particular, I mean, I don't think anyone here is particularly wed to any of these; we just want highly compressed, variable-boundary histograms.
G
Yeah. From the OpenMetrics standpoint, it doesn't matter what you have under the cover that's producing a quantile, whether it's HDR or sketches or circllhist or whatnot. The only thing the spec says is that what you don't want is a histogram since the application started, because that could have been months ago, and it's of no use for ongoing real-time monitoring. So it suggests, hey, the data should represent the last, say, 5, 10, 15 minutes. That's the only constraint.
A
This is why, again, we have cumulative and delta. If you have a cumulative histogram, it's going to be counts since the beginning of time, and you, the metric system, can difference two time points to decide what the change was in between. And then the idea of a delta histogram is that we're going to reset every interval and flush, much like statsd would. And there has been an open discussion about whether there's some intermediate time range, and whether we call that delta, cumulative, or something in between.
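The differencing step for cumulative data is simple; a minimal sketch of what a backend does with two scrapes of cumulative per-bucket counts, assuming no counter reset happened in between:

```python
def histogram_delta(buckets_t0, buckets_t1):
    """Per-bucket change between two scrapes of cumulative counts.

    Cumulative counts only ever grow, so the element-wise difference
    of two scrape points gives the activity in that interval.
    """
    return [b1 - b0 for b0, b1 in zip(buckets_t0, buckets_t1)]
```

This merge-friendliness is the argument made below for histograms over summaries: bucket counts from two time points, or two processes, can simply be differenced or summed, whereas precomputed quantiles cannot.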
D
Just FYI, the end users are going to suffer here. Even at that point we talked about just then, about the window for quantiles and the separation, it's going to be fundamentally a very different experience: the same application code, different libraries.
A
For summaries, right. Okay, I think you're right about summaries, because the time window is vague, so we should probably, you're right, let Prometheus do summaries if that's what you want. But my understanding going into this was that, widely, the idea was that summaries are not desired and histograms are, because they can be merged. So I don't know why we're recommending summaries or even talking about them.
C
So hold on, can I pause everyone here? We're at the hour, and I'm already not going to be respectful of people's time by continuing to talk, but I do want to give a little bit of a summary here.
C
I think it sounds like, cohesively, a drop-in replacement of OpenMetrics by OTLP within, you know, the OpenTelemetry space is probably not something that we're discussing anymore. It's more just the compatibility, and making sure that OpenTelemetry is going to be able to support OpenMetrics in that last mile of delivery.
C
Talking about conversions of histograms and details of histograms, and a lot of the naming discussions we still haven't even really dug into, with the explicit understanding that there are going to be caveats for users who use OpenTelemetry to create OpenMetrics, around making sure they have to set it up correctly to ensure the guarantees that are defined by the OpenMetrics specification.
C
We could discuss a little bit more of the details of the naming versus the histogram sorts of things, and I don't think that's as time-critical. So could that maybe be done asynchronously, or do we need another meeting to discuss these sorts of things? I'm looking for maybe some group understanding on that one.
G
I think we can get a smaller group together; I'd be happy to discuss it.
C
Okay, cool. So maybe I can reach out to Brian or Rich after this, and we can maybe address some of these things, I'm guessing in a subsequent OTel meeting, or maybe something that works better for you guys; we can coordinate on that. OTel people, does that sound good as well?
C
Okay, cool. Sorry, everyone, for taking all the time, and Andrew, sorry about missing your time box. But I think this has been a fantastic, fantastic discussion, so I'm really happy that we were able to have it.