From YouTube: 2021-03-30 meeting
E: Hey everybody, sorry, the specification ran long, so we're delaying everything for a little bit, another minute or two.
E: It's a great question, right, unanswerable. So I think we're going to be missing people: I think Josh MacDonald is on vacation and Bogdan will be missing, which just means some of the PRs are in progress. I got information on them for us to discuss some of the PRs in progress. I might not be able to walk into all of it, so I just want to qualify the goal: this meeting right now is to figure out what needs to be done before we can mark the metric data model stable.
E: So we've had a lot of good discussions over the past month, and I think we made a few good decisions, a few changes, but there's still a decent bit of work to do. So first off, just as an FYI: in terms of writing the specification on the data model, there's still the open PR around the single-writer principle. That has a lot of good feedback.
E: A lot of good comments. Not sure if anyone wants to talk about it here, but we can. Next up, there's a section from Josh MacDonald's original sketch, around temporal alignment and what that means, that needs to get written, and if anybody wants to help with any of this, please let me know, let Josh know: take a piece, write some spec. Getting what we do written down is the primary goal here, and the focus of this specification is not what users will see or what backends will see; it's kind of what's allowable in the middle. Okay, all right, with that I want to get into the meat of the topic here, except... because there's a hard stop here.
F: It would be a follow-on, and the only reason we're trying to get it in as early as we can is so we all have a brief idea of how we'd like to go about implementing it, because obviously we don't want to block the initial... you know, everything getting wrapped up here in the coming weeks. But I'd like to at least understand how we can go about doing this, and then the teams within F5 can start working on the OTEP, in parallel with members of the community. So as long as we have buy-in from everyone in the community, and there are stakeholders that would like to be involved, all we need to do is identify who those are, and the broader community adoption. That's it. Cool.
E: Well, I'll say I'm interested. Anyone else? Anyone else? So you have a multivariate time series OTEP here, which is great; I highly recommend everyone read that. What do you think next steps would be, given that discussion? So you're looking for buy-in of, like, do we want to go this route?
F: Yes, and just to be clear, that OTEP is not finished; it just has the motivations behind it. So the idea was, one, to make sure that the community's bought in, so that it's worth finishing the OTEP, and two, to get the right minds in place that are willing to contribute to the OTEP and review it as it's in progress.
F: So that would be the next steps. And this is all Laurent; Laurent actually heads up a team within our AI and analytics division at F5.
E: This thing here... all right, cool, cool, cool. So I would say, in terms of buy-in, I've heard a lot of excitement around this idea. Is anyone opposed to this? So you said the motivation explanation is still a work in progress, but this is ready to read?
E: Okay, I will follow up with the folks who aren't here on buy-in, and I would encourage everyone here to take a look at that motivation. And maybe we can all... is there an emoji we can add to this somewhere, to let you know that you have buy-in? Let's see. If you want, we can create an issue on the spec. Yeah, bracket there, yeah. Why don't you... so we'll add that here.
E: Okay, cool, all right. Now, do you want to talk about anything more in detail? I'd like to focus this meeting on trying to get to stability, so...
F: I would focus on stability. That was it; I just wanted to make sure that if people have questions, Laurent was here, because he's the brains of this operation. That's it. So as long as we get that issue created, and then we get the thumbs up from the community, then we can talk about this a little bit further after things are a little bit more solidified. Yeah, I'm super excited. Okay, awesome, cool, cool.
E: Okay, so yeah. And you know, folks, feel free to add agenda items; this is meant to be the last 10 minutes. Let's throw that down there. Did I... yeah, cool. So let's talk about blockers for stability.
E: One of the things that was opened over the course of this week: a lot of the decisions we've made around consistency and, you know, metric naming and stuff have actually led to performance issues. So Tigran ran a benchmark in Go of just serializing metrics, and, as we knew when we were discussing this before, Go is terrible at oneof, and there could be other things hiding in here, but this is actually a significant differential. For those of you who... so, you know how to read this.
E: So, like, for histogram it's kind of less significant, but for, like, raw... I think those are, what, sum data points, int64? But a pretty significant performance regression in terms of usage.
E: What I wanted to ask for help on here is: I think we need to look into this pretty deeply and convince ourselves, one way or another, that we're okay with the state of the metrics protocol buffer that we've changed to; that we think we can fix this performance issue in Go; or that, you know, it's kind of isolated to Go and it's not just an endemic issue.
E: So my proposed tasks here would be to try to get someone to write a protocol buffer benchmark not in Go, say Java, just to see what it looks like there, one that basically generates similar data. And yeah, it could be Java, Python, whatever; it doesn't matter. But let's see if we see a similar performance differential there, and then, if we can isolate it as a Go-related protocol buffer implementation issue, we need some way to convince ourselves we can fix this performance issue, or we have to go back and undo some of the changes we made, likely... because we can't, I should say, we need to go back to the drawing board of discussion around performance and what performance we find acceptable with this protocol, right?
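A minimal sketch of what such a cross-language benchmark harness could look like, in Python for illustration: it generates synthetic data points shaped loosely like OTLP number data points and times candidate serializations. The json and pickle encoders here are hypothetical stand-ins; a real benchmark would serialize the actual OTLP protobuf messages.

```python
import json
import pickle
import random
import time

def make_points(n, use_int=True):
    """Generate n synthetic data points, loosely shaped like OTLP
    number data points (timestamps, attributes, a value)."""
    rnd = random.Random(42)
    return [
        {
            "start_time_unix_nano": 1_600_000_000_000_000_000,
            "time_unix_nano": 1_600_000_000_000_000_000 + i,
            "attributes": {"host": f"host-{i % 10}"},
            "value": rnd.randrange(10**6) if use_int else rnd.random(),
        }
        for i in range(n)
    ]

def bench(encode, points, repeat=20):
    """Return the best wall-clock time to serialize the points once."""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        encode(points)
        best = min(best, time.perf_counter() - t0)
    return best

if __name__ == "__main__":
    points = make_points(10_000)
    for name, enc in [("json", lambda p: json.dumps(p).encode()),
                      ("pickle", pickle.dumps)]:
        print(f"{name}: {bench(enc, points):.4f}s")
```

The same shape of harness (generate similar data, time serialization, keep the best of several runs) ports directly to Java or C# for the comparison being proposed.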
E: So I think, given the importance of the OpenTelemetry Collector and its language, you know, being Go, we can't ignore this regression. We need to find a way to address it, or convince ourselves it can be addressed, before we denote stability. Cool. Anyone have anything they want to chime in with there? Am I getting any of this stuff wrong?
H: Hey.
E: An update for C#? Okay, so please take Tigran's benchmark, try it out in C#, and see what you find. Cool. So then, I think we also need to start investigating what we can do here in Go now. I can follow up with, like, Bogdan and Tigran, because if I remember right, the last time we had this discussion, both Josh and Bogdan were talking about Go's oneof implementation being problematic.
E: All right, well... does anyone disagree that this is a blocker for release, that we need to figure out what performance looks like for the Collector?
F: So I know at F5 we were undergoing some of our own performance work here, related to some other aspects, and we were going to rely on the community piece for some of the other core aspects. So I don't... I don't think it's a blocker, but I know there are a lot of product decisions that are based off of the performance of the Collector: whether or not they go with it, what they implement inside of their own code versus what they rely on the Collector for, right?
E: Right, and at a minimum it has implications on architectures that are allowable with the Collector. Okay.
H: Is anyone... sorry, go ahead. Yeah, so I was reading the Prometheus thread, or the Prometheus documentation, and they mentioned, you know, that in Go there are basically different protocol buffer implementations, one of which was much faster. I don't know which one we use in Go; presumably the standard one? And if so, you know... we don't use the standard one, we use gogo, we use the one that's...
E: Yeah, that much I know. I mean, it's a good question and something that we need to investigate. I will follow up on that and try to get some better Go-related investigations going and see what we can do. Okay, but we agree that that's a blocker. So one of the things I wanted to do... do I have this...
I: Hey Josh... good question. So, for the performance part: is there a clear goal that we're trying to achieve, or is it still murky?
E: The fact that histograms only saw, like, a small performance bump makes it a little bit less concerning, but there is a question of what performance we need and, I think more importantly, can we optimize?
E: So I actually don't... where's this... I don't care if this number is this bad at 1.0, if we think we can actually optimize the Collector, with this same proto definition, to get back to a number equivalent to here, right, or like within an order... you know, within a few percentiles or whatever. That, to me, is cool. But you're right, there's a fuzzy requirement around performance.
E: Yeah, yeah. Let me write that down, two action items: clarify the performance goal, and I will follow up with Tigran on this.
A: ...better way there. Good question, related to this topic: do we have any comparison with some other protocol?
A: So, let's say that we use exactly the same data set and we try to test other protocols, in order to get a reference and a good baseline for...
A: I would recommend using Apache Arrow with Flight.
E: Sorry, what was that?
A: Apache Arrow, A-R-R-O-W, and the Flight protocol.
E: Cool, yeah. So I think, coming up with a... that's a great point: we could actually compare ourselves to other protocols to get an estimate of where it is. I think the reality is, to Riley's point of what's acceptable: from what I've seen, people want metrics and they don't want to pay anything for them, and when you have to pay anything at all, it's painful. Like, running Prometheus and sizing it appropriately is incredibly painful.
A: And to give you some perspective for F5, for example: we have devices that are producing something like a 200k multivariate matrix every 10 seconds.
E: Nice.
H: Okay, is there a previously known, you know, requests-per-second target that we're trying to achieve, or anything like that at all?
E: Well, there's what we already have, right? So I think the problem here is that we regressed from our previous protocol, and the question, I think, being asked by, you know, Tigran, who owns the Collector, is: were the changes that we made worth this performance regression? Like, did we gain anything from those changes, right?
E: Okay, cool. So I still think it's worth looking into. I agree with the comment that this is a soft blocker, something for us to resolve. Cool.
J: Josh, you mentioned that the Collector is still using gogo for its Go protobuf implementation. You know, on the Go SDK side we've swapped to Google's protobuf, and there's the OpenTelemetry proto Go repo that was constructed with the intent to be shared amongst implementations. So I wonder if testing against that would be worthwhile as well.
J: Well, this OpenTelemetry proto Go repo was constructed, like, in the last couple of weeks as well, so it probably hasn't been swapped over. Okay, I just put a little link in chat; it could be another alternative to explore if we're looking at whether it's something inherent to the use of oneof or if it's just the implementation.
E: Yeah. When you switched over, did you notice a performance differential for the SDK?
F
So
I'm
not
sure
we
did
no
just
to
clarify
there.
We
still
rely
on
gogo
1.32
in
the
collector,
but
we
also
have
references
to
going
protobuf
as
well.
So
I
don't
know
if
some
things
have
swapped
and
that's
in
the
core
collector
too,
so,
okay,
so
it
might
be
in
the
process
of
transitioning.
E: It might be. Okay, cool, that's a great point, thank you. All right: this comparison to other protocols. Does anyone feel like they can sign up to do any of this?
A: Yeah, at some point I will maybe be able to work on the Apache Arrow Flight one, but not immediately. Okay.
F: Let me get back to you on this. There are a few things coming up in the immediate pipeline, but let me see if I can move some things around, because I know this is rather important. It's not a hard blocker, but it's pretty important to a lot of things, so I might try to allocate some time to do it.
E: Okay, what I will do, yeah, what I will do is comment the discussion points here into the bug, right, so that it captures our discussion and kind of where we're thinking. And then, if you do have time to contribute to one of these things, please, you know, add it to the bug for the ongoing discussion. Sounds good, and thank you; whatever time you can give would be super helpful. I'm going to try to do something myself, and I'm also incredibly time-constrained right now.
E: So that makes everything fun. Let's call time there and move on to the next set of issues. So, if you look at the project, I restructured it slightly... oh man... right here, there we go. So I restructured it slightly: I actually added a performance kind of component to the project, for us to capture performance things.
E: What I would like to do, post this meeting, is go through all of these bugs and update the "required for GA" tag to mean "we want to resolve this prior to marking the metrics protocol stable". For all of the "allowed for GA" tags, or "after GA"... there's "after GA", "allowed for GA", and "required for GA".
E: So, in the scope of the metrics data model SIG, I was going to reuse those three labels to mean, you know, pre-declaring stability and then post-declaring stability, right? Where, if folks want to work on an "allowed for GA" thing while we're trying to get things marked stable, that's totally fine; "after GA" just means we're going to try to avoid spending...
E: ...you know, PR discussion time until after things are marked stable, if that's fair. So, in this list of kind of all the issues that are tagged as data model, I called out a few here that I want to talk through. Let's actually start with the easiest one, which is changing the OTLP exporter.
E: So this is a bug that I'd like to send over to Riley's... the API SIG, around OTLP exporters exporting cumulative metrics by default. So this is someone asking if, you know, in the spec for OpenTelemetry, right, when I have a gauge or a counter or whatever, there's this default aggregation semantic to it, and so there's a question here of: if I'm exporting delta counter values, right, from an SDK...
E: Yeah, yeah, okay. Yeah, sorry, man. Okay, so I'm just going to add you to that for now, and I'm going to remove this from data model. I'll do the cleanup.
E: Beautiful, beautiful, okay, cool. So that was the easy one. The next two: Bogdan raised these. The first one is around metric number data points' requirements for value. So we kind of talked about this in a bunch of our discussions around data points. This is: if I get a metric data stream where the first point is an integer and then the next point is a double...
E: ...do we consider this an error? We talked about, I think in a previous discussion, this notion of weak conformance, where, like, you know, ints can become doubles, and that's an acceptable transformation we're willing to allow, and that we were going to spec this out. And at the time we had agreed that we would just say that, you know, these are considered different streams, or, like, it's up to backends to figure out what to do with it.
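As a concrete illustration of that weak-conformance idea, a stream whose points mix int64 and double values can be normalized by widening the ints to doubles. This is a hypothetical sketch of that rule, not specified behavior:

```python
def normalize_stream(points):
    """Widen a mixed int/double metric stream to all-doubles.

    `points` is a list of (time_unix_nano, value) pairs. Int -> double
    widening is treated as the acceptable transformation; an all-int
    stream is left untouched, and doubles are never narrowed back.
    Illustrative only -- not specified data-model behavior.
    """
    if any(isinstance(v, float) for _, v in points):
        return [(t, float(v)) for t, v in points]
    return list(points)  # all-int stream stays as-is

# A stream that switches from int to double points:
print(normalize_stream([(1, 10), (2, 10.5), (3, 11)]))
# -> [(1, 10.0), (2, 10.5), (3, 11.0)]
```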
E: Prometheus converts it all to doubles. If you read the comment here from the Prometheus folks: OM, yeah, OpenMetrics considers it non-breaking, and it shouldn't happen frequently, but it's okay. So here's the OpenMetrics spec, if you want to take a look.
C
At
that,
I
just
wonder
if
the
value
of
offering
integers
has
been
established,
or
maybe
the
spec
could
just
remove
that
the
integer
type.
H: So one of the things that took me a while to fully grasp and understand is that the data points that we're providing here are not actually data points in the time series; these are all summarized data points. So each data point actually can represent a completely different summarization. In fact, in the data protocol, each data point comes with its own set of labels. So you really can't, if you are treating it as a standard time series...
E: It's not... again, this is the data model SIG, yeah, not the SDK. So this is, right, again, operations allowed in the Collector, which is kind of important. So if you look at the scope of the SIG, the scope of the SIG is: what does the SDK produce that we consider a metric stream, and what are the operations that can be performed on the metric stream. Which is, when you say it's a Collector issue...
E: ...that's one of the things we need to kind of scope out: what are acceptable operations to perform as data flows in your collection, you know, plane, and then how do we convert from a metric stream back into a time series for backends. We're going to have to specify a little bit of that. To some extent, backends need to do conversions based on what they allow, because some will allow more than others. At a minimum, we're going to be trying to provide that for Prometheus, and Prometheus...
E: ...is looking at time series; that's what the whole, you know, data model proposal document lays out, right? So we do need to specify this over time, and again, I don't want to block the actual protocol stability on this, but we do need to fully detail what aggregations are allowed and acceptable, and I think we have enough of them implemented in the Collector to kind of infer this.
E: And, you know, my inclination right now (I didn't want to say it ahead of time) is that, no, this won't change the protocol; it literally is just that we need to specify what the Collector does in this scenario. There's that open question of: is there value to having ints versus doubles? One is a, what do we call it, a less precise version of the other, right? There are things that we can represent in integers that we can't represent in doubles; that's just a reality of the data points.
D: So this is basically "it depends on the backend", right, whether this will break anything or not. I don't think the protocol should do anything here.
D
For
example,
if
I
instrument
a
metric
x
to
be
an
integer,
and
then
you
know
in
the
next
iteration
of
my
app,
if
I
instrument
the
same
metric
to
be
a
float,
a
protocol
should
just
not
do
anything
about
it.
It
might
it
might
something
might
change.
You
know
when
it
goes
to
a
specific
backend,
but
that
I
think
that's
very
back
and
specific.
E: Yeah, the only time where it matters, and again this is where it comes in, and this is why we keep saying it's a Collector issue (I want to write this down, by the way: is this a protocol issue, and why is there...), okay: where it matters is around this notion of operations we can perform within the protocol, like in flight. Again, we are viewing this data model as owning data from SDK generation all the way to the backend and anything in the middle.
E: So, if we specify, like, you know, "I want to remove labels and aggregate two streams together", and one of the streams has integers and one of them has floats, what do we do, right? So from that standpoint it's a data model issue. You're right, it's not necessarily... like, from an SDK standpoint you don't have to care, but from the Collector standpoint you do, and that's the thing we have to answer.
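To make that concrete, here is a hypothetical sketch of a Collector-style "drop a label and merge streams" operation when the colliding streams mix integer and float values, widening to float per the weak-conformance idea above. None of this is specified behavior; it just shows where the int/float question bites:

```python
from collections import defaultdict

def drop_label_and_merge(streams, label_to_drop):
    """Merge metric streams after removing one label.

    `streams` maps a frozenset of (label, value) pairs to a numeric
    total. Streams that collide once the label is dropped are summed;
    if any colliding value is a float, the merged value is a float.
    Illustrative only -- not specified Collector behavior.
    """
    merged = defaultdict(int)
    for labels, value in streams.items():
        kept = frozenset((k, v) for k, v in labels if k != label_to_drop)
        merged[kept] = merged[kept] + value  # int + float widens to float
    return dict(merged)

streams = {
    frozenset({("host", "a"), ("path", "/")}): 10,   # int-valued stream
    frozenset({("host", "b"), ("path", "/")}): 2.5,  # float-valued stream
}
print(drop_label_and_merge(streams, "host"))
# the two streams collapse into one float-valued stream for path="/"
```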
D
But
it
might
like
you
say
it
might
break
some
aggregations
and
stuff
like
that
on
the
asian
side
pipeline,
depending
on
you
know
how
it's,
how
the
metrics
are
being
processed
between
the
time
it's
submitted
and
before
it
gets
adjusted
to
the
back
end.
D
We
need
to
enforce
something
on
the
processor
side.
I
think
so
that
it
can.
It
can
be
a
little
adaptive
if
we
are
going
to
support
this
in
the
protocol.
E: Yeah, yeah, yeah. The tentative proposal that I would throw at this is that we focus specifically on int-to-double conversion and just, like, explicitly declare that as allowed in the specification for these aggregations.
E: That's where all hell starts to break loose, especially if you're using different kinds of histogram generations or, you know, representations.
E: Unless I hear differently from people, I'm going to mark this as not blocking making the protocol stable, but something that we do need to specify, and it will remain on the list; it will probably be, like, you know, the next thing we discuss post-stability. Does anyone feel strongly one way or another here?
E
So
this
one.
Basically,
the
question
is
when,
when
you
have
a
histogram
or
a
summary,
do
we
need
to
know
that
it's
monotonically
increasing
like?
Is
that
something
we
actually
need
to
specify
that
we
know
whether
or
not
something
is.
E
Monotonic
there's
a
lot
of
context
here.
If,
if
we
don't
feel
like
we
can
answer
this
question,
we
can
we
can
go
into
that
later.
But
let
me
pop
open-
where
is
this.
D: It's already existing, if I remember right, that it indicates if it's monotonic or not. So is the proposal whether to keep it or to remove it, or...?
E: Yeah, I think so. So again, I needed to look into the issue. Harold calls out correctly that this is gone now in the proto, because when we redid histograms, I think we changed the description of things, and I think sum is a bit different. So sum right now is the sum of...
E: ...changes. Okay, here's what I'll say for this one, because we're running out of time: we might need to follow up with Bogdan on what he means here, and then, if anyone has strong concerns about sums and histograms, please comment on this bug.
E
Otherwise
I
will
meet
with
him
individually
and
try
to
get
him
to
kind
of
elucidate
more.
What
he
wants
here,
given
the
to
do,
is
gone
and
see,
if
maybe
the
the
current
definition
of
histogram
post.
All
our
histogram
work
you
know
three
weeks
ago
is
is
acceptable
and
the
way
we
comment
everything
is
fine
or
if
we
need
to
actually
dive
into
this
more
because
we're
missing
documentation
on
how
to
use
histogram.
E: This is... sorry, I wish I could... again, I'm not the best at explaining this. This is around Prometheus compatibility. So in OpenMetrics there's an issue where the way they treat histograms and sums is one way, and OpenTelemetry was specified in a way where it's possible we can't export to Prometheus accurately. And so, if we want to claim that we have, you know, a hundred percent compatibility with Prometheus, whatever that means, right, for a given scenario...
E
Do
we
need
to
put
limits
on
what
some
means
in
histogram
right
require
it
to
be
monotonic
so
that
we're
compatible
with
prometheus,
always
so
any
histogram
we
have
can
always
be
translated
to
prometheus
and
there's
not
a
chance
that
we
can
create
a
construct,
a
histogram
that
could
be
a
histogram
in
prometheus,
but
we're
unable
to
do
so
because
of
this
some
incompatibility.
E
That's
that's
effectively
what
this
is
around.
So
sorry,
sorry,
if
I
didn't
kind
of
describe
that
well,
I
think
I
there's
some
of
these
decisions
that
are
super
super
nitty-gritty
and
kind
of
a
pain
in
the
butt
to
discuss
practically
because
again
like
what's
the
use
case.
Well,
there's
a
chance.
Somebody
fills
out
a
sum
in
a
way
that
we
can't
con.
You
know
go
into
prometheus
and
I
act
literally
can't
come
up
with
the
use
case
off
the
top
of
my
head.
There.
E
Yeah
I
let's
I'll,
follow
up
with
bogdan
on
this.
I
don't
think
we
have
the
right
people
here
to
discuss
it,
because
this
this
actually
could
be
documenting.
Whether
or
not
this
sum
is
allowed
or
required,
or
you
know
forcing
to
be
monotonic-
is
actually
could
change
the
stability
of
the
proto
okay.
E: Oh, no. So what Prometheus does is: if you provide a sum, it means that you have no negative values, right? And so that, I think, is the proposal: saying sum is monotonic when it exists. If you have negative values in your histogram, you cannot have a sum. And so the question here is: do we do that, to be Prometheus-compatible, or...?
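The Prometheus-compatibility rule being discussed there (a histogram may only carry a sum when none of its observations are negative, since only then is the sum monotonic over time) can be sketched as a small validation step. This is an illustrative sketch of the rule under discussion, not spec text:

```python
def histogram_sum_allowed(observations):
    """Return True if a histogram over these observations may carry a
    sum under the Prometheus/OpenMetrics-style rule discussed here:
    the sum must be monotonically increasing over time, which only
    holds when no observation is negative. Illustrative sketch only.
    """
    return all(value >= 0 for value in observations)

print(histogram_sum_allowed([1.5, 0.2, 3.0]))   # True: sum may be set
print(histogram_sum_allowed([1.5, -0.2, 3.0]))  # False: omit the sum
```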
E: Yeah, that, I think, is more the great question, and I literally... my brain is unable to come up with an example use case where I have, you know, a negative histogram and I care about the sum and I want to export to Prometheus. Like, I can't add all three of those together myself. If anybody has an idea or an example, let me know, but it's... anyway.
E: Okay, let's move on; that was way too long, I'm sorry about that one. Let's follow up with Bogdan on that, and let's leave this required for now.
E: Follow up offline. I actually like Anthony's... about exporting to Prometheus. All right, I'm going to comment on the bug with that as well, because I actually do really like that proposal. Okay, cool. Right, so that was our triage time for this, and the last 10 minutes is next steps on the project. So we have... well, I keep opening things weirdly.
E: Okay, we have a bunch of stuff to do in the sense of getting out a metrics data model spec that people can read, and understand what the hell we've built here in OpenTelemetry, and kind of do their thing. We have some parallel work going on around multivariate time series, which is awesome.
E: The real question is, in our next data model SIG, what should we focus on? For this one, we spent most of our time triaging. In terms of temporality, which I wanted to talk about: effectively, we have two clarifications on basically what start time and time look like. For both of these, I don't feel strongly enough that they need to be addressed prior to stability; I think they're kind of easily handled.
E: Okay, cool. So, in terms of next steps, I'm going to throw out a couple of proposals. We have follow-on histogram work.
E
We
need
to
continue
to
make
progress
with
making
sure
label
attribute
changes
go
in
with
the
performance
stuff,
I'm
hoping
that
the
next
data
model
sig.
We
come
with
more
performance
benchmarks.
Maybe
this
comparison
against
other
frameworks
and
we
kind
of
discuss
what
our
performance
needs
are.
I
think
that's
probably
the
best
use
of
our
time
is
let's
kind
of
discuss
what
we
think
a
good
bar
is
on
performance
with
getting
this
this
getting
this
protocol
out
the
door.
E
Then
the
the
yeah
types
and
schema
and
performance
are
kind
of
correlated,
because
this
led
to
this
being
a
problem.
The
other
bit
to
discuss.
There's
two
big
components
that
I
think
both
are
post
stability.
One
is
where
we
actually
start
to
specify
what
operations
are
allowed
on
metrics
in
flight.
So
this
would
be.
E
How
do
I
remove
labels
from
a
metric
in
the
collector?
This
would
be.
How
do
I
change
from
a
delta
to
a
cumulative
sum
right?
Those
kinds
of
operations
are
things
that
users
will
want
to
do.
That
would
be
like,
for
example,
if
I'm
pulling
in
statsd
metrics
and
converting
into
prometheus
right,
statsd
has
up
down.
You
know
just
counters
flung
at
you
and
prometheus
wants
this
monotonically
increasing
thing
so
like
we
cannot.
We
can
specify
that
out.
I
think
that's
follow-on
work,
but
I
think
that
pales
in
comparison
to
performance.
E: So this is, like: the thing that's reporting metrics, is it alive or not, and how do I unify that with the metrics themselves? In Prometheus, for example, you get this "up" metric because you're literally scraping an endpoint to get your metric values, and so when you try to scrape that endpoint, if you can't reach it, you can, you know, declare it as being not up. So there's this metric to denote whether or not the endpoint was available, because you have this remote connection. From an OpenTelemetry standpoint...
E: So, yeah, let's do this: let's publicly communicate end of April... which... how many days are in April? April 30. Target, let's say, end of April. I want to be a little bit more aggressive, honestly, when we look at the issues we have that are actually considered blockers, right. We need to get this string labels thing through that we had agreed on last week.
E
We
need
to
figure
out
performance,
benchmarking,
stuff
and
and
get
a
handle
on
performance
and
then
there's
the
question
of
the
thing
that
bogdan
raised
on
is
knowing
that
histogram
summary
is
monotonic.
Is
that
actually
important
right?
I
feel
like
this
is
going
to
be
relatively
simple
to
answer
it's
just
it's
more
of
a
discussion
between
us
and
open
metrics
and
making
sure
that
whatever
we
decide
to
do
they
consider
it
compatible
yeah.
Frankly,
this
you
know
we
agreed
on
what
to
do.
There's
a
pr
out.
E: In my opinion, of everything on this list, this is the thing that blocks stability. The problem with performance is, as you know, it's an open-ended question right now: we don't have our targets, we don't have our use cases, so adding a month would be good. I'm hoping we can get together and just agree on something reasonable next week that is achievable. Yeah, I mean, I would like to mark stability a lot sooner than the end of the month.