From YouTube: 2021-05-11 meeting
A: Like, I share my video in one meeting, and in Teams there's a feature where you can suspend the meeting: you can see all the conversations, but you're not getting the audio or video, and you can type in questions. So, for example, we have all-hands and I'm lurking there and I see people asking questions. I respond to some of the questions, and then I have a video sharing and I'm talking with another kind of website.
B: I hear you. All right, so I'm going to warn everyone: I didn't have a lot of time to prep some of the Prometheus compatibility discussions for today like I had wanted to, but I think we should still have a pretty good discussion here. If you have topics you want to bring up before we get into the meat, feel free to pop things here. Oh yeah, is this the first week I didn't list the — the zotep? It probably is.
B: It's actually super warm right now, but in the winter it can get super cold. We have big swings, so it'll hit 90 degrees and then the next day it'll be snowing. Yeah.
D: Wonderful. Yeah, you're inland there; here in Buffalo we've got the lake to kind of act as a buffer, so we don't get swings as big. Yeah, we're far enough away.
B: That we still get the snow, but we also get the swings, because it's like an hour-and-a-half drive to Lake Erie — maybe an hour. Anyway, let's get started; it's been five minutes, even though I think talking about the weather is more fun. Let's do this: we'll talk about triaging and blocking issues.
B: I think there are actually a few issues to triage, but first I wanted to go through blocking issues and their status. We actually still have the same three that we had before, but I think they're making some progress. So, like, safe label removal — documenting when it's okay to have label removal — Josh has a PR open around this; I just want to make sure folks have a chance to comment on it. I think tonight — I don't think I looked at it again after you made changes, so I will have to do that.
B: I think there's a discussion from Riley here for later that we can talk about — is that right, Riley? Yeah? "So do you want me to briefly talk about it now, or later?" Let's pop it on the stack right now, since it's a blocking issue, because I opened up your discussion here. Do you want to kick off your comments?
A: Yeah, so I think the point here made by the PR is that, by default, for those counters — whether it's the synchronous or asynchronous version, whether it's up-down or a monotonic counter — we want the default operation when we remove some dimensions to be the sum. I'm giving a counterexample, but what I'm trying to say is I want that to be discussed, so later we won't regret it. My take is that although there are some corner cases or counterexamples, I believe that's due to my lack of knowledge.
A: Tell me if that's the case. It seems Josh replied saying he sees the 80/20 as well, and I also talked with one guy at Microsoft who has been working on metrics forever — at least 15 years — and he told me he believed that in probably more than 90 percent of the cases, sum should be the default operation.
B: I also want to call out, for those folks familiar with Prometheus, that when we talk about the default in OpenTelemetry, we are not talking about PromQL across metrics. We are talking about — what are they called, rewrite rules, or the collection-rule thing — recording? What's the name for the rewrite rules? Yeah, recording. We're basically talking about rewrite rules here, and the things that you would do in a rewrite rule, and that's where I get my 80: it's during collection and ingestion.
B: What is the natural aggregation there, as opposed to when you're looking at things after they're at rest in a database, which is where I would probably have a different number?
E: So I have a slightly different attitude, and I want to just double down on the design we have: gauges are meant to be averaged, and these up-down counter data types are meant to be summed. One way to think about that is that we have a lot of confusion over this asynchronous form of cumulative sum, because it looks like a gauge — so what is the real difference there? I have two answers. One is that the delta form of an up-down counter is a real thing. Would you ever use a delta form of an up-down counter for this instrument? That should tell you whether you want to sum it or average it. If there's no application for you to use an up-down counter, then it's probably a gauge, and I think this voltage example is a gauge. You don't up-down count voltage; you don't add one electron at a time and change your voltage — that's not how this physical system works. So I think we should always fall back on the up-down counter delta interpretation.
E: The synchronous instrument tells you whether it's a count or a measurement — measurements are different than quantities. That was my first answer to the question. The second answer is that we haven't really pinned down the meaning of a resource in OpenTelemetry, and we are definitely trying to say now, at this point, what it means to remove an attribute. We've focused a lot on the application-level attributes that happen inside of your application code, inside of your instrumentation, because that's what OpenTelemetry is prescribing in its API.
E: So the point of this is that if you're talking about removing a resource attribute, it's somehow a different thing than trying to remove an application-level attribute, and the test I have for you is whether averaging is meaningful. Averaging is not meaningful on a subdivision of a sum, like you get with an up-down counter. So if I add a new dimension, I don't want to average my cumulative values, because I've added a new dimension — it's a completely different scale. So, trying to wrap this up:
E: The idea is that there's a test, essentially: can I average this metric with another of its similar type? The answer is going to be yes if there's a resource attribute involved, and yes if it's a gauge, but the answer is going to be no if it's an up-down counter. And so there's a rule, something like: if you ever want to average an up-down counter, or treat it like a gauge, you have to fix your dimensions, and that means either explicitly grouping by one of those dimensions or summing it away, because it's an up-down counter. I've finished my discussion point here, but I find it difficult to encode all of what I just said in a linear narrative. I think those are two explanations for what we have, and I think once we get to the point where resources are meaningful in OTel — and they're not yet — then we can answer this question. Because Ford and Toyota, those are resources; model and year, those are resources; and removing them has a different quality than removing the battery cell. If you were to say, this is a Ford and this is a Toyota, I have battery cells in this Ford and battery cells in this Toyota, and I ask "what's the average cell voltage?", you can tell me the average cell voltage, or you can tell me the individual cell voltages — those are on the same number scale. You don't add up voltages like that, even though we all know about the physics of batteries; you don't add those voltages. I'm finished, sorry. Thank you. Okay.
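The sum-versus-average rule Josh describes can be made concrete with a small sketch. This is illustrative only, not an OpenTelemetry API: points are plain dictionaries mapping attribute tuples to values, and the function names are made up.

```python
from collections import defaultdict

def drop_attribute(points, key, kind):
    """Re-aggregate a set of points after removing one attribute.

    points: dict mapping attribute tuples (sorted (k, v) pairs) to values.
    kind:   "updown" sums the collapsed series, because they are
            subdivisions of a total; "gauge" averages them, because
            they are measurements on a common scale.
    """
    groups = defaultdict(list)
    for attrs, value in points.items():
        remaining = tuple(kv for kv in attrs if kv[0] != key)
        groups[remaining].append(value)
    if kind == "updown":
        return {attrs: sum(vs) for attrs, vs in groups.items()}
    return {attrs: sum(vs) / len(vs) for attrs, vs in groups.items()}

# Battery example from the discussion: dropping "cell" from a gauge
# averages the voltages rather than adding them up.
voltages = {(("car", "ford"), ("cell", "1")): 3.7,
            (("car", "ford"), ("cell", "2")): 3.9}
averaged = drop_attribute(voltages, "cell", "gauge")
```

The same call with `kind="updown"` would sum the collapsed series instead, which is the behavior the PR proposes as the default for counters.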
A: So my suggestion is that we can agree on this: the spec is trying to make sure we have a default operation which should make sense for at least 80 percent of the scenarios, and as long as we allow people to configure things to a different behavior if they don't like the default, we should be fine. But we still believe this is a good thing and we want it to be the default.
E: I still sort of take issue with the 80 percent. We are trying to give these numbers meaning, and what we are doing is giving them very clear meaning, and what you're saying is that 80% of the time you can directly consume the numbers based on that meaning. I think we should actually go as far as recommending the reasonable ways to visualize this data in a graph. But just for now, you're saying "I'm trying to consume this data," and there are two ways, I think. One is: I am just literally consuming the meaning from the data directly. The other is: I'm applying a query, and I'm somehow changing my data. So if you want to sum the gauges of the battery, then you're querying — you are taking the meaning from individual cell voltage and telling it "I want to sum that up," which is a different operation than removing a label. So you're saying you want queries 20% of the time, and I'm saying good, I just want —
B: So, Josh, I want to call out — I think the most important thing I heard you say, that I didn't think about before, was that resource attributes might semantically be different than metric attributes, and the aggregation of a resource attribute may be something slightly different than the aggregation of a non-resource attribute. I think that's actually where that 80/20 comes from for me: how many times I have a rewrite rule that is related to resource aggregation versus non-resource aggregation, or like —
B: — where it's a weird resource-aggregation rule where it doesn't — you know, the meaning of the number is very clear within a resource. The question is: is it also that clear across resources, and is there a difference? I think that might be worth opening a bug and having a further discussion on what it means to remove, or to aggregate, across resources.
E: Although I think you can construct examples of the same type of conflict happening in application-level attributes. The example that I came up with for my team was: we've specced out, in the earlier draft of the semantic conventions, a metric called usage, and the usage is labeled by state — like memory usage: I've got free memory, I've got allocated memory, I've got heap memory, I've got stack.
E: That's the constant on your machine — how much memory you have. If you ever average the amount of memory and don't group by that state value, you're averaging across memory classes that are totally unrelated to each other. We specced out that they sum up, because we want there to be some sort of proportionality, or some sort of ratio function, that we can make out of that.
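The memory example can be made concrete with a quick sketch; the sample values are made up for illustration:

```python
# Hypothetical samples of a usage metric labeled by memory state, in bytes.
samples = {"free": 2_000_000_000, "used": 5_500_000_000, "cached": 500_000_000}

# Summing away the "state" attribute keeps a meaningful quantity:
# the machine's total memory.
total = sum(samples.values())

# Averaging across states mixes unrelated memory classes; the result
# describes nothing real about the machine.
meaningless = sum(samples.values()) / len(samples)

# A useful ratio only exists because the states sum to the total.
used_ratio = samples["used"] / total
```

This is why the semantic-convention draft being described has the states sum up: the sum is the total, and proportions fall out of it, while an average across states is not a quantity anyone would plot.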
E: But for the time being, I think you can say something very simple. It's always easy, because resources are attached from outside of the application state: they can be removed without semantic knowledge of what's in the metric. That means you can always simply erase a resource, whereas you can't simply erase an application-level attribute without changing the meaning.
B: You can simply erase the resource, but wouldn't you still have to aggregate the metrics if you have two metrics from two different resources? That's where we're saying the default aggregation is what you want. "I see — so the default aggregation applies to resources." Maybe it's worth us calling that out.
E: I got myself in trouble there; I want to back up just a little bit. It's always safe to remove a resource label as long as you don't conflict with another series. So if it's an extraneous resource label — a secondary resource label — you're guaranteed not to need to aggregate, and therefore it's safe to just remove it. So we can say something like: you can remove any label that's not identifying.
B: Well, okay, so the question here, though — you brought up two things. You brought up where I can remove job and instance, right, and then I need to aggregate the metric together. So this would be: I want to aggregate my overall notion of CPU usage, in some way, across my fleet.
B: That has an inherently different aggregation semantic, and so the question is: how often are we running into the scenario where this doesn't make sense? Now, it could be that it's okay for us — if somebody says "I want you to remove this label," and we do a sum, and it leads to meaningless data, that's just a user error. Okay, but how often will people do that? I guess that's the question Riley and I were trying to ask with that 80/20 thing, right?
E: The example of the memory state is still the one I like best, because if you erase that memory state and the default is to average — like, you get a distribution of memory sizes — you've erased the state. Now I can see there's memory here and there's memory here, but they were different; I can't erase that variable into a distribution without destroying its meaning. But I can sum it: it's still a meaningful quantity — the total amount of memory, regardless of state.
E: Josh — the big one is resources: do we have a distinction between a resource attribute in our model and an application-level attribute in our model, and can the processor see the difference? Like, right now, OTLP definitely has a different place to put resources versus application-level metrics; when we export to Prometheus, we're just going to flatten that out and give you a single list of attributes.
E: Is that okay? Because it means we can't round-trip, and there are some issues there that I think need to be opened up. Oddly, I think tracing hasn't really addressed this, and the thing that I think is happening from a data-model perspective is that in traces we think about essentially deltas all the time — a span is a count of one — so we never had to worry about this cumulative problem we have when we sum things up and try to average them. You have to do dimensional alignment — that's the word I've been using to talk about this with people. You can't average things if they have different dimensions, but you can add them, because presumably they're subdivisions of each other. That's the inherent way of looking at this.
B: I guess what I'm suggesting is: this is marked as a blocking issue, right? How much of that do we think we need to take care of — how much of it is a p0 versus a p1 versus a p2? I think we've actually —
B: Sounds good. So how about we open a bug around resources, everybody take a second glance at this and re-review it based on the discussion, and let's move on to the next blocking bug. Does that sound reasonable?
B: Thank you. Okay: start time for cumulative aggregation. I think this PR has enough approvals now to go in. We merged it, I think — the bug, not really.
B: This is the spec PR, not the proto PR — yeah, the proto PR is merged. Yeah.
B: Okay, yeah, that's something we're going to have to talk about when we get into the Prometheus compatibility stuff, so let's put that on hold. I think — yeah, the PR for the proto is not merged. Are you able to merge that? "I am; I'll just do that right now." Okay, that'd be great, because we have all the approvals for it; it should just go in.
B: Sum on histogram and summary — cannot allow negative measurements when sum exists. This actually has enough approvals to get merged as well. This was the decision we had — TL;DR, we decided on it in the data model spec and we never sent a PR, so I sent the PR. All it does is say that sum should only be filled out if —
B: — we measure non-negative discrete events; if you're measuring anything negative, don't fill out sum. And I opened a second bug for us to talk about in the future, about how to allow sum to exist when we have negative discrete-event recording, if that's something we want to support. So there's a bug for us to take care of with that; we can triage it later. But that's what this PR is. It has all the approvals, and we already discussed it in the SIG. If anyone has remaining concerns over it,
B: please comment — but it's based on the SIG discussion, and it needed to get in before we could actually finalize the protocol. I think this is it for getting 0.9 of OTLP out. Okay — any questions, any thoughts, any concerns?
E: The negative-values-and-sums thing left me feeling a little uneasy. Can we just sort of agree that the API will not allow negative measurements through histograms?
E: I know, I know — but I feel it's the only solution to this that's going to work, and I don't think it's that controversial. But it is certainly a question someone will ask at some point: "How come I can't do positives and negatives?" And we'll say something like "use two histograms and mix them up later." I don't know.
B: Well, okay, so I want to throw out that I think it's important, from the instrument, to know whether or not negative measurements are possible. For OTLP, where we know that you can't have negative measurements, we can enforce what's written today — but I would love to actually have the ability to support metric protocols where that's not a requirement.
E
Right
and
you
mean.
B
Yeah,
we
don't
export
some.
In
that
case,
we
can
export
the
the
histogram,
but
we
can't
export
some
like
this
is
legitimately
the
only
thing
that
matters
in
this
that
entire
discussion
for
people
here
for
context
is
whether
or
not
the
sum
field
can
go
into
prometheus
or
we
drop
it
on
the
floor.
That's
the
only
thing
that
matters
here
right
around
this
negative
measurement
thing.
So
if
we
want
to
support
histograms
that,
where
sums
go
into
prometheus,
then
we
have
to
require
all
measurements
be
positive
for
the.
A: Or would this be possible: we have an actual hint. For example, in histogram, by default we assume everything is positive, but if people want to say "I want to take the risk of getting negatives, and I understand some backends don't support it" — as long as there is a well-defined behavior, where either we drop the data or we report only the positives and not the negatives — then they have the choice.
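One way to picture the rule being merged here is a toy aggregator that drops the optional sum field once any negative measurement has been recorded, so that whatever it exports stays convertible to OpenMetrics/Prometheus. The class and field names below are hypothetical, not the OpenTelemetry SDK's:

```python
class Histogram:
    """Toy histogram aggregator: sum is only exported when every
    recorded value was non-negative (hypothetical names, sketch only)."""

    def __init__(self, bounds):
        self.bounds = bounds                  # explicit bucket boundaries
        self.counts = [0] * (len(bounds) + 1)
        self._sum = 0.0
        self._saw_negative = False

    def record(self, value):
        if value < 0:
            self._saw_negative = True         # taints the sum field
        self._sum += value
        i = 0
        while i < len(self.bounds) and value > self.bounds[i]:
            i += 1
        self.counts[i] += 1

    def export(self):
        point = {"bucket_counts": list(self.counts),
                 "count": sum(self.counts)}
        if not self._saw_negative:            # drop sum rather than emit it
            point["sum"] = self._sum
        return point

h = Histogram([0.0, 10.0])
h.record(5.0)
h.record(-3.0)   # a Wi-Fi-signal-style negative measurement
assert "sum" not in h.export()
```

The hint A proposes would effectively let users choose this drop-the-sum behavior versus rejecting negative values outright.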
B: Okay, I'm going to call time on this. So I have an issue right here: "allow sum to to present with nega with negative measurements in histogram" — that is possibly the worst English I've ever written, so I apologize for the title. But it's a follow-up, and it says: currently we limit the sum field in histograms to be 100% compatible with OpenMetrics and Prometheus, so we can do conversion whenever it exists. However, we think there's value in having sum available when there are negative measurements as well.
A: I wonder if the negative-histogram scenario is common. I can imagine something like water pressure — it might be positive or negative, and the pressure can vary a lot, if you want a histogram — but I guess most people use the histogram API for things like latency, which is always positive.
F: And we were a bit surprised how many people were using negative numbers — Wi-Fi signal-strength measurements, in decibels, was one example. So it is a thing that happens in the wild.
B: And so from the API level we have to have a discussion there, and I have a bug to talk about in the data model: finding a way to ensure that we know whether or not the sum can exist on negative measurements, because I still think there's value in some of the aggregate-distribution things on histogram, right? But cool — sum always exists; it's a question of whether it's monotonic.
B: So for now — yeah, I hear you. I'm not super happy with it either, but check the follow-on issue; let's take the discussion offline there, now that people are aware of what we're talking about, why it matters, and what's important. And again, it's around OpenMetrics compatibility. Okay, so this PR — I think that one had enough approvals to merge, so I'd like to get that merged as well, so that we can get 0.9 out.
B: With these PRs we are not going to break the data model, right? Okay, cool. So I'm going to mark it as stable as soon as these three PRs are in for this 0.9 release, and going forward, all of these bugs and all of the decisions we discussed need to be done in a backwards-compatible way. So I'm not worried about it; I think we know how to do that. Okay — that said, let's start talking about some of these Prometheus compatibility topics. One thing I want to call out:
B: First of all, in the metric data model discussion group — oh, cool, I can mark this archived; that is fixed — we have a few written specification things out and open to take a look at. I'm going to try to find a way, if anyone has suggestions, to highlight Prometheus compatibility; I think we'll probably make a label around it with a particular color.
B: I want to call out that there are known differences and limitations, and one thing that we have not done — which I think we should be happy about: we can get rid of this, because we fixed it. We actually switched to be "le" instead of "g", so this can be gone. But just so you know, there is a Prometheus specification. The second bit is that there are data-model-related bugs opened on Prometheus.
B: I have not imported them into this project because I need to remember how to do that. Okay — can you go back to those
E: — for a sec? Yeah, okay. So this one, "clarify how Prometheus uses the OpenMetrics created timestamp" — we can close that; they did clarify it for me. I can write it down somewhere, but I think we've addressed all the compatibilities with start time; they just call it "created."
E: The thing about external labels is probably the same issue we just discussed about resources a minute ago — how resource attributes have a different sort of semantic interpretation somehow. I think what Prometheus has done quite muddies the meaning of an external label, and something needs to be done. I can comment, but we can close the issue.
E: It's about the Prometheus instance, and it's about the data it collects, and that's the way it is. I think that's what a resource is to us, but we just don't have a way in OpenTelemetry to say "this is a resource that's about the collection path," which is how the external label is used in Prometheus. — So wait, wait, what are the two meanings?
E
The
one
is
the
data
is
about.
This
is
about
the
data
being
collected
and
the
other
is.
This
is
about
the
prometheus
instance
doing
the
collection
and
the
the
it
blurs
the
meaning,
because
you
know
in
a
prometheus
high
availability
configuration
you're,
gonna
you're
gonna
erase
that
label,
but
it's
a
different
kind
of
label,
erasure
because
you're
going
to
interleave
the
metric
streams
at
that
point,
and
so
they
have
this
particular
type
of
external
label.
That
is
meant
to
be
erased,
in
other
words,
and
I've
been
trying
to
call
that
something
like
non-identifying,
but.
E
Yeah
they
do
duplicative
identities
for
this
series,
and
the
idea
is
that
you
know
they're
the
same
because
you're
gonna
erase
that
but
in
a
prometheus
configuration
you
can't
see
the
difference
between
them
and
that
that
left
me
a
little
uneasy.
So
in
a
prometheus
you'd
set
up,
cortex
you'd,
say
cortex,
please
erase
this
label,
and
then
you
take
away
that
non-identifying
attribute.
E
Exactly
it's
basically
as
a
configuration
parameter
in
prometheus.
You
need
to
know
if
your
resources
are
identifying
descriptive
or
non-identifying,
I
think
is
the
way
and
I've
been
hopeful
that
we
can
talk
about
using
the
schema.
The
new
schema
idea
to
like
give
attributes
this
type
of
quality.
Without
extending
the
data
model
like
putting
a
new
field
into
every
attribute,
that's
just
one
approach
we
might
take
okay,
so
you're
saying
I
should
close
this
with
this
comment.
Yep
and
then
we
can
say
something
like
in
the
future.
E: I'm comfortable saying we should do this in the future, and the thing I'm associating with it is that at some point we'd like to have data pushed into an OTel collector that looks exactly like data that was pulled from a Prometheus instance. When you're doing that, you might want to identify which of your own resource attributes are the ones you will use to join with your service-discovery attributes — like, if you're publishing data that's meant to look like Prometheus.
B: Yeah, yeah — okay, that's fair. All right, I'm going to close this for now. If you want to open another issue for us to deal with this — I can open one as well right now, but if you could open it, I think it'd be better, because you can phrase it in English, right? All right.
B: Okay, the last bit here around the data model was the Prometheus histogram edge case, which we don't support. If we want to talk through this one now, we can, because I think we also have to deal with the upgrades — but oh, wait, this is sum.
B: Yeah, that's right — with the change to the proto, this is fixed. So I think these are all the known issues in the Prometheus working group around compatibility that are specifically tagged as data model, and this one is fixed by the discussion we just had. There are a bunch of other Prometheus-working-group-related issues here; I think a lot of these are focused on the collector and the collector use case for Prometheus.
B: I wanted to give us a little bit of a chance to talk about "up" and start time, if there are useful topics here. So specifically, Josh, since this is your issue: when it comes to up metrics, can you give us just a quick TL;DR of the important axes of the decisions we need to make here?
E: Since following this issue, I've come to think of "up" as far less important than the question about stale markers — staleness markers — which is sort of adjacent to this one. They're kind of the same issue, but people don't really understand how "up" is important, whereas they do understand how staleness is important.
E
I
haven't
gotten
back
to
it
yet,
but
the
idea
is
that
in
prometheus,
when
you
see
a
scrape
fail,
you
put
a
nan
value
into
your
time
series
and
the
man
value
says
nothing
was
here
explicitly,
but
in
the
open,
metrics
spec
it
says
you
cannot
produce
a
nand
value
from
your
code,
so
the
nand
value
is
special.
It
can
only
be
written
by
an
observer
or
a
third
party
into
a
prometheus
remote
right,
but
it
can't
be
produced
in
open
metrics.
E: So then the question I have is: if I'm taking Prometheus data and just trying to turn it into OTLP — which I do with a sidecar, so I'm familiar with it — how do I deal with the missing data? These NaN values have a timestamp; they explicitly say Prometheus saw nothing there, and I don't know how to put that into OTLP. If I put a NaN value in, I think that works, and the start-time thing continues to function:
E
I
can
tell
which
series
I'm
part
of-
and
I
can
tell
when
the
missing
data
was
missing,
but
it
it
means
either
a
nand
value
or
some
other
kind
of
placeholder
to
say
nothing
here,
but
a
timestamp
and
a
missing
series
and
it's
it
gets
to
be
messy,
though,
because
a
histogram
point
like
how
do
I
mess?
How
do
I
put
an
n
value
into
a
histogram?
So
maybe
there
should
be
a
missing
data
type,
but
that's
a
slippery
slope
too.
B: Can we even do this where we know we have a push-based, stateful receiver? When we're doing push-based metrics, we just don't have a data point show up — it just got dropped somewhere. We don't know if there's a network issue; that piece of alignment simply isn't there, and so you have to account for this in your backend. So from an OpenTelemetry standpoint, for full compatibility with Prometheus —
B
If
we
go
from
push
to
pull
or
sorry
pull
to
push
and
try
to
have
prometheus
remote
right
coming
out
as
a
push,
we
absolutely
need
it
from
the
thing
that
does
the
pull
right,
and
so
in
that
sense
I
think
whatever
we
do
doesn't
have
to
be
necessarily
happy
path.
Otlp.
E: I'm a little conflicted, because I think you can ignore it from a data-model perspective, but the question is how we are going to monitor. There's this practice in the world today that says: I am able to know explicitly when my thing wasn't there, and when it comes back I can also see that it was part of the same series — like, I missed some points, but it's the same cumulative.
B
I
I
understand,
but
like
from
a
push-based
model
like
this,
is
something
like
our
our
monitoring
tool
does
kind
of
by
default
when
it
does
its
re-aggregations
on
cumulatives
right
it,
it
will
implicitly
say
oh
cool.
I
have
this.
I
have
this.
I
didn't
get
any
data
points
here,
but
I
can,
when
I
do
my
re-aggregation
window,
I
can
just
drop
down
to
here
divide
across
that
time
window
great,
because
I
know
the
start.
B: time was the same as this one, right? And that's something our backend does; it's not something handled in OTLP. It's a push-based solution to the problem: you push the problem into the backend and you don't account for it in your frontend. So I'm not saying that the data model doesn't account for it. What I'm saying is: if we solve this in the data-model discussions, Riley doesn't have to go figure out how to do it in the SDKs and APIs, because we're —
E: — never going to do that. This has always been a question about how we put Prometheus data into OTLP. I want to point out that there is a solution available to us; it just slightly loses information, and I'm okay with it. So here it is: when you see a missing target — meaning a NaN value in a Prometheus stream —
E: — when you see a NaN value, you effectively reset your start time to unknown, and then, when you see the series again, you reset your start time to the moment you observed it again. That puts a time gap between the last observation and the first observation after it, and it means that if you're exporting to Prometheus, you can see a time window during which there must have been one or more NaNs — you just don't know how many NaNs, or when.
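The reset-the-start-time workaround being described could look roughly like the sketch below, assuming a single Prometheus series given as (timestamp, value) scrape samples with NaN as the staleness marker. The function and point shape are illustrative, not any real converter's API:

```python
import math

def to_cumulative_points(samples):
    """Convert one Prometheus series into cumulative points, treating a
    NaN sample as a staleness marker: forget the start time, and let the
    next real sample begin a new interval. The resulting gap covers an
    unknown number of missed scrapes, which is the information loss
    acknowledged in the discussion."""
    points, start = [], None
    for ts, value in samples:
        if math.isnan(value):
            start = None                  # reset start time to "unknown"
            continue
        if start is None:
            start = ts                    # restart at the next observation
        points.append({"start": start, "time": ts, "value": value})
    return points

pts = to_cumulative_points([(1, 10.0), (2, 12.0),
                            (3, float("nan")), (4, 13.0)])
assert [p["start"] for p in pts] == [1, 1, 4]
```

The last assertion shows the gap: the point at time 4 starts a fresh interval, so a downstream Prometheus exporter can infer that at least one NaN fell between times 2 and 4.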
B: That's absolutely something we don't want to do, right? Whenever you're wiggling with start time, there's a chance that your collector's flakiness becomes your service's flakiness. So I hear what you're saying; I'd rather we find a way to report those things explicitly, in the Prometheus case, in the data model. But all I'm trying to say —
B: — right now is: if we need to reach out to other SIGs and have them account for things, Riley doesn't need to go to the SDK SIG and say "we have to figure out how to deal with these stale markers." This is specifically just a Prometheus-compatibility, data-model problem. "Okay, yeah, it is." So that's the scope of it. Do you have a bug open that everyone can comment on? Because we're running out of time — we're down to 12 minutes — so I think at this point we need to take some things offline.
B: Okay, that sounds good. I do want to talk about the staleness marker specifically, because it's important to Prometheus. Should that discussion move into the — should we just show up at the Prometheus working group to work through that issue, Sherif, or do we want to continue to handle it here?
E: It's literally the same in some sense, but it's less important, and I don't think it's top of mind.
E
It unfortunately ties together all the issues that we've just talked about. It ties together resources, later deriving data, and these NaN values, and, as you point out, collection is different than service health. And Prometheus has pinned down the semantics that we're kind of living with right now, which is that this NaN value means collection failed, don't know about service health, and we're trying to figure out if that can fit into OTLP.
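For context on the semantics being discussed: Prometheus encodes its explicit staleness marker as a NaN with a specific bit pattern, distinct from an ordinary NaN. A minimal sketch of telling the two apart follows; the bit constant is taken from Prometheus's source and should be verified against the version you target.

```python
import math
import struct

# Prometheus's explicit staleness marker is a NaN with this payload
# (StaleNaN in prometheus/prometheus); an ordinary NaN from a bad
# sample carries a different payload. Verify against your Prometheus
# version before relying on it.
STALE_NAN_BITS = 0x7FF0000000000002

def is_stale_marker(value: float) -> bool:
    """True only for Prometheus's explicit staleness-marker NaN."""
    if not math.isnan(value):
        return False
    bits, = struct.unpack(">Q", struct.pack(">d", value))
    return bits == STALE_NAN_BITS

def make_stale_marker() -> float:
    """Build the staleness-marker NaN from its bit pattern."""
    return struct.unpack(">d", struct.pack(">Q", STALE_NAN_BITS))[0]
```

This is why "NaN means collection failed" can be pinned down so precisely on the Prometheus side: the marker is a distinguishable value, not just any NaN.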
B
Okay, all right. So we only have 10 minutes left, and I did skip this, and I apologize. I want to spend five minutes on this and then five minutes figuring out what we talk about next. So five minutes on this: this is the exponential bucketing for histograms. I think it has two approvals.
B
I have not approved it yet, and I apologize. I'm going to talk about why I didn't approve it yet, and that's because there's an open discussion here, which I think was pretty good, between Heinrich and... I'm not going to pronounce your name, I'm sorry. Okay. And it was basically on: should I just do option two? For those of you who aren't familiar, the proposal is on exponential bucketing for histograms. This would be a second bucketing, very, very cool.
B
We're excited to see this happen, but there's this notion of protocol support for universal mergeable histograms, and this notion of whether we explicitly state the base scale is hardwired at two, or if we have an arbitrary reference base: whether we go with option one of a specific base, or we go with option two of a base and a scale. And I actually think that, personally, I'd like us to have a firmer decision on this. Like, either someone explicitly say
B
And we'll be totally fine, or say that yes, actually, to be compatible across all the different vendors, we think we need this right now. That's kind of what I was looking for: a little more discussion around this particular point.
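To make the base-plus-scale option concrete: a sketch of how bucket boundaries work when the base is fixed at two and a scale controls resolution, as in that framing. The function names are illustrative, not from the proposal; the idea is that histograms at different scales merge cleanly by halving or doubling resolution.

```python
import math

# Sketch of "base 2 plus scale" bucketing: boundaries are base**i with
# base = 2**(2**-scale), so raising the scale by one doubles the number
# of buckets per power of two. Values must be positive.
def bucket_index(value: float, scale: int) -> int:
    """Index i such that base**i < value <= base**(i + 1)."""
    # Equivalent to ceil(log_base(value)) - 1 with base = 2**(2**-scale).
    return math.ceil(math.log2(value) * 2 ** scale) - 1

def bucket_bounds(index: int, scale: int) -> tuple:
    """(lower, upper] boundaries of the bucket at this index and scale."""
    base = 2.0 ** (2.0 ** -scale)
    return (base ** index, base ** (index + 1))
```

With scale 0 the base is exactly 2, so a value of 3 lands in the bucket spanning (2, 4]; at scale 1 the base is sqrt(2) and each power-of-two range splits into two buckets. Merging a scale-1 histogram down to scale 0 just sums adjacent bucket pairs, which is the "universal mergeable" property under discussion.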
B
Given some of the things Heinrich was saying. So I'm calling that out as why I haven't given this an approval yet: I just wanted to see if there was more discussion there. But it's been 30 days, so I assume at this point no. For those of you who have a chance to review, please take a look. I don't know if anyone wants to comment on the concern I had previously; I just wanted to see more discussion there.
E
E
I give high value to Heinrich's input, extremely high value, so I think it's worth following his input as well.
B
The key here (and I think I clicked off again, I'm sorry), the key here is this: this should be a fully backwards-compatible change, because when base scale exists, then base won't exist, and when we switch to this... okay, I should say I think it's partially backwards compatible.
E
Well, we can also create a new histogram bucketing type, so that we don't have... I mean, I used that as an option for 303, about these histograms that have non-monotonic sums as well. If we didn't have bucket counts that were trying to skip one element and had these infinities, then we would actually know the range of the histogram very explicitly because of the bucket ranges.
B
More types in a non-breaking way: okay. So in the sense that you just convinced me that this is an OTEP, what we're approving here is future discussion, with the details to come. I'll probably make a comment there myself and probably approve it, and we can get into the nuanced details of this aspect when the PRs come in, I feel like.
B
Yeah, I mean, there's me personally and then there's the community, so I want to make sure enough people have commented on it anyway. So if you haven't had a chance to comment or approve a review, please do so, and let's get that moving. Okay, next steps, next steps. So when you look at our current to-dos here, right, I think from the discussion today.
B
We feel that the up metric may be something that has to wait a little bit, for us to have a lot more discussion before we really get to it.
E
B
Model, yeah. One thing I forgot, by the way, because I should pull this up, it's down here: we have a prioritized list of things to talk about. First, we wanted to talk about Prometheus compatibility specification work, up-ness metrics, and staleness markers. Do we need to continue the staleness marker discussion here? I think the answer to that is no; we're going to take that to the Prometheus working group, right? Yes, I have an item about doing that for tomorrow. Okay, so then this is relevant: histogram bucketing. It sounds like next week.
E
B
Yeah, yeah, that's true. Well, I mean, I just highlighted the one decision I think we need to think about. So let's meet next week and we'll talk about histogram bucketing. I'll try to do a better job of prepping some decisions to talk through, which I did not do a great job of this week around Prometheus, so I'll try to do a little bit better there. Unless someone thinks one of these other things we wanted to talk about (exemplars, multivariate time series, or raw data aggregation) is higher
B
priority, we'll spend next week more focused on histogram bucketing and try to get through... if you look at some of the questions we have open around histograms, you know, there are a lot of things we have talked about with histograms that we also might want to get through.
B
B
Cool. I feel like I talked way too much in the meeting, but thank you everybody for joining. I think we made some decent progress. I.