From YouTube: 2021-02-23 meeting
A: So I guess, do we have a different notes document for this? I'm not sure we do.
D: Do you have points you're worried about, points that are contentious, that you want to go through first, or do you want to just talk about it generally?
A: I don't actually feel that way, but it's become clear that the magnitude of what we're doing is trying to invent a semantic way to mix push and pull data. It's not that I expect contention; it's that this is far more specific than any of the existing protocols out there, and I realize that's tackling a lot. I think it's important to do, though, to be able to re-aggregate data.
A: We have to be really clear about what the data has to look like, and so on. So, not so much contention. I think there's an interesting idea that I put at the bottom as I've been researching what has to be done to do some of the things we talked about, especially the re-aggregation, and it came down to this topic of temporal alignment. If you look through Prometheus, there's very clearly a step being done when you compute a range vector.
A: The interesting and potentially contentious idea, Josh, was that in a push-based system we end up talking about late-arriving data in a way that never happens in a pull-based system. And one of the big ideas, which I think is not necessarily contentious but is kind of big for me, is that we can take the idea of external labels that Prometheus uses for spatial replication and use it to do what I'm calling temporal replication.
A: The idea is that you can report your data, say, a minute late, and that's just a time-shifted replica of your data. That would potentially allow us to pin down what it means to do alerting in a push-based system where there's late-arriving data. So, not necessarily contentious; it's just a lot of new scope. Now that we have a larger group here, I'll put that back on the screen and we can maybe walk through it again.
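A minimal sketch of that temporal-replication idea, assuming a label-set representation of points; the `__shift__` external label name is purely hypothetical, not something Prometheus or OTLP defines:

```python
# Hypothetical sketch: treat a late-arriving point as a time-shifted
# replica of the series, distinguished by an external label, instead of
# mutating the already-reported series.
def as_temporal_replica(point, delay_seconds):
    """Tag a late point with a hypothetical external label naming its shift."""
    labels, timestamp, value = point
    shifted = labels + (("__shift__", f"{delay_seconds}s"),)
    return (shifted, timestamp, value)

on_time = ((("job", "api"),), 1000, 42.0)
late = as_temporal_replica(on_time, 60)
```

Under this framing, an alerting rule could evaluate the unshifted series promptly and reconcile against the shifted replicas as they arrive.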
A: We've got 907 here; I see Riley and Bogdan, and those are two important people. So the big idea here, the one I've come to realize, is that there's a very strict idea of what a time series is in Prometheus, and the protocol that we have is far more general. So we need to talk about exactly how to map our protocol into Prometheus, and this is where we'll get into talking about each of the data types in OTLP.
A: I think we should probably talk through these use cases now. I was rushing to get what I have here written last night; let me keep working on it this week. Some of the use cases that I think are pretty important for us, almost all of them, involve the Collector. So here's the first one.
A: You are a single SDK exporting to a single Collector, and that Collector is going to create a view for you, which is to say, modify some dimensions and output a new time series, sort of like recording rules in the Prometheus world. The second example is one where you want to change the client behavior and have no memory in the client, to support higher cardinality, but push that same responsibility into the Collector. So: can we put our cumulative aggregation into a Collector and leave our clients stateless?
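A rough sketch of what that stateless-client split could look like, with the client sending delta points and the Collector holding the running totals; the function and key shape are illustrative, not the Collector's actual API:

```python
# Hypothetical sketch: clients emit delta points; the collector keeps the
# running state and emits cumulative series, keyed by metric + label set.
def deltas_to_cumulative(state, deltas):
    """Fold (metric, labels, delta) points into state; return cumulative points."""
    out = []
    for metric, labels, value in deltas:
        key = (metric, labels)
        state[key] = state.get(key, 0) + value
        out.append((metric, labels, state[key]))
    return out

state = {}
first = deltas_to_cumulative(state, [("requests", (("host", "a"),), 3)])
second = deltas_to_cumulative(state, [("requests", (("host", "a"),), 4)])
```

The client never has to remember a prior value; only the Collector's `state` grows with cardinality.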
A: I think this case C is one that's starting to interest me more and more, because people haven't really talked about it much. It's one that the statsd type of aggregation supports really well: you have many processes producing metrics on a single host, and you aggregate those many processes on a single host into one stream of metrics before you send it off to your infrastructure, your metrics system. That requires the ability to erase, like, a host label or something like that, to remove dimensions.
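One way to picture that single-host aggregation, assuming delta points (summing cumulative series this way would first require temporal alignment); the label names are illustrative:

```python
from collections import defaultdict

# Hypothetical sketch: merge per-process delta counters on one host into a
# single stream by erasing the "pid" dimension and summing collisions.
def erase_dimension(points, label):
    merged = defaultdict(float)
    for labels, value in points:
        kept = tuple(kv for kv in labels if kv[0] != label)
        merged[kept] += value
    return dict(merged)

points = [
    ((("host", "a"), ("pid", "1")), 2.0),
    ((("host", "a"), ("pid", "2")), 5.0),
]
merged = erase_dimension(points, "pid")
```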
A: Then one of the really interesting cases people have been wanting is simply to scale up the OTel Collector by adding horizontal scale. We have ways this can be done and ways it can't be done, depending on cumulative versus delta, and there's this topic of external labels that we have to get to.
A: That's what I'm trying to say: we are going to spec out what it means to do temporal alignment, which means, yes, doing interpolation. And I'm speculating about proposing that every OTLP data point has its own way of handling itself as far as interpolation goes. For the most part, the sum data points are there so that they give you linear interpolation, and the histogram is really just a collection of sums in that sense, so those can also be interpolated. The one data point that doesn't really lend itself to interpolation:
A: It's what I'm calling the true gauge, or the OpenTelemetry gauge, where the presumption is that you're going to use the last measurement value, and that is consistent with all the re-aggregation we've talked about doing. So yes, Riley, thank you; I don't think that's too contentious. I think Josh asked about this before everyone connected, five minutes ago.
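The two alignment rules just described can be stated in a few lines; this is a sketch of the semantics under discussion, not spec text:

```python
# Sums (and histogram buckets, which are collections of sums) admit
# linear interpolation to an alignment boundary; an OpenTelemetry "true"
# gauge instead carries the last measured value forward.
def interpolate_sum(t0, v0, t1, v1, t):
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def align_gauge(t0, v0, t1, v1, t):
    return v1 if t >= t1 else v0

sum_at_5 = interpolate_sum(0, 10.0, 10, 20.0, 5)   # halfway between samples
gauge_at_5 = align_gauge(0, 10.0, 10, 20.0, 5)     # last value seen before t=10
```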
A: Prometheus has a definition for a range vector, and that range vector is precisely what you get by applying temporal re-aggregation to a stream of metrics. I don't think Prometheus specs out linear interpolation, but it could, and I would like to know whether people are using linear interpolation anywhere when they do range-vector calculations. I think the Prometheus default is always to use the last measured value, but that doesn't necessarily work with a very long collection interval.
F: I think all of these use cases and scenarios look great, but based on the feedback from the community and from a lot of people, I think we should put a priority order on them and focus on them one by one, not try to solve all of them, because we will probably not solve any of them if we don't focus well.
A: I agree that there's something alarming about this list, right? We don't need to support all of this; we need the data model to be convincingly ready to do that stuff, and then over the next year, perhaps, we can add this type of functionality in a Collector. I'm definitely not trying to get this done by March, and I know that implementing temporal alignment is tricky, and it raises the question of late-arriving data, and I don't think that current Prometheus systems are even ready for it.

A: So we shouldn't push too hard on this idea, but we can spec out what we think is going to happen and at least review the theory behind it. That's my idea.
A: There's clearly a dependency here on the API model, and there's definitely a chicken-and-egg problem here: we first looked at the API and SDK, and then we kind of derived a protocol from it. Now I'm saying the protocol is defined based on the API that we specked out, and yet we know that API is more of a model meant for specification purposes than it is a clearly helpful user interface for metrics. The goal is to finish the data model so that we can unblock the API, and yet there's something a little bit circular about using that API draft to define the data model, and then defining an API again. So I'm interested in your feedback on that in particular, because I didn't put a lot of detail into the event model here.
A: It's something I think we've stated enough in the earlier drafts, and I'm kind of assuming that's okay with people. And then the time series model is basically just lifted out of Prometheus.
A: This is the definition that PRW gives. I think what we're truly trying to do here is establish this new thing that the OpenCensus and OpenTelemetry projects have done, which is to create data types that sort of describe themselves: what aggregations you're going to be doing on this data type. In particular, the notion that we've created two gauges, one for summing and one for averaging, is very new here, and I think we have to build out the complete picture of why we want people to think in terms of these new gauge types instead of having a single gauge. The reason is that we are going to do re-aggregation.
A: So I think I'd like to stop talking and let others ask questions.
A: I think without much presence from Prometheus here, it's going to be hard to make progress; I'd like people to read through this and start that conversation. At the same time, I left a lot of to-dos in here because I just ran out of time, and I will endeavor to complete the to-dos in enough detail to show the whole picture of what I was trying to finish here, perhaps by Thursday.
G: I can follow up with the Prometheus folks to actually add comments on the doc. I'm going to...
D: So, can I ask a question? And maybe I missed it, but it has the word "timeline" in the title.
D: There's a rough timeline, one date I know: March, yeah. So what I'm trying to understand, though: this looks like a complete specification of a data model, with a couple of to-dos to fill out. What are you looking for, beyond the timeline and the goal? Like, I could see taking this event-data/time-series documentation and just shoving it into a data model specification for OpenTelemetry: here's the data model, here's how it looks, right?
A: Well, I mean, you could also say: you need to prototype it and show us that you can do this. We have, at some level, prototypes for the SDK portion saying we can do this, we know how to do this in the SDK and we've seen it. Can we do this in the Collector?
A: It requires a sort of extra machinery that, right now, I'm just writing words about. And I think we shouldn't block calling this stable just because we haven't seen it happen in the Collector, when we have a lot of existence proof out there: a Prometheus system doing recording rules, a Prometheus system doing temporal alignment, and so on. So at some level this is a thought experiment: if everyone agrees with the thought experiment, we can call it stable.
D: Whatever allows collaboration. I just want to understand what we're looking for to make progress on this and get those approvals, right? Does it make sense for us to go through, read it, and comment, and then, as you resolve issues, we find the big ones and maybe open tickets to think through them and have more discussions? How do you want this to go, process-wise?
A: Like, I have stated a rough proposal about how to handle cases of overlap, and it's a lot of "should" statements, and I feel like that's the kind of thing where maybe we shouldn't be saying "should"; maybe we should be saying less. And we've used the expression "talk to your vendor" in the past, about what it means when you mix a counter and a gauge in the same time series.
A: That's the level of spec-out that we need, and I haven't finished it; there's a lot more of that stuff. So I want us all to agree on the rules for handling the data, which just means, I don't know, reviewing it, and maybe filing issues for the bigger stuff.
A: I generally don't like to do a ton of work and then dump a giant document on people, because I worry that it's way off base. If you all have taken a first look at this, and the first few pages kind of hold together.
A: Maybe what we should do is start writing it, piece by piece, into a data model for OTel metrics, so that four or five of us can sign off on the sort of two-pager version of this, which says: there are three models, and we're linking them together. Then, when we get to the point of overlap resolution, we have one particular PR that's just defining overlap resolution, or whatever we're going to do about overlap.
A: Whether that's "do nothing and talk to your vendor", or recommending a particular course. And then: how should we handle the case where people mix integer and floating-point metrics? How should we handle the case where people mix counters and gauges?
A: Those are also potential error conditions that I think we should talk about, and I want the group to be collaborating on this, not just, you know, a few of us writing long-winded documents about what we think should happen.
F: You mean Prometheus? Yeah, really implementing Prometheus, or any other backend, inside the Collector. So anyway, a good idea is probably to scope things: we should say, this is the final backend's responsibility, versus these are things that we do in the Collector. Yeah.
A: I'm interested in probing that, right here. I've had similar discussions with the backend team here at Lightstep, who are a little concerned that we are inventing a query language. I mean, the Prometheus query language already exists; we're not going to invent that, it's there. But what are the capabilities we're talking about that can be done in a collection pipeline, versus the capabilities of a general-purpose metrics system? There has to be, and I think there is, a strong delineation there.
A: It's about the type of aggregations you're able to do when you do not have access to all the data, except there are restricted conditions where we do have access to all the data under certain assumptions, and that's the kind of thing we're trying to get at here. When you're a single agent, you have access to all the data of all the processes you're collecting for. So there was a point in recent months where I put it to the Prometheus working group:
A: All of those recording rules that are simple aggregations can be in scope for the OTel Collector, but not all of them in general, because if you have to access the entire data store, that's not something we can do. You can't access prior data in your aggregation; you can only access the data you're aggregating. And that's where the delineation sits: many alerts can't be done in collection, and many aggregations can't be done in collection; they have to be over all the data.
F: Yeah, let's review the doc, but keep in mind that we should not re-implement the backend in our Collector. We should try to be just a collection pipeline, as we promise, a pipeline that can do some of these things for you if you want, but we should rely more and more on the backend, which has all the points, all the visibility, all the sharding, and all the things that they already have to implement.
A: Sure, yeah. We do need to keep that in mind, and we should be clear that we are not trying to write a query language; the Prometheus query language is the standard out there, and we're going to keep it that way. One of the ways I've tried to explain this, in the conversation I've had with the backend team: there are changes you could imagine making in your SDK, just reconfiguring your own SDK to a longer interval, for example, that shouldn't change the meaning of the data; they should only change the resolution of the chart you're looking at. That's an example that's very easy to see: just re-aggregating over a longer time window. You could have done that in your own SDK.
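A sketch of that "same meaning, coarser resolution" point for delta sums; the representation is illustrative:

```python
# Hypothetical sketch: merging adjacent (start, end, value) delta points
# into a coarser interval changes chart resolution, not meaning; it does
# not matter whether the SDK, the scraper config, or a collector does it.
def coarsen(points, factor):
    out = []
    for i in range(0, len(points), factor):
        group = points[i:i + factor]
        out.append((group[0][0], group[-1][1], sum(v for _, _, v in group)))
    return out

fine = [(0, 10, 1.0), (10, 20, 2.0), (20, 30, 3.0), (30, 40, 4.0)]
coarse = coarsen(fine, 2)
```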
A
You
could
have
done
that
by
changing
your
prometheus
configuration
and
you
can
do
that
by
re-aggregating
in
the
collector,
without
it
doesn't
matter
where
you
do
that.
In
other
words,
so
it's
a
little
bit
more
of
a
intellectual
stretch
to
see
that
that's
sort
of
the
same
idea
for
removing
labels
and
and
for
changing
deltas
into
cumulative.
But
but
those
are
all
parts
of
a
collection
strategy.
As
far
as
as
far
as
it
looks.
D: So, can I ask a question? Because you've said this twice, and I'm going to throw an analogy out after this. PromQL is a query language, right, to query time series?
D: Are you planning to leverage that against the OTLP data format? Is that something we need to worry about? My parallel here is Spark and their data model around compute, and how it also became a streaming data model at some point, and there was a push to do so. So I'm going to throw that out there; I'm kind of curious about your thoughts on this right now.
A: I'm not sure how to answer that. The Prometheus query language is very, very much defined on the time series model that I've put into the document: it has the notion of staleness, and every sample point has exactly one timestamp associated with it, whereas in the OTLP model every data point has two timestamps, and there is some resolution and some aggregation that we can do.
A
Therefore,
I'm
not
enough
of
an
experiment
database
theory
to
talk
about
whether
this,
whether
how
much
prom
ql
can
be
extended
as
a
query
stream
querying
language,
but
I
I
I
actually
truly
wish
we
had
a
database
theoretician
in
the
room
once
in
a
while.
The
database
theoretician.
A
D
A
D
D: I guess my point is: I think what you're defining with these aggregations is actually the set of components that could lead to a query language. So I'm just calling it out. Yes, we're not creating a query language, but you could; it would be trivial, once these components are in place, to understand what that would be. I don't think we want to push that on the world.
D: I don't think that's a great idea, but you are kind of doing it, because we're actually defining the semantics of how we talk about this; you gave the aggregations names, right? Effectively we're describing how we do streaming aggregations and streaming processing of data as it flows, and it's very, very akin to a query language. But I don't want to tease into that. When you say we don't want to re-implement PromQL, I totally agree with that.
A: I agree, and it's partly because I've had quite a lot of pushback from the backend team here, who believe that all that stuff belongs in the backend, not in the in-transit portion of your spec. So, yeah, I have tried to avoid calling it "queries", because what I've been trying to say is that there's a semantic meaning behind all those events that you put in, and what we're doing is changing the data without changing its semantics. And hopefully that's true. Excuse me.
A
Sorry,
child
missing
something
in
the
house,
I'm
not
sure.
If
I
answered
your
question
josh.
D: Kind of, yeah. So I guess a different way of asking it: the folks who are asking us not to implement PromQL might be the same folks who say, don't do any aggregation in the Collector; you shouldn't be manipulating the data, it should just flow straight through to the backend. And I think we're in agreement in the OpenTelemetry community that that's not what our data model is. Our data model is one that is meant to have these aggregations, and so, at a minimum...
A: Yes. And let me point out one more kind of clear delineation that I, at least, was thinking of: all those re-aggregations and temporal and delta-cumulative transformations operate on a single metric on its own, so the semantics-preserving property only has to take whatever the input events on that metric were and output data that conveys their meaning, right? And I'm just barely able to say I've avoided making a query language with that picture alone.
A
And
yet,
when
we
talk
about
the
up
metric
and
stillness
markers
and
the
time
series
model,
there's
this
implicit
join
happening
and
and
I've
and
I've
I've
sort
of
begun
to
think
about
proposing
that
we
spec
out
a
push
model
for
service
discovery
where
the
services
is
pushed
as
this
present
metric
right
and
then
in
order
to
do
the
transformation.
I
want
I'm
actually
trying
to
do
a
join.
And
that's
the
point
where
I
I
keep
saying
it's
not
a
query,
but
it's
a
join.
A
So
there's
there's
definitely
a
query
happening,
but
I'm
trying
to
restrict
it
in
a
lot
of
ways,
and
so
it's
restricted
to
a
natural
join.
It's
restricted
to
self-re-aggregation
of
the
same
type
and
yet
there's
definitely
some
need
there.
You're
right,
josh.
F
A
A: I'm trying to say that our data model should be clear enough that there is a valid way to join the data. Then, if you want to do what Prometheus has done, which is to join service-discovery data with liveness to compute upness, because that works and it's a practiced way to do monitoring, then we should be able to do that from this data, and there should be a Collector plug-in for it. It's going to take two months to write, but it'll work really well.
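A toy sketch of that restricted natural join: a pushed "present" signal joined with reported points on their shared resource identity, so a target that is present but silent still shows up. All names here are hypothetical:

```python
def join_upness(present, points):
    """Return {resource: (is_present, latest_value_or_None)}."""
    status = {res: (True, None) for res in present}
    for res, value in points:
        is_present = status.get(res, (False, None))[0]
        status[res] = (is_present, value)
    return status

present = [("host-a",), ("host-b",)]   # from a pushed "present" metric
points = [(("host-a",), 5.0)]           # only host-a reported data
status = join_upness(present, points)
```

A silent-but-present target (`host-b` here) is exactly the case a pull-based `up` metric would flag.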
A: Because how else do you do it? I mean, if you're coming from a non-Prometheus world, how do you get the list of things that you're supposed to be monitoring? That is not easy in certain other monitoring systems I've worked with, and if you look at Stackdriver, they have this notion of a monitored resource. I don't know how they do it, but there's something there. So I think people want that.
F: There is a big interest from Tigran from Splunk in the concept of entities, things that produce telemetry, and how you know when they are created, when they are removed, and stuff like that. We may be talking about a different topic here, but maybe, in his idea, this is another signal. It's not a metric; it describes the entity: hey, this entity was created, here is the resource that describes this entity; now the entity was updated with new resource information; and now the entity was removed. Things like that. So...
A: "I would like you to export this about the resource for me", which is what the AWS resource interface gives you, and then the Collector could be responsible for automatically pulling resource information. You could imagine a semantic convention where you say: my resource has an ARN in it, and if you see the ARN in my resource, go to resource discovery and join it with all my other attributes, please. Yeah, and it sounds like Tigran's looking at that. That's very interesting.
F
So
so,
yes,
that's
that's
the
point
that
we
need
to
think
when
we
talk
about
the
resource,
upness
and
stuff
a
bit
differently,
and
maybe
maybe
but
but
I
know
prometheus
wants
to
have
this
up
metric,
but
but
it's
kind
of
overlapping
with
the
metric.
What
I'm
trying
to
say
we
need
to
to
think
carefully
and
not
jump
into
to
doing
one
or
the
other
until
we
better
understand
what
we
can
do
and
how
how
better
would
feel
because
it's
it's
super
interesting.
F
The
other
thing
that
I
pointed
to
tigran
was
there.
Is
this
service
discovery
or
I
think
it's
called
impromedia's
world
where
it's
actually
exactly
this
like
this
target
is
up.
This
target
is
down
it's
exactly
updates
about
the
resource
and
if,
if
we
have
this
as
a
standalone
thing,
independent
of
of
of
all
all
of
these
things,
because
even
tracing
benefits
on
this
logs
benefit
on
this
all
the
signals
that
we
have
could
benefit
of
this
stuff
of
entity
being.
A: Up, down, and so on, right. For the purposes of the data model, I really just want a signal that equates to whatever is being done about resource discovery. I just need to know: is it up or not, right now? And one way we can do that is by translating it into the metric data model, so that I can then at least make my relational arguments about it, or, you know, I can say: look, just join this, and it does the right thing.
F
Anyway,
let's
let's
there
are.
A: Other areas where this metrics conversation is going to inform tracing and logging, too, I think. This external-label idea is quite powerful, and I think it should be applied to logging and tracing as well. You know, right now most people use traces, or spans, as a diagnostic tool for digging into an error case, usually, but we don't have enough reliability built into OTel spans to use the rates derived from spans as a signal, a metric signal.
A
If
you
wanted
to
count
spans
and
turn
them
into
metrics,
you
would
need
high
availability
and
one
way
to
do
that
would
be
just
to
send
two
two
copies
of
the
span
with
different
external
labels.
So
I
think
we
we
may
want
to
start
moving
this
data
model
into
open
geometry
wide.
This
conversation,
some
of
it.
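A sketch of that high-availability idea: two copies of each span count, tagged with a hypothetical `__replica__` external label, deduplicated downstream by dropping the label:

```python
def dedupe_replicas(points, replica_label="__replica__"):
    """Keep one copy per identity after dropping the replica label."""
    kept = {}
    for labels, value in points:
        identity = tuple(kv for kv in labels if kv[0] != replica_label)
        kept.setdefault(identity, value)   # first arrival wins
    return kept

points = [
    ((("span.name", "checkout"), ("__replica__", "a")), 1),
    ((("span.name", "checkout"), ("__replica__", "b")), 1),  # redundant copy
]
deduped = dedupe_replicas(points)
```

Either copy arriving is sufficient, which is what makes span-derived rates usable as a metric signal.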
F: Yeah, I will ask Tigran to volunteer for that, to start the discussion about the resource and the fourth signal that we'd call...
A
This
idea
of
late
binding
resource
attributes
and,
and
I
I've
used
the
terms
identifying
and
descriptive.
Now
I
like,
I
like
that,
and
I
I
could
personally
I
would
back
a
proposal
to
have
one
bit
of
extra
information
on
any
attribute
to
say
whether
you're,
identifying
or
descriptive,
so
that
we
could
have
automatic
filters
like
just
remove
all
the
optional
labels
and
that
that
way,
we'd
have
a
better
path
to
prometheus.
That's.
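The one-extra-bit idea could look roughly like this; the attribute names and the flag representation are made up for illustration:

```python
def keep_identifying(attrs):
    """Drop attributes whose extra bit marks them as descriptive."""
    return {k: v for k, (v, identifying) in attrs.items() if identifying}

attrs = {
    "service.name": ("checkout", True),              # identifying
    "process.command_line": ("/bin/app -v", False),  # descriptive / optional
}
filtered = keep_identifying(attrs)
```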
F: That's another thing that Tigran was thinking about. We can start by doing that via semantic convention, and say, hey, this attribute is identifying or not, without necessarily putting it in the data, as an initial step to prove this out, and then later we can maybe extend it to make it right. Yeah, I totally support that.
F: Yeah, okay. So my action item is to talk to Tigran and propose this at the next spec meeting, or file an issue to start these discussions.
F
Now
that
I
heard
its
interest-
and
I
know
I
know-
a
lot
of
people
are
interested
into
this,
so
I
was
pushing
back
on
him
to
start
a
new
discussion
because
I'm
trying
to
limit
and
focus
the
community
on
things
that
matters,
but
it
seems
that
inevitable.
We
are
getting
there
sooner
than
expected.
So
I
will
let
him
start
that
discussion.
I
was
the
person
who
blocked
him
to
stop
it.
H: So, in regard to Josh's proposal, though: I was kind of hearing a timeline earlier, and I think Josh was kind of pushing on this. The idea is that we want to have some review on the doc that you have here, Josh, then we want to go into a prototyping phase, and then into a specification phase. Is that roughly the broad-strokes approach here?
A: I was thinking it's more "thought experiment" than prototyping, because we're talking about the data model, not a real system. And yes, I think I'm asking for early review on this document, to see whether I'm completely off base and this is not the right direction. Then the next step, I think, is to begin moving it into the spec repo in digestible pieces, small enough that I can get your attention long enough to improve them, and that don't devolve into hundred-comment-long threads.
A
Okay,
then
I
will
in
parallel
with
anyone
reviewing
it.
I
will
continue
like
fleshing
out
some
to
do's
and
I
will
work
on
that.
First,
two
pager
or
something
like
that
that
goes
into
the
repo
and
and
try
for
thursday.
F
C
C: Yeah, I believe it's a good topic for the Prometheus meeting or agenda, but still, if anyone can shed some light for me here: I was continuously getting a warning or error message that the Prometheus receiver is failing to scrape the Prometheus endpoint when I try to use service discovery on EKS. One or two times I was able to get some metrics from the pod or endpoint role, but for cAdvisor and the others,
C: it's not getting any metrics, and from the error message, I believe, I don't have enough insight. I also looked into the metrics-builder code, but I can't figure out what's going wrong. We also have two issues: one was closed, but I think mine is very similar to the closed issue, so maybe we can see about reopening it; and we also have an open issue which says we need some more insight here with the Prometheus receiver.
C
So
I
don't
think
I
am
doing
anything
wrong
with
the
configuration
but
yeah
bogdan
made
a
comment
like
yeah.
So
that's
my
mentor
and
maybe
it's
a
good
question
for
prometheus
or
collector's
sake.
F
F
F
F: I cannot tell you more at this time; we need to look at that. But it's also not a priority based on our roadmap for the next couple of months, so we do not want to spend time on this on the Collector side.
C
Yeah,
that's
trying
to
understand
like
so
yeah
so
yeah.
Definitely
I
will
show
up
in
tomorrow's
sig
and
prometheus
meeting,
so
does
it
mean
like
if
we
really
have
some
like?
I
mean
crappy
things
to
do
here,
but
we
don't
have
like
enough
cycle
now
or
we
are
not
planning
to
focus
on
this,
at
least
for
next,
two
or
three
months.
F
For
the
next
couple
of
months
from
the
collector's
seat
probably
will
not
focus
on
these,
but
there
is
a
little
group
with
prometheus
that
wants
to
skip
a
bunch
of
these
things,
so
she
will
focus
on
these
things
yeah
in
in
six
months.
I
would
say
this
would
be
a
p0
bug
for
us,
but
right
now
it's
a
p4
like
to
answer
to
be
more
realistic,
like
right.
G
Right
I
mean
once
we
start
rolling
out
the
you
know,
updated
code
and,
and
you
know,
factor
in
the
design
right
now
right
hand.
You
know
things
are
broken,
as
department
said,
so
just
join
into
this
prometheus
sig
and
yeah.
C
C
F
F: Okay, the next topics are mine. I filed a couple of PRs on the proto side. 258 is probably the less contentious one: it's just us changing our histogram definitions to match OpenMetrics, for better or for worse, to be standards-compatible with OpenMetrics.
F
We
got
response
from
stackdriver
that
they
don't
believe.
This
is
an
issue
for
them
which
thank
you
josh.
It
was
my
impression
all
the
time
that
actually,
even
if
this
was
opposite
to
prometheus,
would
not
be
a
problem
in
real
life,
but
just
to
make
the
community
happy
I'm
happy
to
to
to
go
with
this.
This
requirements
to
be
compatible
with
them
so
show
show
that
we
are.
We
are
nice
with
the
open,
metrics
and
with
everyone,
and
we
try
to
make
progress.
F
F
F: Essentially, the loss of precision that you have if you use float is not that bad: you have 2^56 of precision on floats, I think, if I found it correctly. So I think we need to reconsider whether we want to maintain the int versions, especially on the histogram and on exemplars. For sums, I think it's important, because sums are usually the ones that go crazy, for things that may matter; but for the other things I would consider dropping it, anyway.
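For reference, the usual figure for IEEE-754 float64 is that every integer magnitude up to 2^53 is represented exactly (the power quoted above is presumably this number); the boundary is easy to check:

```python
# A float64 sum of unit counts stays exact up to 2**53; past that point,
# a further +1 can be absorbed by rounding.
limit = 2.0 ** 53
still_exact = (limit - 1) + 1 == limit   # below the limit, +1 is exact
absorbed = limit + 1 == limit            # at the limit, +1 is lost
```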
F
Just
a
foot
for
thought-
and
I
would
like
your
opinion
on
that-
and
I
filed
the
issues
and
asked
for
for
people
to
review
that
if
you
have
any
feedback,
let
me
know,
but
please
comment
on
the
issues
that
we
we
have
in
the
process.
A: I will comment on your issues. I'm very supportive of this motion of yours; I can rationalize it in my own, slightly different way, but I agree with everything you said. We have ints versus floats to support those sums that could overflow, and we know that's a corner case, but it doesn't seem to be an issue when you're counting things where each count is one: you have to overflow a lot of counts to overflow a histogram.
F
So
the
the
only
difference
between
double
histogram
in
histogram
is
the
sun
inside
the
histogram,
which
I
think,
which
I
think
is
less
important
when
you
expose
a
histogram,
because
whenever
you
expose
them
the
most
important
part
that
you're
exposing
are
the
buckets
are
not
the
sum.
Some
is
just
the
next
one
information
anyway,
we
can
discuss
there,
but
the
only
difference
between
in
histogram
and
double
histogram
is
the
type
of
the
sum
inside
the
histogram.
It's
still
a
sum,
but.
A
F
G
F
The reason, the reason is individual measurements. If individual measurements are greater than 2^56, you most likely will overflow int64 very easily. Like, if individual measurements, which we put as the values inside the exemplars, are that large, most likely you will have a bigger problem, because your sum will overflow very soon. Like, I need to calculate, but it's like 2 to the 8, like 200.
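The overflow trade-off being described can be sketched numerically. This is an illustrative calculation only (note that IEEE-754 doubles actually keep integers exact only up to 2^53, and `add_int64` is a hypothetical simulation of signed 64-bit wraparound, not code from any SDK):

```python
# IEEE-754 double: integer values are exact only up to 2**53.
big = float(2**53)
assert big + 1.0 == big  # adding 1 is silently lost past 2**53

# An int64 sum of large individual measurements overflows quickly:
# with measurements near 2**56, a signed 64-bit sum wraps after
# roughly 2**63 / 2**56 = 2**7 = 128 additions.
def add_int64(a: int, b: int) -> int:
    """Simulate two's-complement wraparound of a signed 64-bit add."""
    s = (a + b) & (2**64 - 1)
    return s - 2**64 if s >= 2**63 else s

total, adds = 0, 0
while True:
    new_total = add_int64(total, 2**56)
    adds += 1
    if new_total < total:  # wrapped negative: overflow happened
        break
    total = new_total
print(adds)  # 128
```

So the float sum degrades gracefully (it loses low-order precision) while the int sum fails abruptly, which is one way to rationalize dropping the int variants.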
A
So with your proposal we'll be down to one histogram, and you had mentioned in last week's meeting that you were contemplating potentially having separate histogram types based on the type of bounds that were used. Are you still thinking of that? It's, it's...
A
Okay, I have a related thought that I wanted to throw out, since the group's here talking about it, and it's this summary type, which is kind of an oddball. We know it's not mergeable, but it's not equivalent to a histogram right now. So that's one point, and then there's this: the StatsD users out there that are using sampling. They come in with a histogram observation with a sample rate of 0.33333.
A
Well, let's say it's point... you know, something irrational, and I intend to invert that probability, or that sample rate, and that's my effective count when I enter a histogram. It's not an integer count, so there's sometimes a corner case where you want to have a histogram where the counts in each bucket are floating point because of sampling.
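A minimal sketch of that corner case, assuming a histogram whose bucket counts are floats so a StatsD-style sample rate can be folded in as an effective count of 1/rate per observation (`FloatCountHistogram` and its methods are hypothetical names, not an OpenTelemetry API):

```python
import bisect

class FloatCountHistogram:
    """Illustrative explicit-bounds histogram with float bucket counts."""
    def __init__(self, bounds):
        self.bounds = list(bounds)            # explicit upper bounds
        self.counts = [0.0] * (len(bounds) + 1)
        self.sum = 0.0
        self.count = 0.0

    def observe(self, value, sample_rate=1.0):
        weight = 1.0 / sample_rate            # e.g. rate 1/3 -> count 3.0
        i = bisect.bisect_left(self.bounds, value)
        self.counts[i] += weight              # fractional count in general
        self.sum += value * weight
        self.count += weight

h = FloatCountHistogram(bounds=[10, 100])
h.observe(42, sample_rate=0.37)  # effective count ~2.7, not an integer
print(round(h.count, 2))         # 2.7
```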
A
A
We could introduce a new histogram type which is equivalent to the one that we've got today, which is DoubleHistogram with integer counts, but it's the DoubleHistogram with double counts, and with a DoubleHistogram of double counts we can do two things, both of the problems I mentioned earlier. You can handle sampled counts correctly: so if I get, you know, a sample rate that's a non-integer inverse, I can correctly count some, you know, 2.7 counts.
A
I can also translate a Prometheus summary into equivalent information as a histogram with variable boundaries and variable floating point counts. In other words, I construct a histogram with one bucket per quantile, or percentile, and my boundaries happen to be exactly where the percentiles land, and my counts, you know, have to be adjusted to match. There is a way to represent Prometheus summaries as histograms with double counts, is all I'm trying to say.
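A sketch of that translation, under the assumption that each quantile value becomes a bucket boundary and the inter-quantile probability mass, times the total count, becomes that bucket's (generally fractional) count; `summary_to_histogram` is a hypothetical helper, not part of any SDK:

```python
def summary_to_histogram(quantiles, total_count):
    """quantiles: sorted (q, value) pairs, e.g. [(0.5, 12.0), ...].
    Returns explicit bucket bounds plus float counts (last is overflow)."""
    bounds, counts, prev_q = [], [], 0.0
    for q, value in quantiles:
        bounds.append(value)                        # boundary lands on the quantile
        counts.append((q - prev_q) * total_count)   # fractional in general
        prev_q = q
    counts.append((1.0 - prev_q) * total_count)     # mass above the last quantile
    return bounds, counts

bounds, counts = summary_to_histogram(
    [(0.5, 12.0), (0.9, 30.0), (0.99, 55.0)], total_count=1000)
print(bounds)                          # [12.0, 30.0, 55.0]
print([round(c, 6) for c in counts])   # [500.0, 400.0, 90.0, 10.0]
```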
F
F
The problem is they say that you can send data like quantiles calculated over the last five minutes, over random intervals, so it's essentially a moving window there for the temporality that is sending the summaries, which is not passed through any of the metadata that they have, and it's unknown from the data model perspective. Well...
A
Yeah, you know, I mentioned that one of the big points of that document that I shared earlier is about overlap resolution. So one thing you could do here is explicitly allow overlap. In that case, you can't merge these points; they overlap. But, you know, one Prometheus summary covers five minutes, and then every minute you report a new summary. We could define that as, you know, a five-minute window, and it's overlapping, and you just can't merge it as a result.
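The "can't merge overlapping windows" rule above comes down to a simple interval check; a minimal sketch (the function name is hypothetical):

```python
def overlaps(s1, e1, s2, e2):
    """Half-open windows [s1, e1) and [s2, e2) overlap iff each starts
    before the other ends."""
    return s1 < e2 and s2 < e1

# A 5-minute summary reported every minute overlaps its neighbours,
# so consecutive points must be kept side by side, not merged:
assert overlaps(0, 300, 60, 360)       # one minute later: overlapping
assert not overlaps(0, 300, 300, 600)  # adjacent windows: mergeable
```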
A
I just wanted to throw that half-baked idea out, because something about Prometheus summaries is going to be left on the table if we aren't careful. Yeah, what...
F
What do you think is going to be left? In my opinion, the summary support in our data model is just a trade-off to support legacy systems, and for summaries we should be just a pass-through system. We should not focus on producing them. Definitely we should not produce them. As Prometheus... even OpenMetrics says: don't use this if you are a new system, don't produce these. This, I'm not able to be...
A
But, but I don't think that even the Prometheus community agrees on that. I don't think Brian Brazil agrees on that, and so I don't know. I'm willing to state this in a document; maybe that's what I should do. It's basically to say that, instead of having a summary, we could have a new type of histogram that translates, exports into the Prometheus summary on export.
A
You know, for example, and it would help us with the StatsD sampling case, is what I'm trying to say.
F
But what about... how do you... we need to prove that you are able, without losing precision or information, to convert from a Prometheus summary and back to a Prometheus summary. Yeah, I know.
A
F
But for the moment, I think we... okay, you think that will imply us removing the current summary and just having the new type. I will, I will make that proposal.
A
It's not a high-value proposal to me, but yes, I think that that's right. Okay, it would help us with StatsD; I'll write that up. You know, the other thing that we have not discussed in this hour... thank you for your list of protocol issues. And wait, I know there's an agenda item that I'm trumping right now. What's yours, though? So let's keep talking. Sorry, I lost my thought, never mind. Here, what are we talking about?
F
The next one is what you said: multiple bound types in a histogram versus different histogram types. Initially I thought that it's a good idea to have multiple bounds, to limit the types that we expose, but I ended up proving to myself that I was wrong, and the reason is...
F
If, if we see points with different bound types in the same metric, how the heck do we combine them? How do we ensure... because usually backends have a type at the metric level, not at a time series level, for a bunch of backends and stuff. So I ended up thinking that it will be much simpler, even for the exponential PR, even for everyone.
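The combining problem can be made concrete: merging two explicit-bounds histograms is only well defined when their boundaries match exactly, so a metric that mixes bound types has no safe merge. An illustrative sketch (`merge_histograms` is a hypothetical helper, not an OTel API):

```python
def merge_histograms(bounds_a, counts_a, bounds_b, counts_b):
    """Merge two explicit-bounds histograms by adding bucket counts.
    Only defined when the boundary lists are identical."""
    if bounds_a != bounds_b:
        raise ValueError("cannot merge histograms with different boundaries")
    return bounds_a, [x + y for x, y in zip(counts_a, counts_b)]

bounds, counts = merge_histograms([10, 100], [1, 2, 3], [10, 100], [4, 5, 6])
print(counts)  # [5, 7, 9]
```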
F
If, if we simplify and support only one type of histogram, no int, no double histogram: we have one histogram, and then we have an exponential histogram and maybe a linear histogram, and that's it. Like, explicit histogram, exponential histogram and linear histogram, because they are very different and the operations on them are kind of different. I mean, linear and linear is similar, with the... I...
A
I like that. I think, I think you're making a strong argument. We have to be careful how we write it down. I do remember the thing I was going to say a minute earlier, which is about min and max in a histogram.
A
Everyone wants it, but we don't quite know how to spec it out, and the reason that we know of is that when a histogram is cumulative, knowing the min and max in the strictest sense is not very useful. And so we sort of know that there's a use case where you want to have the histogram be cumulative, but the min and max be sort of more like a delta temporality, and I don't know how to encode this. I'm starting to think about...
A
There's a connection with this external labels concept, and there's something about how, when I report as an SDK, I report a batch of metrics, and every single one of those metrics has the same current time. And, like, why do I need to have the same current time on all of them? What I could do is have, like, a label...
A
That's more like meta, about the report, saying: I am making this report covering a window of time from my last report, regardless of whether it's cumulative or delta. And then you could use meta information about the report to infer what the min/max range was actually covering. So min/max would effectively be treated like a gauge, and you'd have to do some inference to figure out what time range it actually covers.
A
F
I, I think there is demand, but for me, for me personally, I'm still struggling to understand if people want to combine min/max with buckets, or if this min/max is actually a different aggregation, where people want to look at the sum, average, and min and max for that thing. So, so again, I've never seen this in practice. Maybe I should learn more, but for me, I'm struggling to understand if this min/max actually belongs to a histogram, or is a completely different aggregation or different metric that people want to look at.
A
We're at time. That raised a lot of questions for me, because there's another way to represent summaries, which is, like, make one instrument per quantile and one instrument per max and one for sum and one for count. And we have no way in our protocol of grouping, like, a bunch of... a family of metrics, if you will; that's the OpenMetrics term. So if we had a family, then you'd have a histogram and couple it with a min and a max gauge, or something like that.
F
The last one, it's a bit of trolling for OpenMetrics, but can anyone read the OpenMetrics spec that I linked and tell me whether they actually support deltas or not? It's about resetting and the ability of resetting, and it's exactly like us: if you reset, you reset the start time. And it seems to me that Prometheus is actually not implementing OpenMetrics for this, because they do not respect this: their counters cannot be reset, and they do not respect the created time.
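The reset semantics being described, where a reset also moves the start (created) timestamp forward so a consumer can distinguish a reset from a decrease, can be sketched as follows (`ResettableCounter` is a hypothetical class, not a real client-library API):

```python
import itertools

class ResettableCounter:
    """Illustrative counter whose reset also resets its start time."""
    def __init__(self, clock):
        self._clock = clock           # callable returning the current time
        self.reset()

    def reset(self):
        self.value = 0
        self.created = self._clock()  # new start time marks the reset

    def inc(self, amount=1):
        self.value += amount

ticks = itertools.count()             # fake clock: 0, 1, 2, ...
c = ResettableCounter(lambda: next(ticks))
c.inc(5)
first_created = c.created
c.reset()                             # value back to 0, created moves forward
print(c.value, c.created > first_created)  # 0 True
```

A consumer that sees `created` move forward can treat the series as reset rather than as a negative rate, which is the behavior the spec question hinges on.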
F
A
But there is a question about reset handling in my data model question, because there's always some ambiguity, and currently Prometheus takes the position that you are going to lose counts that are reset. You know, if my process is crash-looping between scrapes, you never see those counts; they just vanish. Whereas if... there's got to be some translation from Prometheus into OTLP, and we just have a gap instead. So either there's an ambiguity or there's a gap, and I've talked about that.