From YouTube: 2021-05-04 meeting
A
Yeah, if you want, when the new camera comes I'll send this one to you.
A
Okay, so I noticed in the SIG specification thing that a bunch of people are signed up here with their names. Do they belong here?
A
All right, we'll get started in just a moment here.
A
So, just to remind everybody: last week we talked about, I think, the last remaining blocking issue that didn't have an attached PR. We still need to actually get the PRs submitted, we need to get an OpenTelemetry release, and then mark metrics stable. Those are still two to-dos, but I think we've gone through all the issues here, and this meeting we're going to talk about next steps and what we want the SIG to focus on next.
A
So I threw some brainstorming thoughts here in terms of topics. If anyone else has any topics, please feel free to add them, but with that, let's get started with blocking issues. Okay, so first off, the one blocking issue was the start time description in the proto. If I click over to this, can you all see me presenting the GitHub issue?
A
Am I sharing this correctly? Yes? Great. Do I need to make it larger? It's fine for me? Okay, cool. So this one, I think, has two approvals so far. This was a comment about how to interpret start_time_unix_nano that we needed to address prior to stabilizing the protocol. Josh and I had a little bit of back and forth, because there were some additions here around gauges that I highly recommend people read, and other than that.
B
Of course, there's always more we could be writing, and I like the list I saw for brainstorming next discussions. I think there's something about staleness markers and time that we haven't fully captured, but I'm not sure anyone in the world has captured it - I'm not sure Prometheus has. That's an open question to me.
C
This description really helped solidify for me an understanding of how to deal with start time determination, so that I could fix some issues that were in the Prometheus receiver in the collector already, which has helped us get to better compliance with the Prometheus remote write compliance test suite.
B
Thank you. I think we're going to need to add more about staleness markers before we really get to the end of that suite, but I'm hopeful that we can mark this protocol now and just agree that we're going to keep writing words about it. Not changing fields about it is what we're after. Yep.
A
I am on board with that. Okay, so if nobody is aware of anything, please take a look at this and let's see if we can get this one pushed through. One question I have - and this is probably for Bogdan and Josh, since I'm not as familiar with the proto repo - is that this one doesn't get auto-assigned reviewers like the specification repo. Is that an oversight? Is that on purpose? Is there something we can do to help out this repository, or is it not important to us?
D
I think it's just that people did not have a chance to set up the GitHub action, but you can copy-paste the config file from the specs repo and it will do that automatically.
B
I suppose we could create code owners for spec approvers and metrics approvers. It seems like it's starting to be real work, though. Yeah, I'm not convinced it's been an actual problem: Bogdan has been mostly the one merging PRs, and I've merged a few.
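A minimal sketch of what the CODEOWNERS route could look like, assuming GitHub's standard CODEOWNERS syntax; the paths and team names are placeholders, not the repo's actual teams:

```
# .github/CODEOWNERS (hypothetical sketch; team names are placeholders)
# Route everything to spec approvers, and the metrics protos to
# metrics approvers as well.
*                              @open-telemetry/specs-approvers
opentelemetry/proto/metrics/   @open-telemetry/specs-metrics-approvers
```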
A
Mostly, what I want is to have an idea, so that when it's "hey, what does this PR need to make progress or to get submitted, or who do we need to talk to?", there's an owner on the PR who understands what is necessary to get it merged - and to make sure that's not the author of the PR.
A
That's like the minimum of what I want; it doesn't have to be anything too complicated. And yeah, if you look at this repo, basically Tigran and Bogdan have been owning pretty much the whole thing, and that's fine. I can set it up where it's just Tigran and Bogdan to start with, and we can go.
A
Yeah, okay. Oh, cool, thanks. Actually, I'm going to assign you as the owner right now for this one, Bogdan. Okay, okay.
D
You don't necessarily need to make only us code owners. You just make the thing assign to me or Tigran, because with code owners, we are not going to be the only ones reviewing this - code owners means reviewing. Okay? I want you, George, and Josh to review things, but I will be able to do the merging stuff.
A
Yep, sounds good. Okay, cool. So then the next blocking issue: this one is marked as blocking, but it is a verbiage-related issue. Josh has a pull request around re-aggregation-safe attribute removal that we wanted to make sure we accounted for, and that we don't need to change the protocol for, before we mark the protocol stable. This is reviewed but not approved yet, and I think.
A
I think I asked for non-important changes in my review, but for folks who haven't seen this, this is 1649 on the spec. I think it's worth taking a look at this and making sure. I would like to see this get more than the mandatory approval - at least two approvals from metrics approvers - before we mark the protocol stable.
A
Just because that's how this bug was assigned. So, just as an FYI, I will go through after this meeting and check my comments. I think they were all non-blocking, so I can approve, but just as an FYI to everybody there. And then I wanted to have a quick discussion on what else we need to do to get OTLP 0.9 out and mark metrics stable in OTLP 0.9.
D
Okay, I was thinking about the up metric, if we want to spend one session to discuss a bit about that. There is a PR right now in the collector where I completely disagree with the way it is implemented, and I was hoping to have more clarification on that.
B
And I have questions about staleness markers, which I think end up being the same type of question: how those are implemented. It seems like we have an implicit way, and an explicit way is potentially needed. The implicit way is that you can put gaps into a time series by adjusting start times and finish times, and then the consumer of that data can see that there was a missing segment in time.
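For illustration, a sketch of that implicit gap, assuming today's collector pdata API (which postdates this meeting): two cumulative windows that do not abut, so a consumer can infer the series was absent in between.

```go
package main

import (
	"fmt"
	"time"

	"go.opentelemetry.io/collector/pdata/pcommon"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

// Sketch of the "implicit" staleness signal: two cumulative points whose
// start/end windows leave a hole, letting a consumer infer a gap.
func main() {
	t0 := time.Now()
	m := pmetric.NewMetric()
	m.SetName("example.requests")
	sum := m.SetEmptySum()
	sum.SetAggregationTemporality(pmetric.AggregationTemporalityCumulative)

	// Window 1 covers [t0, t0+60s).
	p1 := sum.DataPoints().AppendEmpty()
	p1.SetStartTimestamp(pcommon.NewTimestampFromTime(t0))
	p1.SetTimestamp(pcommon.NewTimestampFromTime(t0.Add(60 * time.Second)))
	p1.SetIntValue(10)

	// Window 2 restarts at t0+180s: the series was implicitly
	// absent between t0+60s and t0+180s.
	p2 := sum.DataPoints().AppendEmpty()
	p2.SetStartTimestamp(pcommon.NewTimestampFromTime(t0.Add(180 * time.Second)))
	p2.SetTimestamp(pcommon.NewTimestampFromTime(t0.Add(240 * time.Second)))
	p2.SetIntValue(3)

	fmt.Println(m.Sum().DataPoints().Len(), "points with a gap between them")
}
```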
B
Presumably that becomes not-up, or stale, or something like that. Whereas in Prometheus remote write, you're putting in these NaN values to say "I wasn't here", and that's independent of the start time continuity. We either need to say NaN values are exactly what Prometheus says they are, or we need to do something else. Think about it. But please say what the problem was in the collector.
D
The collector PR is different. For me, it's about who is emitting the up metric. People seem to do it wrong, because they want the collector's own telemetry to emit an up metric, and I'm like: no, it's the scraper. If you are exposing this as Prometheus, it's the scraper that will emit that; you should not emit that.
D
Look at the bottom of the comments, because this PR started by generating it via... So, to clarify, in the collector we have two things. There are the metrics pipelines - we scrape metrics, put them in pipelines, and export them - and we have our own telemetry.
D
It is very, very strange to me; the up metric should not be generated by our own telemetry. The objective, in my opinion, should be the scraper: whenever it scrapes things, it should emit this metric on the same pipeline, together with the same data, not via a different channel. So we need a bit of design, or a drawing of the schema of where this up metric is generated.
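A minimal sketch of that arrangement, again assuming today's collector pdata API (which postdates this meeting): the scraper appends a synthetic up gauge into the same batch as the scraped data, so it rides the same pipeline rather than a separate telemetry channel.

```go
package main

import (
	"time"

	"go.opentelemetry.io/collector/pdata/pcommon"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

// appendUp adds a synthetic `up` gauge (1 = scrape succeeded, 0 = failed)
// to the same pmetric.Metrics batch as the scraped data. Sketch only;
// the real Prometheus receiver is more involved.
func appendUp(md pmetric.Metrics, ok bool) {
	rm := md.ResourceMetrics().AppendEmpty()
	m := rm.ScopeMetrics().AppendEmpty().Metrics().AppendEmpty()
	m.SetName("up")
	dp := m.SetEmptyGauge().DataPoints().AppendEmpty()
	dp.SetTimestamp(pcommon.NewTimestampFromTime(time.Now()))
	if ok {
		dp.SetDoubleValue(1)
	} else {
		dp.SetDoubleValue(0)
	}
}

func main() {
	md := pmetric.NewMetrics()
	appendUp(md, true) // after a successful scrape
	_ = md
}
```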
B
I would like to say the reason I think that's the case is that we want temporally aligned data coming out of this scraper. And just to get back to the prior bullet point: this PR of mine, 1649, talks about re-aggregation, which follows on another PR that talks about temporal alignment, and all this was to get to the point where we can say: look, it's very easy to compute an up metric, because you're going to have your aligned data. And now I'm trying to actually argue, or contest what you said about the definition: I think we want to get to a place where we can push data that comes out of a Prometheus remote write exporter.
B
Looking the same as Prometheus would have done - and there's an open issue about this. I don't want to say much more now, but I think, logically speaking, in a push world there are two pieces of information. One is what service discovery says, and one is what the process itself pushes, and you want them to be temporally aligned. Then you can cleanly combine service discovery state with process state, and this is what I've called the liveness metric and the presence metric.
B
We can choose better names if we want, and that way we have a world in which you can push data or pull data, and they both compute an up metric according to this definition. So I.
A
I want to cut this discussion into parts, okay? So Bogdan's concern is with the current implementation: in terms of the current Prometheus up metric as encoded in OTLP, who owns generating it - is it the exporter or is it the receiver? And I'm on Bogdan's side here: when it's a pull-based scraper, whoever's doing the pull has the best shot at getting everything consistent and understanding what the hell is going on, and they should own the up metric. Totally agree with that.
A
I think that is outside the scope of the data model specification, in the sense that that's a discussion on how to best implement Prometheus.
A
So I want to challenge that this would block the OTLP protocol being stable, and here's why. I think step one is that the current up metric can be encoded as, say, a gauge, with the way we have it specified - or we might need to specify a new metric type for it, an up-down counter, sure, fine, we might need to specify a new metric type for it. I think both of those options can be done in a compatible way with what we have today.
A
So I want to call that out and see if there's at least agreement there, Bogdan. I do think your issue needs further discussion; I think it needs to be in the Prometheus working group, and we need to address it there. One of my fears with this SIG is that we might end up needing to join the Prometheus working group's data model discussions with our data model discussions and do some transferred things there.
A
I also want to account in this for statsd and push. So, Josh, your point - that second thing - I think we do need to dive into that at some point; I don't think we need to dive into it immediately. And I do think there's this notion of "can OpenTelemetry have a push-based model that is compatible with Prometheus" that we need to dive into in the long run, but I want to put that... yeah.
B
And it knows it's present, so it's doing this future thing that I call a join, which could be done for push data internally, and it's perfectly fine to convey that through OTLP. I believe it's an up-down counter; we can debate that. And in the future we can replace that: the Prometheus receiver can just put this in.
B
I'm envisioning a world where the Prometheus receiver somehow doesn't have to do service discovery - so it's a very future world - where the Prometheus scraper just knows it has a target to scrape, not that it was part of service discovery. It doesn't know it's present; it just knows whether it's alive or not. It can then push this aliveness state, and then the same exporter can do this.
B
Join, that we're talking about, way down in the future: the push data comes in, the scrape data comes in, and you look at service discovery, which you may have gotten in a more scalable way, and then you output the Prometheus remote write that's been joined together. That gives you all of your up metrics correctly, but you can also put in staleness markers. You can't separate this conversation about up from staleness markers.
A
Okay, fair. There's also the issue of, in the exporter, when you do that join: do you know where the data came from initially? Right now I think there's this assumption that I have, which could be wrong, that you're Prometheus-exporting what your Prometheus importer took in. As long as I have nothing in between that adds other types of metrics, I have full compatibility with Prometheus, right? And that's the current scope of Prometheus compatibility phase one.
A
Almost everything we're talking about would be like a phase two, right? And to Bogdan's point with that particular comment on the collector: to the extent that we're ready to deal with phase two, we should be generating the up metric in the receiver. Because I don't think we're close yet, right? I think.
B
We have to specify what a NaN value means, and I think the Prometheus receiver also has to generate the staleness markers when the target disappears. And then I think that means we're going to specify, for all OTLP consumers, that NaN values are especially meaningful: they mean "no data here" - explicitly no data here, even in an unbroken sequence of observations - which is what Prometheus is giving us.
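For reference, Prometheus distinguishes an explicit staleness marker from an ordinary NaN by bit pattern. A small self-contained Go sketch, using the constants documented in Prometheus's pkg/value package:

```go
package main

import (
	"fmt"
	"math"
)

// Prometheus distinguishes an ordinary NaN sample from an explicit
// staleness marker by its exact bit pattern (see prometheus/pkg/value).
const (
	normalNaN uint64 = 0x7ff8000000000001 // an ordinary NaN
	staleNaN  uint64 = 0x7ff0000000000002 // the staleness marker
)

func isStaleNaN(v float64) bool {
	return math.Float64bits(v) == staleNaN
}

func main() {
	stale := math.Float64frombits(staleNaN)
	fmt.Println(isStaleNaN(stale))      // true: explicit "no data here"
	fmt.Println(isStaleNaN(math.NaN())) // false: just a NaN value
}
```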
A
Okay, yeah, I do agree that that needs to get specified. So I guess, coming back to blockers: do we see staleness markers as a blocker for marking this as stable, or is that something we can add post-0.9? Is this a.
A
Yeah, yeah. Well, I want to finish on blocking issues to get 0.9 out first, before we move on to what's next. So: is there anything we need to deal with for OTLP 0.9, to get that out and get metrics - what we have defined today - marked as stable? I just want to know if there are any blockers to that.
B
I think it's a fair question, though, to consider whether this stuff about staleness markers is going to really interrupt us in the near future. First of all, I think it's really good that we did that change to a number type instead of separate integer and floating point, because if you had an integer time series, there would be no way to put a NaN value in it. Once you have a mixed number time series, even if it's integers, when the NaN value comes in you're going to have to have a floating point.
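A sketch of that consequence, again assuming today's pdata API: a number data point carries either an integer or a double, so a NaN marker in an otherwise integer series forces the double variant.

```go
package main

import (
	"fmt"
	"math"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

func main() {
	sum := pmetric.NewMetric().SetEmptySum()

	normal := sum.DataPoints().AppendEmpty()
	normal.SetIntValue(42) // ordinary integer observation

	// A NaN has no integer representation, so a staleness-marker point
	// in the same series must use the double variant of the value.
	marker := sum.DataPoints().AppendEmpty()
	marker.SetDoubleValue(math.NaN())

	fmt.Println(normal.ValueType(), marker.ValueType()) // Int Double
}
```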
B
So I want everyone to think about that, because it might mean that... well, that's one of the reasons we needed this mixture of integer and floating point, and I think that alone justifies it.
B
But for summary and histogram types, there's no scalar that you're putting the NaN value into, and we do need to talk about how we represent a not-present histogram. I can propose that we just use the sum floating point field to represent it: if the sum of a histogram or the sum of a summary is NaN, that means the point was not present. But that's writing more text in the proto still, and getting agreement.
B
It gives us a much better story for Prometheus compatibility, but it does leave this question of: you're going to have to deal with NaN values in your receiver for the rest of time, or something like that.
B
From my perspective as a vendor: the Prometheus world has this pull stuff, and they are very explicit about when time series disappear. Often in the push world you've got a timeout, or the average gap between points is assumed to be your regular reporting frequency, and you kind of guess when series disappear. We're never going to pass a Prometheus compatibility test without having this firm knowledge that something was not present, and it's not clear that we will win over the Prometheus community without having this explicit notion, definitively, of something not being present.
A
Okay, yeah, totally agree. All right, so let me bold things that we want to talk about next as big, important things. But regarding getting an OTLP 0.9 out with the label-to-attribute changes: again, I want to make sure we can start kicking off that work, because I think it's going to take a while. So I guess there are two questions here. One is getting what we have already done into OTLP and getting a 0.9 release.
A
So we can kick off the label-to-attribute migration. And then the second thing is: can we mark what we have there as stable, and do this in a non-blocking way, so people can start looking at the protocol and saying "this is something I can depend on and generate"?
A
Maybe we split those two questions. But the reason I keep focusing on getting OTLP 0.9 out is that I know there's going to be some churn to get everybody switched from labels to attributes, and I want to make sure we have a released OTLP that has attributes, so folks can start depending on it.
A
I guess, let's talk about next steps and next discussions, and we can talk about staleness markers. I won't try to push for marking 0.9 metrics as stable until we have a bit of a handle on staleness, since I'm not sensing there's huge consensus or understanding of whether or not handling staleness will be a breaking change.
B
I recall in earlier work on the API that we tried to spec out that NaN values could not be set in the API. However, I think that contradicts the Prometheus API contract. NaN values are numbers in floating point, and that's the problem we're facing: they're not numbers, but they're valid values as floating point numbers.
B
How are we treating them right now, today? Well, I don't know what we are doing, because the protocol just lets you put it in there. So it's possible that someone's doing something correctly. I can say, from my work on this Prometheus sidecar, that when I first encountered the code I just saw this if-block that said "if it's a NaN, skip it", and of course I thought to myself.
B
Oh, this just means there's an out-of-bounds value check happening. But really it was: these are going to be in there explicitly, put there by Prometheus, and to put that into an OTLP or push format I was just dropping them. That had a legacy related to Stackdriver, but OTLP is practically the same.
A
Yeah, again: if you're asking how we deal with NaN right now, it's unspecified, so you do whatever you want. I guess that's what I'm saying: we, meaning OpenTelemetry, have not said what NaN means; there are no semantics to it. So you deal with NaN however you want as someone consuming this data. If someone gives you a NaN, you just guess what the hell that means. Right now there's no definition.
A
Right,
okay,
I
feel
like
we're
not
we're
not
reaching
any
kind
of
consensus
here,
so
I'm
gonna
call
some
time.
B
This should be discussed with the Prom working group. Perhaps it's a proposal. I have a strong feeling about it, but it doesn't sound like anyone else thinks much about it.
A
All right, so let's do this: how about we make a...
B
But that's...
A
The question, then. Yeah, it's worth the discussion. So: does anyone feel strongly about holding the protocol for the staleness marker discussion? Let's just ask.
A
Nobody? Okay, cool. So I will follow up with, I guess, Bogdan or Tigran on getting OTLP 0.9 out, and let's make Prometheus compatibility and staleness markers our next big discussion for next week, given what we talked about here. Other thoughts I had on next things we can talk about: eventually, statsd compatibility, and the notion of histogram bucketing and the OTEP in flight.
A
If I recall correctly, as of this weekend it was still sitting in pull request form. I don't know if that got merged - hold on.
F
So, I apologize, I dropped my phone as I was trying to speak. But it just related to - I guess I would say there wouldn't be an issue with pushing it back. You did mention talking to the working group, the Prometheus working group, so potentially the whole question of how NaN is used could be discussed there, and if there's any opposition or what have you, it could be brought up then.
A
Okay, so you're suggesting we take this to the Prometheus working group and see if there are major concerns. So the thing I want to avoid, again, is like.
B
I worry that the Prometheus group is not going to be fruitful for us, because they have very clear semantics about what staleness markers are. It's very well documented; it's put into their PRW spec at this point. So there's not much we're going to learn, except that they need them and we can't be compliant without them. There's one point you could try to argue with; I don't feel like doing that, but I think that's fair. There's actually.
B
I've kind of hinted at it. One is the implicit way, which is: put gaps in the start time and end time of points in your stream, and there's got to be nothing there. But it's much more implicit, and the question is whether we care, or whether we're specking it out enough. And I think there's a question about whether anyone will ever want to put as strong a guarantee on pushed data as they have on pulled data today with Prometheus. That's connected here: I'm not sure I trust an implicit time series gap as much as I trust an explicit NaN value.
B
Right, they're independent signals. That's why I'm worried that we can't use implicit gaps to give explicit signals. Brian said this at last week's working group: they're independent, and that's the thing I don't see a way around.
A
Yeah, well, I'd also argue that when you have push-based metrics, I don't think we should push for explicit missing-data signals from a push-based thing, because again, inherently with push, right.
B
But I think more work is needed. We should talk here about things like empty time-range points that could be used; otherwise I might lose exactly the timestamp when a Prometheus explicit staleness marker was put in.
B
How do I know exactly when a Prometheus timestamp happens? So we might be able to solve the problem for push data, but then we won't be able to solve the problem for Prometheus remote write converted to OTLP and converted back to Prometheus remote write - unless we've got some really strong conventions, perhaps. That's possible; I don't know.
A
As a group, we have some discussions, we do some proposals. I can collect information and try to put together one of those "here's the state of the world and the decisions we need to make" for next week. But whether or not we are willing to break the protocol to resolve this is kind of a thing, like.
A
I want to say that at this point in time we've consolidated on a protocol, and I think we can solve this issue without breaking the protocol. And I think we can take that stance going forward for all issues that we take on: we will have additive changes to the protocol and non-breaking changes to the protocol, but we won't.
A
Okay, so in the shape of the discussion going forward: proposals that say "let's redo all of our oneofs and metrics and change the names of everything" are automatically abandoned. We can't do that. What we have stays; new proposals, if they're not breaking, are acceptable.
A
If they are breaking, we immediately reject them as a SIG and say this is invalid, because we can't break things anymore. Sound good? Okay, cool. So next week will be up metrics and staleness markers, and I think it's going to be a pretty lively, awesome discussion; I look forward to putting together work for that. If I were to say what would be the discussion right after that: do any of these things stand out as things we need to start talking through and working through as a SIG?
D
I think we should go back to that histogram discussion at one point, yeah.
B
I agree, histograms would be next for me. Statsd and raw data things are connected, and I think that's next; that's actually connected with multivariate. If anyone's interested in any of those three topics, maybe DM me on Slack. Today there's a meeting that I've set up with one person who's interested in multivariate metrics, and I don't think it's worth having a group meeting, but if you're interested, let me know. Is this what you're composing, Joshua?
A
Yeah, it's interesting. I think maybe I'm going to throw on exemplars as its own topic - I can't ever spell it - which I think we do need to prioritize at some point. It's interesting that right now you can only attach exemplars to histograms in the current protocol, right?
B
That
wasn't
the
intention-
and
I
pushed
for
this
idea-
that
you
could
sample
any
event
stream
and
make
useful
histograms
out
of
them,
and
the
problem
that's
got
hamstrung
is
that
we
need
sampling
documents
and,
like
technology,
is
to
show
people
how
to
do
it,
and
it's
just
not
been
an.
A
At some level, sure. So maybe with the raw data type we can talk about it, but I want to divorce that a little bit from just statsd. So I'm going to split those two topics.
If
that's
okay,
so
the
question
would
be
then,
do
we
think
raw
data
exemplars
I'm
going
to
call
these
raw
data
and
exemplars?
Would
they
be
more
important
than
statsd
compatibility?
Or
do
you
think
statistic,
compatibility
is
is
something
we
discussed
first
and
then
we
discuss
raw
data
exemplars.
B
It's possible to solve statsd compatibility strictly inside the OTel collector without forcing a discussion about the data model. But if you want a nicely factored collector, the statsd receiver just produces raw events, which is what they are - potentially sampled raw events, which is what they sometimes are - and then you have a downstream aggregator that knows how to turn raw events into a histogram, which is sort of what we would like.
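A toy sketch of that factoring (all names hypothetical): a downstream aggregator turning raw statsd-style events into explicit-bucket histogram counts, following OTLP's upper-inclusive bucket convention.

```go
package main

import (
	"fmt"
	"sort"
)

// histogram accumulates raw events into explicit buckets, the way a
// downstream aggregator could consume a statsd receiver's raw output.
// Bucket layout follows OTLP's convention: len(bounds)+1 counts, with
// bucket i covering (bounds[i-1], bounds[i]].
type histogram struct {
	bounds []float64 // upper bounds, sorted ascending
	counts []uint64  // one extra bucket for (last bound, +inf)
	sum    float64
}

func newHistogram(bounds []float64) *histogram {
	return &histogram{bounds: bounds, counts: make([]uint64, len(bounds)+1)}
}

func (h *histogram) observe(v float64) {
	i := sort.SearchFloat64s(h.bounds, v) // first bucket with bound >= v
	h.counts[i]++
	h.sum += v
}

func main() {
	h := newHistogram([]float64{10, 100, 1000})
	for _, v := range []float64{3, 42, 9000} { // raw events
		h.observe(v)
	}
	fmt.Println(h.counts, h.sum) // [1 1 0 1] 9045
}
```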
B
So I think it's possible to defer the conversation about statsd, because in the short term people can just implement the aggregation inside the statsd receiver. It's not specced anyway; there's no standard for statsd. It doesn't matter.
A
Right, because you just use the collector for it and you're fine. Okay, I think that's reasonable. I guess the way I've been thinking about this - and just so you know how I came up with this list of work.
A
By the way, I think we still have some OTLP performance things to do. Oh my god, what the heck! It's a.
A
Yeah, oh my god, okay. For prioritization of work: one, I think we do need to step more into the OTLP performance work that Victor and I were doing and get a consistent way of doing that. And then there's this notion of fleshing out and improving the specification. So when you look at the current spec - this is not it.
A
That's not it! That's not it, man. I'm terrible; I don't even have it open. Here we go. I wanted to talk about this. If you were to look at and read this, ask yourself: if I read the data model spec and I'm implementing an exporter or an importer, do I have enough understanding to know how to import into OpenTelemetry?
A
How do I get the data from A into B? So what I was looking for was missing sections. Just so you know, if you look at the metric data points, gauge and histogram: there are in-flight PRs for both of those sections to get fleshed out. If you look at resources, I think that one might be completely missing and we need to actually flesh it out. Then temporal alignment, external labels, and stream manipulations.
A
Josh has, I think, a series of three PRs that flesh these out, and we talked a little bit about doing delta/cumulative manipulation, and again this gets into that notion in the spec of.
B
It's that there's a desire to push data and to not have to remember every cardinality set you've ever touched. Prometheus forces you to remember all your metric label sets forever, and there's this question about resets, but in a delta-based SDK you should be able to forget data very quickly. It should be OTLP deltas, though, that we're trying to spec out; we don't care about other forms of delta right now.
A
That's cool, yeah. And this delta-to-cumulative is: if I'm running an exporter and I need to consume OpenTelemetry protocol, and it's delta and I'm cumulative, this is a way I can deal with that, right?
B
Yes.
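A minimal sketch of that conversion (simplified types, ignoring resets and series expiry): the consumer keeps a running total per series plus the first observed window start, and adds each delta as it arrives.

```go
package main

import "fmt"

// point is a simplified stand-in for an OTLP sum data point.
type point struct {
	startNanos, timeNanos uint64
	value                 float64
}

// deltaToCumulative keeps per-series running totals so a cumulative-only
// exporter can consume delta OTLP.
type deltaToCumulative struct {
	start map[string]uint64  // first observed window start per series
	total map[string]float64 // running sum per series
}

func newConverter() *deltaToCumulative {
	return &deltaToCumulative{start: map[string]uint64{}, total: map[string]float64{}}
}

// push folds one delta point into the series and returns the
// equivalent cumulative point.
func (c *deltaToCumulative) push(series string, p point) point {
	if _, ok := c.start[series]; !ok {
		c.start[series] = p.startNanos
	}
	c.total[series] += p.value
	return point{startNanos: c.start[series], timeNanos: p.timeNanos, value: c.total[series]}
}

func main() {
	c := newConverter()
	fmt.Println(c.push("http.requests", point{0, 60, 5}))   // {0 60 5}
	fmt.Println(c.push("http.requests", point{60, 120, 3})) // {0 120 8}
}
```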
A
Right. And one of the to-do works I had - oh, I didn't write it down - yeah, the delta-to-cumulative metric prototype. These are things that people can actually take: go implement some of these prototypes and check viability and that sort of thing going forward. Okay, but I think in terms of discussions, this is probably what we're going to be focusing on in the SIG.
A
So we're going to be focusing on this next. I'm going to call time on the planning and next steps. I'm going to reorient this a little bit around the future work that we want to do, so yeah, I'm going to take a shot at reorienting these cards to be a little more geared around this shape.
E
Thank you. So, when I started to think about the scope of the SDK, I started to have these questions. I'm not sure if this is the right place, or the Prometheus working group, or somewhere else, but the scenario is: you have the OpenTelemetry collector, and there's a Prometheus backend periodically trying to pull the metrics from the SDK. You have some callback, but the question is: how would the Prometheus agent trigger the callback in the SDK?
E
That means the collector has to provide a way it can pull the metrics, or we tell the Prometheus backend to not go through the collector but directly pull the information from the SDK. So can someone help me understand: are we trying to solve this problem, or can we ignore it for now, or is it covered by a different SIG?
E
Not sure I understand the question? Okay, so you have the OTLP collector and you have the SDK. In the SDK, you have some callback that is only supposed to be called when the Prometheus backend comes and tries to grab the information. But the OTLP protocol, if my understanding is correct, does not support this pull model. So what I'm going to do - I'm still confused what calls I.
B
I have a way, perhaps, to clarify; I think I understand. In Prometheus land, you do a scrape to get some time series data, but it's a point-in-time type of observation: you get these single-timestamped observations. Whereas with OTLP we've always talked about pushing it, and the points have two timestamps and such; we've talked a lot about temporal stuff.
B
There's a question about whether there's a way to pull a stream of OTLP data, which is what I'm hearing from Riley, and I can imagine cases where you'd want to do that. But it's a corner case, and it's out in the future to me. So: instead of asking someone to push to me, I want to ask them to pull.
B
I want to pull from them a request to stream data. In other words, I still want you to push me the data, but I'm asking you to, and I'm initiating the connection. The reason I might imagine doing this: if you had a scalable pool of collectors and they all needed to get a stream - like, I've talked about using service discovery as just publishing metrics about what services are present.
B
That means every collector in your pool needs to get a stream of metrics, and instead of having the sender broadcast to every service discovered in the pool, you can have every service in the pool pull a stream. It's much easier to pull than to push, sometimes, when there's a service discovery question. So, and can I.
A
Can I check with Riley real quick? Because I thought your question was more about: how do you implement a Prometheus exporter from an SDK?
E
No, I'm trying to understand the scope. I just have some scenario that I'm trying to imagine: should this be the responsibility of the SDK, or the protocol, or the collector? For example, you have Prometheus and someone changed the configuration - they bumped the frequency. Previously Prometheus was trying to get the data every one hour; now it's trying to get the data every one second. And in the SDK you only deal with the OTLP protocol and you send it to the collector.
B
This is what Prometheus calls federation, and I think it shouldn't be like... you shouldn't just broadcast all your metrics, except someone will have a reason to do that. In this way, I think it's a reasonable request; it's just an expensive thing to do. I would say yes to your question, Riley: you just expose the historical data and.
D
If you have a push-based thing, you don't need federation, so I don't think we should talk about that when we are talking about a push protocol. Again, that doesn't mean the SDK should not support exposing Prometheus; we can discuss that, but I don't want to support this.
B
But I also think it's well defined, and I don't think we should say people can't do it; I can imagine when people are going to want to do it. I've personally, in my spare time, been working a little bit on a PR to prove out this re-aggregation work, which essentially means creating a collector pipeline that receives metrics and keeps them in memory. It's most of what you would need to do federation, so I think it will happen; it will be a contrib plug-in.
D
It's not at the client level; it's between multiple services. You know, again, our SDK supports all of them. It's only when you build a chain, or a streaming processing pipeline: you either use pull everywhere or push everywhere in one streaming pipeline. Inside, the client supports both, but I don't see any point in building a streaming pipeline where the first half of the pipeline is pull and then you switch to push, or vice versa.
D
No, you can support both, because you may have two different pipelines. One.
B
What I imagine is that if there were going to be an addition, it would be gRPC streaming added with the same exact request and response protocol, but instead of being called Export it's like an Import type - it's the reverse direction. I'm saying: I want to call you and have you start streaming to me over OTLP.
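A rough sketch of that shape - entirely hypothetical, nothing like this exists in OTLP today; the type and method names are made up for illustration:

```go
package main

import "context"

// ExportRequest stands in for an OTLP ExportMetricsServiceRequest.
type ExportRequest struct{}

// MetricsSource is a hypothetical reverse-direction service: the
// consumer initiates the connection ("pull"), but the data still
// flows as ordinary OTLP-style pushes down the returned stream.
type MetricsSource interface {
	// Subscribe is the imagined "import" counterpart to Export:
	// call me, and I start streaming export requests to you.
	Subscribe(ctx context.Context) (<-chan *ExportRequest, error)
}

func main() {}
```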
B
That's how I would want to do service discovery in my imaginary world. There's a service discovery pusher out there, and it knows how to stream you some OTLP information about what services are up right now - or sorry, present right now. And then all the collectors in the pool will contact the service discoverer and pull, meaning that the service discoverer will start streaming them all the updates from service discovery. Okay, but effectively.
A
From a modeling perspective of OTLP, it's still push. Yes, I'm connecting to it, but it's - what do you call it - Comet, correct? Or Ajax or whatever? I forget what the hell the.
A
Yeah, okay, gotcha. I want to give other people a chance to also agree with that before we end, because before I write "consensus" I want to make sure we have it: OTLP doesn't do pull, and if we have a model like you propose, it wouldn't quite be pull - it's effectively still push. Do we have consensus that the way we define OTLP, it won't be pull, just to answer Riley's question?
A
We should not blend it; we should be adaptable and able to convert to the other, I mean.
A
You have different challenges in both, and we're trying to solve most of the push-based challenges and focus on those. Okay, cool. So we only have two minutes left - did we answer enough of your question, Riley, that you can make progress?
E
Yeah, perfect, thanks.
A
Great, okay. So, going forward, in terms of work that's happening: I just want to again remind everyone that we should probably be doing some OTLP performance work, and a reminder that improving the specification is still happening - those of you who are writing exporters or importers or processors.
A
If you find gaps in the specification, if something's unclear, please open bugs so we can continue to flesh it out, have discussions, and improve; that's just ongoing work of the SIG. So please do that. I'm really curious how it's going - I've heard some good comments from Anthony on some of the stuff Josh wrote, so I'm really glad to hear that. That's the goal of this specification: so you can.
A
You can write these importers and exporters. Next week will be Prometheus compatibility specification work. Thank you, everybody - really appreciate your time. Thanks, Josh. All right, thanks.