From YouTube: 2020-11-06 meeting
D: Hi everyone. I think Andrew's gonna kick off this meeting with a...
A: Roundup. Okay, I have the first agenda item: going over OTel spec P1 issues for metrics.
A: This is, let's say, where we're at right now: 12 to-dos, three in progress, 11 done. We have another one in progress, up one from last week. You can see the status in the GitHub project for the OTel org, as usual.
A: So we are finishing up the remainder of the non-metric stuff for the spec, but three of them are in progress, and we started on triaging the collector SIG issues. We have a label related to metrics that is relevant for this SIG as well, where we've identified, so far, 13 open issues in the collector SIG, in the OTel collector, that are related to the spec metrics, of varying priority levels. Actually, yeah, P2s and P3s; I don't think there are any P1s there.
A: This is not done yet, though. As you can see, there are 143 open issues, and I think we've scrubbed through 20-some-odd, 25 or something like that, in this repository. And then the collector-contrib also has issues related to spec metrics, some of which we've identified.
A: And this one is also in the process of being triaged as well. We've only gone through about 20-something of the 78 open issues.
A: I'll be finding two more times next week. These are ad hoc; these aren't regular, like every single week we're going to be going at this. I think we just need, by mental calculation, about five more hours of this, unless we can parallelize somehow. If we do get more people, we'll parallelize; maybe we can knock this whole thing out for the collector SIG issues all next week. I'll communicate in the Gitter channel.
A: I've been trying to keep at least some of the times in the morning, in order to be okay for European time zones.
A: So look out for that this Friday. Or, today's Thursday, tomorrow's Friday; I don't know, we might go over some of it during our triage session tomorrow, after we get through the spec SIG stuff. So that's another opportunity to do the triage, but I'll only work on it if we've got quorum, which is if we've got the maintainers from the collector there.
D: I think that's helpful, Andrew, if you continue sharing. I just edited my own issues to move them to the bottom of this list, so that we could have other voices first, and I hoped we'd talk about the Prometheus receiver, like, front and center here. Aman, would you like to take on that topic?
G: Oh yeah. So I kind of discussed this in the collector SIG yesterday, but Josh, I was wondering if you had a bit more context on this issue, having looked through it. From what I understand, the reason that counter data gets reported as gauge is when you have delta metrics being exported by the Prometheus exporter: those values, if it's a delta, can go up and down.
G: Hence why it's not monotonic. So from my understanding, the only way to fix that is for only cumulative metrics to be going through the Prometheus exporter. There's been a spec issue about this, which says that any OTLP exporter should by default export cumulatives, but I think the only way to completely resolve it is to ensure that only cumulative metrics are going into the Prometheus exporter. Correct me if I'm wrong, but that's kind of my understanding of it.
D: I have to say, this issue is so ancient to me that I'm not sure it's even relevant at this time. I think there may have been, at the time, some confusion of my own, also possibly confusion stemming from an OpenCensus translation that was involved. In any case, to answer the question most directly: we do want OTLP exporters to use cumulative by default, so that this situation won't happen, and the question then becomes, what should a...
D: What should a Prometheus exporter do when you get delta data points in? And this would be the case if you had, say, a statsd receiver and a Prometheus exporter, which is a pretty reasonable configuration. The last time we discussed this, which I think was in the Tuesday morning spec SIG, I was able to mention briefly how I feel like the thing we're missing here is a discussion about validating your configuration, to say:
D: If I'm going to set myself up for a certain type of export, I need to make sure that I'm talking to a single endpoint; my destination is a single thing. Therefore I'm able to export deltas, because I know I have a single entity receiving my output. As long as I'm sure that I'm exporting to a single entity, I can rely on that single entity to correctly convert deltas into cumulatives.
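(A minimal Go sketch of the single-writer delta-to-cumulative conversion described above; the type and function names are illustrative, not the collector's actual API.)

// deltaToCumulative accumulates delta points into running cumulative
// totals, keyed by series identity. This only works when a single
// entity receives every delta for a series, as discussed above.
package main

import "fmt"

// seriesKey identifies a time series; illustrative only.
type seriesKey struct {
	name   string
	labels string // canonicalized label set
}

type accumulator struct {
	totals map[seriesKey]float64
}

func newAccumulator() *accumulator {
	return &accumulator{totals: map[seriesKey]float64{}}
}

// addDelta folds one delta point into the cumulative total and
// returns the new cumulative value for export.
func (a *accumulator) addDelta(k seriesKey, delta float64) float64 {
	a.totals[k] += delta
	return a.totals[k]
}

func main() {
	acc := newAccumulator()
	k := seriesKey{name: "http.requests", labels: `{method="GET"}`}
	for _, d := range []float64{5, 3, 7} {
		fmt.Println(acc.addDelta(k, d)) // 5, 8, 15
	}
}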
H: Again, no magic yet; the user will have to manually configure things to work correctly. Based on what you say, no metadata in OTLP or anything like that. We will just tell users: hey, if you want to have these exporters output deltas, configure a single endpoint here, configure this processor, and everything will work. Let's not do magic yet, and then we can start discussing how to do something more magic, like what you describe.
H: But what we'll try to do is provide a way for users, and we'll have to explain this problem, and probably explain to users how to fix it and how to optimize for it. That being said, okay, so that's solving the problem for delta counters. By the way, when people see counters, or when they talk about counters in OTLP, they have to understand that we call two things counters: one goes up and down, and one only goes up, or whatever, is just a counter; hence the UpDownCounter.
H: Even though it is a sum, naturally that sum will be a gauge no matter what, because it's going down, which Prometheus does not allow for what they call a counter. So even though you see in our code base that we will transform sums that are not monotonic into gauges, that's not necessarily a bug; it's actually the correct thing to do. The third thing is the summary, which I think people from AWS are actively working on fixing; there is a PR up for review for that, and so on.
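(A compact Go sketch of the mapping just described, under the stated rule that only monotonic cumulative sums surface as Prometheus counters; the types here are illustrative stand-ins, not OTLP's.)

// promType classifies how an OTLP-style sum would surface in
// Prometheus exposition, per the rule discussed above.
package main

import "fmt"

type sum struct {
	monotonic  bool
	cumulative bool // false means delta temporality
}

func promType(s sum) string {
	switch {
	case s.monotonic && s.cumulative:
		return "counter" // only this combination is a Prometheus counter
	case !s.monotonic:
		return "gauge" // non-monotonic sums can go down: gauge, not a bug
	default:
		return "unsupported" // monotonic delta: convert to cumulative first
	}
}

func main() {
	fmt.Println(promType(sum{monotonic: true, cumulative: true}))  // counter
	fmt.Println(promType(sum{monotonic: false, cumulative: true})) // gauge
	fmt.Println(promType(sum{monotonic: true, cumulative: false})) // unsupported
}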
D: We were outputting them as gauges, and there are several cases: there's one where you get delta points, and there's another question that comes from users who have used an UpDownCounter or an UpDownSumObserver, and I just probably don't want to go into the details of all those questions right now, as long as there are issues filed for them. I think what you said is that we assert that gauge is the only valid way to handle a delta in the Prometheus exporter context.
D: It is the correct thing to do if you are dealing with a sum, a cumulative, but it's questionable: if you're going to drop points, because you think it's a last-value type, a gauge, you're going to lose those counts. But if you preserve all your points, it'll actually work correctly.
D: So I think that's fine. Did that cover all the issues? There's some issue about an aggregating processor as well; is that going to be required for GA? I think, okay.
D: So there was another issue here about Prometheus, right: there's one where it seems to just get stuck, so there are some functional issues that we're having, and that seems to be worth discussing. And then I felt like I've got, you know, at least some communication from users on my end who are saying to me there's something about state management, and tracking cumulative start timestamps, resets, and so on in that receiver, that is at least not well known, and maybe not as configurable as needed, and maybe not very clear.
H: I think I can explain about that, but that's about the receiver, not the exporter. So I think there has to be a separate issue, because the thing...
D: Yes, yes, the receiver is the question I'm talking about, and I think we're all really interested in the receiver being sort of first class. To me that means that we understand, and are really very clear about, what kind of state requirements it has: how much long-term memory, and when can it drop reset timestamps, when things are dead processes, and so on. That sort of needs to be discussed.
D: I don't think we should discuss it now, because that was sort of vague, and I mean, it's connected with this question about the up metric, which I do have on the list here. So maybe we will come back to that again.
D: Okay, Joshua: do you want to lead this discussion about convention PRs? I believe we just need more approvals.
B: Yes, and some of them got more approvals here in the last 24 hours. I think a couple of those already have two approvals and maybe just need, like, this one I think just needs to be rebased, or needs the changelog amended, and then it can be merged.
J: Also, this database one in particular is waiting on a separate, somewhat unrelated PR to tracing, to add the database table and operation to the tracing semantic conventions, and it has like five approvals; it just needs someone with permissions to go click the button to merge it. I know some of you are on this call.
D: As well, okay. If it just needs people to click buttons, we can get that done easily. Are there any technical matters that need to be discussed, because they're slowing these down?
B: So I can't speak to, I think it's 1095. The others, though, those top three, are marked as stale. I just wanted to make sure they didn't get closed out, and I don't have the ability to remove the stale label, but they're all very close to being mergeable.
B: Got it, thanks. Then I think that's the only one. Was there somebody else who brought this issue up? I think somebody beat me to adding this to the agenda, and maybe that was in reference to 1095, which I'm not as familiar with.
D: Sorry, which one are we referring to right now? The AWS one, which, oh yeah: I haven't commented on that one yet, because I know it got a lot of sort of more political type of response, and I'm not sure what we were supposed to do with that. It's not really a technical matter.
I: Bogdan, I have filed one of the parts of it, but again, Tigran responded in the spec meeting, and we're going to add more detail to the guidelines. And then I am in the process of filing a follow-up issue on the specifics for the SDKs, which are different from, you know, the collector.
H: What I would do is just start transferring every markdown reference into documents and stuff, okay, okay, and that will help unblock this kind of discussion where we debate why this vendor is here and the other one is not, sure.
D: Yeah, I appreciate it. I've been trying to catch up this week, and you'll notice that I've been producing PRs instead of reviewing them, so I'm not sure where that balance lies. Sean has a question about the 0.14 collector release, I take it.
D: The exporter is broken in the v0.13 release, or, I mean, it doesn't... it's ancient. So there's one PR that I had that needed to merge, but I think Tyler may be on the call, yeah, to answer the question.
J: I am, yeah. We could probably have talked about it last hour, sorry about this, but yeah: I imagine this needs the main encapsulating API issue that I was talking about at the Go SIG resolved to get this released, and I imagine this is going to be, since it's Thursday, next week. So I would imagine next Friday's the target. No, actually, I think that's a really good estimate, so I'll keep it posted; definitely tune into the metrics SIG meeting as well, we can touch base on that. Thanks.
D: Thank you, Tyler. I think you're doing the right thing by not releasing another API breaker between now and then, or, you know, between last week and next week, but I'm also very eager to get those OTLP changes in, so that I can actually use the code to push some data. Great. I guess we're up to, mostly, let's see: Jason has this issue about, again, the Prometheus receiver, which we already mentioned once, but here we are again.
L: I think it's more of a technical thing. I think there's some sort of deadlock in the Prometheus receiver when we're debugging it. I guess, if you scroll down, I think, in the issue, I think my second comment or third comment.
L: The one, right, keep going, sorry, keep going, almost all the way down. Yeah, that one. So I kind of described the issue there, what I think is going on, but basically, I think with the Prometheus receiver there's some sort of deadlock going on whenever we reload our targets.
L: So I was wondering if my logic here, in the problem, was, I guess, correct, and also what we should do to resolve it, I guess, because it seems like a pretty big design problem.
D: That suggests that everybody else is on the same page as we are. I think what we're talking about here is a fairly complex interaction between a large Prometheus code base, which we are directly linking into the collector to run service discovery and then produce targets, which we then use the OpenTelemetry-native code to go scrape. Is that all correct? Or, I'm actually not sure, I need to dig in. Or do we use Prometheus to perform the scrape and then somehow receive its output?
I: Yeah, I mean, I would say that it is time, because we are running into some serious issues, and it's not, you know, something we can recommend anybody to use in production right now. I'm more than happy...
H: I'm more than happy if we start, most probably, to simplify our Prometheus receiver to be as simple as possible. And the other reason why we embedded the entire Prometheus, or almost the entire Prometheus, was to be able to do label modifications and a bunch of other things, which we now have a processor to do instead; hence people's confusion and everything.
H: Yep. I'd be very happy if somebody comes and removes the whole Prometheus receiver that we have right now, and in its place writes a simple scraper that parses the Prometheus format and transforms it into OTLP, and that's it: remove all the other functionality and everything else. I would be more than happy to see that and start to build from there.
H: We can even do that. So, for the moment, I think first of all we need the discovery. We can reuse the service discovery capability that Prometheus has, but I think we should not have it embedded into this receiver. We now have what we call a receiver creator and so on, so we have a way, and if we leverage that functionality from Prometheus, we should plug it into our collector as a standalone thing, not as part of this specific heavyweight receiver.
I: So, Bogdan, we can definitely provide a design, because we've been, you know, looking at how we can either fix these issues or, you know, what the alternatives are, and Jana has been working closely with us, obviously. So we'd like to have this sooner, if we decide to pivot and, you know, change the receiver, but...
D: I think, if there's some sort of deadlock, it sounds like you have a new technology in the collector that makes it possible to run it as a standalone process, essentially, rather than having it be somehow tied to the flow control of a pipeline, I guess is what you said. I'm not sure where the deadlock comes from; it sounds like it's fixable. And then what we really want is a design document to solve all of our problems, which is more than just that one.
D: We've got, I think, right: you have to remember the processes you've already scraped, and Prometheus has some configuration. Well, I think ultimately our goal is that you can take your old service discovery configuration and your old relabeling configuration, and drop those blocks of YAML into a collector configuration, and if it's running new code, that's great, but it's the same exact relabel configuration block. Yeah, it's...
D: I don't know. Well, I think what's nice about the Prometheus model is, like, the association between service discovery and relabeling as part of the synthesis of resource information. That user has invested a lot of their energy in getting those two settings correct, and they're filed together; they're associated with each other.
D: I think we'd like to offer a path where you can stop running your Prometheus server, which has a sort of monolithic feel, and has a bunch of functionality that you don't need in this situation, and just get scraping and data, you know, flowing through the OTel collector instead, which is two out of the four major subsystems of the...
H: Yeah, here's the entire thing. There is this problem that people are mentioning about the start time, and by the way, Josh, this is related to one of the proto issues, which is: is start time required or not for cumulative metrics?
H: So indeed, should we discuss that right now? I feel like that's actually very important; that's more important than this, because it will influence how we implement this anyway, and it influences how reasonable this is. And I can explain what we did in this receiver, and we can discuss what we should do.
H: So there is another alternative, which is that issue in the proto which says: should OTLP always require start time? I think it should, but there are cases like this, where you don't have a way to know the start time, and you land in problems like this one: when do you call a service down, when do you reset your metrics, and so on. So there are a bunch of problems where you need to keep state to calculate the start time anyway.
D: One is to recommend that you should not do that, but it's okay, sort of, and I say sort of: if you ensure that your resource attributes are distinct, so that when you crash and restart and reset all your cumulatives, you're ensured that you're a different resource. Because in that situation, yes, you don't know your true start time, but you never have a reset in that stream, in that time series; the time series is essentially single-life, right? So that is a solution. Otherwise we have to adopt...
D: The Prometheus heuristic, as I call it, which is: any time a value drops, presume it has reset. Which I don't love, but it is another solution, and the backend team here at Lightstep pushes back whenever I suggest that.
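(A small Go sketch of that Prometheus heuristic, assuming per-series tracking of the last raw observation; names are illustrative.)

// resetAwareTotal applies the "Prometheus heuristic" described above:
// whenever an observed cumulative value drops, assume the source
// reset and carry the accumulated total forward.
package main

import "fmt"

type resetAwareTotal struct {
	last float64 // last raw observation
	base float64 // accumulated total from before resets
	seen bool
}

func (r *resetAwareTotal) observe(v float64) float64 {
	if r.seen && v < r.last {
		// value dropped: presume a reset and fold the old run into base
		r.base += r.last
	}
	r.last = v
	r.seen = true
	return r.base + v
}

func main() {
	var t resetAwareTotal
	for _, v := range []float64{10, 15, 3, 8} { // the drop to 3 implies a reset
		fmt.Println(t.observe(v)) // 10, 15, 18, 23
	}
}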
H: I'm all about that, and I know the reasoning; I understood the reasoning from Stackdriver and from other people, why they need the start time. But it's going to be important for us to settle on that and propose a solution for things. For example, Google at one point started to do some scraping things like this, and what they did...
H: Yeah, that's what they do there. For example, they do this technique, and they say: okay, you know what, for a cumulative, always-going-up thing, you are most likely interested in the rate, so we're not going to affect the rate by the fact that we don't send the absolute value; we reference everything to the first point that we get from you, and we use that as the start time of the metric.
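(A rough Go sketch of the technique just described: the first observed point defines both the start time and the zero reference, which preserves rates; the series type is hypothetical.)

// firstPointReference: the first observed cumulative value becomes
// the start reference, and later values are reported relative to it.
package main

import (
	"fmt"
	"time"
)

type series struct {
	startTime time.Time
	reference float64
	started   bool
}

// observe returns (startTime, adjustedValue) for one raw observation.
func (s *series) observe(now time.Time, raw float64) (time.Time, float64) {
	if !s.started {
		s.startTime = now // the first point defines the start time
		s.reference = raw // and the zero reference
		s.started = true
	}
	return s.startTime, raw - s.reference
}

func main() {
	var s series
	t0 := time.Now()
	for i, raw := range []float64{100, 130, 190} {
		start, v := s.observe(t0.Add(time.Duration(i)*time.Minute), raw)
		fmt.Println(start.Equal(t0), v) // true 0, true 30, true 90
	}
}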
H: These are a bit small things, here and there; the overall architecture of the collector, where the discovery abilities are, like, that can still be happening independent of this. But we can wait; I mean, I'm happy to have this discussion. I think it's very important for us to solve, especially with the more and more scrapers that we have, like...
I: Totally. So I mean, we'll set up... maybe reach out to some of you, set up a meeting, and then also get Jana involved, and kind of start working on a design proposal.
I: Yes, exactly, that's what I was thinking, Mark; just do it first thing next week.
D: I guess the remaining items, at least next on the list here, were mine, although I insist on having others first. Reihan, down below you've got one on a PR that wants Bogdan's help; let's do that.
H: Yeah, this PR was yesterday; I'm a bit behind. I promise to look before I'm closing the day today. I think there is this one, and there is a summary one, which needs my attention; there are two PRs in the collector that I have on my mind.
D: Thank you, yeah. I'm also trying to get, with Andrew's help, a better way of, like, triaging and looking through all the current collector issues. I know the triage is halfway through, so once that's done, I know we're going to be more able to see current issues and have all those comments on them. So that'll help. And Justin...
J: We had a conversation a while ago on the issue, and I finally put together the PR for it. We made some decisions here. The first decision is that SDKs shouldn't be parsing the unit string in order to recombine units together; metric instruments are just going to assume the string they get is valid.
D: Everything you said, that you read on the screen, sounded great to me, Justin, so I will definitely follow through on this by the end of this week.
J: I just wanted to kind of follow on on that one, just to tie the thread for people that are following the unit sort of thing: it is trying to be as simple as possible an overlay of the UCUM, without having to fully implement, like, their algebra and their syntax parsing and all that kind of thing, and to provide some sort of functionality on top of it.
J: But at the same time, ensuring that there's, like, in theory going to be compatibility with the collector and OTLP as the protocols go down. Also, one other thing: it unifies on the case-sensitive format of the UCUM for semantic conventions. That's kind of important, so just a heads up on that, for people, if they wanted to find it.
D: Definitely, I agree. And also, this first point: in the SDK, I'm sorry, I've got two outstanding SDK spec PRs, so I'm seeing how this first one is going to become a requirement, that you do include the unit string in whatever lookup table or key that is used to locate records during accumulation. So you cannot combine numbers with different units, in other words; you must pass them through as separate time series, and therefore it will be up to the downstream system, or perhaps a fancy collector.
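(A tiny Go sketch of that requirement, with the unit string as part of a hypothetical aggregation map key, so that numbers with different units stay in separate series.)

// instrumentKey includes the unit string, per the requirement
// discussed above: numbers with different units must never be
// combined and always map to separate time series.
package main

import "fmt"

type instrumentKey struct {
	name   string
	unit   string // part of the key: "ms" and "s" stay separate
	labels string
}

func main() {
	records := map[instrumentKey]float64{}
	records[instrumentKey{"latency", "ms", `{}`}] += 250
	records[instrumentKey{"latency", "s", `{}`}] += 0.25

	// Two entries survive: the unit difference keeps the series apart.
	for k, v := range records {
		fmt.Printf("%s (%s) = %v\n", k.name, k.unit, v)
	}
}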
J: I was hesitant to include this bullet, because really, I think, if you wanted to write your own SDK, and you wanted to combine units that had a common base, and you wanted to parse strings, I think, you know, you should have the freedom to do that. But I don't think we should expect that of our built-in SDK.
D: Great. Anyone else want to discuss units? I think this is excellent to see, and I'll take a look at it.
D: So the immediate cause of this situation is that there's, I think, well, the bigger cause is that there's some sort of misalignment between the Java implementation and the Go implementation. I have been talking to some Lightstep engineers who are trying to get the basic SDK, like runtime and host metrics, working in Go, Java, JavaScript, and Python, and they were running into trouble in Java. And so, on the face of it...
D: This issue is just that we're getting these metrics with empty point lists, and it doesn't seem sensible to me, and I've written a server that actually returns an error saying: I thought you gave me some metric data, and there was nothing there. It makes me feel like there's some sort of backward compatibility issue, because there's a oneof there. So I'm looking at a oneof, and I find nothing; I think it's an error, not an empty list, and the protobuf library won't generate an empty wrapper for an empty...
D: A wrapper for an empty list will generate a nil, so I can't tell the distinction, with a default protobuf library, between an empty list and no data in that oneof, so I'm returning an error, thinking I may not know the true extension, like, some true data type, right; so I don't expect ever to see that. But what this really brought up for me is that we sort of haven't specified the standard functionality of the processor, which is the concept, or the component name, that we've been writing into the SDK specification, and there is a real question here. So I want to make sure people have caught up with me and understood the things I just said.
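(A minimal Go sketch of the check being described, using simplified stand-in types rather than the actual OTLP protobuf messages.)

// validate rejects a metric whose data oneof is unset or whose point
// list is empty, treating both as errors rather than valid payloads.
package main

import (
	"errors"
	"fmt"
)

type sumData struct{ points []float64 }

type metric struct {
	name string
	sum  *sumData // stand-in for the proto oneof; nil when unset
}

func validate(m metric) error {
	if m.sum == nil {
		// With a default protobuf library, an empty wrapper decodes
		// as nil, so "no data" and "empty" are indistinguishable here.
		return errors.New(m.name + ": metric carries no data")
	}
	if len(m.sum.points) == 0 {
		return errors.New(m.name + ": empty point list")
	}
	return nil
}

func main() {
	fmt.Println(validate(metric{name: "m1"}))                                      // error
	fmt.Println(validate(metric{name: "m2", sum: &sumData{}}))                     // error
	fmt.Println(validate(metric{name: "m3", sum: &sumData{points: []float64{1}}})) // <nil>
}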
D: So, in the Go SDK there are two behaviors, and I've tried to document them, and there is an SDK spec PR that's up on this right now; you might want to just go to that. Andrew, could you click back and find the list of SDK specs?
D: And so what I'm seeing, when I read through the issue that Carlos filed here, was basically that it seems like the Java SDK has done a cumulative conversion somehow earlier, and then what happened was either there were no events on a synchronous instrument, but I suspect it's actually no observations for an asynchronous instrument. And so the SDK said: well, there used to be an instrument here, and now there was nothing, so I'm going to output a metric, but there's an empty point list. And that's just not what I want to see.
D: I either want to see no metric, or a metric with the last value. When I say the last value, what I really mean is the most recently exported value, which might be hours old or days old. Anyway, there is a spec now written, but it seems like the Java SDK is so different that we can't get to a point of understanding what it's supposed to be doing.
D: Can you go to the second... there's a second PR there, and it's so short that we could just pull up the text, yeah. And I only found it 10 minutes before this meeting, so nobody's really seen it.
D: But this was trying to answer the confusion in that issue, and in the PR that we're looking at here, which is to say that, the way I've always thought of this, and I realized that I hadn't written it into any existing default SDK spec: the accumulator component should not maintain long-term state. So there's this first step in aggregating metric data.
D: That has to be able to forget, so that if you are doing delta export, or like a statsd exporter, you don't end up remembering time series from yesterday in memory. And so what I'm saying by that requirement is that if there's going to be state built up inside of an export pipeline, it can't be in the accumulator. Now, we have a library guideline across OTel that says that word exporter means all you do is take data and turn it into a protocol.
D: So you're not supposed to put your state management into an exporter either; the point of a processor is that this is where state management goes. And so the first paragraph here is saying that state does not go into an accumulator, and the rest of this is saying there is a basic processor, and it has two types of functionality: one is to convert deltas to cumulatives, and one is to just manage memory. So that's why this is my answer to the problem. I just don't know how to bring the Java SDK into alignment.
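(A short Go sketch of the separation being proposed: the accumulator forgets at every collection, and the delta-to-cumulative state lives in a processor; all names are illustrative.)

package main

import "fmt"

// accumulator holds interval state only; it forgets at every
// collection, per the requirement that it keep no long-term state.
type accumulator struct {
	current map[string]float64
}

// collect drains the interval's deltas and forgets them.
func (a *accumulator) collect() map[string]float64 {
	out := a.current
	a.current = map[string]float64{}
	return out
}

// cumulativeProcessor is where long-term state is allowed: it
// converts the drained deltas into running cumulative totals.
type cumulativeProcessor struct {
	totals map[string]float64
}

func (p *cumulativeProcessor) process(deltas map[string]float64) {
	for k, d := range deltas {
		p.totals[k] += d
	}
}

func main() {
	acc := &accumulator{current: map[string]float64{"requests": 4}}
	proc := &cumulativeProcessor{totals: map[string]float64{}}
	proc.process(acc.collect())
	acc.current["requests"] = 2
	proc.process(acc.collect())
	fmt.Println(proc.totals["requests"]) // 6
}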
F: You had... what's your other item? No, that's mostly the...
D: This word processor has been used now, by me at least, for maybe four or five or six months, I'm not sure when, but the term batcher was the one that happened earlier. And now there's a current PR, and I apologize for not knowing exactly who submitted this one, I think coming from Google, but I'm not sure, because I'm behind, talking about a new type of processor interface, which is one that happens before the accumulator, which I think is a lot closer to the concept of a span processor, like you just said. So there's the notion that we have processing of aggregated data, and we have processing of, like, event data, and this other...
D: PR that we can look at and discuss right now also uses the term processor, but it's different from the one I have in the current SDK spec documentation, which is a confusing thing here, and we may have to rename one of these concepts; I don't know which one we'll do. But my concept of a processor is getting batch data and working with batches, and this other concept of processor is dealing with things upfront, before you begin aggregating data, and I've documented other reasons why you might want to do that.
D: One is to avoid the cost of accumulating metric labels that you're going to drop downstream, when it's maybe more expensive to do that; and you can do sampling at that point, and you can't do sampling later. So there are all kinds of reasons why you might want to have two different types of processor. Does anyone have a good terminology idea for this?
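(A sketch, in Go, of the two processor notions being contrasted here; the interface names are made up for illustration, not taken from any SDK.)

package main

import "fmt"

type measurement struct {
	instrument string
	value      float64
	labels     map[string]string
}

type aggregatedBatch struct {
	records map[string]float64
}

// measurementProcessor runs before the accumulator: it can drop
// labels cheaply, sample, or enrich from baggage.
type measurementProcessor interface {
	OnRecord(m measurement) (measurement, bool) // false drops the event
}

// batchProcessor runs after the accumulator: it can convert
// temporality or hold long-term state across collections.
type batchProcessor interface {
	OnCollect(b aggregatedBatch) aggregatedBatch
}

// dropLabel is a trivial measurementProcessor: it removes one label
// before aggregation, avoiding the cost of accumulating it.
type dropLabel struct{ key string }

func (d dropLabel) OnRecord(m measurement) (measurement, bool) {
	delete(m.labels, d.key)
	return m, true
}

func main() {
	m := measurement{
		instrument: "http.latency",
		value:      1.5,
		labels:     map[string]string{"user_id": "42", "route": "/x"},
	}
	m, _ = dropLabel{key: "user_id"}.OnRecord(m)
	fmt.Println(m.labels) // map[route:/x]
}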
D: This is a new idea that is unrelated to the current concept of processor that I've been discussing. This is the idea that you're going to do some processing of an event before it enters the accumulator, and you might want to drop labels before you enter the accumulator for different reasons than you drop labels after aggregation later, yep. And the other reason that Bogdan has mentioned is to bring in distributed context keys from baggage.
H: That's one possibility. So, by the way, this one is not from Google, it's from Atlassian, and I worked very closely with Atlassian to help them adopt OpenTelemetry, and they wanted to do this, and I told them: hey, consider doing something like the span processor, and they came up with this idea. We need to review it, but it's nice that it's coming from yet another company, a user of ours, yeah.
D: If... maybe, I don't know. Okay, all right, that is interesting.
D: So I think we're close to the end of time here. There are two open spec PRs; one of them is trying to address the Java issue, and the other is actually sort of old and stale, but I went ahead and started writing this other one. I am trying to finish the default SDK spec, but it needs reviewers; otherwise it's hard to make progress, so please review those if you can. The only other thing that hasn't really made a lot of progress in the last week is this up convention.
H: Let me ask a couple of things here about this, so I can tell you how I envision this. So, for me, I did not envision this as any kind of health indicator or anything. For me, how I envisioned it: it was a value of one every export time. Sorry, not every export time; we send a value of one for every second. So that's the way we implemented something similar...
E: In Google, what we did was send a value of one for every second. So then we calculate a rate, and then, when the rate goes down, we know that there is something wrong, or that one of the processes that we expect to see is not producing the values. Anyway, that was our implementation, and we figured out that it's very hard to get it super correct, so you most likely have to have some error tolerance in place and stuff. But that was one possible implementation.
D: That is mentioned in the thread of this issue as one possible solution. The term health that you used was also mentioned here, and I tried to oppose that by saying this is not about health; this is simply about saying I produced something. And there is discussion here about the rationale for that; I don't want to just repeat what's been discussed, people just need to pick through this issue.
H: So I would definitely encourage investigating that possibility of implementing it as a SumObserver of the duration, which is current time minus start time, reported in seconds.
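(A minimal Go sketch of the uptime-as-SumObserver idea; the observer plumbing is hand-rolled for illustration, not the OTel Go API.)

// An asynchronous callback reports seconds since process start,
// which is monotonic, matching the point made below.
package main

import (
	"fmt"
	"time"
)

type observerFunc func() float64

type registry struct {
	observers map[string]observerFunc
}

func (r *registry) register(name string, f observerFunc) {
	r.observers[name] = f
}

// collect invokes each callback, as an SDK would on every export.
func (r *registry) collect() {
	for name, f := range r.observers {
		fmt.Printf("%s = %.3f\n", name, f())
	}
}

func main() {
	start := time.Now()
	reg := &registry{observers: map[string]observerFunc{}}
	reg.register("process.uptime", func() float64 {
		return time.Since(start).Seconds() // monotonic unless clocks skew
	})
	time.Sleep(50 * time.Millisecond)
	reg.collect()
}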
H
I
don't
think
we
should
use
the
the
the
unit
as
seconds,
because
this
is
no
unit
like,
even
though
we
come
up
with
this
interpretation
of
seconds.
I
don't.
I
would
not
necessarily
use
seconds
as
a
unit,
but
we
can
argue
about
that.
So
that
would
be
super
useful
and,
if
importantly,
it
has
to
be
monotonic,
it
doesn't
have
to
be
up
down
because
it's
always
going
up
unless
you
have
a
time
skew
which
shouldn't
happen.
H: But I don't know, that's another problem that we may chat about. So if we do this thing with the seconds and stuff, if you have a time skew, you may have trouble with this. I'm not sure if it's a real problem; clocks got way better than 20 years ago, but it's still something that we need to think about.
D: Yeah, I did mention how I feel like there's a solution using an uptime metric, but really what we want is Prometheus compatibility. So if you're using that Prometheus receiver on your collector, and you're trying to ditch your Prometheus server, you just need another metric, because you've built dashboards somewhere downstream on it. So although I think we can facilitate something like this based on uptime, it's just so much easier to have a variable named up that mimics Prometheus, so that we can just...
D: Up is a value that is zero or one as a result of the scrape. So if you scrape a target and it times out, or doesn't respond, it's zero; if you scrape the target, it's one, you've got something. And then, and I mentioned there was this interesting corner case, which is: a server starts up and it exports nothing. Now you scrape it; what does it give you? It's given you nothing, but it's up. So there's no implicit metric that you can use, if you're reporting nothing, to say so.
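(A small Go sketch of synthesizing up in a receiver, covering the empty-but-successful-scrape corner case just mentioned; the types are illustrative.)

// upValue cannot be inferred from the payload alone: an empty but
// successful scrape is still up, so scrape success must be tracked.
package main

import (
	"errors"
	"fmt"
)

type scrapeResult struct {
	samples []float64
	err     error
}

func upValue(r scrapeResult) float64 {
	if r.err != nil {
		return 0
	}
	return 1
}

func main() {
	fmt.Println(upValue(scrapeResult{samples: []float64{1, 2}}))   // 1
	fmt.Println(upValue(scrapeResult{}))                           // 1: empty but up
	fmt.Println(upValue(scrapeResult{err: errors.New("timeout")})) // 0
}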
H: Yeah, no, because... I think that's a good point. Maybe we should have two of them: one up, and the other one is uptime, for different purposes. If we really want to mimic the Prometheus one, it's fine; I am fine to do that. But there is one problem: we cannot submit zero. So for us it will not be one or zero, because zero means something failed, but if we fail, we don't submit anything, because we are a push mechanism.
D: Or not; I believe this is also discussed in the issue, and how you might, yeah. That is, you've gotten to the point where it's sort of complicated, and you might think about treating this as a special case, like: I'm going to do some exports, and my export channel has been down for an hour but finally came back. Okay, so for an hour I was getting zeros, but I had to drop those points, and now I'm back up and I'm going to start recording.
J: Hey guys, I have to drop off, but thanks for the discussion. Bogdan, it sounds like maybe it'd be helpful if you could read the issue, because you have a lot of context and some good ideas, yeah.