From YouTube: 2020-12-18 meeting
A
Okay, I guess I was hoping to see Josh here, but maybe I can lead it. I imagine this is going to be a short meeting, because I don't know if there's too much to go over, I think.
B
Yeah, I thought so as well. I just saw him in the last hour; he was thinking through some things, so that makes sense. So I guess in the meantime, just be sure to add your name to the attendees list for the spec SIG meeting, which is not the one linked in the invite. Let me see if we can post it. Maybe somebody already has; yeah, Andre already posted it.
B
Yeah, oh, that sounds good. Do you actually have the power to cancel meetings? I've actually wanted to know this. Could you cancel the Go ones as well? Which ones? So definitely next week's, the 24th, the Go SIG on the Thursday at 10 a.m.
B
Okay, and then, yeah, let's do... oh, and then the next one, which is New Year's Eve, the Go SIG on the 31st, so essentially just the one right before this meeting.
B
They look gone on my calendar, perfect. That's what...
A
So we have a status of the existing P1 issues for the specification repo labeled with metrics. If you click on the link, it takes you to the GitHub project which tracks all of them; I'm just filtering by the labels spec metrics and priority p1.
A
As you can see, we've got 17 in To Do, one with a linked pull request, and 14 have been resolved. A little bit of movement since last week: one of them got resolved from In Progress, and three more came in on the To Do.
A
Let me see. I was also looking over the P1s and the assignees for each one. Yep, there we go, so we have the assignees for each one. I just wanted to point out that jmacd is the winner here; he's got...
E
I have a proposal for this group about that. Clearly, sort of too much has been put on me, and it's mostly because any time there's confusion I say, why not just assign that to me? But I remember last summer, that was August 2019.
E
We had a kind of all-day meeting that we held on site at Google. Logan organized it, or someone with Bogdan on the OpenCensus team organized it, and it was really productive. I feel that if we intend to make any kind of real progress in the new year, we might want to do something similar to really focus our efforts, because I can't do all those things, and we need to find a way to share that load. Otherwise it will...
D
I'll get it done. I think it's great, and I would like that idea, and I would also like more than last year; last year was more about discussion and stuff. This year we can even have a section of two hours where we write one of the issues, specifically, yeah.
E
Yeah, so I don't know how to assist in organizing it other than to say I want that, and I will be willing to volunteer, to attend and help organize. It used to require physical space, so I guess we don't need to do much other than all agree to plan this out and do it, but maybe form an agenda instead of a date. I don't know, I don't want to do that right now.
B
Yeah, I think that sounds like a great idea, Josh; I'm totally on board. It also sounds like a great idea to volunteer Morgan.
C
No, it's Morgan... something. Morgan McLean, okay, I'll deal with that, I'll...
A
Sometime? What were you guys thinking, like first? There we go. Not necessarily the first week, but sometime, yeah.
E
Before we jump in on the routine issues and the agenda that's listed below here, I thought it might be worth having a preamble. Last week's meeting was a little bit chaotic, maybe for lack of planning. We had a kind of collision between OpenMetrics and OpenTelemetry. I wish you'd been there; it would have helped to have more perspective from the historical OpenTelemetry timeline, but I just wanted to try and address that head-on right now.
E
I don't want there to be a conflict, but we've been talking about some sort of functionality and features that we envisioned from OpenCensus and for OpenTelemetry that are asking to change the data model a bit, and it's been giving me a lot of thoughts in the past week. I found an issue, that's in the agenda later to talk about, to address some of the stuff in that meeting about labels, and otherwise I wanted to just hope...
E
Everyone who was there sees that at least our intention in OpenTelemetry was to be, you know, 100% compatible in every direction with OpenMetrics, and I'm now seeing this become much more of a requirement that we can state. In other words: where we are trying to promote feature X that is incompatible with Prometheus, we must also offer a service or a feature to take feature X out of the data and remove it, so you can get back into the world where feature X doesn't exist.
E
That's my high-level meta statement, just in case anyone felt like they came here to see more of that. We hadn't planned a powwow here today between OpenMetrics and OpenTelemetry, but maybe we should also reach out to that group and have them join us for another type of meeting, though maybe not the same all-day meeting as us trying to pin down OpenTelemetry.
B
Agenda, yeah. I see Rob Skillington here as well from OpenMetrics, hey. Does that sound like a reasonable proposal to you?
H
Yeah, great. I don't know; probably talking to Richie to find a time would be best. He knows everyone that would be required, but yeah, let's set something up. Cool.
E
Okay, so adding to that, Andrew, maybe we'll coordinate with Richie or reach out somehow, and try and get more dialogue set up for January with OpenMetrics.
E
I see the agenda here. I know the next one up is this filter processor PR, and I'm almost certain this is going to bleed over into the issue I put up there next, but I could be wrong. Reihan, would you like to introduce this one?
I
Yeah, I think Bogdan already put some review on that, so I sent an update. I am just here to ping him to take another look; I addressed the feedback and sent an update.
D
Is there anything worth discussing about it?
I
I think no; maybe, I don't know. Bogdan, if you have anything... I'm just here to bring it up, maybe, as the holidays are coming, if you can have a look and close it, yeah.
D
I will look at this; it's on my agenda items. I think it started differently, and we polished it to reuse the mechanism, so it's way better.
E
Now, I always like to know if there are important topics in collector land that need more attention from the metrics group in particular. So...
E
Cool. Is that something you could link into the notes here, or is it in the chat? I'm just... my email is killing me; I can find it.
E
So, we've been talking about a couple of reasons why it's not always safe to just simply remove labels from some data: either semantically it's not well specified yet, or the very natural conversion into Prometheus produces invalid data. It's kind of a long one here; I don't intend to actually read it in front of you all, but it was one of the most contentious topics that was touched on last week, talking to OpenMetrics: how the Prometheus data model really discourages you from mixing label dimensions, by which I mean having some data points with, say, one label on them and some data points with two labels on them.
E
If you start working with those metrics in the same system, confusion may arise. And yet there's something, as I've tried to describe in this document, about how that only really is the case when you're looking at cumulatives, not when you're looking at deltas; it's always been a lot safer to add and remove labels from deltas, and there's something really interesting about that.
E
So what I've written down here is that we need to write some specs to make sure that we don't cause this inadvertently, because we didn't design for this.
E
So one thing is: we need to make sure that time series don't lose their identification, like if there's a resource or some sort of unique identifier. Especially when you're cumulative this really matters, because you can't just simply interleave data points, so we must preserve a resource identifier of some sort. On the tracing spec side of OTel, we have an open issue about one called span.instance.id.
E
We've got this observer data point, which I described lower down in the issue: if you erase those labels, it produces the same condition, but it's not through resources, it's through what I've been calling subdivision. So I gave an example there, where you add a cpu label to a cpu measurement that's cumulative, and all of a sudden it changes the structure of the data in some way. And so, at the bottom...
E
Here I'm proposing an algorithm. I'm going to have to spell this out in much more depth, and it just doesn't format very well right now. The idea is that this is a way that we can semantically remove labels, and it implies essentially collector state: a pipeline for the collector where we will buffer over some short window of time, and that implies introducing a delay.
E
We will do the aggregation, and then we'll do correct spatial aggregation to remove labels in a way that we do not output invalid data to Prometheus. So, if you are following me all the way to the end of this dialogue: really, what this implies is that the implementation of this stage is equivalent to the Prometheus recording rules mechanism, by which you query your in-memory time series and output new time series.
E
What I'm really saying, or kind of asking for here in the collector, is a pipeline stage that can automatically erase labels in a correct way and output new time series that meet our semantics, and that's it. I think this is one of our new problems, and I don't think we can call ourselves compatible with OpenMetrics in the outbound direction until we have a way to just easily erase labels.
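For what it's worth, the proposal above can be sketched in a few lines. This is a hypothetical illustration, not the collector's actual API; the point shape (labels, start time, end time, value) and the function name are assumptions:

```python
from collections import defaultdict

def remove_labels(points, drop):
    """Aggregate away the labels in `drop` over one aligned window.

    Each point is (labels_dict, start_time, end_time, value) and is
    assumed to be a delta: deltas are safe to sum spatially, while
    cumulative points from distinct series cannot simply be interleaved.
    """
    out = defaultdict(float)
    # The buffered window implies the delay discussed above.
    window = (min(p[1] for p in points), max(p[2] for p in points))
    for labels, _, _, value in points:
        kept = tuple(sorted((k, v) for k, v in labels.items() if k not in drop))
        out[kept] += value
    # Emit one new series per surviving label set, covering the whole window.
    return [(dict(kept), window[0], window[1], value)
            for kept, value in out.items()]
```

The output is one well-formed series per remaining label set, which is the "recording rules"-like behavior described: query buffered points, output new series.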
E
Yeah, well, I did link to one of the issues in there. There's one in the collector right now, 2216, and I kind of think this is basically the same issue surfacing, so that may be. But Bogdan, you may have a different understanding; I didn't mean the word misunderstanding just now. You may have a different understanding of what start time means, and I'm actually open-minded enough; I don't know what's correct. I don't think there's a universal truth here. We just didn't write it down.
E
So if you have some time series and you just erase all their identifying labels, but you still have start times, and let's just assume for now that no start times are ever the same, like nanoseconds are never going to collide, and that's a stretch, but let's assume that. So now you have a bunch of data points, and they all have start times, and presumably any two points that have the same start time are part of the same series. I could then correctly reconstruct that there were many overlapping series in this data stream that I have, and if you ask me what's the rate or what's the sum, I might correctly choose a slice of time and add up all the overlapping time series that were present and distinct, right? But I don't think that's what Prometheus would ever expect. It would produce the wrong result, and I don't know if that's what you were expecting. So...
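That reconstruction can be sketched roughly like this, under the stated (stretch) assumption that start timestamps never collide and can therefore stand in for series identity; the point shape is hypothetical:

```python
from collections import defaultdict

def reconstruct_series(points):
    """Group unlabeled cumulative points back into series by start time.

    Each point is (start_time, end_time, value); identical start times
    are taken to mean the points belong to the same series.
    """
    series = defaultdict(list)
    for start, end, value in points:
        series[start].append((end, value))
    for start in series:
        series[start].sort()
    return dict(series)

def sum_at(series, t):
    """Sum, across all overlapping series that had started by time t,
    the latest cumulative value observed at or before t."""
    total = 0.0
    for start, pts in series.items():
        if start > t:
            continue
        latest = None
        for end, value in pts:
            if end <= t:
                latest = value
        if latest is not None:
            total += latest
    return total
```

This is the "correct" query an aware consumer could run; the point in the discussion is that Prometheus does not interpret colliding unlabeled series this way, so it would produce the wrong result.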
D
Let's step back a bit and talk about this issue. This issue is a bit different than what you are saying with start time. Okay, for this specific issue, the problem that we have is: Prometheus has these target labels that uniquely identify the source of the time series, which I think we don't handle correctly. That's why, when you scrape multiple targets, because we do not create the correct resource in OpenTelemetry format, we collide on time series and we cause this collision.
E
We should explore that in a lot more detail, because we're not the ones who know that, I think. I mean, I remember there was some sort of easing of our thought between the very beginning, when OpenCensus was new, and then at some point we got to where we are today. Sorry, I don't know where I was going with that.
E
Yeah, and, Andrew, could you go back to the issue I filed just now? I actually wrote, in the third section there at the bottom, that when you're handling this in the SDK there are two reasons why the job is much simpler. One is there were no resets, because you're the SDK, you've been alive; and the second is that it's all one resource.
E
So what's easy in an SDK is actually hard, or harder, in the collector. That's a critical point: if there's no safe way to remove labels in the collector, I don't know, life's gonna get hard for somebody. I know we can do it safely in the SDK; we've already gotten that far, at least in my implementation, yeah.
D
No, that's the point, and I think OpenCensus had only that part defined: the ability to do, inside the SDK, a bunch of mutations before you export the data.
E
Yeah, and so, because the collector now exists and it's kind of part of the whole story, I think we probably need a better answer for that. I think it's actually not so hard, and I'm excited that we are now talking about it, at least. But anyway.
E
I think, even if you think we don't need this type of pipeline right away, it's great to know that we can do it, and I think that it's logical. I also think that we need to answer some of these questions. Must we? How do we guarantee that you have a unique resource in OTel, first of all, and should it be service.instance.id? Second of all, when collisions occur...
E
We agree that when collisions occur, it must be unintentional. When collisions occur, we must address that for correctness by removing duplicates, and that means you're resolving conflicts, and you could warn the user: I got two time series here that say they're the same, and they got different values and they have different start times, but they have the same identifiers. Otherwise, that's a real problem for us. You throw away one and life goes on; you warn the user. So there are some questions that come up that we should answer. So...
J
For what it's worth, real quick: this is not about Prometheus, but I think the logic is the same. Some of our customers at Datadog ran into this problem early in our life and used a tool called Veneur that Stripe wrote, which has some of this logic. Also, we built it into our edge, but the logic translates between the agent, or the collector, and...
E
That's a really helpful, useful tip, thank you. I should go research that. I know of Veneur, but I didn't realize that was one of the keys, yeah.
D
Yes, that's a good point, and if possible, you or Mike put the name of the project there in the issue, so others can look at Veneur or whatever. Yeah, it's hard to spell and pronounce, but that's why I'm asking for somebody to type it for me.
E
Is he still at Splunk now? Anyway, he's the guy, and we could try and get him into this meeting. I actually wouldn't mind if he were at this meeting. He's no longer at Splunk, but I would... okay, I apologize. No.
G
Josh, I have a sort of tangential question from that issue, which is: does OpenTelemetry have the same sort of notion of staleness on a time series that Prometheus does?
E
You ask a great question. I mean, I've filed an issue in the past week or two about maybe having a resource for start time, which was helping around this area, I think, and we were talking about the resource semantic convention for up. I think somewhere we end up wanting to have a configuration that says: if you haven't reported for so long, you are no longer up.
E
So it's really like a configuration saying, you know, there's a minute threshold, and you ought to configure your clients to report more often than that; otherwise we might detect you as not up. But it's just hard to do the semantically identical thing that Prometheus does, simply because it's scraping, so it has a very definitive cycle and it knows if you're up based on whether it last saw you or not. And so the relationship has changed.
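The threshold idea might look something like this for a push-based pipeline; the class and threshold value are hypothetical sketches, not an existing OTel API:

```python
import time

# Hypothetical threshold: clients must report more often than this,
# or they are considered no longer "up".
STALENESS_THRESHOLD_S = 60.0

class UpTracker:
    """Track last-report times and derive an up/down state per target.

    Unlike Prometheus, which knows "up" from its own scrape cycle, a
    push pipeline can only infer it from how recently each client
    reported relative to a configured threshold.
    """

    def __init__(self, threshold=STALENESS_THRESHOLD_S):
        self.threshold = threshold
        self.last_seen = {}

    def report(self, target, now=None):
        self.last_seen[target] = time.time() if now is None else now

    def is_up(self, target, now=None):
        now = time.time() if now is None else now
        seen = self.last_seen.get(target)
        return seen is not None and (now - seen) <= self.threshold
```

The design cost, as noted in the discussion, is that correctness depends on clients actually being configured to report faster than the threshold.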
E
And I'd love to see us move that issue on an up convention forward a little bit. I think it's missing from the OTel collector's Prometheus receiver.
G
Histograms, yeah. So Yuki wasn't able to make it to this meeting, but I'm sort of here as a proxy. I believe that he responded to your last comment about using a oneof, so that should be addressed here now. So I think this still just needs more review. It sounded like you were maybe ready to merge it, but I don't know, Bogdan, if you had more thoughts on this or had a chance to look at it.
D
Does it make sense what I'm asking for? It's more like: are we 100% confident that this is the right format, or can we start smaller and then keep increasing the complexity as we determine it's needed?
G
Yeah, I mean, I think that's a valid concern. One of the things that I like about this approach is that it's just generic enough; it doesn't overfit for one of the existing formats or a particular vendor so much, and it's also configurable.
D
Yeah, so my concern is, as an owner or maintainer of the collector: if somebody asks me to transform from the current exponential bounds to the normal, explicit bounds, I swear to god it's gonna take me a day to understand all the things, and I feel that it is complicated.
E
It's true. When I reviewed that PR, I really did give it an hour to an hour and a half, to read every comment and make sure that it wasn't just "this looks right"; I made sure that I got it, I read carefully, and it is complicated. I think I said at one point, in the meeting maybe two weeks ago, I believe, that to get over your fear, or your objections there, Bogdan, which are legitimate...
E
We should produce a collector pipeline stage that can just rewrite histograms; essentially it'll have all that math that Yuki wrote.
D
Or we can add all these types one by one and digest them, starting with the simpler one until we get to the more complex. There has to be a solution somehow, because, as I said, if tomorrow we merge, I need to apply this to the collector, which means I will auto-generate a couple of things, and now histograms will start having one of these options. Then, immediately after that, I will have to write, for example, I don't know, a Prometheus exporter or a Prometheus remote write exporter to change it, and I'll have to convert from all these possible ways to define the bounds to linear or whatever Prometheus exposes, and I think I will fail, personally. That's a fear for me, to not move forward right now with everything that is there, and I hope we can start small. As I said, maybe we add the simplest one, linear, and another one first; we prove that we can have more; we start doing this; and then we add, in two months, the exponential one, which was the most complicated, I think.
E
Actually, you have a pretty good point, I hear, but I don't think linear matters very much to very many people. There are two extensions in there, and it's not that anyone's gonna want both; people may want one or the other, and we don't need both right away. Exponential kind of serves the DDSketch need, and then this notion of linear sub-buckets that Yuki added serves the circllhist need. They're related; the circllhist is a little harder. So I think DDSketch would be step one; I'd rather not have linear be step one. But I think you're probably right, and that implies backing up a bit, and maybe we should just go straight for exponential.
E
In some sort of mechanism, probably, where you say: I am an exporter, what formats am I willing to export, and then what am I willing to accept as input. So that, you know, OTLP is willing to re-export whatever it gets, but if you're exporting to Datadog, they're going to want to transform into DDSketch mode or something like that.
D
Yeah, the other thing with exponential, again, looking: I did not spend an hour or an hour and a half like you did, Josh, I apologize for that, but when I looked at these and I looked at the OpenMetrics definition of exponential bounds, because they have one, I also failed, in five minutes, to know how to convert between them.
J
I can look at that. I don't know the exponential on the OpenMetrics side, but to convert, like you said earlier, you can convert from exponential to simple, explicit buckets, but not the opposite; or at least you lose a lot of fidelity going the opposite direction, and...
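The easy direction can be illustrated with a toy conversion. The bucket layout here (bucket i spanning consecutive powers of a base, shifted by an offset) is an assumption for illustration, not the exact OTLP or OpenMetrics definition:

```python
def exponential_to_explicit(base, offset, counts):
    """Convert exponential-bucket counts to explicit-bound buckets.

    Hypothetical layout: bucket i covers (base**(offset+i), base**(offset+i+1)],
    so the explicit upper bounds are just the computed powers of `base`.
    This direction preserves the counts exactly; the reverse (explicit to
    exponential) generally does not, because arbitrary bounds don't land
    on powers of the base.
    """
    bounds = [base ** (offset + i + 1) for i in range(len(counts))]
    return bounds, list(counts)
```

This is the "fairly trivial" direction; going from explicit bounds back to an exponential scheme requires redistributing counts across mismatched boundaries, which is where fidelity is lost.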
E
I think what Bogdan really is asking for is that we need to have that translation proven and ready to go before we start merging the protocol, because otherwise the protocols are gonna be arriving with histograms we don't support very soon. I don't know who's gonna do this work, so I can't say much more.
D
And again, I'm not throwing out the PR or anything; I'm just thinking: can we split it in a way that we merge parts until we get to the final stage, and then we can do the work in the collector to start supporting those parts? And here's...
E
The question, then, and I'm representing Yuki, because I think he wants to get this merged, and I do too: should we go with a oneof? Because I think that we don't want to break the current protocol, and I don't believe anyone really does, although I'm not using the histograms personally yet, so a change here wouldn't really break things for me. But...
E
That's a good question. I don't want to answer it myself; I haven't thought about it much. If we're talking about deprecation, I think there's this other conversation that we've been avoiding, about labels versus attributes, that some people want to have, and that's another one where I worry about deprecation.
E
Do you mean you would like us to move to a place where there's just a oneof? I think it has explicit bounds in the oneof, and we're gonna go through a period of translation where the legacy explicit bounds will be turned into the oneof explicit bounds on the way in.
D
Yeah, so the way we did this for trace, we did it for status: on the receiving side, we always transform into the new code, and all the other components will be switched to the new code immediately, because they don't need to see the old one. It's just the receiving side, or the producer side, of these things where we immediately switch, and then the other components... so in the collector it's very simple, and we can handle this pretty nicely.
D
It's possible. The only reason why you cannot have it in a oneof is because oneof does not support repeated fields.
E
Right, so we're gonna move it into a repeated wrapped message: a message wrapping a repeated field, then. Correct.
D
Because otherwise you could add a oneof around that; a oneof is not actually... and also, I hate oneof. Also, we can have a fake oneof.
D
What I mean by that is: we just say that it is one of these options, and don't necessarily put a oneof around it, and the collector can still give people, in the internal model, only one of these things. On the wire it's anyway the same. It's only if you consume this in other languages, where the collector does not generate the format for you, that you have to care about this. So that's a second option, which is: don't have a oneof at all, just say that it's one of these four options.
E
Right. The problem, Bogdan, is we just wanted to make a decision on this PR, and it wasn't getting enough attention. So we do have to decide, and I kind of just wish that the Go team would change their interface and do oneof differently; maybe that would make things better. But it's a lot to ask.
E
There's an issue saying: stop using gogoproto, it's unmaintained, please use the latest Go protobuf from Google, anyway, blah blah blah. I...
D
I know, I know. And anyway, probably at the end of February we'll stop using gogoproto and we will simply marshal and unmarshal ourselves, so that will also remove the problem. How do we decide?
D
I would go with the correct way, even...
E
I mean, I'm leaning towards oneof; that's what I already said in this PR. It sounds like we can respond to this PR and say that we agreed to just get it done. But can we just add exponential for now, and leave out linear and leave out sub-buckets, so that we control complexity creep?
D
Sure, but I will need somebody to help me transform from this exponential to explicit bounds, and from this exponential to the OpenMetrics exponential definition.
J
I think I found the OpenMetrics exponential definition, but if you can send that to me, I can try to help. We also wrote some code to convert; I'll go find it. Fairly trivial code to convert to explicit bounds, as long as those bounds were passed in, but in any case explicit bounds are explicit, right? So...
E
Yeah, could you put a link to whatever you're looking at for OpenMetrics, just so it's easier for everyone to find? Then we'll all review that, and I'll be happy to follow up on this PR after the meeting with what we just discussed.
D
What the heck is this? Stackdriver has another one, which is...
E
Maybe I'm wrong, but okay! Well, we can move on from this topic. It is some math; we will all help out, we'll make this work, and I'll follow up on this PR after the meeting today, and we'll find a link to Stackdriver's as well. Bogdan, do you want to talk about this Prometheus remote write receiver?
E
That's a really good observation about the Stackdriver one; I'll put all that together in that issue. Let's talk about this skeleton for PRW. I use the acronym PRW a lot these days.
A
I think Bogdan had to step out for a minute.
E
Let's talk about PRW before he does; I'm actually curious. So Prometheus remote write has added some optional metadata fields, but they're optional, and anyway, it's curious.
E
Sorry, go ahead, Josh.
G
I was just going to say, I read through the Google Groups discussion around adding metadata to remote write a few times, and yeah, it sounded like the catalyst for adding that was to be able to detect type information more easily, especially for counters and gauges. But if you can't rely on it being there, it's only sort of moderately...
E
Useful, yeah. We've talked at Lightstep about how, in some sense, it's maybe not the end of the world: you can just turn everything into a gauge, and if your metric system is capable enough, you can compute rates and such, and okay, not so bad. But a lot of times your user interface is customized for counters and you'd prefer to know, and there are other reasons as well. So it would be cool now...
E
As I said, I know that if you have a horizontal pool of PRW, you will very likely receive data that you don't know the type of, and so that is the problem that we are aware of. Rob Skillington, who was on this call earlier, himself had a PR that was trying to add unconditional metadata at some point, and that was turned down in favor of the one that did merge, which is the sometimes-metadata approach, and unfortunately I think that leaves a problem. Either...
E
We say you can do this, but you need to make sure there's only one collector, so that you can cache your metadata, and then most of the time you'll be able to do the right thing. But if you have a pool, it'll almost certainly not do the right thing, and you can fail in that condition. The other thing you can do is say: we're gonna have a pool, and just make everything a gauge, and kind of hope for the best, and degrade the user experience.
E
I don't think that's what people really want, though, and that leaves me thinking of more esoteric things: I know there's a Kafka receiver and a Kafka producer at this point, so if you run all your data through Kafka, then perhaps you can ensure that you know the metadata before you process the data. But that's a very expensive change to your system. So I don't know; PRW wasn't really made for what we're asking it to do.
E
So we should get Bogdan back and see what he thinks. I'm...
D
I'm back, I was listening. So far, my initial approach is okay: let's have the naive way, let's have the gauge solution first; then let's try the metadata, and then from there we will learn. I mean, I'm very incremental on this. I don't know how to solve the whole problem. As you pointed out, metadata are not available all the time, and are rarely sent, or based on whatever rules; yeah, it's like every five minutes. Yeah, every five minutes.
D
This will work, as you pointed out, if you don't have it behind a pool of collectors, and you just have it... I mean, one thing that you will be able to do is to replace a collector with this receiver, to replace the sidecar, correct? Because you deploy it close to a Prometheus, only one Prometheus instance, and if you deploy it as a sidecar into the same pod, you have this solution. That's the initial goal for this.
E
That sounds good, yeah. We've talked about how one day we'll want to know, like... it's easy to configure situations that are going to be broken, so we want to have a receiver that says: I can only be used if you know you're a single endpoint.
D
Go ahead, yeah. I think it's reasonable to start with only documentation first and say: hey, this receiver can work only if you deploy the collector as a sidecar to a Prometheus instance, and then from there, via OTLP, you can put the rest of the things behind a load balancer. So as long as you put only one instance here, as a sidecar in your Prometheus pod, everything else will just magically work from there, because it will send OTLP, which is load-balancer friendly.
D
I'm not a hundred percent sure what their high-availability story in Prometheus is. Is there a main one, or...?
E
If I understand correctly, the way that it works is that you have these external labels you can add, and then multiple Prometheus servers can record the same cumulative series, and in theory you can just erase that label by dropping it whenever you're querying. That's my understanding.
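A toy sketch of that dedup-by-dropping idea, under the assumption that HA replicas differ only in one external label; the data shapes here are hypothetical:

```python
def drop_external_label(series_set, label):
    """Collapse HA-replica series that differ only in one external label.

    `series_set` maps a frozenset of (label, value) pairs to that series'
    samples. Dropping the replica label makes duplicated series identical,
    so we keep one copy per remaining label set (the first seen).
    """
    out = {}
    for labels, samples in series_set.items():
        kept = frozenset((k, v) for k, v in labels if k != label)
        out.setdefault(kept, samples)
    return out
```

The "first seen wins" choice is arbitrary; a real deduplicator would pick a preferred replica or merge samples, but the shape of the operation is the same.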
D
Okay, so I will look into that. Maybe, Mike, you can do something: you can file an issue and explain some things that you know about this problem, and we can look into solving that.
E
I'm curious, Bogdan, what led you to want this, to ask a stupid question. The reason I ask is just because, when we at Lightstep looked at this possibility ourselves, adding PRW support at our ingestion point, we concluded that this was maybe more trouble than it was worth. That's why I ended up going to the sidecar code base from Stackdriver and porting it, making it up to date. So I'm wondering what the differences are now between this...
D
It's exactly the same thing, but I think the sidecar is very minimal, without all the capabilities of retrying, queueing and everything that the collector has, and I was actually willing to copy-paste a lot of your code from there. It's just that, I think, having it here instead of in a standalone sidecar binary gives people, even though you deploy it as a sidecar, a bunch more options to...
E
That's good! That's what I wanted to hear. Hopefully, feature-wise, we get basically the same output from these two, yeah.
D
That's what I want, the same thing. And the reason we are not supporting this on our ingest: people are annoyed if we ask them to run yet another binary. But if we tell them, hey, it's the same binary, put it here as well, then as long as it's the same binary, for them it's much better.
D
And I'm just following and doing that. It makes sense; it's good, it makes sense, because then you can prove that the code is tested at all. It's not a different binary, not a different set of tests and stuff like that, and it's just a simpler way. Again, I was hoping to steal all your code from there. That's...
E
That's good timing. Well, thank you all. I think this is our last meeting of the year. I intend to take a break myself, and I hope you all do too. We will be reaching out with information about an all-day meeting sometime in, probably...