From YouTube: 2020-10-01 meeting
B
Hey, cool, all right. I didn't get as much time to put into agenda setting today as I wanted, so feel free to add anything to the agenda.
B
I have a number of topics that I'm going to start discussing, if nobody else puts anything up.
B
All right, let's go. I thought that we should talk this week a little bit about the current state of the collector with regard to metrics, and, how should I say this, I'm starting to think that... wait. Have I shared... I've shared my screen. I am sharing my screen.
B
The problem we're facing, I think, is that there are few users coming to OpenTelemetry with a brand-new code base that they're going to instrument. Many users are coming to OpenTelemetry with an existing code base that has statsd or Prometheus, usually, or one of the other libraries.
B
Users are not going to be able to migrate to OpenTelemetry until the collector supports a gradual migration path for them, so that they can keep running their Prometheus and their statsd while they begin to adopt OpenTelemetry. And this is going to be a big effort, because it's not just about getting the data from one protocol into another; it's about the entire ecosystem of metric names and labels that are already in use, that people are using.
B
Now, there is definitely movement to get those metrics and labels all specified as semantic conventions for OpenTelemetry, but users are going to be in a place where their existing dashboards and alerting and monitoring are all written for certain names, and I think that, to make this story work, we have to make the OpenTelemetry collector start to work. At that point, users can begin to send data through the collector, and we can begin to have a mixed deployment of OpenTelemetry libraries and some other existing system.
B
That's sort of the most important thing there. There is this existing Prometheus receiver support in the collector, and there are new issues that have been filed. There's also an ongoing effort to get statsd into the collector. I think none of this stuff is 100% working at this point in time. So what I'm trying to say is, I think that should become our highest priority, and so I thought I would raise this point. I'm going to click on a few of my own links here, just to show you what I mean. Bogdan had raised, two weeks ago, the idea that we might actually move or replace the Prometheus receiver; that was discussed. I don't think that's going to fly, after thinking about it some more. The Prometheus receiver is this very large piece of complicated code, because it pulls in all the Prometheus service discovery logic.
B
So it's half of the Prometheus server, essentially, and I think it's still written to use the OpenCensus protocol. There was talk of moving it into a contrib directory, but I don't think that's going to change the problem. The problem is, people really need it to work. There was an issue filed this week about it as well, talking about how it's really not quite working.
B
So this is the sort of issue that I think we need to put some attention towards. Not to get into the details here, but what we're seeing, I think, is that the OpenTelemetry collector was written using the OpenCensus protocol for a long time, and much of the code is still using the OpenCensus protocol. The conversion between OpenCensus and OpenTelemetry is not 100% correct, or it's introducing the sort of problem that we just saw.
B
So I've just described a problem, and I'd love to talk about solutions, but I don't know that we have anyone who is an owner or maintainer of the collector here at the moment. Would anyone who knows more than me like to talk about their role in the collector?
B
You're all welcome to say no. I was hoping to get Bogdan on this call; I didn't ping him for it, but I'll make an action item for myself to do some research, try to figure out what the state of the world is and how much commitment people have to making some of these changes, and so on, and try to bring that back to this group next week. Okay, so we don't have any collector owner on the call? That's going to make this topic hard.
B
To finish up, I had posted an issue just recently. This talks about a little bit of a technical solution that I don't want to discuss here without the right collector folks in the room, but basically there's been this idea, discussed sort of on the periphery of the collector project, that we could begin using the OTel Go SDK as a processing stage in the collector, since we've already implemented much of the machinery needed to do aggregation over a time window and to export OTLP.
B
So the metrics machinery to turn events into OTLP is done, and we should be able to use that in the collector. I've come up with a change here, this 1220; it's actually an issue, but I did link to a change that talks about how we can change the SDK to support the collector. So that's an issue that I'm going to start to raise; I just wanted to link to it here.
B
One of the reasons why it's hard to talk to Prometheus users about migrating to OpenTelemetry is that this up variable has not been implemented in the OpenTelemetry collector's Prometheus receiver. I think I want to begin to propose that we should have a semantic convention for upness at some level.
B
So I didn't write the issue yet; maybe I won't be the one to write the issue, but I want to talk about it a little bit. When a Prometheus server scrapes a target, it gets all those variables, all the metrics state from that target, and then it introduces a synthetic metric that says this target is up, with the job and instance labels. People monitor those variables.
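The scrape-then-synthesize behavior described here can be sketched as follows; the function and sample shapes are illustrative, not Prometheus's actual internals:

```python
def scrape_with_up(job, instance, scrape_fn):
    """Collect a target's samples and append the synthetic `up` series, the
    way a Prometheus server does: up{job, instance} is 1 on a successful
    scrape and 0 on failure, so monitors can alert on target liveness."""
    labels = {"job": job, "instance": instance}
    try:
        samples = list(scrape_fn())
        up_value = 1
    except Exception:
        samples, up_value = [], 0  # scrape failed: report only `up` = 0
    samples.append(("up", labels, up_value))
    return samples

def healthy():
    return [("http_requests_total", {}, 42)]

def unreachable():
    raise ConnectionError("connection refused")

ok = scrape_with_up("api", "10.0.0.5:9090", healthy)
down = scrape_with_up("api", "10.0.0.6:9090", unreachable)
```

The proposal in the discussion is to synthesize an equivalent sample when OTLP data is exported or received, so the same liveness signal exists whether data is pushed or pulled.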
B
So the idea, I think, is that we could synthesize an up variable whenever an OTLP packet... whenever a report is made or received; it could be one or the other. This requires some investigation: essentially, replace that up variable that Prometheus gets by scraping with something that we could count on being present, whether you're pushing your data or pulling your data.
B
I haven't written out an issue; I think there's some analysis needed. If anyone else is interested in this topic, I welcome you to hop onto it, but this was really just a placeholder, since I have not written an issue or a proposal.
B
Okay, eventually we'll get to something where someone else wants to talk. Okay, so next week I'll know more about Prometheus, if no one else beats me to it, and the collector. I mentioned this issue about transient descriptors.
B
You can read into that, but it's connected with this spec issue I filed. Many of you may have reviewed this recent PR that went in to put in some accumulator details. It included some text that nobody objected to, and now I want to ask if anyone actually objects to this text.
B
It came right about here. This is a change from OpenCensus, and I think it's a well-justified change, so I want to get some people to approve this idea. Basically, we added named meters to the OTel system so that you could separate your instruments and have the same instrument registered in two places. The OpenCensus system didn't have that, but it did talk about what happens if you try to register the same instrument.
B
This is actually already written in the spec, so if we disagree, we should change the spec. I was expecting Bogdan to disagree, but since he's not here, I don't know if anybody else wants to talk about it. This little clause makes the other story about transient descriptors a little bit simpler; that's the reason they're connected, to me, because I don't really want to have to write logic to say:
B
"Oh, this has the same name and the same type and the same number kind and the same unit as this other instrument that was already registered; therefore, I'm going to return the same instrument." Because I don't think that's necessarily what users expect, and it could create semantic problems as well. So, anyway, I wanted to make that illegal. It's a pretty minor issue.
B
Well, scratch that; didn't it say...
B
So in this case it returns an error, and I can speak for the Go SDK: the Go SDK will return you an instrument, a no-op instrument, and an error, and then there's a helper utility that lets you create a crash-if-error, or fatal, version of that, which will crash when the instruments are not constructed, as a convenience.
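A minimal sketch of the behavior being described, with hypothetical names (not the actual Go SDK API): duplicate registration yields a usable no-op instrument plus an error, and a must-style helper turns that error into a crash for callers who prefer it:

```python
class DuplicateInstrumentError(Exception):
    pass

class NoopInstrument:
    """Accepts measurements and discards them, so callers that ignore the
    returned error still hold a safe, working object."""
    def record(self, value):
        pass

class Counter:
    def __init__(self, name):
        self.name = name
        self.total = 0

    def record(self, value):
        self.total += value

class Meter:
    def __init__(self):
        self._instruments = {}

    def new_counter(self, name):
        # Registering the same name twice is an error, rather than silently
        # returning the previously registered instrument.
        if name in self._instruments:
            return NoopInstrument(), DuplicateInstrumentError(name)
        counter = Counter(name)
        self._instruments[name] = counter
        return counter, None

def must(result):
    """Fatal-on-error convenience wrapper, like the helper described above."""
    instrument, err = result
    if err is not None:
        raise err
    return instrument
```

The design choice here is that ignoring the error never leaves the caller with a nil or broken instrument, only a silent one.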
A
Okay, is this kind of language standard in our spec, where it says that the SDK must have...?
B
Yeah, I think we should probably focus a little bit of editorial work on that notion, but yeah, I think that's a little imprecise. This is, though, sort of a cross-language spec, so, you know, you could say throw an exception, you could say return an error, you could say some sort of behavior that's saying "not success," right? So, yes, I think you're right, that could be tightened up.
B
It's okay, totally fine. I just... I intended to write up some more issues before the meeting here, but didn't, so I dropped in a couple of items, but I don't have a firm agenda there, so maybe we could move forward and continue talking about semantic convention issues.
D
All right, yeah, that's me, so I put that on there. We have some folks on our serverless team who are interested in working on a PR to address this issue. One of the questions I had was just about getting them assigned to the issue in GitHub.
B
Joshua, I take it that you're intending to help us triage issues going forward, so we can talk about getting you that permission. I know that we've done this for other people, so, okay.
B
I can do that, okay. So my other feeling, when I see this issue filed, is that we've faced off this question in HTTP already: we have a set of semantic conventions for HTTP spans. I don't believe that we should be inventing a new set of semantic conventions for HTTP metrics, and we sort of got through that when we settled it. So I haven't looked at the function-as-a-service conventions, but off the top of your heads...
B
Yeah, I would go for that too. It's sort of unfortunate that we end up creating, like, a copy of every document, one for spans and then one for metrics with an extra column saying yes or no for high cardinality. I think there's probably some editorial work we could do to combine them and have that metrics column be optional.
D
Yeah, I think you can assign; let me make sure they're already in the project.
D
And see what their name would be: kolanos, k-o-l-a-n-o-s. That's Michael Lavers.
E
Yeah, they need to be members of the OpenTelemetry org to get assigned without being involved with the issue. If they want to actually just take the issue on, what they can do is also just comment on the issue, and then they can get assigned, yeah, and then we can find them, yeah.
B
It has not been lost; I've not forgotten that we have these PRs. They're big and important and should be merging soon. Aaron, are you on the call, and would you like to speak to this PR and what, if anything, needs to be done to get it merged?
F
Sure, I've addressed a lot of the comments. I think I just need to add descriptions, and there's still some discussion about this system io time; there's a discussion somewhere at the top. I was wondering... I think this change is already in the collector. If you go near the bottom, there's a comment about, like, filesystem.type, right there, up a little bit.
F
Adding a metric for filesystem.type and, like, the filesystem mount point... I'm not sure if these would be metrics, because they're kind of like constants, so, like, what would you put for units? And I was wondering if they should be resources instead; they sound like resources.
B
But this, to me... I'm wondering if this gets to the point where we want multiple resources. There have been proposals and discussion about having the coexistence of multiple resources within a single process, and there's been some resistance to that notion.
B
I don't have a strong feeling on this particular issue, but I think we should get this merged; it's been open forever.
B
Aaron, what I sort of intend to say, first of all, is that you've done great work on this. I also intend to nominate you as an approver for the spec, and so I'm tending to say: I think you've done great, and I want you to make the call on when this is ready to merge.
B
That's funny, no one's even approved it. Well, that's silly; I'll approve it, I've read it a bunch of times. So, okay, please review and approve this PR if you're on the call, and especially if you're an approver; that would be great.
A
So if we approve and merge this one, then we're punting on the idea of multiple resources, of factoring out the file system?
B
Well, yeah, that's been punted on at the OTel level. If anyone remembers issue 78, I'd prototyped something here. I love the idea, but there's no way it's going to happen by GA; it's a huge change. And so if that means that we should just have lots of labels, because there are multiple mount points, I think that's okay; I don't see an alternative that we can get through.
F
Oh, I'm sorry, I see, these are labels. I think I completely misunderstood; actually, they're just formatted kind of weird with the dot, but I think it's just label keys for all the metrics, I guess.
B
It's okay with me, at least. Wonderful, all right. I will review that one, just so I know what you're talking about next time. I put a couple of, like, empty bullets here, but I've got product people and Lightstep engineers who are actually trying to use some of the work we've done nowadays, and we're finding things that we are missing.
B
So one of the things that we don't have out of the box is a host label. I believe we have the right semantic conventions written, but we don't have, like, sort of standard modules of code that provide standard resource labels, and there have been a few attempts to do resource-detection modules in the past that have not actually ended up merging; I don't recall exactly the details why. But that's something that we're missing right now.
B
Like: get my AWS labels, get my GCP labels, get my Azure labels, get my Kubernetes labels, get my host system labels, get all those labels. We don't have that, really. So that's something that's, I think, certainly missing. It also affects trace, and I wonder why it hasn't come up as much already; probably because people are migrating to OpenTelemetry from other tracing libraries where they already had all their resources standing right there. But that's a theory; I don't know. So I intend to do some work to figure that out.
A
So, are you suggesting that we add labels, that we're duplicating attributes from resources into metrics as a label?
B
Receive OTLP: I can do that myself when I put them into my Lightstep database or whatever, right? I can promote resources into attributes or whatever. FYI, here's a preview: there are some Lightstep product people who are going to propose next week (and Tyler has made this proposal already) that the fact that we use labels for metrics and attributes for spans is still really confusing.
B
Hey, I'm going to propose we use labels for spans, but that's just me. Anyway, that'll happen. I put this bullet here about OTLP status. I know that we still haven't talked about raw values; I know that we still have it, but we have this issue, which I didn't prepare anything new about to talk about today.
B
I haven't read it yet, though, but I'm looking forward to reading it; I skimmed over it. I saw... uk, are you on the call?
B
No, I thought I did... yes, I am. Oh, good, great, I did see you. Okay, please, from skimming this, I think I guess what it says, but would you like to say it out loud?
G
Sure. I think the basic idea is that, with this very small change to the existing exponential format, we can support a hybrid exponential-linear format. Basically, within each exponential bucket, you subdivide it into multiple linear sub-buckets. In the format, you just add a new field in the exponential message, say, number of linear sub-buckets.
G
It can be optional, or default one, or it can be just required, and if you're standard exponential, you just set it to one. The purpose of this is to make a class of histograms very efficient in encoding the boundaries. This class is basically the log-linear class, which includes HDR histogram and the circllhist, possibly others; these are the two that are mentioned in this thread.
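The boundary rule being proposed can be sketched roughly like this (illustrative names, not the actual protocol fields): each exponential bucket [base^k, base^(k+1)) is split into n equal linear sub-buckets, and setting n = 1 degenerates to the plain exponential format:

```python
def sub_bucket_lower_bound(base, n_linear, exp_index, sub_index):
    """Lower boundary of linear sub-bucket `sub_index` (0-based) within the
    exponential bucket [base**exp_index, base**(exp_index + 1)), which is
    divided into n_linear equal-width linear sub-buckets."""
    low = base ** exp_index
    width = (base ** (exp_index + 1) - low) / n_linear
    return low + sub_index * width

# With base 10 and 90 equal sub-buckets per decade, the boundaries land on
# two significant decimal digits: 1.0, 1.1, 1.2, ..., 9.9, then 10, 11, ...
decade0 = [sub_bucket_lower_bound(10, 90, 0, i) for i in range(3)]

# n_linear = 1 reduces to the plain exponential format: boundary is base**k.
plain = sub_bucket_lower_bound(2, 1, 3, 0)
```

The compactness comes from the fact that the whole boundary layout is carried by two small numbers (base and n_linear) instead of an explicit boundary list.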
B
Yeah, I get it. There's something that I haven't spent time on understanding myself here, but what I'm missing right now is the comparison between this exponential message and the one that's currently in OpenMetrics, which is going to be really hard to find at the moment... but wait, I linked to it.
B
So I may have misunderstood something, and hopefully that's true, but I didn't interpret that the circllhist could use this type of structure. Lightstep has its own histogram internally that looks a lot like this one as well, so I think this is a good answer, but I need to check some things for my own sanity to make sure I understand how this will work for some of these.
E
Josh, just to make sure that we're not going down the wrong path: uk was recommending not this specific format, but taking this format and extending it with the number-of-linear-sub-buckets field, yeah.
B
Okay, thank you; I see. I skimmed it and didn't quite follow. Yes, okay. So what you're saying is that this struct doesn't quite cover HDR histogram and circllhist, but if we could extend this structure, then that's really good. That's the type of question I was actually going to ask.
B
I know that we can express almost every one of these algorithms using explicit buckets, and I do think that getting the semantics, like a histogram where you don't have to set boundaries, is more important than getting compression. What this encoding is about is compression. So the first problem is: do we agree that we want to have dynamic histograms?
B
The second part is how we're going to compress them, and I think what this proposal says is that there's a large class of these that can be compressed by this simple struct, although it doesn't cover DDSketch, and I think that's a concern. My next question is going to be something like: how much are we willing to tolerate different bucket descriptions in order to have compression?
B
I don't want to have to implement both a DDSketch compressed histogram bucket and this linear-exponential thing as well, unless I have to, so there's a question here of how much we have to do. But I appreciate your clarifying this, uk. As you may know from last week, the circllhist has a new type of appeal to me, because of the way it translates into human-readable label boundaries, which makes it appealing for Prometheus export.
G
It's probably the best graphic; just scroll down and you'll see graphs. Okay, that's it; maybe a couple of lines up. Okay, top of the page here. Their exponent factor is basically that each bucket is 10x; that's their top-level exponential in this scheme, and then within each one they divide it again into sub-buckets, so they ended up with a sequence like 1.0, 1.1, 1.2, as you see here, and the next bucket is 10.
B
If every client comes up with this... well, I need to say this carefully. This produces consistent boundaries across all the clients, because they're looking for round base-10 numbers, essentially, so these are both human-readable and going to merge correctly, because all the clients are going to generate the same buckets.
B
I'm thinking of asking Michael, on the call, what you think. I know we spent a couple of weeks studying DDSketch and its appeal; I think it still has appeal, but I'm also seeing the appeal of this structure here. We can't compress them both in the same way; that's, yeah, my first-level take.
H
Please go ahead. Well, I just had a couple of quick questions. Does this one support only integers, or also any other value?
B
This, I think, supports integers and floating points; it's just that the boundaries are placed on decimal powers, basically. Okay.
H
That's what I was trying to understand, yeah. I have to read a little bit more about this. The other thing is that it seems like it requires you to specify the number of buckets, if not the boundaries of the buckets, which is still, unless I'm under a misunderstanding, more reasoning that we're pushing towards the human.
G
There are really two issues here. One is that, for this class of histogram structures, with a very small modification of the existing exponential bucket, you can represent a whole class of things like this. The next question is: decimal or not? The exponential format is already flexible; you use it to specify the exponential growth factor, which would be 10 here, and then for the number of sub-buckets you can choose, say, 10 linear buckets or 100 linear buckets, so the format is flexible.
B
I was reminded of how I initially asked some questions about DDSketch where I had some confusion: I was asking about some parameters that were parameters of the implementation, not parameters of the representation. I would have to go back to that older issue, but I was asking: what are the variables that have to go into the protocol message for us to decode this data structure? And it turned out that Charles, I think his name was, said no.
B
These are just parameters for the code that generates the histogram; the histogram itself doesn't have those parameters, you just need gamma, or whatever it was. I think what uk is talking about are similar parameters, which determine how you compute this histogram, not how you represent it.
G
Okay, you're right, there are two classes of parameters here. So, about DDSketch: I've read the DDSketch paper too, and my understanding is that it's also in the exponential family, in that you define a growth factor. But what I'm not quite sure about is: could a standard exponential format like this represent DDSketch? Does it strictly use the exponential growth factor? Because I read somewhere that in some implementations they use a polynomial approximation of log.
H
He's not on the phone like last time, but I can just send him your comment here to understand it a little bit more, and we'll add a comment, probably this evening or tomorrow, to continue the discussion. I have to understand it too; I'm also reading it for the first time now.
B
I think I would have to reread these to answer this question myself, but I think there is a good proposal being made here. I would like us to be able to choose any library that's available, first of all, and I would like it if we could reduce the number of encodings that we need to support to some smaller subset; I don't want to have one for circllhist and one for DDSketch and one for, like, HDR.
B
So I think the proposal is that we might be able to find one compromise which is not going to be as compressed as the DDSketch one that we discussed a couple of weeks ago, but it's not going to be anywhere near as bad as explicit buckets, which are going to be bad. So, is there a good middle ground? That's my question, and I think that's what's being proposed here, and I need to study it myself.
B
Yeah, that was the answer that the DDSketch proposal was going to take as well. I raised a question, maybe in this document here, but there is this problem with zero as well. How do you represent the zeros, and does that require its own bucket? And then do we have to change the spec to say buckets can have zero width, because currently there's no such thing as a zero-width bucket? That was a question I have that I don't think is too urgent, but...
B
This was all discussed on 9/19; we went into this at length, this topic has been discussed. So this was the sort of best-case proposal from the DDSketch author; it looks roughly like this, and it would have a zero bucket as well as a range of positive and negative buckets, and one parameter, which is gamma.
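For comparison, a minimal sketch of DDSketch-style indexing (illustrative code, not the actual DataDog library): the only representation parameter is gamma, derived from the target relative accuracy, and, as raised above, zero needs its own bucket because it does not fit the log mapping:

```python
import math

def ddsketch_index(value, gamma):
    """Bucket index for a positive value: bucket i covers (gamma**(i-1), gamma**i]."""
    return math.ceil(math.log(value, gamma))

def ddsketch_estimate(index, gamma):
    """Representative value for bucket `index`; relative error stays <= alpha."""
    return 2 * gamma ** index / (gamma + 1)

alpha = 0.01                       # target relative accuracy (1%)
gamma = (1 + alpha) / (1 - alpha)  # the single parameter the protocol must carry

value = 123.456
estimate = ddsketch_estimate(ddsketch_index(value, gamma), gamma)
assert abs(estimate - value) / value <= alpha

# zero (and negative) values fall outside the log mapping: hence the
# dedicated zero bucket, and the zero-width-bucket question for the spec
```

Parameters like how many buckets an implementation keeps before collapsing are implementation details; only gamma (and the zero bucket) must travel in the wire format.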
H
So we can weigh in here. We can also link... I don't know if it'll help, but we did implement the protocol that we sent yesterday or last week, in Java, and we can link that as well, in case that helps.
G
So I still have one more question, probably most relevant to the exponential part and to the linear too: that is, the accumulated error. Logically, you just say the nth bucket boundary is the growth factor raised to the power n, and if the growth factor is an integer, that's probably fine. But if the growth factor is not an integer, let's say 1.1 raised to the power of 2030...
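The accumulated-error concern can be seen directly with a small numeric check (nothing protocol-specific here): computing the nth boundary as one power differs slightly from accumulating n repeated multiplications, so decoders have to agree on a single rule:

```python
base, n = 1.1, 2030

direct = base ** n          # one pow call: error of at most a few ulps
accumulated = 1.0
for _ in range(n):
    accumulated *= base     # 2030 roundings, each adding a tiny drift

relative_drift = abs(accumulated - direct) / direct
# The drift is tiny, but it is exactly the kind of disagreement two
# implementations must not have about a shared bucket boundary.
```

A spec that carries only the growth factor would need to pin down which computation defines the boundary, so that independently written encoders and decoders agree bit-for-bit, or at least to within a stated tolerance.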
B
Well, I think this is progress, so there are some questions that we should all think about here. I myself would like to have the answer to whether DDSketch can fit this proposal, because that would make it a really good lower bound, or a good compromise.
B
That makes sense to me, not that I know why. So this is great. As far as the agenda goes, we have definitely covered everything I had intended to talk about at this point, although we didn't go into depth on some of the stuff, because there are no issues written up.
B
I think we've talked ourselves out, then. I hope to have filed some new issues for next week to talk more about the collector; otherwise, I think we can call this meeting.
B
I was listening... oh, I didn't... anyway, I thought you weren't here. Back at the beginning, we talked about the collector's Prometheus receiver, and I made some statements about how I think it's really important that we get that back to life, to the point where it's reliable and functional and people can use it. I may have said the opposite of that at some point in the past, but I have sort of a new understanding of this problem from talking to some engineers, staff, and some product people, like that.
B
Which is a separate question. I think that in an ideal world we would have collector transformation modules and be like, okay: first we're going to scrape your Prometheus targets, then we're going to apply the standard renaming of your Prometheus variables into OTel semantic convention names, and then, hopefully, a system built for OTel will work with that data really well. I think that's roughly what I want.
C
Perfect, that's great. We should always... we always accept more help. One thing that I'm not sure we should make our focus is somebody having an OTel library exposing Prometheus, and we scrape it and get OTel back that way; I think that's something of an anti-pattern. I think the way we build the Prometheus receiver should be more for: you have instrumented with the Prometheus library, and we're going to scrape it to put the data in our ecosystem, not between two components in our ecosystem.
B
Yeah, I think that's a reasonable thing. I guess there's a case where somebody says: I'm writing some new code and I want to try out OTLP or OTel, but I've got my Prometheus system already running, so I'm going to try to scrape my OTel binary from Prometheus, but I'm not talking to the OTel collector. Yes. The only other item in this... this was this up variable.
B
The up variable... the up variable is, sorry, a metric. It's synthesized whenever you scrape a target: that target gets a metric literally named "up," two characters, u-p. It has labels job and instance, and the up variable is a one saying: this process was up at this moment in time. What we can do is... I think it requires some care and some thinking, so I just want to blurt it out, but basically the idea would be that either when we export, or when we receive OTLP from a client...
B
That client is up at that moment, and we can make a metric to say the client was up. I don't know whether it happens on the client side or the server side, but I just think that something should happen here, and that way we can have an up variable that's independent of Prometheus, and we can transform it and we can name it "up." I don't know if that's the right name, but the point is that OTLP should have an up variable.
C
Uptime, yeah, we added that. I don't know if we added that in Google after you left, in Census, but it's exactly the same thing. Two use cases were very nice about the uptime metric: one, correctness of your entire pipeline, because that needs to match the number of targets that you see as up, for example, in Kubernetes or in Borg.
C
So that was one of the use cases for us, and secondly, it was to detect when services are down. Anyway, yeah, both are nice, and I think the proposal should come from the client, in my opinion, because that's where the data are generated, so I think we should have that.