From YouTube: 2021-07-13 meeting
B
Josh, so I tried to put 14 minutes. Do you think this will be good?
C
I hope, I hope less, I don't know. Actually, I know George is here and we're hoping to hear from George, and maybe, if there's time left, we can just have a broad discussion about what's left to decide, because I don't really know how to make a decision by myself.
C
And usually we wait about this long, so I'd say that, given we just ended another meeting, we can start up again right away. I think probably everyone here has the context that we are discussing a histogram format, and there was an implementation done at Dynatrace.
C
The DynaHist package is a fairly extensive software package for histogram implementations that includes other options, as I understand it, and so a month or two ago Otmar added another implementation to the framework that would support this format.
C
So let's talk, let's talk about it. Gary, do you want to give us some more information?
B
Sure, let me just try and share, I'm not sure about this. If this works, can you see this, or is that not working? I got it: okay, okay, great, all right! So, as Josh already said, DynaHist is a histogram implementation, and there are basically two parts to it.
B
So there is the data side, how the data is stored, and then there is the mapping side, how data is mapped to whatever layout you want to use. Today I am going to show you how it works with the exponential layout and how data is mapped into the exponential layout. The underlying idea here is that, basically, a double is made up of two parts.
B
It's made up of the mantissa and the exponent, plus the sign bit, which I'm going to leave out for now; it is used, but only at a later stage. The mantissa is normalized for every double, so it's always between one and two, and each exponent sort of shifts that mantissa interval to get the whole range that you can represent in a normal 64-bit double.
B
So the idea behind this implementation is basically to map the mantissa to the correct bucket inside this mantissa interval between one and two, and we are using a lookup table for that, which is quite fast. And then the exponent, the second part of the double, is used to sort of shift, or decide, which of those mantissa intervals is used.
B
Okay, so maybe just a very short overview of this exponential layout that we're using. Basically, the bucket bounds for the mantissa part are calculated by the formula that you can see here, so that will always yield values between one and two. And then you have a parameter that you can tune here, and that's the precision.
B
So the precision basically tells you how many buckets you want in this range between one and two; the number of buckets is a power of two. If you, for example, set this value to one, precision equals one, then you will get two buckets, and that is represented by the orange ticks here ('ticks', I think, is the correct word). So if you had set your precision to one, you would get this orange bucket boundary at one.
B
If you up the precision, for example, if you use a precision of two, you would get four buckets, so two to the power of two, and in this case you would additionally get the buckets that I marked here with the blue ticks. So you would get one at 1, one at 1.19, one at 1.41, and so on and so forth.
B
So you would get these two additional bucket boundaries, and that would leave you with a total of four different buckets, right? Okay, so that is basically what we want our mantissa to map to, for the mantissa part between one and two. As I said, the normalized mantissa is always between one and two.
B
So what we do is basically, we have a lookup table that has equidistant buckets, so those buckets are always the same size. I just used the same colors to again represent which ones are at a precision of one and which ones are at a precision of two, and then we create this lookup table once.
B
So we create an array that represents equidistant buckets, and those buckets then map to indices in the exponential array, and now we have a little bit of a problem. Sorry, let me start it this way: the reason why we want to use the equidistant buckets is that, since the mantissa is the part that the power of two gets applied to, it lends itself to this.
B
It's really easy to do a division here, so it's really easy to map the mantissa into this array of equidistant buckets. That's a simple bit-shift operation, and we get the index into this lookup table, and with that index in the lookup table, we can then really quickly look up which index in the exponential layout we want to map to.
B
So now we have a problem. For example, if we are looking at the value 1.23, the division would put it into this first equidistant bucket between 1 and 1.25, but this first bucket between 1 and 1.25 will point to 1, which is obviously not the right bucket, because if we have a value of 1.23, we obviously want it in this bucket, right?
B
So what we do is basically this lookup. It turns out we want index zero, which is this one, and then we look at the next bucket boundary, in this case 1.19, and then we look at the bucket boundary one over. So we look at a total of three bucket boundaries, and this is basically the only thing that we have to do.
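The lookup-table scheme described here can be sketched as follows. This is my own reconstruction of the idea, not DynaHist's actual code, and all names are made up: equidistant cells of [1, 2) each store the index of the exponential bucket covering the cell's left edge, and at most two extra boundary comparisons correct the result.

```go
package main

import (
	"fmt"
	"math"
)

// exponentialBoundaries returns the n+1 bucket bounds 2^(i/n), i = 0..n,
// with n = 2^precision, covering the normalized mantissa range [1, 2].
func exponentialBoundaries(precision uint) []float64 {
	n := 1 << precision
	b := make([]float64, n+1)
	for i := range b {
		b[i] = math.Pow(2, float64(i)/float64(n))
	}
	return b
}

// buildLookup maps each of n equidistant cells of [1, 2) to the index of the
// lowest exponential boundary that is <= the cell's left edge.
func buildLookup(bounds []float64) []int {
	n := len(bounds) - 1
	lut := make([]int, n)
	idx := 0
	for j := 0; j < n; j++ {
		left := 1 + float64(j)/float64(n)
		for idx+1 < n && bounds[idx+1] <= left {
			idx++
		}
		lut[j] = idx
	}
	return lut
}

// mantissaBucket maps a normalized mantissa in [1, 2) to its exponential
// bucket: one lookup plus at most three boundary comparisons, regardless of
// how many buckets there are.
func mantissaBucket(m float64, bounds []float64, lut []int) int {
	n := len(lut)
	j := int((m - 1) * float64(n)) // equidistant cell; on real mantissa bits this is a bit shift
	i := lut[j]
	if i+1 < n && m >= bounds[i+1] {
		i++
		if i+1 < n && m >= bounds[i+1] {
			i++
		}
	}
	return i
}

func main() {
	bounds := exponentialBoundaries(2) // precision 2: bounds 1, 1.19, 1.41, 1.68, 2
	lut := buildLookup(bounds)
	fmt.Println(mantissaBucket(1.23, bounds, lut)) // 1: 1.23 lies in [1.19, 1.41)
	fmt.Println(mantissaBucket(1.61, bounds, lut)) // 2: 1.61 lies in [1.41, 1.68)
}
```

The two printed cases mirror the talk: 1.23 starts in the equidistant cell pointing at bucket 0 and gets corrected up to bucket 1, while 1.61 is smaller than 1.68 and stays where the lookup put it.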
B
We have to look at these three bucket boundaries. That doesn't really look great for this example that I have here, because if there are, in this case, four buckets, having to look at three bucket boundaries isn't great. But this number is constant: it doesn't change, no matter how many buckets we have.
B
So, even if we have, I don't know, 128 buckets between one and two, we would still only have to look at three bucket boundaries, so that obviously makes adding values to this histogram implementation really quick, and all of these operations are basically bit-shift operations. So there is this bit-shift operation that represents this division up here, and then it's just the lookup; then it's basically three lookups here. So we have one; we look, and yeah, obviously it's larger than one.
B
Also, in this case it's not entirely visible, but there would be cases where there is actually more than one bucket boundary inside one of the equidistant buckets. But I think it's mathematically sound, and I'm pretty sure that's how it works. So, great: now we have taken the mantissa from the double and we have mapped the mantissa to the correct bucket. So we know, for example, our mantissa belongs in some bucket.
B
I don't know, let's again use the value from before: we have a value of 1.61 and we want to map it into this array. So 1.61, obviously, would fall into this bucket; we would go down here, we would see the case 'smaller than 1.68', so we would leave it in this bucket. So we now have our index in the mantissa part. But, as I said, there is a mantissa range like this for every exponent that a double can store, and that's 11 bits.
B
I believe; it's a little bit different because of subnormal doubles and so on and so forth, and we can get into the nitty-gritty stuff a little later. But for every exponent there will be one of these ranges, and that's the next thing that we have to do, now that we have our mantissa part mapped.
B
We still need to map it onto the whole double range. For example, if we have an exponent of zero, then that would mean that we have the values between one and two. So you have the double and you split it up into the exponent and the mantissa.
B
If the exponent part is zero, then the whole mantissa part for that one exponent will map to between one and two. If you have an exponent of one, the whole mantissa part of that double will map to between two and four. If you have an exponent of two, it will map into the interval between four and eight, and so on and so forth.
B
So it will then go to 16, 32, and so on. And again, we can separate the mantissa part and the exponent part, so if we just use this exponent part, it's really easy to offset which mantissa range we want to use; it's again a simple addition. We don't need to do any multiplications; we don't need to do any logarithms or anything here.
B
So it's just a simple addition of the exponent to offset which of the mantissa ranges we're going to use. Okay, so that is the high-level overview. Are there any questions about the high-level overview? I guess we could go into the code if you're interested, but that's basically the top-down view.
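Putting the two halves together, here is a hedged sketch of the full mapping in Go. It is my own simplification, not the code shown in the meeting: the mantissa bucket is found by a plain search instead of the lookup table, and the sign bit, subnormals, and the index offset discussed later are all ignored.

```go
package main

import (
	"fmt"
	"math"
)

// binIndex sketches the final assembly: the unbiased exponent selects which
// copy of the mantissa range [1, 2) the value falls in, and a simple
// addition offsets the mantissa's bucket index by exponent * 2^precision.
func binIndex(v float64, precision uint) int {
	n := 1 << precision
	bits := math.Float64bits(v)
	exp := int((bits>>52)&0x7FF) - 1023            // unbiased exponent
	m := 1 + float64(bits&(1<<52-1))/(1<<52)       // normalized mantissa in [1, 2)
	i := 0                                          // mantissa bucket, found by plain search here
	for i+1 < n && m >= math.Pow(2, float64(i+1)/float64(n)) {
		i++
	}
	return exp*n + i
}

func main() {
	// Precision 3: eight buckets per power of two.
	fmt.Println(binIndex(1.0, 3))  // exponent 0, first mantissa bucket: 0
	fmt.Println(binIndex(2.0, 3))  // exponent 1: 8
	fmt.Println(binIndex(4.0, 3))  // exponent 2: 16
	fmt.Println(binIndex(0.5, 3))  // exponent -1: -8
	fmt.Println(binIndex(6.44, 3)) // exponent 2, mantissa bucket 5: 21
}
```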
C
Could you go back a couple of slides, or maybe just one slide, and we'll talk about the mathematical intuition here, maybe to develop it a little bit more. So the idea is that you only have to look at potentially three buckets, and I'm gathering that there's something special about, like, the minimum value.
C
If it was i and i plus two: because when you add two, you're offset by one in the lower-precision scheme, therefore you must be one bucket greater in the lower precision. I mean, I'm trying to understand why that number three comes out.
B
I mean, I understood it at some point, but I'm not entirely sure if I can do it justice. So the idea, basically, is that the buckets always get bigger, right? So in this case it's 1, then 1.19, and then there's 0.22 between those two, and then it's somewhere closer to 0.3 between those two buckets, or no, that's not right.
B
But like 0.25, 0.27, and yeah, about 0.32 between those two. And the idea, basically, is, and I might have to channel Otmar again for this, but he said that if you have two of those smaller buckets, as they get bigger: if you just take the smaller bucket twice, then you would still have less than if you just have one bigger bucket.
B
Yeah, and basically the argument is that if that is the case, if you have two smaller buckets that together are still smaller than one of the bigger buckets, then if you only had smaller buckets, you wouldn't actually make it all the way to two.
B
So that's a very unmathematical way of putting it, but.
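The hand-wavy argument can be checked numerically: if the two smallest exponential buckets together are wider than one equidistant lookup cell, then a cell can contain at most two bucket boundaries, which is where the bound of three candidate buckets comes from. This small check is my own, not from the talk:

```go
package main

import (
	"fmt"
	"math"
)

// twoSmallestWider reports whether the two smallest exponential buckets of
// [1, 2) together, i.e. [1, 2^(2/n)), are wider than one equidistant cell
// of width 1/n. When true, no cell can hold more than two bucket boundaries.
func twoSmallestWider(precision uint) bool {
	n := float64(int(1) << precision)
	twoBuckets := math.Pow(2, 2/n) - 1 // combined width of the two smallest buckets
	cell := 1 / n                      // width of one equidistant cell of [1, 2)
	return twoBuckets > cell
}

func main() {
	for p := uint(1); p <= 10; p++ {
		fmt.Println(p, twoSmallestWider(p)) // true for every precision
	}
}
```

In the limit this is just 2·ln 2 ≈ 1.386 > 1, so the property holds for any precision.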
B
Yeah, I mean, I've just now opened something I created. I can share that if anyone's interested. So I basically just copied Otmar's code and rewrote it in Go, and sort of just did all the same things.
B
Yes, okay. Can you read that, or is that too small? Can you zoom in several times? Let me just, where is the, where is the zoom window here. You put the link in the chat window icon.
B
Where is it? No, that's not it, because as long as I'm sharing, I cannot see the chat. Here it is, okay. Did that work? Yes, okay, right! So, basically, if I scroll down here a little bit, I have the usage, or how you would use it. So, basically, what you do is you create an exponential layout with a certain precision; in this case it's three, which means two to the power of three.
B
We have eight buckets between one and two, right, and then you would call this mapToBinIndex method, and you pass in a double and you get an integer out, and the integer then represents the index into the exponential array for the whole double range.
B
If we have a precision of three, we have eight buckets between one and two, and then we have eight buckets between two and four, and then we have eight buckets between four and eight, and so on and so forth. And as we see, between one and two there are exactly eight buckets, and between two and three, obviously, there are not eight buckets, because between two and four there are eight buckets, so it's actually a bit fewer.
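That "eight buckets per power of two, fewer per plain unit interval" point can be illustrated by counting bucket lower bounds per interval; this small sketch (my own, with a made-up helper name) assumes the whole-range bounds are 2^(j/8) for integer j:

```go
package main

import (
	"fmt"
	"math"
)

// lowerBoundsIn counts how many bucket lower bounds 2^(j/8) fall inside
// [lo, hi) for precision 3, i.e. eight buckets per power of two.
func lowerBoundsIn(lo, hi float64) int {
	const n = 8
	count := 0
	for j := -8 * n; j < 8*n; j++ { // scan a generous range of bucket indexes
		b := math.Pow(2, float64(j)/n)
		if b >= lo && b < hi {
			count++
		}
	}
	return count
}

func main() {
	fmt.Println(lowerBoundsIn(1, 2), lowerBoundsIn(2, 4), lowerBoundsIn(4, 8)) // 8 8 8
	fmt.Println(lowerBoundsIn(2, 3)) // 5: the buckets widen as values grow
}
```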
B
Okay. Then there is a little bit of preparation going on here. The thing is that double values, as they are defined by the IEEE, have a lot of subnormal values, which are doubles you can represent in the bits, in how a double is represented in bits, but that are smaller than the smallest normal double you can represent with the given bits.
B
So the idea is that there would be quite a few buckets between 0 and sort of 0.000-something. There are a lot of buckets that we can't actually map values to, because those bucket boundaries, just from a mathematical standpoint, are so small that a double cannot represent a value that would be in that bucket. So, even though, strictly mathematically speaking, there would be values that would go into these buckets...
B
There is no actual way to represent them as doubles, and so we don't really need to have these buckets, since they're just a waste of space. That's basically what we do here in the calculation for the exponential layout, or this getIndexOffset; I'm going to leave that out for now. If we have time later and there is interest in why or how that works, we can get into that a little bit later.
B
But the first thing that we do is we calculate boundaries, and basically what this does is it just calculates, as I showed you before, 2 to the power of (i plus one, divided by the length), in this case eight. In the example there is also a little bit of a representation of how that works: for a precision of three, these would be the exponential buckets the mantissa would map to, right?
B
Okay, so that's basically what we do here, and then we just create the lookup table. So we just look at the boundaries, at the lowest boundary that is smaller than or equal to the equidistant bucket bound. Sorry, we're up here: so this is the boundaries, the exponential boundaries, and the next step is that we calculate the indices from the boundaries array into the lookup table.
B
This is the step where we create the lookup table, and basically this is just one divided by the length of the array, so that calculates the equidistant bucket bounds, and then we find the lowest boundary that is smaller than or equal to the equidistant bucket bound. So this is creating the lookup table that we saw earlier, and then we get this exponential layout back. As I said, this is basically just an offset that we will maybe look at later if we have time. Okay.
B
So this is just a binary AND, and in this case it just pulls out the mantissa bits, and then we pull out the exponent bits. In a double, the last 52 bits are the mantissa, then there are 11 bits of exponent, and then there's one sign bit. So basically, what we do is we again AND together this bit mask to get the exponent, and then shift it over by 52 bits, since that's where the mantissa was. Sorry.
B
So it depends. It really is just a question of representation. For normal doubles, like not subnormal doubles, you have at least one bit set in the exponent; and if you have no bit set in the exponent, so if all bits are zero, then you're talking about subnormal doubles, and things work a little bit differently. There is a special rule that if all the bits in the exponent are set to zero, then the mantissa part is actually...
B
It is actually interpreted differently than if you have at least one bit set in the exponent. So usually you have the mantissa part, which is, again, this is like we're going deep into the bowels of how doubles work. The mantissa part is basically everything after the decimal point, and you just add one to it, and then you apply the exponent.
B
So that's how a double is defined, so that you don't have to store the 'one point' in every double, because it's the same every time anyway, and you can save a bit that way. And for subnormal doubles, you don't actually have one plus the mantissa part, but just the mantissa part, and then you apply the exponent. So those values are all between zero and one.
C
Sounds right. It was just this one oddity of 'if mantissa is less than first-normal-value bits, return mantissa'. I'm scratching my head on that one, but I actually don't think it matters; when I read through this, I'm just going to ignore subnormal values. Yeah.
B
Yes, I would have to get back to you on why that is, but.
B
Let it be said that this part basically moves a subnormal double to a normal double representation, at least the mantissa part. And then what we do is basically we look at the lookup table, and that is the part that I was talking about earlier: that is the division that lets us map our mantissa into the lookup table.
B
So we get the index from the lookup table, and then we look at the lookup table at that position, plus the lookup table at the next position, plus the lookup table at the position after that. So that's what we do here; basically, this is easily represented by the ternary operator in Java, for example.
B
It's just the fact that Go doesn't have a ternary operator, so you have to put it like that, but I think the compiler would probably optimize that away anyway. And finally, we actually shift it over by the exponent; that's the part that I showed before with the yellow thing.
F
So, quick question, thank you. How do you deal with non-standard floating-point implementations, if you have to deal with that? Or are you just relying on, like, do you have an implementation of this in a language that has something terrible when it comes to this, like C++, right?
F
Universal, I guess my question more is: how much of IEEE are you relying on? Because there's IEEE doubles, and there's the in-practice IEEE double, where there are a few things in the spec that aren't abided by, and I don't know, looking at this, what you're relying on, like if you're relying on any of those aspects that kind of aren't in x86. That would be my primary concern.
C
Kind of grinding on, and the place where this discussion has landed is really about whether we need to know what's the smallest representable value, because the Prometheus histogram proposal has that information. So I think we've run out of time on this one. I really appreciate the explainer and running through that.
C
I actually didn't understand it until you gave me the explanation today, from trying to read that code. I think we should move on in the meeting to talk about the other issues, because this issue, that is 1776, is just full of, you know, the experts who have been debating this forever, and I'm not sure that having it in this room is going to help, and I'm also not sure that we can finish it without...
C
I need to prod that thread and ask a few more questions, but I believe we are going to land on this base-2 exponential histogram. It's really just a question of how happy everyone is going to be that they didn't get their choice. So I will propose to continue this discussion in that thread and let Riley have this meeting back.
C
Oh, you know what, actually, before we go: I was reading the comment threads and there does seem to be a question of...
C
Does this give us the histogram we've all dreamed of? Which, I think, everyone dreams of a different histogram. But the question came down to: is the user going to have to configure precision, or is the user going to have to configure the range of the inputs? And this proposal, the one that got presented...
C
...is that the user is going to have to configure precision. You tell it a precision up front, and that tells you how much memory you're going to use, and then, after that, it will map buckets; and if numbers are extremely small or extremely large, they will fall out of range, because the index that you need to represent them will be too big for an integer that's 64 bits.
C
You end up with these indexes that are too big. So I believe there's probably another histogram algorithm that uses the same representation that would automatically adjust precision in order to cope with the present range of data, and I think there is another faction here that's interested in that. There are also existing algorithms.
C
If all you want to say is 'I don't have enough space, just give me a histogram as best as you can do', that was sort of the other prototype I was interested in seeing. I'm not sure it's going to happen, but I'm going to keep asking that question, and hopefully we can wind this down. I like this algorithm; it's simple enough to understand. What it doesn't give us is that idea that you can just automatically choose precision based on available space, rather than choosing a precision that gives you a space.
F
Go ahead. Oh, I wanted to ask about the actual protocol for sending this. It looks like you end up with a pretty sparse vector, especially since the first, what, 300-some buckets are probably not used if my value starts at, like, half or one, right? So if we're planning to use milliseconds as our default value, those first 300 buckets, when you send this over the wire, are you doing any kind of sparse vectors?
F
Are you doing any kind of compression to change how that gets sent? Yeah, that's part and parcel for me here.
B
So I didn't go into that; maybe I hinted at it a little bit at the beginning. There is a data part to this as well, which is also in DynaHist, which basically doesn't allocate the whole vector; it just allocates the little part of that vector that is used, and then it also has optimizations like...
B
If the bucket only has one count, if the count is one, then it just uses one bit for the count, and if the count goes up, it uses two bits for the count, and so on and so forth.
So
there
is
an
implementation
of
that
that
I
didn't
go
into
here,
because
I
focused
on
the
how
indexes
are
mapped
to
to
the
to
the
thing.
B
But there is an implementation for that, and that should also reduce the size of the histogram, and then it sort of cuts it off at a certain point that you don't use anymore.
C
And this is also what HdrHistogram has, which is sort of like this, and then, for me, the Prometheus prototype as well has something like what you might call delta encoding, where you're going to encode a negative value to be a run of zeros, and so on. So those have all been discussed, and they sort of don't fit very well with a protobuf-based protocol, where you just want to say 'here's my structured data' and put it in the protocol where you've got this repeated array of indexes.
C
A repeated array of integers that you have to decode. And I feel like these are the questions that are still remaining, which is: okay, so we have a bucketing scheme and we know what the math is, but now we have to encode a bunch of buckets that are probably sparse, but maybe not, in a way that everybody's going to be happy with. And I still think, at this point...
C
We should take it to the thread, because we don't have Prometheus in this room, and Prometheus is very important to us, and they've been kind of talking about some of the issues that matter, because Prometheus is key here; therefore we have problems, and you should use the thread rather than ask us here, where the entire group is not present. Does that seem like a good place to leave off?
C
I wish I knew how to make this go away, or at least be decided, so we could move on, but I also think we're trying to be an open community and hear all the voices, so I'm not sure we've heard them all yet. Anyway, I recommend that we all follow 1776, and there's at least another week of discussion there, as we talk about the things you just discussed: do we want sparse encodings? Do we want, like, delta encodings?
C
Do we want, and what is the zero value? To leave it there, I'm going to take it to the thread, where Bjorn and Gill are also talking about it with us.
B
So I'm going to change the slide, so we can quickly go through the PR, and I'll explain the high-level concept; we're not going through all the details. And I want everyone here, when we encounter a particular section or a particular comment, to be explicit and let us know whether you think this is a blocking issue we must solve in this PR, or just something nice to have that we can address later.
B
I'll try to hyper-focus on addressing the blocking issues, so we can get this PR merged; then it will set the skeleton, so we can move forward on the other parts. So, at a high level, this PR is trying to address the pipeline problem. If you look at the SDK, you have multiple meter providers, but each meter provider is quite isolated: they don't necessarily know about the other meter providers. And if you look at one individual meter provider...
B
They could have multiple instruments and different meters, and the user of that SDK can configure multiple pipelines, and the key here is: each pipeline is logically isolated from the others. They don't necessarily know about the existence of the others, although the actual implementation can do optimizations; for example, if you're smart enough to figure out those pipelines are almost the same, you don't have to repeat the work, then you can combine something. But from the user's perspective they shouldn't notice the difference, and for each pipeline this is the flow: you start with the measurements.
B
This is very similar to the span processor concept, and after that you will have one exporter that sends into another process, and in this model the exporter can be either a push or a pull exporter; from the SDK perspective, there's no real difference. The exporters are just taking the data and sending it to a different process. It could be a scraper, acting as an HTTP response, or it could be an HTTP request trying to post the data, or something else, and the only difference between push and pull exporters is the trigger.
B
So a push exporter is triggered by some signal from the process, like periodically based on a timer, or based on memory pressure, or based on any other signal, like the application exiting. The pull exporter is reacting to the scraper: something like the Prometheus agent will access the exporter's endpoint to take the data. And the major focus of this PR is to try to draw this high-level picture of multiple pipelines and how each component would work. So the view part is how the user specifies which measurement should go to which pipeline.
F
So the actual original values are completely gone and you get a count of the number of recordings that happened, so the unit completely changes.
F
In that case, yeah. So it depends; right now, with the way the SDK is specified, there's not a lot of flexibility with aggregators. But again, we have three different efforts that we're doing that I think are kind of correlated, and the answer to this question of 'should we be able to change units' might be answered by 'do we allow custom aggregators', right? And if the answer to one is no, then the answer to this can be okay.
F
We don't have to specify units right now, because you can't make a custom aggregator. That's fine, that's fine, but just as an FYI: previously, Java had this notion of a count aggregator that would just count measurements, and that has to change the unit implicitly, basically to a unitless thing.
B
Okay, so yeah, I think what we should do is allow that in the view, probably as another parameter. I want to see if there's any objection; for example, people might think, hey, it would be great if the aggregator could give that actual information, so the user doesn't have to specify it. So, exactly: the user specifies 'I want to use, like, Riley's aggregator'.
B
My answer would probably be no. We want to give the user that flexibility, and later, if the aggregator can be smart and can automate that, then it will save the user some time. But ultimately I want this to be similar to the description: you can also argue that the aggregator can give a kind of description, so it can be automated, but ultimately I want them to be consistent.
B
...a parameter to allow people to change the unit, and later, if the aggregator or anything in the SDK can be smart enough, we can see: if the user doesn't specify the unit and the aggregator realizes, oh, it has to change the unit, then, nice, you can do something smart for the user. But if the user explicitly specifies it, that still takes priority.
G
But that's a backwards-incompatible change, so I think you should try to limit the capabilities and add more capabilities later, rather than adding all these things now and having to deal with a complex implementation and maybe possible deprecation of fields and stuff. So, my two cents on this: I don't think the unit should be there, and we should...
D
To also add to that particular question: there is a question out there that says, you know, for a given set of instruments, would we have the potential to configure more than one aggregator? And I think per view we can probably only pick one aggregator, so that's probably less of a concern, but I would think the aggregator, to Josh's point, is probably the one that would know best about it.
F
I just want to call out that I asked if units need to change. I'm okay with Bogdan's suggestion that we say, you know what, that count aggregator is too much to expose in the initial version and we're not going to allow it, and that allows us to table this discussion. And going forward, like Victor's saying, maybe the aggregator can provide a unit; maybe that's allowed, right? But again, the decision here is tied to that decision on aggregators.
F
If
we
agree
that
we're
going
to
limit
the
set
of
aggregators
to
a
hard-coded
list
of
here's,
the
things
we
provide
in
v1,
we
can
finish
the
specification
later.
That's
fine!
That's
that's
totally
fine
by
me,
but
if
we
don't
limit
that
on
the
other
side,
then
I
think
we
need
an
answer
to
this
question.
F
So
I
would
propose
that
we
actually
just
don't
allow
custom
aggregators
in
v1,
I'm
a
little
bit
nervous
about
that,
but
I
think
it's
okay
for
us
to
make
progress
on
the
spec.
F
Sorry, our first draft of the specification. I think we can add custom aggregators as a non-breaking change in, like, v1.2 or something, if you will. But my proposal would be, just because we're fighting over this right now: let's agree to both of those decisions. So we're not going to allow custom aggregators initially in the spec; people can prototype them, that's fine, but they will be unspecified, so that we can add specification later, and let's take that entire discussion offline.
E
Josh, just a question here: when you say custom aggregators, do you mean an aggregator that is entirely coded by the user, or one that in some way uses an aggregator or an instrument which is not the default one? Because, if I haven't gotten confused, if we allow a count aggregator right now, the scenario that we mentioned may happen even if we don't have custom ones.
F
G
C
That makes sense to me. I don't think of count as having a unit. It produces a number that is unitless, whereas the values still have units, and we are, you know, not looking at them when we count. But when I originally put a thumbs-up on this question of Josh's, I was thinking, yeah, the measurements are in milliseconds and I'd like to put them in microseconds or something like that. No big deal, multiply by a thousand.
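The point that count is unitless can be made concrete with a small sketch. The `Aggregator` interface and `CountAggregator` class below are hypothetical shapes for illustration, not the opentelemetry-java API:

```java
// Hypothetical aggregator shape, for illustration only (not the real SDK API).
// A count aggregator discards the measured value entirely, so the instrument's
// unit (milliseconds, bytes, whatever) has no meaning for its output.
interface Aggregator<T> {
    void record(double value);
    T collect(); // drain accumulated state into an exportable point
}

class CountAggregator implements Aggregator<Long> {
    private long count = 0;

    @Override
    public void record(double value) {
        count++; // the value itself, and therefore its unit, is ignored
    }

    @Override
    public Long collect() {
        long c = count;
        count = 0; // reset for the next collection cycle
        return c;
    }
}
```

By contrast, a sum or histogram aggregator folds the value into its state, so a unit conversion (say, multiplying milliseconds by a thousand to get microseconds) would have to happen before or inside `record`.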
B
And we might have a situation where the instrumentation is saying this is milliseconds and the customer is saying their system wants the unit written differently, instead of the full name. I'm not sure whether we should allow them to change that in the SDK, or whether we just tell them to do that later in a collector.
B
But the conclusion here, I think, is that we treat this as a non-blocking comment. I'll probably create an issue to track this, and we can move forward with this PR.
B
Okay, so moving forward.
F
So, the link to this... just to clarify: we have a prototype in Java, and to some extent that's what I'm doing on this PR. The reason I haven't approved it is that I want to make sure it's implementable, and I want to make sure we can implement it in Java. I'm currently trying to stick as close as possible to the way Java was previously implemented, and I'm running into issues. So in this case, this attribute key usage, right?
F
We had this thing called a labels processor in Java, and so the way we've exposed this is there's an interface called an attributes processor, okay, to meet the specification.
F
What we've done is we've created a kind of combinator, a limited DSL, for attribute processors. So the user won't literally say "here are key-value pairs"; they will actually instantiate an attributes processor, where the only public methods they have are "I want these key-value pairs" or "I want to add this baggage", right? And I just want to make sure that that is acceptable in terms of how the specification works.
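A minimal sketch of what such a combinator-style DSL could look like, assuming a simplified Map-based attribute model. The names here (`AttributesProcessor`, `filterByKey`, `then`) are illustrative, not the actual opentelemetry-java interface:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Illustrative combinator-style attributes processor: rather than handing the
// SDK a raw list of key strings, the user composes a processor from a small
// set of factory methods. Hypothetical names, not the opentelemetry-java API.
interface AttributesProcessor {
    Map<String, String> process(Map<String, String> incoming);

    // Keep only the attributes whose key satisfies the predicate.
    static AttributesProcessor filterByKey(Predicate<String> keep) {
        return incoming -> {
            Map<String, String> out = new LinkedHashMap<>();
            incoming.forEach((k, v) -> {
                if (keep.test(k)) {
                    out.put(k, v);
                }
            });
            return out;
        };
    }

    // Chain: apply this processor, then hand the result to the next one.
    default AttributesProcessor then(AttributesProcessor next) {
        return incoming -> next.process(this.process(incoming));
    }
}
```

Because `filterByKey` takes a predicate, an implementation is free to back it with, say, a pre-compiled regex, rather than allocating and comparing lists of key strings on every measurement.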
F
F
Okay, specifically, we're using regexes, and we're trying to avoid allocating lists of strings and stuff.
B
Okay, so I think for this one I'm going to have to follow up with you and come up with some specific wording in the spec to give that room. I think here, like with metrics, we talk a lot about the performance angle, and in general we have the spirit that if there's something where we can leave room for the SDK to be more performant, we'd be happy to do so.
F
They remain 2x faster even if you support those extra dimensions from baggage. And we don't have an SDK-level hook to understand when someone's touching baggage, so effectively, any time you touch baggage there's an inherent slowdown with bound attributes that you're going to face. So I just needed a way to expose it, and that's the way that I did it. But it means that we're not exactly the way the specification looks. I just want to make sure that that deviation is okay.
D
F
You could argue that the attributes processor is a measurement processor, but I have a major issue with the measurement processor as spec'd right now. I think the way it turned out in our prototype is really impractical, and I can dive into that, but I think I have another comment on that later.
D
B
However, I want to call out one thing: whatever default histogram buckets we specify, the ones picked by the SDK imply the unit, and I think Josh noticed that I intentionally tried to change the Prometheus buckets by making them milliseconds. That's because when I was going through the metrics semantic conventions, which are still at an early stage, I noticed everyone is using milliseconds.
B
So I kind of struggled over which one I should take, and I decided that I want to go with milliseconds, because of the precision issue I mentioned here.
F
So my initial comment, though, is that by definition we always start at minus infinity in our buckets. That was the only change I wanted: the first bucket is always minus infinity to the first boundary, that's it. So in terms of picking milliseconds, I thought we discussed this last week and agreed that that was going to be the default, so the histogram proposal will be aligned with what you wrote. It's just that the first bucket needs to be declared, that's all.
B
And we think we'll stick with this: 5.0, 10.0, or 0.005?
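The point about the implicitly declared first bucket can be sketched as follows. The boundary values below are illustrative milliseconds, not the final default list, and the lower-inclusive comparison is a simplification (the spec may define bucket inclusivity differently):

```java
// Sketch of explicit-bucket lookup where bucket 0 is implicitly
// (-infinity, boundaries[0]) and the last index is the +infinity overflow
// bucket. Boundary values and inclusivity are illustrative, not spec defaults.
public final class BucketSketch {
    static int bucketIndex(double[] boundaries, double value) {
        int i = 0;
        // Advance past every boundary the value has reached; if the value is
        // below boundaries[0], i stays 0: the implicit (-inf, b0) bucket.
        while (i < boundaries.length && value >= boundaries[i]) {
            i++;
        }
        return i; // i == boundaries.length means the +inf overflow bucket
    }
}
```

With this shape, users only ever declare the finite boundaries; the minus-infinity lower bound and plus-infinity upper bound fall out of the lookup rather than being configured.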
G
Can we apply the same rule: if in doubt, leave it out? I think we can say that there is no default and the user must specify the aggregation and their buckets every time, and we may provide some helpers in a third-party library, or anything like that, that say: here is a default aggregator configured with these buckets and such.
B
We talked about this a lot in the previous meetings, and the conclusion was that we don't want that, because that way, if the user has some library that uses a histogram and there's no default, then what should we do? We would fail the user, right? Whether it's an internal error log or something. What do you mean, you will fail the user? Like, you're the user, you use a library, and the library has a dependency on yet another library, and they use a histogram, and there's no actual information about the default in this histogram package.
B
So we don't expose that histogram, so the user would get no data from that instrument, right? Correct, yeah. And this is what a lot of people are against. They want to give the user something, so the user will notice: oh, this is the default, and if they're unhappy with the default, they can change it. Then the default can be just some code.
F
The user gets something. The default that we picked was: we said Prometheus provides a default that people are used to, accustomed to, and that works for that community. Let's stick with a similar default but update it for our semantic conventions: Prometheus records latency in seconds; let's do it in milliseconds. We had talked about this last week, and I don't think that consensus has changed, but I really want to hear from, say, Josh, who made the comment. Like you asked, if this is what we're doing, do you have concerns going that direction?
F
C
B
F
G
Yeah, Josh, so, can we chat maybe one day about the Java implementation? I saw your prototype and I have some questions.