From YouTube: 2022-12-21 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
C
We'll share the screen here; we'll assume that you added your topic. I think the other stuff won't take too long, so we should have plenty of time for that.
C
First item on the agenda: I put here, cancel next week's meeting. It's between Christmas and New Year's; I don't think very many people are going to be working. Is anyone opposed to that?
C
Okay, so next week's meeting should be considered canceled. The second item here is maybe a little bit more controversial.
C
Some other SIGs have already done this. I know Java, Python, and I believe Go at least (there are probably more) now require one approval for PRs. This came up in the maintainers meeting this week: Java was reducing theirs from two to one. They've had velocity issues, and I don't know how anyone else feels here, but I think we have seen a similar slowdown since tracing and metrics are mostly completed.
C
I think reducing to a single required approval is probably a good idea for us as well. I've already talked to the other maintainers about it and nobody there objected. For large and impactful PRs we'll still expect to see more reviews before we merge them, but it should make it easier to get in small things and bug fixes that get reviewed by one approver or one maintainer and then just sit for a while waiting on another review when they're uncontroversial.
C
Okay, we're going to take silence as approval. It looks like Amir is giving a thumbs up, so I guess we can move on to the next topic. This PR was brought up last week: the resources PR, with some synchronous and some asynchronous attributes. This PR is in need of reviews. Changes have been requested by multiple people, so it is maybe a little bit more controversial than a typical PR. There were 10 commits yesterday.
C
That must have been a rebase, but from what I understand the changes that were requested are not fundamental, so it should be resolved fairly quickly. So please give it some reviews so we can merge it. That's the most important step in making the SDK launch synchronously, which solves a lot of problems in a lot of different areas. It should make it significantly easier to get started with JS.
C
I,
don't
have
any
update
on
the
release
automation
this
week,
I
worked
more
on
the
clock
drift
PR,
which
I
should
finish
this
week,
I'm,
including
the
fetch
instrumentation
changes
in
that
PR,
because
the
updates
to
the
span
interface
broke
or
not
interface,
but
Behavior
broke
the
fetch
instrumentation,
but
the
fetch
instrumentation
was
already
using
kind
of
a
weird
flow
where
they,
where
it
provides
a
time
stamp
for
the
end,
but
not
the
beginning
of
a
Spam,
so
I'll
be
doing.
C
The
fix
in
the
same
PR
should
be.
C
We merged this one PR, which is significantly smaller than the API and SDK refactor that he did previously. I think he decided to split his change into smaller PRs so that they'd be less controversial and easier to review, which seems to have worked, at least in this case. So that's merged and in. The next step is splitting the events API out of the logger API, which is something the specification has already done; it should be fairly uncontroversial, but it is in need of reviews.
E
Either way, I think maybe we can look at Evanette's issue. It sounds like it might be quicker.
B
So yeah, this has been taking a long time. You have looked into it, and Amir has been looking into it, but we are not able to merge this one. It's already approved, but there are issues when merging, so I'm just wondering if you can take a look at it again. The last thing I did was clone from my repository again, go to my branch, and just run those compile commands.
B
It was failing, and I found that if I compile the web auto-instrumentations first and then redo the compilation, it works successfully. But I need to do the compilation for the auto-instrumentations web first before it works. I don't know if that indicates something for you. So that's right, yeah.
C
I tried to reproduce this like a month ago and was not able to, but I did not look too deeply into it. I will add it to my to-do list for this afternoon. I think it's 1238 in contrib. This is something we're not going to solve on the call, so I will spend a little bit more time this afternoon trying to reproduce it. All I really did before was verify that it was working for me locally, which it was, and obviously it's still failing in CI.
B
I'm in the Pacific time zone.
E
Can everybody see the screen? Yep, all right, yeah. So this is a pretty big PR and there's a lot of background, so I figured I would just give a brief walkthrough, at least as a starting point on how to start digging into this, and then we can figure out the best way to try to integrate it.
E
This is all based on the IEEE 754 floating-point standard. Basically, if you look at a floating-point number, there's one bit for a sign, there are 11 bits for an exponent, and there are 52 bits for the significand, or the mantissa. This is how things are stored in memory. In general, with this exponential histogram we're interested in the exponent, and you use the exponent with some sort of scaling factor as a way to bucket numbers.
E
I think that's the whole underlying premise behind all this. The spec describes some mapping functions that you need to implement, and being able to implement them means being able to extract out parts of the internal representation of a float. That was the first thing that I had to figure out how to do.
E
To do that I used a DataView: you can get a DataView, set a float value, and then get access to the raw bits. Should I enlarge?
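A minimal sketch of the DataView trick just described, assuming illustrative helper names (not necessarily the PR's); the histogram mostly needs the exponent, which lives in the top of the high 32-bit word:

```ts
// Sketch only: reading a double's raw IEEE 754 bits with a DataView.
const view = new DataView(new ArrayBuffer(8));

function getRawExponent(value: number): number {
  view.setFloat64(0, value);            // big-endian by default
  const hi = view.getUint32(0);         // sign (1) + exponent (11) + top 20 significand bits
  return ((hi >>> 20) & 0x7ff) - 1023;  // isolate the 11 exponent bits, remove the bias
}

getRawExponent(8);   // 3  (8 = 1.0 * 2^3)
getRawExponent(0.5); // -1
```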
E
The bitwise operators will truncate your number to 32 bits, which was the first surprise. The second surprise is that you can't do negative shifts. But anyway, there are a few places, you can see here, when I go to combine the...
E
What is it, 20 bits? To combine the upper 20 bits of the significand with the lower 32 bits, I have to multiply by 2 to the 32. I wanted to bit shift, but I could not do that. So yeah, at first I got really freaked out by the native operators, and I was using my own kind of custom right shifts and left shifts. Towards the end I removed those, because there are only two spots, I think, where we need to deal with something larger than 32 bits.
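For anyone unfamiliar with the pitfalls being described, a quick demonstration of why the multiply is needed: JavaScript's bitwise operators coerce their operands to 32-bit integers, and shift counts wrap modulo 32.

```ts
// The surprises described above, demonstrated:
2 ** 40 | 0;   // 0           -- bitwise ops truncate to 32 bits
1 << 32;       // 1           -- shift counts are taken modulo 32
1 << -1;       // -2147483648 -- "negative shifts" wrap to 31 rather than erroring

// So the upper 20 significand bits are combined with the lower 32 by
// multiplying instead of shifting:
const hi20 = 0xfffff;
const lo32 = 0xffffffff;
const significand = hi20 * 2 ** 32 + lo32; // 2^52 - 1, the largest 52-bit value
```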
E
There will be a lot of bit shifting that happens, but it happens on the indexes, and they're all 32-bit. So these are the two functions that the rest of the mapping depends on, and briefly we'll look at the mapping. There are two different mappings, an exponent mapping and a logarithm mapping, and then there's a notion of scale.
E
The exponent mapping handles negative scales, so that's negative 10 to 0, and the logarithm mapping handles 0 to 20, and that gives you a huge range. I was checking out negative 10: you can fit the entire 64-bit floating-point space into two buckets at scale negative 10.
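A hedged sketch of the two mappings as the spec defines them, simplified (the real implementations also correct for exact powers of two and handle subnormals; names here are illustrative):

```ts
function logarithmMapToIndex(value: number, scale: number): number {
  // scale > 0: 2^scale buckets per power of two, computed via the log
  const scaleFactor = 2 ** scale / Math.LN2; // log2(value) * 2^scale
  return Math.ceil(Math.log(value) * scaleFactor) - 1;
}

function exponentMapToIndex(value: number, scale: number): number {
  // scale <= 0: buckets span whole powers of two, so the raw IEEE 754
  // exponent (see the DataView sketch above) is all that's needed
  return getRawExponent(value) >> -scale;
}

// At scale -10, the full exponent range [-1022, 1023] >> 10 collapses to
// just two indexes, {-1, 0}: the two-bucket case mentioned above.
```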
E
Yeah, and I mean with a really negative scale... I think there's a happy medium where you are somewhere in the middle.
E
Yeah, that will give you the best of all worlds. So the key functions in all these mappings are in these two mapping classes. It's mapToIndex, and this is where you take a number and you want to figure out the index. Ultimately, what we end up doing is we store our counts in a backing array, where the index of the backing array corresponds to the bucket.
A
So, so...
E
And then I was thinking, we have the accumulation itself, and I was just going to walk through how we record a number and touch on a bunch of points there, and then see if there are any questions or anything we should explore a little bit more. But yeah, I think that's a good idea.
E
We do start off with the logarithm mapping at the max scale, so I guess we're going for the highest resolution, and the histogram will automatically rescale as you start adding values, to figure out the best scale for your max bucket size. And we start off with a max bucket size of 160.
E
That's the default; it can be changed when you set up your aggregation. And then there's a minimum max size of two, and that is because we can fit all the floats into two buckets at scale negative 10. But yeah, all the magic happens in the record function, which quickly delegates to updateBuckets.
E
I guess, as we're heading in that direction, maybe we'll quickly look at the point value, because this is ultimately what we will get back out of this thing. Like other histograms, there's min, max, sum, count.
E
The exponential histogram can handle positive and negative numbers, although only positives are being used currently, because this is enforced at the histogram instrument level. You can back your instrument with an exponential histogram or a regular histogram, but the spec and the protos have the positive and negative buckets, and the reference implementation has it. So this is something that's implemented here. It will potentially be unlocked with some future spec work, and then the change would just have to happen at the instrument level.
F
Yeah, in JS here you can actually apply the aggregation to a gauge or up-down counter with a view, and then it will be used, because up-down counters and gauges allow negative values as well. So the backing aggregation will then use that code that you implemented.
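That might look roughly like the following. The aggregation class name and its constructor argument are assumptions based on this discussion, not a confirmed API:

```ts
import {
  MeterProvider,
  View,
  ExponentialHistogramAggregation, // assumed export name from the PR
} from '@opentelemetry/sdk-metrics';

const provider = new MeterProvider({
  views: [
    new View({
      instrumentName: 'queue.size', // hypothetical instrument
      aggregation: new ExponentialHistogramAggregation(160), // default max bucket size
    }),
  ],
});

const meter = provider.getMeter('example');
const queueSize = meter.createUpDownCounter('queue.size');
queueSize.add(5);
queueSize.add(-3); // negative values land in the negative buckets
```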
F
It's a rare case, yeah. When I was working on that I was also not familiar with that kind of reasoning, but the spec permits it. So it's in JS, and I think in Python it may work too; not sure, but yeah.
E
Cool, excellent. I'm glad that it is actually being used. Even when I didn't know that it was being used, the reason I kept it this way is that it really does influence the design of the code. You end up wanting to pass around a buckets argument to a lot of methods, because then it can be positive or negative, and if you didn't have that separation you might embed that a little bit more. But yeah.
E
This is ultimately the data that we're tracking and what we're going to export. For the buckets, the key thing is the offset. The offset is a number that you add to the index in order to actually be able to compute the range for the buckets at your exported scale, and then you have the actual counts that you apply that to. So yeah, backing up: we were going to look at record, but then the point value caught my attention.
E
This will handle setting your min and max and incrementing the count. There is a zero bucket, so if a number is explicitly zero we just count it as a zero count; it doesn't actually get a bucket in the backing array.
E
We handle the sum, and then we call updateBuckets, and this is where things start to get somewhat interesting. Right off, we compute the index for this value at this scale, and then we go through and see whether the backing array can already accommodate it or not. If it can't, if it's outside of the range, we're going to have to rescale: we're going to downscale the histogram. So we manage some indexes in the buckets and then figure out if we need to downscale. If we do, we figure out what that scale change needs to be, downscale, recompute the index, and then increment the value in the bucket for this index.
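As a reading aid, here is a drastically simplified, self-contained sketch of that record-then-updateBuckets flow: positive values only, logarithm mapping only, and a plain array instead of the circular backing described later. All names are illustrative, not the PR's actual code.

```ts
class MiniExpHistogram {
  private scale = 20;            // start at max scale: highest resolution
  private counts: number[] = []; // backing array of bucket counts
  private indexStart = 0;        // bucket index of counts[0] at the current scale
  count = 0;
  sum = 0;
  zeroCount = 0;

  constructor(private readonly maxSize = 160) {}

  private mapToIndex(value: number): number {
    const scaleFactor = 2 ** this.scale / Math.LN2;
    return Math.ceil(Math.log(value) * scaleFactor) - 1;
  }

  record(value: number): void {
    this.count++;
    this.sum += value;
    if (value === 0) { this.zeroCount++; return; } // zeros never get a bucket

    let index = this.mapToIndex(value);
    if (this.counts.length === 0) this.indexStart = index;

    // Downscale (halve the resolution) until the new index fits within maxSize.
    while (Math.max(index, this.indexStart + this.counts.length - 1) -
           Math.min(index, this.indexStart) + 1 > this.maxSize) {
      this.downscaleByOne();
      index = this.mapToIndex(value); // recompute at the new scale
    }

    // Grow the backing array as needed, then increment the bucket.
    while (index < this.indexStart) { this.counts.unshift(0); this.indexStart--; }
    while (index >= this.indexStart + this.counts.length) this.counts.push(0);
    this.counts[index - this.indexStart]++;
  }

  private downscaleByOne(): void {
    this.scale--;
    const next: number[] = [];
    const nextStart = this.indexStart >> 1; // each new bucket merges two old ones
    for (let i = 0; i < this.counts.length; i++) {
      const newIndex = (this.indexStart + i) >> 1;
      next[newIndex - nextStart] = (next[newIndex - nextStart] ?? 0) + this.counts[i];
    }
    this.counts = next;
    this.indexStart = nextStart;
  }
}
```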
E
So I guess one last thing: we've computed a scale, and we know that this index will fit into our backing array, but we don't know if our backing array is actually big enough yet; it may not have grown to the right size. So we may have to grow the backing array, but if not, we just end up incrementing the count in the bucket corresponding to this index. And this code path ends up getting reused in the merge case.
E
In the merge case, instead of calling updateBuckets, you know the other histogram that you want to merge (we can take a quick look at that), and it will handle all the scaling stuff up front that we were handling in updateBuckets.
E
So you can see it will manage the min and max and the counts, and then handle finding a common scale. And then, if you end up looking at mergeBuckets, you will find that it ends up delegating to incrementIndexBy eventually.
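The common-scale step can be sketched like this, assuming both histograms have already been brought to the same scale (the real code also reconciles differing input scales first):

```ts
// How many downscale steps are needed so the union of two histograms'
// bucket index ranges fits within maxSize. Halving the resolution maps
// bucket index i to i >> 1.
function scaleChangeForMerge(low: number, high: number, maxSize: number): number {
  let change = 0;
  while (high - low + 1 > maxSize) {
    low >>= 1;
    high >>= 1;
    change++;
  }
  return change; // subtract from the current scale before merging counts
}
```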
E
So that is a brief tour of how we compute an index for an incoming value and then find the bucket that corresponds to that index and increment the value in the bucket. I can say just a little bit about the buckets. There's a Buckets class and then there's a bucket backing, and the backing is actually the array of counts. The reason for the buckets abstraction on top of it is that it's kind of a view into that backing array.
E
It's a little complicated in that, or I guess it's optimized in that, the backing array can be circular. So basically we have these indices.
E
If not, we need to adjust the scale, and then there's this notion of an index base. The index base is the index of the thing sitting in the zeroth position of the backing array. From that you can figure out whether the array is circular, but this abstraction just hides all that from you. It ends up tracking the offset for you, but it has this at method, so you can always just iterate from zero to length minus one and feed that value to at, and it will give you the logical counts, or the logical backing array. We actually use that for the counts when we want to export them.
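The idea behind that at method, in miniature (illustrative names, assuming a fixed-size ring):

```ts
// Sketch of a circular backing: callers iterate logical positions 0..n-1
// and never see where the ring actually starts.
class CircularBacking {
  private readonly counts: number[];
  private startPos = 0; // physical position of logical index 0 (the index base)

  constructor(size: number) {
    this.counts = new Array(size).fill(0);
  }

  at(logical: number): number {
    return this.counts[(this.startPos + logical) % this.counts.length];
  }

  incrementAt(logical: number, by = 1): void {
    this.counts[(this.startPos + logical) % this.counts.length] += by;
  }
}
```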
E
If that makes sense. And then...
E
But yeah, it quickly grows as you add to it. So I think it does try to make use of your max size for purposes of high resolution.
C
Let's see... So, in the point that goes to the export pipeline, positive and negative are both included. I would have expected the data structure to have a positive or a negative, not both.
E
There's no reason why the histogram could not accommodate positive and negative numbers, the whole range, but in the case where you're only using one, at least as I'm exporting it now, you will get the negative side with, like, empty counts.
C
Yeah, actually I think we don't need your answer; I think I misinterpreted what positive and negative meant. You can have positive and negative values, which is definitely possible, like Mark said, if you use a view. So it should just be kept; it's easier, and it fits the proto anyway.
C
Let's see: the logarithmic versus exponential aggregations, or not aggregations, scales. Should we consider calling them high resolution and low resolution, or something like that? Or at the very least it should be documented that one is for positive and one is for negative scales, because that's not immediately obvious to someone who's not very familiar with the underlying algorithms.
E
Yeah. Ultimately the scale is hidden from the user completely, so this is definitely not something that is user facing, and as the histogram scales for its inputs it will pick the best scale for you. We do at least comment that the exponent mapping is for scales less than zero and the logarithm mapping is for scales greater than zero, and the spec talks about this a little bit at least.
C
Yeah, I guess nobody would be modifying this code without having read the spec, or hopefully not, anyway.
E
It's definitely a potential future extension. It is used in testing, too. Yeah, it is used in testing; I was looking at possibly removing that. However... yeah, come over here.
E
I was looking at removing it, but it made this quite useful test quite hard to...
E
It's the exhaustive merge test.
C
I think... you don't need to find the exact test; I can understand that it's used in testing.
E
There is also, in the reference implementation, a kind of callout on this method: it seems the extension would be useful in applying histogram aggregation to sampled metric events. So this is talking more from the OTel Collector side, that it might use something like this; I don't know if we would have something similar in JS.
E
But having said that, I don't know if that is a use case, or if anybody can think of a way somebody might make use of that in OTel JS.
C
Okay, I took notes for myself; I will review it. It's obviously a pretty large PR. In terms of how to split it up for review, I don't know what you think is the best way to do that. Obviously this PR is hundreds, if not thousands, of lines long, and reviewing the PR as a whole is fairly difficult.
C
Do you have an expectation of how the review process will go?
E
So yeah, ultimately I'm flexible, because I realize this is a gigantic PR, and I have sympathy for anybody who is going to try to look through it. There are a few distinct portions; I think some of it is just going to be a little bit...
E
There's going to be a bit to digest either way. But one way I can think of that might ease this just a little bit is that the mapping functions kind of stand on their own. So I could have a PR that has the exponent mapping, the logarithm mapping, and their associated tests, and that's something that could be looked at. That will also have the IEEE 754 functions there as well.
E
I think that's a good thing to look at, and a good first step to get familiar with a lot of the rest of the machinery. So I see that as one distinct thing. The next thing would be the accumulation itself, which is going to be kind of big; I don't know if there's a great way to split that up.
C
So what I'm hearing is that it's possibly three PRs. The first PR would be the floating-point number stuff, and this is probably the most mathematically intensive part.
C
Obviously, without looking more closely at it I can't really suggest parts to break it into, but maybe when we get there we can either break it up, or if it doesn't need to be, we'll see. I also want to get the opinion of legendecas, since he is, I guess you'd say, the de facto metrics maintainer, having written most of the metrics SDK. He unfortunately can't join these meetings most of the time because of his time zone, but I'll...
C
Let him know to watch the recording. But I think we can at least start with the IEEE stuff.
E
Cool, that sounds good. While we're here, there are two, maybe three things that I forgot to mention that I will mention at least briefly. If you've looked into IEEE 754, you will find that there is the standard case, but there is this whole weird case of subnormal numbers, where the exponent will be zeroed out, or it will be all ones, to kind of indicate those, whatever the case is.
E
We avoid those completely: anything that is greater than zero but smaller than the smallest normal number just gets rounded up to the smallest normal number. That's in the reference implementation, if anybody's curious about that. That was one thing.
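Concretely, the dodge amounts to clamping before mapping; a sketch, using the smallest positive normal double:

```ts
const MIN_NORMAL = 2 ** -1022; // smallest positive normal IEEE 754 double

const clampToNormal = (value: number) => Math.max(value, MIN_NORMAL);

clampToNormal(Number.MIN_VALUE); // 2^-1022 -- MIN_VALUE (2^-1074) is subnormal
clampToNormal(1.5);              // 1.5, unchanged
```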
The other thing that is possibly interesting, if you're comparing between this and the reference implementation, is that the reference implementation does not have diff, and jmacd was actually a little surprised when I was talking to him about it.
E
He was a little surprised to see that we are diffing histograms at all, but it seems to be part of the aggregator interface. I think he was saying it doesn't make a lot of sense for synchronous instruments, but in any case, rather than try to fight against it...
E
...I just said, let it diff: if it gets used, it gets used. And compared to the other instruments, since the data structure for the histogram is so involved, most of the code actually lives in the accumulation; we're doing all this work there. Looking at the other instruments, the data is pretty simple to work with.
E
Merge will kind of be implemented entirely in the accumulation, for example, just working off of the point values, because they're easy enough. But for this one it's not so easy, so I did add clone methods into the accumulation so that you can merge or diff without mutating the original one.
C
I don't have any further questions about the PR, I guess, until the review.
D
I don't know anything about Netflix, but is it safe to assume that the floating-point numbers will be represented in these formats? Yes?
E
Yeah, I think the reference implementation does use things that would end up being BigInt for us, and I just kind of stopped at that scale. I accidentally crept into that scale and everything was working test-wise, so I think it depends on your JavaScript runtime. It was VS Code that was alerting me that these numbers are larger than MAX_SAFE_INTEGER, yeah.
C
And the definition of MAX_SAFE_INTEGER is just that when you have an integer above it and you add one to it, it's not guaranteed to increment by exactly one. You can still represent higher numbers, but yeah. Our other metric instruments also have that problem. I guess I'll leave it at that; it's something that we may eventually have to tackle, the very, very large numbers.
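What that means concretely:

```ts
// Above Number.MAX_SAFE_INTEGER (2^53 - 1), not every integer has an exact
// double representation, so increments can silently round away:
const max = Number.MAX_SAFE_INTEGER; // 9007199254740991
max + 1 === max + 2;                 // true -- 2^53 + 1 rounds back down to 2^53
```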
D
In C++ it's not safe to assume these things, because you can run on different architectures and hardware. It's not guaranteed that the representation of numbers is something particular.
E
So I will separate this out into those three PRs. Should I leave this one open, the one that shows how they all work together? Because I think that might be helpful. But maybe, if I can... I don't know if I can turn this back into a draft, but if I can, I will. Would that be the right thing to do?
C
That is the right thing to do, and I definitely prefer it be kept open. If you can't re-draft it, then I definitely can; I don't know whether that's a specific permission or not. I don't think it is, though; I think you should be able to. And I would definitely leave the large PR open, because it will make it a lot easier. The first PR maybe won't matter as much, but then...
C
Thank you for the overview. It's really helpful.
E
Yeah, no problem. I appreciate people looking into it, and yeah, there's a lot of code here. So if there's any feedback, I'm open to any and all feedback.
C
Yeah, when I'm reviewing it I will probably refrain from too many stylistic or opinion review comments, because there's already so much to review there from a "does this actually work as specified" perspective. I think if we get into the weeds of function naming and things like that in all but the most egregious cases, that could make the PR review just drag on forever. So I will avoid things like that, and if other reviewers are making those types of comments I will probably try to discourage them from that, assuming that's okay with you.
E
I'm fine with improving the style, if there's something that would make people feel better. I didn't worry a whole lot about the style as I went through this; I tried to adhere generally to the conventions that I was seeing, but there might be things that I...
E
And then I definitely feel like I am making use of plain for loops a lot more than, probably, you know... somebody might find more pleasing ways to do some of this stuff, and if you want to add that as a comment, I'd be more than happy to upgrade the style.
C
Okay, well, I guess if other people make those comments I won't reject them out of hand, but yeah, I will try not to do that, just because I already see that there's a large PR and I see potential for this.
E
Cool, yeah, I'm flexible. Ultimately I want this thing to work and to be happy with the code in the end. That's the end point that I'd like to get to, right?
C
Okay. I did some bug triage on my own earlier today. If it's okay with everyone else, I think I'd skip the bug triage this time; there are not very many people here, and it's been a fairly intense meeting already.
C
Yeah, I'm seeing thumbs up and nobody's saying no. I don't have any more questions about the PR. Thanks for the work that you've done; it's a huge amount of work and I appreciate it.
F
Yeah, thank you. It's a huge chunk. It would be interesting to review that one. I've looked into the exponential histograms before, and it's definitely not easy, so I highly appreciate that.
C
I started working on this a while ago, and that definitely puts the amount of effort that I put into it to shame.
C
Okay, I guess let's call it a day, then. I will not see anyone next week. I will not be surprised if the meeting on the first week of January is short and doesn't have very many attendees either, but I will definitely be here. So anyone that comes, I will see you then.