From YouTube: 2021-09-17 meeting
A
So the latest update we got is that it will be delayed, and there's no new date, because it doesn't make sense for people to give a date.
C
It's the same at Splunk. Splunk last year had been super aggressive, even before I joined: it's going to be like September, November of next year, like December. And then we got an email, I think two weeks ago, saying effectively: we have no idea when people are going back; your guess is as good as ours, yeah.
C
At least that means people start to become realistic. Yes, agreed, yeah. I mean, it doesn't matter to Splunk so much if it's going remote anyways; before I joined, with COVID, our whole C-suite moved remote permanently. So it's...
A
Okay, so a quick update: I think with the API spec feature freeze, the SDK just got the experimental release, so now I'm trying to do some cleanup work that will touch many places. So if folks work on individual changes on the spec, we might have conflicts; so for now I'm not assigning any tasks, but I'm probably working on the API/SDK cleanup. So please expect me to send a lot of cleanup PRs.
A
These
are
already
two
and
I
probably
will
have
10
more
prs
on
that
just
to
like
just
to
follow
up
on
what
we
discussed
before
and
we
realized
those
are
small
things
we
can
do
later,
I'm
going
to
clean
all
of
them,
and-
and
once
I
figure
the
spec
is
in
a
relatively
stable
state.
So
we
I
don't
do
change
over
all
the
places.
Then
I'll
bring
a
list
of
topics,
and
we
can.
A
...have a lot of debates, and I need your help to give fast responses so we can quickly move forward, because we have the next batch of feature-freeze SDK spec work items I want us to be able to address, you know, within the week. And for the language implementations of this, maybe we can quickly go through, if folks want to talk about that.
A
So for Java, I know somebody's working on the SDK PR and is trying to experiment with the multi-reader thing, and he doesn't really like the flush, and having a way for the provider to know the registered readers. So I assume that part is running. It's mostly John, you probably know.
B
Yeah, so what you said is correct. I'll also just say that we have Ben Evans, who's one of the Java champions, is now working for Red Hat, and is starting to experiment with OpenTelemetry metrics and hooking them up to the Java Flight Recorder APIs in Java 17. He's using the metrics APIs in anger, for real, for the first time in something that isn't HTTP calls, and he and Josh are going to connect and talk about feedback, and maybe things that we can do better, especially around the observable gauges and observable counters.
B
Do you think that the work that Ben's doing with JFR will potentially inform (maybe, I'm just guessing) possibly a similar API for observables?
A
Okay, some small Zoom windows, okay. So I'm not sure if Diego is here, let me see. Okay, so an update from Python: Diego is working on the API change. He just updated the PR to align with what we have in the API spec, and I know Aaron reviewed it; I've seen a lot of comments from the Python maintainers as well. I looked at the PR from the spec perspective, and I checked everything as much as I could.
A
I
I
think
it
covered
all
the
requirements
from
the
spec,
so
I've
already
approved
that
pr
there
there
might
be
some
performance
implementation
thing
that
I
I
didn't
pay
close
attention.
I
leave
that
to
the
python
maintainers
and
I
also
see
diago
start
to
put
a
plan
for
the
isdk
just
list
out
all
the
shoot
and
the
master
thing
from
the
sdk
space.
So
it's
going
to
create
test
cases.
I
I
I
think
that
that's
something
I
also
want
to
do
in
the
spec
to
put
a
feature
matrix.
A
So
maybe
more
than
this
is
something
like.
I
can
take
some
of
your
help
and
I
I
think,
by
end
of
the
feature
phrase
we
should
be
able
to
have
a
reasonable
matrix
like
in
the
in
the
higher
level
document,
so
for
people
I
mean
that
they
can
do
the
checkbox
make
some
things
easier.
I
know
diagon
and
I
started
a
separate
discussion
about
how
we
can
make
this
automated
there's
a
separate
dr,
but
seems
like
it's
too
complicated
and
it
will
touch
almost
every
single
line
of
the
spec.
A
So
I
I
think
we
probably
will
just
give
up
for
now,
because
that's
going
to
give
out
a
lot
of
mess
so
so
for
now,
let's
keep
doing
what
we
have
done
for
the
traces.
Until
we
shift
the
matrix,
then
we
can
come
back
and
work
with
diagonal
c
or
we
can.
A
We
can
improve
on
down
that
sigil
just
released
the
alpha
3
version
yesterday
I
think
and-
and
it
seems
he
he's
planning
to
release
alpha
four
very
quickly
just
in
a
week
or
something
and
and
currently
the
thing
that
we
don't
have-
we
don't
have
view
yet.
D
Riley, hey, sorry, this is Aaron. This is Aaron. I've been reviewing the Python PR a bit. I had a question: you'll see in there that we've implemented checking for duplicate instruments with the same name, and also making sure that it matches the name requirements, the case-insensitive alphanumeric stuff.
D
Yeah, yeah. So I'm just curious: should it be... Just looking at the API spec, there is this section, let me just add it here.
D
Which makes it seem like it should be implemented in there, and I think that's where we're getting it from. I'm just not sure what's the correct interpretation here.
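The naming rule being discussed can be made concrete with a small sketch. This is a hypothetical helper, not the Python PR's actual code, and it assumes the API spec's instrument-name rule at the time: non-empty ASCII, at most 63 characters, starting with a letter, continuing with letters, digits, `_`, `.`, or `-`:

```python
import re

# Hypothetical sketch of the instrument-name rule discussed above.
# Assumption: names are non-empty ASCII, at most 63 characters, start
# with a letter, and continue with letters, digits, '_', '.', or '-'.
_INSTRUMENT_NAME_RE = re.compile(r"[a-zA-Z][a-zA-Z0-9_.\-]{0,62}\Z")

def is_valid_instrument_name(name):
    """Return True if `name` satisfies the assumed naming rule."""
    return name is not None and _INSTRUMENT_NAME_RE.match(name) is not None
```

Duplicate detection would then compare names case-insensitively (e.g. on `name.lower()`), which is the behavior Aaron describes.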
B
I'm actually a little bit wondering whether it's something that even should be in the API spec at all. I could imagine an alternate implementation of an SDK that would be able to, you know, handle this differently if it wanted to. Do we want to constrain the SDK? And if so, that should be in the SDK spec.
A
Yeah
good
good
point,
I
I
think
currently
we
have
some
restrictions,
for
example,
we're
saying
that
the
name
shouldn't
be
empty
string
or
null,
and
the
reason
here
is:
if
people
just
use
the
api,
they
don't
use
the
ick.
Do
we
want
the
dummy
implementation
in
the
api
to
let
the
user
know
they're
doing
something
wrong,
so
they
can
fix
that
or
when
they
actually
inject
the
sdk
they
start
to
realize
all
the
issues
I
would.
I
would.
B
I would argue we should not have an API-only implementation emit any sort of errors or warnings, because the user will not necessarily be in control of that usage of the metrics API if it's coming in from an externally instrumented library. There's nothing they would be able to do about it, and they would just get a bunch of noise they didn't care about, because they're not actually using metrics.
F
Yeah,
I
have
a
position
there
in
the
go
implementation.
What
we,
what
I've
done,
is
factor
out,
what's
called
a
unique
registry
for
instrument
names
as
well
as
for
meters
and
there's
a
global
instance
that
I
have,
which
is
the
defer
until
an
sdk
has
provided
implementation
of
the
sdk,
so
the
deferred
global
sdk
uses
the
unique
registry
so
that
if
the
user
registers
something
that's
duplicate
and
conflicting,
even
before
the
sdk
is
registered,
they
would
have
seen
an
error
that
they
have
the
option
to
handle.
F
So, you know, it's not like a panic or anything; it's a "you tried to register an instrument and got an error" case. And then, when the SDK is finally registered, it's going to go through and replay, essentially, all the deferred actions, and it's going to come upon another error, almost certainly, because any real SDK is going to use the same registry package as well. I think this makes the solution fairly nice: you don't force anyone to panic; they have the option to panic when they register an instrument.
F
Anyway, it happens to be that the only reason we fail is for a registry error, as far as I know, but we could also fail for, like, you know, name problems, like "give me an empty metric name" or something like that. That's another reason, and you have the option to panic, but I just return an error, and that would happen from the deferred global as well.
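The unique-registry pattern F describes could be sketched roughly as follows (in Python for consistency with the other sketches in these notes; the class and instrument representation are hypothetical, and where Go returns an error, this raises):

```python
class DuplicateInstrumentError(Exception):
    """Raised (or, in Go style, returned) on a conflicting re-registration."""

class UniqueInstrumentRegistry:
    """Hypothetical sketch of the unique-registry pattern described above.

    An identical re-registration hands back the original instrument;
    reusing a name (case-insensitively) with different properties is
    reported as a conflict the caller may choose to handle or panic on.
    """

    def __init__(self):
        self._instruments = {}  # lowercased name -> (kind, unit, instrument)

    def register(self, name, kind, unit=""):
        key = name.lower()  # duplicate detection is case-insensitive
        entry = self._instruments.get(key)
        if entry is not None:
            existing_kind, existing_unit, instrument = entry
            if (existing_kind, existing_unit) != (kind, unit):
                raise DuplicateInstrumentError(name)
            return instrument  # same definition: same instrument object
        instrument = {"name": name, "kind": kind, "unit": unit}
        self._instruments[key] = (kind, unit, instrument)
        return instrument
```

A deferred global provider would hold one of these, and an SDK installed later would replay registrations through the same registry, reproducing the same errors.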
F
Yes, and I feel like, going back to ancient times in OTel, there's a draft OTEP I wrote that never got approved by anybody. We determined it was essentially a Go-specific issue, but I would write a side comment or a side note saying: if you really care about the performance of your instruments, or zero memory, then register a no-op handler for your SDK, and that should flush out most of this.
F
So the point being discussed, I believe, is that there's a slight performance advantage to completing the registration. And then say there's a library that, you know, creates a bunch of instruments thinking, you know, probably there's going to be an SDK installed, so "I'll just register my instruments on the global", but the global never gets instantiated.
B
I guess the real question... I don't think we're going to resolve it here, but I think this is a philosophical question that feels like it has been litigated in OpenTelemetry at least 25 times in the days I've been working on it, and I still don't think we have a standard answer that is the same across all languages at this point.
F
Yeah
that
may
be,
I
think
I
agree
with
you.
I'm
gonna,
I'm
gonna
show
you
my
oh
chap,
because
it's
like
two
years
old
right
now
a
reminder
of
how
long
I've
been
doing
this,
but
but
it
was
essentially
closed,
saying
this
is
specific
to
go.
B
To answer this specific question about Java, though: we have taken the approach that our API will be truly no-op, except for context propagation, because that is actually required as a part of the API specification. And that means we also will not emit errors or warnings out of the API; those will only happen if the SDK is installed.
B
So that's... that is the approach that we have taken so far. Whether it is correct or not is an exercise for the reader, I guess. We haven't had anyone complain about that behavior so far, at this point.
B
The way that we generally approach this is what John likes to call the "do no harm" philosophy: you should never hurt somebody's app because instrumentation is poorly written. The instrumentation might not work, but you shouldn't hurt their app. So we do have a throttling logger that can log warnings, but we're never going to crash the app at run time.
B
We may crash it at configuration time, if nonsensical configuration is provided, but once things are up and running and the SDK is configured, we will never crash the app, and we will limit logging to make sure there aren't potential disk-space issues or whatever that we don't want to cause. Our prime directive is to do no harm to the application, so we won't crash the app or throw errors or exceptions or anything like that.

B
If it's misused, we may or may not log warnings, a limited number of warnings, depending on what the class of error is, and we'll do our best to fall back to intelligent defaults. If they provide bad input data, we'll try to use some sort of default good data if we possibly can; otherwise we'll ignore the request.
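The throttling logger B mentions can be sketched like this. It's a hypothetical illustration (in Python, while the Java SDK's actual class differs) of how warnings get capped per time window so misbehaving instrumentation can't flood the logs or the disk:

```python
import logging
import time

class ThrottlingLogger:
    """Hypothetical sketch: emit at most `max_per_window` warnings
    per `window_seconds` window; silently drop the rest."""

    def __init__(self, logger, max_per_window=5, window_seconds=60.0,
                 clock=time.monotonic):
        self._logger = logger
        self._max = max_per_window
        self._window = window_seconds
        self._clock = clock
        self._window_start = clock()
        self._emitted = 0

    def warning(self, msg, *args):
        now = self._clock()
        if now - self._window_start >= self._window:
            # A new window opens: reset the budget.
            self._window_start = now
            self._emitted = 0
        if self._emitted < self._max:
            self._emitted += 1
            self._logger.warning(msg, *args)
            return True
        return False  # throttled: do no harm, stay quiet
```

The injectable `clock` is just there to make the behavior testable; a real implementation would also typically deduplicate by error class, as B hints.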
A
Yeah,
what
he
explained
to
me
makes
sense-
I
I
guess,
given
the
number
of
folks
here,
probably
we
can,
we
cannot
make
a
vote
or
something
probably
I
can
bring
this
to
the
next,
like
the
oauth
spec
meeting
and
and
gather
more
feedback
and
based
on
that
consensus.
We
can
like,
as
long
as
we
make
a
decision
either
we
move
all
the
validation
from
the
api
to
the
sdk,
so
the
api
should
basically
put
in
this
no
harm
philosophy
as
the
design
principle
and
align
with
that
or
we
give
flexibility.
B
I think there are different language opinions. Like, I don't know if this is still true, but in the early days at least, Go took the philosophy of "crash early": crash your app, don't try to run if things are bad. That may not be true anymore, but certainly that was a philosophy that was floating around in the wild, and that may be fine for a different ecosystem. For Java, we do not believe that it is the correct way to go.
F
There are these "must" constructors, where you just say: I don't want to bother checking an error condition here, I'm going to crash; that exists because of the philosophy you mentioned. But my actual proposal for v2 of the prototype API would be to remove "must" and tell the programmer to deal with it themselves. I'm certainly not going to panic, and, as you mentioned, there is actually a static logger thing, and I don't know that it's rate limited; it should be.
F
But
if
for
an
instrument,
registration
error,
I
don't
think
it's
a
it's.
A
rate
limit
concern.
You'll
get
back
a
thing,
that's
not
going
to
crash
you
at
runtime,
and
it's
not
going
to
log
at
run
time
either.
D
Yeah, just to clarify: in the PR there, it's not raising an exception, it's just logging as well. But it's a matter of whether or not we need to implement that in the API, and, like, keep track of instruments and check the instrument names and stuff like that. So it sounds like we're not out of consensus, though. Is that right?
F
Yeah, I think we're all agreeing on this topic, it sounds like.
F
Yeah, I feel like, for me, it's kind of interesting that there aren't very many opportunities in the telemetry APIs to receive an error. You know, in the tracing API, I don't think there's anything in the spec about errors. So this is kind of the first error category of OpenTelemetry: instrument registration failing because you tried to do that twice and gave us conflicting information.
F
So it falls into a category of specified error, as opposed to almost everything else, which is just like: you should do no harm, you should keep functioning. But here's a case where we're specifying an error.
A
It's probably not worth the time to search for it now, but I remember we linked to some higher-level documents in the spec, and there's a general recommendation on how to handle concurrency and errors.
B
Oh, Riley, I think what you wrote with my name on it there is: there are duplicated instrument names under the meter. Right now, in the SDK, I'm not sure what we do. That was just a thought, that we could do that, or we could ignore it, or... I don't know. I don't think Java necessarily has an answer.
E
Yeah, so a couple of meetings ago we were talking about adding min and max optional fields to the histogram proto, as an alternative to supporting the summary aggregation type, which is not currently in the metrics SDK spec. And so there's a PR open, this one that I linked to, where I try to add the min and max fields to the histogram data model, and there's a question about how to do cumulative-to-delta conversion and back again. And, you know, kind of the fundamental thing I see is that min and max are a poor match for cumulative data types, for cumulative histograms in general, and so, you know, we can...
E
We can either make min and max delta-only and include them only on delta histograms, or, if we do include them on cumulative histograms, then, you know, they're not really cumulative values, right? They're kind of delta values attached to other data that is cumulative. And so: is that acceptable, really? And if it's not acceptable, where do we go? And so, down at the bottom of this...
E
I
proposed
like
a
mechanism
for
for
transforming
cumulative,
converting
cumulative
to
delta
histograms
and
back
again,
specifically
for
these
min
and
max
points,
and
you
know
it's
a
really
simple
algorithm,
because
all
it
does
is
just
assume
that
the
min
and
max
fields
are
are
delta.
And
so
you
know
you
just
kind
of
take
the
values
as
is
rather
than
trying
to
do
any
math
when
converting
them.
E
So, you know, if you're going from cumulative to delta, your min and max fields are already delta, and so you just take those values and add them to the delta point. And if you're going back from delta to cumulative, well, the min and max fields on a cumulative histogram are really kind of delta points anyways, so you just take them and put them on. So yeah, just wanted to talk about that and see if anybody had any feedback on that.
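The proposed conversion can be made concrete with a small sketch. The point representation (plain dicts) is hypothetical, but the algorithm is the one described above: counts and sums convert arithmetically, while min and max are treated as already-delta and copied through with no math:

```python
def cumulative_to_delta(prev, cur):
    """Hypothetical sketch of the proposal above. `prev` and `cur` are
    successive cumulative histogram points. Count and sum are
    differenced; min/max are assumed to already describe the interval,
    so they pass through unchanged."""
    return {
        "count": cur["count"] - prev["count"],
        "sum": cur["sum"] - prev["sum"],
        "min": cur.get("min"),
        "max": cur.get("max"),
    }

def delta_to_cumulative(running, delta):
    """The reverse direction: accumulate count/sum, and again just
    carry the delta min/max onto the cumulative point."""
    return {
        "count": running["count"] + delta["count"],
        "sum": running["sum"] + delta["sum"],
        "min": delta.get("min"),
        "max": delta.get("max"),
    }
```

The simplicity is the point: because no arithmetic is attempted on min/max, no information is invented, at the cost that a cumulative point's min/max only describe the most recent interval.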
E
Offline is fine too. Okay, and there's a comment down at the bottom of this PR that, you know, makes it more concrete with an example.
E
This one, right... no, further down... right there. So yeah, that's where we stand on that. I noticed that Riley added this kind of issue to the feature-freeze project, so yeah, thanks for that. I'm gonna try to respond to feedback as soon as possible and try to move this forward.
F
There are a number of formats out there that seem to have histogram plus min and max, and I don't have them at hand. I think, in general, Jack's proposal is the best that we can do, and I think we should do the best that we can do and just accept the kind of blemishes: that this isn't truly a round-tripping protocol.
F
You
doubt
with
temporality
considered
and
that's
that's
okay-
that
because
they're
optional
and
nobody's
truly
going
to
do
two
conversions
on
their
collection
path.
F
It's
just
the
other
way
that
doesn't
work,
and
it's
not
clear
that
that
matters
to
very
many
people.
E
And
the
answer
on
what
you
do,
if
you
need
to
go
from
cumulative
to
delta,
is
you
need
to,
you
know,
collect
in
delta
format
at
the
collection
point,
if
accuracy
is
really
important
to
you
and
I'm
not
sure,
there's
any
way
around
that.
A
Yeah,
just
in
order
to
facilitate
this,
I
I
I
think
I
I
can
get
some
metrics
folks
from
microsoft
to
to
understand
their
perspective.
It
seems
spoken
feels
strong
about
this,
like
I
figure
like
if
it's
just
like
three
or
four
folks,
for
example,
josh
you're,
saying
plus
a
hundred
bogdan,
is
saying
minus
a
hundred
and
then
other
folks
don't
have
strong
opinion.
No.
E
Yeah, the way I see it, we need some sort of lightweight mechanism to capture count, sum, min and max. It's not a good fit to capture those on cumulatives, and so your options are either, you know, as Josh said, accept the blemishes, with, you know, whatever the semantics are if these fields are on cumulatives, or make them delta-only, and each one of those comes with some...
F
Why is this request coming up when we have a histogram that can tell you roughly what the maximum is? You just look for your largest-value non-zero bucket, and that's where your max lies. So why do people keep coming in saying "I want to know more exactly what my max is"? It has to do with...
F
I
think,
a
little
bit
about
the
accuracy
of
it.
It
also
it
see.
It's
sort
of.
It
seems
common
to
have
a
like
a
line
drawn
on
your
histogram
with
p100,
p0
and.
F
For
histogram
resolution
I
don't
I
don't
know,
I'm
not
sure
what
the
case
is.
E
Well,
it
you
you're
right
that
if
you
have
high
resolution
histograms,
then
you
have
one
bucket.
That
is
approximately
what
your
max
is,
but
you
know,
I
think,
that's
still
a
pretty
heavyweight
mechanism
to
you
know
have
all
those
buckets
just
if
you
just
want
something
like
a
min
and
a
max.
That's.
F
Right,
I
totally
agree.
Thank
you
for
rescuing
me.
There
I
personally
have
written
a
type
that
I
use
my
own
code
called
min
max
sum
count.
It's
one
that
we've
tried.
I've
tried
to
spec
out
earlier
in
the
lifetime
of
this
project.
The
the
the
justification
goes
something
like
histo.
F
It's just this case of cumulative, and then we have... The reason why I felt confident proposing the PR in the state I did is that Prometheus gives us this example of a summary with min, max, sum and count, but also these quantiles, and there you have an existence proof that someone wanted this and went to the trouble of implementing it in several languages. So that's one reason why I think people come back to this question.
F
Yeah, I can go to that PR and ask. Yes.
A
Okay
and
and
jack
would
next
tuesday
the
olap
spike
meeting
work
for
you.
I
think
we
can
probably
advertise
there
and
see
other
folks.
I
think
this
is
important,
so
we
can
play
somehow
yeah.
You
can.
A
I
see
like
whenever
one
or
two
folks
who
have
very
strong
like
feeling,
then
we
probably
need
to
get
more
like
feedback
from
other
folks
understand
yeah
after
the
meeting
I'll
make
sure
I
put
a
topic
for
next
tuesday.
F
Okay, thank you. Just giving an update on the exponential histogram protocol.
F
The protocol PR, number 322, which is the first of those bullets, is ready to merge. I think we've incorporated all the feedback that we could. What we were left with was a number of, sort of, stipulations that come down to: when is this going to work, how correct are you required to be, how are you required to handle values that you can't possibly, you know, handle, and what to do with values that are just, like, outside the range of a floating-point number, and so on.
F
You
can
see
the
draft
pr,
it's
not
quite
ready,
so
I
marked
the
draft
but
that
that
currently
contains
the
highest
level
stuff,
which
is
pretty
easy
for
me
to
reiterate,
and
then
I'm
going
to
add
the
stuff
about
like
what
to
do
with
values
you
can't
recognize
or
or
handle
in
in
the
next
day
or
so
so
that
that
should
be
ready
for
people
to
look
at
for
spec
review,
but
I
think
we're
gonna
merge
the
pr
soon.
F
And
that's
it.
I
think,
jack's
question
about
me
and
max
is
really
the
biggest
pressing
concern
for
histogram
it's
also
connected,
and
if
josh
were
here,
we
could
talk
about
it
right
now.
I
figured
which
issue
number,
but
it's
like
whether
the
sum
is
monotonic
or
not,
and
that
that
issue
is
not
gone
away.
We
haven't
really
talked
about
it
or
addressed
it,
but
it's
somewhat
connected
with
all
these
statements.
F
All
these
questions
as
well,
because
as
jack
proposed
like
you
might
have
a
flag
to
say,
there's
no
no
max
here,
but
you
could
also
use
field
presence
which
I've
looked
up
and
researched,
which
looks
okay
to
me,
and
then
you
might
want
to
do
the
same
for
min
and
max.
A
Yeah
and
just
to
confirm
my
gut
feeling
is,
if
that
mean
maxine
got
sorted
out.
We
need
to
update
the
the
view
api
and
isdk
to
cover
that,
and
if
the
exponential
histogram
got
merged
to
the
matrix
data
model,
we
probably
can
still
release
the
stable
version,
and
then
I
did
that
like
later.
Instead
of
having
to
block
on
that,
that's
my
understanding,
so
jack's
pr
is
really
high
priority
from
my
opinion,
because
if
that's
the
case,
we
need
to
cover
that
in
the
sdk
api
part.
F
Right, yeah, I don't think we need to block anything on the exponential histogram, you know, implementation. I think the protocol should go in, just to get started implementing it, and then any requirements about implementing that aggregator...
F
I think that should come later. Although we have prototypes showing how to implement the logic of the histogram, the actual behavior that an aggregator implements at runtime: there are so many ways to do it that we might want to talk about that a little bit more. And that's a little bit what I'm getting at with the data model. The data model is going to say, you know, you can use scale 16, but it's incredibly high precision, and, you know, some users are not going to have enough bits of, you know, like, whatever...
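For reference, the base-2 exponential bucket mapping under discussion can be sketched like this. The function name is hypothetical, and it assumes the data model's lower-exclusive convention, where bucket `i` at scale `s` covers `(base**i, base**(i+1)]` with `base = 2**(2**-s)`:

```python
import math

def map_to_index(value, scale):
    """Hypothetical sketch: bucket index for a positive value at the
    given scale. Bucket i covers (base**i, base**(i+1)] where
    base = 2**(2**-scale); higher scales mean finer buckets."""
    if value <= 0:
        raise ValueError("only positive values are bucketed")
    return math.ceil(math.log2(value) * (2 ** scale)) - 1
```

At scale 16 the base is `2**(2**-16)`, roughly 1.0000106, which is the "incredibly high precision" point F raises: at that resolution the limited significand of a float starts to matter.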
A
Okay, so let's keep the ball rolling here, and meanwhile treat the min/max as an urgent thing.
F
Cool. The third bullet there is just an update to the spec I wrote two hours ago. I think we've discussed all this before, but it could use... I think the data model just needed a little bit more depth about inclusivity and exclusivity, what we mean, and why we don't support a greater-than-or-equals, for example.
A
I
see
yeah,
I
I
know
that
a
lot
of
microsoft
folks,
they
start
to
like
implement
that
windows
team
they're,
doing
the
like
matrix
prototype
based
on
this,
and
they
have
a
lot
of
questions.
For
me.
They're,
like
the
the
left
bound,
is
exclusive,
the
right
part
yeah,
and
they
have
they
have
some
system
that
requires
the
left
part
to
be
inclusive
and
the
right
part
to
be
executed.
So
they're
trying
to
see
how
to
model
that
other
questions
well,.
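The two boundary conventions differ only in which side a value that lands exactly on a bound falls. A quick hypothetical sketch of the lookup for explicit-bounds buckets, assuming the lower-exclusive, upper-inclusive rule described above:

```python
import bisect

def bucket_index(value, bounds):
    """Hypothetical sketch: `bounds` is a sorted list of upper bounds.
    Bucket i covers (bounds[i-1], bounds[i]], i.e. lower-exclusive,
    upper-inclusive; index len(bounds) is the overflow bucket.
    bisect_left finds the first bound >= value, which implements
    exactly that rule."""
    return bisect.bisect_left(bounds, value)
```

The opposite convention (lower-inclusive, upper-exclusive, as the system the Windows team mentions wants) would be `bisect.bisect_right(bounds, value)` instead; only values sitting exactly on a boundary are assigned differently.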
F
I put an answer to that, which is: ignore that problem and just go ahead. We've discussed this enough times, but I feel like it should go in the data model. Also, the exponential histogram uses the opposite convention, so they might be happier, but they might not be, depending on whether they're free to change boundaries. Cool.