From YouTube: 2021-07-20 meeting
A
Okay, let's start. I'm sharing my screen; let me know if you cannot see it. So first, the update on the view PR. In this PR you can see there are many, many comments.
In the last meeting we discussed that we want to focus on the view part only and try to scope it down. Previously I felt there was some entanglement: I had to document the pipeline part, but that part has a lot of debate, so I decided to remove it entirely, so we can focus on the view.
So please take a look at this version of the PR and see if we can make progress. I believe some of the metrics experts, like the two Joshes, are okay with the current view part, so I want to have more feedback from folks who haven't been closely monitoring this. And for the upcoming PR: instead of calling it "pipeline", I want to be very clear that the problem we're solving is how we allow multiple exporters to coexist. There could be two scenarios.
You have some metric data you want to send to the local environment, just to be able to quickly respond to anything like auto-scaling or disaster recovery, but for business metrics you want to send to the cloud. That gives the requirement: we want to have multiple exporters in the same meter provider.
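The multiple-exporters requirement can be sketched with a toy, stdlib-only model. All class and method names here are illustrative, not the actual OpenTelemetry API, which was still under design at the time of this meeting:

```python
# Toy model of "multiple exporters coexisting in one meter provider".
# All names are hypothetical; a real SDK would batch and serialize.
class Exporter:
    def __init__(self, destination):
        self.destination = destination
        self.received = []          # stand-in for "send over the wire"

    def export(self, metric):
        self.received.append(metric)

class MeterProvider:
    def __init__(self, exporters):
        # The requirement under discussion: several exporters live in
        # the same provider, and each sees the same recorded data.
        self.exporters = list(exporters)

    def record(self, name, value):
        for exporter in self.exporters:
            exporter.export((name, value))

local = Exporter("localhost")   # fast local reaction: auto-scale, recovery
cloud = Exporter("cloud")       # business metrics, aggregated centrally
provider = MeterProvider([local, cloud])
provider.record("cpu.utilization", 0.97)
```

The point is only that both exporters observe the same measurement without knowing about each other.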
The approach I have been taking focuses too much on the implementation details, and people would like to see the spec try to scope what the SDK should provide, instead of focusing on how it should be provided. I have some initial ideas based on the discussion in the SIG meeting, and I also followed up offline with Bogdan, Josh, and Suresh, so that will be the next PR
I'm going to send. Any questions on this too? So my ask is: please review the first one, the link here, and I'll send a separate PR later this week, before our Thursday.

Just a quick question, to sort of be sure.
A
What if... actually, do I understand that PR correctly? The disentanglement of the pipelines: is that about exporting to two different, basically, SDKs, or is it more like I have one measurement that I want to export to two different places? Because that's what the views are basically about, right?
It's the latter! If you have multiple SDKs, they shouldn't even know about each other; each SDK is totally isolated from the others, of course.
A
When you do the real implementation you can be smart, as long as the user wouldn't even realize the difference. And yes, this is specifically about the exporters. The two scenarios: one is, in a library or in your application you have something, for example a counter, and you're saying: I want to double-pump the data. I want to send the counter information to both my local environment and my remote environment, for whatever reason.
I think a lot of people might use this when they're exploring something: they want to see the dashboard, and meanwhile they want to dump the data on the console. That's very unlikely in production, because in production I expect people want efficiency, so double-pumping doesn't seem to be the smart way of doing it. What's more likely is that in your application you might have something like the body temperature of a patient and the blood pressure, and for body temperature
you don't expect that to change every second, so there's no reason for you to export it that frequently. But for blood pressure, if it dropped all of a sudden, then it's an emergency, so you probably want to export that to some device so it can immediately respond and save the patient. For temperature, you probably want to keep it just for the record, in case you want to revisit the surgery or something. So in this case you might send different signals to different places. And for cloud,
I think it is especially important for some people to have the SLO, the service level objective, metrics sent to the cloud, so ultimately they can aggregate and see the overall operational health of the system. But for something like fast scale-out (say, a server has 100% CPU for more than 15 seconds and you want to auto-scale), it doesn't make sense for you to trigger that logic in the cloud, because that's not stable; you want to have the local decision.
A
Okay, great, thank you. Sorry, I heard some background noise. So, are there more questions from others? If not... may I ask you to mute yourself, please? Okay, any other questions? If not, I'll move to the next one. I want to see if Victor is here. Okay, so Victor, would you give us an update on the aggregator PR and what you need?
B
Yeah, sure. Similar to what Riley did, I also scoped down what we want to specify in the specification, and essentially made it a bit more generic, based on feedback I've gotten from the different types of implementation that are occurring for how to potentially do this aggregation/aggregator piece. Okay, so please take a look at the new update.
There are two outstanding questions in general that I'd like people to comment on. The first is: we really need to figure out what the scope of an aggregator really is, and whether we need an aggregator per se. Currently the spec has the concept of an aggregator tied more closely to a metric point; essentially an aggregator is a proxy for, or a representation of, what an OTLP metric point would be, and that would be a sum, a gauge, or a histogram.
...by the exporter, or some other means, to get to the OTLP. So that's a general question. One comment is that today, because of how we currently have the OTLP metric data point defined, we don't really have the concept of a min or max. So that is a good discussion point: do we need it? If we do need it, how would we do it? Would we add it to the OTLP protocol, or would we just have an aggregator that doesn't immediately support the OTLP protocol? So that is a question.
I'd like the community to work through it and figure out which direction we want to go. Then the second question, which is probably a little bit easier, is: how do we want to represent an aggregator, from the perspective of how we specify it in a view? There are two competing options. One is basically that we just create an enumeration.
I have a write-up of some pros and cons, so I'd also like to have some discussion and answers from the community. Other than that, I'm waiting for Riley to lock down the view API a bit more, so we can sync up the aggregator and the view per se, in terms of how we specify pipelines and all the other stuff associated with it.
D
Well, Georg is here again today; he presented last week on the Dynatrace prototype. I think we've reached a place where there is consensus: with the exception of a few outside contributors who have histogram backgrounds, for the most part the OpenTelemetry and Prometheus groups are all converging on this particular format. In my opinion, we're in a good place as far as that decision goes.
But what we now really need is to see implementations that we are extremely comfortable with, and I think we are comfortable. What I was waiting to see was a second prototype, one with a slightly different flavor than the one Georg presented last week.
This would be the one that uk is working on, from New Relic. He often doesn't come to these meetings, but he's been the lead on this histogram OTEP, and the idea there was to have what's called auto-ranging: you tell the histogram aggregator how much memory it has to use, and depending on the range of values, it will adjust its buckets to match.
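The auto-ranging idea can be sketched in a few lines of plain Python. This is illustrative only, not the actual prototype: an exponential histogram whose bucket index is `floor(log2(value) * 2**scale)`, and which lowers the scale (merging each pair of adjacent buckets) whenever the memory budget would be exceeded:

```python
import math
from collections import Counter

# Illustrative auto-ranging exponential histogram (hypothetical sketch).
# Lowering the scale by one halves the resolution: bucket index i at
# scale s maps to index i >> 1 at scale s - 1, so pairs of buckets merge.
class AutoRangingHistogram:
    def __init__(self, max_buckets=16, scale=3):
        self.max_buckets = max_buckets
        self.scale = scale
        self.buckets = Counter()

    def _index(self, value):
        # Sketch handles positive values only.
        return math.floor(math.log2(value) * 2 ** self.scale)

    def record(self, value):
        self.buckets[self._index(value)] += 1
        while len(self.buckets) > self.max_buckets:
            self._downscale()

    def _downscale(self):
        self.scale -= 1
        merged = Counter()
        for index, count in self.buckets.items():
            merged[index >> 1] += count   # adjacent buckets pair up
        self.buckets = merged

h = AutoRangingHistogram(max_buckets=4)
for value in [1.0, 2.0, 4.0, 100.0, 0.5, 3.7]:
    h.record(value)
```

The structure trades bucket precision for a hard memory bound, which is the property being discussed.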
As far as I'm concerned, once we have those two prototypes, we've covered our bases. What we're actually hoping for, in practical terms, is that someone from Datadog, who is sort of the missing voice at this point, would step up and talk about how much they're willing to adapt the DDSketch libraries, and how that would look.
At this point, I think any library will do, and I'd like it to fall to the implementers in each language to make decisions about which library they'd like to use: maybe which prototype they copy their implementation from, or whether they're able to use or adapt another existing library.
Because performance demands are different in each language, I could very easily see a different default implementation in JavaScript than in a different language, and I hope that is where we land. As for next steps, I'm waiting to see more from the second prototype, but I don't think there's any more debate; there's no more debate, really. Prometheus has also produced the same format, and I guess we could look at the Prometheus client prototypes as well. I think the point is: we have plenty of prototypes, and now we need real implementations.
We have the confidence that we could write the protocol right now. And I think we should merge the two PRs we have out right now, and release them before we do this, so that we're pushing the histogram back just a little bit further than the two pending PRs. Then what we need are realistic SDK implementations of that protocol, and the prototypes we have should be very close to that.
I would feel confident, perhaps even this week, sending a protocol PR that puts a draft out: this is the consensus we've reached. Maybe that would help people realize we're serious, but I'm still looking forward to evidence that we have real SDK implementations, not just prototypes of these.
A
Implementations? Do you think, once the first two PRs are...
D
Those two PRs are unrelated to histograms, just so we're clear: there's one about sum interpretation and there's one about missing values.
I think it would make sense to release those in the current snapshot, and then I can send a PR, or anyone could, really, based on the OTEP and the discussion in issue 1776. I still think we need to see that second prototype, or a DDSketch-like modification, or something like that.
A
Okay. So, Josh, it might be helpful if you could put some links here; I think there are multiple PRs that we mentioned. Yeah, thanks. Does anyone have questions?
Okay, I saw Josh, Suresh, John. So just a heads up: I wonder if you could give us an update on the exemplar part; we'll put you at the end of the meeting. Okay, so we go to the next one. This is a question: the attribute value array, is that a list or a set?
A
Okay, it was just a clarification question on my part, because I saw it in the proto and I wasn't sure what the implications were: whether it meant that it should be treated as a list, or as a set of attributes; or whether the attribute value can have the data type of an array, and whether that array is interpreted as a set or as a list. But that answers it, thank you.
D
Just for background, that was added mostly to help with tracing and resources. We had this case, it was like HTTP headers: you have a list, and we accepted that attributes could be a list, for the sort of existing data out there. But I don't think anyone has stated with certainty that we must recognize attribute lists as list-valued attributes on metrics.
C
I think there is another array, or list, that we have, which is the list of key-values, and I think that may be where the confusion comes from, because we say that that list is actually a map, and it has the semantics of a map; but that is because of how protobuf implements and serializes maps.
The serialization of a map in protobuf is actually a repeated key-value, and that's why we adopted that instead of using a proper map in the proto: because maps are slower than a list of key-values. So that's probably where some of the confusion comes from. But the attribute value array is just a list of arbitrary things, and we don't apply any set or hash property.
D
On that, thank you, Bogdan. I think that actually probably was what Georg was asking about: not list-valued attributes, but the attribute set being treated as a list for efficiency reasons while being interpreted as a map. And when you see duplicate keys, we've said that last value wins, across the board, as far as I am aware.
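The last-value-wins rule can be shown in a few lines of plain Python (illustrative only): a repeated key-value list, which is how protobuf serializes a map, folds left to right into a dict, so later duplicates overwrite earlier ones.

```python
# Interpret a repeated key-value list as a map with last-value-wins.
def attributes_to_map(pairs):
    result = {}
    for key, value in pairs:    # later duplicates overwrite earlier ones
        result[key] = value
    return result

merged = attributes_to_map([("host", "a"), ("region", "us"), ("host", "b")])
```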
E
Yeah, the FYI is just that we had talked about this last time. There's a proposal put on the ground; to the extent that you still agree with it, please comment. It looks like it has two LGTMs already, but it's just making sure that we can represent non-monotonic sums and histograms, and fixing the current text as written in the next version of the spec.
D
Yeah, we've got these two standing PRs; they're both fairly uncontroversial. At least, the one Josh just posted was discussed last week, and the one I posted was discussed four or five weeks ago. I think we just need another few approvals to get those two merged, and then I think we could release that spec before doing the histogram stuff.
E
Real quick, okay, yeah. So we had a bunch of good topics on, say, the spec requirements bug. I still think that bug might represent my current thinking of what goes in the spec. What I've been prototyping in terms of implementation is reservoir sampling. I think the way that Prometheus and OpenCensus currently do exemplar sampling works with the old-style explicit-bucket histograms; when we start hitting these new histograms, there's a question of
whether we want to try to do the one-exemplar-per-bucket strategy that Prometheus and OpenCensus both implement. I should make sure I list the order correctly; I think Prometheus just copied what OpenCensus did. Then Josh MacDonald proposed doing reservoir sampling. So what I'm trying to do right now is get an efficient implementation of reservoir sampling in place. In terms of current open questions,
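Reservoir sampling, as mentioned here, keeps a fixed-size uniform sample of a stream in O(k) memory, independent of how many buckets the histogram has. A minimal Algorithm R sketch in plain Python (illustrative only, not the prototype under discussion):

```python
import random

def reservoir_sample(stream, k, seed=0):
    # Algorithm R: every stream element ends up in the reservoir with
    # equal probability, using only k slots of memory.
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)   # replacement chance shrinks as i grows
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(1000), 10)
```

A user-controlled reservoir size bounds exemplar memory regardless of bucket count, which is the motivation discussed below.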
in my mind, there are two things for us to discuss, and I can type them down if you want, Riley. But the number one important issue is: do we want to try, to the best of our ability,
to have an exemplar per histogram bucket? Is that a requirement on how we do exemplar sampling? And then the second open question is: do we want to call out a direct link between exemplar sampling and trace sampling? Specifically, I want to make a hook where we force the SDKs...
Traces would be... especially in v1, I think the current spec allows exemplars that don't have sampled traces; I'm just not certain that's a use case we want to expand into without understanding it better. I think right now most of the use cases I understand for exemplars are: I want to go to a trace from an exemplar. Any other use case, I think, is better served via different means. So those are my two open questions. I'm going to leave it up to the SIG.
My suggestion, what I'm leaning towards now for the second one, is yes: we force an actively sampled trace to be in context in order to sample an exemplar. And my answer to the first one is no: we will not have one sample per histogram bucket; we're going to make a reservoir, and we're going to allow users to control how big that reservoir is. That's both for a performance-related reason, and because this new histogram we're planning to pull in could lead to giant reservoirs if we do one exemplar per bucket.
A
Okay, so my quick response: on the first one, I have no strong opinion. I would personally prefer to have the ability to have an exemplar per bucket, as I've seen some scenarios at Microsoft, but even without that we're probably fine; it's not a big surprise to me. For the second part, if...
D
Yes, I would support that call, very clearly: the aggregator gets to decide how to do its own exemplars, is what I would say, because I can see lots of different ways to do a histogram. And I would say: whatever we can do to get the most progress fastest is what we should do. So I don't think we need to talk about probabilities.
We don't need to talk about unbiased reservoirs; we don't necessarily need to talk about any of those probabilistic sampling topics. It's just: there's a way to get one or more exemplars per aggregator per export period, and the default that you're proposing is: only consider those with trace IDs, and give a fixed number per aggregator per period, I think. And then a classic histogram can do the straightforward thing of one per bucket, and a new-style histogram can do...
There are several options here, and I don't want to tie us down, but I would say something like: choose 10 exemplars per period. There are many ways to do this, and I don't want to sidetrack this meeting, but come to the sampling segment; let's talk. Yes.
E
I agree, and I'm planning to. Okay, that sounds... Just so we're aware, what that means is that the answer to question number one is: we're going to tie exemplar sampling to the aggregator. That's a decision, which means I need to take a second look at the current aggregator spec and expand into it what exemplar sampling looks like there; but we can specify the tying into a trace as an independent thing.
C
Across the board, I would say that long term you probably want to have an exemplar sampler interface, or contract, that can be customized. Short term, I think we should start with the simple thing, and the most simple thing is: let's focus on the static, fixed boundaries and get the solution for that.
Then, once we have the new histogram, get a solution for the other one; and then we learn from these two policies and we make an interface that is customizable, so the aggregator will essentially accept an exemplar sampler, or whatever that interface is called, that will do the exemplars for them.
D
What are you going to do if you're asked for 10 exemplars and you've got a thousand buckets? Either you can be completely unprobabilistic, or you can try to do something that spreads those exemplars out across all the buckets, which is what the original system did when it had ten buckets. And there are algorithms that can do probabilistic bucket assignment for exemplar selection that are going to be flat across the bucket spectrum; I just don't want to talk about them here, because it's so far in the future.
E
We can... I think it'd be fair to expose an interface that says: is this measurement something that could be sampled? And then expose a second interface, which is a reservoir, where you say: here is a thing I'm offering to you; you can collect it or not. We're going to allow the aggregator to instantiate the reservoir, and we can have a general-purpose kind of sampler thing on the side that you can hook. That's what I'm debating, given this discussion.
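The split being debated could look roughly like this (all names hypothetical, plain Python, not a real SDK API): one interface answers "could this measurement be sampled?", and a separate reservoir, instantiated by the aggregator, decides what is actually kept.

```python
import random

# Hypothetical interfaces sketching the proposed split.
class SampledTraceFilter:
    def should_offer(self, measurement):
        # e.g. only offer measurements recorded inside a sampled trace
        return measurement.get("trace_sampled", False)

class FixedSizeReservoir:
    def __init__(self, size, seed=0):
        self.size = size
        self.rng = random.Random(seed)
        self.items = []
        self.offered = 0

    def offer(self, measurement):
        # Keep a bounded, uniform subset of everything offered.
        i = self.offered
        self.offered += 1
        if len(self.items) < self.size:
            self.items.append(measurement)
        else:
            j = self.rng.randint(0, i)
            if j < self.size:
                self.items[j] = measurement

f = SampledTraceFilter()
r = FixedSizeReservoir(size=2)
for v in range(10):
    m = {"value": v, "trace_sampled": v % 2 == 0}
    if f.should_offer(m):
        r.offer(m)
```

The filter can sit on the hot path and reject cheaply, while the reservoir bounds memory; either piece can be swapped independently.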
I had already kind of prototyped that a little bit, but I need to do some better work and prove that this is going to be a good split that you can do efficiently. A real quick question, though, about something Bogdan said.
C
I would personally say that that can wait for v1.1, but that's my personal opinion.
D
I feel like the promise in the distant future is that these exemplars can carry probabilities and convey information about the aggregate, about the dimensions that you didn't aggregate. That's the long term, and it's the same for traces and all the other connections with probability sampling. So if you have a hundred exemplars per histogram, you can recover information about the aggregate, about the dimensions that you aggregated away.
That's one thing you can do with exemplars, but it's tricky and complicated. You can do that with sums, you can do that with histograms, you can do that with gauges; you can do that with all the data types. But we don't need to do that first. I think we should get back to focusing on the known use, which is just the histograms; but it's important that people can see the distant future.
In the case of a histogram, if you have two passes, meaning I can remember every measurement that I've seen for my interval, then I've described in the sampling OTEP how to do assignment of exemplars in a probabilistic way: you compute the inverse bucket probability, use that as a weight, and run weighted sampling, and now your exemplars are going to be unbiased. They will convey information about the weight in their bucket, and then you can take 100 exemplars and use them to estimate the distribution.
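A toy two-pass version of that weighting (illustrative only; sampling is done with replacement here for brevity, whereas the OTEP describes weighted reservoir sampling): the first pass counts bucket occupancy, and the second draws exemplars weighted by the inverse of each measurement's bucket probability, so sparse buckets are represented as strongly as dense ones.

```python
import random
from collections import Counter

def weighted_exemplars(measurements, bucket_of, k, seed=0):
    rng = random.Random(seed)
    counts = Counter(bucket_of(m) for m in measurements)       # pass 1
    total = len(measurements)
    # Inverse bucket probability as the weight: 1 / (count / total).
    weights = [total / counts[bucket_of(m)] for m in measurements]
    return rng.choices(measurements, weights=weights, k=k)     # pass 2

data = [1] * 90 + [100] * 10       # a dense bucket and a sparse bucket
picked = weighted_exemplars(data, bucket_of=lambda v: v > 10, k=4)
```

With these weights, each of the two buckets is equally likely to contribute any given exemplar, despite the 90/10 imbalance.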
E
Yeah, at this point I'm more worried about the cost of exemplars, specifically around allocations for GC languages. I want to make sure that whatever we do,
we can do in a hot path on a synchronous collect, and we can do it in a way that is efficient, because whatever we specify needs to have the right set of interfaces so that we can come in behind it and do cool stuff later. Even if the existing implementation is inefficient, it doesn't matter, as long as the specification allows us to do the right thing. So all of that sounds wonderful for the future; we'll focus on the histogram right now, and I'll make a very targeted PR.
I still need a little bit of time on the prototype, but I expect to have something published next week, with a PR behind it, that shows how it would work in practice.
A
So, for the second part, I also feel my answer would be yes. Just one minor thing I want to mention: I think in the tracing spec there might be cases where the context can be passed in explicitly.
E
So one thing that we did in the Java API, in the current prototype for this specification, is add context as an explicit option, in addition to the implicit, because Java allows both explicit and implicit context propagation, so you need to provide both. That's just going to be something you have to do as a metric provider. The second thing I do want to say is: this precludes asynchronous instruments from really participating in exemplars, and I think that's totally fine. I don't think it makes sense for asynchronous instruments to provide exemplars.
D
Because, for them... I'd contend that in the long term, exemplars can be used for probabilities, and the world has many examples of metric systems that were built up on data points, not aggregations: StatsD is an example, the Amazon embedded metrics format is an example. There are a bunch of examples of people who just say: I don't want to do aggregations; I want to send data points with high cardinality. Exemplars are high-cardinality data points, and so I just don't want to hold us back from that future,
where someone says: I don't even care about aggregation, I just want sampled metric points at full cardinality, streaming through just the way spans do. There's not really a difference between a histogram data point and a span data point at the end of the day; a histogram exemplar is effectively a span, and we should be able to treat histogram exemplars the same way we treat spans. And today I can do all kinds of fancy analysis on spans, because I count them and they have high cardinality.
E
I think that's a future we should push for. I would still question the usage of exemplars for that, but I think it's viable, so I would want to evaluate whether that's the best way to use exemplars, given what people expect: if you start throwing that much information at exemplars, are you going to crash existing systems?
C
Sure. One thing: my thought was always that for raw measurements we will have a different metric type, that is, raw measurements, which will be more optimized for a high volume of data, because exemplars as they are defined right now are not necessarily the most optimized for high volume. Yeah.
D
I agree, and so you can imagine having, say, a new multivariate metrics event protocol that is column-based: here's a packet of data that has a million data points; now run that through an aggregator and come out with a bunch of aggregations and 100 exemplars per point. You've cut down the amount of data by a huge amount; you've sampled; you've got lots of exemplars. You can reproduce that original million data points from your aggregations and your thousand exemplars. It's just an option: it is less efficient, but putting a few exemplars in a protocol is much less data than putting all the data into a raw protocol, so there's some reason that people have talked about it. Sometimes... I'll take my answer offline.
E
Okay, so we'll allow room for expansion going forward in the spec, but we'll try to have a very narrowly focused "must" for now, and we'll have a hook for changing how samplers work, but the default is going to be this attached-with-trace choice. Okay, I actually need to drop, because I had five meetings over the top of this, and I apologize, because I should be more active here; anyway, I do need to drop.