From YouTube: 2021-06-22 meeting
C: Okay, maybe I can host. Let me share my screen. I haven't prepared for the data model topics, but I do remember we have three high-level items that we want to tackle, and I remember last week we mentioned that you, Bogdan, Josh, and Suresh would follow up and discuss one of the topics.
D: That would be great, up to you? Yes, thank you. I will say I'm still kind of backlogged from taking two weeks off earlier. I was here in the office last week, but I didn't catch up. I just started to catch up on the histogram topic, which is pretty big and important, but I don't have any connection to the other topics.
C: Okay, that's why I asked.
D: Now that we have these in front of me: the gauge histogram is definitely something that I offered to work on, and it just doesn't seem as important as the high-resolution histogram stuff. For one thing, that's the exponential buckets question, and we can talk about OTEP 149 here. I don't think that talking about multivariate time series is going to be profitable for us either. The question about enums and label sets, though, I think is worth discussing, in my opinion.
D: Yeah, that's a really good point. You know, the people who have been contributing to OTEP 149 about exponential buckets are not necessarily here, I see. I see maybe a few people who could comment, but it's tough. A lot of those people are in Europe and they don't always come to these calls.
D
So
I
would
encourage
people
to
follow
that
thread.
Otep149
to
talk
about
exponential
buckets,
I
don't
think
we
have
a
quorum
to
talk
about
enums
and
label
sets
either
I'm
looking
at
the
group.
D: It's actually worse: they're not used to coming to this meeting, because it used to be more about the API. No, wait, this one is about the data model; what am I saying, this time slot used to work. Do you have anything to say? You must have been talking to Atmar; he's been giving us feedback and I'm supporting that feedback.
B: Yes, no, I just wanted to jump in about the timing issue: this is actually probably the only meeting that folks in Europe could realistically join, from a timing standpoint, because the other ones alternate between, I think, 9 p.m. and 1 a.m., or something like that. So, yeah.
D: There is solid agreement across many people in this community. What we do have is a debate lingering over OpenHistogram, which has been re-licensed, and there was some interest in using it. But when you look at the feedback from that group, which includes a number of experts who have all offered to contribute their own histograms as well, there's quite a lot of agreement about the binary approach and the base-2 histogram; it doesn't seem like anyone has volunteered to strongly defend OpenHistogram's log-linear base-10 approach.
D: It just adds complexity, and I made a post on that issue last night trying to call an end to it, because I don't think there's any fruitful further exploration that's going to come out of it. The approach of using base 10 is certainly viable, but we're asking for an investment from all these vendors. We're asking many vendors to accept this new exponential bucketing strategy, and the work that the vendors are going to do, and the OpenTelemetry Collector needs to do, and the Prometheus server needs to do to support this new type of exponential histogram, I think, outweighs the cost going into all these client libraries. Client libraries for histograms may be hard, but they're not very hard, and the amount of effort that we're going to put into maintaining a data pipeline that supports this data type is huge. I think that's why people are hesitating to go with the OpenHistogram approach: it's just one more variable, one more component in the equation that determines these boundaries.
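
For context on the bucketing being debated: OTEP 149-style base-2 exponential histograms derive every boundary from a single scale parameter. A minimal Python sketch of the idea follows; the function name is ours, and the indexing convention shown is one common formulation, not necessarily the exact one the OTEP settled on:

```python
import math

def exp_bucket_index(value: float, scale: int) -> int:
    # With base = 2**(2**-scale), bucket i covers (base**i, base**(i+1)].
    # Higher scale means finer buckets; merging adjacent bucket pairs
    # lowers the scale by one, which is why base-2 histograms from
    # different sources are cheap to combine, unlike log-linear base-10.
    base = 2.0 ** (2.0 ** -scale)
    return math.ceil(math.log(value) / math.log(base)) - 1

# At scale 0 the base is 2, so 10 falls in bucket 3, i.e. (2**3, 2**4].
assert exp_bucket_index(10.0, scale=0) == 3
```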
D: What we have now is a problem where the number of approvers on the list of formal spec approvers is not as many as we'd like. So, although there are many approvals on that OTEP, I can't merge it; only three spec approvers have approved it, and there's some reason why other people are not adding their approval, and I'm not sure what that is.
D: That is sort of why I've asked if we could update the list of approvers, but doing that just to get this merged is sort of an end run around the problem.
C: Okay, I can do that dirty job. Thank you, okay! So, next topic: I have one tiny PR that needs approval; it's a very small editorial change, so please help out. And the next topic is basically to continue what we didn't finish last meeting. So, on the duration topic, for folks who didn't have the context here:
C
We
have
six
instruments.
For
example,
you
can
use
histogram,
you
can
use
contour
you
can
use
gauge,
and
the
ask
here
is
people
have
a
feeling
that
we
need
a
specific
instrument
for
timing
things
just
because
if
you
give
a
very
generic
histogram,
basically
pushing
the
ask
to
the
user
to
use
some
timer
and
they
could
make
mistake.
For
example,
people
might
use
a
low
precision
timer
they
could
use
some
timer.
C
That
is
not
monotonic
like
all
kinds
of
the
issues,
so
I
I'm
a
little
bit
skeptical
about
this,
because
there
are
two
things
number
one.
We
already
have
something
like
in
the
tracing
span,
like
that's
trying
to
capture
the
duration,
and
I
understand
that
span
is
a
tracing
api
and
and
second
spam
will
only
work
if
the
sampler
decided
to
allow
assembly
so
for
other
cases
like
sampler,
is
not
kicking
or
you're,
not
using
a
tracing
api.
C
It
makes
me
feel
like
we're
asking
the
people
to
report
very
similar
data
twice
and
they
have
to
measure
the
time
individually,
so
that
doesn't
seem
to
be
the
right
direction.
This
is
number
one
number
two
is
I
I'm,
I'm
still
trying
to
figure
out
his
that
duration
thing
very
specific
to
metric
and
and
so
far
I'm
not
convinced.
So
this
is
why
I
asked
if
folks
could
come
and
help
to
do
some
prototype.
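
The clock-selection mistakes C describes are the crux of the timer-instrument ask. A minimal sketch of what a user has to write today with a plain histogram instrument; `histogram` here stands in for any metrics-API instrument with a `record` method (an assumption for illustration, not a spec'd API):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(histogram, **attributes):
    # time.perf_counter() is monotonic and high resolution; picking a
    # low-precision or non-monotonic clock here is exactly the user
    # error a dedicated timer instrument would rule out.
    start = time.perf_counter()
    try:
        yield
    finally:
        histogram.record(time.perf_counter() - start, **attributes)

# Usage sketch:
#   with timed(http_duration, route="/users"):
#       handle_request()
```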
C: Can we park this outside the initial stable release of the API? If it turns out there's a need, we can add it; it's not a breaking change, so it can always be added as an additive feature. But I think that should buy us some time to think through: how are we going to smooth out the tracing-and-metrics story? So if you have HTTP duration, do you allow something to convert traces to metrics, or do you have some facade on top of both that people can write to?
D: You can do that downstream and out of process, which creates an opportunity, or at least an option, to say: discard your metrics instrumentation and just have spans. But it means asking the user, when they sample spans, to deal with sampled metrics, which theoretically works, but it does degrade the quality. So what if the user comes in saying: I must have 100% metrics, no sampling, but I want to sample traces, or I don't want to emit traces at all?
E
Yeah,
I
agree
that
parking
for
now
is
probably
the
right
thing,
because
it
can
always
be
added
back
later,
and
I
it
seemed
to
me
that
many
of
the
options
that
were
put
forward
looked
very
similar
to
a
span
api.
You
start
a
thing,
you
do
some
work,
you
stop
a
thing.
E: Maybe you've got a wrapper function that handles the starting and stopping for you around some callable function, which looks very much like a span. And I think if we could generate metrics from spans, like duration and frequency and all of those things that you might want to get out of span information, that would be the ideal interface. But there is always that use case of "I don't want traces, I don't want to do tracing, I just want some metrics around this", and providing this convenience for a common measurement scenario would be worthwhile.
C: Sorry, I haven't seen who was talking, so I'm trying to capture the name here in the meeting notes. (That was Anthony.) Okay, thanks. Strange; normally Zoom will pop you to the top while you're talking.
A: I am here. I still think that the timer, at least to me and based on my experience, is, or should be, the most used instrument: when you instrument your code and you want to measure it and emit metrics, I think that is the instrument that will be used most of the time. But if there isn't space for it, I mean effort-wise, and it cannot fit, then it is what it is.
C
Yes,
I
I
I
think
here
we
probably
want
to
spend
some
time
to
invest
it.
Can
we
avoid
having
to
tell
the
user
if
you
want
to
report
http
duration
here
here
goes
the
tracing
api.
You
have
to
create
a
span,
and
here
goes
another
api
for
metrics
and
you
have
to
report
the
same
thing.
So,
ideally,
we
want
the
people
to
write
one
one
thing
that
can
achieve
both
or
they
can
take
one
if
they
want
to
select.
A
One-
and
I
I
I
think,
that's
that's
a
good
idea
like
basically
like
giving
them
a
way
to
to
measure
the
code
and
basically
creating
expense
and
and
metrics,
based
on
some
common
obstruction,
we'll
just
use
expanded
destruction.
What
I
would
consider
there
is
that,
if
this
solution
will
depend
on
expense,
then
they
must
use
the
tracing
api,
even
if
they
don't
want
to
just
to
have
a
timer.
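
One way to picture the "write one thing, get both" facade being discussed, sketched with hypothetical stand-ins (`measured`, `tracer`, and `histogram` are all illustrative, not proposed API); making both sides optional speaks to A's concern about dragging in the tracing API:

```python
import time
from contextlib import contextmanager

@contextmanager
def measured(name, tracer=None, histogram=None, **attributes):
    # Emit a span, a duration measurement, or both from one call site.
    # Passing tracer=None gives a metrics-only timer, so a user who
    # does not want tracing never touches the tracing API.
    span = tracer.start_span(name, attributes=attributes) if tracer else None
    start = time.perf_counter()
    try:
        yield span
    finally:
        elapsed = time.perf_counter() - start
        if histogram is not None:
            histogram.record(elapsed, **attributes)
        if span is not None:
            span.end()
```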
C: Okay, so I'll finish the note here. That's all the topics I can see today; for any other topics, feel free to add them here. Or, Josh, I noticed you just joined, so maybe you want to take over and cover some topics here. I think there are challenges in some of these topics, they're pretty big, and I remember last time you mentioned we should follow up offline on some of the topics here.
C
So
the
only
super
clear
action
item
for
me
is
that,
like
gmac
b
requires
one
more
spike
approval
on
this,
so
I'll
do
a
dirty
job.
It's
like
that.
For
the
other
thing,
it's
still
a
little
bit
murky.
So
I
wonder
if
you
could
shed
some
light
here.
D
Okay,
as
soon
as
before,
josh
goes
I'll
just
add.
I
think
the
idea
was
that
the
gage
history
on
topic
is
just
not
high
enough
priority
for
me
to
have
spent
time
on
the
past
week,
but
the
exponential
buckets
one
is,
I
would
love
to
talk
about.
Enums
and
label
sets
if
anyone
else
does
and
then
multivariate
time
series
I
I
feel
we're
not
ready
and
then
I
before
dropping
a
quick
question.
Sorry
there's
this
thing
about.
F: ...the data model stuff. There's an open question in my mind of: are there more metrics API things that are higher priority than any of the remaining data model things? And I'm calling out views because I don't see them in the agenda; and sorry, I missed the first 20 minutes of this meeting. I would rather talk about views instead of any of the data model stuff, because I think that's way higher priority at this point. I think, if you look at the status of all of these, basically there are lots of open questions.
F
Lots
of
investigation
that
needs
to
happen
offline
and
basic
any
discussion.
We'd
have
here,
would
be
really
good
kind
of
evaluation
of
where
to
investigate,
but
none
of
it
seems
as
high
priority
as
talking
about
views
in
the
sdk.
So
I
just
want
to
call
that
out
of
like,
I
think
none
of
that
is
as
high
priority
as
api
work
and
I'd
rather
spend
time
there
with
that
sorry
josh.
I
just
want
to
call
that
out
before
you
go
into
like
specifics.
D
Okay,
well,
I
don't
really
want
to
go
into
specifics
and
I
agree
with
you.
We
should
talk
about
views,
but
but
as
long
as
we
were
summarizing,
all
the
open
data
model
questions
there's
this
thing
about
stillness
that
we
maybe
have
slipped
what
slip,
and
we
had
a
pr
that
we
looked
at
a
week
ago.
I
guess
that
was
adding
bits
to
the
points
and
I
think
we
need
to
to
move
on
that,
because
the
prometheus
working
group
is
essentially
waiting
for
a
way
to
reflect
stillness.
F: Yeah, I think, in terms of staleness, we made a decision; we just need to follow up on it offline. So, okay, cool, that's good enough.
F
Awesome,
I
have
a
bunch
of
open
questions
that
I
hit
riley
with
in
slack.
I
I
don't
know
if
you
saw
those
my
warning
that
I
wasn't
gonna
make
it
and
everything
I
sent
there,
but
I
don't
know
if
you
got
it
so
apologize.
I
have
a
bunch
of
open
questions
for
me
around
views,
but
I
want
to
kind
of
phrase
them
hold
on
I'm
getting
here.
F
So
I
I
this
isn't
on
the
view
pr,
but
let
me
let
me
throw
a
topic
here
or
josh:
do
you
think
it
will
be
either?
If
you
you
could
present,
I
I
can
I'm
just
I'm
just
posting
my
comments.
F
Okay,
so
basically
I
think,
let's
let
I
wanted
to
focus
the
discussion
from
that
pr
on
things
we
agree
on
and
things
we
don't
agree
on,
so
just
the
high
level
components
of
the
api.
Let's
agree
on
what
those
are
first
and
so
to
make
sure
we
all
agree
tentatively.
So
what
I
saw
in
your
pr
was
you
basically
have
three
major
components
in
the
api
right.
F
So
there's
this
notion
that
I
have
I'm
going
to
create
a
view,
and
it
has
some
kind
of
identifier
for
the
view
like
there's
an
identity,
to
view
yeah
right,
there's
going
to
be
a
way
of
selecting
what
measurements
make
it
to
the
view.
That's
the
instrument
selector
thing
and
that
just
that's
actually
selecting
measurements
right
and
then
the
the
the
last
piece
is
an
aggregator
of
like
I'm,
going
to
take
these
measurements
and
I'm
going
to
aggregate
them
together
and
output.
F
Metrics
and
those
are
kind
of
the
three
major
components
and
as
long
as
we
agree
that
a
view
is
those
three
major
components.
I
think
it
makes
it
easier
for
us
to
dive
into
any
particular
piece
and
kind
of
outline.
Let's
explore
what
identity
means,
let's
explore
what
metric
selection
means
or
sorry
instrument,
measurement
selection
and
let's
explore
what
aggregation
metric
definition
looks
like
right.
Okay,
that's
the
three
major
components.
I
because.
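
The three components F lists could be pictured roughly as below; the field names and types are illustrative only, not the PR's:

```python
from dataclasses import dataclass

@dataclass
class View:
    name: str                 # identity: the output metric this view defines
    instrument_selector: str  # which instrument's measurements flow in
                              # (exact-name matching, per the later discussion)
    aggregation: str          # how measurements become a metric,
                              # e.g. "sum", "histogram", "last_value"

# Usage sketch:
view = View(name="http.server.duration.sum",
            instrument_selector="http.server.duration",
            aggregation="sum")
```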
C
I
think
I
agree
with
you:
90
percent,
the
other
10
percent.
I
need
some
clarification,
so
I,
in
my
opinion,
the
view
like
the
third
part.
Instead
of
aggregator,
it
defines
what
type
of
the
metric
you
want
to
get.
Example,
you
want
to
say,
hey,
I
have
a,
I
have
an
instrument,
but
I
have
a
different
perspective.
C
The
thing
I
want
to
specify
is
all
I
need
is
the
sum
whether
that
sum
should
be
reported
as
a
delta
change
for
every
collection
cycle
or
it's
the
absolute
value.
I
think
this
is
something
later,
not
in
the
view,
so
the
view
is
basically
I
care
about
the
total
sum
and
whether
it's
reported
as
delta
is
the
actual
implementation
detail,
and
if
I'm
exporting
the
data
to
a
system
that
only
supports
delta.
I
expect
the
system
to
be
smart
and
give
me
the
delta
instead
of
asking
me
hey.
F
Kind
of
I
think
I
need
to
dive
into
that
a
little
bit
more.
So
so
let
me
let
me
rephrase
to
make
sure
I
understand
a
view
is
an
identity,
a
selection
of
metrics
like
a
selection
criteria
for
metrics
or,
first
measurements;
sorry,
a
metric
definition
of
the
metric
you're
going
to
output
and
then
a
bit
of
code
which
I'm
calling
an
aggregator
that
will
take
those
measurements
and
turn
them
into
the
metric
yep.
F
F
You
know
histogram
or
sum,
etc,
okay,
and
then
we
would
have
now.
I
call
it
the
the
aggregator
so,
but
what
what
do
we
want
to
call
this
thing?
The
before
code,
which
takes.
C: I don't just want to count the requests; whether that count, like the counter, should be reported as a delta sum or the absolute cumulative sum, I want that to be smart. So if I'm sending data to Prometheus, and Prometheus says "I only support one way, not both", then I don't want to have to change...
F
The
view
well,
okay,
so
let
me
let
me
throw
on
my
second
question
because
that
might
tie
into
this
aggregation
thing.
My
formatting
is
terrible
because
I'm
copying
from
a
local
markdown
file,
sorry
everyone.
So
if
you
think
of,
if
you
think
of
this,
this
notion
of
like
a
view
and
the
aggregators
in
a
view
right,
I
was-
and
I
never
miss-
I
never
spell
gage
correctly.
I'm
sorry,
if
you
think
of
our
high
level
metrics
of
gauge
sum,
histogram,
okay
and
a
view
could
take
incoming
measurements
and
output
different
gauges.
F
I
could
output
a
gauge
of
a
max
a
gauge
of
a
min,
a
gauge
of
last
value
right.
The
last
value
I
saw
those
are
different:
aggregations
of
measurements,
different
ways
to
take
the
measurements
and
output
a
metric,
but
they're
all
the
same
metric
that
come
the
same
kind
of
metric
that
comes
out
maybe
a
different
name
right,
so
they're
all
gauges,
possibly
or
sums,
depending
on
how
you
want
to
do
it.
You
could
have
a
you
know.
The
sum
metric
obviously
has
a
has
a
default.
F
You
know
I'm
going
to
add
these
things
together,
the
up
down
some,
you
know
similarly
or
whatever,
and
I
just
came
up
with
really
bad
names
for
aggregator
names
here,
just
to
kind
of
like
tease
at
the
idea.
But
this
is
why
I
think
there's
this
notion
of
an
aggregator
of
how
do
I
take
the
measurements
and
turn
them
into
a
metric,
and
I
don't
think
those
are
the
same
thing
right.
I
don't
think
the
metric
type
and
the
aggregator
are
the
same
and
I
think
there's
some
flexibility.
F
People
want
in
the
aggregation,
and
so
I'm
literally
calling
that
out-
and
I
hear
what
you're
saying
that,
like
maybe
there
should
be
a
real,
simple
like
I
have
this
metric
and
there's
a
default
aggregator
to
go
there,
but
specifically
for
like
the
use
cases
you
were
calling
out
in
your
document
around
max
and
min
or
average.
You
know
where
I
don't
want
a
histogram.
I
just
want
max
and
min.
F: Okay, then I have one more thing to add, which is my fundamental concern with the existing proposal, and I want to ask this question if we agree on this way of fragmenting the problem. I think we can ask the question around aggregators a little bit later, like how a user defines one; I think that is a second-tier question. The primary question I have is: when do I get defaults, and when do I not get defaults?
F
So
if
a
user
selects
a
set
of
measurements
to
make
their
own
view,
do
all
of
the
default
metrics
that
I
was
getting
for
that
instrument
disappear
or
not.
When
does
that
happen
right
so
in
open
census?
If
you
remember,
you
have
these
instruments
and
you
have
to
make
these
views
and
aggregations
to
get
the
metrics
to
come
out
right
for
open
telemetry.
We
have
that
as
automatic.
C: If you start to specify something explicitly, then you're only going to get the one that you explicitly specified, and if you need anything else, you have to specify it. But you can specify it in a very simple way: you can say, hey, I want a view, it has the same name, I only select this specific instrument, and I just want to use it as-is; so I'm not specifying any extra change. That was my thinking before, right.
F: Okay, all right, so that answers that question. I think that needs to get called out explicitly in the PR, and I don't know if I missed it, or if I just misread things, or if it wasn't there, but I think it needs to be stated: if you write a selection criteria for a view and you touch any instrument, its default aggregator disappears.
C
Yeah,
it's
not
there
in
the
pr.
So
so
here's
my
thinking
in
order
to
explain
the
default
behavior.
We
need
the
initial
view
and
hint
to
be
in
place.
So
we
can
start
to
explain
hey
if
there's
no
hint
and
there's
no
view,
then
you
got
all
the
default
instruments.
If
there's
instrument
and
there's
hint
but
no
view,
then
you
get
all
the
instrument
and
all
the
hint
default
behavior
as
much
as
possible.
If
there's
a
certain
hint
that
he
cannot
support,
then
I
think
he
has
freedom
to
ignore
that.
C
But
if
you
have
view
view
will
take
priority.
So
all
this
like,
like
sequencing
things
like
how
how
we,
how
we
respect
the
order
of
the
execution
priority,
requires
us
to
have
all
the
pieces
at
least
initial
pieces.
There.
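
The precedence C describes (views win, then hints the SDK can honor, then instrument defaults) might resolve roughly as below; every name here is an assumption for illustration:

```python
def resolve_aggregations(instrument, views, hints, supported_hints):
    # Views take priority over everything else.
    matching = [v for v in views
                if v.instrument_selector == instrument.name]
    if matching:
        return [v.aggregation for v in matching]
    # A hint is honored only if the SDK supports it; otherwise the SDK
    # is free to ignore it, as discussed above.
    hint = hints.get(instrument.name)
    if hint is not None and hint in supported_hints:
        return [hint]
    # No view, no usable hint: the instrument's default behavior.
    return [instrument.default_aggregation]
```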
F
I
I'm
actually
thinking
from
what
I
saw
in
the
in
the
pr
in
the
discussion
is
that
if,
if
we
had
a
really
well
defined,
so
let's
pretend
like
we
don't
have
defaults
at
all
and
we
outline
what
views
look
like.
Then
we
define
what
default
views
are
layered.
On
top
of
what
views
look
like
in
the
sdk,
I
think
that
that
will
help
people
walk
through
the
technical
details
of
this
a
bit
better.
I
agree
with
you
that
you
know
from
a
conceptual
standpoint.
F
It
might
make
sense
to
come
down
from
hints
and
then
back
up.
I
just
think
we
need
to
build
those
components
out
for
people
in
the
in
the
description.
That's
that's
why
I
was
approaching
it
in
this
fashion
of
like
let's
agree
on
what
a
view
is
what
it
looks
like
and
then,
once
you
have
this
definition
of
a
view,
then
making
a
definition
of
what
a
default
view
is
and
a
default
aggregation,
and
all
that
is,
is
dead
simple,
because
we
have
the
the
vocabulary.
F: Okay, cool. So that was my second question, around default views and default aggregations, and that makes a lot of sense. My third question, if we have time... where did I put it... here's the doc, okay. This one is just open-ended, and I think your PR answers it, and I don't know if we have to have a ton of contention here; I have some thoughts, but with the selection criteria for measurements, right, there are a lot of open questions.
C: I understand the challenge, and I intended to avoid falling into this pitfall in the initial PR. From what I can see, this is very similar to the Prometheus configuration, right: you can change the metrics however you want, and probably a very first type could be that.
C
So
that's
that's
something
I
I
want
to
explore
in
the
initial
pr.
What
I'm
trying
to
avoid
in
the
in
the
first
pr
is
the
callback
I
want
to
avoid
a
situation
where
people
are
saying.
The
selection
criteria
should
be
as
flexible
as
you
can
allow
the
user
to
provide
whatever
lambda
and
we
we
give
them
all
the
possible
input
and
the
lambda
should
return
the
value
telling
us
whether
we
take
it
or
not,
because,
like
lambda
gives
actual
work
for
the
spec,
we
have
to
spec
out
if
the
lambda
got
stuck.
F: Wow, I didn't realize that we were thinking about going that flexible with the lambda; I'm totally on board with not being that flexible. I was more concerned about whether you can select multiple instruments, right, like the implementation in Java.
F
The
way
it
was
working
that
that's
interesting,
because
now
we
have
to
hook
the
the
instrument
to
storage
kind
of
bucket
and
make
sure
that
we
can
route
many
different
ways,
which
is
that
idea
behind
the
measurement
processor,
and
I
want
to
make
sure
that
we
have
enough
prototype
work
done
into
that
around
performance
and
and
done
well,
if
we're
going
to
allow
that
level
flexibility.
So
knowing
that
we
might
want
to
go
even
deeper.
C
And
no,
I
well.
I
want
us
to
have
something
simple
as
initial
release,
instead
of
trying
to
boil
the
ocean
so
whatever
like
the
minimum
thing
that
will
make
this
story,
work
example
I
now
you
mentioned
that
I
want
to
ask
the
question:
if
we
start,
if
we
start
by
only
allowing
one
one
metric
to
be
like
one
instrument
to
be
selected,
we're
basically
asking
people
you
want
to
give
me
the
exact
name.
C
Is
that
going
to
be
a
big
blocker,
at
least
from
my
side?
I
I
I'm
not
seeing
a
blogger.
I
think
that
should
work,
although
it
might
not
be
very
convenient,
but
I
believe,
like
convenience
can
be
added
later
so
we
like,
we
have
so
many
issues,
I'd
rather
us
to
scope
down
and
focusing
on
something
simple
at
this
at
the
starting
point,
and
if
we
support
something
like
wildcard
later,
it's
not
going
to
change
the
interface
a
lot.
F: Yeah, I think there's just a set of complications if we allow multiple instruments; for example, in strongly typed languages, are you allowing mixed measurement types, doubles and longs, and all sorts of fun like that?
F
That
I
think
avoiding
that,
at
least
in
this
initial
pr
makes
sense
to
me.
I
would
argue
that
we
should
prototype
with
this
limitation
and
re-evaluate
prior
to
marking
everything
stable
based
on
if
we
find
as
people
try
to
adopt
the
prototype
that
they
run
into
problems.
Does
that
sound
reasonable?
Just
just
for
the
sake
of
let's
agree
to
the
minimal?
F: Cool. Are we okay discussing the fourth question I have here? I have a fifth question as well, and I don't want to monopolize all the time... go ahead... okay. So the fourth question is: how should users define an aggregation? The question here is: do we want to specify a set of algorithms that users get, like, hey, we have a last-value algorithm...
F
We
have
a
max
algorithm,
a
min
algorithm,
a
sum
algorithm
that
have
known
metrics
that
come
out
the
other
end,
or
do
we
want
to
expose
in
the
sdk
an
api
for
people
to
write
a
thing
that
does
aggregation,
I'm
going
to
throw
a
caveat
out
there
that
if
you
write
an
aggregator,
you
need
to
be
really
good
with
high
performance
threading,
because
at
a
minimum,
when
you
take
in
synchronous
instrument
measurements,
you
have
to
do
so
very
efficiently
and
from
possibly
a
bajillion
threads.
F
So
oh
this!
This
is
that's.
This
is
question
number
four
riley!
Sorry,
that's!
Actually,
I
mentioned
it
before
to
like
give
you
things,
but
this
is
actually
what
question
number
four
is
is
like:
how
do
we
want
users
to
define
this?
Do
we
want
to
spec
out
a
set
of
well-known
things?
Do
we
want
to
have
like
an
interface
that
we
define
similar
to,
like
you
know,
trace
provider
or
the
metrics
processor?
What
what
do
you?
F
What
do
you
think
is
the
best
thing
to
do
for
for
v1
here
I'm
going
to
make
two
statements
after
I
ask
the
open
question.
One
is,
I
think
the
minimum
thing
to
do
is
provide
an
interface
and
have
it
open
to
sdks
what
they
want
to
do,
and
the
second
is,
I
think,
there's
a
set
of
default,
well-known
aggregators,
that
we
absolutely
have
to
provide,
and
it's
the
basic
histogram,
the
basics
of,
and
the
basic
gauge
right
like
we
need
those,
because
there
are
defaults.
C
Yeah,
so
so
so
answer
your
first
question,
the
defaults,
I
I
think
currently
we
haven't
covered,
but
I
would
expect
in
the
sdk
spec
will
provide
defaults
for
multiple
things
number
one.
The
isdk
has
to
come
with
the
default
exporter.
I
I
think
console
exporter
for
local,
like
troubleshooting
or
understanding
the
basics,
the
in-memory
exporter
for
people
who
write
unit
tests
or
doing
some
like
inner
loop
test
cycle
and
the
otlp
exporter
is
a
must-have.
The
premises
exporter
is
a
must-have.
C
These
are
the
the
four
things
I
think
that
by
default
the
sdk
should
provide.
Similarly,
when
you
look
at
tracing,
we
have
like
jager,
zip
king,
those
things
right.
So
this
is
number
one
thing
number
two.
I
I
think
the
default
aggregation.
We
should
support
the
the
thing
that,
in
the
data
models,
at
least
whatever
specified
in
the
data
model
in
otlp,
we
should
be
able
to
export
that
data.
C
And
we
don't
have
propagator
concepts
like
tracer,
so
there's
no
default
propagator.
So
these
are
the
the
default
I
think
and
for
the
people
who
use
the
sdk
to
specify
what
aggregation
they
need
instead
of
using
some
class.
My
thinking
is,
in
the
view
api.
They
probably
can
specify
something
as
a
flag
because
say
here
goes
the
like
instrument,
http
duration
and
instead
of
reporting
a
histogram.
I
won't,
I
won't
just
add
them
all
together
and
I
don't
care
like
what
that
mean.
C
I
just
want
to
report
that
as
a
sum,
so
they
have
the
freedom,
they
can
say
hey.
This
is
the
the
view
and
the
algorithm
or
or
you
call
that
aggregator
so
that
the
aggregator
I
want
to
use
is
sum,
but
instead
of
passing
a
class
or
like
a
type
instance
or
something,
I
I
think
a
flag
would
be
easier
in
this
way
flag.
C
You
can
compose
that,
for
example,
you
can
say
I
want
mean,
but
I
don't
want
max
or
I
want
me
max
and
average,
so
you
can
check
multiple
things
together
and,
and
we
probably
have
some
naming
rule
like
if
they
want
to
say.
I
want
me
max
and
I
want
the
sum
and
I
want
the
unique
count
we'll
probably
configure.
Oh
based
on
all
these
combinations.
We
cannot
report
this
as
a
single
metric.
C
We
need
to
report
two
different
metrics
like
I
want
histogram
and
me
max
so,
depending
on
the
situation
we
might
result
in
different
things
and
and
for
different
metrics.
We
probably
need
to
have
some
like
postfix
or
something
to
the
name.
So
this
is
something
I
haven't
I
haven't
been
able
to
think
through,
but
this
is
my
gut
feeling
and
regarding
whether
we
allow
people
to
write
their
custom
aggregator.
My
answer
is
definitely
yes.
We
should
whether
this
should
be
first
stable
release.
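
The flag-style composition C has in mind maps naturally onto a bit-flag enum; a sketch (the names are ours), with the open naming question noted in the comment:

```python
from enum import Flag, auto

class Aggregation(Flag):
    SUM = auto()
    MIN = auto()
    MAX = auto()
    AVERAGE = auto()
    HISTOGRAM = auto()
    UNIQUE_COUNT = auto()

# Flags compose: "I want min, max, and average" is one value, and the
# SDK can inspect it to decide how many output metrics (and which name
# postfixes) the combination requires, the part left unresolved above.
requested = Aggregation.MIN | Aggregation.MAX | Aggregation.AVERAGE
assert Aggregation.MAX in requested
```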
C
I
I
don't
think
so,
although
I
know
a
lot
of
folks
in
microsoft,
they
have
a
high
amount.
They
want
to
write
like
the
unique
account,
but
I
I
think
that
that
thing
like
how
do
we
expose
a
very
performant
and
extensible
like
part
in
the
sdk,
to
allow
people
to
write
their
custom
thing
or
they
can
even
take
multiple
existing
aggregator
and
alter
the
behavior.
C
This
is
something
much
harder
it's
doable,
but
for
now
at
least
I
I
think
we
should
start
by
writing
the
default
aggregator
and
once
we
have
a
good
learning
how
to
write
that
efficiently.
C
We'll
take
the
learning
to
help
us
to
write
a
a
better
interface
to
allow
the
others
to
extend
that.
So
my
gut
feeling
is
so
far
we're
not
yet
very
confident
on
writing
the
default
aggregator.
Then.
How
do
we
know
we
can
design
a
perfect
or
an
awesome,
aggregator
interface,
to
allow
the
others
to
extend
so
my
take
would
be,
for
example,
victor
can
work
on
the
c
chart
part
writing
the
custom
aggregator
and
after
we
got
two
three
four
aggregators
we
can
start
to
see
hey.
C
These
are
the
common
part
we
probably
can
abstract
out,
and
then
we
can
find
the
other
approvals
in
the
in
the
language.
Specific
state
to
say:
hey,
if
you
want
to
write
a
unique
count,
is
this
good
enough
and
we
will
take
the
learning
and
come
back
after
the
initial
spec
release
and
say
in
addition
to
the
existing
spec,
now
we're
we're
opening
this
door
to
allow
extensibility.
F
Yeah,
I
think
I
think,
that's
fair,
the
the
I
know
from
trying
to
implement
the
sdk
and
java
the
aggregators
are,
where
you
spend
all
your
time
so
and
I
did
some
major
refactoring
to
the
aggregator
to
be
amenable
to
measurement
processor,
and
I
think
it's
also
likely
that
measurement
processor
I
want
to.
F
We
have
an
aggregator
interface
and
we
have
a
measurement
processor
interface
that
constructs
pipelines
where
measurements
come
in
and
metrics
come
out
in
some
sort
of
storage
thing
and
that's
where
views
will
live
right
going
forward,
but
the
I
think
I
think
we
need
in
all
of
these
things.
This
has
been
really
helpful
to
have
a
discussion
if
we
can
get
this
into
that
pr
in
a
more
fleshed
out
formal
way,
I'd
love
to
try
to
implement
it
and
see
what
it
looks
like
yeah,
because
I
agree
with
you.
F
We
need
to
start
toying
with
these
things.
I
have
one
last
question,
but
I
we
only
have
10
minutes
left,
so
I'm
gonna
write
it
down
and
I
don't
think
we
have
to
discuss
it
here,
because
I
think
we
have
a
lot
of
fish
to
fry.
If
you
will-
and
this
isn't
the
number
one,
but
I
think
it
is
somewhat
important,
which
is
how
do
users
select
what
attributes
are
preserved?
This
is
something
I
saw
in
the
pr
that
I
think
was
a
little
unclear
to
me
and
effectively.
F
You
know
the
selection
criteria
can
include
labels
that
you
use
to
select
measurements
right
and
the
pr
seem
to
imply
that
those
labels
that
are
selected
would
somehow
use
the
hint
api
and
somehow
be
preserved
in
the
output
metric,
which
is
the
exact
inverse
of
what
I
expected
where,
when
I,
if
I
use
a
label
to
select
a
metric
right,
I'm
getting
the
the
there's
an
open
question
to
me
of
what
what
output
labels
I
should
have
and
where
I'm
say,
aggregating
away
labels
where
I'm
not,
and
the
interaction
of
the
selection
criteria
of
labels
and
the
output
metric.
F
I
think
just
needs
a
little
bit
more
either.
Clarity
or
don't
try
to
tie
the
two
together
one
of
the
two
right,
because
I'm
not
sure
I'm
not
sure
right
now.
It
fully
makes
sense
like
that.
I
think
when
you
define
the
aggregator
or
whatever
we
want
to
call
this,
you
could
say:
drop
this
label
and
here's
like
a
filter
for
what
labels
to
drop.
F
But
when
I
select
my
instrument
and
select
which
labels
I'm
looking
at,
that
is
more
of
a
what
comes
in
not
a
what
goes
out
portion
of
the
api,
and
when
I
decide
what
goes
out,
I
might
not
want
to
lose
my
you
know,
metric
streams
and
key
value
pairs
that
I'm
using
to
identify
components.
F: Anyway, I have one more question that's really dumb and not important; those were the five important ones. So I hope that helped, and I don't want to take up too much more time, because there are only five minutes left.
C
Yeah,
I
I
I
think
I
understand
your
questions.
Okay,
I
I
know
what
to
do
after
this
meeting
go
ahead.
If
you
have
another
question.
F
Okay,
the
last
one
again
is,
is
real
minimal,
but
it's
basically
exemplar
sampling,
and
this
is
this
is
my
maybe
maybe
my.
What
do
you
call
it
horse
or
something?
I
don't
know
my
own
pet
thing.
I
think
I
think
we
should,
because
we
support,
exemplars
and
because
prometheus,
I
believe,
supports
exemplars,
and
I
think
this
is
the
holy
grail
of
open
telemetry,
of
contextual
metrics.
Right,
where
you
have
context
in
place.
F: The default right now, as I have implemented it, is to never sample, because I didn't want to drop performance; but I also have a second version that is sample-with-trace. So if there's a sampled trace, you will automatically get sampled metric points associated with that trace, and I want to push on that a little bit. And I think views are where the proof of the pudding comes in: whatever we do in the view API, and whatever we allow here, will kind of implicitly drive the rest of the implementation of the entire metrics SDK.
F
So
that's
why
I
wanted
to
push
on
it
not
important
to
dive
into
just
calling
it
out.
Is
the
thing
I'd
like
us
to
look
at
before
we
call
the
sdk
done.
That's
all
it
doesn't
have
to
be
talked
about
today.
F: Yeah, so this is: when you get a measurement for a metric, right, you can look at that measurement, look at the context, look at the trace, look at the baggage, whatever, and decide whether or not you're going to record an exemplar for that measurement. Yeah, and that is separate from the aggregator. So when I report, say, a sum metric point, I can actually have specific examples of values that came in during a trace, exactly; or the same with a histogram, yeah.
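
F's sample-with-trace policy, sketched; `context.span` and its fields stand in for whatever the real context API exposes (names are assumptions, not spec'd):

```python
def maybe_record_exemplar(value, context, exemplars):
    # Record an exemplar only when the measurement happened inside a
    # sampled trace, so exemplar overhead follows the trace sampler's
    # decision and each exemplar links back to a span.
    span = context.span
    if span is not None and span.is_sampled:
        exemplars.append({"value": value,
                          "trace_id": span.trace_id,
                          "span_id": span.span_id})
```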
D: While we're on this, I'll talk about my hobby horse here: we can also do probabilistic sampling of exemplars, and I'm not sure there's a precedent for that in the world. But if we did it, it gives us a way to begin to understand high cardinality when we don't aggregate those dimensions. So you might be tossing away some labels, and if we probabilistically sample the exemplars, we can then reconstruct the missing labels, essentially, from probability, which is powerful, and it's something that I've always held as a belief that we should do.
F
And
I'm
more
focused
on
just
correlated
telemetry.
Yes,.
C: Okay, and I have one small thing related to item 4 here, so it's more for Josh, about storage. When I think about the custom aggregator for extreme performance, I'm considering something declarative instead of imperative. So, for example, in C# there's a concept of lambdas and expressions: you can describe something using a C# expression, basically putting the whole syntax tree there, and the application can compile the entire thing to bytecode once and execute it very fast.
C
Instead
of
hey,
we
have
some
interface
and
callback
function
every
every
time.
We
just
call
this
minimum
thing
and
and
calling
the
function
just
in
order
to
do
a
like
interlocked
exchange
or
like
a
sum
or
something
so
so
these
are
something
I
I
think
we
want
to
push
to
extreme.
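
A loose Python analogue of the declarative idea C describes: build the hot-path update function once from a declarative spec, rather than dispatching through an interface callback per measurement (the spec format and state layout here are invented for illustration):

```python
def compile_update(spec):
    # Translate the declarative spec into a fixed list of operations,
    # once, at configuration time.
    ops = []
    if "sum" in spec:
        ops.append(lambda state, v: state.update(sum=state["sum"] + v))
    if "max" in spec:
        ops.append(lambda state, v: state.update(max=max(state["max"], v)))

    def update(state, value):
        # The per-measurement path does no configuration lookups.
        for op in ops:
            op(state, value)
    return update

update = compile_update({"sum", "max"})
state = {"sum": 0.0, "max": float("-inf")}
update(state, 3.0)
update(state, 1.5)
assert state == {"sum": 4.5, "max": 3.0}
```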
F: There's also the high-performance aggregator that gets allocated; that's more important, and that's the thing you have to be kind of an expert at threading to write. So, yeah, I'd agree: if we can be more declarative and come up with a good set, that'd be ideal. I also don't think we should force ourselves to do that in v1, necessarily, because I think it'll take a lot of time to be cross-language friendly with such a thing.
C
Yeah,
okay,
it
seems
we're
on
time
thanks
aura.