From YouTube: 2021-07-09 meeting
Description
No description was provided for this meeting.
A
Yeah, so I want to go through that, and I just need to find a meeting where we have all the stakeholders, or the people who have a strong opinion about this, and make sure we're aligned. I think either approach would work. I'm a little bit scared about taking the more complex approach, but if we can align on that, I'm happy to change the PR. I just want to make sure that if it's complex, the people on the other side who want simple things won't get pissed off.
B
Yeah, I was chatting with Josh Suereth this morning in the Java SIG meeting, and, I won't speak for him, but after hearing him talk I feel that multiple schedules for reporting metrics and aggregate metrics is going to be super complicated, and for the first release I think I would prefer to keep it simple anyway.
A
Yeah, I actually thought about this just now. See, I have to draw this diagram to show Josh. This is what I'm thinking: you have multiple exporters, they're on different schedules, and some pull exporter might jump in randomly; then some are cumulative and you have to convert delta to cumulative, all the crazy stuff. Yeah, exactly. So you have to build something like a schedule where you mark what you're going to do; it's just a very interesting topic. So I'm having another meeting now.
C
I just want the most basic mechanism, which I think we should do for the first release: basically just keep the in-memory state per each instance and call it good, but, you know... yeah.
A
That's doable. We can probably talk about that if Suereth joins; otherwise I probably want to focus more on these topics first. So first I want to get perspective on the view. I think, Josh, you replied earlier today about your thinking on the view, and that's the initial proposal from the PR that I decided to step back from to take a simpler proposal.
A
For the same meter provider: we have a histogram, and we're saying we want to send this histogram through a push exporter to OTLP, and we also want to have the scrape support for Prometheus. I think the answer is yes, definitely yes. There have been a lot of issues, and you can see the comments on the GitHub issue; it's a big demand. And also it would be weird if we just had one exporter for metrics while we support multiple exporters for logs and traces. I just want to make sure nobody...
D
That's a good catch. I'm not saying that having multiple exporters is a typical scenario in production, but sometimes you might need it.
A
Yeah, I agree with you. I think in production most folks would only export to one place, but I cannot eliminate the requirement that people might want to have multiple exporters. And also, it seems very strange if metrics only allows one exporter while for the other signals we allow multiple; it would be confusing.
A
Then, based on that, my next question is: do we want views to be consistent across exporters? My example would be: you have a temperature, which is an asynchronous gauge, and somehow you decided you have a different view on this; you want to report it as a histogram, or whatever creative thing you have. Do you have a scenario where you want to, say...
A
So my answer for this is no. Each individual exporter, or the pipeline, should be able to have different views. Whether it will be a set of views that each pipeline can pick from ("I want view A, but I don't want view B"), or views tied to the pipeline, I think I'm fine with either way; but my assertion is that you cannot have a view that will be applied to all the pipelines or the exporters. You can also...
A
My answer is no, because I don't think a view should apply to all the exporters. Each exporter should be able to pick its views. It means I can have different views: say, for Prometheus I'm looking at this counter as a histogram, and for that exporter I don't care about this instrument at all; I'm taking another view on two other instruments.
D
How do you configure the view? Where, if you create a view, do you pass it? Do you pass it to the meter provider, or do you pass it to the...
A
I have a different view in this way: the view is not global to the meter provider, it's specific to the pipeline, and this is one potential approach. Another approach is that you just specify all the views on the meter provider, and in the pipeline you have to invent another mechanism to pick which views you want. I think that's more complicated, so I'm trying to avoid that, but...
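A rough sketch of the two configuration shapes being contrasted here. All class names, fields, and the example values are hypothetical illustrations, not the actual OpenTelemetry API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical shapes for the two approaches discussed; none of these
# names come from the real OpenTelemetry SDK.

@dataclass
class View:
    # Which instruments this view matches (here, just by name).
    selector: Callable[[str], bool]
    # How matched instruments are aggregated ("sum", "histogram", ...).
    aggregation: str

@dataclass
class Pipeline:
    exporter: str            # stand-in for a real exporter object
    views: List[View] = field(default_factory=list)

# Approach 1 (the one the speaker prefers): views are attached directly
# to the pipeline that uses them.
prometheus = Pipeline(
    exporter="prometheus",
    views=[View(selector=lambda name: name == "temperature",
                aggregation="histogram")],
)

# Approach 2 would instead register all views globally on the meter
# provider and invent a separate mechanism for each pipeline to pick
# which ones apply, which is the extra complexity being avoided.
```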
A
This is view two, and the good thing about this is that it's very straightforward: you can specify different views per pipeline, and it gives you flexibility. The downside is that if you want multiple pipelines to share the same view, you have to repeat the logic: if you want to look at this instrument as something like a histogram, you have to add that individually on two pipelines, which might seem like duplicate effort to some developers.
E
I agree, yeah. I agree with you that this is convenient, because you don't need to invent a pipeline-selection mechanism so that a view is applied to certain pipelines, right. But we have the same problem with the instruments, right? I mean, in this very same document we were talking about an instrument-selection mechanism that will associate views with instruments.
E
So, I understand your point, and I understand that this is convenient; it's just that I feel we are not being consistent, that we are taking two separate approaches to... yeah.
B
I just want to bring up my question that came up at least a week or so ago, which is: how are you selecting instruments, when instruments can be created at runtime, arbitrarily, by the user? So when you say "select which instruments and views go to which pipeline", how is that...?
A
Listening: you won't know how many instruments you have. You will keep receiving random instruments from other libraries as they're being loaded. What you have is the rule; you know what you're going to pick and what you're not going to take. It might be a callback or something, and every time you see an instrument you go through the rule, which might be a single rule or a list of rules; and if there's no rule, we should have a default rule, and based on that it would be fine.
F
Riley, can I ask about your former point, about the question about views and pipelines? The way I hear it, it's really just a question about how much configuration complexity we want: can you configure individual views for individual exporters and individual pipelines all at once, or something like that? But then you brought up a kind of optimization question, like, well...
F
"I really want to do the computation of that aggregate once, and then have different exporters still treat them differently," maybe, or something like that. And it just starts to sound like what we're really talking about is a very complicated configuration, and almost a compiler, or a setup wizard, that takes your very complicated configuration and constructs you a very optimized pipeline; we're almost talking about query optimizers for these pipelines and views at this point. And I'm worried that we're optimizing for something that almost nobody really needs.
A
The cases I imagine: people in production will just have one pipeline, and it's very straightforward to them. They're saying, "I want to export this to the Prometheus exporter, and I want to take everything," or "I only care about three instruments," or "I want to add a view that changes something to a different instrument."
E
Also, Riley, if we're going to follow this approach of making the pipelines able to be attached to a certain view, so they can be configured, shouldn't we do the same for instruments?
C
I think that's just for simplicity, right? Because we're already having to implement selection for a given view; we already have to dynamically run a bunch of rules to say whether or not this instrument matches. So, to me at least, it seems like adding an extra rule that just says: only apply this view to this exporter.
B
When I look at Riley's example here, in Python or whatever mythical language it might be: to me a pipeline has one exporter, so whatever you add to the pipeline, that exporter is the thing that picks it up. And a view, I mean, this example doesn't show it, but I think it should; it should show what the aggregator is. Your example here, Riley, probably on lines 162 and 163, and on 61, should be specifying an aggregator for these views that you've done.
B
So in this case, I guess, Victor, what I see is: a pipeline has a single exporter, and a view has some sort of selection criteria and an aggregation, right? So there's no need to, say, add exporters to views or anything like that, because that would actually complicate the configuration of this significantly.
A
So we're getting to the conclusion here. Just to recap: I think each pipeline is very specific about what you want, how you want to convert that data, and where you want to send the data; the last piece is handled by the exporter, and the exporter is kind of giving a hint. It is giving some clue about aggregation, not all of the aggregation. For example, we know we're exporting to Prometheus, and we know that Prometheus only supports cumulative counts.
A
Then I think it wouldn't make sense for us to force the user to specify that information. What we should do by default is convert the sum to a cumulative sum, instead of telling the user, "hey, you screwed it up, you reported delta and Prometheus doesn't support that." So this clue can be given back to the aggregation here, and this is, when I read the thing here, "sum": that means when we export to Prometheus, the SDK should be smart enough to configure this by reporting the cumulative one.
A
But if you change this from Prometheus to, say, StatsD, then nobody knows that StatsD supports delta and this should be a delta sum. If we export to OTLP, then probably we can tell people: if you don't specify whether it's a delta sum or a cumulative sum, we'll just pick one, whether it's the default one or the native one from the instrument; it's our choice. But here the user kind of made the decision, because, see, this is delta.
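A minimal sketch of the "exporter gives a hint" idea just described: the SDK asks the exporter for its preferred temporality and converts by default instead of failing. The exporter names refer to real backends, but the interface and method names are invented for illustration:

```python
# Hypothetical sketch of an exporter temporality hint; the method name
# and defaults are illustrative, not the real OpenTelemetry API.

class OtlpExporter:
    def preferred_temporality(self):
        # OTLP accepts both; the SDK just picks one as a default.
        return "delta"

class PrometheusExporter:
    def preferred_temporality(self):
        return "cumulative"   # Prometheus only supports cumulative

class StatsdExporter:
    def preferred_temporality(self):
        return "delta"        # StatsD naturally takes deltas

def resolve_temporality(exporter, user_choice=None):
    # If the user did not specify, the SDK is "smart enough" to follow
    # the exporter's hint instead of rejecting the data.
    return user_choice or exporter.preferred_temporality()
```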
B
So your exporter could say... your Prometheus exporter could provide, via some sort of callback method or something, I don't know, it depends on your language how you want to do it, it could just say all counter instruments should be cumulative. So that, basically, you would have, like, a view whose selection criteria is "counter", and then the aggregation would be a cumulative sum, or whatever we want to call it.
A
I try not to do that; I try to use a single view. So basically you have a set of rules, and whenever the first rule applies, I want us to stop there, instead of trying to go through all the rules and see how we can combine them. So my worry is: you have a view saying, for all the counters, please report a delta sum; and then you have another view saying, for anything called foobar...
A
...I want to only take three attributes and ignore all the other stuff; and then you have another rule saying, for anything that is a double instrument, across all the types, as long as the type is double, I want to treat them as a histogram. Then how do you resolve all these rules? It's becoming a PhD problem.
A
What I want is: you go through the list of the rules; if the first one doesn't match, you go to the next one; and if none of the rules match, then you simply drop the data, you don't report it. And the user can write a wildcard rule at the end saying: if nothing matches, I'll catch this and do some default option. Yeah, so that's what I want the exporter to provide; the exporter should be providing exactly what you just said there.
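The first-match-wins semantics being described could look something like this. This is only a sketch; the rule shape and the action strings are made up:

```python
# Sketch of first-match-wins view resolution with an optional wildcard
# rule at the end. Rules are (predicate, action) pairs; all names and
# actions here are invented for illustration.

def resolve_view(instrument_name, rules):
    """Return the action of the first matching rule, or None to drop."""
    for predicate, action in rules:
        if predicate(instrument_name):
            return action          # stop at the first match
    return None                    # no rule matched: drop the data

rules = [
    (lambda n: n == "riley", "counter->histogram"),
    (lambda n: n.startswith("http."), "take-three-attributes"),
    (lambda n: True, "default"),   # user-supplied wildcard catch-all
]
```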
E
So far we have been mentioning a rule-solving mechanism that we will be using, right? But shouldn't we first try to define it, to see if it's even possible to have this rule-solving mechanism?
A
That's the mechanism we're talking about. So if that's the question, I can quickly go through the current draft; I think we're trying to cover that. So the first part is what you want: you basically define the matching, or the filtering.
A
Yeah, and the example here is: Java has strong types, so they're saying, we want to treat double and int as two different things; and we want to have a rule saying, I want all the doubles to be converted to integers by doing a cast, or I want all the integers to be cast to a double. We can do that, and we give that flexibility. And once you select those, imagine there is a list of rules, so each pipeline can have multiple...
A
These are the things you can change, and after that you send to the exporter. And when this configuration is provided to the meter provider, the SDK should have the holistic view; it should understand everything here, and it needs to combine this. So when there's a new instrument coming in, saying "hey, this is foo", we'll go through view number one and see if there's a match. If there's a match, then we'll do the conversion, we'll send that to the exporter, and we're going to stop processing the remaining views.
A
If there's no match, we'll go to the next view and see if there's a match, and we'll continue this until we reach the end; and if there's no match, we'll just drop the data. That means we don't care about this instrument in this pipeline. And if the users are saying, "I want to catch everything," they should have a wildcard view.
B
That will still work with my idea of having every exporter provide wildcard views, because the pipeline order here is explicit: you're adding a bunch of views, and you're assuming an order there, right? And then the exporter is last, and it adds its views, which are again ordered last, after everything else. So I think that still works fine. So, to jump...
A
The only question I have, and it's not a big concern, I think we can solve it, but the question I have for you is: if we hit the first view saying, for anything that is a counter, we'll just report that as a histogram, and in the exporter we kind of hinted that all the histograms should be reported as delta values. Do you think in this way we should continue? Like, we should go through the view-one rule and ignore the rest, and then we always respect the exporter view? So I think...
B
Oh yeah, right, so those are two different things. So, I guess, can you specify information about a histogram aggregation that would be, like, the delta, right? So the way that we designed this long ago, when we did a bunch of prototyping, at least the way I did it in Java, was that when you defined your view, you defined a selection criteria, kind of what you wrote: a selection criteria, what you want to match, and then how you want to handle that data.
B
So if I want to say: I want to aggregate this histogram instrument as a delta histogram...
B
If the user is saying, "I want to aggregate this as a delta histogram," and their backend only supports cumulative histograms or something, I don't know, I'm making stuff up, then they've just configured things wrong, and they're going to get data that doesn't work, right? That's how I see it. Does that make sense?
F
Most exporters I know of are push-only or pull-only; it's really only OTLP right now that supports both options. And in the Go prototype I did support letting the exporter trickle that information back to the processor, to say what it wants, because it determines whether you need state or not in the processor. It worked; I'm not sure if it was overkill or not.
F
It means that you can have an option to support both cumulative and delta in the same pipeline. Because, as far as I know, you're always going to have one of them passed straight through, and one of them will require state and a conversion. So to do delta and cumulative requires memory one way or another; and if you don't need both, then you might just be passing through the natural type, or you might have to do a conversion. But it sort of depends on what the exporter has asked for.
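The point about conversion requiring memory can be illustrated with a toy converter. This is just an illustration, not the prototype being discussed; the names are invented:

```python
# Toy illustration: converting delta inputs to cumulative output needs
# per-series running state (the "memory"), while passing deltas through
# unchanged needs none.

class DeltaToCumulative:
    def __init__(self):
        self._totals = {}          # per-time-series running totals

    def push(self, series, delta):
        # Accumulate the delta into the stored total for this series.
        self._totals[series] = self._totals.get(series, 0) + delta
        return self._totals[series]

def passthrough(series, delta):
    # If the exporter asked for the natural (delta) type, no state needed.
    return delta
```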
A
So, based on this, I feel like we can go through this: we have multiple views, they're in order, and any instrument will try to go through the rules here. If there's any match, we'll just take the view, do all the transformations here, and export the data. If there's a mismatch, then we'll continue until there is a match; and if we reach the end and there is no match, we'll just ignore this instrument for this entire pipeline. Sounds...
E
Sorry, would you repeat? You mentioned that if there's no match after all the views... then why do we need a wildcard view?
A
If there's no match, by default we'll ignore the instrument. For example, you define a view saying: anything with the instrument name "riley", I'll take it. But then you have an instrument called "josh"; that means in this pipeline you don't care about "josh", and you simply won't export it.
E
Yes, sorry; maybe my question is, what does the wildcard view do? I don't know.
A
In this case, a lot of people would say it's inconvenient, because: "I have a lot of instruments in my application, and I might have something dynamically loaded. I want to capture all of them, but I have a special instrument called 'riley' and I don't like it; I want to change it from a counter to a histogram." But by doing this with either view, immediately I'm not getting all the other things, because everything else got dropped.
E
All right. I just have the feeling that we're trying to do too many things at once; like we're trying to support allow and block at the same time, with the same mechanism. Now we're...
A
Yeah, we're taking 40 minutes on this, but it's good. I think with this I'm going to change the PR, and hopefully we can get unstuck here. So, moving to the next...
A
Yeah, I'll paste it somewhere, like a GitHub comment or something: "this is what we discussed." Okay, so the next one, from Victor. I created a separate issue to capture the aggregator and the exemplar part. So Victor had this PR. The first ask is, and I know there are already comments from John and Josh, please review this. I think there are some outstanding questions. So, Victor, do you want to go through that and see what's the blocker you see?
C
Yeah, so reading through the comments, I think there's some... well, I guess I didn't explain it correctly, I don't know. And the question for the group is: do I need to separate this particular piece out, or put it as part of the aggregator? What I'm referring to is the concept of having a measurement and, based on the view configuration, potentially, you know, expanding it out to more than one particular metric. In the example here...
C
...given the attributes, there's some combination of attributes that we have to aggregate per combination of attributes. So that logic of taking a measurement and mapping it out to different metrics, or in this case aggregators: where does that belong, right? So today I've defined the aggregator as not considering those combinations, and I'm leaving the logic of figuring out which those combinations are as part of the SDK, and I think that's confusing...
C
...people, per se, right. So the question there is: generally, when we're talking about an aggregator, my current definition is that an aggregator maps more or less to one metric time series, which is to say, all of the key-value-pair combinations...
C
...you know, the dropping of keys and whatever has already been resolved for you, and at the very end result you're just getting a value that you could then aggregate, by histogram, sum, whatever, right. So then that leaves the question: who is responsible for taking a measurement, examining its key-value pairs, and breaking it out into one or many aggregators? Right now I have that as part of the SDK function.
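The split being described, where label-set handling is resolved before the aggregator and each aggregator maps to exactly one time series, might be sketched like this. All names are hypothetical, not Victor's actual PR:

```python
# Sketch of the split being discussed: a measurement processor owns the
# key/value logic (dropping keys, fan-out to time series), and each
# aggregator corresponds to exactly one metric time series and never
# sees labels. All class names are invented for illustration.

class SumAggregator:
    """One aggregator == one time series; it only sums values."""
    def __init__(self):
        self.value = 0
    def update(self, measurement_value):
        self.value += measurement_value

class MeasurementProcessor:
    def __init__(self, keep_keys):
        self.keep_keys = keep_keys
        self.aggregators = {}      # label-set tuple -> aggregator

    def record(self, value, attributes):
        # Drop keys per the view, then route to the right aggregator.
        key = tuple(sorted((k, v) for k, v in attributes.items()
                           if k in self.keep_keys))
        self.aggregators.setdefault(key, SumAggregator()).update(value)

proc = MeasurementProcessor(keep_keys={"route"})
proc.record(1, {"route": "/a", "user": "x"})
proc.record(2, {"route": "/a", "user": "y"})  # same series once "user" is dropped
proc.record(5, {"route": "/b"})
```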
A
That's how it's done. And if I imagine there's a need for us to expose an aggregator type, one reason could be that we want the customer to be able to take it and derive their own type; they want to say, "I have a unique count," or "I have some different histogram algorithm I want to introduce," like DDSketch; they can do that. But my question is: is that the purpose, Victor? And number two: is that something we have to do in the first release?
C
Yeah, so for me, that question of how you specify, you know, an aggregation... let's take the sum: how do you specify whether it's a cumulative sum or a delta sum, or a monotonic cumulative sum or a monotonic delta sum; and whether you specify that as, you know, a number of enums, or you prefer just saying "new" of a type with parameters; that to me is relatively equal. So that's really just a language preference.
F
Right, right; can I respond? I think you've got it, Victor. There is an object, which is some sort of type that corresponds to each distinct key combination, and somewhere in your SDK...
F
...you have to do that work, and in my prototype I called this an accumulator. And it really has to work hard on those synchronous instruments. So you find yourself only dealing with delta sums, because all of your inputs are deltas; and then this collect function that's part of the PR in front of us is the one that takes those deltas and atomically resets them to zero. And then it's a future stage in the pipeline that says: "oh, I have a delta, and my output's supposed to be cumulative..."
F
"...therefore I must store some memory and output cumulative from my input." But I'm hoping that we don't have to, like, spell out every detail of that nature, and we don't have to call it an accumulator; I just don't think of temporality as part of an aggregator. It's how you manage the state. And so the thing that maybe wasn't present in your description there is that not only does the aggregator take all those new instruments and output, you know, the function at the end; most of them, the way we work with them...
C
So, question, Josh: you talked about your accumulator and, later on in the process, potentially dropping labels, per se. So that's definitely one implementation. What happens, at least in my prototype, is that all of the label sets are pre-expanded out, and thus the aggregator becomes simpler, in the sense that it's not expected to be, quote, merged; and thus my aggregator, intentionally, does not keep the, you know, label set, or the key-value pairs specific to it, because it's already been predetermined.
F
...enter that piece of state, yes. The reason why I didn't take that path is that it seems to require the work of filtering labels to be done on every synchronous operation, whereas in the way I described it, the work of filtering labels is done once per collection. And I'm not sure there's a difference; it's just a difference in organization, right. I think the results should...
C
...be the same, right. So the viewpoint I'm taking, and tell me if this is correct, is I'm mapping an aggregator to be equal to one event time series, and thus all expansion of label sets, dropping, adding, has already been resolved.
C
And if we talk about multiple exporters, the collation of, you know, expanding per exporter, all of those, quote, expansions of a measurement to one or many event time series: does that occur in a measurement processor, or is that in an aggregator? And I think we already... I talked myself into saying that it's not in the aggregator, so it's in the measurement processor, right. So do I define the aggregator, for simplicity, as currently a measurement processor? So I'm suggesting...
C
Right, so the SDK should be the one that is generally curating the pipeline, which means the SDK should be instantiating, or moving the measurement through, the measurement processor, and at the very last end moving it to an aggregator, which then converts to metrics. So that's kind of the viewpoint I currently have.
F
At the high level that sounds right to me; I don't have any disagreements. I called it an accumulator, which is the SDK portion, and then when you're done with that pipeline and call collect, it hands it to a processor, which then can, you know, do its cumulative state management and stuff like that.
A
Okay, so, time control here. May I ask folks to spend some time with Victor's PR, just to help make some progress there. And also, there is one issue Josh Suereth created about the exemplar and how to associate it with the histogram stuff.
F
Thank you; just quick notes. Last time I had action items, from Tuesday morning: first of all, to summarize the discussion that we had had in the thread of issue 1776, which I did in this particular link.
F
It's not going to change anyone's mind, or shed much light for the people in this room, to read that particular summary. The prior posting that I made was essentially describing a prototype that I think it'd be nice to see. Essentially, I tried to implement the simplest implementation that I could.
F
So if you scroll up one comment from there... Riley is the one I'm sort of... this may be too much to ask, but maybe not. I think what I'm essentially asking is: if I have a number and I know my scale, how do I calculate the bucket index? Logically speaking, you can just call log2 on your machine, but it's unclear whether you have enough precision to do that. And I think, like some of the other histogram algorithms we've seen, there's, logically speaking, a recursive way to compute the index, and I feel like it would be nice for us to spell that out before we go recommending this protocol. Beyond sort of the intellectual curiosity of what I just said...
F
...there is, like: if you're trying to use an extremely high scale and you just do the naive thing, you end up completely off; your bucket indexes will not be correct, because there is not enough precision in calling a log2 function. So that was sort of the prototyping I wanted to see, and I asked some of the math people to help with that. Hopefully we'll see. But back to the 1776 thread: I also called one more person into that discussion.
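The naive bucket-index calculation F is questioning could be sketched like this. This is only an illustration of the base-2 mapping under discussion; the exact boundary convention and scale limits were still being debated at the time, and, as F notes, this floating-point form loses precision at high scales:

```python
import math

# Naive base-2 exponential-histogram bucket index. With base equal to
# 2**(2**-scale), this returns the index i such that, ignoring rounding
# at bucket boundaries, base**i <= value < base**(i + 1). As discussed,
# at very high scales a plain floating-point log2 call may not carry
# enough precision to place values near boundaries correctly.

def bucket_index(value, scale):
    return math.floor(math.log2(value) * (2 ** scale))
```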
F
That is Björn Rabenstein; he's a Prometheus developer, and he just posted this as the meeting was starting, an hour ago. So I've skimmed it, but I haven't really studied it in much depth. But he does link to, and answer, the question I posed him, which is: you know, at this point we kind of have agreement on this base-2 approach, and he's adapted the prototype for Prometheus to use it. He's also got a protocol proposal, and it would be very nice if we could just agree to use it at this point.
F
So, if you want to click into these two bottom links: it's lines 63 to 73, and then 82 to 89, of this branch, and this would be the protocol. And I need to do a little bit more investigation and answer some of these questions, like: what's the smallest representable number, or how should we think about choosing that? Like, he, for example, put a limit on it, between negative four and positive eight...
F
...what is the smallest and largest number that you can represent in those various settings; and potentially have pseudocode for calculating it, in a sort of simple, naive implementation, or something like that. So I'm hoping by next Tuesday to have a little bit more discussion following Björn's post there. And then on Tuesday, remember, we asked Georg from Dynatrace to give a presentation about their prototype.
G
Okay, I was just talking to Yuki about that, your comment there earlier this afternoon. He's interested in putting together a POC and submitting that as well.
F
I know that if you go into Wikipedia and start browsing, you can find pseudocode for computing log2 in an iterative fashion, which is roughly speaking kind of what's going on. What we need is a very exact calculation of log2 for a number that's much larger than 64 bits; it's the input number times 4 billion, in this case. We need...
F
We need to compute that log2. Well, log2 of the input number times 2 to the 4 billion; it's an extraordinarily large number, and anyway, we don't have that capability. So it'd be great to ask for his help. Thank you.