From YouTube: 2021-09-07 meeting
A
One topic, given, like last time: we had the meeting last Thursday, and there has been just one working day for the U.S. since then, on the Friday. So not much update here. If you have any other topics, please append them to the agenda. So I've updated the MetricReader PR based on our discussion. Currently it's in a little bit of a limbo state. I started by making it concrete; then we got feedback from the Java SIG while trying to align.
A
That part is hard, because Java has some difficulty implementing the OnCollect callback. So we discussed that last Thursday and agreed that we're going to remove it. If you want to take a look at what the prototype will look like, Josh Suereth has a PR, but we already agreed to try to remove that part from the spec. So I've removed it from the metric reader, and I've also tried to give flexibility, saying the pull exporter can be modeled to align with the push exporter.
A
If a language SIG wants to do that, they can; but if they think it's too hard, they can choose a different approach. That's something I'm struggling with a little bit, because ultimately the spec is saying: here is a recommendation, but you don't have to follow it — if you want to do something else, you have the freedom. That's where we are today. So please help me review the PR and leave your comments. Thanks.
B
Next topic — I think it is related to this metric reader topic. Last time we were talking about the cumulative-to-delta conversion, right? So did I understand correctly that the metric exporter — or sorry, the metric reader interface that we have now — will not support this operation? So it just exports deltas if it gets deltas, and it won't be able to convert anything?
A
So I promised: once this reader PR gets merged, I'll start a separate PR to clarify the delta/cumulative question. My current thinking is that the reader and the exporter will not do any conversion; that is part of the SDK and its internal details. So when the SDK calls the reader — whether the reader retrieves the data from the SDK or the reader gets a callback — the data is already prepared for the reader, which means the exporter needs to give some hint to make that easier.
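The hint described here — the exporter telling the SDK what temporality it can accept, so data arrives already prepared — could look roughly like this sketch. All class and function names are hypothetical, not spec names:

```python
from enum import Enum

class Temporality(Enum):
    DELTA = "delta"
    CUMULATIVE = "cumulative"

class Exporter:
    """Hypothetical base class: an exporter declares what it can accept."""
    supported = frozenset(Temporality)          # default: accepts everything

class PrometheusStyleExporter(Exporter):
    # Prometheus-style pull backends only accept cumulative points.
    supported = frozenset({Temporality.CUMULATIVE})

def prepare_for(exporter, streams):
    """SDK-side sketch: resolve the temporality each stream is produced in.

    `streams` maps stream name -> requested Temporality (None when the user
    expressed no preference). The SDK resolves each request against the
    exporter's hint, so the exporter never sees data it cannot handle.
    """
    resolved = {}
    for name, requested in streams.items():
        if requested is None or requested not in exporter.supported:
            # fall back to something the exporter supports
            resolved[name] = next(iter(exporter.supported))
        else:
            resolved[name] = requested
    return resolved
```

With this shape, a user who configures a Prometheus-style exporter never has to say "cumulative" themselves — the resolution happens in the SDK.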
A
One example we talked about is: hey, the user configured the Prometheus exporter. It really wouldn't make sense for them to say "I have three views, and I want the views to be delta and exported to Prometheus, and let me do some crazy stuff." So we believe that if people specify "here goes my Prometheus setup, and here goes my view," they don't have to specify delta versus cumulative. We should be smart enough for this entire thing to say:
A
"I only support cumulative; please don't give me any delta." — Okay, sounds good. Yeah, and that also brings up the question: if the user has multiple exporters and some of them only support a certain type, what do we do? If they have multiple views, and those views can have both delta and cumulative, do we do some conversion so we don't drop the data for Prometheus? Or are we saying: hey, if you export OTLP we will export everything, but for the Prometheus exporter we're only going to take three things out of five and drop all the delta streams? I think Josh MacDonald has a suggestion that, instead of dropping the data, we by default give the user a reasonable conversion. For example, we say: hey, we really don't support delta, so how about we export that as an asynchronous gauge?
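For a single stream, the conversions under discussion are mechanically simple; the contention is about which component owns the state between collections. A minimal, illustrative sketch of both directions:

```python
def delta_to_cumulative(deltas):
    """Delta -> cumulative: a running sum over one stream's points."""
    total, out = 0, []
    for d in deltas:
        total += d
        out.append(total)
    return out

def cumulative_to_delta(cumulatives):
    """Cumulative -> delta: successive differences.

    Requires remembering the previous point between collections — the
    stateful part the reader/exporter would rather not own."""
    prev, out = 0, []
    for c in cumulatives:
        out.append(c - prev)
        prev = c
    return out
```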
A
Okay, moving on to the next topic — Jack?
D
Yeah, hi. So I was reviewing a PR for the Java SDK related to the implementation of the metrics SDK, and noticed that there was no configurability for using the summary aggregation type. I went back to the SDK spec — the metrics SDK spec — to see if that was specified, and it wasn't. I was trying to figure out why, and I dug through PRs and issues trying to find any specific context on why the summary aggregation was omitted from the SDK, and yeah, just wanted to see.
E
I could try to take that one. The summary aggregation, the way we understand it, is not a mergeable one, and we think that the industry and the users would like to move towards merging histograms instead of summaries, given that, with enough resolution, you can reproduce a summary from aggregated histograms.
E
The reason I've hesitated to try and push that into an aggregator: first of all, I'm not sure many people want it; second of all, the algorithms that are out there are not easy to specify. The Prometheus summary-like algorithm is not something you can specify — it's got some constants in it, and I don't even know what the algorithm there is.
One of the things that we've done in the past, to talk about other alternatives, is just to talk about min-max-sum-count, which is a summary with zero percentiles in it — and we can represent that using a histogram with zero buckets. So I'd like to propose that we use histograms for summaries, and that most people don't actually want summaries. And I may be wrong — if you'd like to comment, well—
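The min-max-sum-count idea — a summary with zero percentiles, representable as a zero-bucket histogram — has exactly the property that quantiles lack: it merges exactly across processes and time windows. A small sketch:

```python
def min_max_sum_count(values):
    """The 'summary with zero percentiles' aggregation: the statistics a
    zero-bucket histogram can carry (sum and count today; min and max
    pending the proto change discussed below)."""
    return {
        "min": min(values),
        "max": max(values),
        "sum": sum(values),
        "count": len(values),
    }

def merge(a, b):
    """Unlike quantiles, these four statistics merge losslessly."""
    return {
        "min": min(a["min"], b["min"]),
        "max": max(a["max"], b["max"]),
        "sum": a["sum"] + b["sum"],
        "count": a["count"] + b["count"],
    }
```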
D
So I work for New Relic, and summary is one of our main metric types, I guess, you know. So what were you saying about using a histogram with zero buckets? How would that represent—
E
That represents a min-max-sum-count aggregation. Well, we haven't put the min and max in yet, but we've—
E
—discussed it quite a bit. The summary is in the protocol, and my point was that the aggregation to compute a summary is difficult to specify. In the Prometheus world, it uses a smoothed five-to-ten-minute moving average of the percentile calculation — but it is five to ten minutes, and it only works for cumulative.
E
For that reason, I suspect that what you want is a delta summary, and that would be quite different from what Prometheus has. So there's a lot of trouble trying to specify this, and I would prefer that people move to using histograms with high resolution, from which we can then compute summaries in the back end. I don't believe it's worth our trouble to specify a summary aggregator.
D
Okay, so a couple of thoughts there. You know, I am interested in the min-max-sum-count version of a summary — not interested in the way the protobufs allow you to have different percentiles in there. So if the histogram protobuf were modified to include min and max, then that would be a thumbs-up for me. So that's one thought, and then I guess the other thought was:
D
The summary, even with the weaknesses you just stated, is talked about in the metrics data model as one of the data point types. So I guess I'm trying to understand the difference between that and the SDK specification. My presumption would be that the SDK spec is based off of the data model — that it's trying to be an embodiment of the data model — and so the divergence between those two is strange to me.
E
You're right. We put the summary type into the protocol under heavy pressure from the Prometheus group, and from the desire to have compatibility: if we are scraping a Prometheus client library and it presents us with summaries, that was the least lossy way to convey that data. But we have, from the beginning, stated that we would not put summary in as a required aggregator supported in the SDK.
E
So that explains why we have this type that we don't want to recommend the SDK output. As for the min and max fields of a histogram: there's an issue open about it — or maybe I closed it — but it was a really difficult issue, because when you look at push versus pull, the behavior of min and max is quite different. I made a proposal, which was to do what Prometheus does, but it leaves things very loose.
E
So I'd love your help with the min and max question. I think that's the one we can address, because min and max are well defined and they're mergeable. It's all the quantiles between min and max that are not mergeable, and that I'm not sure people truly want. And if we do get our high-resolution histogram off the ground — which I think we can and will — then you should be able to compute the same type of summary from the high-resolution histogram.
E
Without the ambiguity that you might get from asking for a summary with five to ten minutes of information. And if we want exact information about min and max, we have to push deltas, and that is an area where the metrics industry — especially the Prometheus community — is just not ready. So that's why, instead, we're going to talk about: you know, if you ask for a max, you might get five to ten minutes of max. That's where we are. It's tricky.
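Computing a summary from a high-resolution histogram in the back end typically means interpolating within the bucket that contains the target rank — the higher the resolution, the tighter the estimate. A rough sketch (hypothetical helper, not an OpenTelemetry API):

```python
def estimate_quantile(bucket_counts, bucket_bounds, q):
    """Rough quantile estimate from histogram buckets.

    `bucket_bounds` are the upper bounds of each bucket; within the
    target bucket we interpolate linearly, which is the usual back-end
    approximation.
    """
    total = sum(bucket_counts)
    target = q * total
    seen = 0.0
    lower = 0.0
    for count, upper in zip(bucket_counts, bucket_bounds):
        if seen + count >= target and count > 0:
            frac = (target - seen) / count
            return lower + frac * (upper - lower)
        seen += count
        lower = upper
    return bucket_bounds[-1]
```

For example, with bounds [10, 20, 30] and counts [2, 6, 2], the median falls halfway through the second bucket, estimated as 15.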
D
Right, right, understood. So when you mentioned some help defining the min and max piece — did you have something concrete in mind, or what?
E
Well, yes — there's an issue. I'm sorry if you can hear the roofers; it's okay.
E
Issue 266 in the proto repository — and I actually made a PR at one point just to say, hey, this is what this will look like. I closed the PR; it's PR number 279. I'm putting it in the chat right now, and you can see it refers to issue 266. This has been requested forever: everyone thinks the histogram should have a min and max, but it's just hard to spec.
D
Okay, well, if the PR's closed—
E
That was just to keep our proto repository clean and managed. It was more or less two fields, min and max, with a long comment — and it's a long comment that's troublesome.
D
Right, okay. Well, I'll take a look at that and try to add some context in terms of what we're interested in, and maybe we can see where that goes and potentially reopen it. We'll see — thanks.
B
Yeah, so this is just — I wanted to ask what you thought about this. It's basically the question of: what if we have a meter in OpenTelemetry and we export it with something that is not an OpenTelemetry exporter, but to some other backend? In traces, apparently, there is something OTLP-specified — almost like a semantic convention, sorry — in order to specify how to use the meter name. And this is maybe interesting when thinking about, for example, having request counters or something like that.
B
So you have two request counters in one instrumentation: one counts the incoming requests and one counts the outgoing requests. If you just serialize them with the metric name, then they both would be called, yeah, "request.count," for example. This is not a problem in OTLP and OpenTelemetry, since all of this is well specified and the meter name is right there, so it should be distinguishable. But if we export it elsewhere, then it's not distinguishable anymore, unless we put the actual meter name somewhere on the exported data. So this is sort of an open question: what do you think about putting up a specification for how to name these things? Because that's something we did — or you did — for traces; it was before my time.
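One way to keep the two "request.count" instruments distinguishable on a backend that has no separate meter-name field is to fold the meter name into the exported metric name. A hypothetical sketch of such a convention (the function name and separator are illustrative, not a spec decision):

```python
def exported_name(meter_name, instrument_name, separator="."):
    """Hypothetical naming convention: prefix the instrument name with
    its meter (instrumentation library) name when the target protocol
    has no dedicated field for it, so identically-named instruments
    from different meters stay distinguishable."""
    return f"{meter_name}{separator}{instrument_name}"

# Two "request.count" instruments from different meters no longer clash:
incoming = exported_name("http.server", "request.count")
outgoing = exported_name("http.client", "request.count")
```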
A
So what you want is: in the exporter, we should be able to access the meter name, right? I think anything that generates the metric should at least be made available. For example, if you want to access the meter provider information, there should be a way in the SDK to allow that.
F
I have one question. So, the issue is specifically asking how to map the meter name for non-OTLP — would that mean we would be specifying how to map the meter name, for example, for the Prometheus exporter? Like, do we prepend the instrument name with the meter name so that there won't be any clash in Prometheus? Or how would we specify that for non-OTLP? I'm using Prometheus as an example.
A
Okay, at least I'm thinking: I think we should give you a way to travel through the topology and see which meter and which meter provider is giving me such metric information, so you can hook them up. Regarding how you name that, I guess it's probably not a concern, at least for the initial release — and whether OTLP will have it as a separate dimension.
A
Okay, I think that's all. So please take a look at the issue mentioned here, and see if any of you would want to have min/max on histogram. I know, Cijo, you're probably interested in that as well.
F
Yeah — I mean, the same question was asked by Alan from New Relic as well in .NET; it can be raised in the spec meeting.
A
Yeah, yeah. So I wonder if this is something we should put in the feature freeze. I would suggest, Jack: when you revisit this one, try to help the other folks agree on whether this is something we want to put in the feature-freeze milestone for the SDK, or something we should do later. Once we agree on that, I'm happy to facilitate.
D
Would that conversation take place in PRs for the metrics SDK spec, or in that proto PR? Like, what's the best place to come to that consensus?
A
I would suggest the SDK spec, because the protocol already has it. The problem you're seeing is that the proto has summary but the SDK spec doesn't, and it seems there's some disagreement there. So, in order to solve this, I believe it has to be the SDK spec.
E
Okay, can I ask: are we looking for a way to send min-max-sum-count as deltas? Because that's really far from the summary support that most people think of when they talk about summaries.
E
Yeah, I mean, the problem is: is it really well defined, what you're asking for? "I just want, over my one-minute delta, for you to tell me the min and max, because then I can aggregate it across my whole cluster; I can aggregate it across time; it all just works." But as soon as you start pulling that information, it falls apart. So what I'd like to see, Jack, is that we get to a place where there's a way to spec out a min-max-sum-count.
E
I think that's a really good default for the histogram instrument — a histogram with zero buckets is how I think of it. But I don't want to get to where we have some of those quantiles, because they're not mergeable and it's not clear how we're going to use that data.
D
Yes, I agree. And so, in your head, would it be beneficial to spec out this simple use case for deltas and punt on the cumulative case — the pull-based case — or would you want to solve both things at once?
E
Well, I think it's reasonable to try and solve both, but it does involve everybody agreeing that it's not the same exact thing. If you're asking for delta, it's really well defined; therefore we're going to say you get correctness when you're asking to push a delta of a min and max. But if you're pulling a min and a max — now, what do you do? And that's where I don't know what to say, and the easiest thing to say is: do what Prometheus does.
E
I guess the two options I see are: you could try to make them have consistently the same behavior, so you get the same min and max whether you're pushing deltas or pulling cumulatives. But in that case you're not getting a correct max from the delta perspective, because you might get a max that's older than your most recent delta — because the spec says you should smooth your max over five to ten minutes, or something like that. I mean, that's what Prometheus is doing. Then you can—
D
Yeah, I don't think you're going to get around the correctness problem that you talked about, and so you just have to, you know, opt for the best of bad solutions.
D
Okay. So in terms of where to have this conversation: there's no open pull request for this, there's just this issue — the issue that I linked at the top of this bullet point here. Should I just, you know, tag people on that, propose something, and ask them what they think about it?
E
I think that's right, and we can reopen the PR — just like I opened it and didn't even get any feedback for like two months — so you can take a look at 279.
E
Opening a PR in the proto repository gets quite a bit more attention than you'd think, because there are only one or two open PRs at any given time there. And then I would say: keep raising it in the metrics meeting, and even the spec SIG. We just want to get people aware of these questions.
C
Can I ask you a quick question around New Relic summary usage, just to make sure: are you only using min/max, or are you also pulling specific percentiles out? Or the other — sorry — min, max, average?
D
Right — it's count, average, min and max. — Oh, beautiful.
C
Yeah, okay. In that case, I'm more than a fan of this: I will approve that PR, and then we just need a few other people who agree. It seems like a clear win, also from a data model standpoint. We had an issue where, when we add new fields, if they're optional we don't know whether or not we can interpret them — but the latest version of protocol buffers allows us to define them as optional.
C
I don't know if that's over-the-top technical detail, but if you reopen that PR, the main thing we need to know is when to interpret min and max: if they default to zero, we have a problem, right? But if they're optional, we're okay.
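The zero-versus-unset problem can be made concrete: without field presence, a decoded min of 0.0 is ambiguous. A sketch of the distinction that `optional` provides, modeled here with plain dicts rather than real protobuf messages (the data-point shape is hypothetical):

```python
def decode_min(histogram_point):
    """Sketch of field presence: with plain (non-optional) proto3 fields,
    min/max default to 0, so a reader cannot tell "min was 0" from
    "min was never recorded". With proto3's `optional` keyword the
    decoded message carries presence information, modeled here as the
    key simply being absent from a dict."""
    if "min" not in histogram_point:
        return None                    # not recorded
    return histogram_point["min"]      # may legitimately be 0.0

# Without presence info, both of these would decode to min == 0.0:
recorded_zero = {"count": 3, "sum": 0.0, "min": 0.0}
never_recorded = {"count": 3, "sum": 0.0}
```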
C
It's up to the different languages to choose; if we use the optional syntax, then it would force all the languages to update, basically, on that.
C
Just the latest version of proto3 has an optional keyword that you can put on fields again.
C
A cat is in front of you — I can't see anything. Okay, so yeah, your mind is blown. I think the optional holy war is finally somewhat settled, in that it's too useful not to have, so proto3 is adding it back in for people to make use of.
C
If we want to do this in a way that uses the existing proto versions, we'll also have to add min and max plus some kind of flag to denote whether they are part of the histogram, for backwards compatibility.
C
We actually have an int flag — if you need help with your PR, any advice, happy to take that offline. But let me see if I can find you a link.
C
Yeah, there's an int flag on every metric data point that encodes information related to the version — to what's encoded in that protocol — which we're using to deal with version changes. It's this thing called data point flags. These are bit flags that get set on an integer on the point. Let me open the notes.
C
Yeah, yeah — I don't want to have a holy war; uninteresting flags, there's no need to interpret.
C
So there's an example right there — those are the bit flags — and then on the actual data point… what did you call it, Josh? I think this is yours. It's just called "flags," right? Yeah, it's just called "flags"; there's a uint—
C
Yeah, that's what I linked right there. So that is data point flags, and then this is min and max on the histogram.
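The data point flags mechanism described — bit flags packed into one unsigned integer on each point — works like this sketch (the flag name and bit position are illustrative, not the actual proto constants):

```python
# Individual flags occupy distinct powers of two within one integer field.
FLAG_NONE = 0
FLAG_NO_RECORDED_VALUE = 1 << 0   # illustrative flag name and bit position

def set_flag(point_flags, flag):
    """Turn a flag on via bitwise OR."""
    return point_flags | flag

def has_flag(point_flags, flag):
    """Bit-test a data point's flags field."""
    return (point_flags & flag) != 0
```

Because unknown bits are simply ignored by older readers, new flags can be added without breaking existing decoders.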
A
Okay, I think that's all the topics. Josh, by the way, I've updated the reader PR to remove the requirement on OnCollect, so please take a look. And I know you implemented the Java prototype — I'm still going through the PR, by the way.
C
Yeah, I guess the only thing I wanted to call out from that prototype: we basically made a factory on MetricReader, and we explicitly created an interface that gives you collect access into the internal memory representation from the SDK. That is an explicit interface that we will want to expose so people can write their own readers. So I would like it if we could get that specified — it's not necessary, but it is a bit—
C
I'm curious whether other people basically took the same approach. But I do like having MetricReader for the registration problem: you can add flush and you can add shutdown at the SDK meter provider level, and that is super useful. That actually solves something — we had this weird issue with Java, where there was a separate global for exporters, and you previously had to control your metric reader across two different globals — so that's one of the big benefits behind it.
A
Anyway, so take a look at the updated reader PR. I think we still have the flush and shutdown thing, and I tried to step back and give people more flexibility, especially on the pull exporter — and I also removed the MetricReader OnCollect.
A
That might be funny to folks who look at the spec, saying: oh, you're trying to give me flexibility by not telling me to do anything specific. So I might need your help to see how we reword that.
C
So, on collect — I mean, collect was the problem: we never, ever called collect on the MetricReader interface. We had that internal producer that we would use to collect, but we never called the collect method; it was never a public thing. The periodic metric reader decides when to collect and export, right? And the Prometheus exporter, when it calls collect, needs to get back its results.
C
So if collect doesn't return metrics, then I think collect was the thing I kind of don't see a need for — it just caused a lot of weird contortions.
A
It's awkward, because later people may say: I want different collection cycles — I want to trigger the asynchronous instrument callbacks every one second, and then I export the data. The only reason I have that collect is so that later we can allow people to have a different frequency.
A
No, I'm saying there can be a periodic metric reader, and you describe: I want to export the data every one minute to OTLP. But meanwhile you're saying: I'm not happy with sampling every one minute, because my temperature changes so frequently — if I only sample every one minute, I won't be able to get the maximum value; I want to sample at, at maximum, 0.5 seconds. With collect, I think you can add different parameters later; but if you only have flush, later you have to introduce a very different interface.
C
I guess — what does "later" mean in this sense? Is it, like, I add code to export it later? And if that's the case, why wouldn't I have two different readers: one that reads my temperature sensor faster, and the other one that reads the rest of my metrics?
A
Okay — so, for example, you have an exporter that sends data to a satellite, which is super expensive, and you don't want to call that every one second. But you're measuring temperature, which changes frequently, and you don't want to sample it only every one minute, because you're going to miss a lot of good samples; you want to sample at 0.5 seconds. Then the collect method—
A
Currently it wouldn't allow you to; but if we have the collect method, then after the initial SDK spec stable release, I think it's easy for us to say: hey, there's an additional parameter, which is optional, and by specifying it you can allow a higher sampling frequency instead of being bound to the exporting frequency. With flush you can't — I would have to…
C
I think — if I understand what you're suggesting correctly — you're going to want to implement that as a separate thing to begin with. It sounds like what you're trying to do is take an asynchronous instrument and, instead of only sampling during collect, actually have a faster sampling ratio that hits an aggregator.
C
You want to translate from an asynchronous to a synchronous instrument collection technique, where you have a different, asynchronous, periodic kind of sampler — and I think you can do that almost entirely in the API, via API calls. We could actually make that an API-only interface that solves that use case today, without any influence on the actual export backend. That's right, because you can effectively just feed your async instrument into a synchronous instrument at a collection interval, and we could do that entirely—
C
—on top of the API that's designed. Yeah, I don't really think putting collect on this reader gives you that flexibility either, right? Because you'd have to say: here, I only want to collect these specific instruments — and that puts a new dependency in the API that might be hard, depending on how things are implemented. But I see what you have here: just trigger the async instruments.
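The async-to-sync bridge described here — polling an asynchronous callback on its own, faster schedule and feeding the readings into a synchronous instrument, independent of the export cadence — might be sketched like this (all names are hypothetical; a real bridge would run the loop on a timer thread, and for the temperature example the target instrument would be an aggregator that keeps a max):

```python
import time

class Counter:
    """Minimal stand-in for a synchronous instrument."""
    def __init__(self):
        self.value = 0
    def add(self, amount):
        self.value += amount

def sample_async_into_sync(observe, instrument, samples, interval_s=0.0):
    """Poll an asynchronous callback at its own (faster) interval and
    record each reading into a synchronous instrument; export then
    happens at the reader's slower cadence, decoupling the two."""
    for _ in range(samples):
        instrument.add(observe())
        if interval_s:
            time.sleep(interval_s)
```

Usage: wrap the observable callback in a lambda and drive it for a few samples between exports.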
A
Okay, I think that's all. Any other topics?