From YouTube: 2021-08-31 meeting
A
Okay, let's start. The first topic is the SDK spec. Currently, as you know, the API is feature-frozen, and we're talking about when we should get that to stable. The spec is a little bit behind, and I hope that we can get the initial experimental release soon. So my question here is: do we think that after this MetricReader PR, we're good with the experimental release?
A
If the answer is yes, then my question is: can you help us make this happen, so we can catch the 1.6.1 spec release, which is happening right now? If that's possible, I can work with folks to wait until Friday. But if we think either this PR is becoming too big and we won't be able to make it, or we're saying we don't have to wait for this PR, let's just cut the experimental release with what's already merged.
C
Did you see my comment about a sequence diagram?
A
Yeah, I did a sequence diagram.
C
Okay, I haven't had a chance to take a look at it yet. That'll help me understand a lot better what's going on.
A
Yeah, and if you have any other outstanding questions, I would suggest just pinging me and I'll try to respond as quickly as possible. Sometimes when you send a comment, I'm in other Microsoft meetings, and by the time I see it, it's already late night; I have to respond after working hours, and then there's an hours-long delay. My worry is, if we run a similar iteration three or four times, then it will...
A
It will already be past the end of this week. So I'm happy taking some random pings and extra hours; I just want to make sure we can catch the train.
A
I understand. So, anyone else?
A
Okay, the next question is: do we see additional features that we want after the MetricReader PR? My answer is no. I think with that, we just need to fix some of the existing issues, like the MeterProvider ForceFlush and Shutdown; those things are quite straightforward. Besides that, anything that is not currently in the backlog is out of scope by default.
A
I think with exemplar and the multi-reader support, we can discuss whether we support different collection cycles, but that can be added later without having to break anything. I think it's a good set of features. I remember there's the base-2 exponential histogram, and we've already decided that it's not a blocking issue and it can be added later. So with that, that's why my answer is no.
B
Yeah, I think you're right about the exponential histogram; that is moving pretty rapidly, prototypes have been coming in. But it's a minor extension, if anything, to the SDK spec to say that there's this other option to support. I don't think we should require it, at least not right away.
D
I have one question about an additional feature. In the View spec, it says there is the ability to add additional attributes from the context/baggage. I believe there was some comment at the time of the original PR that this would be clarified later, like how exactly it fits in the plan. Are there any languages here that implemented this feature of fetching additional attributes from context or baggage?
D
There is actually no TODO or anything in the "additional dimensions from context" part. So can I assume that the spec would just mention that, and it's up to the language to decide how exactly to read additional things from the context?
A
To answer your question: first, being able to get extra things from baggage/context is already in scope, because in the View you can find the wording which describes this scenario, right? Yeah. So the first question, whether it's in scope or not: I think the answer is yes, it is in scope. The second question is: do we clarify that in the spec, and to what extent do we clarify it? Do we give people an actual implementation?
D
Got it. Yeah, I'm also curious whether any languages have already implemented this feature, and if yes, I just want to take a look at how it's modeled.
A
Okay, coming to the next one. Given that we want to have three languages work on the beta release, I want to get some idea of which languages we think will be handling the beta release initially. I know Java, because Josh is working on that a lot, and I know .NET is looking at it. My question is: what's the third language? I know Python has some prototype, and Joshua mentioned Go.
B
You know, the Go prototype has always been closely aligned with the OTLP data model, so we're in a different situation there: the old prototype was very close to what we needed, and so not much development has happened in the last year other than some recent refactorings and renamings to get closer, which are still in flight.
A
Yeah, so my gut feeling is Go is struggling a little bit. If I understand correctly, Go hasn't reached the tracing stable release yet, right? It's still an RC. So in this case I wouldn't try to pour more noise into that SIG, and given Python is already stable on tracing, I think it makes sense for them to focus on metrics. But I'll check with Diego to make sure we don't commit to something for him.
E
Yeah, in the JavaScript SIG, Diego was mentioned as the person who will be doing the metrics, because he's also doing Python, but we haven't actually done the release yet. So I wouldn't count on that here.
C
I mean, the latest release, the 1.5.0 release of Java, includes what I guess we're calling the proposal, or the release candidate, whatever it is: all the API changes that are in the spec. But that's only the API, and we're still working on the SDK, so it's not complete yet.
A
Yeah, thanks. I hope that we won't break the API part; the SDK part probably will break some exporter or extension developers.
C
Well, I think there has still been some debate, not on the functionality that's currently in the Java API, but on the exact shape and how idiomatic it is. So there probably will still be some tweaks as time goes on.
A
Yeah, okay. Any other languages people want to mention? Otherwise I'll move to the next topic.
A
So currently I've got four or five reviewers; I think we need more. Per the ask from John, I also added the sequence diagram. On the outstanding questions, we have to make a decision on the scope of this PR. As you can see, obviously I started with a very small scope, and as people keep asking questions, I want to resolve those questions, and the PR keeps growing bigger and bigger.
A
My worry is that ultimately we end up in a situation where we have 300 comments, the PR becomes too big, nobody has confidence that we can move anywhere, and then we have to come back and scope it down. So please keep that in mind, and help think about whether something would block this PR or could be staged. I'm happy to send as many follow-up PRs as needed, as long as we can make progress.
A
This was originally asked by developers from Dynatrace. The idea is: if you have multiple asynchronous instruments and, for example, you say "I want to export the data to OTLP every one minute," they're saying, "wait, I want to call the asynchronous instrument callbacks every one second." That way, when I report something like an average, I get better accuracy, instead of sampling every one minute and being unlucky that at the end of each minute I get some low temperature. So they want to increase the sampling frequency to smooth the data out.
A
The second one is regarding multiple readers. Imagine you have multiple exporters running: you have something exporting to some remote endpoint, and meanwhile you have some local thing you want to enable on the fly, similar to zPages.
A
I think most folks believe this is not a super common scenario for production, but we do see some requirements. For example, Josh Suereth mentioned he would imagine doing these two things, one for local and one for remote; and for the non-production developer inner loop it might be a very helpful thing, and I think multiple folks recalled that. So, with the support for multiple readers:
A
I think the current idea is: every reader should be isolated, so they don't even realize that the other readers exist. When it comes to the delta changes, each reader asks the SDK, "give me the data," and once it gets the data, the next time it makes the ask, the delta would be reset. Each reader will have its own state maintained by the MeterProvider, so the reader itself is stateless.
A
The MeterProvider has to know the registration from each reader, and it has to maintain the state for each reader. How it maintains the state, whether it can use a shared state with some trick like copy-on-write, or just duplicate the thing naively, is an implementation detail that we don't spec out. So this is where I want to go.
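The per-reader state being described can be sketched roughly like this (a minimal sketch with invented names, not the spec'd OpenTelemetry API): the provider keeps one baseline per registered reader, so a delta reader's collect resets only its own baseline, and readers stay mutually isolated and stateless.

```python
# Sketch: MeterProvider owns per-reader state; readers are stateless.
class MeterProvider:
    def __init__(self):
        self._total = 0            # running cumulative sum for one counter
        self._last_seen = {}       # per-reader baseline, keyed by reader

    def register_reader(self, reader, temporality):
        self._last_seen[reader] = 0
        reader._provider, reader._temporality = self, temporality

    def record(self, value):
        self._total += value

    def collect_for(self, reader):
        if reader._temporality == "cumulative":
            return self._total
        delta = self._total - self._last_seen[reader]
        self._last_seen[reader] = self._total   # reset this reader's baseline
        return delta

class MetricReader:
    def collect(self):
        return self._provider.collect_for(self)

provider = MeterProvider()
r1, r2 = MetricReader(), MetricReader()
provider.register_reader(r1, "delta")
provider.register_reader(r2, "cumulative")
provider.record(5)
print(r1.collect())   # 5: delta since r1's last collect
provider.record(3)
print(r1.collect())   # 3: r1's baseline was reset; r2 is unaffected
print(r2.collect())   # 8: cumulative total
```

Whether a real SDK shares the accumulation and uses copy-on-write, or duplicates it per reader as here, is exactly the implementation detail left out of the spec.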
B
I guess my question is whether we need to support delta export for readers. Everything you said made sense, but it's just complexity that I was hoping we didn't need, and then I would support a motion to just let readers get cumulative state only.
B
Yeah, the number of calls for cumulative-to-delta has been very small in my time on this project. I'm not saying it doesn't exist; I know New Relic occasionally mentions it, so I shouldn't say that, but I don't believe in it.
B
Well, I don't think that we should put this into the SDK, basically.
C
Josh, we used to, or maybe I'm misremembering, but the thinking used to be kind of the other way around: we would only emit deltas out of the SDK, and it would be the responsibility of this sort of component to build the cumulative, since that's an easier task than trying to go from cumulative to delta.
B
Well, I guess I should back off then, or change my statement. I've always imagined that there's this sort of intermediate component, before you get to an exporter, that will do delta-to-cumulative, and I'm thinking of Prometheus here as an exporter; it's just one of these exporters that uses the reader interface.
B
In the Go prototype, the way that's handled is that there's one interface for iterating over a result set, but the push and pull export code paths are different in the locking strategy. At least the way I did it, on the push export path the controller, the one who's got the timer doing the interval management, will take the lock and then call collect, and then, before this interval is done, it will push to the exporters.
B
So I'm trying to say that they are both using the same interface for reading the data; the push exporter has locking handled for it, and the pull exporter has to take a lock. The differences are in how you lock the object that you're about to read, rather than the interface used to read the data.
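A rough sketch of that shared-read-interface idea, with the locking difference described above (all names are hypothetical, not the Go prototype's actual API):

```python
# Sketch: push and pull share one read interface; they differ only in
# who takes the lock around the shared accumulation before reading.
import threading

class Accumulation:
    def __init__(self):
        self._lock = threading.Lock()
        self._points = {"requests": 0}

    def add(self, name, value):
        with self._lock:
            self._points[name] += value

    def read_locked(self):
        # Caller must already hold self._lock.
        # Same iteration code serves both push and pull paths.
        return dict(self._points)

def push_collect(acc, exporter):
    # Push path: the interval controller owns the lock, then exports.
    with acc._lock:
        batch = acc.read_locked()
    exporter(batch)

def pull_collect(acc):
    # Pull path (e.g. a scrape handler): takes the same lock itself.
    with acc._lock:
        return acc.read_locked()
```

For example, `pull_collect(acc)` returns the current snapshot for a scrape, while `push_collect(acc, send_fn)` does the same read on a timer and hands the batch to an exporter.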
B
A contradiction in what I just said is that the Go implementation actually does implement cumulative-to-delta conversion in this place, and I would like to remove that code; it's complex stuff that I don't really want there. So you are able, from a reader, to request delta, and it will actually implement it for you, but I wish we could remove that code.
D
As for the View spec, you have the ability to change the temporality for every instrument. So if the user says, "there is a delta instrument and I want it reported cumulative," someone has to do that conversion. So wouldn't it ideally be the SDK itself, rather than the exporter?
B
Yes; it's the opposite direction that I'm saying is just such a corner case that we could remove it entirely. And what I'm trying to propose here is that, well, have I answered your question?
B
The OpenTelemetry guidelines for the libraries are to say that exporters should be simple converters into protocol objects, so if there is a conversion of data, it should happen in a processor stage somewhere. We can do cumulative-to-delta in a processing stage; we must do delta-to-cumulative in a processing stage. I'm saying that the "can do" is not necessary.
B
We do not need to do cumulative-to-delta conversion anywhere, because almost nobody expects that. The example use case that we're just hypothetically imagining, which I'm saying doesn't really exist very much or at all, is: you're reporting your memory usage. It is a number and it goes up and down, and the question is, would you want to see your current memory usage converted into a rate, like a delta?
D
At least when Victor and I were doing the .NET implementation, we kind of had these back-and-forth conversions handled in the SDK itself, so the exporter always gets what it expects. If an exporter expects cumulative, it gets that; if an exporter expects delta, it gets that. The exporter doesn't even know whether the user reported cumulative or delta; that conversion happens inside the SDK, specifically inside the aggregator.
B
Yeah, I understand there is symmetry and it's meaningful to go both ways, but I believe that the demand for delta outputs from cumulative instruments is vanishingly rare, and we could just stop.
F
Also, do we have a way, and I might have missed this in the current PR, for readers to describe what kind of temporality they want, or to limit it? From what I understand of the specification, you specify delta or cumulative aggregation at the View level, and there's no interaction at all available in the interfaces we have between a metric reader and the aggregators; there's nothing.
A
So the exporter can expose some metadata about whether it supports delta only, cumulative, or both, and what its preference is, and the MeterProvider can use that information to make the View setup easier. If the user is not specifying the default, whether it is delta or cumulative, I think the View should be smart enough to figure out: oh, for this exporter, it's Prometheus and it is always cumulative, so I'm going to do this for better memory efficiency. And we haven't covered that yet.
B
It's not in the current spec, but I mentioned it in a comment on your previous PR, because the Go prototype has that: there's something called an export-kind selector. Your exporter implements the export-kind selector, and that gives the processor the opportunity, when it first sees a new aggregator being created, to consult the export-kind selector and ask: what, eventually, are you going to ask me to get out of this aggregator? Which temporality do you expect in the future?
B
At that point, the exporter has a chance to state its preference. And, just to reiterate, I have implemented both conversions, and doing the cumulative-to-delta conversion is actually a little bit harder: you have to implement a new form of aggregator subtraction, and that is not a primitive that you needed before this. So you end up with a special case, asking: is this aggregator that I know how to update also something I can subtract from one another?
B
You have not only a merge operation but a subtract operation, and you have to implement that. Then, at some point in your processor pipeline, you'll encounter a request for a delta, and you're going to consult your aggregator and say: okay, I need to subtract you from another aggregator. If it's not a subtractable aggregator, you end up with a condition that is difficult to specify.
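The subtract primitive being described can be sketched like this (hypothetical names; not the spec'd aggregator API): cumulative-to-delta needs `subtract` in addition to the usual `merge`, and an aggregator without it cannot serve a delta request from cumulative state.

```python
# Sketch: a sum aggregator that supports both merge (always needed)
# and subtract (only needed for cumulative-to-delta conversion).
class SumAggregator:
    def __init__(self, value=0):
        self.value = value

    def merge(self, other):
        # Standard primitive used throughout the pipeline.
        return SumAggregator(self.value + other.value)

    def subtract(self, earlier):
        # Extra primitive required only for cumulative-to-delta.
        return SumAggregator(self.value - earlier.value)

def delta_from_cumulative(current, previous):
    if not hasattr(current, "subtract"):
        # The hard-to-specify condition: a non-subtractable aggregator
        # cannot satisfy a delta request from cumulative state.
        raise TypeError("aggregator is not subtractable; cannot produce delta")
    return current.subtract(previous)

prev, cur = SumAggregator(10), SumAggregator(14)
print(delta_from_cumulative(cur, prev).value)   # 4
```

A histogram aggregator, say, might implement `merge` but have no meaningful `subtract`, which is exactly the special case the speaker is worried about.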
F
Got it; it's well defined, but nobody wants that. I agree, but that exists in Java, and we could specify that. There are a few things we can do in the spec that I think we'd have to do, and here are a few action items for us to think about and specify. One is: we can specify that you're not allowed to do delta aggregation at all on async instruments.
F
The second thing we have to do, as part of what I'm hearing here, is some kind of specified interaction between an exporter or reader; I think it needs to be at the reader level, and if it's also at the exporter level, great. You need a way to specify the supported aggregations of a reader so that, if I don't support deltas, I can convert to cumulatives, and if someone doesn't support cumulatives, we should handle that too.
F
So if I have a reader that says "I only support deltas," what do we do with all these cumulatives? Do we send them, or do we say sorry? Again, that's not likely to happen. I really don't think people are going to use OpenTelemetry to fire StatsD; I just don't see that as a use case. You would literally just use StatsD and get things out of your process much quicker than we're going to. But...
F
Yeah, in any case, I think we need an answer to that, right?
B
I support what you said; we do need an answer. I would propose the answer be that there is a specified interaction; it doesn't have to be named "export-kind selector" or whatever. The way I see it, it's consulted as the View is configuring itself.
B
At the moment a new aggregator is being created, roughly speaking, at that moment in time it's not just saying what your preference is or what you support; it's saying, at the individual descriptor level, would you prefer to have delta or cumulative? In order to satisfy the View spec, you're going to have that configured on a per-instrument basis, I suppose.
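The per-instrument consultation could look roughly like this (an assumed shape in the spirit of the Go prototype's export-kind selector; the function and kind names here are invented):

```python
# Sketch: an exporter-provided temporality selector, consulted once per
# instrument at the moment its aggregator is created.
def prometheus_selector(instrument_kind):
    # Hypothetical Prometheus-style exporter: cumulative only.
    return "cumulative"

def delta_preferring_selector(instrument_kind):
    # Hypothetical delta-preferring exporter that still wants gauges
    # for asynchronous observations.
    return "gauge" if instrument_kind == "async_gauge" else "delta"

def create_aggregator(instrument_kind, selector):
    # The selector is asked at creation time what it will eventually
    # want out of this aggregator, per individual descriptor.
    temporality = selector(instrument_kind)
    return {"instrument": instrument_kind, "temporality": temporality}

print(create_aggregator("counter", prometheus_selector)["temporality"])
# cumulative
```

The point of asking up front is that the pipeline can then keep only the state it will actually need for that instrument.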
D
If, for example, the reader says "I'm only expecting delta" and the user configures a View to convert everything to cumulative, you just fail fast when the View itself is being registered, saying: hey, you have an exporter which only supports delta, but you are asking to convert to cumulative, so this View config is unlikely to help you. Is that the proposed answer? It just looks like Reiley is writing down the questions I was trying to ask.
D
Is the proposed answer that the SDK would fail fast if it sees an incompatible View, or does it still proceed and do the conversion, or let the exporter drop it? What is the proposed answer here?
A
I would suggest that if people explicitly specify in the View, "I want something cumulative," and the exporter only supports delta, we could figure out that error, and the SDK should handle it and let the user know as early as possible, whether that's an exception or some error log; I think the language can decide. But maybe it's not something the user defined; in the View they're saying, "I don't care, because this is not something I should care about."
A
"I'm only describing that I want instruments A, B, C, D, E to be exported to Prometheus, and whether Prometheus is cumulative or delta, I'm not a metrics expert; I just want the SDK to be smart enough." I think we should be smart enough to figure out the answer for them, and if we cannot figure out the right answer, we should complain.
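The fail-fast behavior being proposed can be sketched as follows (a minimal sketch with invented names, assuming the exporter declares its supported temporalities): validate at View registration time, so an impossible combination surfaces immediately rather than at export time.

```python
# Sketch: validate the View's requested temporality against the
# exporter's declared support when the View is registered.
class TemporalityMismatch(Exception):
    pass

def register_view(view_temporality, exporter_supported):
    """view_temporality: 'delta' or 'cumulative' explicitly set by the
    user, or None for 'let the SDK decide'.
    exporter_supported: set of temporalities the exporter declares."""
    if view_temporality is None:
        # User doesn't care: pick something the exporter supports.
        return sorted(exporter_supported)[0]
    if view_temporality not in exporter_supported:
        # Explicit, incompatible request: complain as early as possible.
        raise TemporalityMismatch(
            f"view asks for {view_temporality}, but exporter only "
            f"supports {sorted(exporter_supported)}")
    return view_temporality

print(register_view("cumulative", {"cumulative"}))   # cumulative
try:
    register_view("cumulative", {"delta"})
except TemporalityMismatch as e:
    print("rejected:", e)
```

Whether the complaint is an exception, as here, or an error log is the per-language decision mentioned above.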
B
Well, I think that a better answer might be for the exporter to convert to gauge, because if you are an exporter that doesn't support OTLP natively, that means you may have trouble representing cumulative data as delta or something like that. And so the answer is: turn your cumulative data into gauges, and then you can export deltas or gauges.
F
Can we talk in some specifics, though? Because one of the things I was thinking: if we're talking protocols we know about, let's talk Prometheus or OpenMetrics, which is cumulative only; let's talk OTLP, where both delta and cumulative are allowed; and then let's talk StatsD, where it's delta only, or gauges, effectively.
F
So in the event someone uses... I think the odds that you would use OpenTelemetry for a StatsD exporter are just incredibly low. You're actually minimizing the value of StatsD by performing a bunch of calculations in-process, as opposed to getting the data out of your process as quickly as possible, which...
F
...violates the goals of StatsD, really. So I think it's rare that someone would actually use a delta-only exporter. One possibility we could adopt in our specification, effectively, is to say we can import StatsD, but all of our SDKs can only be either a delta-and-cumulative exporter or a cumulative-only exporter, and we don't support the delta-only export use case at all. That's something we could do, unless anyone has an example delta use case that's not StatsD.
F
I would love to hear about it and kind of evaluate it, because I feel like I might be missing something there. But just practically, I don't think we need to bend over backwards for StatsD exporters. Importing StatsD, yes; exporting StatsD, I really think our SDK does not provide enough value for someone who is doing StatsD export today.
B
Moreover, a StatsD user would put those into a gauge to begin with, I'm pretty sure. Good point, good point. So I want to try to answer your question: I mentioned New Relic because that's the only company that's ever mentioned this, and I want to hear more, because I have heard it confirmed various times; it's just a use case I don't really understand very well.
G
Yeah, Jack here from New Relic. We don't plan on adding language-specific exporters at all. We are contributing a cumulative-to-delta processor to the Collector, and if people configure their language SDKs to have cumulative metrics, we plan on recommending that they translate them to delta at the Collector level.
D
I have potentially one use case within Microsoft where we need delta, but I don't know with 100% surety that it only accepts delta, so I'll have to check, get back, and raise an issue here. And yes, we plan to write direct exporters, not using OTLP, so depending on whether they confirm it is only capable of accepting deltas, then we'd need the SDK, or someone, to do that conversion from cumulative to delta.
F
Awesome. Quick question, then, for all the folks who are using delta: how would you like to see your asynchronous instruments recorded? Is it okay to turn them all into gauges, as Josh suggested?
F
I don't think we should try to address it in your PR, Reiley. So it sounds like we have at least one more PR to talk about: exporters asking for delta or cumulative metrics, and what kind of conversions we provide out of the box. Yeah.
A
Okay, I'll quickly go through this, assuming not everyone has context on this. With the metric reader here, I'm trying to say you can have multiple metric readers; this way we allow you to support multiple exporters at the same time, and each one is kind of having its own state. Whether that state, from an implementation perspective, is shared, copy-on-write, or something smart...
A
...is an SDK implementation detail that we don't try to cover in the spec. In the reader we have several callbacks and also a method called Collect, and there are two types of readers that we provide from the SDK out of the box: one is the base, or basic, one; I struggle with the name.
A
The other one is just built on top of the base one; it basically has a timer that periodically triggers the collect. And here is the diagram of how the exporter and the reader interact. From the MeterProvider's perspective, it has the internal state and it has multiple metric readers registered, and an exporter can just be attached to a specific exporting metric reader. We support both push and pull, and they follow a very similar design here, and the key here is the metric exporter: whether it's push or pull...
A
...we want them to have the same interface, so it's technically possible to have one implementation that can support both the push and pull models. And here is the sequence diagram. This is the generic diagram that applies to both push and pull; I'll cover the pull one later, where there's an explicit description.
A
So here, basically, MetricReader.Collect is the way you signal the reader: hey, go and take the data. Once this API is called (I'll explain the signal part later), the reader will go and ask the MeterProvider: hey, please make sure all the asynchronous instrument callbacks are invoked and the data is in place, and please call me back. How this is done is implemented in the SDK.
A
This is a language implementation detail, but once the metric reader gets the callback, it will have access to all the data that is available, whether it's coming from synchronous or asynchronous instruments, and, in case it has an exporter associated, it will call the exporter with the batch to send the metrics. Now, coming to the signal: here I give examples. The signal can be "hey, we want to export the data periodically," so it's triggered by a timer, or it can be "hey, someone is asking for the data right now."
A
Without such an ask, it would be invalid, because the scraper is not asking for it; there's no such thing as just reporting the data. And there are also other cases: for example, the MeterProvider could say, "I'm trying to exit, the application is shutting down, I want to call ForceFlush." That would also trigger the signal here, and the SDK implementation can decide.
A
They could just have a global slot for each instance of the Prometheus exporter and set the batch there, and the exporter, when trying to respond to the HTTP request, will decide: is the data available there? I think a smarter way might be to have some async-local or thread-local information, just to be able to pass the thing back to the HTTP response. So I'll stop here and take questions.
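The collect sequence just walked through can be sketched as follows (invented names, a standalone sketch separate from the earlier one): a signal (timer tick, scrape, or shutdown/ForceFlush) drives the reader's collect, which asks the provider to run the async-instrument callbacks, then hands the resulting batch to the attached exporter.

```python
# Sketch: signal -> MetricReader.collect -> provider runs async
# callbacks -> reader exports the combined batch.
class MeterProvider:
    def __init__(self):
        self._async_callbacks = []
        self._sync_points = []

    def register_callback(self, cb):
        self._async_callbacks.append(cb)

    def record(self, point):
        self._sync_points.append(point)

    def produce_batch(self):
        # Invoke async-instrument callbacks so their observations
        # join the batch alongside synchronously recorded points.
        async_points = [cb() for cb in self._async_callbacks]
        return self._sync_points + async_points

class MetricReader:
    def __init__(self, provider, exporter=None):
        self._provider, self._exporter = provider, exporter

    def collect(self):
        # Called on a timer tick, a scrape, or ForceFlush.
        batch = self._provider.produce_batch()
        if self._exporter is not None:
            self._exporter(batch)
        return batch

provider = MeterProvider()
provider.record(("requests", 7))
provider.register_callback(lambda: ("cpu_time", 0.5))
reader = MetricReader(provider, exporter=print)
reader.collect()   # prints [('requests', 7), ('cpu_time', 0.5)]
```

The same `collect` serves push (timer-driven) and pull (scrape-driven) readers, which is the single-interface point being argued for.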
C
So, Reiley, that last part you just talked about is the part that I think makes this overly complicated. It feels like we're trying to specify a single interface on the exporter for the sake of specifying a single interface, and the pull case, I think, is different enough, especially handling the scraper request: there's no way for that context to propagate through and for the web page to have the information about what to respond.
A
Yeah, so I'll explain why I think a single interface is important. When you look at the push and pull exporters, my understanding is the pull exporter is a very special exporter in that it can only export the data at a certain moment, and that moment is out of the SDK's control. With Prometheus, only when the Prometheus agent asks you to give the data are you allowed to give the data, in normal cases.
A
If there's no ask, you cannot send the data, because you don't know how to respond if there's no HTTP request, right? And that difference is not big enough. My worry is: you can imagine there will be a console exporter that is both a push and a pull model. The console exporter could send the data every five seconds, but meanwhile also respond when my application is trying to access it, or the user is pressing a button, or the user is accessing some web pages.
A
The exporter is basically exposing a single function, export, that takes a batch, so I'm trying hard to make sure we can follow the same model, so it won't be too different from the other interfaces that people are already familiar with. And I understand the consequence: by doing this we're doing something a little bit tricky here, but I think it's doable.
B
I can say that I support this; these two diagrams look very much like the implementation that I have in the OTel-Go prototype. The complexity that I have is around configuring when the stateful processor will cache its result or not, especially for the pull exporter.
B
And that's complicated, and that's where this complexity will end up being discussed, I think.
A
You put the request ID as a key in the dictionary, with the value just empty, and then you trigger the entire collection and wait for the callback to happen. You check whether the key's value has changed to something you like, that it's not null anymore, and you take the value and respond to Prometheus. If the value is still null, that means something happened in the collect or the export, like a timeout, and you can probably get some error from that key-value pair.
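That request-keyed rendezvous can be sketched like this (illustrative only; names invented, and using an event per slot rather than polling for a changed value):

```python
# Sketch: a scrape handler files a slot under its request id, triggers
# a collect, waits, then responds with whatever landed in the slot.
import threading

pending = {}   # request_id -> {"event": Event, "batch": data or None}

def on_collect_done(request_id, batch):
    # Called from the collection/export path when the batch is ready.
    slot = pending[request_id]
    slot["batch"] = batch
    slot["event"].set()          # wake the waiting scrape handler

def handle_scrape(request_id, trigger_collect, timeout=5.0):
    slot = {"event": threading.Event(), "batch": None}
    pending[request_id] = slot
    trigger_collect(request_id)  # kicks off the SDK collection
    if not slot["event"].wait(timeout) or slot["batch"] is None:
        # Null slot after the wait: collect or export failed/timed out.
        return "503: collection timed out"
    return f"200: {slot['batch']}"

# Trivial synchronous "collector" for demonstration:
def fake_collect(request_id):
    on_collect_done(request_id, [("requests", 7)])

print(handle_scrape("req-1", fake_collect))   # 200: [('requests', 7)]
```

An async-local or thread-local variant, as suggested above, would carry the slot implicitly instead of through an explicit `pending` dictionary.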
C
This is what I'm saying, Reiley: I don't want to make us implement some tricky way to make this work, like using thread-locals or some crazy thing. I think we're trying to bend over backwards to do something that is actually going to be really quite tricky to do in Java.
F
Or in anything, really. There's one thing to call out, John. I don't know if you looked at my prototype around multiple exporters and tracking them: the only thing this interface provides that really is valuable is hiding the freaking complexity of tracking how many exporters there are and when you can clear up metrics.
F
There's actually some really ugly code in our prototype where we expose that collect interface directly: you have to have a registry of what collectors there are and keep track of them, and there's a little bit of an awkward interface around knowing how many collectors there are. This kind of forces a registration mechanism around readers, which is nice. I agree with you that Prometheus is going to suck to implement; it's going to be some crazy suspend-and-unsuspend-thread thing.
F
It's going to involve multiple threads by requirement of this interface, so I'd like to take some time to prototype that and kind of come back with maybe a counter-proposal here. So I think this needs a little bit of thought specifically around that, because either we have an issue implementing Prometheus with this API, or we just need an exemption from Java to deviate, but I'd really rather not do that at this stage.
F
Right, let's take some time and look at it. I think I can come back next week with a prototype; I definitely won't have that ready by Thursday. But we do need to look at this, Reiley, because, like John's saying, it's going to be really hard on Prometheus. But I also don't want to lose this explicit registration step, which actually cleans up a lot of our earlier prototype around multi-exporter.
A
Okay, so that brings us back to the first topic in this agenda. If we're going to take some extra time, do we think we can ship the experimental version of the SDK with the spec 1.6.1 release? I don't think so, so we'll probably aim for another one.
B
If you look at the Go interface, you register a collector that has two methods, one is Describe and one is Collect, and then you start the HTTP server, and it calls you when it's time to collect; then you collect from the SDK and send some stuff on a channel. So I think this thing that you just described, where you're going to suspend a thread, sounds like ordinary synchronization for this type of interaction with a scraper to me.
C
F
It
assumes
that,
since
the
data
is
all
what
do
you
call
it
in
my
brain's
debt
cumulative
that
and
that
it's
also
readable
and
that
it's
readable
asynchronously
right,
so
it
just
directly
reads
everything.
So
it
is
gonna
look
funky
to
implement
it's
not
unimplementable.
It's
just
kind
of
weird
and
interesting.