From YouTube: 2021-09-07 meeting
B: Can one of you confirm if the screen is visible? I'm sharing my Chrome window. Yes, that's it, confirmed.
B: I think we can start. Allen might be joining a little late, but we'll go over any issues until then. It's just an update from me, since there is no other agenda. I updated the milestones. We did not release alpha 3 last Friday, so it will be sometime this week. The changes are actively being worked on in the metrics branch right now, so we'll do a snap from the metrics branch to main and then do the release.
B: Whenever we do the release, it will contain the fixes that are part of alpha 3, which is marked in the milestone. The dates have changed, so I'll update the milestones to reflect the new date, but it does contain the fixes for the histogram OTLP issue and the Prometheus bug itself. So it should contain all the changes, except that the date is different.
B: I can see Riley's adding a question, so let me open that. Okay.
B: So Riley, can you walk us through the issue which you want to discuss? It's just taking me a moment to get to the spec SIG items.
A: Yeah, so just for information for folks here: we've seen a couple of PRs, especially from the OpenTelemetry .NET instrumentation, trying to add something like the OTLP or the span exporter environment variables. As more environment variables are being introduced, that brings some questions, and it looks like the overall spec SIG is trying to address the same problem, as people are seeing a crazy amount of environment variables.
A: Is that design aligned with the overall architecture? For example, if we allow the batch span processor to have the exporting interval or something, ask yourself: is that going to work if you have multiple processors and exporters, and how will that work? And if you have the OTLP endpoint — if you have three OTLP exporters, how would that work?
A: Without that extension, you don't have any environment variable exposed as an attack surface. And the third thing to think about is: can we model the environment variable as just one concrete provider, with the configuration itself being an abstraction? Because it seems the spec SIG was talking about having some configuration file as well. And the last thing to think about is, if there are multiple places where you can do configuration, what's the ordering? And probably that has to align with the spec as well.
A: So far I can imagine there will be environment variables, there will be something hardcoded, and there will be some default configuration if you just use the existing class — like in the batch processor, where we have the defaults — and later there might be a configuration file. So which one takes effect, and what's the ordering of those things?
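The ordering question can be made concrete with a small sketch. Everything here is hypothetical — these are not names from the actual SDK — it just illustrates one possible precedence chain: hardcoded value, then environment variable, then configuration file, then the built-in default.

```csharp
using System;

// Hypothetical sketch of the precedence question discussed above:
// explicit code > environment variable > config file > built-in default.
// None of these names come from the actual OpenTelemetry .NET SDK.
public static class ConfigResolver
{
    public static int ResolveMaxQueueSize(int? fromCode, string fromEnv, int? fromFile)
    {
        if (fromCode.HasValue) return fromCode.Value;                 // hardcoded wins
        if (int.TryParse(fromEnv, out var envValue)) return envValue; // then the env var
        if (fromFile.HasValue) return fromFile.Value;                 // then the config file
        return 2048;                                                  // then the built-in default
    }
}
```

The open question in the discussion is exactly which of these rungs should win — the sketch picks code-over-environment, but the spec had not settled that at the time.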
B: Okay, and would you expect the spec to clarify the priority? Because as of now it's not clarified. I think we went with the code taking priority, but there were discussions at that time that it's not —
A: — specified in the spec, yeah. So I think, at least from our current position, whatever priority we put there, or whatever environment variables we expose, is considered part of the public surface, so probably it will have the same governance as a public API change — which matters if we ship some design that is not very good.
B: Okay, so basically you're saying the core SDK should not have any code that reads random environment variables — model it as a separate package, which is an opt-in thing. So the user really has to install an extra package which will contain the logic to read them. Okay, because —
A: Configuration is sensitive; there might be security considerations. Let me give an example. You might say: I want to export the data every one nanosecond, which is physically impossible. But what if, in production, you're saying environment variables always take priority over everything else, and then some folks find a security hole: they change the environment variable and point your service to some endpoint — a DDoS attack, keep sending data there. Those are the potential considerations, given OpenTelemetry positions itself as a very generic, proper thing.
B: Got it, that makes sense. It's mostly the auto instrumentation folks who really wanted to get the environment variable changes in, and I don't think any of them are here today. I'll talk to them about whether they really need any of these as part of the 1.2 release. If it's not important to be part of the 1.2 release, we can consider pulling it out and making it part of a separate package which the auto instrumentation can take a dependency on.
A: Yeah, and an alternative solution I'm thinking of: if we have a programming model that allows the auto instrumentation to fetch and change the configuration instead of using environment variables, that might work, and the auto instrumentation can decide to pull in the environment variable configuration provider at run time. Or they —
A: Or they can introduce their own environment variables. But I wouldn't go with that approach first, because it seems like we're just moving the problem from the SDK to the auto instrumentation without solving it. So it's more like a heads-up for people to think about the long-term picture, because it seems like we started with adding environment variables, and as we keep adding more and more, we start to realize that's not going to work well in the long run. Got it, yeah.
B: The key thing is: before we do the 1.2 stable release, we should revisit any of the environment variables being added and really consider whether they're required to be part of the stable release. Otherwise we should take them out and then reintroduce them when we have the overall design ready.
B: That's the key thing. And there were some environment variables which we discussed briefly about two months back, like the ones to select which exporter to use — for example, OTEL_TRACES_EXPORTER, which allows the user to specify Jaeger or Zipkin or OTLP. But it's undecided how we even support it, because the core SDK does not have any dependency on Jaeger or Zipkin, so it cannot load them.
B: So it has to be a separate package — kind of an uber package — which knows about all these exporters. But we didn't decide exactly how we model it, or even the name of the package itself. This was discussed, but we didn't close on it. It looks like we really need that, though — not only to solve that particular problem, but also to address any potential security concerns from having the environment variables read by the core SDK itself. Yeah.
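A rough sketch of what such an "uber" selection package might do — the class and return values here are placeholders, not the undecided package being discussed. The point is that only this opt-in package references every exporter and reads the variable; the core SDK never does.

```csharp
using System;

// Hypothetical opt-in package: it alone would depend on the Jaeger, Zipkin
// and OTLP exporter packages, so the core SDK never has to load them.
public static class ExporterSelector
{
    public static string Select()
    {
        // OTEL_TRACES_EXPORTER is the spec-defined variable; "otlp" is its
        // spec default when the variable is unset.
        var name = Environment.GetEnvironmentVariable("OTEL_TRACES_EXPORTER") ?? "otlp";
        switch (name.ToLowerInvariant())
        {
            case "jaeger": return "JaegerExporter";    // placeholder for new JaegerExporter(...)
            case "zipkin": return "ZipkinExporter";    // placeholder for new ZipkinExporter(...)
            case "otlp":   return "OtlpTraceExporter"; // placeholder for new OtlpTraceExporter(...)
            default: throw new NotSupportedException($"Unknown exporter '{name}'.");
        }
    }
}
```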
B: On that question I didn't get a proper answer — I think you mentioned it before as well. If you configure the batch export processor using an environment variable, it affects all the exporters using the batch exporter. So if you have, say, two OTLP exporters, you cannot configure them independently using the environment variable. I left a comment in that PR to at least make it clear in the doc, but we'll take one step back and see if this is the right thing we want or not before it becomes part of 1.2. Yeah.
A: And think about, in the batch span processor, we have some restrictions — say the exporting interval and so on. Will they interfere with each other, and how would that work if you have an explicit value in the code and an environment variable conflicting? Do you automatically fall back, do you find some smart way, or do you just tell the user something's screwed up? That part is unclear as well.
C: If it helps, specifically for the auto instrumentation SIG, I can raise this with my colleague Chris, who's involved in that project — I think their SIG meeting is tomorrow morning — and make sure it gets discussed.
B: I asked about it a few weeks back, and it seems like they need this from the SDK by default. What they were assuming was that the spec says the batch span processor settings should affect all the exporters. It's not very clear to me — the spec doesn't explicitly say that, it's just kind of implied — so if you have an environment variable, it'll affect all of them. But yes, we can ask them. My primary question is: do they want it as part of the 1.2 stable or not?
B: If they do, then we need to have a deeper discussion. If not, we can pull it out and make it a separate package, so that we won't affect 1.2 in any way, and then we'll be able to bring it back into the core SDK if that's the final decision — or keep it as a separate package forever.
B: So unless we do something, we should pull it out. Yeah — so Alan, if you can — actually, I forgot, David is on vacation for another two weeks or so; I don't remember exactly when he's coming back. If he can give a heads-up at the earliest, that would be great; if not, we'll work with David.
B: When he's back from vacation — he mentioned he's gone for about three weeks, but I just don't know exactly when he's expected back — we'll make sure it's taken care of before the 1.2 release. Okay.
B: I have one topic to bring up. Right now the metrics work is happening in a separate branch, not the main branch. We still have the CI, obviously — the CI runs for the metrics branch as well — but we are not publishing anything yet from the metrics branch. So I want to make sure the current metrics branch is a better alternative to the one which we have in the main branch.
B: Before we do the snap from the metrics branch to the main branch, it looks like there are a few concerns that need to be addressed. So I'll send PRs with my proposal and, if everyone agrees, we'll make it part of the metrics branch and then move it to main. There are at least two issues to be discussed. The first one is something which Allen brought up recently, and Riley and Michael also briefly discussed it.
B: It's basically: do we want the same Batch<T> API for exporting metrics, just like we have for traces and logs? Let me see if I can get to the actual code.
B: As of now, in the metrics branch we removed it for metrics, so metrics has a completely different API for exporting. I think Allen faced the same issue when he rewrote the OTLP exporter: you can no longer rely on just extending the existing base exporter, so you pretty much have to copy all the things from the base exporter into the new metric exporter.
B: So that's creating a lot of copy-pasted code, because these things are all supposed to come directly from the base class. The alternative is to keep the export interface the same for traces, logs, and metrics: basically, instead of an IEnumerable you'd get a Batch<Metric>. But that means we have to modify the Batch to account for a third possibility, because right now it understands two things: a circular buffer, or a single item.
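The change being weighed can be sketched as a batch type with more than one backing store. This is an illustrative simplification, not the repository's actual Batch<T> — here an array snapshot stands in for the circular buffer, and the open question is whether the extra branch shows up in the trace-path benchmarks.

```csharp
// Sketch of a batch that understands two backing stores: a single item and
// an array snapshot (standing in for the circular buffer). Adding a metric
// collection would be the "third possibility" discussed above.
public readonly struct Batch<T> where T : class
{
    private readonly T single;
    private readonly T[] items;
    private readonly int count;

    public Batch(T item) { single = item; items = null; count = 1; }
    public Batch(T[] array, int length) { single = null; items = array; count = length; }

    public int Count => count;

    public T Get(int index)
    {
        // This branch is the extra work the perf test has to measure:
        // does another backing-store check slow down the existing trace path?
        if (items != null) return items[index];
        return index == 0 ? single : null;
    }
}
```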
B: Now there would be a third item. Riley, you mentioned that we need to do a perf test. So if I submit a PR which shows perf numbers, is that the only thing you need before making a decision on whether we should repurpose the Batch to understand metrics or not? Yeah, okay. So I'll take it as an action item: I'll send a PR which compares the performance, and then we'll decide whether we want the Batch to be restricted to just traces or whether we can reuse it for metrics. Yeah.
A: I'll explain why. In the circular buffer we implemented a non-copy version, and the Batch is basically a pointer to the circular buffer. The benefit shows in some extreme cases — say you have the simple exporting processor, where for every trace or log the processor emits the data. If you need an allocation, like an IEnumerable, it means you have one allocation for each piece of telemetry data.
A: That's less of a concern for metrics. My thinking is metrics are always exported asynchronously: you can have millions of raw measurements being reported through the API, but you would never run into a case where you have many export invocations to the exporter. Those all probably run every five seconds or every one minute, and even every five seconds is already pretty aggressive. If we're trying to optimize that, then for every five seconds we're saving something like a 24-byte allocation on the GC heap.
B: Okay, that makes it very clear — I mean, it was already clear, thanks for sharing again. So I'll see if I can submit the benchmarks and make a decision based on that. I think we might be able to find some ways to repurpose Batch without affecting the tracing or logging path, but I have to actually write it and see whether the overhead would be zero or at least acceptable.
B: Yeah, there is a really nice property if you make it a Batch, because then all your exporters are very consistent: each one gets either an Activity, a LogRecord, or a Metric. It's very easy to model. But we will make the decision based on benchmarking, and if at all possible I would prefer to keep the same interface — let's make that call after the benchmarks, so we make an informed decision. Yeah.
A: Totally. I can imagine, if you have a console or in-memory exporter, you're saying it's just a Batch<T> and T can be anything, because it's much more convenient for customers to provide their own implementation — and that has to come with a reasonable performance balance. If we're saying that by achieving that convenience we're adding three nanoseconds to every single trace or logging call, that would be a concern, but I think technically you can solve that, even if you're adding it today.
B: Yeah. The reason why I did not want to optimize it early is that, right now, we have a dictionary which does the lookup to find where in the array the actual time series is stored. But none of that is exposed to customers in any way, so I'm hoping we'll leave it like that and optimize it after —
B: — we agree on the final exporter interface. We initially only had the dictionary, and then I added the array, with the dictionary acting only as a lookup for the index within the array. But as I said, these are all internal implementation details; I should be able to keep or modify them without anyone knowing — I mean, without any exporter author or customer knowing.
B: So I can work with you and see if we can have a better data structure than the plain array, or the combination of array and dictionary. Riley did mention that we could use a custom data structure of our own, which can be optimized to do that lookup very efficiently.
B: But my thinking is, since it's internal implementation, we should be able to come back to it after we solve the things which will be part of the public API.
D: The circular buffer is just an API over an array, I think. So what if we just added a mechanism so that you could store, instead of an index into the array, where the item is in the circular buffer? Basically, what I'm getting at is: if you made the circular buffer the store, then you could just use Batch as it is, and it would have the same performance.
B: I believe the issue was that the circular buffer has a mechanism to remove items: when you export, you set the activity at that index to null so the slot can be reused. But for metrics we aren't removing anything — we still keep the item around after an export — so we will never remove an item. It didn't feel to me like the circular buffer, or at least the current circular buffer, is the best fit.
B: It might just be moving the problem out of Batch, because now we might have to do that extra check in the circular buffer, and that could bring back the same concern which Riley mentioned for Batch: by introducing a new thing for the circular buffer to check, it might take up one or two extra cycles on the tracing path.
B: But if it's feasible, I think we can use it. I'd love to check whether the circular buffer is the right candidate or whether we need some other data structure. Only if it's exactly the circular buffer can we reuse Batch as is; otherwise we have to use some other data structure and then teach the Batch class to be aware of that new data structure.
B: So yeah, it all depends on what data structure we end up with. As of now it's an array, and that's why I had to do the measurement — I mean the benchmark — with Batch being taught how to navigate the array as well.
B: But I think I have a different problem to discuss in general. The Batch for metrics is one thing — I think I mentioned this in the PR. A Metric stands for a stream of metrics, not an individual time series, so each Metric within it has a collection, or list, of time series. For each unique combination of the dimensions it has a —
B: Yeah — no, not here; I think I still need to scroll a little bit, or maybe I'll directly show some code. Yeah, this is the one I want to show. So basically, when you are asked to export, you will decide whether it is a batch or something else, so that would basically be covered by this first foreach.
B: So whether it is a Batch or an IEnumerable, we will call it based on the earlier discussion. Once you get the Metric — a Metric has things like name and description — that Metric can have any number of unique combinations of key-value pairs, and based on that, as many metric points. I currently modeled it similarly to the Batch, but this Metric.GetPoints is not returning an IEnumerable, and it's not a Batch either. So I created a thing called BatchMetricPoint.
B: It is a public interface for people to code against, so I want to get some ideas on how best to model this, because this is going to be significantly more perf-sensitive than the overall Metric: for every combination you have to return a MetricPoint, and that is repeated for each and every Metric. So if you have 10 metric streams, you'll be running this foreach ten times, but the outer one will be called only once per export.
B: So this impacts perf much more than the Metric itself. And the thing about MetricPoint is that it's currently a struct, so I cannot use the circular buffer or the Batch. That's why I created my own equivalent of Batch which can navigate the structs, but it's not very efficient, for two reasons: one, I had to create a new public API just for that; and second, the actual BatchMetricPoint is doing a heavy copy.
B: Here it follows very closely what the Batch for Activity is doing, but here it's dealing with structs, so it involves copying, which may be really inefficient because we are copying the whole thing. Riley did mention that we could use a pointer and pass the pointer around, but I haven't finished exploring that. So if this is a good discussion for now, we can do it now.
B: Can you summarize your thoughts? Because I briefly discussed this with you before — about copying the whole MetricPoint because it's a struct — and you had some ideas on how to avoid that by treating it as just a pointer, and then, within the MetricPoint, we —
B: — provide some helper methods, so the user doesn't have to know whether it's coming from a pointer or something else. We'd provide something like GetHistogramValue or GetGaugeValue, and we'd internally navigate the pointer and read the right value. So Riley, can you share your thoughts? And maybe I'll ask if Alan or Michael have other thoughts, and based on that I can focus my next set of investigations.
A: Yeah, I'll paste something here in the chat. I think in dotnet — the newer versions of dotnet — there's a way you can use Memory<T> and Span<T>, and these essentially give you access to the low-level data in memory, instead of you having to do any copy.
A: So essentially this is syntactic sugar that allows you to use C# like C++. You can see I pre-allocate a bunch of memory and store the metric there, and I'm just giving this pointer to the caller: you go and fetch the data directly from the memory, instead of copying it from somewhere else to your local stack and doing all the creative things. So that's for performance improvement.
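The pattern being described — pre-allocate storage once and hand the exporter a view instead of a copy — looks roughly like this. The types here are illustrative, not the SDK's actual MetricPoint storage.

```csharp
using System;

// Illustrative only: a pre-allocated backing array of value-type points,
// exposed to the exporter as a ReadOnlySpan<T>, so reading a point copies
// nothing beyond the span itself to the caller's stack.
public struct Point
{
    public long LongValue;
    public double DoubleValue;
}

public sealed class PointStore
{
    private readonly Point[] storage = new Point[512]; // allocated once, reused forever
    private int count;

    public void Record(long l, double d)
    {
        storage[count].LongValue = l;     // write in place, no per-measurement allocation
        storage[count].DoubleValue = d;
        count++;
    }

    // The exporter walks the memory directly instead of receiving copies.
    public ReadOnlySpan<Point> GetPoints() => new ReadOnlySpan<Point>(storage, 0, count);
}
```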
A: On the dependency question — yeah, I tried that. Memory<T> and Span<T> both work well for all the versions that we support, so anything greater than .NET Framework 4.5.2 or .NET Core 2.1 will work. So it's more than good enough for us.
B: Okay, yeah, I need to spend some time on how to make use of the Memory class and avoid this inefficiency which I currently have here. But would you imagine that the API would look something like that? For example, would the users be —
B: So do you have any opinion on this API? Because once you have the Metric, you have to call some method on it to get all the time series — or metric points, in OTLP terms. So would we be exposing, instead of a new Batch, potentially something like this? Is that what you really mean by leveraging the Span/Memory API?
A: Yeah, I think we need to give some helper methods to allow the exporter to travel through the data structure that we have for metrics, and that API has to be efficient — ideally we could avoid a lot of unnecessary allocations. As for whether that API exposes the low-level structure that we store in memory, or gives some abstraction —
B: Got it. So even if we are using a struct like this, we don't really expose how we are storing the histograms within it or how we are storing the value. Instead, we'll have some methods on this type which the exporter would call, and that method internally knows how we are storing the value, whether it's a span or something else. Is that what you really mean?
A: If it's efficient. So if we have a get method that is inlined — and once it's aggressively inlined there's no difference compared to just going to a particular address and reading the double value — then I think we're good. If we're saying it'll be a virtual function call, and you're going to jump and flush your CPU instruction cache, that seems to be too much. Got it — so right now it's just a public —
B: — field. So if you can rewrite it as a function which achieves the same performance, then we should go for it. Because right now it's just a LongValue and a DoubleValue, and the user has to decide: if it's a gauge, the double value is the gauge value; if it's a histogram, the double value is really the sum.
B: So we should provide some helper methods, and the user can just call GetHistogramValue and it returns the right thing based on our internal representation. Yeah.
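The helper-method idea — intent-revealing getters hiding the raw fields, aggressively inlined so there is no call overhead — could look like the sketch below. The field layout and method names are illustrative; the real MetricPoint may differ.

```csharp
using System.Runtime.CompilerServices;

// Sketch: the struct keeps its raw long/double slots private and exposes
// inlinable getters, so the exporter never has to know that a histogram's
// sum happens to live in the same slot as a gauge's value.
public struct MetricPointSketch
{
    private long longValue;
    private double doubleValue;

    public MetricPointSketch(long l, double d) { longValue = l; doubleValue = d; }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public double GetGaugeValue() => doubleValue;   // for gauges: the reading itself

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public double GetHistogramSum() => doubleValue; // for histograms: the same slot is the sum

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public long GetHistogramCount() => longValue;
}
```

With aggressive inlining the JIT typically reduces each getter to a direct field load, which is the "no difference compared to just going to a particular address" condition Riley set.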
B: Okay, yeah. The other thing which I really wanted to ask — maybe in another PR we already left some comments — is that this struct has too many fields. I think by using the union approach, where we specify custom offsets, we should be able to minimize the overall size.
B: Any concerns with that approach? The alternative approach which I tried was to make it an interface, like IMetricPoint, and implement concrete ones for histogram, gauge, and so on, but that kind of defeated the performance gain we have, because then we were essentially boxing it. So that's why I stuck with the struct, except that I haven't done the unionizing part yet — that's what I intend to do. So if there are any concerns or any feedback, please let me know.
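The "union" approach mentioned above uses explicit struct layout so that mutually exclusive fields share the same storage. A sketch — the offsets and field names are illustrative, not the actual MetricPoint definition:

```csharp
using System.Runtime.InteropServices;

// Sketch: overlapping mutually exclusive fields with explicit offsets, so a
// point that is either a long counter or a double gauge spends 8 bytes on
// the value instead of 16. A discriminator records which view is live.
[StructLayout(LayoutKind.Explicit)]
public struct UnionPoint
{
    [FieldOffset(0)] public long AsLong;     // used when the point holds an integer value
    [FieldOffset(0)] public double AsDouble; // overlaps the same 8 bytes for double values
    [FieldOffset(8)] public byte Kind;       // which of the two overlapping views is valid
}
```

Only one of the overlapping fields is meaningful at a time, so reading the wrong one reinterprets the bits — which is why the discriminator (or the helper getters discussed earlier) has to gate every access.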
B: Ignore the formatting — I will fix the formatting after the code itself settles. So I think I got answers for most of the questions I wanted. Alan or Michael, do you have any questions, or anything to be discussed in particular about the metrics?
B: Okay, nothing, it seems. I'll take some action items to send PRs to get feedback, and then we'll take it from there; as soon as we are satisfied with the state of the metrics branch, we'll do the snap to main and do the release.
A: So I have a question about performance. My understanding is that for metrics we want to optimize for extreme performance on the measurement-reporting path: anything from the instrumentation — from the meter and the instruments — to the pre-aggregated in-memory representation should be super fast, and we do whatever we can to reduce allocation, ideally wanting absolutely zero heap allocation. Regarding the exporting side —
B: Yeah. One reason why we try to optimize — or rather refactor — the export part is that it's part of the public API, so we want to give ourselves enough flexibility to change it in the future; if we ship a particular model for the exporter, we have to live with it. But on the hot path there is nothing publicly exposed — the only API is the one from dotnet itself, which is stable. Anything we do internally so far is a bit inefficient, but at least it's not publicly visible, so we have the flexibility to keep changing it until we reach the desired performance. That's the reason why I did not spend more time optimizing the hot path. I put comments in the TODOs, maybe in one of the PRs, saying that the hot path right —
B: — now — I'd have to find the PR where I commented — involves the in-memory sorting of the keys and values, and we have to find a way to optimize it, because based on my experiments that in-memory sorting is almost 50% of the entire time we take for recording a raw measurement.
B: But like I said, since it is an internal implementation detail, we should be able to optimize that and bring it down even further. And regarding everything else — whether we use the dictionary along with the array, or just the dictionary, or a different data structure — like I said, we should be able to solve it without any user ever knowing. Okay.
A: So for the Batch<T>, my guess is, if you put in some specialization logic — for example, if typeof(T) equals typeof(Metric) — the JIT compiler would be smart enough, when it's generating the code for metrics, to just take that path, and I assume that wouldn't affect the tracing and logging part at all.
A: Now, by the way, I just put a concrete case here — if you scroll up, people are saying: in the environment variable my maximum queue size is 10, and in the code they don't specify the max export batch size, so by default it's 512 — currently we will throw an exception, right? I think so, yeah — we have a validation which checks that. But it wouldn't make sense for the user to change some environment variable and get greeted with an exception.
A: I'll stop here — just to get people thinking.
B: About that, we did briefly mention it, maybe two or three weeks back, and yeah, we'll make sure it's addressed before the stable release. Okay, any other things to discuss? So Alan, you had a question about the summary — it seems like that was discussed in the spec SIG. We'll wait for the official spec change to land before we add the summary, or add min and max to the histogram; by default we'll just stick with what we currently have.
A: Just to give people a quick recap: I think Jack brought this question up earlier today, and the conclusion is that people think having the min/max on the histogram is a nice thing to have. So we're supportive of having that in the protocol as optional fields, and also of specifying in the SDK what min/max means when you export cumulative or delta.
A: I think the current approach is: if you have cumulative, you report the min/max that you've ever seen since the beginning of the application; if it's delta, that means the maximum and minimum values seen during the collection cycle — that is, just since the previous export.
A: And there are some debates — I believe there are folks who want even the delta to report min/max based on everything seen since process start — but it seems like that's a minority, so we're not taking that approach.
B: Okay. Anything else? If not, we can meet again next week and look for some PRs which address the questions we covered today.