From YouTube: 2022-02-22 meeting
C
So I don't know whether the Zoom meeting link got updated. Previously, a few months back, all the YouTube videos had the same title with just the date and time, but since the Zoom links were updated to be specific, the YouTube titles became easy to search: you'll have a title which says "this is the .NET SIG", "this is Go", "this is Java".
C
Yes, so I did not have a spec meeting this morning, but it looks like there are a couple more issues the spec team wants to merge before marking it as stable. One long-standing issue has been merged as of last week.
C
So I don't have a date yet, but we'll still be doing another RC sometime this week, because we did have quite a large number of changes. I mean, they may be small, but there were significant changes in the last week, so we should most likely be doing another release candidate. We can include the incoming changes which are reacting to the recent spec changes. So no update on the 1.2 stable release, but we will have an RC3 late this week or very early next week.
C
So that's the only update I had. We'll go over the agenda. I think, Alan, you already went ahead and created issues, and it looks like you already have a draft PR. Basically, the spec is being more flexible now in terms of what is considered a duplicate, so we need to tweak the SDK to change our behavior, based on my initial look.
B
Yeah, I'll continue to work on that draft PR. I'll have some updates on that today.
C
Yeah, and I think you mentioned the second part. There is another set of spec changes which calls for allowing multiple callbacks for a single observable instrument, and that should be something to discuss with the .NET team, how they want to expose it.
C
If I recollect correctly, that's an optional thing in the spec, so we don't really need to treat it as a P0 thing. It's okay to not have it and still ship the stable release, raise it with the .NET team and have them natively support it, and then we can add it as an additive change in the future.
C
We can quickly go through the spec issue, I mean the spec PR, just to confirm that it is an optional thing. This PR changed its shape quite often, so when I was reading it initially it said it's optional. Let's wait for it to load.
C
So it's a "should", and if this feature is allowed, then there should be an equivalent deregister thing somewhere, undoing the effects of callback registration. So yeah, it should be optional. I don't think anyone updated the compliance matrix yet, so hopefully someone will do that, with an asterisk saying it's an optional thing. Okay, so we'll raise it in the dotnet runtime repo and get some feedback from them on whether there are any other considerations. I think in terms of a workaround we should still be covered.
C
I mean, if some user really wants it, someone suggested a workaround in the spec: they'll register one callback, and within that single callback they can in turn invoke multiple ones. It's really on the user to orchestrate that, but that would be the workaround if someone needs it before the dotnet runtime supports it.
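The fan-out workaround described above can be sketched generically. This is a minimal Python illustration, not the OpenTelemetry .NET (or any real SDK) API; names like `Observation` and `make_fanout_callback` are invented for the example.

```python
# Hypothetical sketch of the "single callback fan-out" workaround discussed
# above: the SDK accepts one callback per observable instrument, so the user
# wraps several callbacks behind a single one. Names are illustrative only.

class Observation:
    def __init__(self, value, attributes=None):
        self.value = value
        self.attributes = attributes or {}

def make_fanout_callback(callbacks):
    """Return one callback that invokes each user callback in turn."""
    def single_callback():
        observations = []
        for cb in callbacks:
            observations.extend(cb())  # each user callback yields observations
        return observations
    return single_callback

# Usage: the user orchestrates multiple sources behind one registration.
cpu_cb = lambda: [Observation(0.42, {"core": "0"})]
mem_cb = lambda: [Observation(1024, {"kind": "rss"})]
fanout = make_fanout_callback([cpu_cb, mem_cb])
print([o.value for o in fanout()])  # collects from both callbacks
```

As the speaker notes, the burden of orchestrating (and deregistering) the inner callbacks stays entirely with the user until the runtime supports multiple callbacks natively.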
C
Okay, so if there are no questions about that part: by the way, on the spec there are more changes incoming. I think some of them might have further implications on the SDK implementation, so we'll see how that goes. One of them in particular was about aggregation temporality: delta, cumulative, and a third thing called stateless, or something like that. Depending on how and in what shape that gets merged, we might need to do some changes. It won't be a simple change, because we have to do more.
C
It won't be a trivial change of just introducing a new option to the aggregation temporality, because we need to do something to convert the observable callbacks and tweak the logic to respect the stateless part once it lands. So we'll worry about it once the spec PR is a little bit more solid; it's still getting a lot of comments and it's an active discussion. I expect we'll react to it once it's a little bit more stable than it is currently. Is it something which you are actively looking at?
B
Yeah, that one is of interest to me, mainly because I work at New Relic, and our support for cumulative metrics is still underway, so for some things we're wanting delta aggregation. However, while I think there will be some impact on the .NET SDK, it may be relatively light. The main change I think that PR may create is for up-down counters specifically, which we don't yet have support for.
B
The idea is that an up-down counter aggregated with delta temporality is not very useful, so we actually prefer cumulative. The community is kind of rallying around and stating a preference that cumulative is really the only sensible thing for up-down counters.
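To illustrate why delta temporality is awkward for an up-down counter, here is a generic sketch (plain Python, not any SDK's API): with delta, each interval reports only the net change, so a single exported point says little about the current level, while cumulative reports the level directly.

```python
# Generic sketch: delta vs. cumulative temporality for an up-down counter.
# Events are +1 on "acquire" and -1 on "release" (e.g. active requests).
events_per_interval = [
    [+1, +1, +1],  # interval 1: three acquires
    [-1, +1],      # interval 2: one release, one acquire
    [-1, -1],      # interval 3: two releases
]

running = 0
deltas, cumulative = [], []
for interval in events_per_interval:
    change = sum(interval)
    running += change
    deltas.append(change)       # delta export: net change within the interval
    cumulative.append(running)  # cumulative export: current level

print(deltas)      # [3, 0, -2] -> one delta point alone says little
print(cumulative)  # [3, 3, 1]  -> each point directly gives the current level
```

A backend receiving only the delta stream must re-accumulate from the beginning to recover the level, which is why cumulative is preferred for up-down counters in the discussion above.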
C
Okay, got it. Even in Microsoft we have some legacy systems where we only accept delta, so whatever issues you have will most likely be faced by Microsoft as well. I was intending to take a closer look, but since it changed again, I now have to re-read it to see if it is still applicable to the Microsoft internal backends as well.
C
Okay, we'll come back to this as it makes more progress. If it is purely applicable to up-down counter with no changes to counter, then that's one less thing to worry about in the 1.2 time frame. But if it is applicable to counter as well, primarily by introducing a new aggregation preference or aggregation temporality, then we'll have to make some changes. In the initial version of this PR I did take a look, and it looked non-trivial because of what we'd have to do.
C
I mean, probably not that bad, but it's something we'd have to do and then wait at least a few days before we go stable after that. Anyway, we'll come back to it as soon as this gets more settled. From the spec meetings, I don't think this was marked as a blocker; maybe you can pull up that page right now, to check whether this is considered a blocker.
C
Yeah, so as per today's meeting, which I did not attend, the only two things considered blocking issues for the SDK spec are these two; the one we're discussing is still considered non-blocking, so maybe people would get to it relatively later.
C
As long as we don't change defaults, I think we're fine: if we are currently doing cumulative and delta, and we continue to do that in the future, that should be fine. If all the PR is asking is to introduce a third form, then that's an opt-in thing; the user has to explicitly say "I want the new stateless thing". But I don't know whether that's how it's going to land, so depending on how it lands, it may or may not be a blocking one.
C
Okay, let's see if there is anything more related to metrics stability. So this long-pending PR: we finally merged it, and I think, Alan, you've already created the clarifying questions. We'll probably discuss it in the PR itself, actually in the issue; maybe I can find it right now. Since we have at least three of us maintainers and two more folks here, we can just discuss the issues.
C
Okay, just to recap: my current understanding is we move the metric reader options, which are not tied to any exporter, so they're really exposed by the SDK.
C
And when you are configuring an exporter, you are asked to pick which reader type you want, two options, manual versus periodic. The question is what should be the default, and would that default be the same for all the exporters? Or would it be, like, console as one and the others different?
B
Yeah, correct. The status quo today is that the OTLP exporter defaults to periodic and the console exporter defaults to manual. In a prior conversation Riley suggested that he thought manual might make the most sense right now for console, though you bring up a good point: as you read through the spec and take the hops through the various things, it kind of suggests that console, or anything that is a push metric exporter, should be paired with the periodic
C
Exporting reader, yes. It's only if you read it as-is; I don't know whether it was intentionally written that way or not. Maybe you can quickly check whether you read it the same way. So this is the spec for the console exporter, which is surprisingly stable.
B
Yeah, regardless of what the answer is, I think the spec could be clarified. The actual console exporter spec should say that without having to make this hop, or the push metric exporter part of the specification maybe needs to be clarified or loosened so that it doesn't have to be paired with a periodic exporter, because Riley's original argument was that it...
C
If this is the case, then even if it's a console exporter, it's push-based, but it doesn't export anything every one minute or every 60 seconds. Maybe it doesn't matter what the time interval is; it can only do something when something happened in the SDK. So it's mostly saying it can even be made a manual one. The only question is what should be the default, because even today, once we apply the metric data type change, it applies to all the exporters.
C
So the only question is about the default. Maybe you can see if the spec can be very explicit, so we don't need to have this conversation, because whatever we do, it would be ideal if it's the same across all languages like Java and Python. It could look a bit awkward if one console exporter is exporting every few seconds and another one isn't.
C
Okay, and if you are going for that, go for it. I wanted to ask you a related thing; this is something we spent a lot of time on earlier as well. This example is where a lot of confusion came from, because initially it was timer-based by default. But then the question is: does the console app run long enough?
C
There was another suggestion: I chatted with Utkarsh about it. Basically, replace this example with an infinite-loop kind of example, and then do the timer thing, I mean make it the periodic exporter. Right now it works purely because of the using statement, and I have seen many people make that mistake.
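The pitfall being described can be sketched generically: a periodic exporter in a short-lived app never hits its timer interval, so measurements only get out because disposal (the role of the C# `using` statement) forces a final flush. This is a hedged Python analogue using a context manager, not the real OpenTelemetry API; the `PeriodicExporter` class is invented for illustration.

```python
# Generic sketch: why a short-lived app with a periodic exporter only emits
# data because disposal forces a final flush. Not the real OpenTelemetry API.
import time

class PeriodicExporter:
    def __init__(self, interval_s=60.0):
        self.interval_s = interval_s
        self.exported = []   # what actually reached the "backend"
        self._pending = []
        self._start = time.monotonic()

    def record(self, value):
        self._pending.append(value)
        # The timer would only fire after interval_s (60 s here), far longer
        # than a hello-world app runs, so this almost never flushes.
        if time.monotonic() - self._start >= self.interval_s:
            self.flush()

    def flush(self):
        self.exported.extend(self._pending)
        self._pending.clear()

    # Disposal (the `using` statement's role in C#) forces the final flush.
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.flush()
        return False

# Without disposal (forgetting `using`), nothing is ever exported:
e1 = PeriodicExporter()
e1.record(1)
print(e1.exported)  # [] -> the app exits before the timer ever fires

# With disposal, the pending measurement is flushed on exit:
with PeriodicExporter() as e2:
    e2.record(1)
print(e2.exported)  # [1]
```

This is the mistake mentioned above: people copy the example, drop the `using`, and conclude the console exporter "doesn't display anything".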
C
Unless you are familiar with the SDK, it's a bit confusing to see that the console exporter doesn't display anything. So what would you propose to do? Just open an issue in the spec asking for a clarification on what should be the default for console and other exporters, in terms of whether they are paired with a periodic or manual reader?
C
Yeah, creating an issue is one good way to get attention, though people have less of a need to respond to it; if you go and change the actual spec, then more likely you'll get more responses. Once you open the issue, you can even put it in the Slack channel for dotnet also, so we can see if we can support you there. But I think there are still a couple of follow-ups for the exporter options, right?
B
Yeah, so that's a follow-up: now that we have this in place, we can do the same thing for the console and maybe the in-memory exporter. I forget if the in-memory one has these options or not.
C
I think in-memory would have the same thing, because the spec says they can be both. It simply says in-memory can be a push one. Oh sorry, it can take cumulative; it's always like that.
C
Okay, that would be great. Just to make sure: once we do that, I think it makes sense for us to do it in its entirety before we do the release, right? Otherwise it's partially supported by some exporters but not by others. So once you create the issue, let's create a tag, sorry, a milestone for RC3, and mark that issue as part of RC3.
B
Yeah, that should be cool; they're quite quick ones.
C
Okay, that's the end of questions. Next is the issue about the in-memory exporter and metrics. Maybe, Mothra, if you want to talk about it, I can continue to share the screen. Or if you can show your screen, please; let me close mine.
D
Okay, so I've been looking at this issue with the in-memory exporter. Because we persist the metrics, and we're exporting the same metrics every time, what happens is that the exported collection will just grow with duplicate instances of the same metrics.
D
So I think that presents a slight perception problem for users: they're looking at this, they see their list continue to grow, but it's not actually new items. It's just the same item that continues to be exported.
D
The second issue is that if you export twice, for example, you've now lost the values from the first export, because they're not persisted anywhere. So I've been working with CJ, and we're looking at two different ways to address these. We can address the growing exported collection by just clearing it every time the in-memory exporter does a new export, and I've got a POC for that in the PR.
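The clearing behavior being proposed can be sketched minimally. This is a generic Python sketch under the assumptions of the discussion, not the actual .NET `InMemoryExporter`; the class and method names are illustrative.

```python
# Generic sketch of the proposed in-memory exporter behavior: clear the
# user-visible collection on every export so it only holds the latest batch,
# instead of growing with duplicate instances of the same metrics.

class InMemoryExporter:
    def __init__(self, destination):
        self.destination = destination  # user-provided list to inspect

    def export(self, batch):
        self.destination.clear()        # drop the previous cycle's items
        self.destination.extend(batch)  # keep only the latest metric points

items = []
exporter = InMemoryExporter(items)

exporter.export([{"name": "requests", "value": 3}])
exporter.export([{"name": "requests", "value": 5}])
print(items)  # [{'name': 'requests', 'value': 5}] -> no duplicate growth
```

With one counter, the inspected list holds at most one metric at any time, which is exactly the behavior E describes a bit later in the discussion.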
D
Right, it would just be the in-memory exporter. And then the second issue is: do we want to provide some way to deep clone the metric points? Utkarsh and I were kind of going back and forth last week about different ways to achieve that, and we discussed that we may not need it at all; we can just cut that if we clear the collection right before we do the export.
C
I'm trying to think by comparing with tracing: how does it work for traces? For tracing, the in-memory exporter gets a batch of activities.
C
Okay, now I get it. So if, for example, you had a single counter, every time the collection is triggered you would export a metric for that particular counter, with as many data points as it has; but in the next export it's still the same thing, it's just that the values are updated.
C
So there is something about the contract: once we export to any exporter, there is some contract about what is expected by the exporter. For tracing it's very clear: once you return control back, that activity is gone forever; there is no way to restart or reuse that activity. So next time you are always going to get an activity which was not part of the previous batch. For metrics we don't have that guarantee.
E
On every export we clear the collection, and then we add whatever is the latest collected metric point value. That way we will at most only have one metric. If we are only using one counter in our instrumentation, we will only have one value at any given point in the exported collection.
C
So if a user were to use the in-memory exporter just to check whether their metric instrumentation is correct, let's say they are testing it after three collects. Three cycles have gone by. With the proposed change, where the in-memory exporter clears its items after each export, at the end of the third one I'll still be getting a single metric. Let's assume there's a single counter: I'll still be getting a single one, and that will contain the state after the third collection.
C
Yes. If it is cumulative, I'll get everything from time zero to the end, but if the temporality is delta, I'll only get the change from collection two to collection three. If the user just wanted to really validate everything from the beginning, then they have to do that inspection within every collect cycle, in the in-memory exporter's export method.
C
It makes sense for the exporter itself to clear it, because otherwise we are simply growing the memory, which serves no purpose. Even if the user only observed this thing after 10 cycles, there is no way for them to go and look at what the state was between collection 1 and 2, because we are going to overwrite the memory. So there is no way they can do that at the end anyway.
C
You should not assume that the things we gave you earlier will be untouched, because it's an SDK internal implementation detail that we are reusing the same thing. So the contract would be something like: once you return control, we assume that you already took whatever you needed, that you copied it or moved it somewhere.
C
I think that's something we should write in the exporter contract. We have a very small but very important section about what the expectations are.
C
Let's see if we have something like that. Is it in "extending the SDK"? Yes, we do. Oh man, this is incorrect, probably a copy-paste error.
C
So it should derive from the base class, it can optionally implement these, and it should not throw exceptions. Okay, so we need to make some clarification here, because as the exporter you receive a batch of metrics, and before you return control you are expected to do whatever you are intending to do with it.
C
Within that timeframe. Once you return control, you no longer own it and we are going to update it. Does it make sense to update this first, or maybe do both at the same time? I don't know what other way there is to enforce this as a contract, other than documenting it here; or maybe we can modify the export method's public docs to indicate that you only own the batch for the duration of the call.
B
It almost feels like this warrants a clarification, maybe at the spec level too, in the sense that there is an in-memory exporter specification, and I guess this is kind of raising the question of what's supposed to be kept in memory. I can see one interpretation where I actually do want the duplicate metrics, which maybe should have the actual time interval of that metric.
C
That's why, if the SDK were not that memory-conscious, if it were creating new memory after every collect, then it would have worked the way you just described: every export would give a batch of metrics to the in-memory exporter, and the next one would give a fresh batch, not tied to the previous one in any way.
C
But even though we claim to give a batch of metrics, underneath it really points into the same thing, so it would overwrite whatever you saw in the previous one. So I'm not sure where that clarification should be: whether it is in the spec, or whether it's the OpenTelemetry .NET SDK's contract to every exporter, "this is my contract to every exporter".
C
That's what I was trying to say: we should say somewhere here that once you return control, you don't own that memory anymore. We could overwrite it or we could not overwrite it; that's purely our internal thing. So if you really care about the state at a given point, you have to read it and store it elsewhere.
D
So to address that point, would we also need to introduce a helper method to do the deep copy of the metric points, so that if the user expects to have the point-in-time metrics from, say, the first export, they have an easy way to persist that?
D
In the PR I proposed deep cloning the metric point; it's a struct, so it's kind of easy to make a shallow copy.
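The shallow-versus-deep distinction under discussion can be illustrated generically. This is a Python sketch, not the actual .NET `MetricPoint` struct; the dictionary stands in for a value-type point that holds a reference (for example, a histogram's bucket counts), which is exactly when a shallow copy is not enough.

```python
# Generic illustration of shallow vs. deep copy for a metric point that
# holds a reference (e.g. bucket counts). Not the real MetricPoint type.
import copy

point = {"value": 5, "buckets": [1, 2, 0]}

shallow = copy.copy(point)   # copies top-level fields only
deep = copy.deepcopy(point)  # copies nested state too

# The SDK reuses the same storage on the next collect cycle:
point["value"] = 9
point["buckets"][0] = 7

print(shallow["value"])    # 5 -> the top-level field was copied
print(shallow["buckets"])  # [7, 2, 0] -> nested state is still shared!
print(deep["buckets"])     # [1, 2, 0] -> the deep clone is truly detached
```

This is why the discussion keeps returning to deep cloning: a struct copy detaches the scalar fields, but any referenced arrays would still be overwritten on the next cycle.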
C
Okay, I think that sounds okay for now. Alan or Michael, any other objections to that part? We could always add a new capability on the metric to do the deep clone; it should be an additive change.
E
But I think we will also have to decide where to persist that deep-cloned metric point array, because if it's just in the aggregator store, then it's going to change again on the next export cycle. When we call the deep clone again, that same array is going to be updated from the original metric point array, whatever it was. So it should probably be persisted at the in-memory exporter level itself.
C
The persistence would be done by the in-memory exporter; it would be doing the persisting part. But what I'm trying to say is the SDK would expose a new method which would make it easy for the in-memory exporter, or any other exporter, to get a good deep clone of the entire thing, guaranteed to be unaffected even after I return control from export. Right now, once control is returned from export, the same memory could be overwritten, but with the new deep clone thing I get a new data structure.
C
I mean a new instance of the batch of metrics which, even after I return control, I can still play with, assuming it's going to remain the same forever, no matter how many further updates happen. If some updates happen, next time it would be a new instance which I'm going to get. So yes, it needs to be a deep clone.
D
I think that makes sense, to expect the exporter to export a separate, standalone instance of an object. I looked at that for the metric object itself, and I don't see a straightforward or obvious way to clone the metric. It's a very large object.
D
That's why I was looking at a workaround to clone the metric point instead, which sort of breaks what CJ proposes. If we expect the exporter to be doing the cloning and that's not an option, maybe we just don't do any cloning at all and just clear the collection. I think that's acceptable.
C
If there is a need, we can do it after 1.2. So that's all I would say we need to do right now, unless I hear otherwise.
C
Okay, Alan, could you clarify what aspect we should, if at all, ask the spec to clarify? What would that be?
B
My question was more coming from the standpoint of: should I expect some uniformity across the language SDKs for what is in that in-memory thing, and expectations around what I should see if I inspect that memory at a given point in time?
C
Yeah, I think the spec is maybe intentionally unclear on that, because it shows an in-memory state in a block diagram without explaining what that state should be. I think it was intentional, to leave the flexibility for individual SDKs to decide how that structure should look. It just says that there is a state; what exactly is contained there is probably intentionally left open. A few weeks back the original spec
C
It had a specific way of representing things, and people opposed that, because it didn't allow every SDK to do their own optimizations, or maybe it forced every SDK to rewrite everything from scratch. So it was intentionally decided that SDK authors, the languages, can decide how they want it.
C
It was very clear; I think somewhere in the spec it says in-memory and console are just for inner-loop kind of testing, not for any production use, so in-memory is useful for unit tests, I suppose. Yeah, I think we can do what we described. Just to recap: we need to update this document, which is more like the contract, how we expect things to work if you are writing your own exporter. This is what we expect them to read.
C
Most practically they'll just look at the existing example, but this is more like the contract, because there is nowhere else we can make it a contract. So when you are doing that, update it here as well; this part is just a copy-paste, so if you can clean that up, that's also great. Number two is to create an issue, or just note in the existing issue, that in the future nothing prevents us from providing a deep clone from the SDK itself. So today the SDK says:
C
If you are an exporter, do that yourselves, and once you've returned control, don't assume that it's yours anymore. Okay, that should at least unblock this issue. I don't know whether it was tagged 1.2; no, not tagged.
C
I assumed that it's not blocking, actually, so I never tagged it with 1.2; we should be able to just address this. And we won't be needing many changes in our unit tests, because our unit tests were already aware of this limitation, so they were already doing that cleanup. If the in-memory exporter is going to do that for them, then it could require some cleanup in the unit tests as well. Okay, I can look at that.
C
Great. Let's see if there is anything more. Maybe we will do a quick review of PRs, just to see if there is anything which we need to look at. I think this one is very straightforward.
C
Yeah, I'm just wanting to highlight that there might be people who are submitting PRs here, so we just want to set the right expectation that if we are doing a stable release, or any release, we can decide to defer a PR. One example is if that PR is introducing a new change which we believe is too risky to be included in the stable
C
release. So we'll just say "don't merge it now". Another case is it exposing a public API when we are not sure whether we need to expose that; we always start with the minimal one and expand. I don't know whether we need to be very explicit here that maintainers can defer a PR to the next release train, so just give a thumbs up or raise issues in the PR.
C
One specific example was somewhere here; I forgot where it is. This was about the log record: we are trying to expose new setters for LogRecord, and I put a comment in the PR making sure that we have enough time to go through it. Since the focus is still on metrics, we'll just say: okay, the PR is good, but we just don't merge it yet.
C
That's one example, and there are other examples, but here the author himself says "don't merge it", because we need time to think through it. So that was just the background for making this tweak.
C
Okay, so this one is covered, and this is a duplicate. This one is Alan's: you can let us know when it is out of draft state, or if you want us to take a look while it is still a draft, please let us know. This one, let's see. Okay, this should be mergeable now; it has enough approvals.
C
Okay, that makes sense, because I was also confused by this when we added the code coverage. I think Ed was the person who enabled it a couple of years back. This was a bit confusing, so that's one of the reasons why we decided to not make it a required check. It's only clear if you come all the way here, where you can see that some of them are required and some of them are not. We have seen many people asking the same thing: hey,
C
"This is failing." It's not obvious to them that it's not a required check. So I think this suggestion makes sense. We don't know whether it's feasible to always return success so that this would be green, but it should be possible, because we own the script which triggers all these things. We should be able to do something so that at least the PR will look green here, while it still shows the details.
C
Long term, is there anyone who has expertise in modifying the CI to return green all the time? If not, I can cut this into an issue and see if we can get some experts to do that. Maybe I'll ping Eddie; he might know, even though he's not active these days, but I can just ask, because he did the groundwork for enabling all these things.
C
Okay, I'll take that as an action item on me to create a separate issue and see if we can get some more ideas. That said, this PR should be fine.
C
Yeah, it's just extending the try-catch, so we should be able to merge this PR; it already has approvals, so we should be able to merge that part.
C
So I put some notes here. Maybe it's not related to what we are currently focusing on, but just a quick update. The logging provider has two things: one is the state, and then there is another thing called scope. For scopes we already make sure that we access the scope while we are in the log statement itself; we don't try to access the scope once we are outside that code path or context. The only time it gets triggered is
C
with a batching exporter; in that case we make sure that we clear the scope, and there is an opt-in by which the user can choose to copy that scope into a new data structure. They have to pay some cost, because you have to create a new allocation where the things from the scope are copied over.
C
So we fixed that for scope, but it looks like even for the regular log state there is no guarantee that the state is going to be available after the log statement has returned.
C
This is a very specific example where ASP.NET Core is putting something into the log state which is backed by a field tied to the actual request. If you try to access this in a batch processing scenario, the request is already gone; you ask for the iterator over the state, and at that time it triggers this part which tries to access a request which is already disposed, so it throws an exception. That's very bad,
C
if you ask me, because you just enabled logging in ASP.NET Core, and this is something which ASP.NET Core does by default. I don't know whether users usually do this themselves, but there is nothing preventing them from doing it; and since ASP.NET Core itself is doing this, it's very bad: you take your hello-world app, enable OpenTelemetry, and your app is gone.
C
In the simple case we don't need to do anything; in the batching exporter we can choose to copy it over, I mean we can iterate it and copy it. It's going to be a little bit more expensive, so we'll come back to it when we have enough time, because, as we all know, we're just trying to get the metrics out. We should be able to come back a bit later and address this.
C
Maybe I'll also ask the ASP.NET folks whether they faced this before. I worked on the Application Insights side of it, where we export, but we never saw this problem, because in the synchronous code path Application Insights copies whatever it needs into a different data structure.
C
The same goes for the built-in console exporter from ILogger itself: in the synchronous log path it takes whatever it needs and writes it to the console or someplace else, so there is no need to access it after the log statement is complete. So I'll ask the ILogger owners whether it is intended that providers would always copy whatever they need before
C
returning control, or whether this is just a case of ASP.NET Core choosing to back the state with something which is gone after the request is gone. Either way, we won't likely have time to address it immediately, so we'll just do the "prevent the app crash" bug fix and come back to the actual problem later.
C
Okay, closing this. Any other PRs? This one is just a draft, just to rework some CI things, and this is something we already discussed. This is something we'll come back to later. This one was already covered last time; there are some CLA signing issues. And the PR bot is already cleaning up a lot of things, so if there is any PR which you are working on but it was closed by the bot, please raise a comment and we can reopen it. I think that's all we have.
C
All right, I'll do some updates on the notes after the call, based on what we discussed. See you all next week.