From YouTube: 2021-06-15 meeting
A
B
On the call, please don't forget to update your name in the attendee list. All right.
B
Okay, I think we can start. I have a couple of small updates; let me go over them first. So, like I mentioned in the last meeting, we have to unblock this PR for merging, because this PR is really a blocker for the 1.1 release.
B
So basically we are trying to move the deferred tracer provider builder to the API, so that instrumentations can continue to depend only on the API and not the SDK. Michael raised a good point: should it just take a dependency on Microsoft.Extensions.DependencyInjection.Abstractions, or should we create our own
B
version of that, to avoid the dependency? I had some offline conversations trying to get some sort of guarantee that the package is relatively safe to take, because it ships from the dotnet/runtime repo, so it should maintain backward compatibility just like DiagnosticSource. We already have DiagnosticSource and extensions dependencies in the SDK; this is specifically about adding a dependency in the API, so there were some concerns.
B
One is the potential compatibility issue. I have reached out to the .NET team; they said the bar for a breaking change is very high in the hosting — sorry, in the DependencyInjection.Abstractions package — but we are still evaluating whether it is okay to take the dependency. Michael is also here — hey, Michael. So whatever we do here,
B
it's not really meant for end users. That's already clear in the PR which Michael sent yesterday, which shows exactly that: the end user is not going to really know about or use the deferred tracer builder. It is used by those who provide instrumentations or other plugins to the SDK — ASP.NET Core, for instance, or the Zipkin exporter. They are the ones who would be dealing with it. End users will always deal with the actual service collection if they use the hosting package.
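As a rough illustration of the split being discussed — instrumentation authors program against a deferred builder shipped in the API package, while the SDK's real builder is only resolved later through the service provider — a minimal sketch (all names here are assumptions for illustration, not the finalized API surface):

```csharp
// Sketch only: illustrative names, not the finalized OpenTelemetry API.
using System;

// Shipped in the API package, so instrumentation libraries need no SDK reference.
public interface IDeferredTracerProviderBuilder
{
    // The callback runs later, once the SDK's builder and the app's
    // IServiceProvider actually exist.
    void Configure(Action<IServiceProvider, object /* TracerProviderBuilder */> configure);
}

// An instrumentation library registers itself against the deferred builder.
public static class MyInstrumentationExtensions
{
    public static IDeferredTracerProviderBuilder AddMyInstrumentation(
        this IDeferredTracerProviderBuilder builder)
    {
        builder.Configure((serviceProvider, sdkBuilder) =>
        {
            // Resolve options from DI and wire up the real builder here.
        });
        return builder;
    }
}
```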
B
So we just want to make sure it's okay to take an additional dependency in the API, and that we would be able to work around any breaking change introduced in this package in coming versions. Another point which Michael raised was that this package is not available for .NET Framework 4.6.1 and older, so we'll have to do a conditional dependency, which we already know causes other issues.
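The conditional dependency being described might look roughly like this in the API project file (the version number is illustrative):

```xml
<!-- Sketch: reference the DI abstractions package only on targets where it
     exists; net461 and older are excluded. Version is illustrative. -->
<ItemGroup Condition="'$(TargetFramework)' != 'net461'">
  <PackageReference Include="Microsoft.Extensions.DependencyInjection.Abstractions"
                    Version="3.1.0" />
</ItemGroup>
```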
B
So I have one proposal — I haven't confirmed this part with the .NET team yet — which is that we go ahead and take the dependency, and we do it conditionally, because it's only available on .NET Standard 2.0 and above. That leaves these two frameworks without this feature, but given that they go end-of-life in April 2022, which is roughly 10 months from now,
B
it should be a short-lived issue. We should be able to support it and get rid of this in April 2022, and I don't expect it to be a major issue, because we are not going to release the instrumentations anyway — they will probably only be released at the end of summer. So we're talking about a few months where we'll have this potential issue of different frameworks not having the same API.
B
So, given that whatever issue we have because of the different API on different frameworks is going to be a short-lived one, it should be very low risk, and I'll get a confirmation about this from the .NET team before we make the decision. So I want to see whether anyone else has any other concerns about adding an extra dependency to the API package.
B
If not, I'll put a comment here saying that okay, we can proceed to take a dependency on the library rather than creating this ourselves.
B
Okay, I hear no objections, which means we can go ahead. Michael, would you have any other concerns, besides what I described here, about taking a dependency? I mean, you kind of discussed the conditional part, but that said, are there any other concerns you see which I'm missing here?
B
No, you got everything. — Okay, yeah, so at the end of the meeting I'll mark the decision: we will take a dependency here. There are a few concerns, but they are not a big deal, because we don't expect the end user to use it directly, we know the conditional issue is a very short-lived one, and this package is coming from the dotnet/runtime repo, so the chances of a breaking change are not very high.
B
The risk should be very small. I'll just confirm this part again, whether we have some sort of guarantee, and worst case, if there is a breaking change, we should be able to handle it in the API itself. So we don't really expect it to be a big deal.
B
So the reason why I brought it up is that once we make a final call on this PR, we should be able to release beta4, give it a week or two just to iron out any final issues, and then release 1.1. That would be our next stable release. Let me open the milestones so everyone can see.
B
So this is the milestone where we plan to release that change, the one which we just discussed. We'll give it a couple of weeks and release the actual 1.1 by the end of this one. It could be delayed by a week or so, but that should still be fine. I have also created a few more milestones; the next one after that is 1.2.
B
This will align with the .NET 6 release on November 9. We may or may not release on November 9 itself — November 9 is when .NET 6 goes GA, so technically we can release any day after that. If we follow what we did with tracing, we were ready by November 30 even though the actual release took more time. So we'll still target it by the end of November, and it should contain metrics and the new version of DiagnosticSource.
B
So there are new improvements coming in DiagnosticSource which we will be able to incorporate in the 1.2 release. For the next milestone we picked that date because it is when those .NET versions are deprecated, so we just remove them from the SDK. That will mitigate all the conditional-compilation issues which arise primarily because of these three frameworks.
B
Any questions on the timeline? I mean, this is just the description; logistically we need to figure out some more things, because metrics are still being worked on in a private branch. At some point after we do 1.1 we might be able to bring it back to the main branch, or we can continue to operate with the metrics branch for several more weeks until we choose to bring it to main.
B
But those are pure logistics; we'll figure out the best way in the next weeks.
B
Okay, let's move on to the next item — so this is covered, yeah. This is just an update: we were having some concerns last week that we had a breaking change, and we did the fix and released a new one. So now we have introduced ApiCompat, which is a newer tool that is supposedly more powerful than the Public API Analyzer we currently have. It's now part of the CI, but it's intentionally not made mandatory, because the tool is a preview or beta version — it's not yet GA.
B
We expect it to be GA by the .NET 6 timeframe, so until then we'll keep it non-mandatory, because there are known issues with it. We also learned from the .NET team that they're going to replace it with a different tool. So, given all these facts, we will keep it non-mandatory. It's currently failing because it says we have a breaking change, but from what Utkarsh investigated there is no breaking change — the tool is reporting a false positive.
B
So that's just an update. Next is a very small ask for anyone who has time to help with something. If you look at our README page, at the very beginning we have these two badges. We had them removed long back because they were having some issues, but now we are using the official badges from GitHub Actions — unfortunately, the Linux one is always showing as failing.
B
It doesn't matter whether it is actually green or not, and I couldn't figure out why that is the case. So if anyone has some experience with this, please help. If I try to recreate the badge it still shows failing, even if I change the branch to main or whatever — it's still failing. It's not a big deal, but it gives the wrong impression that our CI runs are failing even though they are succeeding. So if anyone has time to investigate this, please raise your hand.
B
I will create an issue and assign it. If not — I don't have the time to look at it myself — I'll just remove the Linux one so that things look green here. Okay, if no one has the bandwidth to look at it, I'll just go ahead and remove it for now and create an issue to track the task of bringing it back. Yeah, I also have a very small update on the contrib repo: we did receive several more PRs for adding more instrumentation.
B
A couple of them are not merged yet, but we do have quite a healthy number of instrumentations, and the release is somewhat automated now. Even though the doc says it's still manual, it is more automated: if you're a maintainer or approver, all you need to do is push a tag, and that's it. Pushing the tag triggers a GitHub Action which builds, tests, packs and pushes all the way to NuGet.
B
So this part is automated, and it's fairly easy for me to respond to people asking, "Hey, can you release a new version?" Almost all of them are released — maybe one isn't, it's not much — so it's all released. So please take a look at the remaining PRs; we still need more eyes to look at Quartz.NET, which I do not have a good grasp on.
B
So, just noting that the contrib repo is getting more attention than the main repo, which is good news, but we need people to review it as well.
B
Okay, that is the end of the updates from me. If there are no other questions, we can talk about metrics. Are there any questions which need discussion before we move to
B
metrics? So for metrics, I think Victor made a lot of progress last week. Sorry, I had the wrong repo open — so, we can look at it quickly.
B
So Victor, are you on the call? Yeah, you're here. — I am. — Yeah, I'm trying to see whether we are at the stage where we can create sub-items for individuals to tackle in parallel, or whether we still need to work on it more before we ask others to take items. I have one specific ask around exporters.
B
Of course we can have others, but I want to see how close we are to letting someone write an actual exporter, be it OTLP or Prometheus or some internal one — I mean, there is a need on my team to write an internal exporter for Microsoft. So Victor, the question is: how close are we?
B
How close is the exporter definition? Do we have an exporter class which we can ask people to go ahead and implement? It looks like you already have the interface ready — the metric exporter would get, I think, an IMetric, or a collection of IMetrics. So do you think it's good enough for us to ask people to take a stab at writing exporters, or do you think you should spend more time refining it?
C
Yeah, so I think there are a couple of areas where people can contribute — there are already ripe areas that are abstracted away. Hopefully it's abstracted correctly, and I think as people write different things they will obviously let us know if it's not quite right. As far as exporters are concerned, we do have the processor — the export processor, or in this case the metric processor — and the end version of a metric processor happens to be an exporter, so you can do that today.
C
B
So it already has the support — I mean, it's already handling the gauge, histogram, sum and so on, right? Okay.
C
Right, correct. So OTLP has, I think, four different data types, if you will, and we have already modeled those four data types — they're all part of an IMetric. There's a sum, a gauge, a histogram and a summary, and those are already being passed through the pipeline to a metric processor. So if you look at the console exporter, which I guess—
C
B
You'd imagine this would be something like a switch case with four statements — one for sum, one for summary, one for histogram — yes.
C
That's correct — however the exporter wants to export. I would say here that if you're doing OTLP, someone will have to obviously pull in the protobuf definition, and then, based on a switch statement on what type of metric data we get in the IMetric, it should map to one of those protobuf types that we have. Okay.
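The mapping described here could be sketched as a dispatch over the four data types (the `IMetric` and per-type interface names follow the discussion and are assumptions about the in-progress API, not its final shape):

```csharp
// Sketch: an OTLP-style exporter dispatches on the metric's data type and
// maps each one to the corresponding protobuf message. Names are illustrative.
public class OtlpMetricExporterSketch
{
    public void Export(System.Collections.Generic.IEnumerable<IMetric> metrics)
    {
        foreach (var metric in metrics)
        {
            switch (metric)
            {
                case ISumMetric sum:         /* map to OTLP Sum */       break;
                case IGaugeMetric gauge:     /* map to OTLP Gauge */     break;
                case IHistogramMetric histo: /* map to OTLP Histogram */ break;
                case ISummaryMetric summary: /* map to OTLP Summary */   break;
            }
        }
    }
}
```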
B
Yeah, so Alan mentioned a couple of weeks back that he might be able to work on the OTLP one, so I'll check with him whether he can model an OTLP exporter based on the metric console exporter. The question is: can we actually define a metric exporter itself? Let's see how we did it for traces — okay, we don't.
C
B
I think that would be more aligned with what we have for exporters — I mean, for tracing we use the base exporter, which gets an Export method. So we could potentially ask: instead of writing a metric processor, you could write a more concrete thing, like MyExporter, which extends the base exporter of type Metric. It's really optional — I don't think the current shape blocks anyone from writing one — but that would make it more consistent. A couple more questions: when I am writing this thing—
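A sketch of that more consistent shape, mirroring the tracing-side base exporter pattern (`BaseExporter<T>`, `Batch<T>` and `ExportResult` follow the tracing API; whether metrics end up with exactly this signature is the open question here):

```csharp
using System;

// Sketch: a concrete metric exporter reusing the tracing-style base class.
public class MyMetricExporter : BaseExporter<Metric>
{
    public override ExportResult Export(in Batch<Metric> batch)
    {
        foreach (var metric in batch)
        {
            // A real exporter would serialize and transmit here; the export
            // frequency is configured by the user, not by the exporter.
            Console.WriteLine(metric);
        }
        return ExportResult.Success;
    }
}
```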
B
So, okay, yeah — there is no need for this exporter to worry about the frequency at which it gets called; that's all configured by the user when they configure the— Correct, correct. I think there was one more ask, which is about pull versus push.
C
You could today do a push — OTLP or whatever processor you want. If you want to do a pull processor, that implies you will have to do some kind of your own storage or state management for the metrics you get; or, if you're doing a pull, we will need to discuss it, because you probably just want to access the data store directly. So the pull processor, I think, needs to be discussed more.
B
Okay, so we cannot really write a Prometheus one at this point, because Prometheus is by default a pull-based one. Okay, so the only ones we can write right now are push-based, apart from some console kind of one. Okay, but you still have the logic to handle multiple intervals, right? Like I can register two exporters.
C
That's correct, yes, and the aggregation for the particular interval is also taken care of. So by the time you get to this metric item, all the items in there should already be for the interval of time that you configured. Okay.
C
That part is a separate item, but it's actually part of the aggregator portion, which people could also work on if they want. We have, I think, four aggregator types, and, for example, I don't have anything for a histogram aggregator, and I don't have anything for a summary aggregator, for the most part, besides just the most basic ones. So those need to be filled in, and it's just a standard—
C
You know, it should be abstract enough that people can work on an aggregator as they see fit and output a metric as they see fit. Now, as far as your question about the temporality of the thing: for my gauge — sorry, for my counter —
C
I do have a constructor that takes in the temporality, for how you want to output — you know, based on whether you want a delta.
C
And we could configure how it behaves, but again, I just have a very, very basic aggregator there, so for people who want to help, any of the aggregators would be nice to fill in fully.
B
I'm more concerned about — I think you have a very basic implementation where, if it is delta, you reset everything and start fresh — that's correct — and otherwise you keep everything, so you keep accumulating. But more than the implementation, I was concerned about how the user configures it.
C
There, on line 33, it's part of the constructor — you can see at the end there's an isDelta flag and an isMonotonic flag. So when you're writing an aggregator, you have a constructor where you can pass in these flags, and then inside your code you can use whether it's cumulative or delta, and whether it's monotonic or not. So that's one part.
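The constructor shape being described could look roughly like this — a minimal sketch assuming a simple sum aggregator, with the flag names taken from the discussion:

```csharp
// Sketch of an aggregator configured with temporality and monotonicity up front.
public class SumAggregatorSketch
{
    private readonly bool isDelta;     // delta vs. cumulative temporality
    private readonly bool isMonotonic; // whether the sum may only increase
    private long value;

    public SumAggregatorSketch(bool isDelta, bool isMonotonic)
    {
        this.isDelta = isDelta;
        this.isMonotonic = isMonotonic;
    }

    public void Update(long amount) => value += amount;

    // Called at collection time: delta aggregators reset and start fresh,
    // cumulative ones keep accumulating.
    public long Collect()
    {
        long result = value;
        if (isDelta) value = 0;
        return result;
    }
}
```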
C
The second part is how you configure this. That is part of the view API, but we also have a default that is set. So if you look at the AggregatorStore — if you go to the AggregatorStore —
C
Yeah, whether it is stateful or not, right, right. And then the next step is: I have a view API — which doesn't affect the aggregator — that will allow the user to specify what type of aggregator they want, and I think we'll have to do some stuff internally based on what they specify, whether we want the sub-metric aggregator to be either cumulative or delta. Here you can see I just have it on line 70 and line 71.
B
And so I think my main question was about who decides whether the aggregation should be stateful or stateless — both are the same as saying delta and cumulative. Is it the exporter which says that, or is it based on the instrument? Because it looks like you are basing this decision on the instrument.
B
I was wondering whether it is more of an exporter's decision: "Hey, I'm an exporter, I cannot export cumulative, so give me delta only," or "As an exporter, I don't want to get deltas; give me the cumulative from the start."
C
Each aggregator keeps its own state, and because it keeps its own state, each aggregator is responsible for producing the appropriate metric when it's being collected. So that's part one of the answer to your question. The second part of your question is about the export frequency and so forth. For each distinct export frequency we have, I keep a separate aggregator store per distinct interval, which translates to having separate aggregators.
B
So if you are incrementing the counter by 10 every second and exporting it, let's say, every 10 seconds, the exporter which expects cumulative would expect that at the end of 10 seconds you get 100, at the end of the next 10 seconds you get 200, and so on; but the exporter which wants the delta always gets 100, because that's the delta since the last update.
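The arithmetic in this example can be written out as a tiny, self-contained simulation (not the SDK's actual aggregator):

```csharp
using System;

class TemporalityDemo
{
    static void Main()
    {
        long cumulative = 0;   // what a cumulative exporter sees
        long lastExported = 0; // used to derive the delta view

        for (int export = 1; export <= 3; export++)
        {
            // Increment by 10 every second for the 10 seconds between exports.
            for (int second = 0; second < 10; second++) cumulative += 10;

            long delta = cumulative - lastExported;
            lastExported = cumulative;
            Console.WriteLine($"export {export}: cumulative={cumulative}, delta={delta}");
        }
        // Prints cumulative 100, 200, 300 while delta stays 100 each time.
    }
}
```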
B
So I was thinking: is that an instrument-level config, or is that something which depends on the actual exporter? Because we don't really know whether to do delta-based aggregation or cumulative unless we know which exporter is being used and whether it supports delta or not.
C
Okay, so yeah — that question is, I think, being addressed slightly differently. If you look at the spec, in the data model spec there's a section that talks about delta to cumulative and cumulative to delta.
A
C
I think the question is actually separate, in the sense that the instrument has its own sense of delta or cumulative regardless of what exporter you use, and the exporter has its own sense of delta or cumulative regardless of what instrument you use. Which implies that, between the aggregator and the exporter, the exporter needs to figure out what it's going to store in its state, and it may need to convert to the other as needed.
C
Yeah, so right now, when you set up a view or when you configure your system for an instrument, you have to tell it a particular type of aggregator, which includes the temporality of that aggregator. That then implies that if you have an exporter and you only get data for this type, the exporter may need to do some conversion.
B
C
B
Okay, yeah. I think I had a few more questions, but we'll take them offline. I think the only ask we can make — Alan is the only one who has committed, but let's see if there is anyone else who wants to work on it. Yeah.
B
Okay, yeah — so you have a couple of sample aggregators, but we could either refine them or make them do the actual thing. I think these are very basic but technically correct in what they are doing; we can potentially optimize them, and I don't know whether you have an actual one or — yeah.
B
I think you have a TODO here — okay, okay. So basically we are asking: if anyone has the cycles and interest to work on aggregators, please reach out to us on Slack, or you can speak up now; or if there is anyone in your company who wants to write a specific aggregator, you can share feedback on whether this interface for aggregators is sufficient or we need something more. We can take that feedback.
B
Am I getting the thing converted into a metric here? That's the only thing I need to worry about. If that's the case, then I can modify this exporter to print it separately for sum, summary and histogram, and then ask people to write other exporters for the delta/cumulative part. I think I need to do some more homework before making more comments, so I'll probably sync with you offline, Victor, and learn more about it. Yeah.
B
We don't have the pull-based one ready yet, so that would be required before we try to write the Prometheus exporter. Okay, yeah. I think that was a good overview — thanks, Victor, for getting it this far. We'll continue to iterate on it over the next few weeks; I think we can do a beta release in the next two weeks.
B
Once we confirm that these interfaces are good enough, we should be able to do a beta, or maybe an alpha release, just to see if someone wants to play with their own exporter or plug it into their own instrumentation. We'll see if that can be done; I think it's safe to say we should be able to release an alpha or beta version in the next few weeks.
B
That would be in line with the original plan, which was that we would be doing preview releases starting June — it says starting June, yeah, but we still have a few weeks left. Okay, if there are no other questions, I'll update the notes as soon as the meeting is done. If there are no other questions we can end, and we can come back next week with more questions.
C
B
All right, okay — I will see you all next week. Oh, there are some chat messages; let me see. Okay, nothing important. Thank you, Reese.
A
B
Coming out of some PRs is solely a question of timing — maybe we'll decide to tackle one problem later. There is a need for writing an actual exporter, because that's the real validation: you put something as input to .NET's metric API, and with an exporter you can check whether you are getting the expected data or not. Once that can be done, we'll be able to validate it. The things in between, our own
B
internal details, we can probably keep internal for now.
C
Yeah, having an OTLP exporter would be awesome. Yeah.
B
While you work on something else and I work on something else — so that's it. There is also potentially an opportunity to write instrumentations again: Alan already has a PR which was based off the old metric API, but he said he would have some time to resurrect that PR, so we'll be able to have an end-to-end example where we can use an ASP.NET Core app which publishes some basic counters — maybe request count — and we can see it all.
B
C
B
A
B
Once we get really close to a release, we can do a brown-bag kind of session where Victor or I walk through the design — how a metric, starting from the metrics API (the one from .NET), reaches the exporter; that whole lifecycle of what it goes through. Maybe that will help people get a bigger understanding of how it is structured. It's much more complex than the tracing one, because tracing is way easier:
B
there is no aggregation or view or anything — it's very straightforward. Metrics is a little bit more complex, so it may be helpful to give a brown-bag kind of talk to others; that may encourage more people to start previewing more actively or even make contributions.
B
But that's something we are still far off from — it will be several more weeks before we can do such a demo or brown-bag kind of thing — but we'll definitely keep it in mind, because I think we did something like that for Activity, when we migrated from our own Span to .NET's Activity, just so that people were aware of what .NET is giving us and why we are building on top of it.