From YouTube: 2021-03-30 meeting
B
Okay, I think we can start. Can anyone confirm if my screen is visible?
B
Thank you. Yeah, so we don't have any agenda items here, so I'll just ask around if there are any questions; otherwise I'll invite Alan to do a quick demo of the OTLP log exporter, which he has been working on.
A
I think I can just take over. Yeah, so I was talking to CJ a little earlier today. Apparently there is, as I understand it, a little bit of interest in getting log data exported from OpenTelemetry, and I had mentioned to him that I work with New Relic; we have built a native OTLP endpoint and enabled the ability to ingest log data, and I've been testing that out.
A
One thing that I did, since the .NET OpenTelemetry project didn't currently have an OTLP log exporter, was take a quick stab at one. It's kind of in a draft state; it still needs some polish, but it was a tool I used to at least test things out.
A
So I'll show you that today, though with a big disclaimer: one, I'm not a huge expert in the log space, and two, the method of exporting log data with this OTLP log exporter that I've implemented may not be a realistic production scenario. That said, like I said, I'm not an expert, and I'm definitely interested, after I show you this, in any input you have, if this is something you've heard is of interest to folks. But the logging space already has a lot of things like Fluentd and Fluent Bit.
A
Tools like that are what people are generally pretty accustomed to for exporting log data anyway; this was just a quick prototype for me. So this is what I've got: we're looking at one of the example applications within the opentelemetry-dotnet project. It's just the sample ASP.NET Core application, and what I have here in Program.cs is, you know, I'm using the SDK, I'm configuring it with the OpenTelemetry logging provider, and I add the OTLP log exporter that I implemented.
A
Fortunately, this exporter is really easy to implement, because I just modeled it after what we had for traces; it works very, very similarly, you know.
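The Program.cs wiring Alan describes might look roughly like this. `AddOpenTelemetry` on the logging builder is the real SDK entry point; `AddOtlpExporter()` stands in for the draft exporter's extension method, which is an assumption, since the exporter isn't in the repo yet:

```csharp
// Minimal sketch of the logging-provider setup described in the demo.
// AddOtlpExporter() is a hypothetical extension from the draft exporter.
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using OpenTelemetry.Logs;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureLogging(logging =>
            {
                // Wire the OpenTelemetry logging provider into the
                // Microsoft.Extensions.Logging pipeline...
                logging.AddOpenTelemetry(options =>
                {
                    // ...and attach the draft OTLP log exporter
                    // (hypothetical method; not in the repo yet).
                    options.AddOtlpExporter();
                });
            })
            .Build()
            .Run();
}
```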
A
The proto files in the project were already there for all the telemetry signals. Okay, which actually brings up an interesting question: I think the 0.8.0 proto files were just released on Monday.
A
Yeah, it was already there, so I just used it. So yeah, the exporter itself is pretty easy.
A
I think it's in this other file where I basically do the translation of a log record from our SDK to the protobuf format that gets sent over the wire. I have some TODOs, some holes, but it basically gets the basic information: it'll do the correlation with trace and span if the trace ID and span ID are present, and, of course, the body of the message itself.
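That translation step could be sketched like this. The `OpenTelemetry.Proto.*` types are the generated protobuf classes; the SDK property names here are assumptions based on the public `LogRecord` shape, not the actual draft code:

```csharp
// Rough sketch: map an SDK log record onto the OTLP protobuf LogRecord.
using System;
using Google.Protobuf;
using OtlpCommon = OpenTelemetry.Proto.Common.V1;
using OtlpLogs = OpenTelemetry.Proto.Logs.V1;

internal static class LogRecordExtensions
{
    internal static OtlpLogs.LogRecord ToOtlpLog(this OpenTelemetry.Logs.LogRecord source)
    {
        var otlpLog = new OtlpLogs.LogRecord
        {
            // Unix nanoseconds since epoch; a .NET tick is 100 ns.
            TimeUnixNano = (ulong)(source.Timestamp - DateTime.UnixEpoch).Ticks * 100,

            // The body of the message itself.
            Body = new OtlpCommon.AnyValue
            {
                StringValue = source.State?.ToString() ?? string.Empty,
            },
        };

        // Correlate with the trace and span when the ids are present.
        if (source.TraceId != default)
        {
            byte[] traceIdBytes = new byte[16];
            source.TraceId.CopyTo(traceIdBytes);
            otlpLog.TraceId = ByteString.CopyFrom(traceIdBytes);

            byte[] spanIdBytes = new byte[8];
            source.SpanId.CopyTo(spanIdBytes);
            otlpLog.SpanId = ByteString.CopyFrom(spanIdBytes);
        }

        return otlpLog;
    }
}
```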
A
In my VS Code window here I have the collector configurations. The relevant bits are: of course, I'm receiving OTLP data over gRPC, so I've configured the OTLP receiver, and then I have two exporters configured for the collector. One is just a logging exporter, so you can see the logs coming across and landing at the collector; and I work at New Relic, so I'm also sending the data over OTLP to New Relic. Then, of course, I wire up the log pipeline appropriately. Fluent forward was a component that I was also playing with.
A
I don't have that running right now in my environment, but I was trying to play around with maybe some more realistic scenarios that customers might use; maybe I'll talk a little bit more on that afterwards. Anyway, that's my basic configuration, and this is the collector, just the base collector project, so I'm just going to fire this up.
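The collector setup described (OTLP receiver over gRPC, a logging exporter, an OTLP exporter to New Relic, and a logs pipeline tying them together) would look roughly like this; the New Relic endpoint and header are placeholders, since the real values weren't shown:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  # Print incoming logs to the collector's console.
  logging:
    loglevel: debug
  # Forward over OTLP to New Relic; endpoint and api-key are placeholders.
  otlp/newrelic:
    endpoint: otlp.example.newrelic.com:4317
    headers:
      api-key: YOUR_API_KEY

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [logging, otlp/newrelic]
```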
A
While we're waiting for that: so, you know, that's the good old weather forecast controller here that I'm going to be hitting, and I've just added a couple of log statements that weren't here before. So basically, in this controller we make an external call to google.com.
A
So I'm just logging out, you know, "making an external call to google.com", and then another log message that says "retrieving the forecast". So let's see, I think I can just curl it.
A
Cool, so I hit my endpoint, and in the output itself you'll see "making an external call to google" and "retrieving the forecast". If I pop back over to my collector, since I had the logging exporter configured in my pipeline as well, we should see the same thing here. Yeah, so the logging exporter for the collector got two logs, and it looks like one of them is "retrieving the forecast" and the other is "making the external call to google", so it works. Yep, and then, just for the heck of it...
A
Tracing: okay, so it shows it correlated here, "retrieving the forecast", "making an external call". Look at these messages; and this is the log record that was transformed.
B
Yeah, it's really cool. I think the next logical thing for us to consider is how we make it part of the repo, so others can try it out and share more feedback.
B
Like we already discussed, there are some options; maybe when you open the actual PR, we can figure out the logistics. Most likely we can ship it as a separate package from the same repo, or, I think you were suggesting, we do it in the contrib repo. Either way is fine, and you don't require any special version of the package; you just need to use 1.0.1, and that should just work fine.
A
Yeah, I think, ultimately, I mentioned the contrib repo only because of discoverability. I actually like the thought of it being in the main repo better, but I also understand that this is not something we're ready to release as it currently stands, so I think...
B
Yeah, because the contrib repo is not very active, so in terms of discoverability, I think our best bet is the main repo. But that brings up a slightly related topic: there is an effort to unify the opentelemetry.io docs, where right now every repo has its own documentation in its GitHub repo.
B
But there is an overall, something like a marketing page, kind of thing, on opentelemetry.io, so we'll be having proper docs there, and we can actually add things to that page, which is really at a website level, not part of any GitHub repo.
B
There is a section there to register all components, right, so we could actually list it there. People who are after logging should find it very easy to discover from there as well.
B
I don't think we ever looked at it, but recently there has been interest in making the website more up to date, so Austin from the website team has been sending PRs to all the language repos to get the getting-started guide in sync between the repo and the website.
B
So I'm pretty sure, since someone is already looking at that, we can look into the registry. It already has too many things, but I don't think we ever submitted a PR to list our components there. That's something which is relatively easy for us to do, now that we are really not occupied with anything else. Yeah, I was just looking at the registry; it probably has 100-plus components already, from all the languages, all the repos.
A
Yeah, we'd ultimately get all of our instrumentation registered there as well.
B
And, I think, all the exporters, anything which is custom, the New Relic one, everything. I mean, I think those things we can easily figure out in the PR itself. You can feel free to submit a draft and then take it from there, and there are already a couple of PRs active in the logging space.
B
We merged a couple of them that should be part of the beta, but we may have to undo some of the changes from that. So maybe for now you can take a hard dependency on 1.0.1, the actual stable package, not the current one, because the current one might be undergoing some changes. Even if you do take the project dependency on the latest, just look at only the stable parts; scopes and other things are newly released, so they're not part of the stability guarantees.
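Taking the hard dependency on the stable package, as suggested, is a one-line project-file change; the package name here assumes the core `OpenTelemetry` package is the one being pinned:

```xml
<!-- Pin to the stable 1.0.1 release rather than the in-flight
     pre-release builds, per the suggestion above. -->
<ItemGroup>
  <PackageReference Include="OpenTelemetry" Version="1.0.1" />
</ItemGroup>
```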
B
It could change anything, yeah. So we can just start with what is stable, and the only thing which is stable is that the log record has trace ID, span ID, parent ID, and then there is a thing called state. I mean, as long as you are just using those fields, we should be good; we shouldn't face any breaking-change issues from changes to scope or, I forget, a couple of other things we added as well.
B
Yeah, no other updates. So, just mentioning one thing: Alan, I saw your PR in the eShopOnContainers repo and it looks good from the initial look. I didn't have the time to actually run it, because I never really used eShopOnContainers before, so I need to do some homework to make it run on my machine first, and then I can see how it really looks with OpenTelemetry.
B
I forget; okay, it's a different repo, so I don't have notifications, but yeah, I will definitely look at it. I can see the PR now. So, if you need help from people at Microsoft to push anything through, just let us know. We already spoke to the maintainers of that repo, and they are pretty much okay; they would even write code to make it happen. So anything we can help with, please let us know.
A
Yeah, you bet. No, I think the only comment that I got on the PR right now is just that the way they want to proceed is to land the PR in a different branch, which totally makes sense to me, because their idea is that all the services get fleshed out with OpenTelemetry, and then we iterate on it that way before actually landing it into their mainline.
B
Okay; I mean, when you did the PR, did you remove the existing logging, or did you just add the new thing?
A
Just added. I tried to be as least invasive as possible, just to begin with, just to kind of have the OpenTelemetry stuff off to the side; and then, depending on where it goes, we can replace it with whatever. I know there's App Insights stuff in there, and yeah.
B
I think most likely we are open to removing Application Insights, because it's tied to Microsoft, and this is expected to remain neutral, because it's just a .NET architecture reference.
B
Yeah, okay, yeah. I'll just share the PR link in the meeting notes, in case anyone's not aware of this, so they'll be able to take a look; I'll also share it in the Slack channel.
B
Any other questions? Otherwise we can end early today. Oh, Victor says he has a question, so go ahead, Victor.
D
Yeah, so I don't know if it's a question or not, but I volunteered to add some unit tests and performance benchmark tests for the proto data model, the protocol stuff, and...
B
Is that for the metrics proto alone, or does it involve anything more?
D
Well, so I'm trying to mimic, or at least do, the same tests that Tigran is doing, which does involve me having access to two versions of the proto files. So that was also an interesting comment: currently in the main branch, the proto files, I don't know how we get them; presumably we're just using, like, a Git... you know. Oh.
D
Right, so I will be doing version 0.4 and version 0.8, so on my local box I have those two checked out as well. I don't know how we want to deal with that; but, in short, should I check the stuff in somewhere, or should I...?
B
Yes; since it is only affecting metrics, my suggestion would be to do it in the metrics branch. But yes, you can use the benchmarks here and add to them. You can...
B
Yeah, probably yes, I mean.
A
This might be a little bit of a tangent, but I was thinking about something the other day. I noted that in the Go community they have actually published Go modules for each of the... sorry, I think it's actually the opentelemetry-proto repository that has published Go packages that are consumable by version. So, you know, if I had a Go application, I could consume the 0.7.0 or the 0.4.0 version, and there's nothing to say that we couldn't do that in .NET as well.
A
We could create NuGet packages that are versioned based off of the version of the protos, and that way you could have a benchmark app that just goes out and gets the protos for the version that you want to benchmark.
B
Okay, but is that the same as the instructions which I'm sharing right now, like "to generate the client libraries, use this code"? We don't directly use this, right? I'm not sure whether this has anything to do with the way we generate C# files from the proto files.
A
I think we would need to use their generation script there... I haven't actually tried this out, but I think we would just simply have a separate C# project, you know, with the proto files in it, that we would just generate via the MSBuild...
D
...stuff, yeah. So in that project we should probably just put a Git reference to the proto repo and have it pull the latest down and build as well, yeah, instead...
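The "separate C# project generated via MSBuild" idea could be sketched with the Grpc.Tools package, which runs protoc at build time through the `<Protobuf>` item; the proto path and package versions here are placeholders, not what the repo actually uses:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Grpc.Tools wires protoc into MSBuild and generates the
         C# classes from the .proto files at build time. -->
    <PackageReference Include="Google.Protobuf" Version="3.15.0" />
    <PackageReference Include="Grpc.Tools" Version="2.36.0" PrivateAssets="all" />
  </ItemGroup>
  <ItemGroup>
    <!-- Placeholder path for wherever the opentelemetry-proto files
         are pulled in (e.g. a Git submodule). -->
    <Protobuf Include="opentelemetry-proto\**\*.proto"
              ProtoRoot="opentelemetry-proto" />
  </ItemGroup>
</Project>
```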
B
Yeah, this was brought up maybe a year back, how do we best handle this thing, and initially the conclusion was that the easiest way is to just copy it, because it's not something which changes every week or so.
B
I mean, there were some challenges; I don't recollect what exactly. Someone did try to automate that, and they gave up in the middle, I think.
B
Yeah, I mean, I'm pretty sure there were some reasons, because I recollect having the same conversation sometime back. But either way: so, Victor, for you to do the experiment and do a benchmark between two versions, I think let's do it here in the metrics branch, and you can check in both versions of the proto files for now, because if you want to do the new Git approach, you'd need to first do some groundwork to make that happen.
D
In my case, it's a little bit more complicated than that, because I can't have both versions checked in without changing something: they all share the same package name, the same namespace, everything's the same, so the gRPC compile just failed. So I actually had to go and alter one of the versions so that it keeps two different namespaces.
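The workaround Victor describes (keeping the two proto versions in distinct generated namespaces) can be done with protobuf's standard `csharp_namespace` file option; the namespace chosen here is illustrative, not what he actually used:

```protobuf
// In the v0.4.0 copy of the proto files only, override the generated
// C# namespace so it does not collide with the v0.8.0 copy.
syntax = "proto3";

package opentelemetry.proto.metrics.v1;

// Illustrative suffix; any distinct namespace avoids the compile clash.
option csharp_namespace = "OpenTelemetry.Proto.V040.Metrics.V1";
```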
B
One other option... like, I'm trying to figure out what your final intention is: do we anticipate having these benchmarks running in our repo forever, or is this just a one-off?
D
Yeah, so in some of the comments there was a suggestion that, you know, perhaps we should keep these benchmarks. I guess the Go project kind of does that, where they basically keep a benchmark and, any time there's a change, they run the benchmark to see if it regressed. So I think there was some suggestion to do that.
B
Yeah, so with what we currently have, we also have an OTLP benchmark, but none of them are a CI check. So it's manual: if someone wants to see, "I'm making some change and I want to see whether I am regressing or improving the perf", they can just run it locally.
B
Those are the only benchmarks we have in the dotnet repo, and even for OTLP we do have a benchmark; we used to run this in the early days, but it is really tied to the actual version which we have checked into our repo. And if the intention is that this is meant to be here forever, because hopefully one day we will make it part of the CI...
B
...so that whenever you do a PR, we know whether you are regressing the perf or not. But as of now, it is something like a tool for all the contributors to run themselves when they are making any change in the relevant areas. So if you are touching OTLP, you can run it and see for yourself whether you actually regressed or not.
B
If the intention for the proto files for the two versions of the metrics proto is also to offer help going forward for the devs, we can make it part of the repo and figure out what is the best way to run it; but if it is just to offer some feedback back to the proto repo about which version is performing better, we can keep it in the PR for now.
B
Let's not merge it, and once we get some conclusions, we can make it part of the main repo itself, just like every other benchmark, and at that point we can also discuss that.
B
We'd need to first introduce the capability of running multiple versions at the same time, because none of these exporter benchmarks compare anything against each other; each is just a self-contained thing. So if I run the Jaeger one, it just takes the current Jaeger one and runs some benchmarks against it. It doesn't compare it with anything else: it doesn't compare with a run from master, and it doesn't compare with anything from a previous version.
B
So it's a good-to-have feature, but no one has had the energy to do it. So for now, Victor, let's do it in a PR and try to promote things into the actual repo as we see the need for it. Sure.
B
Yeah, and by the way, I updated the metrics branch from main, like, two weeks back, so maybe as soon as we get something from the metrics spec we can actually start modifying things, because it's been a long time since we updated it; it's already outdated. I think the new metrics API spec is getting blessings from more folks.
B
I think it's merged, or about to be merged; it has just a single metric instrument, called counter, and there will be one more change sometime in the next week. It removes a bunch of other things: there is no mention of bound instruments, there is no label set, so we'll be killing a bunch of things from here. So this repo, I mean this branch, is now ripe for action; we should see some PRs in the next few weeks.
B
On it, or... yeah, initially I would just wait to have it merged. I think it's not yet merged, but once it is merged, then yeah.
B
We would be actively following it, because we also have a commitment with the .NET team, so we need to give them constant feedback. I think we will be doing something very similar to what we did for tracing: we implemented our own traces initially, then did a switch-over to the one from .NET, and after that we were pretty actively working on it whenever there was a new version of the API. In the case of tracing, the spec was stable, or relatively stable, but there were new builds coming from .NET, so we were updating it very frequently to catch issues and give feedback to the .NET team right away.
B
We were not waiting for the actual .NET releases, like preview six, preview seven; we were after the daily builds from the dotnet repo. So as soon as we get some builds from the dotnet repo, and it may not be an official public NuGet, it could be a private feed like the one we used for the tracing work, we can do pretty fast iterations.
D
So that means that if you were to follow the spec as it's being merged, on the API side, and if you guys are following that closely, you would then start deleting all the label-set stuff, all the other counters, tracers? Is that what you guys would then do?
B
We'd need to, but, like I said, we need to wait for it. I mean, whenever we start, I would like to have some continuity, rather than just doing something now and waiting for the spec; but as of today, I don't think the new metrics spec has been merged. So let's see if it has been merged... it still has... oh yeah, there is a new API; oh, it just merged, like 21 minutes ago. Okay, so now we have something to work on.
B
So probably I'm okay with getting started with this right now, because the spec has already been merged. There will be additions next week, so we can start, I mean, start actively following the spec from now onwards.
B
Right now, yeah; just follow the tracing model. Basically, we do the API right here, just like what you're seeing right now; once .NET starts shipping some bits, we can just replace it with the one from .NET. Okay.
D
So, CJ: have you looked at that Noah branch thing, where he has the, quote, "current .NET API" set that he's thinking about? There's a kind of API/SDK thing there, so how do you think that will contribute, or not, to what this SIG wants to do?
B
Right now it's just in a private fork, so we cannot do anything with that. Once the API comes from the dotnet/runtime repo, which I believe is the plan for the .NET team in the next few weeks, rather than having it in a private branch; so the moment it is in the dotnet repo, then we can take a dependency on it. But until then, we'd have to literally clone it here.
B
No, I think... so maybe I got a bit confused by the statement. There is potential code, which might become the .NET API, sitting in a private fork of this repo. What I meant was that we cannot take a dependency on that.
B
However, once it is moved into the dotnet/runtime repo and .NET starts publishing it to a feed (not internal; there is a daily build from dotnet/runtime which we used to take a dependency on), at that stage we can take whatever is available from .NET, like Meter or whatever the things are that .NET provides, instead of our own; we can just take a dependency on that.
B
If you are willing to work on it right now, like today, even before waiting for .NET, what I was telling you was: feel free to just copy those classes from the place where Noah, or the .NET team, is playing with them right now, and copy them right here.
B
Sorry if that confused you; let me rephrase what we did for tracing. When we did tracing, the .NET team did not provide us with any package; there was some private work occurring in several forks. So what we did was we had something called Span here in the API; there is a thing called TelemetrySpan, and a Tracer, all those things, which eventually got replaced with Activity.
D
Sorry, I guess my simplest question is: when should we start? It sounds like we shouldn't start until several weeks later, until the API is further along, the SDK is started, and .NET has something for us. Or are you suggesting we follow the spec as it's being merged? I guess that's where the question is: do we do it right away, or do we wait?
B
If you do it right away, we need to clone, or copy, the API into this repo.
B
There isn't anything officially stated as our intention, but if you are willing to submit PRs, I'm happy to take them, yeah. The only reason why I suggested waiting, maybe three weeks from now, is that's when .NET would actually move things into the runtime repo.
B
Then we don't need to do the work of having that API shipped here; we can just add a reference to the metrics API from .NET and then do the whole SDK part here. Because if you start today, let's say you are submitting a PR in the next hour, you will have to submit a PR which involves all the API classes along with the SDK implementation, right?
B
So if you are asking when we want to get started: I think probably another two or three weeks. Right now the spec which got merged only has counter, only one instrument, and there will be one more in the next week; that's what I heard from the metrics SIG. So once we have two instruments, and the metrics SIG decides what to do with the old API versus the new API...
B
At that point we can start playing with this; we can start removing things and adding things. So if I were starting it, I would start probably in the middle of, or maybe towards the end of, three weeks from now.
B
But if you are interested in moving things from .NET's private fork, yeah, I'm happy to review it; we just don't need to do it right now. We can wait for two to three weeks, when the spec will be more or less complete and the .NET team has the ability to ship NuGet packages, rather than us copying the source code; then that would be the right time to start doing it.
B
We did the actual tracing API in this repo, and then .NET came with improved Activity (let's call it improved Activity); then we literally deleted all those things from this repo and replaced them with references to pure Activity. But in this case, the case of metrics, since we already know that the .NET team has a commitment to provide us with a metrics API, we can wait for the first NuGet package released from the .NET team and then just do the SDK here.
D
So, CJ, I'm wondering: maybe we should provide the rest of the SIG a link to Noah's branch, in case people want to see how that's...
B
So, I mean, we know that the person working on it is the person who is eventually going to do the dotnet/runtime work, but since it is in a private fork, I would rather not list it anywhere. But I'll mention in the Slack channel: if anyone is interested in sharing early feedback, please reach out to Victor or me, and we can take them to the right repo.
D
So right now I'm doing some benchmarking, like I mentioned, just looking at version 0.4 versus version 0.8 of the protocol, and I want to check that in somewhere; that's why I asked. And then, after that, I think Noah, the .NET team, and this SIG probably want to coordinate, you know, on the API, when to rip everything apart and start putting things in, and I don't know what that timing is. So that's what I was asking.
B
Got it, yeah. So let's think again in the next SIG meeting, or the week after; at that time we'd have slightly better clarity on the spec as well, to figure out the actual logistics. So let's come back to this in a week, or maybe two weeks.