From YouTube: 2020-11-03 meeting
B
Hey, hello. Can one of you confirm if you can see my screen share?
B
Yeah, I think we can start in one minute, but we have a lot of things on the agenda, so best if we start right now. Okay, so there was one entry about user research slash dogfooding guide. I don't know who added it; I'm seeing it for the first time. So, really, do we know anything about this?
C
Yes, so it is part of the GA thing. We want people to try the minimal scenario for tracing, like the tracer and HTTP client/server, and see if things get connected, and give us feedback. The initial proposal was made by Ted earlier this week, and several languages are trying this. So I think it is a valuable thing: we can get some fresh eyes from the community, people who have not been deeply involved in this project; they come here with fresh ideas.
B
Do we expect people to do it right away, or should we ask them to wait until we have one more refinement done to the docs section?
B
Yes, so I will make sure, like, I'll do it myself one more time and see if at least the docs are followable, or whether we need any changes. And then I can put a comment here and also in the Gitter channel, to ask everyone to take part in this experiment. Slash, it's not really an experiment; it's user research here.
B
Okay, yeah. So the general ask is: anyone who hasn't been heavily involved with the project, please do this, because people who are already familiar with the project may not catch some of these issues.
B
Scoped to tracing? Okay, it's all scoped to tracing, okay. That makes it even easier, because I'm pretty sure the tracing docs are more up to date than anything else.
D
I had one quick comment on that user research doc. I had seen it, but you know, .NET was not one of the ones that was listed. Step two is "pick a language," and it says "the following implementations have declared themselves ready for feedback." So do we want to add .NET on there?
B
Oh
okay,
so
it
says,
like
these
following
implementations,
have
declared
themselves
ready
for
feedback,
so
dotnet
has
not
made
any
explicit
like
announcement
saying
that
maybe
we
are
ready.
Okay
got
it,
so
this
is
some
step
for
me
to
follow
up
and
let
it
know
that
we
are
ready,
but
I
I
would
do
that
when
we
do
our
next
release,
which
would
be
like
next
friday,
so
that
I
mean
it's
best.
If
you
do
it
like
this
friday,
so
we'll
have
like
enough
time
to
react
to
it
before
we
go.
B
Okay, so I'll take an action item on me to see what it takes for .NET to be listed here. And then I'll wait for Reiley to complete his own trial experiment and see where the issue lands, and how we can react to it once it lands somewhere.
B
No, it's not from me. Okay, someone else had it. Okay, but anyway, good to be made aware of that; I wasn't aware of this thing. Anyway, thank you. So let's go over the issues which are already in the agenda. I have some items to discuss, but I'll do those towards the end, because we have some issues to get through. So let me open them one by one. This PR, let's see.
E
Yes, I've addressed the feedback provided in this PR, so I just want to know if there is a blocker on this, or if anything else needs to be addressed.
E
Yes, there was a question asked by Michael and Reiley about whether this should be part of the API or the shared project. So this is, I...
E
The plan was to ship it as a separate NuGet package. The moment we place these files in the API, there are two major issues we could run into. One is that by marking all the things, all the interfaces, as internal, third-party exporters may not be able to take advantage of this. The second thing is what Reiley pointed out: right now it's a lightweight API, and this is going to add more weight to it, and these implementations may not be required by everyone.
B
To look at it, yeah. Reiley, go ahead.
C
My thinking is, this feature is only useful for exporters, right? We expect the exporter to be able to leverage this component to dump things to local storage if it cannot send data out and the memory pressure is high. In that way, given that the entire exporter interface is inside the SDK, I think clearly this piece shouldn't go into the OpenTelemetry.Api package.
C
But when an exporter decides "I have no requirement for this," then it can stay small, without this dependency. And this component itself has minimal dependencies on other libraries. So I think, number one, the size implication should be small; I'll be surprised if this piece of code adds more than about 10 KB of binary size. And number two, I'm not seeing any big security issue or concern.
C
Yeah, and one thing I have been struggling with is the name OpenTelemetry.Shared, regardless of this PR. If you have something named OpenTelemetry.Shared, it is not clear to me what we're trying to communicate. Is this supposed to be a shared component used by others? Should OpenTelemetry components depend on OpenTelemetry.Shared, or should this component depend on OpenTelemetry?
B
Yeah, that's a good question. It is like an extension, so we can follow what we did for the hosting package: that was OpenTelemetry.Extensions.Hosting, but here we can do OpenTelemetry.Extensions dot something. So yeah, if we choose to ship it as a separate NuGet, then "Extensions" gives the feeling that it's not really required; it's an optional extension. If you want to leverage it, install it; otherwise forget it.
F
I have an opinion. I mean, I feel like we already have a lot of libraries consuming OpenTelemetry in some of the products that I ship, and there are just a lot of dependencies, so I would prefer to make this opt-in if possible. I don't know how you would do that, though. How do you compile it in as optional? Do I make some kind of plug-in thing?
B
That depends on whether, say, Jaeger decides to have a dependency on this package. So if you all say "I don't want it," then we don't ship it as part of the Jaeger exporter, or we'd probably need to do something like... yeah.
B
If it is just for the internal exporters which we ship from this repo, and it's shipped as a separate extension, then it's probably okay for those exporters to take a dependency on this one. That means the exporters shipped from this repo will have this functionality and a dependency on this package, but any exporter written by other companies has an opt-in model: they don't have to pay the price of... I mean, it's not really a price.
B
Is that a reasonable thing? I'm basically saying: put it out as an extra thing, don't make it part of the SDK package, just ship it as a separate NuGet. And we as a community can decide that Jaeger and Zipkin, the exporters we ship from this repo, will take a dependency on this extra package.
B
Yeah, I mean, we can ask around: people who write exporters of their own, like Azure writes one, and New Relic has their own, whether they have any opinions about adding one more package, if they care about it. I think New Relic... yeah, Alan, you mentioned that there is not a strong interest for New Relic to use the offline storage mechanism, right? Or is that still the case? I'm just curious what your opinion is on this.
A
Well, I can't say that I have a strong opinion on whether it's a separate package or whether it's baked in. I think the conclusion of it being a separate package is fine by me. Specifically about New Relic: I have talked with a number of peeps to see if this is of interest, and I haven't gotten a strong indication that this is a hard requirement for us, at least at this point in time, but...
F
I can say that at CoStar I have five or six teams with kind of independent products using OpenTelemetry, and we're all using the built-in exporters, Jaeger and Zipkin. I don't see anyone using this. We kind of approach it as: if telemetry goes down for a little while, it's not the end of the world; we just use it sporadically, as needed. So we're not approaching it that way.
A
Okay, just for some context, in case people on the call are interested: I think the reason why Microsoft is particularly interested in this functionality was because of non-telemetry use cases, maybe more like business use cases, where data loss was unacceptable.
B
That's right, yeah. So because of that, all the existing Azure Monitor offerings do retries and keep the data on disk; even if the network is down for a whole day, you keep it for, I think, up to 24 hours. Also, yeah, the rest of the actual PR discussion can happen in the PR itself; we don't need to do it right here.
C
Yeah. So for that PR, I think we can probably conclude at this moment: number one, we want to rename the package. Instead of OpenTelemetry.Shared, use OpenTelemetry.Extensions dot something, like "offline storage." And number two, so far we haven't seen a lot of interest in making the built-in exporters like Jaeger and Zipkin support this, so probably for now they don't have to take a dependency and don't have to implement this feature. But if later people see the need, we're open to adding that support.
B
I mean, the actual code doesn't really say you can only store telemetry; you can store anything as a blob, so you could potentially put your configuration or something there. So then it becomes... yeah. I mean, this particular API is not opinionated about what you store there. It's just a blob: it could be telemetry, or it could be something else.
B
I'll do one more review and submit this as a comment. So, okay, yeah. The next thing is: I just linked a bunch of issues here, to give a feel for the top things we are trying to solve in the next couple of days, because we are expecting a beta release, the last beta release.
B
So there was an issue which we addressed yesterday, about adding the global propagators API. This was a requirement from early April, but we kept pushing it back.
B
This was a major feature gap in .NET, so we addressed it. But while addressing it, we had several follow-up questions about what level of customization we want to allow. So right now we just merged the PR which supports the propagators API; it should be part of the next beta, which is expected today. But then the question is: is there a requirement for individual instrumentations or providers to override the one provided by the global? As of now, we do allow that.
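For reference, the global propagators API being discussed can be exercised roughly like this. This is a sketch, assuming the type and method names of the OpenTelemetry .NET propagation API; the exact shape may differ from the merged PR:

```csharp
using OpenTelemetry;
using OpenTelemetry.Context.Propagation;

static class PropagatorSetup
{
    public static void ConfigureGlobalPropagator()
    {
        // Set the process-wide default propagator once, at startup.
        Sdk.SetDefaultTextMapPropagator(new CompositeTextMapPropagator(
            new TextMapPropagator[]
            {
                new TraceContextPropagator(), // W3C traceparent/tracestate
                new BaggagePropagator(),      // W3C baggage header
            }));
    }

    public static TextMapPropagator CurrentPropagator()
    {
        // Instrumentations read the global instance instead of each
        // carrying its own override.
        return Propagators.DefaultTextMapPropagator;
    }
}
```

The open question in the meeting is whether per-instrumentation overrides of this global are worth supporting at all.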
B
It was slightly renamed just to match the actual spec. If you look at this PR here, which just got merged, you'll see the renaming. Okay, yeah, both are text-map based, the baggage one and... okay. I think if you open the changelog you'll see the renaming as well. Okay, it was a previous PR: a separate PR renamed it, and this one just added the global one.
B
Yeah, so the ask is: is there any reason to keep instrumentations having the ability to override it? My initial thinking was that we can just remove it from all the instrumentations now, and if a need arises later we can always add more APIs; but adding something now means we'll have to support it forever.
C
Yeah, that makes sense. And I think in general, in the OpenTelemetry specification meetings, we face this challenge of whether we want to have something or not, and the choice I've seen is: if we don't have enough confidence, don't add it for the GA release, because we can always make additive changes later.
B
Yeah, I'll create that; I haven't done the job of creating it yet. Which... oh, actually, it says... okay. Let's put it this way: first we try to remove it from the instrumentations, and then create a new issue which says "add ability to override," and mark it as after GA. So if we ever think it is required, we can come back to it.
B
Okay, so that would leave us mostly in alignment with at least Python; that's the one I was following actively. They only allow a global thing; there is no individual-instrumentation-level overriding. But it doesn't mean... I mean, it's still open: if you write a new instrumentation on your own, you can choose not to respect it, but that's not really our concern.
B
So this one is already merged, but I just wanted to bring it up here, because this was an issue with implications for several of the built-in instrumentations, and for folks who have already started instrumenting their applications. So let me just explain what this was doing. This was an example we had for a producer/consumer scenario: an application which has a producer and a receiver. It produces messages, puts them into RabbitMQ or some messaging queue, and then receives them on the other end. So let me show specifically what the issue was in the sender, the place where we send the message into the queue. I'm only showing the activity part and ignoring the actual writing of the message, because that's not the highlight of the issue. So you can see what we are doing.
B
We create a span (slash activity) to represent the operation of writing this into the queue, which is a producer-type span. Then we check if the activity is null or not, and based on that we decide whether to inject the context using the propagator. Now, the issue is: it is quite possible that the activity here could be null, either because of sampling or some other reason.
B
If that is the case, we will not inject anything at all, so on the receiving side, when they try to extract, they'll treat it as a brand new root trace, because there is no context propagated; they'll just start afresh. But in reality, what is supposed to happen is: even if this activity is null, we need to check whether there is a parent activity, created before this one, which is still active. So this PR is just modifying that. If this activity is null, we check whether there is a parent activity, and then you'll realize that if this code was running in the context of, let's say, ASP.NET code, which would have created its own activity, we make sure that we propagate that context downstream. This issue has implications in almost all other instrumentations; we just found that the HTTP client instrumentation is not doing the right thing, so someone is going to work on that.
B
So
my
ask
is
like
in
general,
if
you
ever
used
like
a
code
like
this,
where
you
start
an
activity
and
you
propagate
only
based
on
whether
the
activity
is
not
null.
Please
revisit
your
code,
make
sure
you
propagate
it
either
when
your
activity
is
not
null
or
there
is
something
called
activity
dot
current
sorry,
I
should
have
shown
here
yeah
I
mean
there
are
more
details
here
and
there
will
be
another
for
appears
to
solve
this
problem
in
http
client
for
dotnet
framework
as
well.
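The pattern being asked about can be sketched like this. The Inject helper below is a hypothetical stand-in for the propagator's inject call; Activity and ActivitySource are the standard System.Diagnostics types:

```csharp
using System.Collections.Generic;
using System.Diagnostics;

class Sender
{
    private static readonly ActivitySource Source = new ActivitySource("Demo.Sender");

    public void Send(IDictionary<string, string> headers)
    {
        using var activity = Source.StartActivity("Queue publish", ActivityKind.Producer);

        // Buggy pattern: inject only when `activity != null`. If sampling
        // dropped this activity, nothing is injected and the receiver
        // starts a brand new root trace.
        //
        // Fixed pattern: fall back to Activity.Current (for example, the
        // activity created by ASP.NET Core) so the ambient context still
        // propagates downstream.
        ActivityContext contextToInject = activity?.Context
            ?? Activity.Current?.Context
            ?? default;

        Inject(contextToInject, headers);
    }

    // Hypothetical stand-in for propagator.Inject(...).
    private void Inject(ActivityContext ctx, IDictionary<string, string> headers)
    {
        if (ctx != default)
        {
            headers["traceparent"] = $"00-{ctx.TraceId}-{ctx.SpanId}-01";
        }
    }
}
```

The key change is the `?? Activity.Current?.Context` fallback, so a sampled-out local activity no longer breaks context propagation.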
B
So this is just asking everyone who already...
I
Sorry, before you close it, one quick question: in that same example, when we extract the context, in that PR you've also added setting Baggage.Current, which wasn't there before in the example, on line 62 there. So that's also something that's needed, that has to be done manually by the instrumentation author every time as well?
I
It was just a question; I couldn't understand it. I knew it wasn't there before; I was just wanting to understand why it was there.
B
Yeah. And I expect that, of all the things we ship from this repo, only the HTTP client instrumentation for .NET Framework has this issue. The HttpClient instrumentation for .NET Core will not have this issue, because in that case the .NET Core runtime itself creates the activity all the time, irrespective of the sampling decision, so we wouldn't face it there. But it's quite possible that you might have run into this in your own instrumentation.
B
Okay, let's see. Okay, I'm just opening one issue about Extensions.Hosting, but there are at least three or four issues open about Extensions.Hosting. I don't think I have anything to discuss with the community; it's just that I wanted to highlight this.
B
This is something we will be tackling before GA. Many people have complained about this approach, where they are adding OpenTelemetry tracing multiple times, not necessarily from the same place: if you have a shared project where you pass the service collection around, each shared component will add its own thing, and ultimately you compose everything into a single application. At that point we have several issues; one of them is highlighted here.
B
If you call it twice, you add the entire SDK twice, including each and every instrumentation. So if you add it three times, you'll see three times the normal amount of telemetry. This is being solved; I should have at least a draft out by early this afternoon. We can... I mean, if...
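A minimal sketch of the duplication problem; the composition code here is hypothetical, but AddOpenTelemetryTracing is the hosting-package extension method under discussion:

```csharp
using Microsoft.Extensions.DependencyInjection;
using OpenTelemetry.Trace;

class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Shared component A registers tracing...
        services.AddOpenTelemetryTracing(builder =>
            builder.AddAspNetCoreInstrumentation());

        // ...and shared component B, unaware of A, registers it again.
        services.AddOpenTelemetryTracing(builder =>
            builder.AddHttpClientInstrumentation());

        // Before the fix described above, each call stood up a full SDK
        // with every instrumentation configured, so spans were produced
        // and exported once per registration.
    }
}
```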
B
Whoever has opinions about it, please review the PRs which are about to come out. Yeah, next one. This one we need to discuss a little bit more. This is one of the last things we found to be non-compliant with the spec, because the specification says the default implementation of the SDK should look for resource information in a particular environment variable called OTEL_RESOURCE_ATTRIBUTES. There are a bunch of other things the specification says about reading from environment variables, but none of them is a must-have for GA; it's optional for SDKs to do them, except the resource one, which is marked as a MUST. So we need to have this before GA. But then the question is... let me see where we left it. So, is Prashanth on the call?
B
Do you want to just walk us through what you're trying to achieve? Because this is where we have the most confusion about this API: there's a Create API to which you pass some attributes, but what is this API internally doing? It merges the attributes with the ones from the environment variable and the ones from the library. And it looks like, from what Prashanth was saying, Python is doing it exactly the same way, and Java is also doing it the same way, but Go is doing it completely differently. So I want to ask what would be the best way to handle it. Can you share your thoughts on this first, and then we can ask if anyone else has input?
H
Yeah, yes. So when I started working on the issue, my primary thought was that currently the .NET SDK only provides the CreateServiceResource method, I think, to provide a service name, and a constructor just to create a bare-bones resource.
H
So adding a Create method which takes the user attributes and merges those attributes with the environment variable attributes, and also the telemetry SDK attributes (the SDK name, version, and language): that was pretty much the use of this Create method. And the tracer provider builder would use this method to automatically include the resource from the telemetry SDK and also from the environment variable.
H
Yeah, and as I discussed with CJ, there was some confusion regarding the name, because the name is pretty simple: it doesn't give much detail as to what the ongoing operations in the method are. Which is common, as was just said, in Java and Python, and maybe it's not called out in the larger scheme of how resources work.
H
So I kind of like the idea CJ proposes: we can call it something like CreateWithDetectors, and we can have a resource detector class which detects attributes from the environment variable. So, at least for being spec compliant, we can include that single resource detector during Create, and the default resource can be only the telemetry SDK attributes. Yeah.
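Per the specification, OTEL_RESOURCE_ATTRIBUTES carries comma-separated key=value pairs. A minimal sketch of the environment-variable detection described here (the class name is a placeholder, not the actual PR's API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class EnvVarResourceDetector
{
    // Parses e.g. "service.name=checkout,deployment.environment=prod"
    // into resource attributes, as the spec requires.
    public static Dictionary<string, object> Detect()
    {
        string raw = Environment.GetEnvironmentVariable("OTEL_RESOURCE_ATTRIBUTES");
        if (string.IsNullOrWhiteSpace(raw))
        {
            return new Dictionary<string, object>();
        }

        return raw.Split(',')
                  .Select(pair => pair.Split('=', 2))
                  .Where(kv => kv.Length == 2 && kv[0].Trim().Length > 0)
                  .ToDictionary(kv => kv[0].Trim(), kv => (object)kv[1].Trim());
    }
}
```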
H
I didn't get much of a chance to look into the Go implementation. When I looked at Python, what it does is define an interface where the resource detector has a detect method, and in the Create method you can chain the calls to each detector's detect method, which returns a resource, and you can keep merging. Okay.
H
Yeah, I don't think there is one defined in the spec, like how to go about the resource detectors. Okay.
B
So I think a good summary would be: we'll just eliminate Create, because we already have the constructor doing that. The constructor which takes attributes will just create a resource with the passed arguments, that's it; there is no change in behavior. But we'll provide another option called CreateFromDetectors. I...
H
Detectors are something which detect the environment where the application is running in, for example a container or some host, I mean.
H
But those are not required by the spec, and can be added individually by a user.
B
The spec just says that it should return a resource. So if you have a Kubernetes detector, all it does when we call detect is return a resource; internally it will look at all the Kubernetes-specific things and construct the attributes accordingly.
B
So that's the only thing the spec says; the shape of the API is not defined there. So we can just borrow what other languages are doing and provide an option to chain them together. I mean, we don't really have to provide an option, because Resource already has a Merge method. So if you have CreateFromDetectors, which takes an array of detectors, what this method can do internally is call each of them one by one, merge with the previous result, and return the final resource.
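A sketch of the shape being proposed. The names are placeholders: Resource.Merge exists in the SDK today, while IResourceDetector and CreateFromDetectors are the hypothetical additions under discussion:

```csharp
using System.Collections.Generic;
using OpenTelemetry.Resources;

// Hypothetical detector contract: each detector returns a Resource.
public interface IResourceDetector
{
    Resource Detect();
}

public static class ResourceFactory
{
    // Call each detector one by one, merge with the previous result,
    // and return the final resource. Which side wins on a key conflict
    // follows Resource.Merge semantics; the discussion here assumes
    // earlier entries in the list take priority.
    public static Resource CreateFromDetectors(IEnumerable<IResourceDetector> detectors)
    {
        Resource result = Resource.Empty;
        foreach (IResourceDetector detector in detectors)
        {
            result = result.Merge(detector.Detect());
        }
        return result;
    }
}
```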
B
So this can be one option. If there are no other concerns, that's the way we go for it. And by default we'll have two detectors from this repo: one to detect the environment variable and a second for the library, because those are the only two things the spec says we should ship from the main repo. Everything else, like Kubernetes or anything else, doesn't even have to be in the same package; it can be in a separate package. Are there any thoughts about this? I mean, we will discuss the actual name in the PR, but conceptually, if everyone is okay with that, we'll go with that approach.
H
So, CJ, this method would then require the user to pass in their custom detectors, right? And it would only be merging resources from detectors, not from a user-specified or previously defined resource. Yeah.
B
I mean, there is always Resource.Merge, so we can... I mean, I have to sit down and write out exactly how it would look, but since Resource has a well-defined Merge method, we can let the customer pass their attributes first, so that those become primary and everything else becomes secondary, in the order passed. So if there is any conflict, the user-passed one takes first priority, followed by the detectors in the array, in the order of their position in the array.
B
So it's definitely possible. Or we don't need to provide this at all: let the user create a resource using the constructor and then call Merge on it, passing the result of CreateFromDetectors or something. That achieves the same thing: you construct with your attributes and then you merge, so by default yours becomes primary and everything else becomes secondary. So yeah, this is something we can discuss in the PR itself.
B
Unless there are any other opinions about this, we can just move on to the next topic and continue this discussion in the PR itself. One thing to note: there is no requirement that the detector approach is needed for GA, so it's conceivable that we'll create a detector, mark it internal, and use it for the library and environment variable ones. Keeping it internal means we'll have more time to wait, if we want, for a formal spec on what the API should be called. It's up to us: if we make it public, then it sticks, and if the specification comes out tomorrow saying there is a different API, then we'll have to introduce a breaking change.
B
The only reason I am for not introducing too many public APIs is that once we introduce one, it is there; we have to keep supporting it. So in general I'm trying to avoid introducing any new APIs unless there is a strong need to do so.
B
Okay, so let's continue this, and if there are any other opinions, please comment in the PR itself. The next thing is a port of self-diagnostics, so let me open that PR. Again, this is about: if something goes wrong with the SDK, we want to get some logs from the SDK itself. There is no spec written about it, except that there is an environment variable which the spec says should be used to control the log level. I think it's OTEL_LOG_LEVEL or something like that; that's the only thing the spec says. So this is not a spec-driven thing; this is more like something we need if we want to figure out what's going wrong. The entire SDK, and I'm pretty sure everything shipped from this repo and contrib, is using EventSource as the mechanism to write logs, so that is the only way to subscribe to them.
B
The only official way would be whatever we ship from here; of course, people can still use PerfView and other tools to listen to it. So the PR is all about... well, I haven't had a chance to review it yet. So, Sean, if you are online, can you... oh, Michael already had a chance to look at it, so that's at least one step ahead of me.
J
I think we can go over it in the PR and then resolve it offline.
J
But for now, I think, if we still have time at the end, I want to do a quick demo of how this works, because I already have it in another package of my own. So I want to show the community how it works.
B
Okay, let's come back to it at the end if we still have time left. But one quick question: where does the configuration come from?
J
Right now this is... this is coming in the next PR. It will come from a configuration file.
B
Okay, good. So what we want to make sure of is: there is a specification about what that environment variable should look like.
B
Yeah, there is, in the specification. Yeah, this one. I'm assuming the "SDK logger" is what we call self-diagnostics, so we can align more with what the OpenTelemetry spec has. There is no formal definition of the verbosities, but maybe we want to align more with this and take this option: instead of creating a new environment variable, we should stick to this one. But that's pretty much it. Let's come back to it towards the end, if we're done with the other topics, so Sean can show a quick demo.
C
There's a question about the environment variable. In general I would expect an environment variable to be specified before the application starts, and once the application is running there's no way you can change the variable, in most cases. So if you regret it, like you realize "I should have turned on the verbose level," the only way to do that is to change the environment variable and restart the application, which might be a challenge in some serious production environments.
C
So there's no clear explanation of what it means to change it from outside the process. An environment variable is a very process-specific thing; I have no definition of what it means to set an environment variable from outside the process. You can have a default environment variable at the operating system level or the session level, but that only means it's a static configuration: when a process starts, it will use that as its initial environment, but once it has started, there's no official way for you to change it, like...
B
Okay, yeah. Okay, so, short story: for our self-diagnostics module, we need the ability to read it from a file. I mean, the basic requirement is that we should be able to change the log level on the fly, without restarting the application. Yeah, yeah. And yet we probably have to support the... I mean, if we are reading it from an environment variable, we should follow the spec. Again, it's not a requirement that we allow that.
B
If we allow it from an environment variable, then we should use the same name. So if we're not supporting the environment variable, that makes it slightly easier: the only way to configure a change in the log level is by putting it in a file. That is not there yet, but when the PR comes, that would be the behavior, yeah.
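As an illustration only (the file name and fields are hypothetical until the PR lands), the idea is that the SDK watches a well-known configuration file, so the log level can be changed on the fly without a restart:

```json
{
  "LogDirectory": ".",
  "FileSize": 32768,
  "LogLevel": "Warning"
}
```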
Okay, let's see if we have time for a demo at the end. So: the spec compliance matrix.
B
I think I don't want to update it this week, because we still have a few issues being actively worked on. So let me do that by the end of this week, because... let me quickly check what the milestones say for the next couple of release milestones.
B
Oh, actually, this was already due yesterday. So I'll update it to today and do the release today. Then we have one coming next week, and then the one on the 23rd. And after this beta is done, I'll go and update the spec compliance matrix, because I don't think we should have anything major pending from the spec.
C
Yeah, so that matrix thing is just a reminder. In other repos I've seen a challenge when people try to update it: there seem to be different understandings. Some people would believe "oh, this part is done," and others will say "I think it's 80 percent done, but not a hundred percent." So we probably need to give some buffer time, actually.
C
Okay, so my suggestion would be: let's put the initial thing here. I can work with you quickly, and after we've got the initial list, we can send a link on the Gitter channel and let everyone know: "hey, this is what we believe is the reflection of the current reality." And if we see any disagreement, people saying an item is marked one way but the behavior is different, then we've got to adjust, and decide whether we want to make that compatible before GA, or whether it is just something we cannot achieve.
B
How
about
things
where
we
don't
have
any
support
so,
for
example,
like
for
tracing
since
we
are
using
activity
activity,
does
not
enforce
like
limit
to
the
attributes
or
links.
So
what
do
we
mark
for
them?
Yeah?
No,
we
just
both
have
minors
here,
okay,
which
means
we
are
not
complained
or
we
are
not
support.
Yeah,
not
supported
okay.
So
it's
not
that
na
right.
It's
not
not
applicable,
it
is
applicable
and
we
are
not
supporting
yeah.
C
But now you've got me. So, for the tracing part, I think currently we're at 95 percent. We know that in the current OpenTelemetry API package we have some extension methods; for example, one allows people to set the status of an activity. Eventually this part should move into .NET itself, as part of the .NET 6 release, and it will be backward compatible with all the existing versions. However, due to timeline restrictions, we could not achieve this in .NET 5.
C
So where we ended up is that we have those extensions in the OpenTelemetry project, and I believe when we actually move them to .NET it shouldn't be a breaking change. But there might be some glitches: if we have the method in .NET and we also have the extension, you might get a static-analysis issue or a compile-time issue. So this is just something to keep as a heads-up. As for the metrics part, this year we simply don't have enough capacity to do it.
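The extension-method glitch mentioned here can be sketched as follows. All names and signatures below are hypothetical and simplified, not the actual OpenTelemetry .NET code; the point is that C# overload resolution prefers an instance method over an extension method with the same signature, so once .NET ships the instance member, existing calls keep compiling but silently bind to the new method.

```csharp
using System;
using System.Diagnostics;

public static class ActivityExtensions
{
    // Hypothetical stand-in for an extension method that ships in the
    // OpenTelemetry API package today.
    public static void SetStatusDescription(this Activity activity, string description)
    {
        activity?.AddTag("otel.status_description", description);
    }
}

public class Demo
{
    public static void Main()
    {
        var activity = new Activity("demo").Start();

        // Today this binds to the extension method above. If a future .NET
        // release adds an instance method Activity.SetStatusDescription with
        // the same signature, this call silently binds to the instance
        // member instead (instance members win over extension methods), so
        // the code still compiles; that is the "glitch" to keep an eye on.
        activity.SetStatusDescription("ok");
        activity.Stop();
    }
}
```

Because the rebinding is silent rather than a compile error, the move is source-compatible, but it is worth calling out in release notes.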
C
So what we have in .NET is an experimental version of metrics based on the spike at that time. After that we noticed multiple things changed in the metrics spec, and, also looking at the current progress and feedback, I think it is too risky to just release some metrics implementation and tell people they can depend on it.
C
So that means if we ship GA 1.0 by the end of this month, we want to give people a clear expectation if they take a dependency on it: are we going to break them after a year, or do we tell them we're going to support this for the next three years? And that support covers security issues (if we take a dependency on something with a zero-day bug, we have to fix it immediately) as well as compatibility issues, which we also have to fix.
C
So we need to have that clarity as part of the GA, and looking at the current metrics, I think it would not be possible to have that clarity. So for the overall project, I think we would have confidence to say the tracing part and the logging part are pretty stable and we can commit to long-term support, while the metrics part will be a preview: we need to take the time to learn and get feedback on the metrics part.
C
The spec itself is also evolving. The last time I heard, they realized they cannot GA the metrics spec by the end of this year, but they are trying to target early next year, so we'll see how that works.
C
And the metrics part is very critical, especially the performance. If we screw up any of the API design, it is almost impossible for us to build a shim, because an API wrapper on top of metrics wouldn't make sense if you look at the performance.
B
Yeah, that makes sense. On a related note, can we ask everyone's opinion on how we go about marking this? Because we still ship everything (traces, logs, metrics) as a single NuGet package, but a subset of that is not treated with the same quality or GA marker. So the question is: how do we mark metrics?
B
Is it enough to just go and mark Obsolete on all the public APIs in metrics, so that whenever someone uses them they'll get a compiler warning saying, okay, you're using an obsolete API, and we'll point them to an issue where we describe why metrics are not GA? Or, if there are any other approaches, we can discuss those as well.
B
C
As experimental, yeah. So that Obsolete approach seems a little bit misleading, because we're not trying to obsolete this API; it's just that we're telling people it is experimental. Still, I think it might be helpful, because just putting a banner in the README would make me worried: I think a lot of people don't read any documentation. They will just take the SDK and use some automation tool, like the IDE's IntelliSense, to finish their work.
C
So I think, at a minimum, I would expect that when people take the GA 1.0 version of OpenTelemetry and start to use it, if they accidentally use some metrics API they would get a build-time indication that this is something we're not ready to support, since we don't have the long-term support commitment for it. I'm not sure what the best practice here is; so far, the cheapest way I've seen is to mark it as Obsolete.
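The "mark it as Obsolete" option being discussed could look roughly like this. The metrics type and method below are hypothetical, used only for illustration, but the mechanism is standard C#: `[Obsolete("...")]` makes the compiler emit warning CS0618 at every call site, giving exactly the build-time indication described above.

```csharp
using System;

public static class ExperimentalMeter
{
    // Hypothetical metrics API. The message can point readers at a
    // tracking issue explaining why metrics are not part of the GA
    // support commitment.
    [Obsolete("Metrics support is experimental and not covered by the 1.0 " +
              "long-term support commitment. See the tracking issue for details.")]
    public static void Record(string name, double value)
    {
        // ... experimental implementation ...
    }
}

public class Program
{
    public static void Main()
    {
        // This compiles, but the build surfaces warning CS0618 with the
        // message above, even for users who never open the README.
        ExperimentalMeter.Record("request.duration", 12.5);
    }
}
```

As noted in the discussion, the trade-off is that the attribute is named "Obsolete" even though the API is experimental rather than deprecated.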
B
C
Yeah, we could use conditional compilation, but that also gives us a problem, because we really want people to try it and give us feedback. If we ship this without metrics, then when are we going to add the experimental metrics API? Are we going to ship two different NuGet packages? Yeah, that's just hard to imagine.
C
Yeah, that's a fair question. I think some languages decided they would just GA the tracing part, and for metrics they'll just note that in the documentation. Okay, so it's not a must-have; it's my feeling that we probably should do this just to protect the users. Yeah, that makes sense; that's the only thing that will catch it.
B
If people don't read the documentation, yeah. Okay, let's go back and see if there is anything left. So, Sean, is two minutes enough for you, or should we put that on the agenda for next week? Because next week we will have more things to talk about, since we are doing a community review as well for the public APIs.
J
I think that should be enough; I'll make it quick.
J
Okay, so in short: this program, the stress test (stress.exe), already has the self-diagnostics module. So let's run it really quick.
J
Now it's showing what the stress test does. If I create the diagnostics configuration file here, it will start recording the EventSource events in this folder. Because I already have the contents, I just used a shortcut to create that file; you can see the JSON file generated here, with the directory field telling it to log to the local directory, and we can see the log file here.
J
These are the self-diagnostic events written out for testing.
F
J
If we change this on the fly, it will actually close this file and generate the file wherever you configure it to be. And if we remove the configuration file, everything will be closed and it will no longer generate those files.
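As a sketch, the self-diagnostics configuration demonstrated here is a small JSON file; the field names below are illustrative only (the PR under review may use different ones), but the shape is a log directory plus a size limit.

```json
{
  "LogDirectory": ".",
  "FileSize": 1024
}
```

Creating the file starts recording EventSource events to the given directory, editing it on the fly re-routes the output, and deleting it closes the log files and stops recording, matching the demo.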
B
Yeah, pretty cool. We definitely want it before GA, so I'll also take a look at the PR. Just to ask one question: there is no option to control the log level. Is that intentional, or is it something you plan to add?
B
Yeah, okay. Folks, if anyone has any questions on this feature, please reach out to Sean; there's a PR open which already has it. I mean, this feature, what you just showed, should be live once the PR is merged, right? Or does it require one more PR to read the config on the fly?
B
In one, yeah, that makes sense. Okay, thanks! So hopefully we should have this as part of the next release, if not today's release; but even that is still fine, we'll have it before the GA. All right, I do not have anything else, so please reach out to us on Gitter or GitHub if you have any questions; otherwise, we'll see you next week. Next week I'll put up the public APIs to be reviewed.
B
It's a huge list; we'll trim it down to a manageable level by next week and then ask for opinions on whether specific things are required to be public or not. I haven't prepared that yet, so that should be the major agenda for next week. Okay, thanks everyone, see you all next week.