From YouTube: 2022-09-14 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
B
I wasn't sure, but it is.
B
It was surprisingly soft, honestly. In a way it was like I wasn't away at all, but on the other hand it also sometimes felt very foreign, so I'm not sure how to actually describe it. But no, it wasn't a hard fall at all; I was waiting to get back and start working again.
D
First thing I wanted to talk about was the metrics GA, for those that did not notice. We merged and closed a bunch of issues, so there are only two left. One is documentation, which I'm working on with Philip from Honeycomb; he recently became a maintainer in the Comms SIG. The second one is the ability to remove unused attributes and metric instruments, for those that don't know what this is.
D
This is: if you have high-cardinality metric labels, or if you export a metric with a label that you then never use again.
D
All of that has to be tracked by the SDK in order to keep the metric stream working, and this can cause a memory leak. It's kind of specified that way; there's no resolution for this in the spec right now, and I believe it affects other metrics SDKs as well.
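The retention problem described here can be sketched in plain JavaScript. This is only an illustration of the behavior, not the actual SDK code; the class name and the JSON-based key scheme are hypothetical:

```javascript
// Minimal sketch: a cumulative metric store has to remember every attribute
// set it has ever seen, because a later export must still report the running
// total for that series. Nothing can ever be dropped.
class CounterStorage {
  constructor() {
    this.series = new Map(); // serialized attributes -> running sum
  }
  add(value, attributes) {
    const key = JSON.stringify(attributes); // hypothetical key scheme
    this.series.set(key, (this.series.get(key) ?? 0) + value);
  }
  collect() {
    // Every series is exported again, even ones never updated since.
    return [...this.series.entries()];
  }
}

const storage = new CounterStorage();
// A high-cardinality label (e.g. a request id) creates a new series per call:
for (let i = 0; i < 1000; i++) {
  storage.add(1, { "request.id": String(i) });
}
console.log(storage.series.size); // prints 1000: retained series grow without bound
```

This is why the only mitigations mentioned below are keeping labels low-cardinality or restarting the SDK: nothing in the store above is ever eligible for eviction.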
D
Also, although I have not verified it, I believe this can actually be solved after the GA. But it's something that we need to be aware of, and at least document before we go to GA: metric labels need to be low cardinality, or you'll need to manually clear them by restarting the SDK. Those are essentially the only two options. Before making that decision, I wanted to talk to everybody and make sure that was okay with everyone, particularly the other maintainers.
D
I don't know, this is a fairly technical metrics topic, and I don't know how aware most people are of it. But does anyone have opinions here? Should we hold the GA for this, or handle it after GA?
D
Sounds like no opinions here. I think I'm going to comment on this and tag Legendecas, because they do not join the meetings but tend to do most of our metrics work, so I'm going to get their opinion.
D
I would encourage people to read the issue and comment on it if you have opinions, but I know it's kind of a weedy topic. Anyone have questions before we move on?
D
That's exactly what it is, and that's what GA means: it would be moving them out of the experimental folder into the main packages folder and giving them a 1.0 version.
D
The PR to merge the API into the main repo merged, so you can see that here. All we did was copy over the entire repository directly.
D
We did retain history, so you can look at the blame here... that's not a good example. Let's see, go to the changelog.
D
Oh, you know what, it must have squashed it. You're right, okay! Well, we don't have the history, then. We do have the contributors, I guess, but that's it.
D
But in any case, the next step here: we just copied everything over directly, so we still have all of the files that were in the repo, which need to be cleaned up in favor of the ones in the main repo. Like all of these: each has its own .gitignore and all kinds of stuff that we don't need now that it's in the main repository.
D
So that's the next step; that's what I'm working on right now. It will need to be added back into the Lerna monorepo (I did not do that yet), and then I think we should automate the versioning, because we don't want to release the API every time we release the SDK or anything like that. It's not overly complex to do, but it's something that I do manually every time, and I'm not sure anyone else knows how to do it.
D
So if I'm not around, it would be nice if it was automated.
D
We need to migrate all issues and PRs from the API repo and archive that repo. And then this one is the one that I kind of want to ask about. So right now the API supports back to Node version 8, and the main repository does not test those old versions.
D
So
without
dropping
support
for
old
node
versions,
We
need
to
test
for
them,
but
we
don't
want
to
test
in
the
full
Repository.
So
I
was
thinking.
We
may
want
to
add
a
separate
GitHub
workflow,
which
tests
only
the
old
versions
of
node
and
only
on
the
API
does
that
seem
reasonable
to
everyone.
D
It's kind of a hacky workaround, but I don't want to add all of the Node 8 testing and everything back to the main SDK, because we've updated a bunch of dependencies and dev dependencies that don't support Node 8. We would have to downgrade a bunch of those, and I don't really want to do that.
D
Seeing a bunch of thumbs up in the videos, so I assume that everybody's happy with that. Then we need to add the API TSDoc to the automated publishing, which should be easy. Most of these steps are relatively easy to do; they just need to be done. I'm just working on them, knocking them out one at a time. I expect this to probably all be done within the next week or so, as long as we can get PRs reviewed and merged quickly enough. And then from there, Nev...
D
You should be able to update your script to ignore the API repo.
D
I also removed the commit that was causing your CLA issue, so you should be okay with the CLA check now too. That was in the API repo, so I took this merge to do that.
D
Oh, I got it, okay. Well, in any case, the commit that was causing your CLA problem should be gone. Or not gone, but re-authored; I re-authored it under myself.
C
It should be fine. Okay, now that we've actually got acknowledgment that bots can be exempt, I'm still an admin on that (I haven't given up that privilege yet), so I could always force it in.
D
Okay, anyone have questions on the API merge here before we move on?
D
All right, the logs and events API merged. This is not released yet, but we probably should cut a release. The question that I wanted to ask is: should we cut a release of the SDK and experimental packages now, or should we wait until the metrics GA, which should happen in the next couple of weeks, and do one release for all of that?
C
Sorry, go ahead. I was going to say, I just want to clarify that this is just the API part. It's not the SDK; it doesn't have the exporter. So it's not very useful right now, and there's probably no rush to release it.
D
Okay. There are a handful of other things that are unreleased as well.
B
The stuff that happened in metrics, I think.
C
It would be a good idea to do one final RC release before going GA.
D
Yeah, so these are all the PRs that have been merged since our last release. There are quite a few, particularly a bunch of fixes and a lot of metrics work. I agree with Mark; my feeling is that we should do one quick release.
D
We did something similar when we did the tracing GA. We released one final 0.x version and made sure that it was all good, and then essentially immediately we created a release that all it did was change the version number. So the last 0.x version and the first 1.0 version were actually exactly the same code, just to make sure everything was good before we started. I think that probably makes sense to do again.
A
Yeah, so we wanted to introduce some browser attributes in the browser detector, and we realized that it already had some attributes that we discussed in the client instrumentation SIG and agreed are not so relevant for the browser side: all those process start runtime ones, lines 31 through 33.
A
On the right side, yeah, those. So in this PR, I think Legendecas said that if we remove these, it would be a breaking change; the OpenTelemetry resources package is already stable, it has a 1.2 release. He suggested creating a new detector class called a user agent data detector, and then marking this one as deprecated.
A
That is fine, but I was just thinking from an ergonomics perspective that this name, the browser detector, is more suitable.
C
Give me a second... yeah, that's the SIG document.
A
If you... yeah, under process, I think. Yeah, the second from the last. No, if you go back, there is a process one; the last but one, yeah.
A
My plan is to even get it removed from there, because I checked with the person who added this, and I think he said they just added it, but they are not really using it.
A
Well, I think if it is already 1.2 and you add content, that will automatically get the GA tag, right? It's a little tricky: how do you add experimental content to something that's GA?
D
Yeah, but I mean that's experimental, and the browser one, I noticed, is also experimental. So this resource detector should not have been released as 1.0 to begin with, but it was.
A
Well, if there is already a package for detectors somewhere, that would be more helpful; we could add more detectors there.
D
I mean, most of our detectors are in separate packages. The only ones that are... I mean, that's obviously not true about the browser detector or the process detector.
D
Versioning is exactly the reason. Resources are 1.0 because the SDK depends on the resources interfaces and such, so it has to be 1.0. But the detectors themselves depend on a specification that is not complete; that specification is experimental, and they can break. So we have the case where the detectors themselves are released as stable, but the specification they are based on is not. I think we have to split them into separate packages.
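The split being proposed can be sketched like this. The names here are purely illustrative (this is not the real `@opentelemetry/resources` API): the idea is that the stable package owns only the `Resource` type, while each detector lives in its own, independently versioned package that may track an experimental semantic-convention spec.

```javascript
// Stable package (1.0): only the Resource type, which the SDK depends on.
class Resource {
  constructor(attributes) {
    this.attributes = attributes;
  }
  merge(other) {
    // Later attributes win on conflict.
    return new Resource({ ...this.attributes, ...other.attributes });
  }
}

// Separate "experimental" detector package (0.x), free to break when the
// semantic conventions it implements change. In a real browser build this
// would read navigator.userAgent; here it returns a stub value.
const browserDetector = {
  detect() {
    return new Resource({ "browser.user_agent": "stub-agent" });
  },
};

const resource = new Resource({ "service.name": "demo" }).merge(browserDetector.detect());
console.log(resource.attributes);
```

Versioning is the whole point of the split: the detector package can stay 0.x and break as the spec evolves, while the `Resource` type it produces remains stable for the SDK.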
D
The same names, yeah. Okay, that would allow us to actually...
D
I mean, I don't want to do that in this package. This package is a 1.0 package. I realize that you want to update it, but now that I realize it's based on an experimental specification, I think it's just not the right place for it.
A
A new package with all of these copied over there?
D
Yeah, I mean, if any of these... the environment detector, I don't know, the environment detector may be able to be stable, because that comes from... where is this? No, I don't think it does. Maybe it does; hold on, let me check the SDK spec.
D
Yeah, so the environment one should be able to stay, because it's in a stable specification document, but the browser and the process ones should move into an experimental package.
D
Yeah, I mean, I don't necessarily want to include all detectors in the main resource module anyway, because if you're not running on AWS, you don't want that one; if you're not running on GCP, you don't want that one. Okay, having them each in their own package is overhead; I get that, it's maintenance overhead, but from a versioning perspective it might be what makes sense.
B
And in the future, even if they all were stable today, we will have new sections added to the semantic conventions, like resource ones, which will be experimental at first when they are added. So we would face the same situation even if it weren't the case today. Keeping them separate for versioning, I guess, is the correct way to go, even though it's a bit silly, but yeah.
D
Yeah, I can't really think of another way to do it. Some of them are more important than others; I think that's fairly obvious. If you look at what we have, the process and the browser and the OS ones are certainly more important, from a built-in perspective, than, you know, which cloud you're in or something like that. But I just don't think that we want to.
A
Yeah, I still want to ask again: if we were to remove some functionality, or if we were to modify the behavior of a stable package, wouldn't you follow some process, let's say, and could we follow that here?
A
Do you want to mark all of them as deprecated except the new one, or leave the environment one?
D
Making a package per detector is what we have to do, unless the specification...
A
So far there really are only three detectors, right? Process, environment, and browser. Or are there more? I don't think there are. Do we have detectors for each of the resource conventions?
D
Here we go, yeah. So there's a couple of them in here. I think there's one for each of these.
B
Yeah, I think we have to, because of the versioning, you know. The APIs or the semantic conventions might become stable at different times, and then we will be facing, as I said... we wouldn't want to have them bundled together, because if one of them becomes stable, we have to split them up again, which basically means moving them across packages, which is worse.
A
All right, sounds good. So we'll start with creating a package just for the browser to begin with, okay, and we'll also mark this one as deprecated.
D
All right, next issue here. I added this; it's something that we've been working on for a while, but it has had some renewed focus. I don't really have a lot to say about this other than to draw it to everyone's attention from a review perspective. For those that don't know: currently, every instrumentation that gets enabled wraps require. So if you have 10 or 15 instrumentations, particularly if you use the auto-instrumentations-node package, you get a lot of them, and each instrumentation wraps require.
D
So
you
end
up
in
the
situation
where
require
is
has
been
wrapped,
like
15
or
20
times,
and
every
time
any
file
in
your
project
is
required.
All
of
those
wrappers
get
called
and
they
say,
do
I
need
to
hook
this
package,
yes,
no,
no,
okay,
and
then
they
call
their
own
original
function,
which
is
itself
a
wrapper.
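The overhead being described can be shown with a self-contained sketch. The `Module` object here is a stand-in for Node's module loader, not the real thing; the point is only the wrapper-stacking pattern:

```javascript
// Each instrumentation wraps the previous loader, so one require() call
// walks through every wrapper before reaching the original load function.
const Module = { _load: (name) => `module:${name}`, calls: 0 };

function wrapLoad(shouldHook) {
  const original = Module._load;
  Module._load = (name) => {
    Module.calls++;       // every wrapper runs on every require
    shouldHook(name);     // "do I need to hook this package?"
    return original(name); // delegate to the previous wrapper
  };
}

// Simulate 15 instrumentations, each adding its own wrapper:
for (let i = 0; i < 15; i++) wrapLoad(() => false);

Module.calls = 0;
Module._load("express");   // a single require...
console.log(Module.calls); // prints 15: all wrappers ran for one load
```

With hundreds or thousands of files required at startup, that multiplication is where the startup cost comes from, and it is what the PR discussed here avoids by hooking once and keeping a list instead.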
D
It works well; it drastically improves startup time. But there are some questions around how we're going to test this, particularly the disable functionality, because all of our test suites run in a single process, and enabling and disabling the instrumentation and instantiating a new one for testing purposes is broken. So the contrib repository will have a ton of broken tests.
D
If
we
merge
this,
that
would
all
need
to
be
updated,
so
I
would
encourage
people
to
look
at
this.
Look
at
the
proposed
Solutions
right
now.
None
of
the
proposed
Solutions
are
that
great.
So
if
somebody
has
a
great
idea
that
would
be
awesome,
so
I'm
just
trying
to
get
more
more
eyes
on
this,
particularly
for
that
testing
scenario.
C
Yeah, when I reviewed it I had a look; I just pasted a link in the chat. This is how Application Insights hooks everything: it effectively has a callback function, and when you register a hook, it also keeps track of the returned object so that the hook can remove itself. That would be the way to do it, but I don't know require well enough to say whether we could do that, so that when the instrumentation adds itself, we return an object which has a remove function, like this.
C
That
can
then
go
and
remove
itself
out
of
its
internal
lookup,
because
the
way
the
hook
stuff
works
here
is
effectively
once
it
only
hooks
it
once
and
then
it
just
has
the
list
of
hooks
that
it
just
enumerates
through
okay
yeah,
but
something
like
that
would
work,
but
so
I
don't
require
deeply
enough
to
because
it
looked
like
it
was
using
a
require
function
for
the
on
require
call
back
or
something
so
anyway.
One
possibility.
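The pattern being suggested, sketched with illustrative names (not the Application Insights or OpenTelemetry API): require is patched exactly once, registered hooks live in a list, and registering returns a handle whose `remove()` unregisters just that hook. That removable handle is exactly what a test would need to disable one instrumentation without unwinding nested wrappers:

```javascript
// Single shared list of hooks; the (single) require wrapper enumerates it.
const hooks = [];

function onRequire(name) {
  for (const hook of hooks) hook(name);
  return `module:${name}`;
}

// Registering returns a handle that can remove this hook from the list.
function registerHook(hook) {
  hooks.push(hook);
  return {
    remove() {
      const i = hooks.indexOf(hook);
      if (i !== -1) hooks.splice(i, 1);
    },
  };
}

const seen = [];
const handle = registerHook((name) => seen.push(name));
onRequire("http");       // hook fires
handle.remove();         // instrumentation disables itself
onRequire("http");       // hook no longer fires
console.log(seen.length); // prints 1
```

Disabling becomes an O(1) list removal instead of trying to "unwrap" a function buried under 15 other wrappers.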
D
I'll,
take
a
closer
look
at
that
before
I.
Don't
want
to
just
comment
it
with
no
context
so
I'll
take
a
closer
look
at
it
before
I.
Make
a
comment,
but
something
like
that
could
definitely
work,
particularly
because
it's
only
for
testing
like
I,
don't
expect
it
to
be
used
by
end
users,
and
we
would
probably
go
to
at
least
some
effort
to
hide
it
or
discourage
its
use.
D
Okay,
if
there's
no
particular
questions
about
that
I'll
just
move
on,
because
I
think
without
reading
the
pr
there's
not
a
lot.
We
can
really
talk
about
here.
D
Sandbox
PR
needs
additional
review,
you're
already
kind
of
brushed
on
this
nav.
Is
there
anything
in
particular,
it's.
C
D
All right, anyone have another topic before we move on to bug triage, everybody's favorite part? All right. I believe there are actually none in the main repo, so we can move right on to contrib, which we haven't done yet. There aren't that many, but we'll see what we can do here. All right.
D
X-Ray trace ID generator: this is definitely Will and Nathaniel again. Let me open the...
C
I actually think I've done the fix for this already, and it's been merged in, yeah. So I think we can close this now; I just haven't done that, yeah.
D
I
noticed
so
it
looks
like
this
is
the
same
problem
we've
had
before
yeah.
It
looks
like
you're
in
the
habit
of
adding
the
the
yeah
to
the
the
commit,
but
it
actually
doesn't
automatically
find
it
here.
Unless
you
specifically
say
in
this.
D
"Fixes #1136", right, and then it would automatically close it, yeah, for what it's worth. Or you could just manually connect it.
D
Yes, okay. Can you comment on this? Because I think where she's at is that she's assigned and can't reproduce it. So either comment and say you want to work on it, or, you know, tell her what you found.
D
"Missing dependencies for instrumentation when upgrading to 31", and it's about a bump from 27 to 31: only the version changed, and the app is not building.
B
Licensing, but that's now merged.
B
It's not released, though. I haven't checked the issue, whether all of the packages he mentions are now fixed, but you can assign me and I'll check it.
D
Only when it fails, which it doesn't every time. Let's see if we can...
D
I remember this one; the maintainer of this package is gone. I missed it because I forgot to assign myself.
C
You're going to work on this? Yeah, yeah. I was going to ask the guy to create a PR to fix it, but I can do it myself. Okay.
D
Oh, this was... I remember this one. This is an old one. I think this ended up being an async resources bug.
D
And
it's
been
15
minutes,
I,
don't
think
we
need
to.
We
can
finish
the
rest
of
these
next
week
we
made
quite
a
bit
of
progress
here,
though
11
left
untriaged.
D
I
guess
that's
it
for
the
agenda.
Does
anyone
have
anything
that
they
would
like
to
bring
up
before?
We
close
the
call
here.
C
Yeah, so I merged in the script one and then ran the script to create the new PR, and it's also failing with another CLA issue. But tracking down that original commit, the person had signed the CLA, so obviously they're no longer there. I put links in my note there.
D
You know, the same: I had similar issues, not this specific one, when I merged the API into the SDK, and I handled it by making myself the author and then adding them as Co-authored-by trailers.
D
But
if
you
don't
want
to
modify
the
history,
I
can
open
another
ticket
with
cncf
about
this.
If
you
want.
C
Well,
as
I
I
have
the
ability
to
bypass
the
CLA
for
this,
my
gut
feeling
is
because
they
had
signed
the
CLA
at
the
point
of
that
commit.
It
should
be
fine,
but
I'm
happy
to
wait
if
you're
not
open
up
with
the
CLA
first.
D
Yeah
just
to
make
sure
usually
so
what
happened
on
all
of
them
I
there
were
like
six
or
seven
commits
on
the
API
that
had
a
similar
problem
and
they
were
all
because
the
email
on
the
commit
were
some
Corporate
email
and
the
user
changed
employers,
so
they
removed
that
email
from
their
GitHub
account.
So
they
hadn't
even
unsigned
the
CLA.
D
They
had
just
removed
their
email
from
the
GitHub
account,
so
it
could
be
as
simple
as
that
I'll
reach
out
to
Chris
at
the
cncf
and
and
see
you
know
if
he
says
it's:
okay
to
just
bypass
the
the
check.
Because
of
that,
then
they
should
be
good
to
go.