From YouTube: 2023-03-15 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
C
Did you get that drawing of yourself? Is that a real drawing, or AI-generated, or done the old-fashioned way with Photoshop?
B
Yeah, sort of Photoshop. It was a picture taken a while ago; I think it was paint.net I used. I effectively told it to create a pencil sketch. I think it's actually my LinkedIn picture at the moment too, from quite a while ago now. I don't look quite that young anymore, unfortunately.
C
Hopefully it's not just us, I think. With the time change, we may have some Europeans that possibly either don't realize or have conflicts. Yep, not just Europeans; people outside the US.
B
Yeah, I think we'll see. Actually, yesterday one of our internal team members from France said they change in two weeks.
C
Yeah, most of Europe changes in two weeks. This is actually the best two weeks of the year for me, because my team is based in Europe. I'm one of the only American team members, and because we only have limited time zone crossover, I have 8 a.m. meetings almost every day. This week and next week they are all at 9 a.m. instead, so I get an extra hour.
B
Yeah, that's good. I know when I was in Australia leading the team, but we were doing stuff for American Express here in the US, that was a nightmare, especially considering I had a weekly sync with the UK, the US, and Brazil, I think. So that was fun.
C
All right, all right. Well, I guess we can probably get started.
C
Is this just me, or does this Google Doc look different? It doesn't matter. Okay, oh, what happened here? It looks different now.
B
A bit, today, yes. So, the sandbox I'm deeming as ready for branching. I've tried to put details in here, but I'll jump through them. That's a link to the sandbox. If you look in main now, it has all the packages merged over as of five days ago.
B
So I think the pkgs folder is where everything is. You can see I've got API, core, context, some basic browser detection. I've only brought over the OTLP base exporter; at this point there are no other exporters. The instrumentations folder includes the basic instrumentation package, as well as some of the contrib instrumentations for web, which are in the web folder. So instrumentation is just the instrumentation stuff.
B
As part of merging it over into this folder, the script does attempt to enable tests for node, browser, and worker. There are a couple where the worker tests don't actually work, so they're disabled, and there are a couple of other web ones where the node tests don't work, so they're disabled as well. So that's affected the trace-base and trace-web SDKs.

I think it just renames the packages. If we want to move the packages around it'll be a little bit painful, but not impossible. Effectively, the paths are defined in contrib.ts, so we can do a git move locally in the branch, push it, update the config ts, and then all other merges will go to whatever the new structure is, to keep the history. We should have history for every single file that didn't have a merge conflict when merging JS and contrib.
B
There are a couple of things, like package.jsons and some of the base READMEs, where git decided that the contrib version overwrote the JS version. The script actually merges into the auto_merge folder. So if you go into the auto_merge folder here, you'll see there's a contrib package json: these are all the other packages from JS and contrib that have not been migrated, so everything's in here, like all the node stuff as of five days ago.
B
Okay, so what I've asked for here is: what additional packages that I haven't brought over do you want? I've probably got one from Martin, to bring over the new API and events, which I'm trying to do at the moment, except I'm also enabling node, browser, and webworker tests.
B
Okay, and what branches or work streams do people want to play with? The current work streams that we've identified as part of the RUM SIG: I'm going to be driving a minification one, which is the next one down under the current branch work stream plans. Effectively, the minification one is just going to be trying to minify the existing code to make it work on the web, to see if we can get it viable enough for the web.
B
This will also advantage node, because if this works and we push it back to JS, node will become smaller because the minification will work. Martin's proposed that, instead of continuing to do the events and logs prototyping in JS, we actually create a branch in the sandbox and do it over there, because we've got everything that we can play with. And Santosh wants to start the process, which we've talked about previously before creating the sandbox, of effectively creating a web-only JS by having the basic API stuff and core sitting in the sandbox.
B
It means we have access to that, but the plan is to effectively, probably, dump a lot of stuff and just conform to the interfaces. So I know, Daniel, you were not keen on this previously, which is why I've got this as a bullet point for the meeting, because if we do end up going down this path, it means, if it's the same repo, we end up with a duplicate set of code.
B
The whole point of the minification branch was to try and figure out whether we can or can't get it to work, and then we go from there. One advantage of separating node and web would be the environment folders we have in each package, which say node and browser. We get all the complications of the package.json, and we get errors every now and again because people forget to tell their package manager they want the browser version; that would go away, but it does mean extra work. Again, in the sandbox it'll be an investigation to see if it's viable, because at the end of the day we want instrumentations built from contrib and JS to work regardless.
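For context, the package.json complication being referred to here is the `browser` field override mechanism; a minimal sketch (hypothetical package and file names, not an actual package from the repo) of how a package declares separate node and browser builds:

```json
{
  "name": "@opentelemetry/example-package",
  "main": "./build/src/index.js",
  "browser": {
    "./build/src/platform/node/index.js": "./build/src/platform/browser/index.js"
  }
}
```

Forgetting this mapping, or using a tool that ignores it, is exactly the class of error mentioned: the node variant gets pulled into a browser build.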
C
First, the minification one seems obvious to me. The whole point of the sandbox repo was to be able to make sweeping changes to the repo code without affecting the production version, so that totally makes sense. I am curious why events and logs would be done in the sandbox, since it's not like it requires drastic changes to many other packages. It seems like an unnecessary bifurcation of the resources we have at JS.
E
Yeah, I just want to clarify: the logs SDK definitely should continue in the JS repo, because that's separate from browser.

Putting it here in the sandbox would allow us to just put together a prototype of all the instrumentations that we have defined semantic conventions for, faster, so we have something to demo or show before we create separate packages in contrib and finalize the events SDK.
C
Yes, as you said, I'm apprehensive for similar reasons. I'm worried because we already have limited resources; we have a hard enough time getting things reviewed when we have two repos. That was a big part of the reason we moved the API back into the main repo, and splitting work onto another repo just seems like it's going to make that particular problem worse, not better. And when it just says "not reusing everything," that's very hand-wavy.
C
So I wonder what is being removed where it will still be specification-compliant, other than the environment stuff. We already have the SDK web tracing package, and I wonder why the work can't just be done there to strip out the unneeded parts, unless this SDK is meant to be not specification-compliant. I'm a little hazy on what this means.
B
So this is really an investigation. I think Santosh was going to go and drive this: go off and figure out what can be done, and then from there, if there's anything that can be contributed back to JS and contrib, that would be the whole point of the sandbox. We don't, at this point, intend to publish anything out of the sandbox. So in terms of what does it mean?
B
We don't know. We started talking about it last week in terms of creating a minimal version; I've just been reinforcing that we at least want to keep interface compatibility. There are a couple of things that cause issues on the web.
B
You've already called out some of them, like the environment stuff, which just doesn't make sense unless we want to start having meta tags. Globals, at least for me, are a problem for my internal teams, because I have teams that effectively have multiple versions of SDKs all running on the same page, because different teams contribute different components, which is a big problem for OpenTelemetry usage.
C
Okay, that's a good justification for it. I just...
B
Yeah. In terms of the big OpenTelemetry picture, this would effectively be a client SDK. I think Ted brought that up in the maintainers meeting several weeks ago now, saying that when we have client SDKs... well, this is probably the first attempt to look to see what that might look like.
C
Okay, I mean, I am curious to see what minimal means and how much can be taken out. I completely agree that the OpenTelemetry deployment right now is way too big for web.
C
It was very focused on node in the early days, and web support has been sort of just tacked on. Up until this point it has been treated... maybe saying it has been treated as a nice-to-have is taking it a little bit too strongly, but it certainly hasn't been treated as first-class.
C
Yeah, so I am interested in seeing what can be done there. I just want to be cognizant of the fact that we have limited resources, and I'm just a little bit wary of splitting too much into a different repo.
B
I don't want to go through big processes of, you know, X number of reviewers. Effectively, whoever owns the branch... I'm going to treat branches as having owners for that work stream and just let them loose, which is going to be fun when it comes to branch protection rules, but we'll deal with that as we start creating branches and figure it out.
B
Yeah, okay. So effectively the PR that I have at the top is really just merging JS and contrib into the staging branch that I've got. All that is doing is bringing over history. There's a separate script that merges that staging branch into main, into the format that we just looked at on the screen. The whole point of the main branch is that it should just be there to effectively have everything brought over and compiled. It does a bunch of stuff, like prepending "sandbox-" to every package name.
B
We don't really want that script to have merge conflicts, so effectively the main branch should not have any code directly added to it to fix bugs. If there are bugs in the code, they need to be fixed in JS and contrib, and then the merge scripts will take care of bringing that in. All work happens in workstream branches, which means your workstream branches have to perform a git merge from main to pick up any changes that get consumed. And that's probably it. Okay.
B
Okay, so in terms of letting me know which packages you want to bring over: issues in the sandbox would probably work, or Slack. Actually, probably issues on the sandbox would be better. I've got a couple of weeks of internal stuff I'm going to be sidelined with, a little bit more than I'd like.
C
The next point here is: I did release the next version of the API, SDK, and experimental packages. Mark found a resource backwards-compatibility problem. Mark, if you want to explain this a little bit?
D
Sure. So I noticed that in the contrib repo the build started to fail, and that was because of this PR that we had published recently, where we added this async resource magic that basically moves the await to the exporter, or rather to the SDK.
D
Basically, there have been some changes that mean you would assign the newly introduced resource interface to the resource type, and the interface has a few optional parts in there, which obviously would cause a compilation failure in TypeScript. So the PR that I have opened now basically fixes that: I just went in and made the things optional. I'm not sure if there's a better way to do it; I've played around with a few approaches, and that's the only one that worked out for me. So if anybody has a better idea, please feel free to head over to the PR and comment on it. I think it's a rather high-priority item for us to get this fixed, because all the existing resource detectors will basically stop compiling the way it is now.
C
So there are two things that I want to point out. The first is that this is just another version of an issue we've already had in the past, which is referencing classes directly in types.
C
Because
the
detector
type
returns
this
resource,
which
is
a
class,
not
an
interface.
This
is
sort
of
the
reason
that
we
had
this
problem
and
I
just
wanted
to
reiterate.
Using
using
interfaces
and
types
where
possible
in
the
public
interfaces
should
be
policy
moving
forward.
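The class-versus-interface pitfall being discussed can be sketched in a few lines of TypeScript. The names below are hypothetical, not the actual SDK types: the point is that a public type referencing a concrete class couples every external implementor to that class's full member list, while an interface with optional members keeps old code compiling.

```typescript
// Hypothetical sketch of the compatibility issue discussed above.
// If the public surface references a concrete class, adding any member
// to the class breaks every external object that was typed against it.
class Resource {
  constructor(readonly attributes: Record<string, string>) {}
}

// Using an interface with optional members in the public surface instead
// means older plain objects (e.g. from existing resource detectors)
// still structurally conform after new fields are introduced.
interface IResource {
  attributes: Record<string, string>;
  asyncAttributesPending?: boolean; // newly added, optional
}

// A detector typed against the interface accepts both a class instance
// and a plain object produced by an older detector.
function detect(make: () => IResource): IResource {
  return make();
}

const fromClass = detect(() => new Resource({ "service.name": "demo" }));
const fromPlainObject = detect(() => ({
  attributes: { "service.name": "demo" },
}));
```

This is the policy in a nutshell: the class stays an implementation detail, and only the interface appears in exported signatures.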
C
Can we consider that a bug fix? My gut feeling is yes, but if there's anybody that believes that is not acceptable, now is the time to speak up. I'd like to hear opinions, because I'd like to fix this as quickly as we can.
B
Yeah, I agree with that. For both Application Insights and the internal SDKs, I did this as well, and I followed up days later with a bug fix to fix what I described as the inadvertent breaking change.
C
And I think that we should treat it the same way. I also think so. Mark already has a PR open for this, and I would encourage people to please look at it. Where are we... please look at it and review it, so that we can get it merged and released as quickly as possible and we can move forward. Mark, what did you do in the changelog here? You marked it as a bug?
D
Yeah, I marked it as a bug fix. I was considering putting it into both breaking change and bug fix; I'm not sure. I think we will also have to edit the release in GitHub to tell people that that one was breaking, so that when they're scrolling through, they see it.
C
Whatever the PR is... this one. I will move it into a section that says breaking, and I'll deprecate the 1.10.0 when the 1.10.1 is released, and I'll put that as the reason. That sounds good.
C
So please review this PR, and let's get it released as quickly as we can.
C
Okay, I guess we'll move on to the exponential histograms, which I'm sure nobody is surprised to be hearing about. We did merge the part-two PR here, so there's only the third one left, which is really just making public all of the components that have already been merged, so it should be relatively straightforward to review. I saw that Matt updated it yesterday or last night, so, Matt, can I then assume from that that it is ready to be reviewed?
C
Martin, it looks like you have a question here: are plugins in contrib experimental, or what's the guidance on making breaking changes? AWS Lambda with a breaking change?
E
Yeah, so I opened this PR a few weeks ago that's based on a change in the spec, but I guess it is a breaking change, because it changes the behavior of the plugin. But the plugin is on version, I think, 0.35 or something like that. So I'm just wondering how to handle a breaking change like that in these plugins.
C
Yeah, Will Armiros, who I think does not normally join these calls, so he's not here today, but I would ping him. If he's non-responsive, then I would say the JS maintainers would make a decision there. In this particular case, since it comes from the spec, I think we would likely allow it, and I think Will will also likely say that's fine.
E
Sorry, I didn't mean to interrupt. He did actually comment yesterday and said that it's a breaking change and that he wasn't aware of the spec change, so I'm just not sure where to go. I guess it's at his discretion.
C
Interesting. Yeah, we typically leave these at the discretion of the component owners; that's why we have component owners. Since it comes from the spec, it doesn't seem like he's blocking it. I don't know.
C
I guess we'll see what he says. Like I say here, I would lean toward the side of allowing it, but I guess it's up to him.
F
Hey, this is Purvi. Jamie is out sick today, so it's just me, but we've been working on that. I think there's another link in there; there's an in-progress PR for ESM support for all of our auto-instrumentation.
F
So we had a little bit of time to work on it, and we have a branch that takes this work and adds on a little bit to it, where we believe we do have a solution. I've tested it out with HTTP instrumentation as well as Express instrumentation, and it is working. I had to do some things that don't feel great to get it to work, but I was wondering what the best way is to push this forward.
C
Yeah, I would open a new PR. Valentin doesn't have as much time as he used to to work on the project, and I wouldn't be confident that he will be very responsive on this PR on a day-to-day basis. I think he would be very happy for someone else to take over this work, and I don't think he'll feel like you're stepping on his toes or anything, if that's what you're worried about.
F
I was just worried about, one, causing confusion, and two, stepping on toes or anything. But yeah, I think we have a solution that works, so I'd love to get some feedback on whether it's viable or not.
C
Yeah, nobody has interacted with the PR since November of last year. I am not particularly worried. I think that just making a new PR, where somebody actively working on it has control, is probably the better way to go.
F
I guess I have one question about the require-in-the-middle singleton. We've made it a singleton so that it doesn't loop through, I guess, all of the packages multiple times for every instrumentation. Was that kind of the thinking behind it?
C
So we used to have a performance problem. I don't know how familiar you are with the way that require-in-the-middle works under the covers, but basically it patches the require function, unsurprisingly, and every instrumentation that loaded was calling require-in-the-middle. So require was wrapped not once for every instrumentation that actually loaded, but once for every instrumentation that was even installed in the first place. And then, for every single one of those wrappers: when you call require, it invokes the wrapper instead of require; the wrapper checks to see whether this module is intercepted or not, and then calls its delegate. That was happening every single time require gets called, which, during a production application's startup, is many, many times.
C
That was making startup for some larger applications go from literally seconds to minutes. So by making it a singleton, we now have one wrapper (or sometimes more than one, but one per version of the instrumentation package that's in your node_modules tree, so two to three at most, most of the time), which drastically improved the performance. That one wrapper looks at all of the modules, instead of each module having its own wrapper.
F
That's really helpful. I kind of got the impression that I had, I guess, a partial picture from what I saw, but it's helpful to know. Basically, we're using import-in-the-middle as the analog to require-in-the-middle here, and import-in-the-middle works quite a bit differently to require-in-the-middle, in that it's not super singleton-friendly.
F
It creates a proxy object of a module, essentially. So the singleton itself doesn't work as a thing; my solution doesn't work exactly as a singleton, but I figured we could put it up, and then maybe, I don't know, have some ideas for how to make it better.
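The proxying idea F mentions can be sketched like this. Again a hypothetical illustration, not import-in-the-middle's actual implementation: ES module namespace objects are immutable, so instead of patching one shared require() function, each hooked module is exposed through a Proxy whose property reads can be redirected to instrumented replacements.

```typescript
// Hypothetical sketch: expose a module namespace through a Proxy so that
// selected exports can be swapped for instrumented versions, without
// mutating the (immutable) ESM namespace object itself.
function proxyModule<T extends object>(ns: T, overrides: Partial<T>): T {
  return new Proxy(ns, {
    get(target, prop, receiver) {
      if (prop in overrides) {
        return overrides[prop as keyof T];
      }
      return Reflect.get(target, prop, receiver);
    },
  });
}

// Example with a fake module namespace standing in for an imported module.
const fakeHttpModule = { get: (url: string) => `real:${url}` };
const instrumented = proxyModule(fakeHttpModule, {
  get: (url: string) => `traced:${url}`,
});
```

Because each hooked module needs its own proxy, there is no single shared wrapper to make a singleton of, which is the friction being described.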
C
Yeah, so with import... the loader API was what I was trying to think of. Because you can't actually replace import itself (it's not a function that can just be wrapped), it's not wrapped once per plugin, so I think you don't have the same performance degradations if you're not using a singleton pattern. So it should be okay.
C
I would say, if you're curious, you can just add a loader with, say, a hundred dummy instrumentations, you know, just wrap dummy packages, and then try to start up a larger application and see if it affects the performance of the startup. I would expect that it shouldn't, at least not drastically.
F
The other piece of it is that right now we don't have ESM support at all, so we could add this, see what the response is, and then make it better, because at least it works, compared to not working at all.
C
Yeah. Working, even with a performance degradation, is an improvement over not working at all; I agree with that. I would say, if there is a performance degradation, we should try to identify it early, because it became a much bigger problem with require than we expected.
C
It sort of grew over time, where we kicked the can down the road, and then one day we realized we were affecting startup time not just by like 10 or 20 percent, but by like ten thousand percent, which was obviously not acceptable and should never have gotten that bad to begin with. So I would say, if there is an issue, it would be great if we could identify it early, but my gut feeling on this, based on what I know about import-in-the-middle and the loader API, is that you...
D
Yeah, that seems to be another, more or less, breaking change. Basically, it's using the fs promises thing to detect the resources in the machine ID detector.
D
So the problem there is they're using it with an older version than Node 14; they're using it with Node 12, which we stopped supporting quite some time ago, and that is the breaking change for them. We don't test it for Node 12 anymore, and I think we had that discussion quite a while ago, where we removed Node 12 as one of the supported engines and didn't consider that a major version bump, and now it's actually breaking them.
D
I double-checked that one already, and I think we may be able to support, or at least fix, that particular issue for them by importing the fs promises thing differently. I haven't tried it out yet, but I have some changes locally that we could apply to maybe make it work.
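A sketch of the kind of import change D is alluding to (an assumption about the fix, not the actual patch, and the function name is hypothetical): the `fs/promises` module specifier only resolves on Node 14+, while the same API has been exposed as the `promises` property of the main `fs` module since much earlier, so importing it off `fs` keeps older runtimes from failing at load time.

```typescript
// Importing the promises API off the main fs module instead of via the
// 'fs/promises' specifier, which does not exist on Node 12.
import * as fs from 'fs';

// instead of: import { readFile } from 'fs/promises';
const { readFile } = fs.promises;

// Hypothetical helper in the spirit of the machine-id detector discussed:
// read a small file and return its trimmed contents.
async function readMachineId(path: string): Promise<string> {
  return (await readFile(path, { encoding: 'utf8' })).trim();
}
```

Both forms expose the same promise-based functions; only the module resolution differs.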
C
I,
don't
I,
don't
really
know
what
to
say
here.
C
I
guess
that's
it
for
now
I'm
going
to
remove
the
bug
label
because
it's
an
unsupported.
C
And then, Mark, you said you looked into fixing it a little bit. Do you want me to assign this to you, or should I just put this up?
D
Yeah, you can assign it to me. I think I just need to test whether it actually works or not. The problem is I can't install all the packages that we need there, because the dev dependencies don't install anymore; they're not supported on Node 14.
C
Yeah, or Node 12. I'm a little bit interested in their setup, but I don't want to get too deep into the weeds there.
D
Actually, regarding this Node engine support: I think we agreed last time that we will drop support for Node 14 in the future too, at some point, because support for that one ran out quite some time ago as well. So April is coming up, and that may be our opening to release a 2.0 of the SDKs in the future.
C
Yeah, I was thinking the same thing. Up until now we have really sort of avoided the 2.0 discussion, but there's really no reason for that. Releasing a 2.0 is a natural part of the development life cycle and something that we're going to have to do eventually, and there are a lot of mistakes, or historical decisions, that we've made that I think we want to change in the 2.0 anyway. So it might be time to start that discussion.
D
Yeah, there are a few things, as you already mentioned, with the resource stuff that we talked about earlier, and other things that, if we could redo them, we probably would do differently. I guess having that opportunity to move to 2.0 and just fix all of that would be great.
C
So this is someone trying to use esbuild to build the gRPC exporter.
D
This is similar to that. There are issues open for webpack and rollup, I think, and we've designated those feature requests in the past.
C
Yeah, I'm going to mark it as a feature request as well, but I think this is something that's coming up more and more, and maybe something that we should actually address. The core of the issue is that we're loading the protos at runtime. There's no way to make TypeScript inline non-TypeScript or non-JavaScript files, so we can't have it inline the protos, which would be ideal, but there may be some other way to do it.
D
I have looked into statically generating the code for the gRPC client, because grpc-js just generates the code dynamically based on the protofile that you pass into it, or at least... I'm not sure what it's called, this proto-loader thingy. And I've made some progress with that. The problem with statically generating is that we basically end up with something that is incompatible with the current OTLP transformer that we have.
D
We have this OTLP transformer package, so we would have to go through it again and then transform it again. I think I may have a workaround, but I'm still looking into it a bit more, in between blocks and stuff.
D
It's one big issue, because I think it may actually be a duplicate of the webpack and rollup thing.
D
There should be a PR already open for that one.
D
Yeah, I'm kind of wondering why it's not linked. Let me check; I have seen something like that, or at least...
C
Yeah, well, I don't want to review this PR live, but it seems at least that they have opened a PR. So what was the 366?
C
Yeah, there are just several of these; I don't think we need to go through all of the duplicates right now. Information already requested, and I think this one has no update after the information was requested.
C
"Resource detector AWS does not compile on main." This is already the issue that... yeah, so I guess I'll leave this open for now. It's not a bug, but we can leave it open until it's fixed.
D
I will actually link the issue in my PR, so that it gets closed automatically.
C
That
container
ID
set
wrong,
we're
on
the
official
otel
demo
cluster
view
on
T4
container
ID,
HTTP
actual
ID
for
the
container
and
only
the
ID
right
now
we
have
Dot
scope.
We
requested
information
on
this,
so
I
guess
we
actually
do
truncate
the
strings
as
long
as
there's
four
characters.
You
can
remove
the
container
Adventures
again.
C
"Azure Service Bus does not propagate within its telemetry": still just waiting for a reply. Okay, I think that's good timing; the meeting is ending right now anyway, so I guess that's it. There's one in the chat; we need to drop that, sounds okay. Thank you, everybody, for your time. I will not be around next week, but I believe, Mark, you will be here running the meeting, right? Yeah, I'll be here. All right, sounds good, and I will see everybody in two weeks.