From YouTube: 2022-09-21 meeting
cncf-opentelemetry@cncf.io's Personal Meeting Room
B
Yeah, I know, it's coincidental, I promise; I didn't plan it or anything. I actually made those; they're acoustic panels I made myself. It's a small room, and they help a lot with the echo.
C
Okay, I think we can probably get started. Let me share my screen here.
A
Yeah, any updates?
C
Did you see... you must not have seen my comment? It was just, like, five minutes ago.
C
Since the user had signed the CLA when they made the commit, I don't imagine it'll be a problem, but it's good to get an official thumbs-up before doing that. I think, yeah.
C
Let's see, the first item I put on there, for those that didn't notice: we released 1.7.0 and the 0.33 SDK packages. I don't remember exactly when, Thursday or Friday, and that seems to have gone well; I haven't heard any complaints about it, at least. Right now I'm thinking this will be the last release before the metrics GA; the main work for the metrics GA is completed.
C
Okay, I wanted to quickly go over an overview of the metrics work we have to do after the metrics GA. I just pulled out four sort of high-priority issues here. The highest-priority one is the Prometheus exporter, which does not currently support resources.
C
Nor
does
it
support
scope,
attributes,
scope,
attributes
were
just
added
to
the
new
Proto,
so
I
think
that
that's
not
as
important,
but
the
resource
attributes
certainly
need
to
be
supported
in
Prometheus
before
we
can
go
to
1.0
I
believe
Mark
is
planning
on
working
on
that,
but
not
confirmed
yet
yeah.
C
The other three are also important, obviously. We have not started working on exemplars at all, because that was pushed until after GA; similar with the high-resolution histogram, as far as I know nobody has been working on that. And this item here, dropping unused attributes, is waiting on a specification clarification, so we can't work on it immediately. But the issue here is: if you create instruments with high-cardinality attributes, the in-memory metric streams are never forgotten.
C
So every time you create a new one, you bloat the memory used by the SDK. We're hoping to have a method to drop unused attributes after some period of time, or something like that, but there's obviously a lot involved there.
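The eviction idea discussed above is not yet specified, so any implementation is speculative. As a purely hypothetical sketch (none of these class or method names come from the actual SDK), one way to bound memory for high-cardinality instruments is to timestamp each attribute set's stream on write and periodically forget streams that have been idle too long:

```typescript
// Hypothetical sketch, not the actual SDK implementation: drop metric
// streams whose attribute set has not been written to recently, so
// high-cardinality instruments stop growing memory without bound.
type Attributes = Record<string, string>;

interface StreamEntry {
  sum: number;
  lastUsedMs: number;
}

class EvictingSumStorage {
  private streams = new Map<string, StreamEntry>();

  constructor(private maxIdleMs: number) {}

  // Serialize attributes into a stable key so equal sets share a stream.
  private key(attrs: Attributes): string {
    return JSON.stringify(Object.entries(attrs).sort());
  }

  record(attrs: Attributes, value: number, nowMs: number): void {
    const k = this.key(attrs);
    const entry = this.streams.get(k) ?? { sum: 0, lastUsedMs: nowMs };
    entry.sum += value;
    entry.lastUsedMs = nowMs;
    this.streams.set(k, entry);
  }

  // Called periodically (e.g. on collection) to forget idle streams.
  evictStale(nowMs: number): void {
    for (const [k, entry] of this.streams) {
      if (nowMs - entry.lastUsedMs > this.maxIdleMs) {
        this.streams.delete(k);
      }
    }
  }

  size(): number {
    return this.streams.size;
  }
}
```

The hard part the spec clarification needs to settle is what eviction means for cumulative aggregations, since forgetting a stream and later recreating it resets its start time.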
C
So
this
is
sort
of
inherent
to
the
way
the
SDK
specification
works
right
now,
it's
not
a
bug,
but
it
is
something
that
we
will
need
to
be
aware
of
and
hopefully
affects
us
coming
down
the
line.
C
Next order of business: the protocol had a release, version 0.19. This strips out all of the fields with the instrumentation library names in favor of the ones that use the scope names. It shouldn't be a problem for us, since we deprecated those fields and started using the new ones quite a while ago, but this will probably be considered a breaking change, so it would be nice to do it before releasing the 1.0 metrics exporters, just to make sure that we're not breaking any semver guarantees.
C
It
also
adds
scope,
attributes
which
are
non-identifying
attributes
that
describe
the
scope,
I
think
right
now,
the
only
one
in
use
or
the
only
one
even
being
discussed,
is
the
short
name
for
Prometheus,
but
that's
not
even
merged
yet.
So
there
are
no
scope
attributes
in
the
specification,
so
it's
not
the
highest
priority
update,
but
we
will
want
to
add
support
for
that
sooner
rather
than
later,
and
it
also
adds
partial
success
responses.
C
So
if
you
have
some
otlp
export
to
your
back
end
and
if
you
export
100
metric
points
and
98
of
them
are
accepted,
but
two
are
rejected.
Instead
of
just
having
a
400
response,
we
can
now
say
most
of
these
were
were
accepted,
but
two
of
them
were
rejected
with
a
reason,
mostly
important
just
for
logging
purposes,
since
we're
not
doing
retries
on
those
anyways
I'm,
not
sure
if
anyone
has
started
work
on
the
Proto
update
yet
or
not
or
if
there's
an
issue
for
that.
C
But
if
anybody
wants
to
do
that,
please
create
an
issue
or
reach
out
to
me
and
I'll.
Do
it
any.
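To make the partial-success behavior concrete, here is a rough TypeScript sketch of how an exporter could surface such a response in its logs. The interface shape mirrors the proto's partial_success field (a rejected count plus an error message), but the names here are illustrative, not the actual exporter code:

```typescript
// Illustrative sketch of handling an OTLP partial-success response.
// Field names loosely follow the v0.19 metrics proto; this is not the
// real exporter implementation.
interface ExportMetricsPartialSuccess {
  rejectedDataPoints: number;
  errorMessage: string;
}

interface ExportMetricsServiceResponse {
  partialSuccess?: ExportMetricsPartialSuccess;
}

function describeExportResult(res: ExportMetricsServiceResponse): string {
  const ps = res.partialSuccess;
  // An absent (or zero-rejection) partialSuccess means full success.
  if (ps === undefined || ps.rejectedDataPoints === 0) {
    return "export succeeded";
  }
  // Rejected points are not retried; the reason is only logged.
  return `export partially succeeded: ${ps.rejectedDataPoints} data point(s) rejected: ${ps.errorMessage}`;
}
```

As noted in the meeting, nothing needs to change about retry behavior; the value is purely in the diagnostic message.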
C
Nope? Okay. Next is this sort of ongoing work merging the API into the main repo. The PR that removes all of the files from the API repository is merged, so I opened the next one, which adds the API into the monorepo. This is probably the most important one in terms of getting it right, so I would appreciate some reviews on this.
C
It's mostly exactly what you would expect; hopefully there are no surprises in here. If there are any surprises, that's not a good thing. I did add it to the browser and web-worker tests, but right now the web-worker tests are failing. I'm not entirely sure why, so I have to look into that, but I expect it's probably something to do with the way that I set up the tests, not any problem with the API itself.
A
As you know, that was in relation to rollup. Forget that; it was the way it was patching the way require.js worked. But yeah, it's fun: as you see, it has all the typing issues, because there's a different set of linting rules between the two, which was painful.
C
Got
it
okay,
this
one
is
I'm,
not
entirely
sure.
What's
going
on
here,
I'm,
not
sure
if
you've
ever
seen
this
it
just
says
like
zero
tests
ran
right,
zero
success,
zero
success
and
then
it
fails
with
exit
code.
One
I
assume
because
there's
no
tests,
because
not.
A
Right, yeah, I don't remember that one, but that's probably because the spec that you have for the tests isn't finding anything. So probably the webpack definition, let's say.
C
I assume it's something along those lines. It's got to either be... let's see, this index webpack worker file, yeah, one...
A
Nine
is
probably
the
is
it
my
name?
No,
that's
just
your
normal
build
there's
somewhere,
where
you
define
the
location
of
where
your
test
files
are,
and
it
was
slightly
different
for
the
webpack
worker,
because
it
runs
in
trauma.
Was
it,
but
it
doesn't
run
in
camera.
I
forget
which
one
yeah.
A
Yeah, it'll be something to do with how it's defining where the files are.
C
Yeah, I'm sure that's it. I just haven't had a lot of time to look into it yet.
C
All right, our first real discussion topic of the day, I guess: this PR, which implements the require-in-the-middle singleton. Is anyone from AWS on the call today? Looks like maybe not; bummer. So this PR, for those that don't know, changes the way the instrumentation works.
C
Currently
every
instrumentation
wraps
require.
So
every
single
time
in
any
application
require
is
called.
Every
single
wrapper
is
called.
So
if
you
have
nine
instrumentations,
it's
wrapped
nine
times
and
calls
down
the
stack,
which
obviously
adds
a
huge
overhead,
particularly
to
application
startup
and
increasing
it
from
a
handful
of
seconds
to
a
handful
of
minutes
in
some
cases,
which
is
obviously
not
good.
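The singleton idea can be sketched in a few lines. This is a simplified stand-in, not the code from the PR: instead of each instrumentation installing its own require wrapper, one shared hook keeps a registry of module name to patch functions and dispatches each loaded module only to the patches registered for it:

```typescript
// Minimal sketch of the singleton pattern (hypothetical names): one
// shared hook replaces N stacked require wrappers. Each require() does
// a single registry lookup instead of calling through every wrapper.
type PatchFn = (exports: unknown) => unknown;

class RequireHookSingleton {
  private patches = new Map<string, PatchFn[]>();

  // Each instrumentation registers its patch once at setup time.
  register(moduleName: string, patch: PatchFn): void {
    const list = this.patches.get(moduleName) ?? [];
    list.push(patch);
    this.patches.set(moduleName, list);
  }

  // Called once per require(), regardless of how many instrumentations
  // are installed; only matching patches run.
  onRequire(moduleName: string, exports: unknown): unknown {
    for (const patch of this.patches.get(moduleName) ?? []) {
      exports = patch(exports);
    }
    return exports;
  }
}
```

With nine instrumentations registered, an unrelated require still costs one map lookup rather than nine wrapper invocations, which is where the startup-time win comes from.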
C
The problem that we're running into right now is that it does not work with the AWS Lambda instrumentation, because that instrumentation intercepts a file based on its absolute path, which is not supported by the singleton. The singleton is using a feature of require-in-the-middle that intercepts everything, and I guess "everything" does not include absolute paths for require-in-the-middle. I'm not sure if that's a require-in-the-middle bug or not, but in its current state it's not working.
C
So
the
question
becomes:
is
that
okay,
the
the
workaround
would
be
that
the
AWS
Lambda
instrumentation
would
need
to
essentially
use
require
in
the
middle
directly,
probably
instead
of
using
this
instrumentation
wrapper,
we
may
also
be
able
to
patch
require
in
the
middle
so
that
in
the
future
we
could
use
these
absolute
paths,
but
I'm
not
sure
I
had
hoped
that
somebody
from
AWS
would
be
on
the
call
here
so
that
we
could
discuss
it,
but
it
seems
like
not.
C
Does
anybody
have
any
particular
opinions
here?
Is
it
obviously
AWS
Lambda
is
an
important
Target
for
us,
so
it's
kind
of
a
problem.
If
that
instrumentation
stops
working
or
any
ideas
for
workarounds
or
anything
like
that.
A
Yeah, so in my opinion, I don't think we should let waiting for the AWS Lambda issue to be solved hold up this PR too much. I think we could try to solve it in a second-phase PR, because I think merging this PR will help a lot of people that don't use the AWS Lambdas or are not in the AWS ecosystem.
A
Well, I'm not using it, but even so, the many people that are not using Lambdas would like to benefit from this change.
B
However, we are actually breaking everyone who is using any of the components in Lambda then, right? We would essentially make it so they, you know, cannot use either the AWS Lambda package or anything else that comes with it, because of the instrumentation package, likely. Or maybe, actually, you could keep on using the old version of the instrumentation library; would it still work?
C
It would still work, yeah. That's obviously a temporary solution, because we don't want to say AWS can never update their dependency.
C
And then the longer-term workaround would be to implement require-in-the-middle directly in the AWS Lambda instrumentation, rather than through the instrumentation wrapper.
C
So in the instrumentation wrapper, we could detect if something is an absolute path by just looking for the leading slash, and if the file requesting to be intercepted is an absolute path, we could add another wrapper for that file. So ideally this is just one wrapper.
C
This
would
that
particular
workaround
would
add
another
wrapper
for
every
absolute
file
required,
but
right
now,
there's
only
one
in
contrib
and
it's
just
for
Lambda,
and
we
could
document
that
performance
concern
say
if
you
use
an
absolute
path.
You
are
adding
overhead
to
every
single
require
statement
and
just
allow
that
to
happen.
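One small note on the leading-slash check discussed above: Node's standard library already provides this test, and it also covers Windows-style paths that a bare leading-slash comparison would miss. A hedged sketch (the function name is hypothetical):

```typescript
import * as path from "node:path";

// Sketch of the discussed check: decide whether an intercepted name is
// an absolute file path (as the Lambda instrumentation uses) rather
// than a bare module name like "express". path.isAbsolute handles both
// POSIX ("/var/task/index.js") and Windows ("C:\\task\\index.js") forms.
function needsDedicatedWrapper(requested: string): boolean {
  return path.isAbsolute(requested);
}
```

Anything that returns true here would get its own dedicated wrapper, with the documented caveat that each such wrapper adds overhead to every require.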
B
So that's... I agree with Wales. I think this could be merged.
C
Okay, I think I generally agree with merging it; we don't want a contrib package, necessarily, to block updates in the SDK. The biggest issue is that, I mean, this is a breaking change. The instrumentation package is released as 1.0, and it currently supports absolute paths, so releasing it without absolute-path support is a breaking change.
C
Oh, it is zero-dot-something, okay, so we could make a breaking change in here, I guess. I was thinking it was 1.0, and I was much more concerned about it then. When it's 0.x, I still think we shouldn't break things without cause, but this seems like a big enough win. I mean, this is affecting essentially every user that uses more than one instrumentation, which is, I mean, everyone, basically. So the impact, even on small applications... I've seen people saying it's adding tens of seconds to their startup time; on large applications...
C
It's
it's
minutes
which
is
really
not
an
acceptable
impact.
To
be
honest,.
C
Okay
I
agree:
I
also
want
to
look
at
the
that
I
I
have
an
approving
review
here,
but
I
did
it
before
all
the
changes
that
Ronald
just
alluded
to
so
I
also
want
to
review
those
before
merging
it,
but
in
general,
I
think
that
this
is
going
to
be
mergeable.
I
will
reach
out
to
the
AWS
folks
to
make
sure
that
they're
aware
of
what's
going
on
before
we
merge
it,
though.
A
Yeah, sorry for the delay on this here; I've been kind of M.I.A., work stuff. But I had a question. I've refactored the code per the feedback I got recently, but I just had one question: if I'm using the... is it called NoopSpanProcessor?
A
Okay, cool, yeah. If you scroll down to the bottom of the PR, there's a comment I mentioned. Yeah, if you click on the link, this code, you'll see how I'm using it, and I just want to make sure I'm using it correctly.
A
Basically,
there's
a
chance
that
there
won't
be
a
span
processor
but
might
have
a
no
Ops
fan
processor,
so
I
don't
want
to
call
it
register
on
it
on
the
trace
provider.
If
there's
a
no
op
spam
processor.
A
Well
there
there
is
a
chance
that,
in
inside
of
the
Tracer
provider
with
EnV
exporter,
I
know
it's
kind
of
a
long,
weird
name,
but
if
the
user
doesn't
want
to
use
any
EnV
exporters,
you
know
they
could
set
it
to
none
or
to
empty
empty
string.
Then
it
won't
actually
register
that
Trace
provider
with
EnV
exporter.
C
I don't remember how we've done that check in other places. It's complicated by the ProxyTracerProvider: it used to be that you could just check if the tracer provider was the NoopTracerProvider, but now we have the ProxyTracerProvider. It may be worth adding a method to the API to check if there is a registered tracer provider or not; that may be a better way to solve this.
C
My
concern
here
is
that
we're
kind
of,
depending
on
internal
functionality
of
the
API,
for
this
check
I
mean
checking
that
instance
of
probably
works.
C
But
if
there's
changes
in
the
API
and
the
future
to
the
like
the
the
internal
workings
of
the
API,
then
this
check
could
break
so
I.
Don't
necessarily
want
to
depend
on
that.
C
Of
a
kind
of
a
rambling
answer,
yeah
I
I
would
probably
prefer
to
add
a
method
to
the
API.
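The fragility being discussed can be illustrated with simplified stand-ins. All class names below are minimal mock-ups for illustration, not the real @opentelemetry/api implementations: an instanceof check against the no-op implementation works today, but only because it knows to unwrap the proxy's delegate first, which is exactly the internal detail that could change.

```typescript
// Simplified stand-ins to show why the instanceof check is fragile:
// the check must reach through the proxy to its delegate, an internal
// detail of the API rather than a public contract.
class NoopTracerProvider {}

class ProxyTracerProvider {
  private delegate: object = new NoopTracerProvider();
  setDelegate(d: object): void { this.delegate = d; }
  getDelegate(): object { return this.delegate; }
}

// Sketch of the check a consumer would have to write today. A method
// on the API itself (as proposed in the meeting) would hide this.
function isRealProviderRegistered(provider: object): boolean {
  const target =
    provider instanceof ProxyTracerProvider ? provider.getDelegate() : provider;
  return !(target instanceof NoopTracerProvider);
}
```

If the API ever changed how the proxy stores or exposes its delegate, this consumer-side check would silently break, which is the argument for an official "is anything registered" method instead.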
A
What I was getting at before is that, if I just called register without checking it, then a bunch of my other previous tests were failing, because it would try to set up the context, or the context propagation, I forget which it was, and it was starting to fail, yeah.
C
Got it. I have to look into this PR a little bit more, I think, so that I can understand better what's going on here, but I would expect this to work. I think we may want to just add, like, a comment here, a to-do, to improve this when there's a method in the API to check for registration. I think that's likely a better way to go about it; it just makes me a little bit nervous, depending on the internal functionality of the API.
C
I
wouldn't
block
this
PR
on
that,
since
this
isn't
an
experimental
package,
anyways
and
no
method
exists
on
the
API
to
do
that.
Currently,
so
you
have
to
work
with
what
you
have
for
now,
but
I
would
maybe
just
make
a
comment
to
improve
it
in
the
future
when
there
is
one.
B
Does no one else... actually, I have, like, a discussion point, perhaps. Okay, so we constantly have issues with the contrib repo's builds, because some package, like, decides to start failing on us for one reason or another, and often, I think, it's caused by the optimizations.
B
We
have
added
to
the
CI
to
skip
the
bills
that
that
like
skip
the
the
running
the
tests
and
and
builds
on
the
packages
that
are
not
changed
in
a
in
a
certain
PR,
so
we
have
added
some
optimizations
and
thus
sometimes
I've
noticed
that
that
some
issues
get
through
the
pr
process
and
actually
get
merged
and
somehow
start
fading
later
and
it's
it's.
B
Sometimes
we
have
like
50
packages
there
so
like
fixing
all
of
them
and
maintaining
between
me
and
Amir,
who
are
perhaps
but
the
most
active
maintainers
on
the
trip
is
just
you
know
too
much.
Basically
so
I
would
like
to
like
maybe
ask
for
any
suggestion
how
to
solve
something
like
this.
What
I
have
in
mind
is
to
temporarily,
basically,
if
we
notice
something
like
that
temporarily
turn
testing
and
releases
off
for
those
packages
until
the
issues
are
solved,
which
you
know
causes
another
set
of
issues
but
I'm
I'm
open
for
any
ideas.
B
On the other hand, I would like those optimizations to stay in place, because, like, 20 minutes is quite a long time for the CI to run anyway, and I suspect it's only growing; I think at this point it is close to 30 in most cases anyway. And usually it doesn't cause any issues, so I'm not pointing the finger at those optimizations per se, but it has definitely happened. Right now, for example, we have issues with the graphql and ioredis packages, I think, at least.
C
No, I... you know, I thought it was the unit tests. So I think the test-all-versions suite doesn't run on PRs.
B
Actually, this is another point to consider, and it actually brings me to another question as well. To be certain that, you know, no old versions break, we have basically politely asked any of the contributors or owners of any PRs to run test-all-versions for that specific package locally, which, you know, some people miss, or they report that they have done it but then change the PR and don't do it again. So it happens.
B
That's
understandable,
because
that
for
any
single
package
it
even
takes
you
know
sometimes
like
a
half
an
hour
or
so,
which
is
another
situation
where
those
slip
in
you
know,
because
in
the
main
now
it
would
fail,
but
we
have
merged
the
pr
already
so
one
way
to
fix
that
would
be
to
add,
like
a
label
that
any
anyone
could
add.
That
would
make
the
CI
to
run
digital
versions
for
that
package,
for
example,
so
I'm
wondering
whether
anyone
has
done
or
built
something
like
that
I
I.
B
Imagine
it
not
being
like
impossible,
and
it's
just
reading
the
reading
the
labels
and
changing
the
CLI
Arguments
for
learner.
But
if
there
is
any
like
a
like
a
more
clever
way
to
do
that
without
any
custom
scripts
or
if
anyone
else
has
has
built
something
like
that,
please
reach
out
to
me.
Maybe
you
can,
you
know
just
bump
some
ideas
off
of
each
other
to
to
make
this
work
better.
C
Yeah, nothing that I'm aware of, but labels may not be the best solution for that, because not everybody has permission to add labels.
B
I think at least a maintainer, like, someone who is a reviewer, for example, could see the PR, see that, okay, they are adding something to a specific package, and then manually... I mean, it doesn't have to be perfect; the first iteration doesn't have to be perfect, but that could at least make it possible, you know, for us to validate whether all of those tests run before the merge.
C
Yeah
so
I
mean,
if
that's
the
case,
we
could
have
I
mean
it
would
be
a
label
per
component
and
we
would
have
to
manually
add
those
labels
and
maintain
a
list
of
you
know
the
label
to
the
package
path
or
something
like
that.
But
we
could
have
like
the
changelog
label
in
the
main
repository
is
a
good
example.
You
can
have.
C
That's very similar to what the Collector was doing. If you look at the Dynatrace exporter, for example, they have, like, a label, exporter/dynatrace, and when it gets applied, it actually comments on the issue to ping the code owners and stuff like that. But we could also have it run a test-all-versions script.
B
Okay,
well
thanks
that
that
helps
a
bit
so,
but
regarding
basically
turning
desktop
temporary
for
packages
that
are
failing
our
main.
What
do
you
think
of
that?.
C
I
think
it
makes
sense,
I
wouldn't
want
to
release
packages
that
are
failing
tests,
so
the
easiest
way
I
can
think
of
to
do.
It
would
be
to
add
the
private
label
to
the
package
and
to
remove
it
from
the
release.
Please
configuration
removing
it
from
release.
Please
would
prevent
it
from
being
included
in
the
release.
Automation,
changelog,
stuff
and
adding
the
private
label
would
prevent
it
from
being
released
when
you
run
learn.publish,
which
is
done
by
the
CI.
C
Yeah, that's what I was wondering too. I think... let's see, look.
C
Learn
LS
private
we
can
see.
All
of
the
private
packages
are
actually
probably.
C
So
examples
are
we
running
tests
on
any
of
the
examples?
I,
don't
think
so.
I,
don't
think
they're
added
to
the
learner
repo
at
the
moment.
C
As
well
that's
an
example.
That's
an
example!
That's
an
example.
That's
an
example,
contribute
scripts
yeah,
so
right
now,
I,
don't
think
we
have
any
private
packages
that
are
being
tested
in
control,
so
it
should
be
safe.
C
You should be able to modify the test script in the package.json, where it says lerna run test; we should be able to just add --no-private, which should just skip them, and then you'd also want to do it in the CI.
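The effect of the --no-private filter can be sketched as a plain function over package.json data. This is an illustration of the filtering semantics, not lerna's actual implementation:

```typescript
// Rough sketch of what a --no-private filter does: packages whose
// package.json sets "private": true are excluded from the run (and,
// separately, lerna publish never publishes private packages).
interface PackageJson {
  name: string;
  private?: boolean;
}

function selectTestablePackages(pkgs: PackageJson[]): string[] {
  return pkgs.filter((p) => p.private !== true).map((p) => p.name);
}
```

So flipping a failing package to "private": true, as proposed above, pulls it out of both the test run and publishing in one step.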
B
Okay, cool, I'll do that, and I'll look into the Collector CI for ideas for the label magic. Thanks, thanks for your help.
C
I wonder if it's a... is that a compilation bug, or is it probably just the ordering of this OR logic?
C
Okay
seems
like
a
real
bug.
I
will
say
this
is
P2
because
it
causes
incorrect
instrumentation.
Is
there
anybody
that
wants
to
volunteer
for
this.
C
Thank you. And that was the only one; nice and quick in the main repo.
C
All
right,
let's
try
to
get
through
some
of
these
contrib
instrumentation
mongodb
cannot
find
name
documents.
So
it
looks
like
a
build
issue.
Private
node.js
service
install
the
mongodb
instrumentation
and
build
the
project.
C
Interesting. Do you know, if you compile code that uses Symbol, using TypeScript with the ES5 target, does it polyfill it or not?
A
That
I
have
not
tried
I
I
have
my
own
poly
pills
that
I
use
so
I.
Don't
think
it
probably
fills
it
because
it
ends
up
using
my
code.
So.
C
Interesting.
Okay,
if
that's
the
case,
then
that
might
mean
that
the
API
is
not
supporting
IE
anyways
right.
C
Okay,
I
will
look
into
that
after
the
call
here
for
now
I
a
lot
of
instrumentations
web,
the
other
Auto
instrumentations
that
it's
building
do.
We
know
if
these
are.
C
For now, I think I'm going to remove the bug label from this and add the enhancement label.
B
Well, yeah, I've looked into it a bit, but again, there are other problems with the graphql tests at this point. There are numerous graphql PRs and issues; I don't know which one this is.
C
Okay, yeah.
B
Basically
like
it
would
be
perfect
to
have
someone
who's
knowledgeable
in
graphql
to
look
into
that
yeah.
The.
C
Not
very
familiar
with
the
graphql
code:
I
can
leave
this
assigned
to
myself
and
and
look
into
it
when
I
have
time,
but
I'm
not
super
familiar
with
it,
but
somebody
has
to
do
it.
I
guess.
C
Last week this person was going to dig deeper. Looks like there's still no information here, so the information-requested label is still appropriate.
C
The alternative PR was also closed as stale, if I had to guess. Yeah, these are quite old. I remember this: there was a lot of discussion at the time as to whether or not this was even a bug, I think.
C
In the end, we agreed that we should change the behavior, but the implementation itself was never agreed on. I'm gonna leave this as a bug and assign it P2.
C
Because
it's
not
crashing
anything,
it's
just
causing
weird
instrumentation.
A
I believe that when you call connect, sometimes it doesn't connect immediately; it's postponed, and then you get, like, another span, like two spans for connect, and the duration and the status are not correct. I don't remember the exact details.
A
Yeah, I think when I investigated it, I found there's, like, an offline queue: when you post, like, a command to Redis, if it's not connected, then it puts it in a queue and then calls it again after it's connected, and then the traces don't look good.
A
Yeah, I need to look at it again. It's also two years old; maybe I'll look at it and get an idea.
C
Express
plugin
does
not
work
with
Express
async
errors,
change
the
handle
function,
location.
C
I mean, it's not really a bug, I guess. I'll leave the bug label and I'll add a P4 label here... P3, actually.
A
So
it's
now
running
I
cloned
it
and
got
all
the
web
worker
test
running
locally.
So
I've
contributed
to
your
PR
and
left
a
comment
as
well.