From YouTube: 2022-07-06 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
D: I'm interested in the spec SIG too. I'm really interested in seeing the HTTP JSON spec get to a stable place. I think that'll open up a lot of things for client-side JavaScript.
G: I use it not a lot, but fairly regularly. Where I live there's not a lot of good snowboarding, but we try to make it out into the mountains a few times a year.
G: So, first item: I will not be here in July.
G: I originally asked Mark to cover the meeting for me, but he won't be able to; he'll be on vacation that week. So if one of the other maintainers wants to cover the meeting, that would be great. If not, then we can cancel that one. Looks like not, right?
G: Do you have access to the maintainers calendar? There's a maintainers vacation calendar. I know, but I'll ask you after.
G: Okay, yeah, it looks like Rauno's also gone, so he won't be covering that meeting. Mark already can't, Amir is gone. I would say we should probably just cancel that one. We'll assess again as it gets closer, but yeah.
G: Just so people know, I'm going to move this down a little bit lower. The 1.4.0 core and the 0.30.0 experimental say released, but I wasn't able to get it done before the meeting since I was making lunch for myself. I'm going to merge that and release it immediately after the meeting, assuming nobody has any complaints about that.
G: Okay, the same metrics GA milestone that we talk about every week: nothing's been added to it, so that's nice! There are a few PRs open.
G: These two are both opened by Weyert, who I believe is not on the call today, and those are both waiting on him. But additional reviews are needed on those. And then there's one here to use the common attributes definition for the metrics API, which should be a relatively simple PR and fixes a handful of issues, so that's an important one, yeah. These are the two PRs that I pulled out last week, but we're waiting on the author of both of those. Anyone have any questions about any of these PRs?
G: Okay, I wanted to ask what people thought of the new bug triage flow. Is it working? Is it not working? Has it not changed anything for people, or is it too early to tell?
G: Personally, I think it's a little bit too early to tell. You know, I don't see any immediate problems with it, but if there are problems, I'm hoping to catch them early. I don't know if I'm bothering anyone with the new flow, or if it's getting in the way of anyone.
E: I think it's working very well compared to what we had before. I think it's a big improvement, yeah.
G: Rauno, you do most of the work on the contrib repo. Amir, you also do a lot of work on contrib. Do you think that we should apply this to the contrib repo, or should we let it go a few more weeks in the core repo before we make a decision on that?
A: I think we can continue as we are for now. I would really like to see if we could make it work so we didn't absolutely have to do the triage during those meetings synchronously, but I'm all for doing it for now, because I also see the benefit, and since it doesn't do any harm, it can only be positive.
A: But yeah, for now I would want to get the default process of contrib triaging a little more toned down, or basically defined in the first place, because I think it's a really process-less kind of process right now. It's just whatever any of the maintainers happen to do; there's no set process there.
G: Yeah, okay. I did add a couple of the labels already; I added the priority labels just so that they're there. But there's no official documentation pointing to the triage workflow or anything like that from the contrib repo. In the CONTRIBUTING markdown here, we should probably just point to the bug report documents in the core repo and then add the form in this one.
G: Okay, I added this in here just because I was going to close this issue, actually, but then I reopened it because it's got like 50 thumbs up. It's obviously a popular idea, if nothing else. There's not a lot of info here other than just a question: are we going to support React Native?
G: As far as I know, we can't support React Native. I think the SDK has to be written in either Swift or Java, or whatever the target environment is. And I remember somebody was doing some work on that; they were using the otel JS instrumentations, but with the otel Java SDK. I think it might even have been Nev, but he's not on the call... oh, he is on the call today. Was that you, I think?
G: Yeah, and I think it's sort of the same story as us, just a different path to the same result. We're working on so many other things. If we had finished the JavaScript SDK or whatever, I would say sure, we have time to look at this. But we haven't and we don't, and I think the client side would say the same thing.
A: Yeah, it's interesting. I don't think we have a precedent for closing a feature request, which is essentially what this is, because we lack the bandwidth. I feel like we have tons of stuff in the backlog that we don't intend to address in the near future, but we keep it in the backlog anyway, for anyone else to pick up, or just for the record for the future. So the reasoning behind closing the issue is a question for me.
H: Yeah, this is like a framework request thing. You can do it in JavaScript; I pasted in the chat the Application Insights version of that, and effectively it's probably the equivalent of an instrumentation.
H: If you look at this implementation, we don't do a lot. So depending on what level they're asking for, the hardest problem is the breaking changes with version updates; I think they're at about 0.64 at this point in time. It's the same with React: you go from React 17 to 18, and we don't support 18 yet.
H: Yeah, the actual runtime. Sorry, so we have Angular, React, React Native, and we've got a request for Vue as well. We can't handle Vue, we don't have the bandwidth, and even for this one, the person who originally created it isn't with the company anymore. So we struggle to keep it alive, but we haven't changed the code in ages.
G: So I guess I'm just not familiar enough with React Native here, but how do you handle this: there are a lot of APIs where there's a Node API and a web API, and we already have separate code paths to deal with those. Would React Native usually need a separate code path, or do the Node APIs for the most part work? Do you know?
H: So anything that's specific to an environment is problematic. The way we handle that in Application Insights is that we have nothing in the core path that's specific to an environment, and then you have plugins you plug on top, which is different to otel.
G: That's right. I guess the decision we have to make is: what is the issue list? What is the backlog? What does it represent? Does it represent things that we're planning on tackling in the near future, or does it represent a list of all things that need to be done?
G: Maybe, instead of closing the issue, we could apply a label that says: we know this issue is open, but we're not working on it right now. We could make a label, like a backlog label, basically.
A: Yeah. I guess what I'm struggling with is building a decision tree in my mind of when the ticket should actually be closed. And where we land with that doesn't really matter in the end, because issues can be reopened. I can totally see a process where we liberally close issues as "won't do for now," but then, if they become relevant again, we can reopen them.
A: That's also fine, and I also think there is value in having a short list of open issues, so we can actually focus better. So either way we go is fine by me. What I was struggling with is the criteria: why this one? Because I've never seen us close an issue that, yeah, seems appropriate for JavaScript to at least do a partial implementation of, and yet we still close it.
G: I guess, though, your questions are valid, and I don't want to close it arbitrarily and have the person that opened it feel like: why did mine get chosen to be the one that got closed? So maybe, instead, what we should do is create a document like the bug triage document. We should create a feature request lifecycle document, and maybe, instead of marking or closing issues that we don't have bandwidth for...
G: Maybe we should start working with something like a project or a milestone for the next release. Right? Like, we could pick 10 issues and say these are all the 1.5 issues, and when they're completed we will release 1.5, and then we'll pick 10 more issues for 1.6.
A: And with this issue in particular, since I think it's the most upvoted issue I've ever seen on any of the JS repos, it's weird to start here by closing issues. It's weird if we set a rule that says: if we know that we have no bandwidth for dealing with a certain issue, then we close it.
G: With popular issues, yeah, I get that. Okay, I just made a note to work on a feature lifecycle doc. After I get the release out this afternoon, I'll work on that, because I think we don't want to start closing issues until we have a policy, so that when someone complains, we can say: this is the policy.
J: Yeah, just to add on to that: if anything, if it was across the board, like "hey, anything older than January 1st, 2021, we're closing everything," then it would feel less like something was picked out, as opposed to there still being older and newer issues. So even if we're not sure of an exact cutoff: close everything before a certain date, and then either selectively reopen or create new issues, depending on popularity or feedback on what seems important to the community.
J: Yeah, almost the opposite, because you have the problem no matter what, right? You want to set expectations, which is part of what you're trying to do. I want to set an expectation that we won't be looking at this, but it's also likely you're not looking at the other 50 or 100 before it either, potentially.
J: So if you do a blanket close, then at least it's just: everything goes. And if it's a matter of bandwidth, we've tried going through open bugs a few times, and it takes a while. We have the same problem with our repos at Honeycomb too. So I don't know that there's an exact, perfect answer, but that was one thing we sort of did: anything older than this date gets closed.
G: Yeah, and this one is relatively old. I mean, we have the stale bot that does go through the issues, but we have so many issues that the stale bot actually gets rate limited, which at the time we decided was fine because it slows things down.
G: We don't want to just be slammed with 200 stale issues all in one run, so I think it only does like 25 per day or something like that. But the whole point of the stale bot was to handle this, and then every time something gets marked as stale, we end up just saying "this isn't stale" and removing the stale label. I don't know.
A: But maybe we shouldn't stop doing it, basically, because that's exactly what this situation is about as well, right? I mean, it's fine to have that issue there as long as someone is interested in it, and if there is any discussion it should not be stale. But some other issues may have been marked stale and then right away unmarked, because they are still actually relevant in a way for some people.
H: And you can also set the stale bot, or bots, up to say: if an issue has certain labels on it, it never goes stale. We have, I think, three or four labels where, if the label is on it, it'll never go stale. But we also have it so that nothing happens for a year before we mark it as stale, and then 30 days later we'll close it.
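The setup described above, exempt labels plus a long stale window, maps directly onto the options of the actions/stale GitHub Action. A sketch only; the option names are from actions/stale, but the label names and schedule here are hypothetical:

```yaml
# Sketch of a stale-bot configuration matching the behavior described above.
name: Close stale issues
on:
  schedule:
    - cron: "30 1 * * *" # once per day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v5
        with:
          days-before-stale: 365 # nothing happens for a year
          days-before-close: 30  # then 30 days later it is closed
          exempt-issue-labels: "never-stale,backlog" # these never go stale
          operations-per-run: 25 # cap per run to avoid rate limiting
```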
G: I think the takeaway from this is that, instead of picking on this specific issue, we should create a policy. That's the takeaway I'm getting, and the particulars of the policy are maybe up for debate, but before we start working on anything, we need to make sure that we are doing it in a consistent way.
G: Okay, this next one. There is a sort of old PR that I opened in the API repo. By "sort of old," I mean pretty old. It was resurrected somewhat recently by a feature request from the contrib repo. I can't remember which instrumentation it is, but one of the instrumentations depends on these new context methods in order to work. Rauno, do you remember?
G: Oh, it's this one, undici. So for various technical reasons, they are waiting on this PR, or on this feature, and they would prefer to use the diagnostics channel instead of the OpenTelemetry APIs, which I totally get. If I was maintaining another library, I probably would too. But because of that, they need a different way to manage context, where they don't have to wrap everything in callbacks.
G: So mostly I just want to bring this to everybody's attention so that you can look at it. It currently depends on an interpretation of an optional feature in the specification, but this is actually the way that most other SDKs have been handling context. Since the beginning, we use the `with` method, pass a callback, and activate context that way. But if you look at, say, Java or .NET, I believe they're using attach and detach.
B: If people want it... I just don't think it will be useful for the problems that people plan to solve with it. And I also think that the term "context" that we use is not very clear. We need to define it well, because it can be very confusing to know, when you activate a context, what will be affected by it. Like, if you call an async function and then it returns, is the context still valid? If the callback is called, is the context valid?
B: There are a lot of edge cases, and it's really confusing. I think we could benefit from summarizing a few use cases and explaining how it will work in each one, because we just use this term, but it doesn't mean anything. It's not documented anywhere.
G: Okay, so I did see your comments on that, and we changed it everywhere to use "current execution," which is the term used by the Node documentation. And since at least the AsyncLocalStorage context manager is just a thin wrapper around AsyncLocalStorage, the documentation for that applies directly to what we're doing. The attach function is essentially just `enterWith`; it won't do anything else. And then the async hooks context manager will need to be modified to mimic that behavior, but that's essentially what we're doing.
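The distinction discussed above, where `with` activates a context only for the duration of a callback while attach is essentially `enterWith`, can be illustrated with Node's AsyncLocalStorage directly. This is a minimal sketch using plain Node APIs, not the actual OpenTelemetry context manager:

```typescript
import { AsyncLocalStorage } from "async_hooks";

// Stand-in context store; the real context manager wraps AsyncLocalStorage similarly.
const storage = new AsyncLocalStorage<Map<string, string>>();

// Callback style: the context is only active inside the callback,
// and the previous context is restored when the callback returns.
function withContext<T>(ctx: Map<string, string>, fn: () => T): T {
  return storage.run(ctx, fn);
}

// Attach style: the context stays active for the rest of the current
// execution, with no callback nesting required.
function attachContext(ctx: Map<string, string>): void {
  storage.enterWith(ctx);
}

const ctx = new Map([["span", "abc123"]]);

const inside = withContext(ctx, () => storage.getStore()?.get("span"));
const after = storage.getStore(); // undefined: `run` restored the outer (empty) store
attachContext(ctx);
const attached = storage.getStore()?.get("span");

console.log(inside, after, attached); // abc123 undefined abc123
```

A detach counterpart would restore a previously captured store; mimicking that restore behavior is what the async hooks context manager would need to add.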
B: For instrumentation that is correct, but I think people wanted to use it differently: they wanted to add something to the baggage and then let it stay in the baggage until the end of the transaction, and they imagined that they could just use this context for that.
G: Yeah, and that's true, so it does not solve that problem. But it does solve the undici problem, where their architecture is fundamentally not nested callbacks, so there is no...
G: Okay, the baggage one has also come up recently; someone internally at Dynatrace was asking me about it. I believe the only way to solve that would be to make baggage mutable, but the specification specifically says that it's not mutable.
K: No questions. I was just going to say Ruby added those as well. It uses the `with` method, or an analog to the `with` method, as the standard thing, but attach and detach were added later for similar reasons. So if you want to look at the implementation, feel free to, though I know context works very differently between the languages.
G: Yeah, it definitely does. What has the feedback generally been in Ruby after adding attach and detach? Have people been confused by it? Because that's my primary concern: that people will see both methods and not know which one to use. I added fairly strong wording in the documentation that says if you don't have strong reasons to use attach and detach, you should just use `with`. But I don't know if it will be confusing to users when to use attach and detach and exactly how to do it. It's a tough thing to document.
K: Yeah, I haven't seen any feedback either way on it. It's hard to get into trouble with `with`; it's very easy to get into trouble with attach and detach.
K: You have to make sure that you pair your attaches and detaches properly and that your detaches are always reachable, otherwise there are problems. But I think at least Ruby's detach method tries to detect whether your detach matches the attach you think it does, and at least returns true or false depending on whether they were actually matched, so that there is a way to tell if you mess things up.
K: I think, for the most part, it's people who are pretty heavy contributors to OpenTelemetry Ruby that are using these APIs at this point in time, and they generally understand how they work. But I can definitely see the more casual user being somewhat confused; they haven't spoken up, though.
G: Okay, the next one is also an API pull request, for getting the current span.
G
This
is
essentially
just
a
wrapper
around
existing
apis,
so
instead
of
calling
like
active
context
and
then
getting
the
span
from
that
which
is
fairly
obtuse
and
annoying
to
do-
and
I
think
confusing
for
users,
this
adds
a
helper
method
to
just
get
the
span
from
the
current
context
if
it
exists,
I
think
that
it's
a
a
good
addition,
I
think
it'll
be
easy
to
maintain
long
term
and
is
very
unconfusing,
probably
less
confusing
than
what
we
currently
have.
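The helper being proposed is just sugar over the existing two-step lookup. A rough sketch of the shape, with stand-in names rather than the real @opentelemetry/api identifiers:

```typescript
// Stand-in names only; the real API stores the span on the active Context
// under a private key and combines the two existing calls into one helper.
interface Span {
  name: string;
}
type Context = ReadonlyMap<symbol, unknown>;

const SPAN_KEY = Symbol("otel-span");

// Stand-in for the currently active context.
const activeCtx: Context = new Map<symbol, unknown>([[SPAN_KEY, { name: "GET /users" }]]);

function active(): Context {
  return activeCtx;
}

function getSpan(ctx: Context): Span | undefined {
  return ctx.get(SPAN_KEY) as Span | undefined;
}

// The proposed convenience wrapper: one call instead of two.
function getActiveSpan(): Span | undefined {
  return getSpan(active());
}

console.log(getActiveSpan()?.name); // "GET /users"
```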
G: So it's just in need of reviews, yeah. I suggest that people please go review this. I've spoken to Philip about it; I support it, I think it's a good idea. If anyone thinks it's a bad idea, now is the time to speak up, but other than that, please just go review it. Anyone have questions before I move on? I think that's a relatively straightforward one.
E: Right, so I opened this PR to move the views registration to the meter provider constructor. For those not familiar: in metrics, we had this addView method on the meter provider which, when called after an instrument is created, would not apply the view; you would have to call it beforehand, as in most SDKs.
E
It
is
a
little
bit
big,
mostly
it's
just
updating
tests
in
there,
but
yeah,
I'm
basically
just
looking
for
opinions.
I
do
realize
that
this
is
quite
late
in
the
game
and
that
it
changes
a
lot
for
users
who
have
been
using
views
before,
but
I
think
it
is
important
that
we
look
at
it
now.
I
have
seen
people
on
on
slack
run
into
exactly
that
issue
where
they
were
confused
about
the
view
not
being
applied
yeah,
and
we
can't
really.
E
If
we
have
this
add
view
method
there,
because
that
would
break
the
that
would
be
a
breaking
change
for
existing
users
and
I
think
just
applying
a
documentation
fix
for
this
would
also
be
a
not
an
ideal
solution,
as
users
might
yeah
just
see
the
add
view
method
expected
to
work
and
then
it
doesn't,
even
even
though
it
is
written
in
the
documentation.
I
guess
a
lot
of
people
would
not
not
look
that
up
and
not
see
it
so
yeah.
E: Right, so now it is just moving it to the constructor. And I have moved a lot of the logic that checked whether the view is spec-compliant into this View class, which will now throw when the view is constructed, as that was the logical thing to do: one might create views beforehand, and then the error will be associated with that line instead of some line in the meter provider.
G: Okay, yeah, I agree with removing the addView. I was only asking because, if you had said you were keeping both, I was going to suggest removing it just so that it's not confusing to have both things there. And if we kept it, you'd still have the same documentation and confusion issue where people expect it to apply later.
E: Yeah. When I saw the user bring it up in Slack, I could immediately relate, because it looks like the view would just be applied to existing instruments, even though it isn't, and I can definitely see a lot of people running into the same issue.
G: Yeah, Legendecas isn't here. He tends to be the unofficial metrics maintainer. Has he weighed in on this yet? I didn't look.
E: If the meter provider is created with the views, we either have to allow dynamic reconfiguration and make it possible to use it after the instruments are created, or I think we have to somehow enforce that users do it on creation. And I think the trade-off is worth it to save everybody confusion when it's actually out and people are using it. Yeah, I agree.
G: Okay, well, for now it doesn't sound like people have immediate opinions, but please take a look at that. It is a fairly important change; whether we decide to do it or not, it's an important decision. So please take a look when you have time.
G: I'm going to skip this stuff for now, mostly because I looked at the bug assignment stuff earlier and it's already done, and I'd rather cover all the topics. If we don't have time for triage, that's fine; I can do that asynchronously.
H: Yeah, it's still progressing, a lot slower than I'd like. I do have a working version, all in local branches at the moment, where it creates a common merge master that effectively just grabs the JS and the API repos and puts them into subfolders so that I have full history.
H: I've got it going back to effectively the first versions, and I'm currently testing upgrading to the next versions, because I want this to just run in the background, pull it in, and create PRs. That's what's taking a bit of time. I don't want to just drag it in once and then go back to manual mode.
K: For me, there's this issue about using short keys for OTLP JSON. You can see there are some examples; there are some tests that show the different stats for the different representations. But basically it would be replacing these full names with one- to two-character names, and I think they can do this using the proto JSON name option, but it would be a pretty big breaking change from what we currently have.
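The mechanism under discussion is a fixed lookup table applied to the exported JSON payload. A sketch of the approach, using made-up short keys rather than the ones proposed in the issue:

```typescript
// Illustrative only: these short keys are invented for the sketch,
// not the mapping proposed in the issue.
const SHORT_KEYS: Record<string, string> = {
  stringValue: "sv",
  attributes: "at",
  resource: "rs",
};

// Recursively replace known long key names with their short forms.
function shortenKeys(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(shortenKeys);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[SHORT_KEYS[k] ?? k] = shortenKeys(v);
    }
    return out;
  }
  return value;
}

const payload = { resource: { attributes: [{ value: { stringValue: "x" } }] } };
console.log(JSON.stringify(shortenKeys(payload)));
// {"rs":{"at":[{"value":{"sv":"x"}}]}}
```

Doing this as a recursive walk at export time is exactly the serialization-time cost discussed later in the call; baking the mapping into a generated marshaller would avoid the extra pass.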
K: I just wanted to call the SIG's attention to the fact that this is out there, and also to get some opinions on the benefits and drawbacks of this approach. My initial thought was: why don't you just gzip, use compression, and spare everybody the pain? But it sounds like some browsers have compression issues and they might want to use this instead of compression. I'm not a browser expert, though. I know we have some.
H: Yeah, it's probably a case of browsers decompressing inbound but not compressing outbound. So if you want to compress outbound traffic, it's another package you have to include, which increases your payload as well as the CPU involved, because it's all JavaScript, basically, not native. So from a browser perspective, using compression is not really an option unless you're in a native environment like Electron or something like that, where it's not free, but it's better.
H: We talk about this in the Client/RUM SIG as well. There are advantages to doing this, and when I was working on identity, we did do this in an automatic fashion. But you have to have lookup tables, and again, lookup tables add payload size, same problem. The advantages depend on when you apply the short keys. I think in the PR there's a comment from Christian talking about potentially using a numeric value rather than the short key value.
G: Yeah, I see. So I assume this lookup table is then encoded directly in... well, right now we use the proto files directly, I suppose, so it doesn't matter. But if we were to use statically generated code, it would build a lookup table. I'm not familiar with this mechanism, so my first question was going to be: how are the key names decided on? Is this an automated process? But the answer is no, it's a lookup table.
H: Yeah, it can't be an automated process. It worked as an automated process for our system because it was effectively versioned: your version X that got deployed had a certain set of small keys. That wouldn't work for this as an SDK, so it would have to be fixed.
G: Yeah, interesting. So he's comparing this, the current state, to this in the table above, but this one is actually longer because it contains all these underscores. That doesn't matter to me too much right now, though. I'm not at all surprised that the compressed versions are mostly unchanged.
G
I'm
very
surprised
at
how
little
benefit
you
know
it's.
It's
still
a
big
benefit,
30
50
whatever,
but
I
would
have
expected
it
to
be
a
lot
more.
I'm
surprised
at
how
little
it
is.
It
looks
like
yuri
put
a
thumbs
down
on
the
with
no
comments
just
a
thumbs
down.
H: It's mixed; there are trade-offs. Realistically, with the smaller keys, for example: if you do end up zipping it, the smaller keys are actually less compressible than the longer keys, because there's less that gets repeated, right?
H: It depends where it's done. If it's done at the point of setting the attribute, so when you call setAttribute with your http.whatever and internally that consults the lookup table, that's not too bad. Because the last thing you want to do is this conversion at the point of serialization; the CPU cost there is just stupid, at that level.
G: I'm quite sure this is done at serialization time. I mean, obviously you could always do it earlier, but that would require SDK changes, not exporter changes. This proposal, to me, reads as something the exporter does, and in order to have it done at setting time, it would have to be something the SDK does.
H: In Application Insights we have the ability, if you give us a key with a dot in it, to unpack that and effectively create nested objects, so effectively you have to check every single key, whether or not it has dots.
H
So
it
is
significant,
so
iterating
over
keys
at
the
point
of
sterilization,
is
a
very
cpu
intensive
task,
so
it
you
know
it
can
take
it
from
being
milli
seconds
to
tens
of
milliseconds
to
serialize
and
if
you're,
in
an
app
that
has
a
lot
of
ui
interactive.
You
know
pausing
the
ui,
for
that
amount
of
time
is
noticeable.
K: I read it the same way. I was going to say, before going forward with this, you would almost want some browser benchmarks to see what serializing an actual export request would look like using the short keys.
H: It depends how the serialization is done. I haven't looked at it, but if everything right now is effectively creating the hierarchy at serialization, then whether you're replacing with the long name, the short name, or an integer, it's going to be the same; it's not going to have any effect at all.
G: Right, no. So this is like a default marshaller: when you convert proto to JSON, you can either do camel case or snake case, they're both allowed, and I think camel case is the default. But these translations on the sending and receiving end are done automatically, so you don't need a different name. If you look at the keys, you have string_value, and it automatically becomes the camel-case stringValue. You don't need a separate lookup table.
A: Yeah, but I think it's probably cheaper to use a lookup table for changing the casing on a very small set of keys, isn't it? Because the alternative would be to parse the whole string, change cases, and concatenate strings. Or am I heading in the wrong direction altogether?
A: Altogether, I feel like the point being made is that the default for JSON is to have a lookup table anyway. He's probably more knowledgeable in the area, but it's probably done that way either way: because of the casing change, you have to have a lookup table of some sort, since it's the cheapest way to achieve the casing changes. And transforming that into shorter keys would just give us an additional small benefit.
G: Yeah, so instead of converting this name... I don't know, I haven't looked into the statically generated code to see whether it uses a lookup table to do the camel casing or whether it's actually processing the name every time.
H: Well, in terms of proto itself, it only sends the number; it doesn't send the name.
G: In any case, the call is out of time, so we can talk about this more next week. We can give people time to look at it; comment on the issue if you have an opinion. I'm sorry, I don't know... Pervy, is that how you pronounce your name?
D: Yeah, that's me. We can talk about this next week, that's okay, if we're out of time. I was just bringing up the bundler issues, with using some of the exporters with bundlers, because I think there was a comment that we might discuss it today, so I just put it on the agenda.
G: Yeah, I guess I don't think this is a bug. The reason I wanted to talk about it is because...
D: Yeah, I'm wondering, because I think it's not even webpack. The thing I've seen around more is folks using TypeScript with Node, because you run into the same thing, and I think Node and TypeScript is a fairly common stack. So even if it's not a bug, it might be useful to have documentation talking about this somewhere, or like a troubleshooting guide.
G: This works with a default TypeScript setup when you have node_modules, because the proto files are nested in the node_modules folder the same way that many other files would be. This should work with a TypeScript stack, and we have it working with TypeScript stacks internally; that's not been a problem.
G: Okay, thank you everybody for your time. Sorry we went over by a little bit, and I will speak to you next week.