From YouTube: 2022-10-19 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
C: ...week, so I expect this meeting will be canceled. I will be unlikely to be able to attend. Is anyone here planning on attending KubeCon?

C: Okay, just me, I guess. So, Amir, if you want to run the meeting next week, great. Otherwise we can just cancel it.
B: Just an FYI, there have been some cases where your GitHub ID hasn't been transferred over to the voting system, probably because of case insensitivity.
C: If you have a capital letter in your GitHub username, you need to contact gerasi crawling, who can manually add you to the system.
C: The other caveat worth noting is that the election system we're using (I think it's called Helios) is a little bit confusing. After you click on the candidates you want and you click, you know, "Vote" or whatever, there's another screen, and you actually have to click "cast your vote" on that screen. Otherwise your vote will not be counted. Until you actually click the "cast vote" button and see that your vote is confirmed, your vote is not confirmed.
C: You will know for sure that it worked if you get an email from the voting system, so make sure to actually go through the full process. We've had a few people that have been confused by that.
A: Okay, sorry, I have a question, a quick one. Hey everyone. I'm not part of the OpenTelemetry contributors or anything like this, but is it okay for me to be a fly on the wall? I have just a little question about the time drift issue, and I wasn't sure if it's okay for me to join this meeting or if it's specifically for contributors only.

C: No, not at all.
C: Any questions on the election before we move on to the next topic here? All right. The first item on the agenda has been the first item for weeks now, and that is the metrics GA. There is a draft PR open to release the metrics GA; I encourage everybody to look at it. It is marked as a draft right now because the compilation is failing, and the compilation is failing because of the way that Lerna supports peer dependencies.
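A hypothetical illustration of the kind of ordering constraint being described (the package names and versions here are made up for the example, not taken from the actual release PR): a package that declares the API as a peer dependency cannot resolve or build against an API version that has not been published yet, so the API release has to land first.

```json
{
  "name": "@example/sdk-metrics",
  "version": "1.0.0",
  "peerDependencies": {
    "@example/api": ">=1.3.0"
  }
}
```

Until `@example/api@1.3.0` exists on the registry, installing and compiling `@example/sdk-metrics` in the release PR fails.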
C: The API package itself needs to be released first, so this one, it looks like, is also failing. So that's cool; I will look into that when the meeting ends. But this PR is a prerequisite for this one, and there is one issue here that somebody brought up as blocking and would like to get in before we release, so I added it to the description here.
C: But if there are any other issues that you feel should block the release, please comment them on the PR and I will add them to the list, if, obviously, myself and the other maintainers agree. Anybody have any questions here?
C: I don't know if wired Barr is on the meeting today or not. No, looks like he's not. He is going to be working on documentation tomorrow; they have a "20 day" or whatever it's called where he works, and he's going to spend that time on metrics documentation for the website. So that's the only real blocking thing that's just not done yet, and one quick bug. So I would encourage everybody to review the bug (it's a relatively straightforward one) and look over the release.
C: The after-GA section here is mostly unchanged from last week. We did merge the Prometheus resource support, so on the next release that will be available.
C: I do not have any update from last week on the high-resolution histogram. I'm not sure if Matt is on the call.

B: I'm here.
B: I think I've been reading through this back-and-forth and trying to at least kind of ramp up on background knowledge, so, yeah, I'm not at a point to start on it either. I at least hope to be able to review. But if I get to a point where I'm ready to start before you've picked it back up, I'll just get in touch with you and see where things stand.
C: Okay. Mark did create an issue for it, which describes not the mathematical complexities involved with that particular data structure, but the actual process for putting it into the JS SDK: which components are actually needed in order to add any instrument, not just this one. And that issue is marked as up for grabs. I did comment on that issue; anybody that wants to work on it can reach out to me, and I'm happy to share what I've worked on so far.
C: Otherwise I will probably get to it, I would assume not next week but the week after, because next week is KubeCon, but it will be sort of my next work item if nobody else takes it by then.
C: This one, I think, was from the person who spoke earlier. The update that I was going to give on the anchored clock timing issue was that we merged a PR which at least ensures that span durations are never negative.

C: So all it does is look at the start time of a span and the end time of a span, and if the end time is before the start time, it just moves it so that they're equal. It's sort of a quick fix, not necessarily the only long-term outcome of this, but a reasonable check that the SDK can do. Kesha, did you have any particular questions or input on this issue?
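The clamping check just described can be sketched roughly like this (the type matches how the JS SDK represents timestamps, but the function names are illustrative, not the actual SDK internals):

```typescript
// HrTime is the JS SDK's timestamp representation: [seconds, nanoseconds].
type HrTime = [number, number];

// Convert an HrTime to total nanoseconds for easy comparison.
function toNanos(t: HrTime): bigint {
  return BigInt(t[0]) * 1_000_000_000n + BigInt(t[1]);
}

// If the end time is before the start time, move the end time so that the
// two are equal: the span duration is clamped to zero, never negative.
function clampEndTime(start: HrTime, end: HrTime): HrTime {
  return toNanos(end) < toNanos(start) ? start : end;
}
```

With a check like this, a span whose monotonic end reading landed before its start reports a zero duration instead of a negative one.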
A: How precise would that be? And then I was wondering: do you still think it's a good idea, or do you have any other ideas how to proceed with this?
C: Yeah, right now, my current best idea (which I'm not really even convinced is the right way to go, but it's the best idea that I have at the moment) is to add a separate clock package which would provide a global anchored clock and essentially allow a time provider interface, the same way that we have tracer providers and meter providers, so that if you had a custom clock implementation it could be swapped out. But by default it would use the anchored clock, using the process start time, and potentially periodically update.
C: We would have to see whether that makes sense.
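A rough sketch of that idea, with every name hypothetical (no such package exists today); it mirrors how the global tracer provider and meter provider registrations work:

```typescript
// A minimal clock interface; implementations return milliseconds since
// the Unix epoch.
interface Clock {
  now(): number;
}

// Default implementation: anchor the monotonic performance timer to the
// wall clock once, at construction, so later readings stay monotonic but
// are still expressed as epoch time.
class AnchoredClock implements Clock {
  private readonly anchor = Date.now() - performance.now();
  now(): number {
    return this.anchor + performance.now();
  }
}

// Global registration, analogous to the global tracer provider:
// instrumentations would read the clock through this API rather than
// reaching into SDK internals.
let globalClock: Clock = new AnchoredClock();

function setGlobalClock(clock: Clock): void {
  globalClock = clock; // a custom implementation can be swapped in
}

function getGlobalClock(): Clock {
  return globalClock;
}
```

Periodically re-computing `anchor` would be the "periodically update" part, with the caveats about mid-trace updates discussed next.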
C: The biggest issue is that if you update the time, or the anchor, during a trace, you could end up with traces that don't make sense. As long as the updates don't happen too frequently and the time isn't swinging wildly, which I don't expect, that shouldn't be too big of a problem, but in some cases it could be. But that's my sort of hand-wavy idea at the moment. If you have input, I would encourage you to put it on the issue; the more feedback we can get, the better, and we really have not gotten a lot yet. So I want to make sure that we don't end up just implementing things and then they're broken and we have to change them later, in something as important as a clock.
C: I don't want to have a lot of thrash in that implementation. It should be done once and done correctly. So, yeah, I encourage input on that issue.
A: Yeah, that makes sense. And then, whenever this global package with the anchored clock is ready, the idea is to use it in all the other instrumentations, right? The document-load one from the contrib package, as well as Fetch, XHR, and all the others.
C: Yes, it would be used by the API and the instrumentation packages, and probably also by the SDK.
C: The biggest issue we have is that if we just implement it as an internal in the SDK, which is kind of what we have now, the instrumentations don't have any way to access that clock. And that's the root of the bugs that we had, or that we fixed: the time drift was fixed in the SDK, but all the instrumentations were still affected by it. So it hit any instrumentation which used a manual timestamp for span start or end, but not both, which is kind of a weird scenario.
C: I do realize, so, like: if the start time was calculated automatically but a manual end time was provided, the start time was corrected but the end time wasn't, and you could end up with spans that didn't make sense. And that did affect a couple of instrumentations. I don't know why those instrumentations chose to provide manual end times to the spans; I was not involved in writing any of the ones that were affected.
A: I might be wrong, but I think the reason for that is that, specifically with Fetch and XHRs, they are waiting for the PerformanceObserver to fetch some data on the events, for example when certain network-related things have happened. So they put a timeout to wait for the data to be ready, but they have to record the end time for the span when the span is finished.
C: Okay. And those end times come from the performance API, which is paused during... only certain browsers do it, and only in certain scenarios, but best I can tell, if your computer sleeps while the tab is open, that definitely will pause the performance timer in at least certain browsers.
C
There
may
be
other
situations
where
it
can
still
happen,
but
yeah
the
the
anchored
clock
was
resilient
to
that,
but
the
performance
timings
that
they
are
using
for
the
spin
and
are
not
so
that
that
was
the
root
of
the
problem
and
negative
spans
are
definitely
never
a
good
thing.
So
making
them
zero
is
moderately
better,
but
having
a
having
accurate
timings
is
obviously
the
end
goal.
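A small sketch of the drift mechanism being described (illustrative helpers, not the actual instrumentation code). A performance entry time is an offset from performance.timeOrigin, which is fixed when the page loads; if the monotonic timer pauses while the machine sleeps, converting through that stale origin lags the wall clock, while re-anchoring against Date.now() stays aligned:

```typescript
// Naive conversion: entry offset plus the fixed time origin. Correct at
// page load, but drifts if the monotonic timer pauses (e.g. during sleep).
function perfTimeToEpochMillis(entryTime: number): number {
  return performance.timeOrigin + entryTime;
}

// Anchored conversion: recompute the wall-clock anchor at the moment of
// use, so the result tracks Date.now() even after the timer has paused.
function anchoredEpochMillis(entryTime: number): number {
  const anchor = Date.now() - performance.now();
  return anchor + entryTime;
}
```

In a fresh process the two agree; they diverge only after the monotonic timer has paused, which is exactly the span-end bug described above.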
A: And sorry, just one more question for me to fully understand this. You mentioned that, or I guess my assumption is, that we do not want to change any of the APIs of the span itself, because we already, or sorry, you folks already kind of froze the API, and that's why it has to be an additional package available to all the other instrumentations. Is that the correct assumption?
C: Partially. We're typically hesitant to change those APIs, you're correct, because we want them to be as stable as possible. But the reason for it to be a separate package, in my opinion, has more to do with the fact that the times are relevant for signals other than tracing. So if you have a trace that has a particular timing and you have metrics coming in with the wrong timestamps, that's definitely a problem, or logs with incorrect timestamps.
C: The logs API would need to rely on the trace API, and so on, which already happens to some extent, and that I would like to minimize, because it is an eventual goal to split the API packages such that you can use one signal at a time without another signal, if you so choose, particularly in the browser, where bundle size is important.
C: Is that sufficiently addressed for today? Anybody have questions on that?
C: Okay, let's do some quick bug triage. I did look before the meeting; there isn't much, actually. This first one is a duplicate of the timing problem that we already have.
C: Looks like there are two new issues in contrib, both opened by Amir.
B: Yeah, so I have the example below. Instead of a JSON-stringified value, the db attribute value, you can see it in the example, so it explains itself. We just need to move away from the implementation that creates the array that way.
C: Okay. I think that there is something in the specification about the way that these are supposed to be serialized. I mean, JSON makes sense to me, but I would make sure to look at the specification.
B: If we want to populate the attribute with an array instead of a JSON-stringified value, then the values have to be of the same type, and I think that's not guaranteed.
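The constraint being described can be sketched like this (a hypothetical helper, not the contrib package's actual code): attribute values may be homogeneous arrays of primitives, so a mixed-type list of values can only be carried as a JSON string.

```typescript
// Attribute values allowed for spans: primitives or homogeneous arrays.
type AttrValue = string | number | boolean | string[] | number[] | boolean[];

// Use a real array when all values share a primitive type; otherwise
// fall back to JSON.stringify, since mixed-type arrays are not valid.
function toAttributeValue(values: unknown[]): AttrValue {
  if (values.every((v) => typeof v === "string")) return values as string[];
  if (values.every((v) => typeof v === "number")) return values as number[];
  if (values.every((v) => typeof v === "boolean")) return values as boolean[];
  return JSON.stringify(values);
}
```

For example, `[1, 2]` could stay an array, while `[1, "a"]` would have to become the string `'[1,"a"]'`.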
B: I know there are some suggestions to change that, but it's not done yet.
B: I can put it up for grabs; it's nothing big. I thought it would be a good idea to document it. And the other thing I've seen that's related: when you have a SQL query as text and you have the values as parameters, not embedded in the text.
C: Yeah, that makes sense. I think that we should omit the values. This person obviously just said that they think we should keep it, but sort of scrub the payload and try to set defaults. Yeah, I think they're way too likely to contain sensitive information, and they should not be captured by default, if possible.
C: Okay, I put P2 here since it's not crashing anything, but it is a fairly high priority. And that was it; there were only two. So, 35 minutes if anybody wants to bring anything up.