From YouTube: 2022-07-05 meeting
Description
Instrumentation: Messaging
B
Next... next to next Friday. Next Friday, okay.
A
Now, you're located in the US, yeah?
C
Yeah, I'm in Puget Sound, so, Seattle. Okay, yeah.
A
We
had
our
our
little
city
here,
allows
fireworks,
and
so,
like
our
neighborhood,
is
like
a
like
a
war
zone
yep,
it's
a
little
bit
crazy.
B
So now, on your nested attributes: does it look like it's progressing? It's not clear from the PR comments.
C
I think Armin from Dynatrace has added a lot of comments and is pushing back quite heavily on making it generic. So I'm going to have to look at that and reword it, probably delicately.
C
So
far,
he's
told
me
to
go
and
raise
two
additional
o
tips
on
some
of
the
changes
I've
got
in
there,
which
is
setting
a
nested
depth
attribute
property.
I
can't
remember
what
the
other
one
was.
Let
me
briefly
look
at
the
comments
this
morning.
C
Exactly, which is why I wanted it the way I wanted it. Effectively, what I'm trying to do is get it in there and say: well, this is what this is, this is how it all works, this is how it happens, and why I originally got into the comments about how signals should implement it. Which is why I don't want to add all this stuff about how it currently only applies to logs. I can probably add a table saying how it's implemented, but they need to go and implement it.
A
Okay, I had a question about that. If the SDKs support nested attributes right now, the JSON serialization will add all the additional things: it will have the key and value properties, and then the value is going to have the stringValue or intValue property. So that still seems like a lot of overhead.
C
Well, JSON still has a lot of overhead, but in the case where you only have one value in a nested attribute, it's debatable: you actually end up with more bytes. But when you have more than one attribute in a nested attribute, that's when you start getting the saving, depending on the length of the key that you've got.
A
But okay, what do you mean? By removing the namespace key? Like, for example, for exception, you would not be repeating "exception", correct. But you'll still be repeating the value, stringValue and so on.
C
Yeah, but the actual string value... the key name now doesn't have "exception" in it. So while you've still got the curly bracket and a comma on the outside, you've still got a three- or four-character additional overhead. As long as the name is longer than that, which "exception" is, you end up with savings. Not much, but it all adds up a little bit.
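The byte trade-off being described can be sketched with a quick check. The payloads here are illustrative plain JSON, not the actual OTLP wire format:

```javascript
// Quick check of the overhead discussed above: repeating a long namespace
// prefix on every key versus nesting under the namespace once.
// Illustrative payloads only, not the actual OTLP JSON wire format.
const flat = {
  "exception.type": "TypeError",
  "exception.message": "x is undefined",
  "exception.stacktrace": "at main (app.js:1:1)",
};
const nested = {
  exception: {
    type: "TypeError",
    message: "x is undefined",
    stacktrace: "at main (app.js:1:1)",
  },
};
const flatBytes = JSON.stringify(flat).length;
const nestedBytes = JSON.stringify(nested).length;
// With three keys sharing the 10-character "exception." prefix, nesting
// saves bytes despite the extra braces; with a single key it would not.
console.log({ flatBytes, nestedBytes, saved: flatBytes - nestedBytes });
```

With a single key under the namespace, the wrapper costs more than the repeated prefix, which matches the "debatable for one value" point above.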
A
Yeah, it would be. I'm just trying to figure out what the ideal savings would be, and I think there's a discussion in the ephemeral resources issue that Ted opened, yeah.
C
Short keys are a solution, but then you have the overhead of the code on the client that you need to deal with as well, and that will only be for the defined keys, not for the custom keys.
A
Maybe
like
we
can,
we
can
do
something
about
making
the
json
json
serialization
serialization.
You
know
better
more
streamlined.
A
The way I understand what he meant is that it's about the size of the payload. In the JS SDK there's the transformer right now, which takes the span or the log and then generates this structure that has just the value and stringValue keys and all that. And the way I understood Tigran's comment was that maybe we can come up with some sort of different representation of the JSON.
C
Yeah
and
that's
if
it's
still
going
to
going
to
be
jason
at
the
end
of
the
day,
that's
nested
attributes.
If
you
want
to
use
some
other
protocol
like
message
pack,
that's
binary,
but
then
you
have
all
the
cpu
overhead
of
actually
well.
You've
got
the
payload
to
be
able
to
generate
the
binary,
and
then
you've
got
the
cpu
involved
in
doing
that,
say
that
there
is
no
faster
way
than
have
an
object
and
say
json
stringify,
and
then
you
the
only
way
you
manage.
That
is
the
object
you
pass
into.
C
Stringify
is
already
optimized
for
different
ways
like
v8
has,
if
you
have
an
object,
you've
constructed
where
you've
added
keys
dynamically
and
there's
more
than
20
keys,
it's
really
slow,
like
really
really
slow,
unless
you
happen
to
re-optimize
so
effectively
recreate
the
object
which
I
I
have
links
in
application
insights
to
do
this.
So
in
fact
we
we
optimize
the
object
before
we
serialize
it,
because
when
there's
more
than
20
dynamic
keys,
even
though
the
overhead
of
doing
the
optimization
is
bad,
it's
actually
less
than
the
what
happens
during
stratification.
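The "re-optimize before stringify" idea mentioned here can be sketched as rebuilding the object in one pass before serializing it. This is an illustrative sketch, not Application Insights' actual code, and the 20-key threshold is the figure quoted in the discussion:

```javascript
// Sketch of "recreate the object before JSON.stringify": an object that
// accumulated many dynamic keys is rebuilt in a single pass, giving the
// engine a fresh, predictable shape before serialization.
// Illustrative only; not Application Insights' implementation.
function reoptimize(obj) {
  const out = {};
  for (const key of Object.keys(obj)) {
    out[key] = obj[key]; // copy in one stable, predictable order
  }
  return out;
}

// Build an object the "slow" way: keys added one at a time.
const event = {};
for (let i = 0; i < 25; i++) {
  event["attr" + i] = i; // more than 20 dynamically added keys
}

// The rebuilt object serializes to exactly the same JSON.
const payload = JSON.stringify(reoptimize(event));
```

The copy preserves key order, so the serialized output is unchanged; only the object's internal representation differs.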
C
Which is what Application Insights originally did. I now have a config switch to turn that off and get that saving back, because it was doing it on every single event, regardless of whether the customer actually needed it or not. So yeah, I've played around a lot with serialization, to support Office; they have very stringent requirements on how long it takes, and it was painful.
A
So
I
don't
you
I
think
you
know.
Obviously
I
don't,
I
think
it's
you
know
I
like
introduced
like
going
going
forward
with
proposing,
like
the
the
the
nested
attributes,
makes
perfect
sense,
but
I'm
I
guess
my
my
thinking
is
like
it's
probably
not
gonna
be
enough.
It
may
be
enough.
So
what
else
can
we
do.
C
The only other way is to not have JSON going out the door, and even then, with things like MessagePack, the fact that you have a JSON object... MessagePack really is just another form of JSON serialization, where it uses binary to represent things and remove the duplicated keys. And having nested attributes is actually better for it, because if you've got non-nested attributes and the key is always different, then it has nothing to compress.
C
Where,
if
you
have
like
a
nested
attribute-
and
you
have
like
the
key
value-
is
repeated
at
more
than
one
level,
then
it
gets
replaced
with
a
yeah.
It's
it's
sort
of
like
gzip.
I
don't
know
if
you've
ever
written
compression
yeah.
So
it's
sort
of
like
that,
except
it's
structured
based
on
on
predefined
values,
of
defining
that
this
is
a
sub-object.
This
is
a
an
array
or
something
like
that,
but
if
you're
only
doing
short
messages,
it's
actually
worse.
E
Maybe a stupid question, but how can we ensure that the endpoint supports this compression? I mean, what if the endpoint doesn't support a certain compression algorithm?
C
That's on the application developer, and effectively a negotiation with their third party: who's the vendor they're sending the data to, and which exporter they want to use. Effectively, that's what it's going to boil down to.
C
If,
if
the
vendor
supports,
you
know,
otlp
version
one,
only
then
they're
gonna
have
to
use
an
exporter,
that's
hotel,
the
tpu
version
one.
If
they've
got
one
that
supports
jason,
then
they
have
jason.
I've
got
one
supports
jason
with
compression
and
they
want
to
have
the
compression
loaded
in
the
light,
but
then
they
use
that
one.
C
Yeah, so the compression algorithm... during unload, effectively, you've got to do everything in a single JavaScript execution cycle. So there are libraries around that can do the compression synchronously; effectively it has to do synchronous compression, then pack it up and send it off, and sendBeacon and fetch with keepalive both support that.
C
They are in the process of implementing compression on their outbound platform, because they run in Electron, so they can actually afford the slightly extra overhead of running it by default. I did not expose the ability to compress the outbound request with sendBeacon and fetch, but that is now a capability. And there are also CORS requirements: when you're sending out a compressed payload you have to send a content type saying it's compressed, and that is not a CORS-safe header, so for a CORS request a preflight OPTIONS request is required.
C
I guess an update on the sandbox, which has been most of my focus in the last week. I've now got it so that I can merge the API and JS repos in their entirety: push them into subfolders and merge them.
C
I've
got
that
to
the
point
where
I
can
I'm
doing
that
on
a
specific
version,
and
then
I'm
now
currently
now
trying
to
effectively
keep
that
up
to
date
by
having,
by
having
the
same
script,
slowly
fix
this
consistently,
but
I'm
getting
merge
issues.
So
I
need
to
now
add
code
to
do
automatic
merges
on
that,
so
that
the
history
can
stay
up
to
date.
We
can
just
have
this
running
in
the
background,
pushing
into
a
emerge
branch
that
we
can
then
pull
from
that
into
the
structure
that
we
eventually
want.
C
It's
all
very
painful.
It
doesn't
help
that,
because
I
use
windows
with
both
visual
studio
and
bs
code,
keep
my
folders
active.
So
if
I
do
anything
in
my
current
git
repo
that
they're
watching
it
locks
it,
so
I
can't
clean
and
blow
it
away,
so
I
actually
have
to
hack
it
and
go
back
a
directory
level
and
do
it
outside
of
the
repo
make
it
all
work
and
then
load
up
visual
code
or
or
visual
studio,
to
go.
Look
at
the
repos
and
see
how
it
generated
make
sure
my
history
and
all
my
labels.
C
So
hopefully
this
week
I'll
have
that
script
working
so
it
can
create
a
merge
repo
where
effective
oil
will
do
so.
Sorry,
merge
branch
where
effectively
I'll
check
that
in
and
that
will
just
sit
in
the
background,
run
once
a
day
and
then
effectively
pull
from
both,
in
fact,
I'll
probably
upgrade
it
to
the
contrib
repo
as
well
pull
from
all
three
repos.
C
If there are any changes, it'll create a pull request. Because that will all be automatic, we should then be able to have a higher level of confidence saying this is all good, and we can sign it off. Then I'll have another script to effectively pull that branch down and push it into the structure that we want, again in an automated fashion.
C
So we can have a higher level of confidence in it, and we don't have to go through and validate every file, which is the problem with the current one: that being 637 files for just the partial JS repo and the full API repo, and it's just unreviewable. So that's where that's at now.
C
Yeah, once I can get the structure committed... This extra merge step is a little bit painful, but it will have to create PRs, and effectively the PRs have to be signed off. These things will probably run as me, so I won't be able to actually sign off the PRs.
C
With your typical pull request for a release... I think release-please runs as Daniel, so I think he has the same problem there. So yeah, once we get the structure, and we get it building, which I've already got locally (and I think in the notes here I've given a link to my PR for that, or my local branch for that), this extra step is just extra steps to make it all automated so that there's history. There will still have to be some manual changes.
C
I think there are only three files that need changes. But yeah, hopefully this week, and then we can start creating the real branches and start playing: have a minification branch and start trying to compress the crap out of it all. One of the things I want to investigate is trying to solve the global context issue; I'm not quite sure how we're going to do that. Yeah.
A
Yeah, that'd be good. So I think, you know, if you want, in the future we can use this Tuesday meeting more as a working session on some of these things, once we have code to look at, yeah.
C
Then we'd have code to play with. One of the reasons for bringing in the API is so that we can spin a lot faster, rather than having to have an API published with any fixes. And there's stuff in the contrib code that eventually we'd want to push down into the web one, and then for all our events.
C
Or
whether
we
just
do
it
in
the
sandbox
or
we
start
in
the
sandbox
and
we
push
it
back
out,
I
don't
know
we'll
cross
that
bridge
when
we
get
to
the
point
of
implementing.
I'm
I'm
thinking
to
start
with.
We
might
want
both
because
we're
going
to
have
to
have
logs
and
logs
doesn't
exist.
Yet
so
that's
going
to
exist
in
the
main
repo.
A
I think Santosh's OTEP looks like it's almost ready to be merged. And I think, you know, I was going to start by just creating the OTLP exporter for logs in JS; it shouldn't be that much work, and then start implementing the events API.
C
Yeah, I ended up signing off Santosh's OTEP (he's dropped off already). I didn't 100% agree with some of the wording around the way the merged API and logs API was done, but it's like: let's just get it in there, let's just get something going.
C
Yeah,
like
I
I,
it
was
really
the
way
he's
put
it
in
the
otp
was
market.
This
is
the
way
it's
going
to
be
as
opposed
to
in
our
meetings.
We
were
discussing
that
this
is
the
way
we'll
define
it
initially,
and
then
we
may
split
it
apart
later,
but
that's
like
we'll
have
that
discussion,
even
when
we
need
to.
C
Oh yeah, so as a consequence I haven't looked at helping with defining events.
C
I don't think Ram has either. We have some office moves coming up, which is going to be a little bit painful, but it's just going to take a couple of days out; mostly we're working remotely anyway around the week of the move. And I know internally the PM associated with my team is trying to drag in another team, who extend the stuff we're doing with their own custom events for clients, to come in and help with defining client events. So we're still trying to find the right person from that team to be involved.
A
I'm still kind of wrestling with the span-versus-events question we talked about last week, and the reason is, I think there are some folks, like you and Quinn from AWS, who are thinking of this as "everything is events", and everything should be namespaced so that you can validate it on the back end and all that. But I think there are also some folks, like actually Ted...
A
I
had
a
discussion
with
ted
about
spans
and
he
said
that
as
much
as
we
can
like,
we
should
be.
You
know
we
should
like
be
treating
spans
as
as
as
something
that
the
client
should
send
like
if
it
makes
sense,
yeah.
C
And
I
agree
that
if
it
makes
sense
yeah
so
like
for
ajax
requests
yeah,
it
makes
sense,
because
a
span
will
give
you
some
additional
timing
by
default
rather
than
hoping
to
collect
it
yourself.
C
Should it always? Not necessarily; not everyone is going to want a span for that. If you're doing asynchronous things like route changes for a SPA, it makes sense, yeah, but not for everyone. You don't always need the span.
A
No, no. So I think where I'm kind of stuck is when defining semantic conventions, for example for things like resources. So you have, you know, XHR calls: the current instrumentation still pulls in some attributes from the Resource Timing API and adds them to the span.
A
But
you
know
we
could
also
send
them
as
as
events
in
themselves.
So
when
we
define
these
semantic
conventions
like,
should
those
semantic
conventions
be
tied
to
an
event
or
should
they
be?
You
know
like
a
set
of
attributes
that
you
can
apply
to
both
span
and
the
event
or
I'm
not
I'm
not
quite
clear
how
that
works
together.
Yeah.
C
My view would be that we define it as an event. Ultimately, I would like span events and log events to be the same thing, which is part of what I'm trying to drive with the nested attributes, so that we can define it as the same thing. Nested attributes looks like it's going to get a lot of pushback, though, so at least for span attributes we're probably not going to be able to get nested attributes.
C
I don't know if you noticed Tigran smiling when he was asking about the progress of this. It's just going to take long; I think we'll still be talking about it while we're actually sending events in logs.
A
Yeah, it's going to take some time.
C
Yeah. But I think, if you think of it as an event, define them as events.
C
I alluded to that a little bit in my spec change, where I talk about having a strategy to flatten if the exporter or the environment doesn't support it. The same thing would hold going the other way: okay, well, we have an event, and the event is defined with nested attributes, but if you want to send this event on a span, then it gets expanded like this.
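A flatten strategy of the kind mentioned here could look something like the sketch below. Dotted-key expansion is an assumption about what such a strategy might do, not wording from the spec change itself:

```javascript
// Minimal sketch of a "flatten" fallback: if an exporter or signal does
// not support nested attributes, expand them into dotted keys, mirroring
// how the same event could be attached to a span. The dotted-key scheme
// is an assumed strategy, not the actual spec text.
function flattenAttributes(attrs, prefix = "", out = {}) {
  for (const [key, value] of Object.entries(attrs)) {
    const name = prefix ? prefix + "." + key : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      flattenAttributes(value, name, out); // recurse into nested maps
    } else {
      out[name] = value; // scalars and arrays pass through unchanged
    }
  }
  return out;
}

// An event defined with nested attributes...
const event = { exception: { type: "TypeError", message: "x is undefined" } };
// ...expanded for a consumer that only understands flat attributes.
const flat = flattenAttributes(event);
// flat → { "exception.type": "TypeError", "exception.message": "x is undefined" }
```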
C
Technically, you could have them as attributes on the span, but I think, because we're defining events, let's say events equal events, not "events" that are really just a collection of attributes that could also be on a span. Again, it's going to get too confusing and generate too much discussion if we try to have that as semantic conventions; it'll just get held up.
A
Yeah, I think... what is the difference going to be? Are we going to keep the spans but remove span events there?
C
Well,
the
span
of
that
will
probably
look
different.
I
say
probably
because
we
haven't
got
to
defining
what
that
event
looks
like
yet
so,
but
generally,
if
we're
saying
an
event,
has
a
domain
and
a
name
and
then
data
with
its
data
in
it,
which
is
nested.
Then
it's
going
to
be
different,
so.
C
Ultimately,
I
would
like
to
have
the
option
have
both?
I
know.
Ted
has
talked
the
other
way
around
and
actually
having
it.
So
when
you,
when
you
add
a
spam
event,
it
creates
a
log
event
that
works
too
so
effectively.
We
just
say
well,
as
of
this
word
now
that
now
that
eventually
now
that
each
sdk
has
logs
and
logs
have
events,
it
would
be
fairly
easy
to
say,
okay.
C
That
does
have
a
problem,
though,
if
the
span
never
ends
and
therefore
never
gets
sensed.
You've
now
got
a
bunch
of
events
that
are
disconnected
from
this
fan,
but
likewise,
if
a
span
never
ends,
you
know,
you
actually
end
up
losing
all
your
events.
That.
A
So actually, if I try to take a specific example: let's say you have the document-load span and we create an event, the navigation event. I actually like the idea of the event going separately, because, like you said, if the span doesn't finish for some reason, then you still get some information about what happened. Correct me if I'm wrong.
A
There's
also
so
I
I
don't
think
it's
a
bad
thing,
but
I
I
heard
a
concern
from
santosh
that
that,
if
you,
if
you
send
them
in
two
different
payloads
or
two
different
calls,
then
like
a
back
end
that
may
rely
on
like
doing
some
analysis
has
to
wait
for
both.
C
Like, you know, with our back ends, effectively, when you send a single payload it likely hit a single server; when you send multiple payloads, it's extremely likely to hit multiple servers. So doing anything in a multi-server front end for the back end to try and link things together is just not feasible. I'm thinking session ID here.
C
So
if
you
want
to
try
and
have
your
back
end
link
all
the
session
ids
for
something
together
before
you
send
it
down
to
your
storage,
you
just
can't
do
that
if
you've
got
more
than
one
server
receiving
your
payloads
yeah
so
yeah.
If
you've
got
a
single
back
end,
then
it's
you're
gonna
overload
it
at
some
point:
specif
for
clients.
You
know
you
could
have
one
user
that
could
have
like
you
know
millions
of
clients
and
therefore
you've
got
millions
of
requests
coming
in
and
millions
of
connections.
C
So
just
having
you
know
one
one
user
of
your,
your
sdk
could
kill
your
back
end.
Even
if
they've
only
got
one
app,
so
clients
really
do
explode
the
problem
where
instrumenting
a
server.
You
know,
there's
always
a
limited
number
of
servers.
Maybe
is
the
way
to
put
it
and
potentially
unlimited
number
of
clients
using
those
servers.
C
Yeah, and the only way to do that properly is to link them, if there happens to be a runtime out there that wants to do that and therefore wants to limit it to a front end.
A
Yeah, a question for you: do you have the same view on identifying events as what Quinn has brought up? Which is, you know, you mentioned that you want to send everything as events, so...
C
Yeah, so that's really the domain and the name. When I first jumped on this, I was pushing for the name to have the domain as part of it; splitting that into separate fields works just as well. In fact, the combination of the domain and the name would identify what the fields are that we're expecting from them.
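The idea that the domain plus the name identifies an event's expected fields could be sketched as a simple lookup. The registry entries and field names here are invented for illustration, not actual semantic conventions:

```javascript
// Sketch: the (domain, name) pair identifies which fields an event is
// expected to carry. The registry contents are invented for illustration,
// not real OpenTelemetry semantic conventions.
const eventRegistry = new Map([
  ["browser.navigation", ["url", "referrer", "type"]],
  ["browser.exception", ["type", "message", "stacktrace"]],
]);

function expectedFields(domain, name) {
  // Combining domain and name gives the unique event identity.
  return eventRegistry.get(domain + "." + name) || [];
}

const fields = expectedFields("browser", "navigation");
```

A back end could use the same lookup to validate incoming events against their declared domain and name.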
A
Yeah,
let's
do
this,
the
other
thing.
That's
that's
like
that
kind
of
we
diverged
with
with
the
way
we
we
identify
or
classify
events
from
from
spans.
We
got
diverged
right
from
spans,
so
you
know
if
there
are
folks
they're
like
saying
well,
we
should
use
use
spans
for
this
use
case
and
then
use
spans.
A
You
know
to
I
don't
know
to
generate
some
metrics
or
some
doing
some
analysis
then,
like
the
method
of
classifying
spans
from
from
events,
is
gonna,
be
different
and
we'll
have
to
take
take
that
into
account
with
defining
semantic
conventions
right
so
like
if
you
find
yeah,
if
you
define
like
a
like
a
navigation
event
like
that's
easy
like
it's
just
the
name
is
navigation.
C
There would be a wrapping span, and what do we give as the name of that wrapping span? It could be the domain, it could be something we don't care about, and then we have the span event, which still has the domain and the name in it.
C
I
know
you
can
have
up
to
128,
at
least
in
javascript
by
default
spam
events,
so
which
is
probably
why
each
event
you'd
we'd,
probably
want
to
repeat
the
domain
and
the
name,
because
it
may
end
up
having
multiple
and
I
don't
really
care
about
what
the
span
name
is.
It's
I
I
see
the
span
event
and
how
the
event
gets
down.
There
is
really
just
how
we
deliver
it.
C
We
either
deliberate
buyer,
a
logo
we
deliberate
virus
fan,
but
at
the
end
of
the
day,
it's
just
an
event
that
gets
put
into
the
backend
database
that
someone
goes
and
queries
which
is
what's
going
to
be
happening
today.
If
you've
got
a
bunch
of
spam
events
with
128
events
in
it,
you're
not
good,
you
know
the
back
ends
should
be
if
they're
doing
it
in
a
performant
way,
effectively
creating
a
record
for
everyone
and
then
linking
together
with
the
the
trace
id
of
the
spam.
A
So, if I understand correctly, you would have this option of sending an event that's linked to a span, or maybe a span event, but you still always need some event. Yep. So you would never use the span itself for, I don't know, aggregation, or generating metrics, or things like that?
C
And that span will have a time associated with it. So effectively, you start a span, you add a bunch of events, you end the span; therefore the span has a time on it, and that is effectively a metric you'd want to track. But the span itself is not an event per se, in terms of how I'm looking at events. It happens to be a grouping, and each span event or log event is linked to that span ID, so you can actually still create your waterfall from it.
C
I know today it's a hack, in that I think people are creating zero-length... sorry, zero-time spans, where the span name is the event. I don't think we want to persist that model for events.
A
Right
right,
yeah,
I'm
just
I'm
just
I'm
still
just
kind
of
stuck
on
that
document
load
like
it's
like
you
said
that
we're
gonna
have
to
deal
with
backward.
A
You
know
people
using
the
current
instrumentation
so,
like
you
know,
I
know
that
like
for
our
customers
like
when,
when
they
send
send
us
browser
data,
currently
they
they
will.
You
know
query
these
spans.
These
document
load
spans
to
see.
You
know
how
you
know
to
like
look
at
things
like
latency
and
and
throughput
so
you're
going
to
have
to
like
switch
that
so
they
they
start
using
the
logs
or
events
instead.
Well,.
A
Yeah, so essentially, I guess, the summary in my head is: as far as defining semantic conventions, let's just focus on events.
C
Yeah,
let's
just
focus
on
where
we
want
to
go,
so
we
can
have
a
framework
for
defining
all
of
our
events
and
because
the
other
ones
are
still
experimental.
Unfortunately,
if
our
release
it
becomes
more
problematic,
but
because
they're
still
experimental,
we
can
say
okay.
This
is
the
path
going
forward
as
a
as
a
log
event
and
to
support
people
in
that
crossover.
They
can
still
keep
using
the
experimental
instrumentation
as
and
and
also
include
the
other
new
one
so
and
it
would
be
the
other.
C
I think tomorrow's meeting is where we get some of the people who are currently using the existing instrumentation. So yeah, and, you know, as long as logs supports the capability of linking to the span...
C
If
you
want
to
have
a
wrapping
span,
you
can
still
have
a
wrapping
space
yeah
so
that
if
we
do
eventually
propose
to
change
a
span
event,
so
it
really
does
get
sent
as
a
log
event,
which
happens
to
be
a
link
to
the
span
id
everything.
That's
still
work.
C
But
yeah,
I
think
we're
in
for
a
long
ride
looks
like.
I
would
be
surprised
if
we
finish
all
this
and
it's
completely
working
by
the
end
of
the
year.
I've
actually
told
some
of
our
internal
people
will
be
end
of
next
year.
Realistically,.
A
I
would
I
think,
it'd,
be
nice.
It'd
be
good
to
have.
A
Like
this,
I
wonder
what
you
think
about
this
like,
while
we're
working
on
the
sandbox
and
optimizing
the
sdk
we
could
also,
we
should.
I
think
I
think
we
should
also
work
at
the
same
time
on
in
the
current
js
sdk
and
just
improving
the
current
instrumentation.
C
I want to have it running in the background regularly, so that we can keep the sandbox up to date, so that when we identify improvements we push them into the main repo, it flows back in, and then we continue. So that, you know, when we go and add logs to the main JS repo, the logs end up in the sandbox, and then we can continue doing things from there and make it better. Yeah, okay, okay, so yeah.
A
Yeah, I'm just saying that because, at least from my perspective, we have customers using the JS SDK for browser now. You know, it's not a lot of traffic, but there's some; there are some people who want to, yeah. But there are a lot of gaps; I think the current instrumentation has a lot of gaps, obviously. So if we can slowly plug in those gaps... now that we'll have the events API, that will make it possible to close a lot of those gaps.
C
Yep, yeah. I would hope... oh no, not "hope". Every time I say "hope" it rings in my ears: "hope is not a plan". So, Derek, if he ends up reading this, that's you. I would hope that by the end of the year we'll have logs, and some events that we've defined, working in an experimental fashion, which would then be flowing into the sandbox; and in the sandbox we're trying to make that as small as possible.
C
I
say
hope
I
would
like
is.
The
global
thing
is
a
big
blocker
for
me
like
if
we
can't
fix
the
global
context
for
anything
that
we
support,
I'm
not
going
to
be
able
to
move
forward
on
that.
C
So
simplistically,
if
you've
got
if
you're
an
application
developer,
we
have
a
thing
called
auto
instrumentation
so
effectively
our
script
gets
injected
on
the
page
because
you
have
to
be
hosted
in
azure
and
it's
all
automatic.
It
just
happens
for
those
people.
Mobile
context
works
perfectly
because
the
sdk
becomes
global.
C
It ends up being, you know, window.appInsights; no issues for those people. With OpenTelemetry, where we do have issues is mostly internal, where we have teams that create different components that live on the page, and those teams all want their own telemetry, and their telemetry is sent to their own back ends. They don't want telemetry for, say, what we call the Me control.
C
Effectively, yeah. There are very few environments that are managed by a single team. Even Teams: while there is a team that manages the Teams case, and I've done a lot of work with them to try and effectively provide a global framework where a bunch of things are shared, because it hosts other applications inside of it, those applications themselves have their own instance. And today they manage that by using npm instead of the global App Insights.
C
Instead,
therefore,
they
have
their
own
instance,
which
again
still
comes
back
with
open
telemetry.
I
had
to
look
at
the
global
thing
that
everything
falls
back
into
being
registering
the
context
on
the
global,
which
is
it's
not
nice,
it's
almost
there.
I
could
almost
do
it,
but
it's
gonna
have
to
be
backward
compatible.
C
So
it
may
just
be
a
case
that
we
say:
okay,
there
is
a
global
context,
but
if
you're
this
other
class
of
user,
you
don't
use
that.
But
for
that
to
work,
every
instrumentation
has
got
to
be
aware
of
that
as
well,
and
the
support
being
given
a
context
rather
than
always
assuming
or
setting
with
global
context.
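One way to read "support being given a context rather than always assuming the global" is a sketch like this. The names and API shape are hypothetical, not OpenTelemetry's actual API:

```javascript
// Hypothetical sketch of instrumentation that accepts an explicit context
// instead of always reaching for the shared global one. All names and the
// API shape are illustrative, not the actual OpenTelemetry API.
const globalContext = { name: "global", exporterUrl: "https://global.example" };

function createInstrumentation(options = {}) {
  // Fall back to the shared global context only when a component team
  // has not supplied its own instance.
  const context = options.context || globalContext;
  return {
    contextName: () => context.name,
    exportTo: () => context.exporterUrl,
  };
}

// Auto-injected script: no options, so it uses the global context.
const autoInstr = createInstrumentation();
// A component team with its own telemetry pipeline passes its own context.
const meControl = createInstrumentation({
  context: { name: "me-control", exporterUrl: "https://me.example" },
});
```

The point of the sketch is the fallback order: explicit context first, global only as a default, which is what would let multiple teams on one page keep their telemetry separate.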
A
When you first said "global context", at first I thought it had to do with global context for, like, span context.
A
Yeah
yeah,
I
understand
now,
but
that's
another
thing.
That's
that
I
that
I'm
interested
in
at
some
point
is
is
like
when
that
when-
and
this
area
only
applies
to
like
document
load
like
the
context
of
the
script
that
that
that
is
executing
like
while
the
page
is
loading
because
everything
everything
else
is
event
event
driven,
but
at
the
beginning,
like
the
script,
is
loading
executing
and
if
you,
if
you
have
some,
if
you're,
creating
some
spans
in
that
context,
then
they
do
not
get
parented
to
the
document
mode.
C
That is sort of being addressed already, I think; there was a PR a couple of weeks ago. I haven't looked at it again recently. But yeah, the global API thing: there's a bunch of tests that test the global nature of that, and they had to hack it, because they only had it running in Node, and they hacked it so that effectively they'd load their first instance and then kill the internals of require.
C
So when they loaded the second instance, it would give them a different instance. And when I added a browser-based test for that, it broke, because it wasn't using Node's require, it was using Karma's require.
C
So I then had to reach in and hack Karma's caching mechanism and kill that as well, just so that the test would still run. But that did show me that if we can break that, it should work; it's just the current way you create the API today.
C
Well,
the
recommended
way
to
create
the
ipa
api
today
forces
the
global
nature.
So
there's
only
ever
one
instance
and
all
the
code,
that's
in
there
about.
Okay,
you
loaded
sdk
version
one,
but
you
want
sdk
version
1.2.
So
therefore,
you
end
up
with
knob
implementations.
Okay,
so
we
know
if
the
framework
is
using
api
version.
One
and
I
want
to
use
api
version
1.2.
I
should
be
able
to
have
my
own
1.2
instance.
So
that's
another
bit.
C
Although there are multiple teams working on different components running in the back end, today they generally don't want their own telemetry.
C
And to clarify my "generally" comment: we do have, effectively, you know, front-end UX teams and non-UX teams that might want the telemetry, but generally today, at least (I'm thinking of Identity, where I did this), it all flows into a common back end and they just query on an attribute from the table.