From YouTube: Diagnostics WG meeting Sep 30 2020
A: Take it as a no, so we go with the regular agenda: basically, the items listed in the issue tracker with the diag-agenda label put on them. If we get some time, we could also touch base on the user journey deep dives, but usually Mary drives that, and in case Mary is not coming by that time, we will see how much time we have left for that and take a call there. So, to start with the regular agenda.
B: Yeah, yes. I'm proposing at least AsyncResource, which has been largely unchanged for a while now and is basically stable in all but name. AsyncLocalStorage I'm less sure about being stable yet, but as someone that's worked in APM for many years and used basically that feature in userland, I'm fairly confident about what it's supposed to do. It's being used in the Datadog agent right now and it works, so I'm fairly confident in it.

B: I can understand if people still feel that AsyncLocalStorage is not quite ready yet, but I would like to suggest we try and get a timeline for when we would consider AsyncLocalStorage stable: aim for Node 16 or something, maybe.
C: That would be helpful. For example, one of the criteria we had for, I think, both of them was real-world usage, and you just mentioned Datadog; that's one of the things that would fulfill that, right?
B: Okay. I know it has proper tests and documentation and all that stuff, and it is being used in userland in some places, unfortunately not as many places as it should be, just because it's still considered experimental, so some people aren't adopting it.
B: And AsyncResource triggers all on the JavaScript side, so it's actually faster than a lot of the rest of async_hooks, which has to cross the native-to-JavaScript barrier.
C: That may just be a case of things that have already been accomplished. I know you did some work around performance, right, so you might want to put something into the exit checklist that says: reach this level of performance. Then it's a place where you can say: oh, and you've already done the work; it performs as well as or better than all the existing ecosystem-type alternatives.
B: There are rough equivalents to AsyncResource in userland, but nothing that actually interacts with async_hooks; it's the same thing for something else. CLS has its own way to wrap things, but then you're stuck with CLS, and there are other alternatives to CLS that have the same sort of thing.
C: Right. Is it just that one PR that legendecas had in flight? Is it just that that needs to land, or is it something more than that?
E: I think this needs to land, yeah, to fix this. I don't know the actual state, because between the time it was created and now quite some changes were done in this area, for instance the performance improvements by Stephen and so on, but this is for sure one of the open topics.

E: Landing it should fix this; otherwise it's more or less impossible for native add-ons to do it correctly.
D: Basically, with enterWith, the kind of bleeding through call frames is the issue. I personally understand the desire to have functionality similar to that, but it doesn't have clear boundaries in its API contract, and so I'm just not comfortable moving that out anytime without a clear boundary set, or some sort of limiting condition that allows you to prevent bugs like I showed in the issue thread.
B: The limiting condition of when it stops is specifically bound to the async resource stack in core, which is exposed through executionAsyncResource, and also correlates directly to the after events in async_hooks.
D: I guess so, but the problems that I stated and showed with some code examples in that issue thread are about exiting the call frame in which you are performing enterWith and potentially bleeding into multiple logical async... whatever you want to call them, tasks.
B: It's not bound to the call frame, it's bound to the tick. Between the before and after of async_hooks it'll capture everything within that, which could be a whole bunch of stuff that may or may not be directly connected.
D: I'm not stating that it is significantly different from what we already have. I'm just showing bugs that I am uncomfortable with in that issue thread, and the response there was that those aren't bugs; it seems to be the same response here. When I hear that, it makes me much less comfortable and much more wary that we should ever ship it, because I'm trying to show code examples with conditions that would be problematic, that do have stale references and things like that, and being told that the stale references are intended is worrisome.
C: And remember, async_hooks is experimental as well, so being the same as it doesn't mean that it's good to go to non-experimental.
E: What would you say... so it is, in the end, not leaking. It just means, as an APM tool, you want to monitor every emitted event.
E: It means you need to go one step further. If you go down into Node core, somewhere at the beginning of, let's say, an incoming web request, and you call run() there, then everything beyond that, processing this incoming web request, will be on this tick and belongs to this single resource, and this is the one we would like to monitor. One way is to call enterWith at the beginning of it, or you wrap it via run(), and this wrapping should happen at the very, very beginning of the tick, so really down, down, down in Node core, which...
D: Breaking that guarantee is the issue. You're trying to state it must happen at the beginning, and if somebody does something such as prependListener, that guarantee is broken. Also, unlike what you're describing, it is not a wrapper entirely, so you can have situations where, if you perform enterWith multiple times on the same tick, you can get stale storage values. This has been covered a bit in the issue thread.
D: So there's no guarantee that enterWith only happens once per tick. In concert with diagnostics channel, that could be an alternative: we only emit diagnostics channel events, and then we have some kind of hook that is performed there.
D: That would, I think, always ensure that it only happens once per tick, and it would happen at the beginning.
B: The user of diagnostics channel has to specify, in their own code, that they listen to whatever specifically named channel they want to be the starting point of their trace, transaction, whatever you want to call it. They have to inspect the message and decide at that point: should I call enterWith, and what object should I use to call enterWith with to set the context?
B: It's not a strict match of "this just maps to that". They have to look at the events and make their own decision based on whatever is in the structure of the event. They might want to grab the URL out of a request and decide "I want to match against this", or they might only care about the method, or they might want to match everything, or they might want a completely different point in the request lifecycle to start their transaction.
D: If we have the diagnostics channel events and the ability to choose, such as running a function against the data in the diagnostics channel events...
D: You could try to limit the possibility of those problematic situations, such as throwing if you've already called enterWith on the same tick. That would definitely prevent at least one of the odd situations.
D: So, in the issue thread, I had a function called unrelated timer, I think it was unrelated set timer, where, if you have a tick and it's doing multiple processing of some kind, say a batch operation, you may have multiple asynchronous tasks being processed on the same tick, so you could, for example, potentially emit multiple request events.
D: This current API with enterWith has some issues with that design. That is really it: you can get into situations where you get a storage value of the previous request, and that is what I'm calling the stale references, and so there's a code...
E: We want all of them to get the context propagated.
D: The claim here is that you can produce code, such as I did in the issue thread, where you have a set of events that are not inherently related to each other; they've just been batched somehow. Pooling, potentially, is a good example: if we have some pooling mechanism and it's batching up to-be-processed events of some kind, you can actually get into a case where you emit two requests...
C: Okay, so in the case of enterWith, I think I can kind of see that: if you call it multiple times with different stores, then, depending on when an event fires, you might get associated with the one you called it with the first time or the one you called it with the second time, right? That's the source of it: you don't know which of the two you're going to get. You might get some with one and some with the other.
D: Yes, that is the only real issue, and the problem is just that enterWith doesn't have any sort of API guarantees that allow you to avoid this currently.
B: So if you batched, say, requests together in the same tick, and you gave each request its own AsyncResource, which arguably we should, enterWith would scope to that request. That's actually exactly what run() already does: it just creates a new AsyncResource and runs enterWith within it.
B: So I would argue we should be able to, specifically because there is a use case for that in APM: switching whatever the current span is.
E: So yeah, in .NET, for example, they have only a set, which is effectively the same as enterWith, and I'm not aware that they have any problems with it.
E: At least the colleagues I talked with from the .NET area are fine with just setting it, then changing it and clearing it whenever they need to. Whenever it's set at one point in time it's propagated, and if it's changed, everything that happens afterwards gets the new value propagated, and so on. This is more or less the same as if we had only enterWith in Node.js, and, honestly speaking, I don't understand why these two languages should be that different.
D: To my knowledge, with async storage on the .NET side of things, it persists past the current tick; it is persisted across the whole call chain of your async task, so even in the next async task running, it'll still be set to that value.
E: Yes, this is the same for AsyncLocalStorage: AsyncLocalStorage propagates sets to all the follow-up work.
B: Because what enterWith does is bind the context object you give it to be the current context until the end of the synchronous tick, and any asynchronous events that occur within that time will also be bound to it. So between their before and after, in descending ticks, the same context object will be bound again, and so it will flow through the whole async tree.
B: If there's any point where that's broken, it would be a resource issue, and there's the known issue that event emitters don't actually have any async resource binding. The handler of an event emitter doesn't actually bind to anything; it just runs in whatever the current context is.
B: The point where an emit happens is where an event emitter handler is bound, not the point where the on-event handler is attached. There have been different opinions about what that should look like, and there's the same argument about promises: it's hard to define whether the context of a callback is where you called then() or where you resolved it. It's hard to say, and so, as a result, there's not been any movement on wrapping event handlers in an AsyncResource.
D: If they get a warning or something, they at least know that they're doing something weird and that they should probably go and look at some docs, because doing enterWith multiple times is pretty simple to do with, you know, queuing or batching.
D: But if the intent is to allow enterWith multiple times, they will do so without any sort of mechanism to know that, hey, this is very abnormal and probably not what you wanted to do. And I can't see any reason why you would want to do enterWith multiple times.
D: A lot of the write-up on the domains post-mortem was basically around similar things: people were doing things improperly, and just us claiming that they were doing them improperly didn't really stop them from doing it. Implicit context propagation is something people want; it's something, in general, you need if you want realistic instrumentation, but doing it in a way that has these kinds of easy gotchas is what worries me, so I'll think on it.
B: On that, I'm all for having a big red warning sign in the doc saying: don't use this unless you really know what you're doing. The intended general consumer API of AsyncLocalStorage is absolutely the run() method; almost everyone should use that. enterWith is quite specifically just for APM vendors, who would have a fairly strong understanding of exactly what the implications of running it are.
A: In the interest of time, since we have other items as well, how about this?
A: It's probably worthwhile to churn a little bit more through the issue tracker itself, probably stating the problem exactly, with us having a look at that, then running it through this meeting maybe once or twice, and then, if required, we can have a dedicated session on this discussion itself.
A: It was good, but when it comes to the user experience, the name of the API, or the external manifestation of the API, whether it was right or wrong, there was always this debate. So it's a good discussion to have, because once these APIs are frozen, it's absolutely difficult to make changes in the stable state. So, yeah.
C: A good example: I pasted into the minutes the link for the worker threads one, and one item is to review the source code for compatibility with terminate.
B: It's just sitting waiting for more review. I have one approval on it; technically two right now, because James accidentally approved it and hasn't unapproved it or requested changes or anything, so there's really only one approval right now, that's it. Yeah, I've done some squashing and rebasing, so there are now just four commits. There's the initial commit, which is the core API, the part that I'm fairly sure about, and then I have two additional commits which make a channel async-iterable, which is kind of an idea...
B: ...I was playing with at the recommendation of someone else, and I'm not really sure of the value of that, so I may or may not drop it depending on whether we get any comments on whether it has value or not. And then there's the span API as well, which I pulled out, though I do feel we need something solving what that was trying to solve, which is correlating multiple events together.
B: I'm open to other ideas if people have better ideas on what that API should look like, but roughly, we need to be able to correlate events to produce a span on the APM agent side. We want to be able to tell that this request-end is related to this request-start, and things like that, in a more easily recognizable way.
B: If it's an fs readFile or something, you might have multiple readFiles in the same request that could even be triggered in the same tick. So we need to be able to connect the points more cleanly, to say that this callback was triggered from this readFile and that callback was triggered from that readFile, which we don't really have if we're just emitting plain data.
B: That's basically what we need, but we don't really have any guarantees about the timing of diagnostics channel events, and they could happen somewhere in the middle.
B: So, for example, if someone does an fs.readFile, there's going to be an open and a close in the middle of there, and we don't necessarily want the diagnostics channel events emitted to be associated with that open and close. We want people to be able to say: this is related to me doing some fs.readFile stuff, so I've passed along this id somehow to say, yeah, this message is related to this.
E: So instead of trying to have a central mechanism with a running number or whatever: just consider you have some messaging system which emits diagnostics events. This messaging system may already have existing correlation ids inside, like W3C Trace Context or whatever, and if you have something like this in place, why not use it?
C: Is there any link between this and what we were just talking about, the asynchronous execution? Like, emitting the id of the current async context: would that potentially link together the right things?
E: I'm not sure if we can or should bind these. Maybe we can bind it in a lot of cases to some sort of other context, but there might be quite some cases where this does not match, and in general I would say that the diagnostics data emitted and the async context propagation are two different things, and we should not mix them together.
E: Well, I would say we do not need anything global here. If you call fs.read, and we know a read is an open, several reads and a close, because this is how it's built internally, then the fs module has to know that this open belongs to this read and that this interval belongs to this read. So there is something in place already within the fs module to track its operations.
E: Some handle, for example, or whatever, and if this is something they don't want to give outside, they can always add some sort of running number to the object they have internally and use this running number to issue it together with the events. So it's not a global way to do it, it's per channel. This means that, as a user, I have to correlate fs events in a different way than http events, for example, but on the other hand we do not need some sort of complicated implementation in core.
B: Yeah, that's right. Another example; I think fs is the less clear example, just because it's a fairly self-contained API, so it's not too hard to control where data is coming from there. But with, for example, the http API, we might want events like: anytime someone does a stream write on the response, you might want to...
B: I don't recommend actually doing this, but you might, in theory, want to capture some information at arbitrary points in the request lifecycle, and if you tried to attach that to the async call graph, that's not going to give you anything useful. You're going to be maybe in the middle of some other callback, or something completely unrelated that just happens to call write on the response.
C: Yeah, I'm convinced; I understand better now. I think that's what Gerhard was suggesting: whatever the channel is for a certain thing, like, say, http, if there's a correlation, there's already something that tracks it, so it should be driven from that, as opposed to anything else.
B: Yeah, we just need to define exactly what that correlation system is. As was already said, it only needs to be unique per channel, so in theory anyone could make their own correlation for each channel, but I think it would be helpful to have at least some sort of rough standard for core to use, even if it's not a public API, just so we can have some consistency when we listen, like: yeah, I want to capture the lifecycle.
E: So this, I think, is in general agreement, but maybe we can just do it like this: we add some instrumentation later on, let's say in the http client and http server, and this stands as an example of how it is documented and how it works. We do not have to choose the same mechanism there; we can, but we don't have to. But in any case, once we use it, we publish diagnostics data in one of these modules.
B: The main reason I initially created a complete span API, as opposed to just some id-generating API, was just that, from the user perspective, if there's not a clear API for how to do a thing, they might just not do the thing. If we don't provide a clear and obvious way for them to correlate events...
B: ...they might not think to do it at all. If we leave it up to them, like "you can make your own correlation id if you want to", a lot of them are just not going to think of doing it, and that's helpful information that we want them to provide. So having some sort of indication that people should do this, I think, helps; I'm not sure what that indication is yet.
E: So the current proposal specifies this by adding a new API, the span API; that's how the proposal looks, as an API for this use case: just use it. The other way would be to not have this API, but to document that, if you have events which belong together, you should add some sort of identifier and document it to make this clear in the recommendations. This is, in the end, the same solution for the topic.
C: So they would actually have to choose to not provide an actual correlation id; it would be obvious from that perspective. As opposed to documentation that says "hey, you should do something like this, figure it out", it would be more like: here's a piece of data that you need to provide when you emit an event, so it's obvious you should be doing something.
E: It's always up to the publisher to decide which API they use, and so they decide what they want to group and whether they want to group something. Even with the existing API, no one forces you to use the span API; you can issue every single event via the normal publish API. So it's still not enforcing anything; it just gives a good hint, by the presence of this API, that there's value in doing it.
B: That's possible. One thing that the span API currently provides is also specific start and end events, just to delineate the overall lifecycle of these correlated things, which would be helpful as well from an APM perspective: being able to correlate that to a span which has its own lifecycle that's supposed to match it.
A: On 34895, what would be the one-liner conclusion? Should we continue reviewing the PR?
A: Thanks. All right, unless somebody has anything urgent to discuss, we should skip the other three items in the agenda and call it for today.
A: Okay, so thanks for the discussion; talk to you in two weeks' time.