From YouTube: 2021-10-21 meeting
Okay, so here we go. Please add your name to the attendee list. On the agenda for today are those points I put on.
The first point is actually pretty short: the PR with our scenarios and the out-of-scope definitions. That looks pretty good. I think I just saw that Ted approved it too, so I think we have the four approvers we need to merge it. That's awesome. There are some smaller open comments from one reviewer that I will address today. It's nothing substantial, just some good smaller points that he made. I can fix those today, and I think then we can merge this OTEP.
D
So thanks all for contributing to that. The second point on the list: we started talking last time about context propagation, and I'd like to continue discussing that today. If there is no other point on the agenda, we can devote the rest of the time to that. I'm not sure if there is anything regarding the CloudEvents stuff you want to discuss, Ludmila?
D
Your PR? Okay, so this is the document I started. Everybody should have edit permissions on it now.
D
I just started this Google Doc to capture the discussions we have here, because with the other OTEP being more or less closed, we don't have any written artifact. I think it's good to have something written down so that other people who cannot attend this meeting can participate, and it also helps to have things captured in writing. Everybody, please feel free to comment on or extend this document.
D
I started out with the status quo, which I went through last time. I took the four examples we have in the current specification, plus the one example that Ludmila presented, and I tried to visualize how traces would look for those examples and how that would fit into our conceptual model.
D
I will go through that quickly today. This part is all copied from the specification, the description and this table, and there is a link to where it comes from.
D
The trace tree here is constructed very simply: we have one send span that corresponds to our creation stage, and then we have two process spans. The colors here symbolize that those spans are created by different processes and are basically different traces. So we have one send span, then one message is processed, then the same message is processed again, and the process spans are directly parented to the send span.
D
The second case is a bit more complicated; that is a Kafka example with Spring Boot. What happens here is that there's a process P that creates a message and publishes it to a topic on Kafka. Then there's a first process that receives the message, processes it, and, based on this message, creates and publishes a new message. That message is published to a second topic and is then received and processed by another service or process.
D
So we see this here: the first process just creates and publishes. Then the second process receives the message, processes it, and then creates and publishes another message, which is received by the third process. Looking at how this appears as spans, it gets a bit confusing. It doesn't correspond that nicely to our model as it's laid out here, but that's basically how the spec says it should be.
D
I linked this to our receive stage, although, as I said, I think it doesn't correspond one hundred percent to the receive stage. At least in our discussions, the understanding of the publish and receive spans was that they work more on the physical level, so they are tied to the protocol that is used to transfer the message. This span isn't really tied to the protocol used here.
D
I nevertheless linked those to the receive span just to show that it's different from the process stage, but that is to be taken with a grain of salt. As I said, we have the send span and one receive span, and the receive span has two children: the first is the processing of the message, and the second is again the sending of a new message.
D
That message is basically based on the message received here; it is created and published, which again corresponds to the creation stage. Then we have another receive span in the third process, which receives the message that was sent here. So here we see that the context flows from the creation stage to the receive stage, and it is parented accordingly, as opposed to the first example, where we had a direct flow from creation to process.
D
Then process C receives them in one go, but processes them separately. In this model we again have two send spans that basically correspond to the creation stage, then a receive span on the consumer side, and this receive span has two process spans as children. In this case the send span is linked to those process spans.
D
So again we have context flowing from create to process, but in this case there is no parent-child relationship; there is a link relationship between send and process. Already the first three examples are pretty interesting to me, because I think one central aim, one central use case related to messaging systems, is to have some kind of connection between the creation and the process stages.
D
I think that is one of the central use cases: all the examples, in some way or another, enable the customer or the user to make this connection between those two stages. But already in the first three examples, every example has a different way to get from one stage to the other. In this example, it's a link from send to process.
D
In the second example, it's a grandparent-grandchild relationship from send to process, and in the first example it's a direct parent-child relationship.
D
That was my thought when thinking through and looking at this: the connection between those two stages is important, and I think there should be one consistent way to make it throughout our examples, regardless of batching or other constraints. There should be one consistent way this connection can be made in all cases.
D
So that's what I saw in the first three examples: three examples, three different ways this connection is made. Then there are the two last examples. One is an example of batch processing, which is itself a bit confusing to me, but let's go through it. There's a process P that sends two messages to the same queue, and then there's a process C that receives both of them separately but processes them together, unlike in the previous example.
D
In the previous example we received them together but processed them separately; here we receive them separately and process them together. I'm not sure about real-world use cases for this, but I guess there's a reason this example is there. So here we see again: process P creates and publishes two messages, and process C receives those two messages separately but then processes them together.
D
And here is how that looks. It's consistent in a way with the previous example in how a process span is connected to a send span: again by a link. We have context flowing from create to process, and here we have a link.
D
In the previous example the receive span was more or less parentless, just hanging there, and here we have the receive span parented to the send span.
D
Again, it's a bit confusing to me. I understand why it's the case: there the parenting wouldn't work, because the receive span would have two parents, and that doesn't work out. I guess that's why it stands alone there. Here, receive can be connected to send, so it's parented. But process cannot be parented to receive, because then process would have two parents.
D
So it's split up at this stage. At least the connection between create and process is consistent with the previous example, because here we have a link relationship; but where the receive span ends up varies.
D
That's a bit confusing when I look at it together with the previous example. Then we have the last example, the one that Ludmila demoed, I think two weeks ago. Here I didn't need to make any diagram, because this is just copied from Ludmila's blog post. It's in some ways a unique example, because here we also have the settlement stage and the publish stage modeled, which we don't see in any of the other examples.
D
Basically, we have the creation stage, where a message is created. Then we have the publish stage, where the message is sent or published. The receive stage is missing, but then we have a process stage, and the process stage is linked to the creation stage, not to the publish stage. We have that relationship here, and then we have the settlement stage, which also has a span, and this is basically a child of the process stage.
D
That is an interesting view, and for me the most interesting and actually a bit counterintuitive part is that when I look at the trace, I see the creation, processing, and settling spans linked together, and then the publish stage below, which doesn't really correspond to the actual flow time-wise.
D
If I just looked at this without knowing what's going on, having the publish stage down there might confuse me, because one would be under the impression that it happens after everything above it, but it actually does not.
B
I don't have an answer, but I'm interested in whether it's a visualization problem or a data representation problem. The timeline: if you look at the block next to the send span, the blue one, it shows when it started, right? It started before, so this Gantt chart doesn't show the timeline precisely. I'm not sure if that really affects the discussion here; potentially it could be shown in a different place.
D
I don't think it affects this discussion. I think it affects a wider discussion, a thought I had when looking at the other example: how to properly visualize asynchronous work in a trace tree. That's basically what we have to do here, and what is being done here, and I think it is quite a challenge to visualize that in an intuitive way.
E
So I have one comment, Conus, if we're at the commenting section.
E
Given that these are all different models in many ways, they're not all going to have the same structure, so I wouldn't automatically say the structure should always be the same. That said, given that in some cases it's multiple traces linked together, and in the one special case of a single sender and single receiver it's a single trace, I would suggest moving away from that model and being more consistent about traces each being a single link in this overall chain, besides the consistency argument itself.
E
The other reason I'd suggest this is that there can be an unknown time gap between the sender and receiver, and the receiver may be physically or topologically very far away from the sender. We're still in a world where a lot of tracing tools, including the Collector, don't handle long-lived traces very well at all, and they'd have to handle traces that can go across regions and things like that. I think it's actually better, if at all possible, for the modeling we're doing, the official modeling, to presume that traces are synchronous transactions bounded by some reasonable web-request-sized window, say ten minutes or so maximum.
E
So that's one way I think would be better, just in terms of making traces that are easy to consume and lowering the risk of losing a trace, combined with simplifying this model, in the sense that, regardless of whether it's a single message or multiple messages, it's always traces linked together.
E
The time constraint, well, it's both a logical constraint and a time constraint. I don't want to focus too much on the time constraints, because technology is going to change, but when we're modeling traces, part of the idea with traces and links is that traces tend to represent synchronous transactions.
E
There's a transaction occurring that is consuming resources, and in general there's somebody waiting for the transaction to end so that those resources can be closed down. And from a practical point of view, you have the issue of what's called a trace assembly window. The Collector has this, Lightstep has this, a number of tools have this: it's the window before you put the trace into long-term storage.
E
You want to store this stuff compactly and efficiently in some kind of immutable database; that's the way people tend to go with this. But then you have this interesting question: when is the trace "done", quote-unquote? That's an unknowable situation, and so bounding traces helps our implementation modeling.
F
That is true for, I would say, the 90 or 95 percent case, or probably more. But there are scenarios, a bunch of real scenarios that we see, where that's not necessarily true, especially with asynchronous messaging, because it initiates operations which are longer-running. So, just as a data point:
F
One of the things we're currently doing in our brokers is to eliminate the notion of timeouts for locks, or rather to make it an option to make those timeouts almost infinite, because we're seeing quite a bit of evidence that long-running work is being used more. I'll give you two examples. One is re-simulation for autonomous driving.
F
We see a bunch of companies doing effectively the same thing without necessarily knowing of each other: they take these giant data sets which they collect day-to-day out of autonomous driving, they chunk them up, and then they run those data chunks.
F
So that's one example where a message is driving work, but they literally need to come back and complete that message once they're done with the work, because they only want to do that expensive compute work once. The other example is everything that involves people in a workflow, where you are driving information towards a person using a message, and then that person will execute something and then basically check out that message.
F
Anytime you have people in the loop, any assumptions you may have about seconds or minutes are killed by the coffee break. I'm just saying that there are outliers there that are very real and that we see specifically in async messaging, because async messaging lacks the inherent nervousness of RPC, where you're effectively hanging on the socket and the lifetime of the socket basically determines how long a thing can possibly run.
E
It's very valid that there are going to be outliers here, and that's why I was saying I don't want to focus too much on technological limitations; but there are conceptual limitations to something like a trace. For example, you could look at the life cycle of a web service as a trace, the life cycle of a container as a trace, or a user session as a trace, conceptually.
E
We don't model those things very well at all in OpenTelemetry right now, which is why the RUM working group, for example, has to do some real fundamental work to figure out how to deal with user sessions.
F
The only reason I raised the point is that this is a thing which is different from RPC. With RPC, people intuitively know that doing work of any length is dangerous, because the socket will collapse and take your entire context with it. That is different with async messaging.
E
Yeah, there are definitely outliers. I think in this particular case, keeping the trace associated with the open sockets to some degree would be good. Or at least, the place where we're clearly violating that is having a trace go into storage, reawaken later on the other side of the queue, and then continue being part of the same trace. That's specifically the area where we want to say that traces should actually be bounded at that sort of freezing or storage point.
F
There are situations, especially if you look at a message as really being the job that you process. The work we're doing now with making these locks quasi-infinite is because of Kubernetes and some other container or sandbox technologies, and because of stuff like service meshes, connections have become super volatile: stuff changes all the time, sockets get killed. And because the work we're driving often tends to be very long-lived, we will basically reconnect under the covers and still be able to maintain the same lock on that same message.
F
So the job doesn't get reassigned; the client can actually recover its connection. This is something that has been in AMQP since 2012, so it's not really new, but nobody has implemented it, so we're implementing that now, because connections just change all the time.
E
Yeah, absolutely, and I would point that out as simply a technical limitation we're looking at with current systems. Just like you're having to go in and deal with long-lived locks, systems are going to have to figure out a way to deal with long-lived traces. The Collector right now only has a short trace assembly window, and it requires you to send all of the spans to the same Collector instance in order for it to do any kind of tail-based sampling.
E
That's a ridiculous requirement, right? So the technology is going to move along and improve, but I think the conceptual part is the main part: we have to look at traces as meaning active work, active processing being done, and not include the time when something is being stored or transmitted through the queue.
E
It would be better to have a consistent way of modeling that as separate traces, partially because we're going to want a consistent model for measuring the time something was in the queue or in the messaging system versus time spent processing. Right now we're only measuring the time spent by user processes processing these messages, creating them and then doing something with them; we're not measuring the time something spent inside the queue. It would probably be very helpful to have a consistent way of splitting that out, because we're going to want to start measuring that part as well, at least indirectly.
B
I can chime in; I have a couple of comments. When we first discussed these scenarios, we identified two classes of messaging systems: one is typically used to process one message at a time, you can think about it this way, and the other is designed more for batch processing.
B
So what you're saying is, okay, let's merge these two, but for users it's maybe a different thing. Still, the feedback we've got, on this picture for example and similar things, was that we use batching everywhere, and we are wrong, because it confuses people who don't use it. There are a lot of people who don't use batching, and they are confused.
B
So your concern is that we cannot properly treat it as a single trace because we cannot reconstruct it. But at the same time, the majority of things that happen, happen within this ten minutes, right? So what is the worst thing that can happen if we keep it as a single trace and it takes longer?
B
Okay, the trace is not assembled; you cannot represent it for some people, but it works better for the rest of the world. Also, links require a bit of extra work, right? You need specific indexing and other things, and the more you use them, the more expensive your solution is as a backend. So basically, maybe this is a way to provide a better experience for users who don't need links, and a better experience for you as an owner of the backend.
E
Yeah, I can definitely see the argument, and to be clear, that's where I was starting from: you can process this all as a single trace, but when we get to batching and this other stuff it starts to look more complicated and we can't do that anymore. But I do wonder, cost analysis aside, once these tools actually present links properly, why would it continue to be confusing?
E
I could see in some cases, where we're just talking about a FIFO queue within a process or something like that, maybe modeling that differently, but...
D
I have a remark on this, also from the backend side. In a way I agree with Ludmila, because most of the backends out there don't properly support links yet, and if we come up with something that relies on links, that will maybe not work for some backends, or require significant work on the backend side. But on the other hand, here for example the backend handles it very nicely: you see that Azure Monitor calculates this time spent in queue.
D
I think that creates more work on the backend side, because just to show this time spent in queue, they would have to handle those three variants and check for each of them in a trace to see how to calculate it. From that point of view, it might again be easier for the backends if they just had one way to link those two stages together.
B
There is information in the message about when it was put into the Event Hub, and then it just adds some information on this span. So it's an extra heuristic. The question is how to keep this hierarchy properly and always know when to show this time spent in queue, and since it's a property of one span and not two of them, it can work even if you only own the consumer side.
D
Regarding links: I think here there's the link between the stages, and you said that this is basically confusing for many customers. So, another question: do you always model it with a link in every case, or do you vary it?
B
We are kind of inconsistent. In this case it's links, because you can see two different operation IDs above, separated with a comma; that's how I know. If there was batching, it would look a bit different. What's confusing for customers is, let's say, on the producer side (I think I showed the picture previously): if we have multiple messages and we batch them together and send them as a batch, then we model it with links. But what happens is that the majority of users send one message, and then they have this Event Hub message span plus an Event Hub send span, and the way we show it is super confusing. The most confusing part: if they only have the producer, and let's say they send ten messages, they see a span per message and a span for the send, and it makes zero sense to them. But putting this aside: what I found is that if we're dealing with one message on the producing side and one message on the processing side, showing it as a single thing, as if it were typical request processing, makes more sense. It's way less confusing. There could be links inside, like here, and you wouldn't notice them, but there is no reason to split it apart.
E
Is this something we could do in the UI? And by the way, I'm very interested in the user research here. I would love it if there's a way to get more user research out of Azure about people using the stuff you have already implemented. That would be really fabulous.
G
Yes, so my comment was that I thought Matt was leading some type of customer research. I have not been following up on that; do you know if that's still continuing? If not, I can follow up with him.
E
That was kind of put into storage; he had gotten pulled off onto other projects, but we can resurrect it, and we could definitely use the user study we put together as a baseline.
G
Okay, I'll think about it. I'll talk with Ludmila offline on that and follow up with Matt to see what we can do.
E
We should probably move on, but my one other request, since Oscar's implemented this, is actually on the technical side. We know that this stuff, switching to links, is expensive, and I wonder if there are things we can do in terms of the data structure that might make it cheaper in some way. I'm not really sure what that would be; I just want to throw that out there.
G
Yeah, I agree. Maybe someone on Zaki's team or something, but let's sync up more offline on these two topics and figure out if we can provide an update next week.
A
I want to also add a topic about sampling. Sampling is very complicated when we're doing links, because when you have systems that process a lot of data, you usually sample, say, one out of a thousand. If the receiver of the message samples one out of a thousand and the sender is also sampling one out of a thousand, then you get a one-in-a-million chance that the same sender and receiver are both sampled and can connect to each other, which really complicates things.
A
So this is one thing. The other thing, which supports links, is the case of even a single sender and a single receiver receiving one message.
A
Sometimes the process that receives the message is not receiving it as the root of the context. You might have an HTTP span that is sending some request to a server, which gets back a single message from the queue and then processes it. If you're connecting them into one trace, then it's really problematic to represent this HTTP operation.
A
So there are pros and cons to each method; I just wanted to add them. I also wanted to add from my experience that people think it's a bug when they see these links. They don't understand why the spans are not connected; they're expecting an experience like HTTP. You tell them: with OpenTelemetry you'll have visibility into everything that happens in your system, you'll just install tracing and you'll be able to send a message and see who's receiving it. But none of the backends currently support it in a very good way. So people just look at it in Jaeger and they say: what happened? I sent a message and I don't see the receiving part. And then they see the receive trace separately.
A
They say: you promised me visibility, but there's no visibility. Maybe in the future, when all backends support links, it will change, but currently, as a maintainer of a few instrumentations for messaging systems, people keep asking about this. They open issues and say: you have a bug, it's not connected.
A
Things are not working, and I have to explain again and again that it's by design. It was so frustrating that I was having thoughts about falling back to representing even the batch receive as a parent-child relationship instead of links, because people really don't like these links currently; it's really confusing them.
D
That was great. If you have other thoughts, please feel free to extend this document, write comments, or just add them there. A few remarks on that. Regarding sampling: that is a real problem. We put it out of scope for 1.0, but I think we definitely need to tackle and talk about it at some point, maybe together with the sampling SIG, to come up with a solution, because for the end user this will be a real problem, how to handle sampling. And the same goes for visualization.
D
That is also a good point that I thought about in the beginning: we can come up with a very fancy link model here that works great, but then it doesn't work in backends that don't properly support links, and end users will be confused. So actually I'm wondering how link support looks across different backends. In Jaeger, you kind of have to go somewhere down and click.
E
Now, this is a chicken-and-egg issue until links are a thing, and it's really unfortunate. In my experience at Lightstep, trying to warn the product team about this, they tend to have a wait-and-see approach, because they have a bunch of other priorities and there is a tendency to say: we want to work on things that customers are asking for. I imagine a lot of organizations are like this, so there is an unfortunate chicken-and-egg issue here. So I do wonder if there is something we could do as a group. This might be a question for Michael here, as a PM.
E
I think you're good at understanding how to best pitch this kind of thing: whether there's something we could do as a sort of industry group to go to all these different major backends and basically say, you need to prioritize getting this into your system, because it is a low-level data modeling issue; you're going to have to get your platform or database or backend team involved, it might take you some time to do, and you need to get started on it now.
D
So that's the main question for me working here: how far do we take current backend limitations into consideration when designing this? I think that's why we ended up with those many different solutions: people just wanted it to look good on a certain backend, and then we have three or four different varieties of doing the same thing.
G
My thought in terms of how you position it to others, in order to get them to incorporate and adopt it, is more around: work backwards from the customer and think about the value proposition. In this case, I don't think we should be very specific to each backend.
G
I think it should be the reverse: we provide the standards as a community, and they can then align to that. The benefit of doing so is the message we need to make more concise: look, regardless of how cloud-native your end customers' approach to building applications and services is, you really are enabling them to use many different types of backends, including yours, and from an extensibility standpoint it's just easier. So I think we just need to tighten up the message in that kind of way, and I think that'll help in terms of convincing these backends, letting them know: this is the future, the better approach, and it can be scalable for everyone.
E
Yeah, it's been a little tricky. Let me put it this way: OpenTelemetry is going to have links, and we're going to use links to model these big, disjointed asynchronous systems; I think we have to do that, and we should get the modeling correct. It just comes down to this: any way you cut it, all of these backends are going to implement links. The question is just how we get through this period.
E
Are they going to go through that process because users are really frustrated, or are they going to go through it proactively? Either way they're going to go through it, because once there's all of this OpenTelemetry instrumentation for Kafka and all these big systems, and the situation is that some backends support it, some vendors have a competitive advantage: they can go out and advertise that they've got good Kafka support, and I think it'll bubble up into the other companies.
E
Our job is just to come up with a consistent and clean model. When I was talking earlier about traces being synchronous transactions, maybe I shouldn't have brought up the technical limitations, because that's not what I was trying to focus on. It was more just saying: let's make sure we have a consistent model in how we're modeling this stuff, which is why I was saying that for some of these use cases, even though it would be feasible to model it as one trace...