From YouTube: 2022-06-16 meeting
Instrumentation: Messaging
A
P.M., not even afternoon; evening.
C
To New York, and then from New York to Austin.
A
Well, I do, big time. The challenge is, I've got two young kids, and my wife would kill me if I went, so I don't think I can. I don't think it's viable at this point, unfortunately. Do you have kids? Yeah, they're of the age where, with two parents, it's like: all right, we can handle this together. If you strand one parent with both children, you need a very, very good justification for doing that.
F
For one day, which is unbelievable. I'll also be away next week; I'm on vacation.
E
It seems that's not the case. Let's see; I don't know who proposed that, there was no name here. I was actually looking forward to it, but yeah, if the person who put it here is not here today, then I guess we need to skip this, and we can jump right away to continuing our discussions on attribute names and span names. Maybe first, regarding the context propagation PR, which I think I have open somewhere here: there were a few comments from Duane.
E
I think those are the only two open comments, and I think we can quickly discuss them here and then close them. The first comment relates to this sentence here, which says that intermediaries that are not instrumented might simply drop the transport context. I think his point was that this might be misleading: it might actually be read as a recommendation for intermediaries to drop that context. I think I pointed this out.
E
There are actually three possibilities. Intermediaries, independent of whether they are instrumented or not, have three options: they might alter the context, they might drop it, or they might just keep it and forward it. At this stage, in this document, we should not make any kind of assumption, and if this sentence suggests any assumption or recommendation, I think we should just remove it. I think Duane agrees, so I will just remove this one sentence.
E
About the other future possibilities for the protocols, the messaging protocols: Duane suggested that we refine this wording a bit, because here we just say "once protocols reach a stable state", but we don't really define what this stable state should encompass. Actually, the stable state should encompass more than we currently lay out in this document, because in this document we only talk about message context propagation, but there is also a future work item.
E
We also have this transport context, and what Duane rightly points out here is that these two open items, these two future possibilities, depend on each other. The first one is basically working out the two context layers, and the second one is stabilizing the protocols and making recommendations; of course they are dependent on each other, so the first has to happen before the protocols can be stabilized.
E
I will merge this suggested change from Duane, and then we basically have all the open comments covered. I unfortunately didn't make it to the spec meeting this Tuesday, but I will definitely try to make it next Tuesday. I have also already reached out to some people, TC people, to get more reviews on this PR, so we can get it merged.
E
Okay. Now coming to attributes: what I also did here on the agenda is add a list of existing attributes. That is basically the list that is here, and I think a goal for us should be to go through this list of existing attributes and see whether we want to keep them or change them.
E
I think that is a requirement for 1.0. If there are additional attributes, we can always add them later on. That is why I made this list here. Once we are through the existing attributes, that is at least the minimum for version 1.0 of the semantic conventions. I also tried to mark some progress here.
E
So we have this messaging system attribute, already discussed at length, and I think we came to a compromise here.
E
We have the actual messaging protocol and protocol.version. The PR I put up for introducing those application protocol attributes was merged, so those are available, and I definitely think we should push for replacing these messaging-specific attributes with the generic attributes that model the same concept.
E
So we can basically phase out these messaging-specific attributes and just use the generic ones. The same is true for all the net.* attributes, like net.peer.ip and net.peer.name. That naming might actually change; I think Liudmila is working on that, which is actually here. But whatever the name changes are, we can continue using the generic ones here.
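The replacement being discussed could be sketched as a simple attribute-migration table. This is only an illustration; the generic names used here (`net.app.protocol.name`, `net.app.protocol.version`) are my assumption of the merged attribute names, and the final names were still subject to the renaming work mentioned above.

```python
# Sketch only: maps messaging-specific span attributes to the generic
# replacements discussed in the meeting. A value of None means the
# attribute becomes obsolete and is dropped.
GENERIC_REPLACEMENTS = {
    "messaging.protocol": "net.app.protocol.name",          # assumed name
    "messaging.protocol_version": "net.app.protocol.version",  # assumed name
    "messaging.url": None,  # obsolete once the destination carries the full path
}

def migrate_attributes(attrs):
    """Return a copy of span attributes with deprecated keys replaced."""
    out = {}
    for key, value in attrs.items():
        if key in GENERIC_REPLACEMENTS:
            replacement = GENERIC_REPLACEMENTS[key]
            if replacement is not None:
                out[replacement] = value
        else:
            out[key] = value
    return out
```

A migration like this keeps untouched attributes (such as `messaging.system`) as they are while rewriting only the deprecated ones.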
E
That leaves us basically with these remaining attributes to discuss, to come to some conclusion about what we are going to do with them.
E
I think we can actually close this comment here and say that, yes, we agreed to model it this way.
E
And I think the initial semantic meaning of the message destination was just the queue or topic, not the fully specified broker and tenant. I think the question here is whether we want to change or alter this, or keep the initial meaning and only have the queue or topic in this destination.
G
Yes, and I think it's a little bit more complicated: we have the endpoint, but we also have the name of the topic or queue. It's not one thing; it could be a path, and I think last time we talked about how we want this path to be represented.
G
I think Amir had the suggestion to have an array of these things, but then querying becomes hard. Maybe we'll have one attribute for the whole thing, either a path or the absolute URL, or we'll have extra system-specific attributes, with the generic one describing the array of things.
E
Yes, but just to clarify, since we talked about this: just to make sure, when we have the full path here in this destination, that will make this messaging.url attribute obsolete.
E
So we will not need this separate URL then to uniquely determine the destination.
G
I would actually suggest that maybe we should reuse the peer name attribute to specify the host, and the peer port to specify the port. This way, people who are interested in, let's say, just network failures can run their query against the broker host rather than a specific topic or queue.
G
If you want anything per broker, and you don't care about how many queues or topics there are, or if you want to count how many there are per broker, then we would use those generic attributes, and the destination name, or maybe some other attribute we'll come up with, will describe this path for you.
E
That makes sense. So basically, this part we will cover with net.peer.name and the port, and this part here we will represent with the destination.
E
Yeah, but I think the main point then is that we say: okay, the destination name is not just the queue or topic. It should be enough to look at the destination name alone and have the fully specified broker, tenant, and queue or topic in there. So that should be the only attribute you need to look at to uniquely identify the destination.
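A minimal sketch of what such a "fully specified" destination attribute could look like, assuming a hypothetical `/`-separated layout; neither the separator nor the function name comes from the discussion, and as noted below, the natural layout may differ per messaging system.

```python
def destination_name(broker, queue_or_topic, tenant=None):
    """Compose one searchable string that uniquely identifies a destination.

    Illustrative only: the '/' separator and the broker/tenant/queue order
    are assumptions, not an agreed convention.
    """
    parts = [broker] + ([tenant] if tenant else []) + [queue_or_topic]
    return "/".join(parts)
```

Keeping the value a single flat string is what makes it easy to search and index, at the cost of each system needing its own formatting rules.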
F
That's the only thing I'm wondering about, because not everything is necessarily a URL. With some APIs, you connect to a broker, you do some authentication, and then you just publish to a particular queue on that broker. Maybe there's a concept like, for our broker, something called a VPN, which is an isolated instance within that address; but URLs aren't provided through the native messaging API. They are if you're using a REST interface, but I think it depends on the messaging system.
F
I'm not sure whether "path" is a natural term or not, but if we leave it at "something that uniquely specifies the destination on a broker", then whatever is natural for a given messaging system can be used there.
E
Yeah, I also agree that we could keep it as a string, just because it's easy to search and index, and I think that is actually a requirement on this attribute, that you can search and index it. But then, if other brokers want a more structured approach, I think that can be defined in a broker-specific way.
E
Liudmila, can I ask you to repeat your question? I think she didn't hear it. I'm sorry, I'm having a...
C
...problem hearing Liudmila. Okay, for example: how do we imagine RabbitMQ populating this value? Would it be the exchange, then a space, then the routing key, with the user having to understand by themselves that they are separated by a space, that the first one is the exchange and the second one is the routing key?
G
Perhaps, yeah. We should come up with some format and then come back and evaluate. I think RabbitMQ is very special in this sense, with the routing key, so this would be a hard one compared to others.
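The space-separated format floated for RabbitMQ could be sketched like this. Both helper names and the separator convention are hypothetical, and the caveat in the comment is exactly the concern raised: the consumer of the attribute must know the convention to split the value back apart.

```python
def format_rabbitmq_destination(exchange, routing_key):
    # Space-separated, as proposed in the discussion. This relies on the
    # exchange name containing no spaces; the reader of the attribute must
    # know that the first token is the exchange, the second the routing key.
    return f"{exchange} {routing_key}"

def parse_rabbitmq_destination(value):
    """Split a space-separated destination back into (exchange, routing_key)."""
    exchange, _, routing_key = value.partition(" ")
    return exchange, routing_key
```

A round trip through both helpers recovers the original pair, which is the minimum a broker-specific convention would need to guarantee.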
E
What I will take as an action item, and try to do before next week: Liudmila already listed a bunch of systems here. I will look into these systems and see how the destination name, or path, or however we call it, might look for each of them, or at least for some of those mentioned here. Then we will have a list or table of how this destination name might look, or what it might involve, for different brokers.
E
Then we can continue, or maybe close, this discussion. But I think we already have a partial agreement: we don't want to just have the queue or topic, we will have the tenant information captured here too. Then we will verify how that looks, and based on that we will decide.
E
I definitely think this field can have high cardinality. For the span name we said we must not have high cardinality, but for this attribute we want to be accurate, and that can involve high cardinality in some cases.
E
Okay then, I think we should maybe stop now, if everybody is fine with that, because we have 30 minutes left and maybe there are some questions. So I think we pause here; next time we can continue discussing based on this list, and we use the remaining 30 minutes for your presentation.
C
Maybe just a small question before this: you recently opened an OTEP, a small OTEP which is part of the big OTEP, and I wonder if you have plans to open more of these?
E
Definitely. It will basically be all in all three OTEPs. The first one is the context propagation one, which is already there. Currently I'm working on another one, for the tracing span structure, and I think that one will probably be the most controversial, because it will also involve the link discussion.
E
So that will be the second one, and then the third one will be the span and attribute names, which is what we are currently working on. So it will basically be those three. I hope I can get the tracing span structure OTEP up maybe during the next one or two weeks; for the attribute OTEP, I think we need to finish our discussions here first before we can get that up and going.
I
Sure, thanks. So, I'm Nires, from the AWS X-Ray team, and we have one more person, Alex, who is also on the AWS X-Ray team. We have some questions and need some suggestions from this group, specifically around links: how we should generate links, specifically in the case of the SQS-to-Lambda path. SQS is a managed queue solution that AWS provides, and Lambda is serverless compute; customers can configure Lambda to consume SQS messages and process them, all in a managed way.
I
So firstly, I want to give an overview of how the architecture looks internally inside SQS, and then we can go into the links: how we can generate them, and where we have to generate them. I can skip this if you folks already know about the X-Ray service.
I
X-Ray is a distributed tracing solution; it provides a managed service for that. As of now we provide our own instrumentation, but we are moving more towards the OTel specs, so we want to make sure that whatever OTel supports, X-Ray supports, and that the two are totally aligned. That's our long-term goal. Apart from the ingestion part, where we take in the OTel-spec-generated data, on the read path we have multiple ways customers can query data.
I
One is the service graph, which is just an aggregated view of all the traces in a particular time window. Trace summaries let customers slice and dice the traces on some of these attributes; they can say: give me the traces that resulted in a fault, or whose end-to-end duration is this much, or that passed through this node or this span. For that we have trace summaries. Trace details gives more detail about a particular request end to end, so we present that view in our trace details.
I
We also have arrows in our service graph and trace details. We generally show arrows, and we calculate those arrows from the parent-child relations: in the span we have the parent link, so whenever a span says "this is my parent", the parent will point an arrow to this child span internally.
I
Internally, we convert the span into a segment, so you might hear "span" and "segment" mixed up, sorry about that. So that is the overall architecture; now let's move to how SQS-to-Lambda works internally. We have the SQS service, and the SQS service, as of now, doesn't generate any span of its own. Then we have the SQS poller, an internal microservice which is not visible to the customer, but which actually consumes the messages from SQS and invokes the Lambda.
I
Customers can say: consume the first 10 or 20 messages; they can specify the batch size and the window for which it has to wait. So the poller accumulates all these messages and then invokes the Lambda frontend service. The Lambda frontend service doesn't look into the payload; it just manages the Lambda worker. The Lambda worker is the place where the customer code executes; that is where the function they write actually runs.
I
So the Lambda frontend service is a very thin service which takes the payload, finds out which instance it should run on, and puts the payload onto the worker, and that worker starts running the code. The SQS poller is an internal node; the Lambda frontend service and the Lambda worker are both visible to customers. They see two spans: the Lambda frontend service generates a span.
I
The Lambda worker also generates its own span. And if there is a retry, the SQS poller, this internal service, also does the retry; it handles the retries and everything for customers, so it is a kind of managed solution for customers. So, since we have to support this, what we are doing is enhancing the SQS poller. As it receives the messages from SQS, each message has a system property.
I
We call it the AWS tracing header, and the poller will inspect this header on all messages. One more thing it will do is create a new trace: it will start a new trace after receiving the old trace contexts.
I
So the Lambda frontend creates the span corresponding to this new child trace that we have created. And if there is a retry from the SQS poller, because for some reason the Lambda failed or the downstream service failed, the SQS poller will again start a new child trace, and before invoking the Lambda function it will repeat the same steps again: extracting the message contexts, creating a new child trace, and setting the sampling decision based on whether any message is sampled or not.
E
I have a short question here, sorry to interrupt. I'm not sure if you get into this later, but I'm wondering: do this Lambda frontend service and the Lambda worker always receive just a single message, or are there cases when they receive a batch of messages to process?
E
Okay, I see: you do batch the messages, but each single message might have a different context on it. And then, for the sampling decision, I guess you have some heuristic, because when you pass a batch, the batch might have some messages with a sampled context and others with a not-sampled context in them.
G
So when you receive a batch and you have multiple contexts, do you still somehow have a child span per message, or do you have one span with links to the message contexts?
I
That's a good question. We always create one new child trace, irrespective of whether one message is sampled or ten messages are sampled. However, we will generate the links, and that is, I think, the main topic of this discussion: where to generate the links, and how to tell customers that these 10 or 20 upstream messages are connected to this new trace.
I
So we are trying to use the OTel spec links, but there are multiple ways to generate them, so maybe I can go to the next slide. I think you asked the right question: how to generate this link. There are two locations where we can generate the links. One is at the SQS poller level; it knows these parent traces and this new trace, so it can link them together and send that information to the X-Ray service.
I
The other location is at the Lambda worker. At the Lambda worker, customers write their code, and when they receive the messages, the customer code also receives the same system attributes that the SQS poller saw; we propagate the trace context to the Lambda worker too. So customers can also write code to generate the link. We have these two options, and we want to go over them with you folks and understand which one is better.
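The two candidate locations can be sketched with plain dictionaries standing in for spans. None of this is an AWS or OTel API; it is only an illustration of where the link lists would live in each option.

```python
def links_at_poller(message_contexts, root_span):
    """Option 1: the poller links the new trace's root span to every
    upstream message context. Customer code does nothing."""
    root_span["links"] = list(message_contexts)
    return root_span

def links_at_worker(message_contexts, process_spans):
    """Option 2: code in the worker links each per-message processing
    span to the context of the message it actually handles."""
    for span, ctx in zip(process_spans, message_contexts):
        span["links"] = [ctx]
    return process_spans
```

In option 1 all sibling messages are discoverable from one span; in option 2 each processing span points only at its own message, which is what the end-to-end latency discussion below depends on.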
I
Both have pros and cons. Firstly, I'll go through some examples of how they will look. Suppose the SQS poller generates the link. In these diagrams, a dotted line is just a link and a solid line is a parent-child relationship, that is, when we have the parent field filled in the span; dotted lines are the link field. So if the SQS poller generates the link, it will look like this; this is the normal case.
I
The SQS poller link is saying: I'm pointing towards the Lambda frontend span.
I
In this case, customers are processing message one and message two inside the Lambda function, but the links were already generated at the SQS level, so they still point to the Lambda frontend service; they do not point to the process-message-one span or the process-message-two span.
I
So that is option one, if we generate the links on the SQS poller side. The other option is the Lambda worker generating the links. Then SQS will say: I am linked to the Lambda function directly. It's just a visualization; the span visualization will look like this, where solid lines are the parent relations and the dotted line is the link relation.
I
This is how it will look in the normal case. And if we look at the other case, where customers are doing a for loop: if the Lambda actually generates the link (sorry, this diagram is wrong), then the links will look like this. SQS will link directly to the process-message span, and in the batched cases they will point directly to their own processing spans.
G
Yeah, quick question on this: process-message-one and process-message-two, I assume these will be in the customer code and customers will generate them?
I
Yes. We can provide some instrumentation; we can actually update our OTel instrumentation so it can help customers generate these spans.
I
So both actually have some pros and cons, and I'm just trying to compare them; there could be more, but this is based on our understanding. Looking at the SQS poller links, one benefit is the poison-pill case, because Lambda runs in a very constrained environment with limited resources: memory, CPU, everything.
I
So if there is one message which can kill the Lambda worker in any way, it is possible that no links get generated, because the Lambda worker didn't execute properly. But the SQS poller is a managed service; it doesn't look into the message and doesn't execute any of that. So as long as there is no system issue in our service, it will provide the links in all cases. That's one benefit.
I
Whether it's two messages, three messages, or a thousand messages, you can find the list of all the siblings by looking at only that one span, because the Lambda frontend will say: I am linked to these parent traces. There is also a customer benefit: because we generate the links on the SQS poller side, customers don't need to implement any logic for that; the links are generated in a managed way. So those are the benefits of the SQS poller side.
I
On the Lambda worker side we also see benefits, two major ones. One is when we try to calculate end-to-end latency or other end-to-end attributes for a message. Sometimes customers ask: how much time did this message take end to end?
I
So in this case, suppose there were two messages: one was put into S3 and one was put into DynamoDB. A customer can ask for the end-to-end latency of message one. In that case, ideally, we should inspect the path from the client, API Gateway, SQS, then process-message-one, and S3; that is the path we should follow in this case. But what happens is that we don't know which path to follow unless we put some metadata here.
I
That is one benefit of linking at the Lambda worker. Another one, obviously: if they generate links at the Lambda worker, they have the flexibility to send those links to any vendor. Even though we have a plan to let the recipient spans, the SQS, API Gateway, and Lambda frontend spans, be sent to other vendors too, that functionality is not there as of now.
I
But if you generate the links at the Lambda worker level, customers can start doing all of that from now on. I have other options that we were considering, but before going into these dual links, I just want to pause here and see if there are questions.
G
A question from my side; I'm on Azure. The question is: how do you want the default experience to look, without customers doing any instrumentation?
I
There is an issue if we start doing this end-to-end latency, which is what we are thinking we might support in the future, where customers can filter the root traces based on end-to-end latency. In that case we want to calculate the end-to-end latency by following the exact path in which the message was executed.
I
Then the problem becomes: how do we know which links to follow to calculate the end-to-end latency? I don't see anything in the OTel specs that says, for end-to-end latency, follow this path or something like that. Both kinds of links look the same to us: from our processing perspective, links going to the Lambda frontend and links from the worker level look the same, until we actually put some metadata on them.
E
Yes, because it ties in pretty well with what we are working on. First, to give you a summary: I think you probably know this, but the existing semantic conventions, and the examples in them, provide some guidance for linking or parenting spans in traces, and those examples are actually pretty inconsistent and often misleading and confusing to people.
E
You see here this document we are working on about linking and parenting spans, and I actually think that this scenario here captures what you just talked about: we have a message producer, and we have batch push-based consumers.
E
We have a producer that publishes messages, and this publish context gets tied to the message; on the consumer side we have a deliver span here, which is linked to those publish producer spans. And I think the deliver span here could be this SQS service that you talked about.
E
That basically creates this one span for the whole batch and links it to the publish spans, and then, as children of this span, there are process spans, and those might be the work that runs in the Lambda.
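A rough data-model sketch of the scenario described: one deliver span per batch, linked to all publish contexts, with process spans as its children and no direct link back to the publishes. All names here are illustrative, not the OTel API.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Toy span: only the fields relevant to this discussion."""
    name: str
    parent: object = None            # parent Span, or None for a root
    links: list = field(default_factory=list)  # linked upstream contexts

def deliver_batch(publish_contexts):
    """One deliver span for the whole batch, linked to every publish
    context; process spans are children of the deliver span only."""
    deliver = Span("deliver", links=list(publish_contexts))
    processes = [Span("process m%d" % (i + 1), parent=deliver)
                 for i in range(len(publish_contexts))]
    return deliver, processes
```

Note that the process spans carry no links of their own, which is exactly why, in this model, a publish cannot be correlated to the process span that handled it.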
E
So this captures this case, and for now that is our suggestion; I think it aligns with the first solution that you presented. Here we just have the deliver span, representing the whole batch, linked to the publish spans, but we have no direct link from the process spans to the publish spans.
E
The main reason for this is that, in the current version of this working document, we make very limited assumptions about these process spans, because we cannot really require these process spans to be created. In some cases a message is delivered but not processed at all, and then no process span will exist.
E
Another reason is that the deliver span can be created without the customer having to do anything; it can be created in your delivery service, or in the SDKs that deliver messages. For creating the process span, however, the users are required to create a span themselves and do some work. That is why we said: okay, in terms of providing a solution that works with out-of-the-box instrumentation of SDKs, this is it.
E
But your point is a very good one, because for end-to-end latency, if you want to see this end-to-end latency, you basically need to be able to go from the beginning of this span to the very end of that span, and you need to know which is which.
E
Okay, publish-one actually refers to this process-m1. With this model you cannot really make this correlation, because you just know that this publish span belongs to this batch; you don't know which process span it actually corresponds to.
E
Another kind of challenge here is that you can also do batch processing. Maybe you have a single process span, but in that single process span you process multiple messages. That is also a use case we came up with, and it makes it misleading or problematic to require a direct correlation of a process span to a publish span.
I
Yeah, and maybe one more thing on batch processing: suppose there were two messages, both batched together and put into some storage, say S3. In that case we just need to follow one single path, because we don't have to decide which path to follow. But I agree on the other cases: maybe there were three messages, m1, m2, and m3, and they process m1 and m2 together as a batch but m3 independently. So I agree there are a lot of cases where customers can process these batch deliveries.
E
Yes. Another edge case we came up with, I think Amir brought this up, is that this often happens in real-world use cases. I think it is a very idealized picture to say: you receive a batch, you have one span per message, and each span processes one message. You might actually have multiple sequential operations that operate on the whole batch.
E
Maybe you have one span here that does some transformation on the whole batch, and then a second span that stores the whole batch in S3, in your case. Then you basically have two process spans working on the whole batch, and it will be hard to find out which of those two should be used to calculate the end-to-end latency.
E
You would need the links then: basically, maybe link both to all the publishes and then figure out which one ended latest. So there are lots of peculiarities there, and that is why, for now, in this model we went with the minimum feasible solution that can be applied to, not all, but maybe most messaging use cases. But yes, what this model does not cover is a good point that you bring up.
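Under that fallback, a minimal sketch of the end-to-end latency estimate would be "publish start until the latest linked span ends". This is an assumption about one plausible heuristic, not anything the conventions prescribe.

```python
def end_to_end_latency(publish_start_ts, linked_span_end_timestamps):
    """Estimate end-to-end latency for one message when several process
    spans cover the whole batch: take the latest end time among the spans
    linked back to the publishes. Timestamps are plain numbers (seconds)."""
    return max(linked_span_end_timestamps) - publish_start_ts
```

For the two-span example above (a transform span ending at t=5 and a store span ending at t=9, publish at t=1), this would report 8, even if the message in question was actually finished at t=5; that over-estimate is the peculiarity being discussed.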
E
And I'm not sure we can actually come up with a generic model that can be applied in a generic way to almost all messaging scenarios and that also allows calculating the end-to-end latency. Maybe that is something we need to think or brainstorm more about.
I
Yeah, makes sense. So, given we don't have it now, and maybe we can add end-to-end latency support in later phases: can we then assume that it is okay to start generating the links on the SQS poller side as of now? Will it be okay to launch with that particular capability, similar to what you are showing here?
E
I think the main point here is: you can always — and somebody here correct me if they have a different understanding — you can always add stuff on top of these semantic conventions that we're working on here. So I think what we are requiring here is a minimum: as a minimum, we require this delivery span to be present, and this delivery span to be linked to those publish spans.
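A minimal sketch of that requirement — a delivery span that links back to every publish span whose message it received. Plain-Python stand-ins are used for span contexts (not the real SDK types), and the header layout is illustrative:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class SpanContext:
    trace_id: str
    span_id: str

@dataclass
class Span:
    name: str
    context: SpanContext
    links: list = field(default_factory=list)  # SpanContexts of linked spans

def new_context() -> SpanContext:
    return SpanContext(trace_id=uuid.uuid4().hex, span_id=uuid.uuid4().hex[:16])

# Each producer publishes a message; its span context travels with the
# message (e.g. as a traceparent-style header).
messages = []
for i in range(3):
    publish = Span(f"publish msg-{i}", new_context())
    messages.append({"body": f"payload {i}",
                     "headers": {"traceparent": publish.context}})

# The consumer-side delivery span links back to every publish span —
# the minimum the draft conventions require.
deliver = Span("deliver batch", new_context(),
               links=[m["headers"]["traceparent"] for m in messages])
```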
E
E
No, that is not finalized yet — we are currently working on that. I showed this: that's the current semantic conventions document, which I think you probably know. This part is not currently there, and we are working on improving or replacing part of it with this document, which is still a work in progress.
E
I mean, interestingly — I opened this up here — there is an item that I think was submitted by anorex long ago, and it actually covers SQS and Lambda; he gives some kind of example here, and this item is still open. I'm not sure if he's actually talking about end-to-end latency here, but I think you don't need to open a new item — you can maybe just add to this one.
I
Thanks. And one more thing, about the implementation: the spans are being created at the Lambda worker side in this SQS-to-Lambda implementation. There is one package under OTel — an SQS-to-Lambda implementation — and it is creating the links at the Lambda worker. If, from the example you've shown, it seems like we shouldn't do that — at least the default shouldn't, even though customers are allowed to do it — can we change that in the OTel Java instrumentation code?
E
E
I don't fully understand the question — I don't fully understand the goals here. Also, we're over time. Do you have time to continue this discussion next week?
E
Because that's a very interesting question, and I think people have to drop now, but maybe we can continue next week so that everyone can participate. Maybe in the meantime you can add your questions or comments to this PR here that's open from anorex, because we are also tracking this: basically, we want to resolve this for version one of the stable semantic conventions that we're working on.
E
E
C
E
Yes, so I unfortunately have to go, but whoever has time and wants to continue now can definitely feel free to stay on the call and continue. It would definitely be great if, whatever conclusions or additional questions you come up with, you add them to this issue that anorex posted, and then we can continue discussing based on that next week. So to everybody who has to drop off, including me, I say thanks — thanks a lot for the discussions today.
E
It was great, and see you next week — everybody who can come next week; I know a few people cannot make it, otherwise see you in two weeks. Okay, and thanks again for sharing the AWS X-Ray stuff. That was really great.
I
C
C
I
I
Okay, so this is what happens internally — and I think this is visible to customers also. When customers set up, say, a Lambda function, the Lambda service — the serverless service — is split into two parts. The Lambda front-end service is just a very thin layer; that's the front-end. It takes the request from the customer — the customer says "invoke this Lambda" — so this one will take the request.
I
It will do the authorization, throttling and everything, and then it will forward that request to a particular instance, because the Lambda worker is single-tenant, meaning there is already a reserved instance — the Lambda worker — for that particular customer. But the Lambda front-end service is multi-tenant: it takes requests from everyone and, based on its own metadata, it finds out which Lambda instance it should forward each request to.
C
I
I
Yeah, so the SQS poller is an internal service. What happens is that, in this managed setup, AWS needs to keep fetching new messages from SQS and keep invoking the Lambda front-end service. The SQS poller does that internally, so it's not visible to customers — there's no span for it; the Lambda front-end service generates the span. The SQS poller just keeps polling messages from SQS and keeps invoking the Lambda front-end service on the customer's behalf.
I
It's not instrumented, so from the customer's perspective the SQS poller and the Lambda front-end service are kind of the same thing. The Lambda front-end service generates its own span, so when customers look from outside, they see that the Lambda front-end is actually consuming these messages and invoking their Lambda worker.
C
C
I
I
One has the title "AWS Lambda" — where the red arrow is going — and the second one says "AWS Lambda function", so that's how they are two different spans. This is one example of a customer's trace map, so you can see there are two spans: the first one for the Lambda front end, and the second one for the Lambda worker.
C
I
C
So, by default, whenever someone invokes a Lambda from SQS, you generate two spans, right? Did I understand you correctly? Yeah.
C
I see — and then the additional spans created by the Lambda itself, like by the user code, they should be, like...
I
Yes. As of now, the Lambda front-end service generates a span, so if customers directly invoke the Lambda, the first span they will get is from the Lambda front-end service, another span they will get by default from the Lambda worker service, and they will be connected. And if they call a downstream service — suppose they're calling S3 — then there will be a link to S3.
C
Then the Lambda is handling a batch, right? So it can take ten messages, right? Yes. But you still generate two spans — for each invocation of the Lambda you generate two spans, regardless of whether it's messaging or something else, right? Yeah. And then the user code is the one responsible for iterating over the messages and creating additional spans according to OpenTelemetry, right?
C
Okay, okay, so now I understand — I didn't understand it before, so it was hard for me to follow what you were referring to. And now your question, if I understand correctly, is: those two yellow spans are generated by you, right — by AWS — and the four circles on the right should be generated by the user, right?
I
Right. So in this case, what's happening is: the first two — the Lambda front end and the Lambda function — those spans are generated by default. But in this case the customer said: no, no, I want more spans, because I'm processing each message on my own, so I want to generate my own span for each message. So they can start generating "processing message one", "processing message two" — their own spans. They're allowed to do it.
C
Yeah, so basically you can't control what the users will do; you can only recommend what to do, right? Users create whatever links they want — they can follow the semantic conventions, or not.
I
Yeah, so we have questions around the default experience. If we generate the links at the SQS poller level, like this one is doing, customers can still generate the links at the Lambda worker level — we cannot stop them. If customers do that, then the linking will look like this: there will be one link from SQS going to the Lambda front-end service, and another link going directly to the messages, because that other link is from the customer's code.
I
C
C
This is why I think it's very important to record links on the Lambda poller, right? Yeah — so, at least in my opinion, it's very important to record those links, like the left drawing that you were presenting. But I also think it's very important to generate the links on the user's side, because you have a span that refers to a message, but you don't know which message it is if you don't record the link, right? So it would be problematic.
C
My personal opinion is that you should record both links, and I think the specification — because we're working on a new version of the specification — should add link attributes, so that for each link you can specify what it represents. Then you know that the two links point to the same message, but you also know what stage of the processing they capture.
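As a sketch of that idea: two links pointing at the same publish span, told apart by an attribute. The key `messaging.link.stage` is purely hypothetical — the conventions would have to define the real attribute name:

```python
# The shared target: the publish span's context.
publish_ctx = {"trace_id": "abc", "span_id": "def"}

# One link recorded by the poller-side delivery span, one by the
# user's per-message processing span — same target, different stage.
deliver_link = {"context": publish_ctx,
                "attributes": {"messaging.link.stage": "deliver"}}
process_link = {"context": publish_ctx,
                "attributes": {"messaging.link.stage": "process"}}

# A backend can now distinguish the links even though they share a target.
stages = {link["attributes"]["messaging.link.stage"]
          for link in (deliver_link, process_link)}
```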
C
I
So, if I understand correctly, you're saying we should start generating the links at the SQS poller level, and customers should also be allowed to generate the links from this "processing message" span if they are processing one by one — we should be allowed to generate the links in that way as well. Then there are two questions. First, for end-to-end latency, which one to follow — I think you mentioned that's something for the future.
C
I
Right, yeah, makes sense — I think that will definitely solve the problem. One more concern I have is on this one: suppose we actually use these links to generate the arrows too. If you see all these arrows in our service graph — or you could say this is the trace graph — we will use those links to generate the arrows.
I
So then our service map — or trace map — will look like this: there will be some arrows going to the front-end service and some arrows going to this "processing message two". Yeah, I think maybe that is also something we can add in the future: if there is already an arrow going to "processing message two", we can —
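The effect on a link-derived service graph can be sketched as follows: each link becomes an arrow, so an extra user-side link back to SQS produces an extra edge. The service names are illustrative:

```python
# Spans, each recording which services their links point back to.
spans = [
    {"service": "lambda-frontend", "links_to": ["sqs"]},
    {"service": "lambda-function", "links_to": []},
    {"service": "processing-message-2", "links_to": ["sqs"]},  # user-side link
]

# One directed edge per link: (link target's service -> this span's service).
edges = sorted({(src, span["service"])
                for span in spans
                for src in span["links_to"]})
```

With both links recorded, SQS fans out to two nodes — which is the "some arrows going to the front-end service, some arrows going to processing message two" picture described above.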
I
C
C
C
I
Right, sorry — it's just a view thing: once customers start to see this trace end to end, if we are generating this service graph view...
I
C
But I think it depends on the use case, right? Some users don't care what happens in AWS — they just want to see their code, because they're debugging an issue in their code; they don't care about the infrastructure. But other users are interested in seeing the infrastructure and the operations, right?
C
So I think the ideal solution, in my opinion, is to be able to show both of them and let the user browse the trace and focus on what's interesting to them, rather than choosing one over the other. That's my opinion — I'm not sure exactly how the UI would look, user-experience-wise — but does it make sense?
I
Yeah, yeah, I think that makes sense — thank you. I think we can also get help from our UX person, but yeah, that makes sense, because, as you're saying, it depends on the customers: some might want it, some might not. So maybe we can try to use some attributes that we have in the link, where customers can define: okay, this is my use...
I
D
B
C
One last question: is it possible for a vendor to get these spans as well — the spans that you generate?
I
Yeah, we are actually working on that one, because we have a lot of these AWS managed services — you know, API Gateway, the Lambda front-end service — all these are managed services, and they directly send their traces to X-Ray. So we are working on a solution — again, we don't have it yet, but we are working on it — where customers can define the destination they want, so the data is not sent only to X-Ray.
I
C
Let's see — okay, cool. I work for a tracing vendor, and we'd be very interested in also evaluating these use cases in our system to make sure they look good — specifically, if you add those two links, you suddenly have to make sure the UI supports it. I would be very interested in adding it to our system as well.
I
C
So it's very interesting to me to have another view on this issue. If you want to share and set up a meeting, that would be great.
I
Sure. What's your — actually, I think you're...
C
C
C
I
C
I
C
Yeah, yeah, I think it's automatically recorded and uploaded to YouTube.
C
So, okay — I think there's no separation; it's just one list of videos, so you can't tell whether a recording is from one SIG or the other: you have to open it to check.