From YouTube: 2023-02-08 meeting
Description
Instrumentation: Messaging
A: Right — I can get us started, unless you guys have something to cover.
C: I wrote down a few topics. We can start by discussing yesterday's meeting, I guess.
A: Yeah, so the main thing from yesterday was that we're looking for more people to do reviews on the specification. So if you guys have interest in working as spec reviewers, I would say make it known in the Slack.
A: Obviously, you'd be able to help with the FaaS side. So I submitted the PR — I think you guys reviewed it already — to remove the AWS X-Ray environment variable propagation interference. I think that's mostly ready to go; we're just waiting for approvers to approve it so we can get it merged.
C: Yeah, okay, it looks good. So I have a question about the specification — I went through it, though not all of it. Is there a link somewhere to the Google Docs sheet with the discussion justifying the changes? Do you have it?
A: Yeah, I've got that, just a sec. I believe you're referring to this one.
A: Which one — this one? Okay, so this is the spreadsheet we've all kind of been collaborating on. Here we've got AWS's items, things that are fairly commonly available in the context, so these are attributes we can use in instrumentation. Azure's is very limited, so we're trying to get Azure folks to help us identify some of the things we want to share.
A: Google has done so, and we have a little bit more to work on here, so I think that when we're ready to kick off Google instrumentation, similar to Lambda, we can use this as a resource for that. But the main thing we were trying to do is take these terms, for example, that are found in the various semantic-conventions documents, and make sure that we're aligned on those.
A: And then this is what we want to rename it to, and I've bolded the ones that are different. So, for example, faas.execution — this is, for example, the AWS request ID. I don't feel like "execution" is very descriptive. This was the biggest one I had an issue with and wanted to change, so we're going to go with invocation ID. That aligns better; it's similar to what Azure is using. Google uses event ID, AWS uses request ID.
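The rename discussed above can be sketched as a tiny mapping — only the shared attribute name (`faas.invocation_id`) and the per-provider source names come from the discussion; the helper itself is a hypothetical illustration:

```python
# Each provider's native name for the same identifier, per the
# spreadsheet discussion; all of them map onto one shared attribute.
NATIVE_ID_NAME = {
    "aws": "request ID",
    "azure": "invocation ID",
    "gcp": "event ID",
}

def invocation_attributes(provider: str, native_id: str) -> dict:
    """Hypothetical helper: record a provider's native identifier
    under the renamed shared attribute, faas.invocation_id."""
    if provider not in NATIVE_ID_NAME:
        raise ValueError(f"unknown provider: {provider}")
    return {"faas.invocation_id": native_id}
```

For AWS Lambda, the value would come from the handler's `context.aws_request_id`.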
A: Version, trigger, instance, max memory — those are all the same. The main difference with memory is that currently it was in megabytes, because that's what AWS reports — the memory limit in megabytes — but we changed it just to have more consistent units across the different providers.
A: So the ID — the main thing was that the ID was the ARN for Lambda, right, and the thought here is that the ARN is already just composed of the various attributes that are already set as semantic conventions, either here or in the cloud spec. Yeah — it felt like there was a lot of duplicate information there.
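The duplication point can be made concrete: a Lambda function ARN is reconstructable from attributes the conventions already define, so carrying it as its own attribute repeats information. A minimal sketch, assuming the standard `cloud.*`/`faas.*` attribute names:

```python
def lambda_arn(resource: dict) -> str:
    """Rebuild a Lambda function ARN from resource attributes that
    the cloud and FaaS semantic conventions already cover -- the
    reason a dedicated ARN attribute was judged redundant."""
    return "arn:aws:lambda:{region}:{account}:function:{name}".format(
        region=resource["cloud.region"],
        account=resource["cloud.account.id"],
        name=resource["faas.name"],
    )
```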
A: I don't think instrumentation currently does this, but this was already in the specification.
C: Yeah, I was confused between the two of them, but I think it makes sense. Okay — and about the AWS tags?
C: So I think it could be great to separate out the AWS account and the region instead of using an ARN.
A: No — so down here, in the meeting we discussed potentially adding the ARN to this specification, but then we decided that all of that information was already duplicated across all of these, so it seemed unnecessary.
C: Yeah — we also had this discussion back at Epsagon, and I think it was very useful for customers to be able to search by their specific region or their account. So it's usually more useful, I think. But okay, sounds good.
A: The main use case, I think, for having the ARN as a single thing would be if you wanted to copy and paste it elsewhere.
A: The other thing is, we said that it's much easier to add than to remove. So if there's enough user demand, Amazon — or somebody — can put forth a proposal to add the ARN back in somewhere.
C: Yeah, okay. I was also looking at the specification, and there are two things I think maybe we can also consider. One is that there are also specifications for the semantics of invoking functions — maybe I can open it in a sec. So, yeah.
C: Yeah, so in this case I was thinking whether it should just be the same resource attributes. One of the reasons is that, for example, the provider and region are relevant also for other cloud operations. If I publish an SNS message, for example, in that case it's probably not going to be an "invoked" region, I think.
A: So I don't have a strong opinion on this — I would suggest bringing it up in the Slack channel — but my thought is that there's some benefit to having a different name, at least for the region. So, for example, say you have a Lambda function running in the EU region, and you invoke a Lambda function, by accident, in the US region.
A: And if you have those as the same name, then one's gonna overwrite the other. Yeah.
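A quick sketch of that cross-region example: the caller's region lives on its resource, the callee's on the client span under the separate `faas.invoked_*` names, so neither value overwrites the other (the attribute values here are illustrative):

```python
# Resource of the calling Lambda (set once, at startup).
caller_resource = {
    "cloud.provider": "aws",
    "cloud.region": "eu-west-1",         # where the caller runs
}

# Attributes on the outgoing-invoke client span.
invoke_span_attributes = {
    "faas.invoked_name": "target-fn",
    "faas.invoked_provider": "aws",
    "faas.invoked_region": "us-east-1",  # where the callee runs
}

# Because the names differ, combining span and resource keeps both
# regions; reusing "cloud.region" for the callee would clobber one.
merged = {**caller_resource, **invoke_span_attributes}
```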
C: Yeah, so that's also what I was thinking — that it's maybe good to keep it separate. But this issue is relevant for, like, every SDK operation, probably all cloud operations, so I think we should have a standard.
C: So if we want a separate name, okay — but it still probably should not be unique to FaaS. Or are we okay that the same attribute will be on both the span and the resource, and then the backends will need to deal with that? But yeah, it's a good example, I think.
C: Yeah, so I know that at Epsagon — and I'm pretty sure in other SDKs as well — the trigger was a separate span, like a parent span for the span generated for the request. So you have a parent HTTP span, or a parent SQS span, for example. And here the specification says that everything goes on one span.
A: Okay — I know messaging has gotten a lot more heavy-handed in their specs, specifying the way spans should be structured and which spans should be created, but I don't think that's the case in this semantic convention. Am I wrong?
C: So actually, I don't know, but I think it should be stated either way — even if you can do either, it should be written down. And I know, for example, I saw there's an NPM package in JavaScript that adds the HTTP request trigger information, and I think it's added on a separate span. So I think it should be — at least for Lambda, for example, or for each different environment — standard across all the runtimes.
A: So one other thing to consider, though, is that, like I said, the messaging SIG is also working on updating their specification, and I feel like there would be potential overlap or conflict if we were to try to do the same — because, for example, I think a lot of the messaging SIG's designs and proposals overlap very heavily with FaaS, specifically around cases like SQS message listeners or SNS, stuff like that.
C: Yeah, definitely. But still, that's just one kind of trigger, so I'm not sure it should block us on the other kinds. Maybe for this trigger it will be a little different — but I understand.
A: I mean, I would suggest — I think there's a legitimate argument you're presenting here that maybe we should be specifying the structure a little bit more. For example, one might say that, hey, we need to define what the span created by the Lambda extension looks like versus what's being created by the application's instrumentation itself, because I don't think that's addressed in this specification either.
A: I guess I don't have a specific point — just saying that I think maybe you're right that we should specify the expected structure a little bit more.
A: What that is, I don't know specifically, but let's take note of it in the notes, and we can bring it up in the next SIG meeting — and you can also mention it in Slack if you want to carry that conversation on there. All right, let's see.
C: So I guess it's about the specifying. Yeah, I think it's about invoked — oh.
A: Spec assessment — okay, so have you had a chance to look at the spec-assessment updates?
C: Yeah, I took a look at the Python PR. I don't know if you've looked at it — it looks okay, I don't know.
A: Okay, feel free to like and subscribe, yeah.
A: And then — discussing native support for Lambda.
C: Yeah, there was a long discussion about the collector layer. There were very good reasons people gave there about why it could be useful, now or in the future, for users. But I do think that maybe we should decide whether we also want native support for installing Lambda instrumentation without a layer, by exporting directly from the Lambda handler. It's true that you can install the instrumentation now, but I don't think it's quite clear how to do that for, like, average users. And also, I know there's some stuff that's added at the layer level — for example resource attributes, at least in Python.
C: They're extracted from environment variables in the boot scripts, right? Yeah. So that's something we'd probably want to change if we go in that direction as well. And I just know from our existing users that having the easiest way to install the instrumentation could be very useful in many cases, and I think that adding a layer — and also needing to compile it yourself — can just add some more moving parts that aren't always necessary.
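What the layer's boot script does for resource attributes can be approximated by reading the environment variables the Lambda runtime always sets. The helper below is an illustration of that idea, not the layer's actual code, and it folds in the megabytes-to-bytes conversion discussed earlier:

```python
import os

def lambda_resource_attributes(env=None) -> dict:
    """Derive semantic-convention resource attributes from the
    environment variables the AWS Lambda runtime provides."""
    env = os.environ if env is None else env
    attrs = {
        "cloud.provider": "aws",
        "cloud.region": env.get("AWS_REGION"),
        "faas.name": env.get("AWS_LAMBDA_FUNCTION_NAME"),
        "faas.version": env.get("AWS_LAMBDA_FUNCTION_VERSION"),
    }
    memory_mb = env.get("AWS_LAMBDA_FUNCTION_MEMORY_SIZE")
    if memory_mb:
        # Lambda reports the limit in megabytes; convert once so
        # units stay consistent across providers.
        attrs["faas.max_memory"] = int(memory_mb) * 1024 * 1024
    return attrs
```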
A: So I get the concern about how the current implementation requires blocking, right — the Lambda itself is blocked until each span is reported all the way to the backend, whatever backend that is, and depending on, you know, network traffic or whatever, that can be a relatively long time, and you're paying for the Lambda during that whole cycle, which is unfortunate. Having the architecture where you just send it to the collector and then return certainly sounds appealing, but there's certain — there's…
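The blocking described here comes from flushing telemetry inside the invocation: the handler cannot return until the exporter has delivered everything. A minimal sketch of that pattern — `flush` stands in for whatever the SDK provides (with the OTel SDK it would typically be the tracer provider's `force_flush`):

```python
import functools

def flush_before_return(flush):
    """Wrap a Lambda handler so telemetry is flushed before the
    response goes back. Whatever time `flush` spends -- e.g. a
    network round-trip to the collector or backend -- is billed
    invocation time, which is the cost being discussed."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(event, context):
            try:
                return handler(event, context)
            finally:
                flush()  # blocks until the spans are exported
        return wrapper
    return decorator
```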
A: So I was thinking about that, and the way I would probably try to solve this — if I had my own application or my own company and I was trying to optimize the Lambda invocation time — is I would probably try to have the collector, instead of reporting to a backend outside of my network, put it into a queue of some sort that…
A: …is now outside of the Lambda's invocation context, and I can then take that from a different system, collect it, and report it asynchronously.
A: Now, what that looks like in terms of processing — that might be expensive, because the collector's span data would probably exceed the size allowed for SQS messages fairly easily, right.
A: So the other option would be to have the collector write the data to an SQS bucket — sorry, an S3 bucket — that another Lambda could listen to, collect the S3 data from, and report it that way, instead of putting an SQS queue in the middle.
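The S3 variant might look roughly like this — the helper name and key layout are made up for illustration, and the client is injected so the same code works against boto3 (`boto3.client("s3")`) or a stub:

```python
import json
import uuid

def spill_spans_to_s3(spans, bucket, s3_client, prefix="otel/spans/"):
    """Write a finished-span batch to S3 so a separate consumer
    (e.g. another Lambda triggered by the bucket) can forward it to
    the backend asynchronously, outside the invocation. `s3_client`
    needs a boto3-style put_object(Bucket=, Key=, Body=) method."""
    key = f"{prefix}{uuid.uuid4()}.json"
    s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(spans).encode("utf-8"),
    )
    return key
```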
C: So the way I see it, it doesn't necessarily matter whether you export the data to SQS, S3, some backend, or another collector that you run outside of the Lambda — it's just another service that has to synchronously acknowledge that it got the data so you can terminate your invocation, right.
A: But I guess that's very similar to, you know, a Lambda writing logs, right.
A: Anyway, the reason I was saying write to, like, S3 or SQS, for example, is because that's guaranteed to operate within the execution's — like, the same network region.
C: Yeah, I can see that it could be a good solution for users under some requirements. I'm just not sure how different it is from running a collector in AWS — for example in EKS or ECS — and just making sure to do it in the same region.
A: The reason I would say that's a little bit more dangerous — unless that collector is running outside of Lambda — is that, you know, that HTTP request is potentially blocking, right, the reporting.
A: So what I'm saying is: is that separate collector now also blocking, or does it let the data get reported asynchronously?
C: Yeah, and that's another important point — at least guiding users to do that, to add an asynchronous processor exporting to this kind of collector.
C: Yeah, okay. So I think maybe we should discuss it with the rest of the SIG, and it could be good, because I think that if we want to do this, it could be a good point to do it now — and not, you know, in a few months or more, when it will probably mean more work has to be done after we've invested in the layer and in the instrumentation.
A: Yeah, I mean, maybe the better option is, instead of putting a full collector into the Lambda, to have a stripped-down version of the collector that is mainly just collecting all the resource data and then sending to a separate collector elsewhere.
C: Yeah, I just don't think that for collecting resource data you need a collector. I don't see how different it is from running it inside the Lambda — maybe besides needing to code it in the different runtimes, but it should be fairly small. So I guess, if you want to do something like…
A: All of the resource-collection jobs that the collector currently has should be pushed into the application and the language-specific instrumentation.
C: Yeah — I mean, to make sure we're talking about the same thing: it's just parsing environment variables, right?
C: So I'm not sure that I do either. But from what I understand, there's an option to get data from the Lambda's Telemetry API — I don't think it's used by default, though, and I'm not sure exactly what it is used for. But the possibility is there. For the most simple use case, I think you can just do it in the Lambda code, and for more advanced users…

C: …they should have this option, and that's fine. But yeah, I just think that having both of them could be useful.
A: Because, for example, going back to what Anthony was saying, some companies don't want to have any — they want to be pure Lambda, right. They want to be able to auto-scale as much as they can, and I think that makes a lot of sense. So maybe it would be interesting to provide a collector instance that can run as a Lambda.
C: Yeah — and also, for the long term, I think the best solution would be if there were a possibility from AWS to make sure you don't lose data without exporting it right at the end of the handler.
A: I'm not sure I necessarily agree with that, because, you know, it's a trade-off. Writing documentation to support a Lambda layer, with all of your deployment logic in there, seems a lot easier than: okay, go and deploy this collector instance to an EC2, collect the information, configure it over here, make sure it's in the same region — and if you're deploying across different regions, you have to kind of go through and do this over and over again. Yeah.
C: I don't think it's one or the other — it's not either/or — because if you run the collector as a layer and your backend, whatever it is, is running in another region, you will still pay the extra time. So if you want to save — to have better latency — you probably should run a collector outside the Lambda anyway. Okay.
A: I just thought of an idea. So I believe the issue here is that the collector will get suspended after the Lambda execution — when it's going to shut down, right.
A: I believe somebody was saying — I don't remember who — that Lambda is supposed to send out a suspension signal.
A: I wonder if it can use that signal to — instead of reporting and blocking and waiting for the data to be sent out of the network — use those 300 milliseconds to write the data to S3, and then the next time it starts up, it looks to see whether there's data in that S3 bucket.
A: Or a separate process that looks in that S3 bucket, yeah. But the default operation is to send it asynchronously until it gets that suspension signal, and then that's when it says: okay, I'll just dump everything to S3.
C: Yeah, I agree that it could be helpful to do something like that. I mean, we should probably explore it more and try it — yeah, using S3 or another destination. It really depends on the performance, right: if S3 gives better performance, it could be better than the collector, or going directly to the backend.
A: Or it could write it to a collector, right, yeah. That has the benefit of not overloading that collector for most of the use cases, right — so the default use case is for each Lambda's collector to send it itself, and then the fallback is sending it to, like, an EC2 backup.
C: It works — I'm just not sure, if we do have this option, whether users should be encouraged to use only the layer version, because I'm not sure it will mean you can completely export the traces asynchronously. But if it does, that's…
A: Good. I guess one question is: how long would it take, under a production load, to send trace data from a Lambda to a collector running on an EC2 instance?
C: Yeah, I know that at Epsagon we had a usual response rate of — I don't know, I think it was a few hundred down to tens of milliseconds.
C: But that's the average case — we should also look at the percentiles. But yeah, it sounds interesting to check, and I'll also try to see if we can do this kind of PoC on our team and see how much it helps — and, yeah, like I said, whether we can count on it as a better solution.
A: Yeah, if you guys want to take that on as, like, a research topic, I totally encourage that. I think this is a useful area to explore, and the other thing I like about this idea is that the default could be completely blocking, right.
A: So the collector could block in just the standard deployment, and you say: hey, if you want to optimize this, go deploy this collector on a separate EC2 instance, configure it in your Lambda's layer collector, and then it doesn't have to block anymore — it can just detect that, oh, I've got a proxy Lambda…
A: …that I can send to as a backup, and change the logic in the Lambda collector. Yeah — because it seems like a much easier documentation flow, or user flow, than what I've been proposing with a separate S3 bucket or whatnot.
C: So I think that's the next point — we think it could be much more efficient if we could merge those things to have a single one.
C: So I was thinking, if we can find a time that's a bit earlier than now, it could work for us. I'll post the proposals in the Slack channel, but I'm happy to hear your opinion.
C: Both, yeah — even if we can do it, like, two hours earlier, it could work much, much better for us, and then it would be easier to discuss everything.
A: Yeah, I agree. I can put that down here: can we find an earlier hour.
C: So, ideally, it's either Monday or Wednesday.
A: 8 p.m. — okay. And the current SIG starts at 12 p.m. So you're saying if we can move it up two hours, that would be best for you? Yeah.
C: And also, like, a Tuesday or Thursday could work, but it's less ideal for us — still better, I think, than what we currently have. Okay.
A: I would recommend writing down some recommended — some requested time slots, and we'll see if we can change the meeting. Yeah, I don't have a strong opinion. So, okay.