From YouTube: 2023-02-01 meeting
Description
Instrumentation: Messaging
B
Yeah, I think at some point they maybe changed the Zoom links. Do you have a calendar invite from that you copied over at some point?
C
Yeah, it was actually an internal invite to begin with, so maybe it was changed.
B
Yeah, so I would suggest updating that link, then.
B
Tosin, I don't think we've met. I'm Tyler, and I'm at Lightstep. Where are you at, Tosin? I'm based in Nigeria. Nigeria, cool. What company do you work for? IBM. Nice, okay, yeah. So if you have a link to the document, feel free to add your info, add your name. But anyway.
B
So yesterday we had a meeting where we had a customer from Northwestern Mutual that is using OpenTelemetry with AWS.
B
They also use OpenFaaS. Has anyone heard of this before? Anyone familiar with this?
C
Yeah, I've heard of it, but I haven't used it myself.
B
The customer uses Lambdas to generate PDFs, and so they see a lot of overhead with the current OpenTelemetry instrumentation. They weren't clear on whether it was the Collector or the auto-instrumentation that was causing the cold start, but I think it was Anthony that said it's very likely the instrumentation, because, according to his testing, the Collector doesn't seem to cause a significant amount of overhead.
B
So, okay, but yeah, it sounds like it's a fairly complicated Node.js app, and so the auto-instrumentation for that is likely the culprit.
B
So the big takeaway that they had is that when they were trying to onboard using OpenTelemetry with Lambda, there wasn't really a good set of documentation that outlines the best practices for how to do so.
B
And so he was asking us to make sure that we define, or publish, better documentation.
C
Yeah, I think I will just also watch the recording, but I was wondering about the auto-instrumentation: do we know about similar cases with containers, where it adds this boot time?
B
The cold start issue? Yeah, I'm personally not a JavaScript guy, so I can't really speak to that. I do know that Java's auto-instrumentation is known to increase startup time. Yeah, there are always customers that request ways to improve that, but it's a very long-tail situation.
B
I think he probably worked around that. I think he was saying he used individual packages to avoid the auto-instrumentation perf issues. I don't know exactly what that means or what that looks like in JavaScript, but it sounds like he was able to work around the issue.
B
So yeah, that was the customer interview. I think you guys were the ones that brought this one up.
C
If somebody else is joining... okay.
C
So I just put this on the screen so it's in front of us. Would you like the window smaller? Yeah, sure. Is that better? Yeah, okay. So I was getting into the project about a week or two ago and wanted to better understand some of the design points here, and I wanted to discuss how the exporting of the telemetry works with the Collector, and to make sure I understand it right.
C
And maybe there are some points here that we can discuss improving, so I'll just go right to the point. So, as we understand it, the current system has a version of the OpenTelemetry Collector that is compiled specifically for running here in the Lambda layer, and then we have the OpenTelemetry SDK, and it uses forceFlush at the end of the Lambda handler to make sure that the data is exported, as I understand it.
C
Currently, forceFlush sends the data to the Collector, and then, synchronously, we have the Collector exporting it to whatever is configured, X-Ray or anything else. And so, if I understand right, as it says here, everything is in the same thread, so the Lambda code will not terminate until the data is flushed to the backend, unless there is some timeout. So, first of all, I just want to make sure that I understand it correctly.
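The flow C is describing can be sketched as a toy pipeline: the SDK buffers spans, and forceFlush at the end of the handler pushes them synchronously through the (local) collector to the backend before the handler returns. All names below are illustrative stand-ins, not the real OpenTelemetry API.

```python
# Toy model: handler -> SDK buffer -> forceFlush -> collector -> backend,
# all on the same thread, so the handler cannot return early.

class ToyBackend:
    def __init__(self):
        self.received = []

    def ingest(self, spans):
        self.received.extend(spans)

class ToyCollector:
    def __init__(self, backend):
        self.backend = backend

    def export(self, spans):
        # In the Lambda layer this hop is synchronous too.
        self.backend.ingest(spans)

class ToyTracer:
    def __init__(self, collector):
        self.collector = collector
        self.buffer = []

    def record_span(self, name):
        self.buffer.append(name)

    def force_flush(self):
        # Blocks until the collector (and backend) have the data.
        self.collector.export(self.buffer)
        self.buffer.clear()

backend = ToyBackend()
tracer = ToyTracer(ToyCollector(backend))

def handler(event):
    tracer.record_span("handle-request")
    result = {"status": 200}
    # Without this flush, the spans would still be sitting in the
    # buffer when the Lambda environment is frozen after returning.
    tracer.force_flush()
    return result

handler({})
```

By the time `handler` returns, nothing is left buffered, which is the property the synchronous design buys at the cost of invocation latency.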
C
Okay, so in that case I wasn't sure why we need a Collector at all. Because how different is it if the flushing was directly to the backend, if it supports the export, like an exporter for the SDK? Or, otherwise, to another OpenTelemetry Collector that can run anywhere else, and then it will push it with a custom exporter to wherever needed.
B
So my thought on that is, there's a big difference between sending those traces to a local collector, which should be a relatively fast operation, and trying to send those to a remote system, where the delay in sending those traces, or whatever data, can be a lot more expensive, especially if you have, you know, network issues.
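That split is the usual reason the layer's SDK is pointed at the collector over loopback and the collector owns the remote hop. A sketch using the standard OTLP exporter environment variable (the localhost port shown is the common OTLP/HTTP default and may differ per setup):

```ini
# Fast hop: SDK -> collector in the same Lambda sandbox.
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

# The slow, failure-prone hop to the remote backend (X-Ray, an OTLP
# endpoint, etc.) is then the collector's problem, configured in the
# collector's own exporter settings.
```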
B
I mean... oh, I see what you mean. So you're saying that the Lambda won't actually return until the collector has also reported. Is that what you're saying? Yep.
B
That would surprise me, actually. I would expect the Lambda to send the traces and then return as soon as that is done, and then the Collector would receive those traces, allow it to return, then send the traces to whatever.
C
Yeah, so I think it is not, and I think the logic behind it is that after the Lambda handler finishes, the container could get frozen, or however you want to call it. And in this case there is a race condition, and the Collector could have not yet been able to publish the data, to export it.
C
So this is why I think it currently works like that. And it's mentioned here that, ideally, we would have wanted the Collector to have some kind of event callback, and then, if it didn't send the data, it would check again after 60 seconds or so and then send it. But currently that's not possible in Lambda, unfortunately.
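The freeze race C describes can be shown with a toy exporter: a background-style exporter that flushes "later" loses data because Lambda freezes the sandbox the moment the handler returns, while flushing inside the handler does not. Names are illustrative, not the real API.

```python
# Toy demonstration of the freeze race: data pending at return time
# is stranded, because the background tick never runs before the
# sandbox is frozen.

class AsyncStyleExporter:
    """Buffers spans and exports only on a later background tick."""
    def __init__(self):
        self.buffer = []
        self.exported = []

    def on_span(self, span):
        self.buffer.append(span)

    def background_tick(self):
        self.exported.extend(self.buffer)
        self.buffer.clear()

def handler(exporter, flush_in_handler):
    exporter.on_span("invocation-span")
    if flush_in_handler:
        exporter.background_tick()  # stand-in for a synchronous forceFlush
    return "done"

# Case 1: rely on the background tick. Lambda freezes the sandbox as
# soon as `handler` returns, so the tick never fires in time.
racy = AsyncStyleExporter()
handler(racy, flush_in_handler=False)
frozen_with_pending_data = bool(racy.buffer)  # span stuck in the buffer

# Case 2: flush before returning; nothing is pending at freeze time.
safe = AsyncStyleExporter()
handler(safe, flush_in_handler=True)
```

This is the race the event-callback idea would remove, and why the current design blocks the response instead.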
B
I see, which is why you're saying that it just holds on to the Lambda response until it's actually able to successfully send the data.
C
And I can also say that at Epsagon we did quite the same thing. I mean, we just had to send the data at the end of the invocation, at invocation time. I guess the best that we can do is to just propose that customers run an OpenTelemetry Collector as close as possible, and...
C
No, yeah, separately, but probably a container in the same region, same AWS account, and maybe, I don't know, same availability zone. But yeah, we would love that. I don't know exactly what this recommendation should look like, but something like that. And yeah, maybe also, because there is this problem with Lambda, we can propose some kind of configuration where the data is sent asynchronously, and there is a risk that some of it won't be sent. And maybe for some customers...
C
This is still better than doing a force flush and adding to the duration.
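The opt-in behavior being proposed could look something like the following. Note the setting name is hypothetical; it does not exist in the Lambda layer today:

```ini
# Hypothetical knob, not an existing setting: let users trade
# delivery guarantees for invocation latency.
OTEL_LAMBDA_EXPORT_MODE=async   # hypothetical; "sync" (forceFlush in
                                # the handler) would remain the default
```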
B
I see. Yeah, I think that it's definitely worth bringing up.
C
Yeah, okay. I just also think that it could be an important point, because adding a layer and everything is a lot of effort, and maybe it's not needed, at this point at least.
B
Amir... well, oh, I guess he left.
B
Well, unfortunately, a lot of the people that I think are going to have a better response for this aren't here, so I...
C
Yeah, okay. Do you know who the people are that are mostly involved with the Collector and this kind of issue?
B
So Anthony and Alex are probably the two that I would address on this.
B
I mean, this is probably also related to this next issue; it's semi-related. So apparently, in the various languages, we have tracing being flushed properly, but the metrics data might not be flushing at the end of the function properly. So that is something that should probably be evaluated for each of the languages.
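The symptom described above, traces delivered but metrics dropped, typically comes down to flushing only one of the two providers at the end of the handler. A stdlib-only sketch of the shape of the fix (toy providers, not the real OTel API):

```python
# Toy tracer/meter providers: each buffers data until force_flush().
class ToyProvider:
    def __init__(self):
        self.pending = []
        self.exported = []

    def record(self, item):
        self.pending.append(item)

    def force_flush(self):
        self.exported.extend(self.pending)
        self.pending.clear()

tracer_provider = ToyProvider()
meter_provider = ToyProvider()

def handler(event):
    tracer_provider.record("span")
    meter_provider.record("counter+1")
    # The bug pattern is flushing only the tracer provider; the fix
    # is to flush *both* providers before the handler returns.
    tracer_provider.force_flush()
    meter_provider.force_flush()
    return "ok"

handler({})
```

Auditing each language layer amounts to checking that the metrics path gets the same end-of-invocation flush as the tracing path.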
B
And then, finally, layer assessments. I'm not quite sure what that means.
B
I guess this is one of the outputs that we're trying to produce for the overall SIG: to have these assessments of the various layers, just to make sure that we are collecting data consistently.
B
So anyway, was there anything else you guys wanted to talk about?
B
Cool
yeah
and
feel
free
to
catch
up
on
the
the
the
recording
from
yesterday
I'm.
Guessing
that's
going
to
be
probably
be
a
more
reliable
source
than
I
am.
C
Sorry, I'm not sure; it's probably good to also watch it, yeah. Yeah, the recording.
B
I was gonna mention also, if either of you are interested in the...
B
What's it... the specification updates for Lambda, specifically around the X-Ray propagation changes. That's something where we feel like we're at a good state, where we've mostly come to agreement on how to proceed, and I think we just need somebody to actually make the document change. So if either of you are interested in doing that, let me know.
C
So that's related to... I know there is the issue you opened a few weeks ago. Okay, yeah. I'm not sure I completely understood what the bottom line is of how the propagation should actually work.
B
I
know
how
it
shouldn't
work
right
now.
It's
prioritizing
that
x-ray
environment
variable
and.
B
So I think the decision we made was to change that such that the environment variable is not used as part of the propagation decision, but instead it's added as a span link. If it's available, add it as a span link, and then let the regular configured propagators just do whatever else they need to do.
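The decision above can be sketched in a few lines: the configured propagator supplies the parent, and the X-Ray context from the `_X_AMZN_TRACE_ID` environment variable, when present, is attached only as a span link. The types and functions below are toy stand-ins, not the real OpenTelemetry API, and the header parsing is simplified.

```python
import os
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SpanContext:
    trace_id: str

@dataclass
class Span:
    parent: Optional[SpanContext]
    links: List[SpanContext] = field(default_factory=list)

def parse_xray_env() -> Optional[SpanContext]:
    # Real X-Ray header parsing is more involved; this only pulls
    # out the Root= segment for the sketch.
    raw = os.environ.get("_X_AMZN_TRACE_ID", "")
    if raw.startswith("Root="):
        return SpanContext(trace_id=raw.split(";")[0][len("Root="):])
    return None

def start_lambda_span(carrier_parent: Optional[SpanContext]) -> Span:
    span = Span(parent=carrier_parent)  # regular propagator decides parenting
    xray_ctx = parse_xray_env()
    if xray_ctx is not None:
        span.links.append(xray_ctx)     # env var contributes only a link
    return span

# Example: a W3C parent extracted from headers, plus the env var set.
os.environ["_X_AMZN_TRACE_ID"] = "Root=1-abc-def;Sampled=1"
span = start_lambda_span(SpanContext(trace_id="w3c-parent"))
```

The key property is that the env var can never override the parent chosen by the configured propagators; it can only add a link.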
C
No, what sounds maybe a bit confusing to me is that, I guess, it's quite rare that you actually need both a parent span and a span link. So, I mean, it's possible that you have the X-Ray propagation and actually it won't connect to any other span, right?
B
I
I
think
it's
possible.
I
mean
the
the
ultimate
problem
here.
I
think
is
that
you've
got
spans
being
reported
to
two
different
systems
right,
so
the
X-ray
spans
that
is
created
by
x-ray,
inherently
is
going
to
report
to
X-ray,
and
then
the
the
the
regular
span
is
going
to
report
to
whatever
tracing
system
is
configured.
C
I mean, I guess that if the user configured another kind of propagation... so if that works, that should be good for X-Ray as well, no? Sorry, could you repeat that? I mean, if the user did configure another context propagation for the Lambda, which actually works, so if these spans will be ingested to X-Ray, that should work there as well, right? Yes. So in what case do you think that having the X-Ray link will help this user?
B
I,
personally,
don't
understand
what
that's
for
it's
something
that
AWS
wants
to
have
the
the
environment
variable
propagation,
so
I
assume
they
have
reason
for
it.
You'll
have
to
ask
them
specifically
what
distinction
that
makes
having
it
as
the
span
link
was.
Some
was
I,
guess
the
the
consolation
that
they
were
okay
with
so
I,
don't
want
to
argue
with
them
too
much
on
that
okay,
I.
B
Yeah, are you interested in, you know, taking point on the actual specification change?
B
I just feel like that's a possibility, that's an option, yeah, to help you guys get involved. So, yeah.
C
Yeah, it's a good option. I just want to make sure that I know how to do it, and I'm not sure yet. So, I agree, yeah, but I will write this point down and look into it. Okay.
B
Cool, yeah. If you've got questions or need more information on that, just let me know.
B
Cool, yeah. I think that's all the items on the agenda, so we can end here, and...
B
So if you do decide to work on that change, let me know. But anyway, okay.