From YouTube: 2023-01-31 meeting
Instrumentation: Messaging
B
It seems like we have a lot of contributors today, which is really exciting. We're going to try to spend the first half doing a bit of a customer interview, hearing about the current state from Justin, who has kindly agreed to join us and provide some of his feedback. But I also see contributors from Azure and Google as well, so let's make sure we spend some time on the spec in the second half of the meeting. I think that'll be productive.
B
So, with that being said, let's go ahead and dive in. Justin, I'll give you a quick second to introduce yourself. To make the best use of our time, maybe we won't go around and do full introductions, but some information about your background would be helpful.
C
Yeah, hi, I'm Justin. I'm a senior developer over at Northwestern Mutual. I primarily work on our data ingestion and back-end teams, but I also help on our front end and on performance, pretty much anywhere I possibly can. As for my background: name a language and I've probably worked in it, mainly in financial markets and areas like that.
B
Okay, great. So this is a functions-as-a-service group, and I know you wanted to specifically talk about your Lambda experience. Before we dive too far into it: do you have Azure or GCP Functions scenarios as well, or are you principally just an AWS shop?
C
We're
personally
an
AWS
shop
and
otherwise
I
know
there's
some
other
groups
that
they
use
open
files
in
our
system.
So
that's
another
function
as
a
service
that
we
use,
but
primarily
we
are
AWS
Lambda.
C
Yeah, one of our big issues is our Lambda process. At the moment our main Lambdas help us with some PDF generation, and with that, one of our primary issues is this kind of async workflow that ends up occurring.
C
I mean, again, this is common with Lambda: you shoot a few Lambdas off responding to some event, and then, let's say, you publish some data to S3, and some metadata not in S3, because you want the metadata somewhere else, and all these things. It's all this asynchronous workflow, and what we're really trying to get out of OpenTelemetry specifically, but really any type of telemetry system, is this:
C
Okay, I made the request into whatever the function is. I now have a trace that's sitting in the function, but at the same time there could be other work that's still going on, so part of that trace is obviously the work that's happening in, let's say, the service. Once the Lambda is done, it might not respond back to a service, right? It may just be a dead trace, or it could be responding to,
C
let's say, SNS, publishing onto an SNS topic. So it's really about propagating our trace, where it is this really asynchronous, multi-system workflow that ends up occurring. And now it's getting even more complicated because of our Lambda workflow, or our canary workflow, that we're going to be starting up.
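The async propagation Justin describes, where a Lambda finishes by publishing to an SNS topic rather than responding to a caller, can be sketched as carrying W3C trace context in SNS message attributes. This is a hand-rolled illustration rather than the OpenTelemetry propagation API; the helper names are hypothetical, though the attribute shape matches what SNS message attributes look like.

```javascript
// Minimal sketch of W3C trace-context propagation through SNS message
// attributes. Real code would use OpenTelemetry's propagation API; the
// function names here are illustrative only.
function injectTraceContext(spanContext, messageAttributes = {}) {
  // Serialize the span context in the W3C traceparent format:
  // version-traceId-spanId-flags
  const { traceId, spanId, traceFlags } = spanContext;
  return {
    ...messageAttributes,
    traceparent: {
      DataType: 'String',
      StringValue: `00-${traceId}-${spanId}-0${traceFlags}`,
    },
  };
}

function extractTraceContext(messageAttributes) {
  // The downstream consumer parses the header back out so its spans can
  // join the same trace instead of becoming a "dead trace".
  const header = messageAttributes?.traceparent?.StringValue;
  if (!header) return null;
  const [version, traceId, spanId, flags] = header.split('-');
  return { version, traceId, spanId, traceFlags: Number(flags) };
}
```

A consumer Lambda would call `extractTraceContext` on the incoming SNS record and use the result as the parent of its own span, which is what keeps the multi-system workflow stitched into one trace.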
C
It's
starting
in
our
proof
of
concept
phase,
but
a
really
minimal,
viable
product
phase
that
we're
now
going
to
start
Canary
deploying
and
going
through
what
one
of
the
ways
AWS
asks
us
to
do
it,
which
is
through
step,
functions
and
so
really
making
sure
that
we
have
traces
that
a
trace,
May
spawn
off
multiple
lambdas
and
some
of
those
lambdas
may
go
into
our
Canary,
and
some
of
those
lambdas
may
go
under
stable,
and
so
it's
now
even
going
to
add
that
level
of
complexity
into
our
Trace.
That's
really!
B
We've
heard
a
bit
about
some
kind
of
cold
start
performance
specifically
around
adding
like
the
the
layer
with
the
collector
extension
as
part
of
it
have
y'all
dealt
with
that
as
well
or
cold
starts,
principally
not
issue.
I
know
you're,
mostly
a
node
shop.
By
the
way,
I.
Don't
think
I
mentioned
that
earlier
yeah.
C
We
do
have
some
cold
start
issue,
it's
definitely
something
where
it
can
be
very
bursty
traffic.
So
we
we
don't
do
obviously
the
practice
of
keeping
them
warm,
we'll.
Let
them
just
die
out
and
let
the
image
essentially
dump
and
then
we
have
to
come
back
in
and
with
the
collector
we
actually
had
to
destroy.
C
We
actually
had
to
stop
collecting
at
one
point,
because
the
collector
was
so
slow
that
our
Lambda
timeouts
were
starting
to
occur.
It
was
adding
anywhere
from
30
seconds
to
I.
Think
close
to
a
minute
at
one
point
for
us
and
yeah.
We
shut
them
down
for
a
couple
months
because
of
it.
B
To
yeah
have
general
questions
too
I
just
don't
need
to
be
the
one
dominating
the
conversation,
so
the
number.
C
That 30-seconds-to-a-minute addition is when the collector is in there, and to be fully honest, I don't know exactly if it was just because of the addition of the collector layer. It could easily have been that, for some reason, the mixture just caused even our Lambda to be slow to boot up, because one of the problems with doing PDF generation in Node is that they want you to use... oh, what's the technology?
C
What's
the
tool
essentially
creating
a
headless
browser
and
we've
noticed
that
when
you
add
other
layers
with
a
headless
browser
for
some
reason,
it
makes
the
Headless
browser
take
even
longer
to
start
up,
whereas
if
it's
just
a
layer
alone,
not
much
else
going
on,
it's
like
we'll
respond,
even
on
coal,
starts
with
super
fast
times.
C
So
I
we're
pretty
sure
it
was
just
due
to
some
de-optimization
for
that.
We
didn't
figure
out
the
reason
why
that
was
happening,
but
yeah
adding
in
The
Collector
did
add
that
additional
time
I
mean
it
wasn't
really
dug
in
exactly
what
it
was.
Okay.
C
We did disable it at the exact same time. We're also not doing it like in the documentation, where it's "hey, run this function"; we actually import each instrumentation we want to use individually, because the auto-instrumentation can just add so much overhead.
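The selective-import approach Justin describes, registering only the instrumentations a function actually needs instead of the full auto-instrumentation bundle, might look roughly like this in Node. The package names are real OpenTelemetry JS packages, but treat this as a sketch under the assumption those packages are installed; option shapes vary by version.

```javascript
// Sketch: register only chosen instrumentations instead of the full
// auto-instrumentation bundle, to reduce cold-start and runtime overhead.
// Assumes the listed @opentelemetry packages are installed.
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');
const { AwsInstrumentation } = require('@opentelemetry/instrumentation-aws-sdk');

const provider = new NodeTracerProvider();
provider.register();

registerInstrumentations({
  // Only the pieces this function actually uses, rather than pulling in
  // everything via getNodeAutoInstrumentations().
  instrumentations: [new HttpInstrumentation(), new AwsInstrumentation()],
});
```

The trade-off is that each instrumentation becomes an explicit dependency you maintain, in exchange for not paying load time for instrumentations the function never exercises.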
F
An issue we've seen occasionally is memory. If a function is, you know, riding right at the edge of available memory, and then you add a collector and auto-instrumentation to it, that can drive you up beyond the memory available, which can then cause significant performance degradation.
C
I'll be honest: our memory is way oversized for our use cases. Yeah, we're sitting at a good chunk of memory, and it's very oversized compared to what we actually use, so I have a feeling we have other optimizations. And to be fully clear, this was one of the first forays into AWS Lambda that we've done.
C
For the Node piece, it's been a lot of the AWS documentation and the use of Lambda there, and then just keeping up with announcements. I mean, that's another whole piece of it that's not really geared towards the OpenTelemetry side, but just the different ways of calling a Lambda is mind-boggling at times. And then from the OpenTelemetry side...
C
No, a hundred percent. With metrics, we actually just started: we implemented some GraphQL metrics in our system just a couple of weeks ago, and we just did our POC/MVP of .NET telemetry, so we'll be adding .NET metrics. And then, if we could (I don't know if this is on the table or how far off this would be), even the logging piece of it would be absolutely amazing.
F
And
when
you
say
you're
looking
at.net
metrics,
that's
using
the.net,
open,
Telemetry,
metrics,
SDK,
correct
yeah,
yep.
B
Awesome. Well, I guess it seems like you've principally worked in Node. Have there been any issues or language discrepancies you've noticed across the various implementations? I know we're kind of digging into them more holistically, but I'd be curious whether, from your outside user perspective, you've seen anything.
C
Actually
just
ran
into
this
today,
there's
definitely
needs
to
be
I
would,
and
this
is
not
specific
to
pause.
This
is
just
the
overall
we
implemented
some.net
Telemetry
in
and
the
graphql
piece
of
it
was
a
hundred
percent
different
than
what
the
graphql
node.js
was
in
terms
of
what
like
span
names
and
everything
else
came
in.
It's
definitely
even
a
different
setup
process
to
get
it
ingested
in
what
I
would
expect
one
system
to
do
another
system.
Does
it
a
bit
differently?
C
One
thing
will
be
like
this:
Auto
detects
like.net
will
automatically
Auto,
detect
certain
things
and
not
Auto,
detect
others,
whereas
like
the
node,
auto
text
pieces
and
then
doesn't
detect
others,
so
there's
definitely
like
more.
Cohesion
needs
to
occur,
but
I
mean
that
comes
with
time.
In
my
opinion,
I
I
would
rather
have
the
data,
and
then
the
cohesion
come
in
place,
whereas
just
half
cohesion
but
we've
decided
not
to
implement.
For
you
know
a
year.
B
Yeah,
that
makes
sense
great.
So
if
you
had
a
if
you
had
some
sort
of
wish
list
for
around
the
Lambda
space
or
maybe
functions
in
general,
what
would
be
at
the
top
of
your
list?
Maybe
your
top
three
items
here
really
would
be
helpful.
The
community
could
provide
or
something
similar
I.
C
Number one is documentation. Working with OpenTelemetry, I will say a lot of it has been reading documents from all over the place, and even then digging into code just to see how things are running, to then figure out what might be the best approach. Just fixing that would help onboard more devs on my side, not just those that are really interested in it. Number two is making it easy to get log ingestion. I know this is a big one across the board, but log ingestion is just going to be absolutely wonderful; I would honestly say even more so than metrics. And then number three is...
C
Number one is literally the glaring piece for me when working with new devs, and really also for showcasing to new devs why this is so important, because this is one of those effort-cost-versus-profit type issues, right? They see it as: oh, I have to dig through this; this npm package does it this way, this company
C
Does
it
this
way
like
and
the
profit
doesn't
seem
big,
whereas
if
we
can
make
that
cost
or
a
barrier
to
entry
lower,
they
see
the
profit
a
lot
faster
and
that's
big
and
then
log
ingestion
is
just
big,
because
so
we
still
have
a
lot
of
devs
like
that.
Tracing
is
great
but
they'll.
For
some
reason,
their
instant
reaction
is
straight
to
logs
and
I.
C
think that being able to really easily put both of those pieces together, and get everything ingesting in a way that either moves it to one location or makes it really easy to switch between the two, just makes it that much easier to track down issues that could be occurring, especially in an asynchronous environment.
B
So
I
guess
kind
of
related
to
that,
though,
so
do
you
all
make
a
heavy
use
of
like
metrics
for
alerting
or
anything
like
that?
How
do
you
see
like
metrics
playing
into
your
use
cases
if
you're
principally
like
logging
and
tracing
first.
C
It would just allow us to really dive in some more, I guess from the SRE perspective or the performance-engineer perspective, into what could potentially be causing performance degradations, with a way that we can easily do that analysis compared to our current tooling, which is really just the logs we have available to us; a way to do it easily through an open-source mechanism and not a proprietary source. It just gives us a better perspective in that regard.
C
So
really
yeah
metrics
are
going
to
help
us
do
more
performance
engineering,
because
a
renderer
and
a
Lambda
acts
differently
than
a
renderer
on
your
computer
or
on
your
phone.
So
really
can
highlight
those
issues
for
us.
B
Awesome
yeah
thanks
for
that
context,
any
last
questions
for
Justin,
just
keeping
in
mind.
He
has
a
day
job
too.
B
Okay,
silence
is
Applause.
Well,
we
really
appreciate
you
joining
Justin,
your
perspective
is
valued
and
it
definitely
will
help
us
prioritize
things
and
also
kind
of
understand.
Customer
use
cases
better.
So
thank
you.
Yeah.
B
All
right,
awesome,
yeah
feel
free
to
drop,
so
let's
maybe
switch
to
the
language
assessment
or
the
spec
I
think
we
have
Rohit
and
David
here.
So
maybe
the
spec
is
a
better
use
of
our
time.
Welcome
guys.
B
So
Tristan
you
and
Tyler
I
know
I've
been
doing
some
work
and
we're
able
to
at
least
do
some
knowledge
sharing
with
David.
But
what's
the
current
state
of
like
the
spec
work,
any
pressing
questions,
anything
that
needs
a
six
attention.
D
I have some issues, but I haven't discussed them with Tyler yet, so they're kind of still raw.
D
I'm not sure they're worth bringing up here yet, maybe not until next week when I've gone through them; I may have opened issues about them anyway. We certainly could discuss things like metrics, where it uses histograms instead of gauges, stuff like that. I don't know if there's any usefulness in discussing any of that yet.
E
Yeah, I mean, basically what I said to David, and I think the same thing applies to the Azure folks, is: it would be really helpful to get a better picture of what kind of metadata is available and how to get it, from the perspective of instrumentation.
E
If
we
were
to
write
automatic
instrumentation
for
for
for
node.js
for
python
for
Java,
how
would
we
go
to
about
getting
the
metadata
that
we
want
to
be
able
to
provide
as
as
attributes
on
the
spans
or
as
as
resources?
E
So
that
way
that
can
go
along
with
the
the
spans
and
getting
getting
that
collected
now
is
going
to
make
it
a
little
easier
to
have
a
cohesive
name,
a
naming
terminology
and
consistency
going
forward,
because
I
think
there's
some
awkward
naming
that
we
have
currently
in
in
some
of
our
terms.
So
thanks.
A
I have some progress on that; I'll fill in the spreadsheet while we talk, but I've been running some experiments. There are some cases where it looks like we might not be able to get the type of metadata that we're looking for, so I'll try and call those out as well. But okay.
B
Yeah
sounds
awesome.
Thank
you
for
your
help.
David
okay
sounds
like
we're
good
on
the
spec
Rohit.
Do
you
have
any
updates
from
Azure
or
any
related
questions,
or
things
like
that.
G
Oh
I'm,
good
yeah
about
the
metadata
I'll
go
and
have
another
look.
So
the
interesting
thing
is
you're,
looking
at
some
sort
of
Auto
instrumentation.
G
That
should
be
able
to
collect
those
metadators
right
yeah.
So
so
we
do
have
metadata,
but
then
can
any
auto
instrumentation
capture
it
or
not.
I'll
dig
deep
and
then
then,
probably
when
we
meet
next
week,
I'll
have
some
answer.
B
Okay,
great
and
I
think
a
common
ask
that
I
think
came
up
with
AWS
and
Justin,
but
we'll
also
need
best
practices,
documentation
for
like
gcp
functions
and
Azure
functions
as
well
and
to
find
you
know
the
spot.
That
makes
the
most
sense
to
keep
all
that.
Docs
is
probably
the
hotel
website,
but
just
something
we
should
be
aware
of
we're
looking
to
produce
eventually.
G
Coming
back
to
the
speca
and
the
the
names
of
attributes,
and
all
do
you
think
we
should
have
another
look
at
it
now
that
David
is
here,
I
am
here.
B
Yeah
sure
Tyler
do
you
want
to
show
your
screen?
You
might
have
the
link
more
handy
or
you
could
also
just
put
it
in
the
chat
and
I'll
pull
it
up.
Oh.
G
I,
don't
know
how
to
put
you
on
a
spot-
maybe
not
now
some
other
time,
but
then,
if
you
can
spend
some
time
going
through
this
spec,
let's
review
each
and
every
attribute
and
see
if
it
is
making
sense
for
for
all
three
big
players
here.
B
I don't really have authority here, so yeah, if someone wants to run it, or, you know, tell me where to start, that's fine too.
E
I'm
trying
to
pull
up
the
other
document
just
a
second,
but
you
can
go
ahead
and
start
looking
at
the
from
this
perspective,
I.
E
These
are
these
are
the
terms
on
the
the
left
side,
what
we're
using
currently
so
trigger
execution
and
ID
I'm,
not
a
fan
of
so,
for
example,
execution.
There
is
not
a
good
description
of
what
I
think
it
intended.
E
So,
if
you
click
on
the
copy
of
sheet1
tab
poorly
named
there
I
I'm
trying
to
come
up
with
some
better
naming
so
like,
for
example,
the
ID
or
invocation
ID,
is
what
I
would
call
the
the
term
that
we
would
use
as
an
attribute
and
then
how
that
maps
to
the
different
terminology
from
each
of
those
other
vendors
so
like,
for
example,
ID
I'm
thinking
about
renaming
it
invocation,
ID
potentially,
but
so
like
on
that
line
three
there.
E
Yeah, so what is the event type? You know, "trigger" is defined in our spec to specify, like, okay, information about whether it's HTTP, pub/sub, timer, whatever, down below.
G
Right
so
we
do
have
a
function,
meta
data,
wherein
we
have
other
details
like
the
event
type
and
what
is
the
function?
Name,
not
sure
if
we
have
version
or
not
like.
E
If
you
click
on
the
AWS
tab,
you
can
see
a
little
bit
more
of
you
know
things
like
memory
limits,
I
think
that's
kind
of
Handy
to
be
able
to
report
the
the
version.
Aws
has
this
notion
of
an
Arn
which
probably
doesn't
apply,
but
if
there
is
a
a
more
specific
name,
I
think
with
Azure.
There's
the
the
function
good
that
that
we
probably
want
to
include.
G
So
we
have
the
function
with.
We
also
have
something
called
resource
ID
that
is
more
similar
to
the
Arn
name,
that
you
see
for
the
AWS.
So
it's
a
string,
concatenated
value,
and
it
has
all
the
details
like:
what's
the
subscription
ID
app
name
function?
Name,
let
me
get
an
example
for
you,
but
your
function,
ID
is,
is
a
good
that
never
changes,
cool.
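As a sketch of the mapping being discussed, the vendor-specific identifiers could be normalized into one common attribute set. The attribute names below (`faas.invocation_id`, `cloud.resource_id`) reflect the direction of the discussion rather than a finalized spec, and the input field names are illustrative.

```javascript
// Hypothetical normalization of vendor-specific function metadata into
// common attributes; names on both sides are illustrative, not final spec.
function commonFaasAttributes(vendor, meta) {
  switch (vendor) {
    case 'aws':
      return {
        'faas.invocation_id': meta.awsRequestId,      // per-invocation ID
        'cloud.resource_id': meta.invokedFunctionArn, // the ARN
        'faas.max_memory': meta.memoryLimitInMB,
      };
    case 'azure':
      return {
        'faas.invocation_id': meta.invocationId,
        // Azure's resource ID plays the role of the ARN: a concatenated
        // string carrying subscription ID, app name, and function name.
        'cloud.resource_id': meta.resourceId,
      };
    default:
      return {};
  }
}
```

The point of the exercise in the meeting is exactly this table: one generic attribute per concept, with each vendor's native identifier slotted into it.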
G
So what you're saying is: in Azure we have an app, and in that app we can have multiple functions, and that's not similar to how it is in AWS, right? That's what you're saying.
G
Resource ID: so this is our resource ID. I will put it in the chat.
E
Anyway,
I
think
what
we're
trying
to
get
at
here
is,
you
know,
just
a
collection
of
the
various
attributes
that
we
want
to
standardize
in
the
spec
I.
E
Don't
think
that,
there's
anything
that
precludes
instrumentation
from
or
having
a
spec,
that's
more
specific
for
individual
platforms
so
like,
for
example,
we
can
have
a
spec
that
defines
more
consistent,
AWS
terms
or
or
gcp
or
Azure,
whichever
is
interesting
to
them,
but
at
this
point,
I'm
mainly
trying
to
find
what
are
the
the
generic
things
that
are
common
across
everything
that
we
want
to
standardize
on.
B
Okay, so let's talk about the language assessment matrix real quick. I think, in its current state, Tyler was able to do some of the Java work, and I think someone is doing the .NET work; I don't remember their name in the Slack channel right now. But are there any open questions about this? Alex, I'm not sure if you have anything you want to say here, or any kind of clarification. Thanks.
B
Okay,
cool
so
can
I
get
some
progress
but
yeah,
just
if
you're
working
on
an
assessment
make
sure
you
you
get
that
information
into
into
the
repo
quick
update
on
a
cmtf
account
I'm,
still
kind
of
paying
for
governance
committee
for
occasional
updates
on
a
ticket.
The
cncf
did
respond.
They
asked
if
the
current
or
the
cost
estimate
was
current
to
which
you
responded
Yes.
It
was
and
are
now
waiting
for
a
response
from
that.
So
it's
making
slow
progress
but
nothing
major
to
update
flushing
the
meter
provider.
F
Yeah
so
I've
added
this
I
think
most
of
the
the
existing
instrumentation
for
Lambda
flushes,
the
Tracer
provider
at
the
end
of
a
Handler
I,
think
we
also
need
to
ensure
that
near
providers
are
flushed
I've
added
this
PR
for
python.
That
I
believe
accomplishes
this,
but
I
don't
really
python.
So
reviews
and
suggestions
for
improvements
would
be
helpful
there
also,
if
others
can
look
at
other
languages
and
identify
where
and
how
we
can
make
sure
that
this
functionality
is
supported.
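The pattern described here, flushing providers before the handler returns so the frozen execution environment doesn't strand buffered telemetry, can be sketched as a wrapper. `forceFlush` is the method the OpenTelemetry SDK tracer and meter providers expose for draining pending exports; the stub providers below just keep the sketch self-contained.

```javascript
// Sketch: flush telemetry providers before a Lambda handler returns, since
// the execution environment may be frozen immediately afterward and any
// buffered spans/metrics would otherwise be lost. In practice `providers`
// would be the real TracerProvider and MeterProvider instances.
function withFlush(providers, handler) {
  return async (event, context) => {
    try {
      return await handler(event, context);
    } finally {
      // Drain all pending exports, whether or not the handler threw.
      await Promise.all(providers.map((p) => p.forceFlush()));
    }
  };
}
```

Wrapping the exported handler this way covers both the tracer and meter providers in one place, which is essentially what the instrumentation packages do internally for traces.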
E
I think that there was a canonical build of Lambda layers.
E
I
think
Tristan
pointed
me
out
to
some
scripts
that
are
in
the
AWS
repo
for
building
and
Publishing.
Those
layers.
I
was
wondering
if
there
would
be
any
issue
in
using
those
to
kind
of
as
a
starting
point
for
for
our
layers.
B
Maybe we can discuss it a bit today, but there were issues raised around the current OTel collector Lambda layer design and how that actually, you know, lives in the layer itself, and I think they're looking at, you know, sending the data just from the Lambda directly. So I'm not sure if anyone has thoughts today and wants to discuss it, or whether we should wait until after the EU SIG shares their thoughts, but that was notable.
E
The problem is that the person who had the issues they wanted to discuss is not on the call.
D
I was wondering where the original idea came from, versus having the collector in the...
E
I guess maybe the feedback there would be: Anthony, if you could pass along to the Lambda folks a suggestion to provide an option for sidecar-ish kinds of tasks that don't necessarily have to be per-Lambda and don't have to be spun up per Lambda. I don't know, maybe that's completely in opposition to the Lambda architecture, but I know that there was some work done to try to improve cold start times around Java, and I
E
Think
that
that's
awesome
I
worry
that
these
additional
layers
might
kind
of
reduce
some
of
the
benefit
of
that,
though,
I
don't
know
how
the
I
forget
what
it's
called
the
quick
start
or.
E
If you're using any layer on top of it, it prevents you from using SnapStart.
F
Don't
think
it's
any
layer,
I
think
it's
external
extensions,
extensions
right
and
layers
and
extensions
are
separate
but
related
things.
Extensions
are
kind
of
like
that.
Sidecar
concept
you
were
talking
about,
but
at
a
per
function
instance
level,
which
is
how
the
collector
runs
layers
are
basically
just
a
file
system
that
don't
provide
any
running
processes
got
it
got
it.
Okay,.
B
Okay, well, unless we have any other specific topics: there's a short kind of EU agenda for tomorrow, and it seems like the main topic will probably be that OTel collector layer design. I'm looking forward to hearing what Ron has there. But besides that, I think we can just kind of continue working asynchronously. I'll keep poking the CNCF people; Tyler, Tristan, Rohit, David, please, you know, continue the spec work offline. That would be great. Let's keep pushing this forward.
B
I
think
we're
still
targeting
the
end
of
February,
so
that's
beginning
of
March
to
have
our
spec
oteps
out
so
a
little
less
than
a
month.
At
this
point,.
H
Thoughts,
yeah
I
have
one
one
question
so
there's
one
of
the
items
on
the
feature
Matrix
is
to
allow
users
to
configure
an
environment
variable
yeah,
the
X-ray
and
VAR
span
link.
B
Sorry, you go ahead. Tyler, you got it.
E
I
was
gonna
say
my
take
is
that
I
think
the
first
step
would
be
to
you
know,
start
with
the
the
spec
change,
unless
we
need
to
do
an
Otep
first,
but
since
that
specific
design
is
kind
of
enshrined
in
the
spec
I
think
it
needs
to
start
in
the
with
the
spec
change
before
we
can
go
through
and
make
the
implementation
changes.
F
Yeah
I
would
agree
with
that
and
I
would
say
as
far
as
who's
going
to
do
that
implementation,
if
not
us
who
else
I
I,
think
the
the
interested
parties
are
here
and
we
should
ensure
that
someone
from
this
group
is
once
the
spec
changes.
Are
there
implementing
them?
Yeah.
B
Yeah
plus
one
to
all
that
I
think
that
the
goal
is
this
egg
would
fix
the
implementations
themselves.
It's
just.
We
have
to
make
sure
we
have
the
groundwork
ready
to
do
that.
But,
yes,
we
will
be
scheduling
that
work
as
a
sec.
Eventually.
E
So,
with
regards
to
the
spec
change
Anthony:
is
that
something
that
you
want
to
take
on,
or
do
you
want
to
have
somebody
else
take
a
stab
at
that.
F
If
someone's
willing
to
offer
to
take
a
stab
at
that
I'm
happy
to
review
I,
if
nobody
else
is
willing
or
able,
I
can
take
it
on.
E
I mean, sure, it can be part of the spec assessment, but I also feel like, to a degree, it can be a little bit independent from some of the other spec work that we are doing. Certainly.
B
Awesome. Well, I think we can go ahead and call it here.