From YouTube: 2023-01-11 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A
We'll give people a couple more minutes before getting started; I know there were some calendar issues.
A
All right, well, I think Oscar will probably join when he gets a chance, but we can go ahead and get started. I know everyone's fairly busy, and we have different time zones and things to account for, so welcome to the Functions as a Service SIG. It's kind of a fresh start: it's being shifted from Lambda-specific to the more general functions as a service, but our initial project work is focused around how to improve the Lambda monitoring that exists today.
A
So I put together kind of a running list of agenda items. I'm not sure if people want to prioritize them or discuss what we want to go over first, but the first item would be: Ted has a new kind of project management process, and we're trying to prototype it out with the Functions as a Service SIG. So we'd love your help, feedback, and thoughts around this. But Ted, I'll turn it over to you first.
B
Awesome, thanks. Excited to see this working group get kicked off. For this working group, it seems like there are two main tasks, from what I understand. On the one side, there's an existing OpenTelemetry Lambda implementation, and this group would like to bang on that until it gets into a shape where the users stop complaining about it; for that I say good luck, and may the Force be with you.
B
The other part of this whole thing is that there's a contract between what these serverless environments emit (how they describe what they're doing) and what analysis tools expect them to emit and how they should expect to be able to process that data. That's the part that we want to put in the spec.
B
We don't want the details of how Lambda is describing itself over OTLP to be buried within the Lambda implementation. We want to move that part into the spec, so that people can look there and understand what the expectations are; that's kind of the whole point of the spec. As we build implementations for other serverless environments, we can reuse parts and make it all align well, and we're trying to do that for all of the existing semantic conventions that we have.
B
None
of
them
are
stable
yet,
but
we
would
like
a
working
group
to
form
around
each
domain,
try
to
improve
those
semantic
conventions
and
mark
them
as
stable,
and
since
we
have
so
many
of
them,
we
want
to
see
if
it's
possible
to
do
that
work
quickly
in
the
past.
Trying
to
get
these
conventions
into
the
spec
and
stabilize
has
been
kind
of
a
long
process,
because
we
haven't
been
super
focused
as
a
community
and
so
to
try
to
make
that
work.
B
But the basic idea would be this: the working group spends six weeks trying to get all of their proposed spec changes and OTEPs together and ready to present to the wider community. To ensure that process goes well, we want to make sure there are enough core OpenTelemetry members in the group. I believe Carlos Alberto from the TC has agreed to participate in this group, and Anthony Mirabella, who's another old-timer, is heading it up.
B
So, six weeks from now, you'd have the deliverables that you are going to expect the community to review, so that the spec community can be prepared, knowing that work is coming, and can be ready to review it and give you feedback when you need it. That feedback period we're proposing should be one month, so four weeks of public feedback: announcing in public, "hey, if you want to comment on this, you have four weeks to do so."
B
We aren't automatically going to merge everything at the end of four weeks; if it turns out we have to redo stuff, fine. But in general, if people feel satisfied, then after four weeks we're going to merge everything. Then we would spend two weeks trying to get all of those approved OTEPs and anything else merged into the spec, and either declared stable, or at least declared frozen or a release candidate that would go stable once enough implementations have verified that there aren't any problems. So it would be a three-month process.
B
So my first question to Anthony, Alex, and everyone else kicking this group off: does that timeline sound reasonable to people? Obviously, this group is going to do other work related specifically to the Lambda implementation, but for the semantic conventions and the spec work, does a three-month process sound feasible?
B
Besides just the data being emitted by the Lambda implementation, there's a whole pile of other stuff that this group wants to work on, and I don't know how to parallelize all of that. But the idea would be: in six weeks, here are our proposals for changes, if any, to the semantic conventions for FaaS and Lambda, and here's a Lambda prototype, you know, that people can play with and use to see that data get emitted.
B
Obviously, this group is going to have to dig into this work to figure out how feasible that is, but that would be my request for the group. The place where we'd like to put this information is in this project tracking issue; let me see if we don't already have it.
B
Okay, I put it in the doc as well. If you look at that issue, it just kind of keeps track of everything this working group is up to. So it's a place for when the rest of the community is like: "wait, what are they planning to deliver? When are their milestones? Who's working on this? Who should I ping?" We're trying to use a tracking issue per working group, where we keep it up to date with that information.
A
I'll
make
a
note
to
add
that
it
was
shifted
from
Lambda
specific
to
be
more
kind
of
fast
in
general,
so
I'll
make
some
updates
for
that.
B
Yeah, and it may not be the case that there's a heck of a lot to do on the FaaS and Lambda semantic conventions, in which case, fabulous. But the thing we're trying to do is get these conventions reviewed, the stuff that we already have; make sure that any significant changes we want to make to them, we make; and then declare them stable as part of getting OpenTelemetry fully GA.
B
We really need the conventions for these core domains to be stable, so that end users can rely on the idea that this is the data our system is going to produce.
B
So my request for this working group is that that be part of the work: as you figure out what the heck it is you're doing, update that doc with your milestones, and then communicate that back to me and the TC, maybe in the spec meeting. Basically, give us a heads-up so that we know when we're expected to be available to give you feedback.
C
So one of the questions I have is: what's the scope of the semantic conventions? I think there's some description in the current semantic conventions around behavior for implementations of Lambda function handler wrapper instrumentation. Does that belong in semantic conventions? Does that belong somewhere else in the specification? Does it not belong in the spec at all?
B
I feel like that's a very fuzzy boundary, right? Because I definitely know for a fact that with every serverless implementation there's going to be a bunch of implementation-specific details. I don't think it's worthwhile to record those in the spec anywhere, because we aren't trying to coordinate a bunch of different implementations to adhere to something. In those cases, it's just, you know, making Lambda work.
B
If there are other details around things like what headers should be passed and stuff like that, it would be great to get those put in the spec as well. But without knowing all of the weird Lambda details that you'll need to work through, I feel like I'm not in a position to talk about specifics beyond that general principle.
E
I guess my take on this would be: if it's a detail that's required for more than just the Lambda repository to implement, then it should probably be specced out. For example, some of the information around propagators, or whatever instrumentation details each different language implementation needs to be aware of: I think that should be in the spec. But otherwise I agree; if it's an implementation detail of this particular working group, then it's probably not necessary.
C
The sorts of details that I am thinking of: there was a proposal recently for adding some pre and post hooks to the Python Lambda instrumentation. Would we want to specify that consistently for all languages to implement? Or, similarly, there's an existing issue that Tyler's created. Then the question is: do those belong in the semantic conventions for FaaS or Lambda, or should there be a separate specification doc for that? Are the semantic conventions purely around the structure of the data that's being emitted via these instrumentations through OTLP, or do they also cover how these instrumentations are configured and used?
B
That is a very good question. When we initially thought of the semantic conventions, we were literally thinking about the semantics within OTLP, right?
B
We were not thinking of any details beyond that. But even when we were looking at, say, HTTP, for example, some details come up around how redirects and retries should be handled, where you have to kind of describe a bit of behavior to explain what kind of data should be emitted properly. So I'm not really sure; I don't think there's a problem with adding a section to the spec that's just Lambda implementation details.
B
If you think it's really far afield from what would go in the semantic conventions: if the pre and post hooks are just about what kind of data you should be emitting when you start and when you end, that sounds like semantic conventions; a whole bunch of details about how flushing should work and things like that maybe sounds like it should be its own section.
B
And this is also, for the record, an example of why I think these working groups should have two TC members involved: so that when it goes to public commentary, we've hopefully already gotten past some of those kind of basic issues, in a way that the approvers of a thing have already had eyes on it.
A
Awesome, thanks Ted. Thanks for joining and sharing more about the new process. This is kind of going to be a living process, so if y'all have issues, run into any problems, or have feedback, please help us make it better over time, and hopefully we'll get to a shared state where we're delivering things on time and quickly.
A
Thanks, yeah. So we have a bunch of topics for today, whether that's the current state of the Lambda implementations, the current approvers and maintainers of the repos, the OTel Lambda Slack channel, the spec assessment, and, you know, the general FaaS requirements we have to look at over the next six weeks. So where would you all like to start? I don't want to dominate the conversation, so it should be driven by the group.
A
Okay, yeah, sure. This was in no specific order, so if we need to come back to an item or something like that, that's perfectly fine. So, essentially, we need repo ownership: we need approvers and maintainers on the OpenTelemetry Lambda repo. I think Anthony has just been, like, low-key doing some things, but we need some sort of official ownership. Not all of that needs to be decided today, necessarily.
A
Yeah, of course. So, does anyone have... I guess we can just come back to those, except for the OTel Lambda Slack channel: do we want to potentially just deprecate it and say everyone move to the FaaS channel? I probably won't be actively monitoring it; I'm not sure if anyone else has a specific interest in that.
A
Anthony, do you have ownership of that channel? Do you know who created it?
C
I was just looking through; there are zero channel managers on that channel, so we would probably have to go to the admins for the Slack workspace. Okay, I'll talk to the admins.
A
Well, we'll keep the FaaS channel for now. If it becomes too much of an issue to get access to the ownership of the Lambda channel, I think we should just deprecate it and move on, if that's okay with everyone else.
A
I'll take that action item. So, we need to do a spec assessment and think about general FaaS requirements. Like Anthony mentioned, I know some of the stuff is a bit Lambda-specific today. I'm not expecting us to actually do the assessment in this meeting, but I'm curious: does anyone have experience with the Functions as a Service spec and know of any known issues, or have a perspective there?
G
I think I have just the one specific issue that I've already mentioned (that Anthony already mentioned), specifically around the environment variable propagation specific to X-Ray with Lambda.
G
Well, it's listed, yeah. The method that they expect propagation to be done with is defined in the current spec.
A
And
I
think
it's
that
variable
is
that
on
by
default,
or
is
that
just
a
I
guess
potential
gotcha
kind
of
in
the
background?
If
you
turn
it
on
by
accident.
C
Yeah, I believe you need to enable active tracing on your function, and then the Lambda runtime will set that variable with the trace context information, correct.
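For readers following along: the variable being discussed here is, to our understanding, the `_X_AMZN_TRACE_ID` environment variable, which carries the X-Ray trace header. A minimal, dependency-free sketch of reading and parsing it (the IDs below are made up):

```python
import os

def parse_xray_trace_header(value):
    """Split an X-Ray trace header of the form
    'Root=...;Parent=...;Sampled=...' into a dict of its fields."""
    fields = {}
    for part in value.split(";"):
        key, sep, val = part.partition("=")
        if sep:
            fields[key] = val
    return fields

# Simulate what the runtime would set when active tracing is enabled
# (hypothetical IDs in the Root/Parent/Sampled layout).
os.environ["_X_AMZN_TRACE_ID"] = (
    "Root=1-5759e988-bd862e3fe1be46a994272793;"
    "Parent=53995c3f42cd8ad8;Sampled=1"
)
ctx = parse_xray_trace_header(os.environ["_X_AMZN_TRACE_ID"])
print(ctx["Root"], ctx["Parent"], ctx["Sampled"])
```

The gotcha raised above is that once this variable is populated, instrumentation that consults it may parent spans onto the X-Ray context even when that was not intended.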
A
And Tyler, I guess, can you link your proposal, or if you have some known issue, some information?
C
I think there was a bit of an XY-problem derailing in this conversation, but what we came to eventually is probably the last paragraph of that last comment. Basically, we need to look at how we want to deal with contexts that are propagated to the Lambda function, especially when there are potentially multiple contexts coming from multiple places.
A
Yeah, okay, makes a lot of sense. Are there any other known open issues that we have, based on the Lambda or the FaaS spec in general, that we've found? Thanks for sharing that, Tyler.
C
I think the other one that I mentioned... I don't know if it was linked to the general issue here. I think there was, in Python recently, yeah, a contribution proposal for some instrumentation hooks; here is the PR relevant for that.
C
And this is basically looking for pre and post event hooks. I think this is the sort of thing we probably want to specify for all of the Lambda implementations: if we're going to have them in one place, we want to have consistency.
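To make the idea concrete, here is a sketch of the pre/post event hook pattern being discussed; the hook names (`request_hook`, `response_hook`) and their signatures are illustrative assumptions, not the actual PR's API:

```python
def instrument_handler(handler, request_hook=None, response_hook=None):
    """Wrap a Lambda handler so user-supplied hooks run just before and
    just after the handler, e.g. to enrich the invocation span."""
    def wrapper(event, context):
        if request_hook:
            request_hook(event, context)
        result = handler(event, context)
        if response_hook:
            response_hook(event, context, result)
        return result
    return wrapper

calls = []

def handler(event, context):
    calls.append("handler")
    return {"statusCode": 200}

wrapped = instrument_handler(
    handler,
    request_hook=lambda event, context: calls.append("pre"),
    response_hook=lambda event, context, result: calls.append("post"),
)
wrapped({}, None)
print(calls)  # ['pre', 'handler', 'post']
```

Specifying this shape once would let every language implementation expose the same hook surface rather than each inventing its own.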
A
Yeah, I think we have a topic around harmonizing the different implementations too, and that'll probably be related to that, but it's good to get that tracked.
Okay, so we have four current, okay, five current known specification issues, and this would fall into the six-week bucket. So we essentially have until the beginning of March to create OTEPs based around these known issues and any other issues we raise as part of our investigations. So just something to keep in mind.
A
Okay, so I think we have a couple different vendors here, so I'm curious about the customer feedback you all have been hearing. Obviously it's generally negative, but I'd love to hear specific issues. I think we know the language implementations are, you know, inconsistent at best, and there are the context propagation and environment variable issues.
C
That's not specifically around... okay, yeah, I was gonna say that it's not about capturing the cold start information, but it's about adding the Collector as an extension, adding the Java agent, for instance. Just adding OTel instrumentation to a Lambda function can have a very significant impact on its cold start performance.
I
Yeah, the one issue that I'm curious about, and why I got into this, is more around even getting started with that. For many people, it's not really clear: is there some documentation from AWS? Is there some documentation on opentelemetry.io? They type "Lambda OpenTelemetry" into Google, or any other preferred search engine, and then they find like 12 different things that do vastly different things. So having some kind of "here's how you do it, here's what we try to do" put on opentelemetry.io, even if it's only saying the source of truth is over here on the vendor side or whatever, I would be fine with that. But I think that's something I hear and see a lot: people get quite confused about where to even start with this, and then how to even work with it, right? So that's definitely something I'd like to tackle.
A
I think there was also (Alex, I might need your help on this) some sort of conversation, or Tyler, around getting Amazon to publish the layers or something like that. I know there was like an official AWS account or something that was required.
E
Yeah, there have been discussions around whether or not we should start publishing layers directly from the OTel repository, and that's not currently happening. I think there's a community issue opened around getting an official CNCF-owned AWS account we can start publishing from.
C
I would assume that those are probably vendors who are ADOT partners (like, I think Lightstep is, right?), and for the most part they're just shipping OTLP out to wherever the destination is.
C
But also on this topic: customization of the layers is an at least semi-frequent question that I get, and I don't know what sort of affordances we want to offer for layer customization. There are a lot of components that exist in the upstream collector-contrib repo that probably don't make sense to have in a Lambda layer, and might actually cause more headache than it might be worth. But users may not understand the implications, so either way we need to document that well for them.
D
All right. I think we also see questions about performance too. I know cold start is always a big performance concern for these functions as a service, but in general: how will the layer affect performance?
C
There can be really weird and non-obvious effects at the boundaries. We recently had a customer issue where the customer had about 128 megs allocated, and they were typically using like 125. Then they added the instrumentation, which just pushed them over, and it started breaking in ways that didn't obviously look like a memory problem.
D
As
a
general
question,
I
I
think
all
these
questions
are
like
important,
but
are
we
going
to
answer
these
in
like
a
vendor
agnostic
way?
I
know
a
lot
of
these
problems
stem
from
Lambda,
but
are
we
gonna
like
look
at
all
the
functions
as
a
service
as
once
or
just
focus
on
answering
these
in
the
context
of
Lambda.
A
Yeah, I think we have two kind of related tracks here. We do have to do a spec assessment for functions as a service, so we should consider these questions in the context of that, with that generic, non-vendor-locked-in perspective. But then there are also a lot of current Lambda implementation issues that need to be fixed. So in my mind they're related, but a little bit separate. So, as Severin said in an issue, we should think of it with that more generic handle.
E
I think at the end it depends on the people showing up here in the SIG meetings and contributing to something, right? I mean, if most people are interested in doing something with Lambda, then we tackle that. But if someone shows up and says, "hey, I'm using something else," or even some open source serverless stuff, then sure. I mean, we had a discussion on the ticket, right, where we said: yeah, let's rename it from Lambda to Functions as a Service, to be open to everything. The client-side telemetry working group is the same thing: they named it very broad, but at the end they focus on browsers and mobile devices and ignore everything else for right now. And I think, yeah, it's what you just said: there's the spec, and then there's the implementation. The spec should be broad, and in the implementation we do what people care about right now.
C
We should always take the opportunity to ask: is this something that can be solved generally, or does this truly only apply to Lambda? Or does it not matter if we solve it just for Lambda now and deal with others later, because there isn't anything else right now? As long as we're asking ourselves that question and coming to what seem like reasonable conclusions, I think we're doing the right thing.
A
Yeah, and I reached out to Lee Miller from Microsoft and mentioned we're kicking off this group, and that we'd love some Azure Functions contributors, or that perspective as well. So I'll keep trying to find some other options, but like Severin said, it kind of just depends on whether people participate or not. So we'll try, but we'll see.
A
Sounds like silence; that's a no. So the next item is assessing the current state of the various language implementations. I think Alex and Tyler have done a bit of work on the Java side. I'm not sure about Python or JavaScript, but I think there are potentially four layers (I'd have to double-check; I'm not sure there's one for Go), and we need kind of a current state of where they're at and also where they differ.
C
Yeah, so there is instrumentation for Go and .NET, but there aren't separate layers for them. The layers are there for configuring the auto-instrumentation in languages where it's available, so there's also just a plain layer that has the collector as an extension, which can be used with Go and .NET.
C
So
it's
not
that
there's
there's
no
layers,
just
there's
no
specific
layer
for
for
those
two,
but
there
are
instrumentations
and
to
the
extent
that
we're
going
to
be
defining
behavior
for
instrumentations
going.net
need
to
be
included.
A
Yeah, I think it's worth mentioning the .NET auto-instrumentation is in beta. So at some point we probably will need to look at adding a .NET-specific layer, but we can worry about that once it ships and things like that. So we have the Node, Java, and Python issues right now, and I think someone put up an issue about where the Node and Python implementations differ. So besides Java, do we think we have a good understanding of where Node and Python are today?
C
I know both of them don't currently call flush or force-flush on the meter providers, and Python doesn't even seem to have that implemented on their meter provider. So that's one of the rabbit holes I've been trying to chase down lately. In Node, I think it can be added fairly straightforwardly, because they do have that implementation.
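For context on why the flush call matters (a sketch with a stand-in provider, not the actual opentelemetry-lambda code): a Lambda sandbox can be frozen as soon as the handler returns, so the wrapper has to force-flush buffered telemetry synchronously before returning.

```python
class StubMeterProvider:
    """Stand-in for an SDK meter provider; it only models the
    force_flush behavior under discussion (buffer, then export)."""
    def __init__(self):
        self.buffered = []
        self.exported = []

    def record(self, measurement):
        self.buffered.append(measurement)

    def force_flush(self, timeout_millis=30000):
        # Synchronously export everything still buffered.
        self.exported.extend(self.buffered)
        self.buffered.clear()
        return True

def wrap_handler(handler, provider):
    """Flush telemetry, even if the handler raises, before the
    sandbox can be frozen on return."""
    def wrapper(event, context):
        try:
            return handler(event, context)
        finally:
            provider.force_flush()
    return wrapper

provider = StubMeterProvider()

def handler(event, context):
    provider.record("invocation")
    return "ok"

wrapped = wrap_handler(handler, provider)
print(wrapped({}, None), provider.exported)  # ok ['invocation']
```

The inconsistency being described is that some language wrappers perform the `finally`-style flush for traces but not for metrics, and one SDK lacks the flush method entirely.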
E
Last time I checked, in Node, to provide a configuration option through an environment variable... I think the same has already been implemented in Python. I don't know if that's been released or not. But as we continue testing different services, different propagation, and different instrumentations, we can probably add this to, like, a public matrix that's available somewhere in the readme or something.
A
Yeah
it'd
be
great
to
get
that
into
the
repo
or
something
just
so
we
you
know,
have
a
bunch
of
eyes
on
it
and
we
can
all
track
publicly
where
the
assessment
at
this
at
and
things
like
that,
so
that'd
be
great.
G
The other thing that we observed is the issue that we're also kind of looking at in the messaging SIG, and that's the distinction between links versus parent-child relationships. That comes into play a little bit more overtly in Lambda when you're dealing with any event that's not an API Gateway event, because, yeah, it's a similar kind of messaging client and broker situation.
C
Yeah, that becomes an issue then if you're using something other than X-Ray propagation for your HTTP requests. Like, say you've got W3C trace context that's flowing through the API Gateway just as an opaque header, and you want to use that, but you also want to link to the API Gateway span in case you need to go to AWS support or something like that. Is that what you're thinking?
A
Okay, so I'm not sure if we want to discuss the Telemetry API today and its implications, but it's something new from AWS we can potentially take advantage of. For our first conversations, we probably want to be a bit more general, since this is Amazon-specific, but it's just something to take into account: we could potentially get cold start captures.
C
Something like that. I believe that tomorrow, during the EU-friendly meeting, we should expect someone from the Lambda service team who can also provide some insight.
A
Okay, cool, thanks Anthony. So, Tristan, I'll pose the question to you, and potentially to Severin: are there any specific vendor features you have today that you're specifically looking to get upstream? Anything top of mind? I know you've talked about it a bit.
H
No, I don't think so. The only things we really do differently are that we build the layers for x86 and ARM and include our distro, which has some different configuration options, essentially. So there's nothing really in the libraries themselves that differentiates us. Okay.
A
Great to know. Severin, anything on the Cisco side?
F
So sorry, just one point on the desired upstream vendor features. I know we are talking about features, but are there any areas, like documentation, where things can improve as well, that can be looked at as part of this? I'm just calling it out to find out; I mean, not just features, but anything, like documentation or samples, that might help the community. I don't know whether you wanted a picture from that angle.

Yeah, I mean, all of what I mentioned before would be extremely valuable, right? We started to have a little bit of serverless documentation on opentelemetry.io right now; I think Node.js is the only one where we just created something. So having something on opentelemetry.io that has a little bit of authority would help, and then maybe vendors can build on it: if vendors already have something, donate that, or vendors can pick from that.
A
Yeah, that's a great point. So let's create an action item for the documentation. I think that'll be something we'll need to figure out fairly soon anyway.
C
I mean, right now the documentation is not using layers at all, right? It's just not using a collector in a layer; it's just talking to a backend directly. And even if the documentation right now would just say, "hey, you can find some specific layers here," or things like that, I think that would also be okay. But yeah, of course, at the end there's some dependency on that, that's for sure.
A
Okay,
great
so
I
think
we
have
around
12
minutes
left
and
I
don't
want
to
waste
anybody's
time.
The
last
two
items
are
really
it'd
be
great.
To
have
an
official.
You
know.
Second
TC
sponsor
I.
Think
Carlos
will
be
able
to
spend
some
time
here,
but
won't
have
a
ton
of
time.
A
So if anyone has any suggestions on a second TC sponsor, that'd be great, but we'll probably just have to manage it and update going forward. And then, if you have any contact with other cloud providers or vendors that may have an interest in this, please reach out to them and say this work is going on and we'd love their contributions and feedback.
A
Things like that. So I'll keep trying to push for some Microsoft involvement, and then I'll potentially ping Joshua over at Google and see if they have any interest.
A
So, action items: we need to figure out a documentation home (it seems like the OTel website is probably the place to do that, but it's still an ownership item we have to take care of); we need to create issues for the identified implementation issues; and I'll work on the Lambda channel and try to get ownership of that, or just put a tombstone on it or something like that. How do we potentially want to split up this work?
A
We
also
have
a
meeting
tomorrow,
which
is
a
more
e-friendly
time,
and
the
the
kind
of
Hope
of
the
effort
is
for,
like
the
first
six
weeks
of
spec
work
and
whatever
and
we
we
meet.
You
know
twice
a
week
to
kind
of
actually
collaborate
and
move
more
quickly
and
then,
after
that
it
would
switch
to
a
bi-weekly
kind
of
alternation
if
we'd
have
one
meeting
one
week.
A
You
know
the
other
meeting
the
next
week
and
things
like
that,
but
not
sure
how
we
want
to
specifically
split
this
seven
or
seven
will
probably
need
your
help
since
you'll
probably
be
attending
both.
A
I think that would probably be part of the assessment; I haven't looked at it specifically.
A
Oh, Alex, would you mind just adding the implementation matrix publicly, just so we have that out there?
A
Awesome, thanks. Let's see... yeah, I think we need issues related to everything we called out, so I'd ask the people that raised the issues to create them, since they'll have a bit more detail. But I think essentially this current list would be good, and then the spec assessments as well. So let's try and get these into the repo, and then we can start tracking them over time as well.
A
Any other topics or thoughts? We have about eight minutes left.
I
For the EU-friendly meeting tomorrow, who's going to be there? I can only join in the second half of it. I think it would be worthwhile to take some of the things discussed today over there as well, so I just wanted to check if anybody else is joining tomorrow.
B
Just a random question about Lambda: the last time I looked at it was a while ago, but if I recall, some of the issues around header propagation and things like that had to do with other AWS services that only understood how to propagate X-Ray headers. I'm curious, does anyone know if that's still kind of a blocking issue with W3C propagation?
C
It's not necessarily a blocking issue. It was, like, API Gateway, for instance: it emits traces, or it will emit trace segments, to X-Ray for you. It only understands the X-Ray propagation format, and it only sends that data to X-Ray. So even if it understood the W3C trace context, you would actually end up probably in a worse spot, with some missing span information: if ALB or API Gateway created a span for you, it would send that information to X-Ray, but you're looking for it in Lightstep.
C
I think for the most part, transiting things like W3C trace context through API Gateway or ALB is not a problem, because those are in HTTP headers that are treated as, you know, opaque black boxes to them. They'll just pass that through to your function.
C
If your function is looking for the right propagation format in the right place, it'll be able to pick that up and things will work. And then those segments that API Gateway creates on the side for you in X-Ray are just kind of floating, unattached to anything else, and probably never looked at, because you're not using X-Ray.
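The pass-through being described can be sketched as follows (a hand-rolled illustration; a real implementation would use an OpenTelemetry propagator to extract from the event's headers):

```python
def extract_traceparent(event):
    """Pull a W3C traceparent header out of an API Gateway-style event,
    which forwards it as an opaque header. Returns
    (trace_id, parent_span_id, trace_flags) or None if absent/malformed."""
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
    value = headers.get("traceparent")
    if value is None:
        return None
    parts = value.split("-")
    # Expected layout: version-traceid-spanid-flags.
    if len(parts) != 4 or len(parts[1]) != 32 or len(parts[2]) != 16:
        return None
    return parts[1], parts[2], parts[3]

# Hypothetical event payload as delivered to the function, with the
# header passed through untouched by API Gateway.
event = {"headers": {
    "Traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
}}
print(extract_traceparent(event))
```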
B
I'm curious how important the API Gateway tracing data is to triaging and investigating Lambda issues.
C
It can be a blind spot for them that they want to dig into but can't. And it might also be good to hear from some of the folks on the Lambda service team about what their plans are in that area.
C
I know that that team is looking at that; I don't know if there's anything else I can say at this point. No.
C
No, you can represent an X-Ray trace ID as W3C, because it's still just a 128-bit trace ID. It's just that the semantics of those bits are different when it's an X-Ray trace ID than when it's a fully random trace ID.
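To illustrate the "same 128 bits, different semantics" point (an example, not from the meeting): an X-Ray Root such as `1-5759e988-bd862e3fe1be46a994272793` packs a 32-bit epoch timestamp into its first 8 hex digits, while W3C treats the whole trace ID as opaque, so the same bits can be reframed either way:

```python
import datetime

def xray_root_to_w3c(root):
    """Convert an X-Ray Root value (1-<8 hex epoch>-<24 hex random>) into
    the 32-hex-digit W3C trace-id; the bits are identical, only the
    framing differs."""
    version, epoch_hex, random_hex = root.split("-")
    assert version == "1" and len(epoch_hex) == 8 and len(random_hex) == 24
    return epoch_hex + random_hex

def xray_root_timestamp(root):
    """The first 8 hex digits of an X-Ray trace ID encode a Unix
    timestamp; a fully random W3C trace-id generally won't decode to a
    plausible time, which is the semantic mismatch discussed here."""
    epoch = int(root.split("-")[1], 16)
    return datetime.datetime.fromtimestamp(epoch, tz=datetime.timezone.utc)

root = "1-5759e988-bd862e3fe1be46a994272793"
print(xray_root_to_w3c(root))  # 5759e988bd862e3fe1be46a994272793
print(xray_root_timestamp(root))
```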
C
Right, and if they're not generating it with the right ID generator, then it will be an ID that gets dropped at X-Ray, but that happens at the X-Ray endpoint, for consumers who are dealing with context propagation for X-Ray context. You can also send non-X-Ray trace IDs through the X-Ray context propagation mechanism, because again, it's just bits; it doesn't care. So you can do that just fine, but X-Ray will drop those segments when it gets to them.
B
A related question is, like you said: even if API Gateway adopted understanding W3C headers, it still probably would only be able to send data to X-Ray, I'm assuming, since that's a big shared service, which just gets into very Amazon-specific stuff. But I think this is a question, you know...
B
As
a
community,
we
would
love
to
start
understanding
more
from,
like
all
the
different
Cloud
providers,
which
is
you
know,
how
do
you
is
there
the
possibility
of
having
like
a
cheap
fire
hose
where
you're
not
asking
x-ray
to
do
a
lot
of
work
on
your
behalf?
But
you
would
be
essentially
having
to
ask
it
to
to
forward
all
the
data
to
wherever
it
is.
You
do
actually
want
to
analyze
it
because
that
seems
to
be
like
you
know,
for
for
any
hosted
service.
H
Yeah, exactly; you can't do this right now, I assume, but even just having API Gateway able to, say, send X-Ray traces to this collector, so we can convert them. It doesn't need to be a firehose. I assume you can't right now set a URL for it to send to.
H
It's easier in the user's case. I was thinking of whether we could just get them to our collector and not have to ingest them some other way, but yeah.
A
Awesome, I think we're at time. It was a great kind of kickoff meeting. I think we have another one tomorrow, sort of for the EU, but thanks everyone for joining, and now we'll communicate over Slack and start getting some of these items into the GitHub repos too.