From YouTube: 2021-03-25 meeting
A
A
A
C
But I did want to check in after our exchange on GitHub, just to make sure that things were clear on that front and also that there weren't any hard feelings, because it wasn't anything that I meant for people to take personally.
B
And anyway, do you want to update?
D
D
B
For the SDK, I committed my Python support and I got feedback from someone who said we need to support the X-Ray case. For example, if the upstream is not instrumented by the X-Ray propagator and the Lambda and the API Gateway are in passive mode, in this case we need to extract the trace ID from another propagator, from the HTTP header directly, right? So that's what we need to support.
A
B
I mean, we need to support the non-AWS case. Take an example: if the upstream does not inject the X-Ray propagator, and the Lambda and API Gateway are using passive mode, in this case the sampled flag is always zero in the Lambda environment, yeah. In this case, we need to extract the trace ID from the HTTP headers.
D
Yeah, I think, like in Java, it wasn't the interface that changes the type parameter. Well, I guess you just have to examine the type of the request somehow, whether it's the API Gateway request or not. Yeah, yeah.
B
Hi, Alex.
B
C
Makes sense. I appreciate you putting in the work to make this stuff able to live upstream.
B
C
B
I think anyone can help answer this question. As far as I know, in Lambda, if the X-Ray propagator works, we can extract the trace header from an environment variable; we can check this in the AWS Lambda documentation. But it doesn't mean that the other propagators would not work, because for the others you can probably use the HTTP headers, right? So I think the others also work correctly.
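The environment-variable extraction mentioned above can be sketched roughly as follows. Lambda exposes the X-Ray trace header through the `_X_AMZN_TRACE_ID` environment variable; this is a minimal, illustrative parser, not the OpenTelemetry propagator API:

```python
import os

def parse_xray_trace_header(header):
    """Parse an X-Ray trace header such as
    'Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1'
    into a dict keyed by 'Root', 'Parent' and 'Sampled'."""
    fields = {}
    for part in header.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key.strip()] = value.strip()
    return fields

def trace_context_from_lambda_env(environ=os.environ):
    """Read the trace header Lambda puts in the _X_AMZN_TRACE_ID
    environment variable, if present."""
    header = environ.get("_X_AMZN_TRACE_ID")
    return parse_xray_trace_header(header) if header else None
```

For other propagators, as noted in the discussion, the equivalent context would instead come from the incoming HTTP headers rather than this environment variable.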
D
Similarly, if you're using anything asynchronous, like Lambda and S3, where the header has to go through multiple services: if the original wasn't the X-Amzn header, then that's also not going to get propagated through those. So that's...
C
C
But in the general Lambda case, people will have to, if they have edges where they're tracing through an application and then it's going off to, like, a Lambda function, they're going to need to switch to using those headers for those cases.
D
For those cases, yeah. So, like, Lambda and, sorry, AWS SDK instrumentation: I haven't written it in my spec doc yet, but AWS SDK instrumentation, at least for now, until this gets improved, should just always use the X-Ray header, because there's no advantage to any other one.
D
D
I think if the instrumentation was forcing the header, the only use case where the user would not seamlessly get propagation is if they use API Gateway in non-HTTP-proxy mode, where they're not examining the headers and they get lost. Otherwise, at least instrumentation can force the header, so that makes it easy for the user.
C
It does seem like the best course of action at this point is for them to make sure those requests are using the X-Amzn headers. Yeah, I feel like that exposes a configuration issue we have in OpenTelemetry right now, which is: I don't know how easy it is to say "I want to use, like, B3 headers, except when I'm using this one client, where I want to use a different propagator." I don't know how tricky that is to do right now in most languages, I mean.
D
D
Just use a composite, where library instrumentation accepts OpenTelemetry as a configuration, so that it can have a different propagator than the global one. With, like, an agent, there isn't a good way; you just have to enable all the propagators as a composite, as Alex said, and that's of course okay, but you might not be able to support that performance-wise if you're... yeah.
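The composite idea above can be sketched like this. The class and function names are illustrative, not the actual OpenTelemetry propagator API: a composite simply tries each configured extractor in order, and a single client could be handed its own extractor list instead of the global one:

```python
class CompositeExtractor:
    """Minimal sketch of a composite propagator: try each configured
    extractor in order and return the first context found. Extractors
    are callables taking a headers dict and returning a context dict
    or None."""

    def __init__(self, extractors):
        self.extractors = list(extractors)

    def extract(self, headers):
        for extractor in self.extractors:
            context = extractor(headers)
            if context is not None:
                return context
        return None

def extract_b3(headers):
    # Hypothetical B3 extractor: only looks at the trace-id header.
    trace_id = headers.get("X-B3-TraceId")
    return {"trace_id": trace_id, "format": "b3"} if trace_id else None

def extract_xray(headers):
    # Hypothetical X-Ray extractor keyed on the X-Amzn-Trace-Id header.
    header = headers.get("X-Amzn-Trace-Id")
    return {"trace_id": header, "format": "xray"} if header else None

# The global propagator enables everything as a composite; a specific
# client's instrumentation could be configured with just one of them.
global_propagator = CompositeExtractor([extract_xray, extract_b3])
```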
D
C
C
Someone, I think it might have been on Slack, was having some confusion around this, which is why I asked. But I guess they also have to know, then, that, like, yeah, the stuff that they'll be seeing in their Lambda function needs to use those headers.
B
For the SQS case, I'm going to follow Java's code to handle the trace link. In the case, I mean, of a batch of events: if one Lambda SQS event contains a batch of SQS messages, in this case, how do we handle that?
C
Yeah, I think there's a fair amount written in the semantic conventions around messaging and pub/sub, but I actually feel like this is an area where, as a project, we haven't done a lot of research or put a lot of guidance out there.
C
C
It seems like with these other kinds of messaging systems there's a lot of trickiness. It's a little bit harder to say there's one definitive right way to do it, so I think, as a community, we're going to have to put more effort into figuring that out.
C
C
So I think there's a lot of confusion there, Lei. I predict there's not going to be one right answer for a while.
C
Unfortunately. If anyone wants to champion sorting that out, that would be a helpful thing to do over the next month or two, since we want to kickstart writing a lot more instrumentation; this is going to come up more and more.
D
D
So this doc is just a guidebook for how to instrument Lambda functions; it's to help make sure that we're consistent across languages in how we model the spans and the parenting. So now this is finally merged. It only goes through two types of events right now: API Gateway and SQS.
D
So, basically, the default. We assume that Lambda is mostly used, sort of... I don't even, maybe we should change this at some point. It just happens that, for the default tracer, we just assume it's a server unless it's not, because API Gateway tends to be one of the most commonly used ways to use Lambda, from our experience.
D
So unless it's a messaging event, it's a server span; it has the FaaS conventions, and this describes how you can fill in these three based on what's in the Lambda. It's very confusing, because Lambda has what they call the context, so I always have to call it the Lambda context; it's not OpenTelemetry context, but they'll provide some data.
D
B
B
D
So we should still, like, if it's X-Ray, we should still check these. I think checking the HTTP headers is the easiest way to know whether it's X-Ray or not, because if it's not sampled and it's X-Ray, there won't be any other context inside the HTTP headers that wasn't started by Lambda, right? So we should still check, but I think that's the simplest way for the user, rather than trying to determine it ourselves.
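The fallback logic described above can be sketched as follows. This is a rough decision function under the assumptions discussed (names are illustrative): prefer the X-Ray context from the environment variable when it is sampled, otherwise look for a context from another propagator in the incoming HTTP headers:

```python
def choose_parent_context(env_trace_header, http_headers):
    """Sketch: pick a parent trace context for a Lambda invocation.
    env_trace_header is the raw _X_AMZN_TRACE_ID value (or None),
    http_headers is the incoming request's header dict."""
    if env_trace_header and "Sampled=1" in env_trace_header:
        # A sampled X-Ray context from Lambda itself takes priority.
        return ("xray-env", env_trace_header)
    # Not sampled (or absent): any context in the HTTP headers must
    # have come from the caller, not been started by Lambda.
    for name in ("traceparent", "b3", "X-B3-TraceId"):
        if name in http_headers:
            return ("http-header", http_headers[name])
    # Fall back to the unsampled X-Ray context, if there is one.
    return ("xray-env", env_trace_header) if env_trace_header else (None, None)
```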
B
Okay, I think I got your point.
D
D
So this also, like: since this is only ever going to be the X-Ray format, we don't use the composite propagator to parse the environment variable; we just use the X-Ray propagator directly. And then, if we have to read trace headers, we use the configured composite for that.
D
D
So all of those conventions apply. With API Gateway we're usually setting up routes, like you'll have a path parameter or something like that, and so it does provide this property called "resource" in the event, which is equivalent to our http.route attribute. Sometimes it's hard to fill that in based on the framework, but it's pretty easy for Lambda. And then otherwise...
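The mapping described above can be sketched for the API Gateway proxy event (payload format 1.0), whose "resource" field carries the templated route. A minimal, illustrative attribute filler, not any SDK's actual implementation:

```python
def http_attributes_from_api_gateway_event(event):
    """Sketch of filling HTTP semantic attributes from an API Gateway
    proxy event. The event's 'resource' field is the templated route,
    which maps onto http.route."""
    headers = event.get("headers") or {}
    attrs = {}
    if "httpMethod" in event:
        attrs["http.method"] = event["httpMethod"]
    if "resource" in event:
        attrs["http.route"] = event["resource"]   # e.g. "/users/{id}"
    if "path" in event:
        attrs["http.target"] = event["path"]      # e.g. "/users/42"
    if "Host" in headers:
        attrs["http.host"] = headers["Host"]
    if "User-Agent" in headers:
        attrs["http.user_agent"] = headers["User-Agent"]
    return attrs
```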
D
You just go through the HTTP headers and whatnot to fill in as many attributes as you can. SQS is where it gets a bit more complicated; it's messaging, and with Lambda, Lambda only ever provides SQS batches to functions. There's no way to configure your functions in, like, a single-message mode or something like that, so handling of each message is always going to be in user code, and that's what makes it a bit unfortunate, because we definitely want spans for each message.
D
It's just not going to be easy. So I'll talk about the message in a bit, but basically we have two spans: one for the SQS event, which is a batch, and one for the message, if we can, both of them of type consumer, SQS. Also, multiple messages in the batch could be from different queues.
D
D
I have some drift here I have to fix; this was left over from a previous thing. That's not valid: the consumer span should not... there's no server span when using messaging, so sorry, this is wrong. For every message in the event we check if it has the AWS trace header attribute. Lambda will populate that system attribute if it was able to propagate context, usually from when someone calls the SQS SendMessage request within...
D
If it's the AWS SDK instrumentation scope, OpenTelemetry is supposed to be sending a trace header, in which case it would be populated here, so you can add it as a link. And so the event span would have as many links as messages in the batch, possibly, if they're all different requests. And then, so, the SQS message: this is the problem where, since in Lambda it's user code that's handling each message, the best we can do in many cases is to provide a helper for the user to create a span around their message.
D
Hopefully they do. So if it's auto-instrumentation: even auto-instrumentation can't do it, because it might just be a for loop, no method call or anything, and they're just handling a message inside a for loop. So, like in Java, we provide a helper, a TracingRequestHandler; if the user extends that for handling their message... it's a tracing message handler; if they extend that, then that's instrumented.
D
If they don't, then the message is just not instrumented; there's nothing we can really do about it, unfortunately. But if so, then yeah, each message will have the name of the queue that sent it as the span name, and then the standard messaging attributes, and we'll check the trace header again to give it a single link, if that is available.
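The per-message handling above can be sketched against the shape of a Lambda SQS batch event, where Lambda surfaces any propagated context in each record's AWSTraceHeader system attribute and the queue can be recovered from eventSourceARN. Function names are illustrative:

```python
def links_for_sqs_event(event):
    """Sketch: collect one link candidate per message in a Lambda SQS
    batch event, taken from the AWSTraceHeader system attribute that
    Lambda populates when context was propagated."""
    links = []
    for record in event.get("Records", []):
        header = record.get("attributes", {}).get("AWSTraceHeader")
        if header:
            links.append(header)
    return links

def message_span_name(record):
    """Sketch: name the per-message consumer span after the queue the
    message came from, taken from the record's eventSourceARN."""
    arn = record.get("eventSourceARN", "")
    queue_name = arn.rsplit(":", 1)[-1]
    return f"{queue_name} process"
```

The event span would carry the whole list of links, while each per-message span, as described, gets at most a single link.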
D
C
C
I feel like we haven't written down a whole lot of guidance there, so... but what you're doing looks fine.
D
One issue I found a long time ago, which I didn't have enough resources to follow up on: I was trying to discuss a lot of these corner cases and try to figure out the situation, but it's hard. Like, I don't think we really got to any end result; we were just chatting. Armin and I were trying on this issue, basically, but maybe it's a good refresher to see what all the cases are that can happen.
C
Yeah, yeah, exactly. And this relates not just to FaaS but also to things like Kafka, queues and all of that stuff, but I suspect it can get...
C
My worry is there's, like, the corner case where the graph just gets, like, gigantic, basically, and is there a way for us to model this where that's not... I mean, you have the same issue with caching, right? Like, you'd want to know...
C
You'd want to make a link, for example, to know which... if you're trying to hit something and the cache wasn't valid, so you have to populate the cache, then the thing that invalidated the cache: you would want to have a link to that, potentially, to be able to connect the two together. But that's, like, a really tricky thing to model, and you'd naturally think, well...
C
C
So I don't know. Like, sorry, I'm being kind of vague. I just think we're going to start using links as, like, a hammer to deal with every single different kind of workload, basically, and I don't know what the end result of that looks like, or if we're going to need to have more differentiation there.
C
And in practice, I'm not actually sure if anyone currently supports links in their backend. I would be curious; I don't think we do yet at Lightstep. You could correct me if I'm wrong, Alex; I'm pretty sure we don't. I know we have a ticket for it, but... so yeah, it's going to be... that's, like, I feel like we're still just trying to get to the end of getting transactional workloads fully instrumented and done, but this stuff is kind of like the next level.
C
It's coming up on 4:30 and I'm going to have to run. I did just want to follow up about that GitHub thread and just make sure... it sounds like we're all on the same page. You know, it sounds like y'all are putting the effort in to get this stuff moved upstream and residing in the OpenTelemetry organization itself, but I did want to check in about whether or not that is putting, like, a huge burden on you all.
D
C
D
F
D
We probably want to publish anything users use from an OpenTelemetry account. So, from what I understand, CNCF has the organization; we have a CNCF OpenTelemetry account, this one account for OpenTelemetry, and then, like, Nikita got a user account assigned with some permissions on it, and we would need another user with some permissions for Lambda.
B
C
A
B
Yeah, last Friday, I think, I talked with her. She didn't understand why we need to use the CNCF account and not an AWS-owned account, but I think I summarized the Q&A, so hopefully she can help us drive it.
B
D
So that's the CI/CD, yeah, yeah. I mean, to be very blunt, of course you're not expecting anything, but, like, I would think of this as: if AWS were just to pull out of this project, CNCF and OpenTelemetry are still supposed to be publishing layers, right? So, like, I don't think it makes sense to use an account that's not owned by OpenTelemetry.
B
And at the beginning it was because we wanted to set up everything in the AWS downstream, but now, since everything went upstream, this would not be labeled as AWS's, right?
B
B
Yeah, but they are represented; you know, making a cost estimation also takes time, right? Like, I drew a table.
B
Oh, Alex, please add me as a maintainer of the repo, because, yeah, probably Monday or Friday you were not here, so I found I cannot merge my code, the UI support.
F
Yeah, I mean, the workflow that we've been using in the other repos is that whoever the maintainers are can do the merging, but since there are only, like, two or three people creating code here, it probably doesn't make sense to not have you as one of the maintainers here, so... so.
G
Gotta love conference audio, yeah; must be a thing about names and backgrounds. So, just real quick: hi, I'm Alex. I'm a senior engineer at AWS on the serverless application experience team, so I basically make a lot of the AWS tooling that comes out just for serverless in general. I'm mostly here listening, and I guess in case any questions relevant to my space came up, I might be useful, but mostly happy to just listen.
G
I don't want to jump straight into something where I don't have a lot of background and start telling everyone what to do, so, mostly listening.
D
G
So I do not work on that team per se, but I know the people who work on that, yeah. It's all one big happy family, really.
G
But, you know, you might have seen from my team things like the Serverless Application Model, SAM CLI, some other things you may see in the near future.
D
B
We are going to release two kinds of... one is the SAM, AWS SAM, and another is Terraform. So I have some experience with how to use SAM, because my initial version of this support was by SAM.
B
B
No, I mean: I published a layer to one account, and I want to clone this Lambda layer to another account, from testing to production, yeah. So I just...
G
Is this something where the right thing to do is to clone the layer across accounts, or would it make more sense for your use case to set up the permissions so that the other accounts could just use that layer directly, and then kind of, you know, have a single source of truth for it? There are cases where both might be appropriate, so it's not entirely a rhetorical question.
B
G
Yeah, it's probably a thing where I'd want to see a little bit more about what your use case is. I don't want to necessarily give you an off-the-hip answer and then have it turn out to be a bit wrong. Like, one possible way, if you're using SAM CLI, for example, is you could deploy to different accounts based on your AWS profile.
G
And there are things like samconfig that help you to save those settings. So you could, you know, as you're doing builds and deploys... if you're using CloudFormation, which I assume is where the source of truth is for the Lambda layer, you could just deploy to your dev or test account on a regular basis, or, say, deploy to a dev account over and over again and create some sort of CI/CD pipeline.
G
So, you know: create a Lambda function, attach that layer to it, run a bunch of stuff and make sure it's doing what you expect; then, and only then, deploy to the production account and increment its version number. And you can do that with things like CodePipeline, where each stage targets a different account, and then you just have the one account with the permissions to do those deployments. So, off the hip...
G
That's how I would probably do it. I'm happy to sync more with you offline if you want to go into more detail about what your use case is, just to make sure that what I'm saying... yeah, that's...
B
G
Doing it that way, instead of managing multiple profiles on your laptop, you're not going to accidentally start deploying to your production account because you have the wrong profile set. You can, you know, on your dev laptop, just not give yourself permissions to touch that production account, so you cannot accidentally deploy a wrong version, a broken version, or even just increment needlessly.
B
G
A
G
Yeah, admittedly I've been away from Lambda layers for a little while, but that does sound right.
F
G
Well, that also becomes one advantage of having a pipeline, then: you can have a script that is just going to publish to all of those regions, or you can have, you know, multiple CloudFormation actions that are going to deploy one region at a time.
B
G
Up to you what model makes sense for you, but you absolutely can use one account, okay, and that does make it easier for a layer, because you can just say, you know: here's the ARN, substitute your region and it will work, rather than having someone do a mapping of "depending on which region I'm in, it's going to be a different account."
F
A
F
Also worth noting that, if we plan on doing that, we also should make sure that the version number matches across the different regions, like the same version number for the layer. Because if we do any kind of building in one region and then, when we're ready to publish, we publish to all regions, the numbers wouldn't match.
B
B
Okay, another thing: not sure if you know that there is a region called the China region; it's isolated, with two regions in China, yeah, yeah, ZHY and BJS, right? So far that sounds great, yeah. So that means we have to apply for another, special account for this region, right? I haven't decided. / I believe that would be correct, yeah. Okay.
G
So the interesting thing there, that may help for making the division make sense, is that the ARN would also then be different, because the partition name would be.
B
G
Oh, I see. So if you look at the way an ARN is created, there's essentially... partition is one of the values there.
B
G
It can be useful. That's an interesting question that I'm thinking of raising back to the Lambda team. You know, off the hip I don't have strong thoughts on this, so I don't want to take a strong stance without researching it a bit more, basically.
G
Yeah, keeping published versions the same without doing what my fellow Alex suggested, of maybe doing, like, repeated deployments to make them line up.
G
G
And it would only then really work for "latest," potentially, because if someone were to try to do a notion of versioning... like, let's say... you might want to take care if you're advertising that, you know, version numbers are going to align, because let's say you publish version 50, and version 50 has a bug.
G
E
A
G
An argument could be made for not putting too much stress into having the numbers line up, and maybe having that as a best effort.
B
I searched on Google how to get the latest version of a Lambda layer. If you search this sentence in Google, you will see that someone suggests you can use the AWS CLI to check the versions of this layer name; then you will always get the latest version. So how about we give this solution to users? I mean, we don't need to keep this version number aligned, and the customer just needs to run the command to check the latest version.
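The CLI-based lookup suggested above can be sketched against the response shape of `aws lambda list-layer-versions --layer-name <name>`, whose LayerVersions entries carry a Version number and a LayerVersionArn. A minimal, illustrative helper over that response:

```python
def latest_layer_version_arn(response):
    """Sketch: pick the newest layer version from a parsed
    list-layer-versions response and return its ARN."""
    versions = response.get("LayerVersions", [])
    if not versions:
        return None
    # The entry with the highest Version number is the latest release.
    newest = max(versions, key=lambda v: v["Version"])
    return newest["LayerVersionArn"]
```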
F
Yeah, I guess... and I guess in my mind I would almost add this to, like, releases from the repo, where you would be able to go and identify what the Lambda layer version for a particular release looks like across all of the different regions, and then, that way, if you have to do a rollback, you can create, like, a new release, but it has, like, a new version number. Well, there...
G
Maybe there is something to be said for... you know, one of the questions that has come up is: how are you doing discoverability in general, right? Because for someone to use this layer, they're adding it to their CloudFormation, to their Terraform, to their update-configuration calls for a function. So, you know, they have to know your account; they have to know the name of the layer.
G
Where are they finding this to begin with? And maybe, you know, if you go to that place, you could say: here is the full ARN of the layer for you to use, including the version number. And then they're using that, and presumably it's just going to... and then, if you're doing discoverability of releases, you might say: here is a bundle of ARNs associated with this release. And given that they kind of have to update it for different targets anyway, there's potentially a reasonable story there.
G
But I think that it is certainly feedback, the fact that we're having this discussion, that I can take back to some people I know on the Lambda team.
B
G
Yeah, I am out of touch with the latest Lambda releases, it would appear, then. And yeah, so, yeah: if an alias model exists, then I would just use that, and that would make a lot of sense; that takes away a lot of this problem, and then you can alias it with whatever version scheme you see fit.
G
G
I can't honestly tell you; I am not familiar. Admittedly, I work on serverless tooling broadly, so I'm not... although I suppose we're in a public meeting, so I wouldn't be able to say, but I am honestly not following every Lambda release before it's out.
G
But yeah, so, anywhere where you would have an alias, that would be a useful thing to do for something like this.
G
I can imagine, when you're doing... I could imagine, if you're having, like, releases on GitHub, for example, you do get, like, a text field or even a series of lines, and that...
G
F
As long as the CI/CD pipeline makes it, like, easy enough to, like, copy-paste whatever the output from the publishing script is... at the very worst...
G
...your script could, you know, assemble something in a database row or something. I mean, I imagine that you could assemble it from the output, but there are a lot of different ways you could collate all the different version numbers and then use it to update a GitHub release post. I've made scripts like this before for similar use cases.
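The collation step described above can be sketched as a small helper that turns per-region layer ARNs (for example, collected from a publish script's output) into a markdown table for a GitHub release post. The function name and input shape are illustrative:

```python
def release_notes_table(arns_by_region):
    """Sketch: render a dict of region -> layer ARN as a markdown
    table, suitable for pasting into a GitHub release post."""
    lines = ["| Region | Layer ARN |", "| --- | --- |"]
    for region in sorted(arns_by_region):
        lines.append(f"| {region} | {arns_by_region[region]} |")
    return "\n".join(lines)
```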
B
I had this question before. So, in my imagination, the easiest, the simplest way is: we keep the latest version as the latest version. I mean, the latest version number is the latest version, so we don't need to save this in a database. So every time we release a new version, we just check what the latest version is and update this latest ARN in the release notes. So this is very simple.
G
I may have to drop momentarily to get ready for another meeting. I'm sorry that I was actually slightly late here as well, for the same reason; something about meeting density. But it is nice to meet everyone, and yeah, feel free to reach out to me; I'm not too hard to find, either on GitHub, Twitter or anything like that. So if you wanted to follow up with me on any of those questions, feel free to do so.