From YouTube: 2023-04-04 meeting
Instrumentation: Messaging
A
Hey guys, go ahead and add any items if you have them today. I think we already have a fair amount.

A
So the first thing I wanted to discuss briefly was cloud function docs generically on the OpenTelemetry website, and where we think this might make sense. I think it'd probably be preferable to leave them out, or not have a standalone AWS Lambda section. My initial thought was maybe just to have a cloud functions section, like we do for the K8s operator or for the collector instrumentation, but I'm also open to different ideas as well. So I'm not sure if anyone has any opinions here.

A
So I think it might potentially be nice to have a cloud function section itself, but I could also see how, you know, other compute types may start asking for it as well, and then it gets to be super cluttered. So I'm generally flexible, but not sure if anyone has a strong opinion.

C
I think it makes sense to try to put, you know, that doc at the top level, like what you're talking about. Just because it has very different documentation needs from, like, Kubernetes or regular application-specific installation.

A
Okay, so we can talk about the AWS docs later. Aaron, I know we briefly talked about the GCP side. I guess we're still going to — I'm going to talk to the docs team and try to get kind of a placeholder section made, and I think as we produce the community layers we'll also produce community documentation. But I'm not sure if you all have an internal timeline or thoughts around getting some GCP stuff as well. No, no real expectations on our side, though.

B
Okay, yeah, that's good, because honestly we're not really sure. I talked to the team and we're not super sure on our strategy here, in terms of what kind of special configuration is needed, especially for metrics. So, I mean, the KubeCon deadline definitely won't happen, unfortunately. Yeah, that's—

B
The reason I was going to ask: do Lambda and Azure have documentation on their, you know, individual Amazon or Microsoft websites as well?

A
So Azure, I don't think, has really anything especially OpenTelemetry-related around Azure Functions. Anthony, I'd probably defer to you on the AWS side. I know some vendors have their own docs as well.

B
Okay, and how do you see the documentation there versus opentelemetry.io coexisting?

D
Those documents are largely around how to construct the ARNs for each of the layers that are released, and how to add them to your functions. So I think if there were general documentation for using the layers available on the OTel website, we would probably largely point to that, and then call out anything specific to the ADOT distribution of those layers on our site.

B
Okay. I might want to check with our tech writers as well to see what they would prefer, but I'm definitely in favor of having something on the OTel website.

A
Yeah, I think having just a single source of truth for a lot of this instrumentation guidance, and centralizing that on the website, would be excellent. Of course, more than happy to link out to vendor guidance and things like that as well from the OTel docs. It's just nice to have kind of a centralized place that, you know, will direct you.

A
Okay, so we'll do some more work on that. I'll talk to the docs team, and then also take a look at the ADOT Lambda docs and see if we can start producing just the initial drafts and things like that, and kind of think of how and where it'll make sense to place them.

E
Yeah, so the AWS trace ID environment variable thing — how to use it is still a question that we haven't fully answered. I'm hoping we can finally finish that today. The issue now is, after pull requests have gone out to update AWS SDKs to use the trace context found in that environment variable as a span link, there's been some pushback on doing that and solely having it as a link. There are suggestions of doing it as a configuration variable that will, say, make it a parent.

E
There's a suggestion of: if there's no trace context found in the Lambda event, then use it as a parent. I'm wondering if people have some thoughts on what they think would be most appropriate, and then I can suggest that to the spec, and hopefully we can be done with this environment variable confusion.

D
So I think if you go back to the original issue where we were talking about this, I had put in some example code of how you could construct an event-to-carrier implementation that would pull information out of the event, and also optionally augment that carrier with the value from the environment variable if there wasn't already a value present in the carrier. Then that would be handed to whatever propagation chain is in use, and if the X-Ray propagator is configured, it would then find that in the carrier and be able to use it.

E
Okay, yeah — but that's also a little implementation detail. The core of it is that if there's no trace context found in the event, it would fall back to the trace ID environment variable, right?

D
If that particular field was not in the carrier that came out of the user's carrier extraction from the event, then it could be augmented with that. And then the propagator is separate, right? It may happen that the user is using a W3C trace context propagator, at which point the X-Amzn-Trace-Id field in the carrier will never even be looked at. Or if they're using the X-Ray propagator, then that field will be inspected, and if it was added from the environment — if not already present — then yes, that gets the behavior that you're describing.

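A minimal sketch of that split — carrier augmentation on one side, propagator selection on the other — using a plain map in place of a real TextMapCarrier and trivial stand-ins for the two propagators' extract steps (the real OTel Go propagator APIs are richer than this):

```go
package main

import "fmt"

// Carrier is a simplified stand-in for an OpenTelemetry TextMapCarrier.
type Carrier map[string]string

// Each propagator's extract step only looks at its own field:
// the W3C propagator reads "traceparent"; X-Ray reads "X-Amzn-Trace-Id".
func w3cExtract(c Carrier) (string, bool)  { v, ok := c["traceparent"]; return v, ok }
func xrayExtract(c Carrier) (string, bool) { v, ok := c["X-Amzn-Trace-Id"]; return v, ok }

func main() {
	// A carrier augmented from the environment variable: only the X-Ray
	// propagator will ever inspect that field.
	c := Carrier{"X-Amzn-Trace-Id": "Root=1-abc;Sampled=1"}

	if _, ok := w3cExtract(c); !ok {
		fmt.Println("w3c propagator: field never looked at")
	}
	if v, ok := xrayExtract(c); ok {
		fmt.Println("x-ray propagator found:", v)
	}
}
```

The point is that augmenting the carrier is harmless under a W3C-only configuration — the added field is simply never read.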
E
So: if there's no trace context in the event, use the trace ID environment variable. Would you think that's appropriate, and approve it? I mean, assuming that—

E
It just seems like a bigger change than trying to correct the usage of the environment variable, because right now the spec already says to only use it as a link, essentially. And if we want to resolve that before the bigger change of getting an event-to-carrier into the spec, I think that would be nice.

D
I think specifying an event carrier is the appropriate thing to do here at this point. Yeah, it's more spec than simply saying "use a fallback," but I think it's an abstraction that's needed in order to effectively have those sorts of specifications, because right now we've got a bunch of variety in how that is handled across the languages. There's no uniformity at all.

C
Would you be willing to take that spec work on, Anthony?

C
I just feel like, you know, I could make an attempt, but I have a very incomplete view of what you envision would need to happen in that PR, and I feel like it would just result in a lot of back and forth. Yep.

E
Same. I mean, I think I'm seeing it clearly now in this context that it's sort of a way to — because there are multiple places that trace context could be coming from, simply doing a getter and setter isn't appropriate. And so you're proposing, or have proposed, this event-to-carrier where the user can sort of pick and choose, potentially, where they want context to be coming from. Is that sort of — oh wait, okay, good.

D
Correct, yeah. And hopefully the comment on the original issue that I just linked to can provide some additional context. I don't know if we can talk through that quickly.

D
Right — if you could quickly go back to that, though, I want to call out one thing on there. That second code block there — right here, yeah — the composite event carrier: that is, I think, how this ought to be handled. So the user can provide a base function that says, "here's how I want the carrier to be extracted from the event." We can provide this composite in the propagation implementation, and it will use the user-provided function.

D
If the environment variable is there, it can then set it into that carrier as well. This doesn't check to see if it exists in the carrier, but that should be an easy change — only set it if it doesn't already exist in the carrier.

E
Yeah, the only thing I quickly noticed here: the example — Go — makes perfect sense; you're wrapping it and passing this in with the event-to-carrier. But in completely auto-instrumented languages, we're going to have to have defaults that make the most sense, I guess, and it's not going to be— that's—

A
Then we can go async with this, yeah? We can keep talking over Slack, no problem. Collector logging — another one from you, Tristan. What's going on?

E
No — okay, this is simply — the issue has come up at Splunk with a user. So the logs from the collector just go to standard error — all the logs — and getting those somewhere besides CloudWatch... I was curious if anybody had dealt with this, and either — both — I mean, you could run an—

E
I guess, like, another agent within the Lambda — add another layer — or pulling it from CloudWatch seems to be the other option. But it seems like it would be nice if the collector could forward those logs itself, because it already can export logs through exporters. But that seems to have been rejected. And if anybody knows anything about this in general, or about the collector not accepting the ability to forward its own logs through exporters — I'm not sure where to go with this.

G
So I was going to say that I have a task on my to-do list to try and support OTLP configuration for exporting telemetry from the collector. This was specifically around traces and metrics, but obviously logs will fall into that same bucket. Currently that's not possible, but I don't think it was a strong decision against being able to export the data, as much as we just didn't have the means to do it yet. Okay.

G
Yeah, just to be clear, the collector itself uses the OTel SDK for its telemetry, so I guess this will be pending on the OTel Go SDK having support for exporting.

D
Logs. The OTel Go SDK only has that for tracing, and optionally for metrics — it hasn't even started touching logs — and the collector uses zap as its logging framework.

D
Correct, yeah. I think getting logging out via OTLP is probably a significantly-down-the-road thing. Getting logging out through any other means that zap currently supports, though, should be on the table.

A
Great. So, build tooling update — the community layers in general. Alex, I know you and Tyler have been spending some time on producing the collector community layers and standalone, and actually getting those published from our GitHub repo, and Tyler, I think you've been doing some work on release tooling in general. Wanted to get an update on where you all are at, whether you're having any issues, and things like that. All right.

C
All right. I'm trying to paste the link in there, but I don't know — it's not pasting. But anyway, yeah, things I think are going pretty well.

C
We've got a couple of actions in place currently, for the collector and for the Java layer — you're going to want to look at the one starting with "release." Yeah, so those two. This morning — or was it yesterday? Yesterday — I did some work to separate out the actual portion of the workflow that does the publishing to Lambda, in this Java layer.

C
You can see that at the bottom here. This part is intended to be reusable for each of the different layers, so the next step is making it so that the collector is doing the same thing, but with the parameters needed for the collector. The other thing I wanted to point out is that in that last PR yesterday, I changed it from using a button to trigger the release to now triggering the release based off of the tagging action.

C
So if you go back one, Carter, you can see at the top, on lines five through seven right there, it's now using a tag. Each of the release types will have their own prefix — so, for example, the collector would, you know, do release/layer-collector or something like that, and then suffix it with the version. And then inside the job—

C
It's doing some validation to make sure that the release version in the tag matches the release version that's being defined.

C
This works nicely for the Java piece, because the Java agent has the capability of returning — so if you execute it — let's see — so, for example, on line 44 right there, the Java agent is able to return the exact version number.

C
So it's easy to identify what version is actually being published. So basically, inside of the layer release portion — if you go back one, Carter, and go to the — yeah, layer publish — this is where all of the different parameters are defined, and then, scrolling down, the actual layer validation is happening on line 65 there: making sure that the tag ends with that layer version number.

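That guard boils down to a small string check. A sketch of it — the release/layer-javaagent/vX.Y.Z tag format here is illustrative, not necessarily the repo's exact scheme:

```go
package main

import (
	"fmt"
	"strings"
)

// validateTag mirrors the workflow's guard: the pushed tag must carry the
// expected release prefix and end with the artifact version that is about
// to be published (e.g. the version the Java agent reports about itself).
func validateTag(tag, prefix, version string) bool {
	return strings.HasPrefix(tag, prefix) && strings.HasSuffix(tag, version)
}

func main() {
	// Tag agrees with the artifact version: release proceeds.
	fmt.Println(validateTag("release/layer-javaagent/v1.28.0", "release/layer-javaagent/", "v1.28.0"))
	// Tag disagrees: the workflow should fail before publishing.
	fmt.Println(validateTag("release/layer-javaagent/v1.28.0", "release/layer-javaagent/", "v1.27.0"))
}
```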
C
I think so. We did an initial test to make sure that the validation works. I haven't — I don't think we've done a test with this new change. Right, Alex?

C
Okay, so we have some initial versions validated and published, but we haven't actually used them yet, so—

A
Awesome, cool. I know we have Python and JavaScript in progress — that's great — but Martin's out this week, so probably won't make much progress there. What are the next steps for the release process? Do we feel like we're close to a point where we're kind of just proving it out with Java, and then we would just adapt it to JavaScript and Python? Or are there any other, you know, pretty major pieces to put in place, or is it just, like, validation?

C
Yeah, I think we're still at the point of validating the release process with the Java and the collector. Once that's in place, you know, we can try using it with an actual Lambda, and then from there we can expand it to Python and JavaScript.

C
I mean, I can try it, or we can have Mike Whalen help us with that. I think he would probably be able to do it really quickly, but I don't think we should be afraid to try it ourselves either. So—

A
Cool. So we also have a Stack Overflow— or, anything else there? Sorry — before we move on, Tyler, any other thoughts? Anyone have any questions, feedback for Tyler, suggestions?

C
I think one thing I wanted to do is just make sure that everyone was on board and okay with that new style of release triggering, via the tag publishing.

C
If you look at the collector release, that shows the other style, where it's triggered based off of a workflow dispatch, which is more like a manual release — you click a button and it starts a release. But it's not necessarily, you know, specific to any particular tag; it's more based off of a branch.

C
Oh yeah — okay, I was showing you the two different approaches, and if everyone's okay with the tag approach, the next step was me working on the collector changes: one, making it so that the collector is using this portion of the script for publishing, and also making it so that the collector is using the tag for releases as well. Yeah.

C
Oh — I think we looked at this, Alex, right? But ultimately we decided that the version number published as part of the tag, as the layer name, would be only one specific one of those versions. Right, Alex?

D
I think we would continue to version it after the collector version that it contains, and the modules that are contained within the Lambda repo can simply follow those, if we like. Or we can have a new version explicitly for the Lambda collector, if we want, and have the Go modules follow those. I'm not super married to one or the other.

C
The other thing I would suggest is — within the — sorry. The layer name will be the collector version, but Lambda itself will also have the auto-incrementing version number separately. So layers are already going to inherently have two different versions: there's the version of the underlying artifact, which is embedded in the layer name, and the version that's the auto-incrementing one. So I—

C
I mean, the tag name, for example, with the Java was, you know, release/layer-javaagent. So if we need to have a different release structure for Go modules, they can just use a different tag format, and then that wouldn't trigger a Lambda layer publishing.

D
I believe you're correct. I'm just wondering: do we really want to have separate patterns and uses? Like, do we want these to be separate release steps and artifacts, or should tagging one — or does pushing the set of Go module tags also push the collector version? Because I don't expect that we're going to be tagging those modules unless we intend to be releasing that version as well.

C
Yeah, I don't fully understand the release process that is going to be needed here for the collector portions, so I will defer to you and Alex.

D
One other thing I would call out on the tag naming here: in the upstream collector repo we've used release/foo for release branches, so that we can go back and create point releases of prior releases as necessary. And branches and tags share a common namespace in Git, so I don't know if we want to choose some other prefix for the tags and retain the ability to use release/foo for branches.

C
If it were up to me, I would say the branches that you're referring to could probably just drop the "release" portion, and have the branch name just be, like, layer-javaagent/ plus the version number, in this case. But that's just throwing something out there — I'm open to other ideas.

D
That's the inverse of what I'm used to and would expect, but I don't know if that's a universal expectation.

C
So you would expect the release tag to remove the release/ portion?

C
I'm okay with that — I don't have a strong opinion on this. If that's what we would prefer, I'm happy to update it.

A
Okay, great. Before we get to the Stack Overflow review — Aaron, I know you put a comment here, and you've been patiently waiting. So let's talk about container-based serverless, yeah.

B
Sure. I think maybe this was a general question about the scope of the SIG. I know there's AWS Fargate, and then there's CNCF Knative, and Google has the Cloud Run offering, which is like our version of Knative. Are these, like, generally in scope for this working group, and is it something that we would want to add to that documentation, or is it sort of out of scope?

A
For the initial scope, it's probably a bit out of it, because serverless is such a wide area — it's really around serverless functions. But I do think serverless, you know, maybe needs more of a look in general. So I guess I'm flexible, but I'd definitely want to make sure we nail down the functions deliverables first. Curious to hear the group's opinion too.

C
I think, from my perspective, it would probably come down to a question of: do they feel very similar? Like, do they have similar problems? Do they have similar deployment strategies and models?

C
Do the customers of these systems have similar problems and concerns?

B
Yeah, definitely. For more context: version two of Cloud Functions — Google Cloud Functions — actually uses Cloud Run under the hood, so it will build a container and run it on the Cloud Run environment. So — I think that's a really good point — in terms of the issues and stuff, our main concerns, about cardinality and things like that, are exactly the same between the two.

C
And a lot of the answer may come down to what these systems expose in terms of hooks to allow for monitoring themselves. Like, for example, it might be much easier to add instrumentation for, and hooks around, one versus the other, depending on what the runtime exposes.

D
So there's the question of Lambda functions that can be containers rather than zip archives, which occasionally we get some customer questions about, but I don't know if there's a solid way to support that right now, unfortunately. I've never used Lambda in that manner myself, so I don't know, but it sounds like it's similar to what's being described with GCP Cloud Functions v2 using Cloud Run under the hood.

D
Fargate as a serverless compute backing for ECS or EKS — I think that's further out from this, and those patterns would be better handled in the normal container use case of running a container in EKS or ECS. It just has some limitations, in that you can't run a DaemonSet if all of your EKS nodes are Fargate, for instance, because every pod is a node in that model, so you need to do everything as sidecars. But the existing container-based use cases, I think, cover those fairly well.

B
Okay, so there's no, like, managed Fargate where you don't have to have your own EKS cluster behind the scenes?

D
Yeah — AWS App Runner. They effectively run an ADOT collector on your behalf in the function runtime space. Let's see.

B
Yeah, I think, instrumentation aside, our main concern is mainly around cardinality. So if you have, like, a ton of containers or really bursty workloads, especially for metrics, I think the issues are the same as long as it's serverless. So, like, yeah — if it's, like, unmanaged Knative, where somebody's running it in their own GKE cluster, I agree that that would definitely be out of scope.

A
Okay, so yeah, I guess we're definitely willing to discuss those items, Aaron, especially if they're going to be function-adjacent, or if it seems like, you know, you're essentially running a container on top of a function under the hood, or something like that. I'd be interested in whether Azure is doing something similar, like offering an Azure Functions container or something, but that's kind of an open question for the vendor themselves.

A
Okay, awesome. So, quickly wanted to spotlight — of course — okay, after I verify myself, and there's all my badges — that we should start looking at kind of some AWS Lambda and OpenTelemetry-related stuff and start trying to push customers towards it. It doesn't seem like there are that many questions today — they seem relatively old — but I don't know if you want to spend some time during the SIG or just do it asynchronously.

A
Oh, we don't have an answer for you — okay! Well, we will, I guess, continue to direct customers towards this, and then, as we get some more consistent questions, we will dedicate some SIG time to following up on these. Okay.
