From YouTube: 2023-02-21 meeting
Instrumentation: Messaging
A: So I already put up a couple of running spec issues. Tyler, you probably put this first one. Are there any other topics top of mind for people on the call?
C: I edited that one, actually. The author of the PR has asked that we discuss this here, because I asked that we look to have consistency amongst the implementations. So I wanted to ask: is this something that we would expect implementations to provide or not? And if so, what do we need to specify about it?
F: This honestly feels like the kind of thing that would be super common as an issue across many different instrumentation libraries, and not just Lambda-specific, because ultimately the problem is: you've got a span that's being added underneath your actual code that you can't necessarily control, so it starts and ends all within, for example, the AWS SDKs' control. I think the way that the Java SDK generally tackles this is by telling people to add a span processor.
F: That said, this kind of thing seems a lot easier to work around, and thus you'll see this kind of solution, in scripting languages like Python and Node.js, but I don't think it's feasible to suggest something like this for Java. I'm not saying that's necessarily a problem, but if you want consistency across language implementations, I think that's going to be a challenge.
F: But generally, instrumentation doesn't have a direct hook. Unless it's library instrumentation; with auto-instrumentation, they're not generally going to have a direct hook into that instrumentation's classes.
A: So is the consensus, I guess, that we should have this present, but if auto-instrumentation is present it wouldn't be possible? Because I think the layers, at least for Java, have auto-instrumentation on by default, right?
F: Taking my suggestion one step further, I would almost suggest that this is something that perhaps should be addressed by an instrumentation SIG in general, because, like I said, it applies to more instrumentation than just the AWS SDK.
F: Being able to have pre and post hooks seems like something that would be relevant for a lot of different frameworks.
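To make the idea concrete, here is a minimal sketch of what generic pre/post hooks could look like. All of the names here (`instrumented_call`, `request_hook`, `response_hook`) are hypothetical illustrations, not an actual OpenTelemetry API:

```python
# Hypothetical sketch of pre/post hooks around an instrumented call.
# None of these names come from a real OpenTelemetry API.
def instrumented_call(handler, event, request_hook=None, response_hook=None):
    span = {"name": "invoke", "attributes": {}}  # stand-in for a real span
    if request_hook:
        request_hook(span, event)    # user code enriches the span pre-invocation
    result = handler(event)
    if response_hook:
        response_hook(span, result)  # user code inspects the result post-invocation
    return result

# Usage: attach a custom attribute taken from the incoming event.
def my_request_hook(span, event):
    span["attributes"]["tenant.id"] = event.get("tenant", "unknown")

out = instrumented_call(lambda e: e["x"] * 2, {"x": 3, "tenant": "acme"},
                        request_hook=my_request_hook)
```

The point being discussed is that user code can enrich a span around an invocation it does not otherwise control, without the instrumentation hard-coding those attributes.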
G: Yeah, I would support that, because those are requests we had in OpenTracing in the past, and slowly, one by one, people started to request this kind of functionality for every library. So, yes.
C: I think we need a better understanding of what exactly the use case here is. If it is very specifically related to sampling, and getting attributes out of the Lambda event for sampling, that seems like something that might be a bit closer to Lambda-specific, because most instrumentation isn't going to be creating root spans and needing to provide you access before the root span gets created in order to affect the sampling decision.
F: Right, but if that's what you're looking at, the sampling decision, then wouldn't that be more like span processor territory, which is already in the spec?
C: Yeah, that could be a way to solve it for the pre-hooks. For post-hooks that doesn't work, because span processors are read-only at span end, but that's a different issue that I think the spec is going to have to come up with a solution for.
G: Correct, yes, and I remember complaints about that, to be honest. But yeah, I don't know; for requests we could be fine, but for responses, if the user who posted this PR really wants the response one, then we are not in a good position. Sorry, the end one.
F: I personally would be okay with asking JavaScript to remove it for consistency's sake, but on the other side, I think that many languages have little things that they've implemented just for, you know,
F: Just for their language implementation, so there's lots of inconsistencies all over the place. But if this is something that is likely to spread, then yes, I agree that maybe having them deprecate it at least, and encourage a span processor as an alternative, seems like a better approach to me.
G: Probably we should create an issue in the specification to have a hook to allow spans to be modified after they were finished, or before, so they can add attributes, if that helps in any form.
H: So there's a couple of use cases right now that clients I'm working with are using this for. The first is that clients define their own semantic conventions that they apply across the board.
H: It's easier for them to apply those here in the request/response hooks, where they can just distribute that JavaScript file with all their Lambdas, rather than asking every Lambda author to add that into their code. And then the other is that with Lambda, both the service name and the operation name default to the name of the Lambda function, and often that's not useful at all, so they need to be able to change them. The service name can be updated through the environment variables, but not the operation name.
F: And in those hooks, where does it get the actual name from? What does it change the name to?
F: Right, so I think that also, I mean, don't JavaScript and Python have span processors?
C: No, it's generic in the spec, and every SDK should support them. Reading this again, it looks like this can't even be sampling related, though, because the request hook receives the span; the span has already been started, so doing something like setting the name wouldn't impact sampling. At that point the decision has already been made, if you're doing head sampling; if you're doing tail sampling, then it's a totally different beast. So, yeah, it sounds more and more like span processors are the appropriate place for this.
F: It should be. I think, if anything, maybe what we need is a request for the spec to update the post-processing on a span processor to allow modification.
C: It greatly simplifies dealing with span processors that mutate data: you simply don't have to worry about it. Processors operate on the same data in sequence, so the first span processor operates on the data, then the second, then the third, and if one can change the data, then all bets are off as to what might happen.
C: No, that's not to say that a span processor can't take the read-only data it received, make some changes to it, and pass on a different read-only data object, but there aren't good affordances for that in the current spec. And I think that's the direction the discussion is going right now: how can we make it easy for someone who does need to mutate data at span end to get a mutable copy of the data that can't affect other processors? But that's not solved right now.
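A minimal sketch of the copy-then-pass-on pattern being described here, using plain Python stand-ins rather than the actual OpenTelemetry SDK types:

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class SpanData:
    # Frozen dataclass stands in for the spec's read-only span data.
    name: str
    attributes: dict = field(default_factory=dict)

def renaming_processor(span: SpanData) -> SpanData:
    # Instead of mutating the read-only input, build a modified copy
    # and hand that copy to the next processor in the chain.
    return replace(span, name=span.name.upper())

def chain(span: SpanData, processors) -> SpanData:
    for p in processors:
        span = p(span)
    return span

original = SpanData(name="invoke")
result = chain(original, [renaming_processor])
# The original object is untouched; only the downstream copy changed.
```

Each processor only ever sees the object handed to it, so a mutation by one processor cannot retroactively change what an earlier processor observed.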
F: Yeah, I mean the other option, I would say, is to define the order of the processing and then allow mutation to affect things down the chain. Perhaps, I don't know.
A: If someone has the link to that spec issue, that would be nice; just go ahead and maybe plus-one it or something.
A: So it seems like the consensus, though, is to remove the response and request handlers and just document processor behavior. That's probably fair.
F: I guess, going back to Mike's suggestion, that brings up an interesting question around the service name. For a particular Lambda, or a generic function, should the service and operation name both be the function name?
F: My personal opinion is that it makes sense to have the operation name be the function name, but I kind of feel like the service name should probably be common across multiple different functions, probably all functions within a deployable group. So, for example, in Lambda, a lot of functions are generally deployed in a bundle via a CloudFormation template.
C: For that, though, I believe every SDK provides the ability to set the service name through environment variables, and setting the environment variables on functions that need a service name different from the function name is the way to handle that. Service name is a required attribute; we have to have a default.
C: Ultimately, the default, even if we don't provide one in the Lambda instrumentation wrapper, is then going to be, you know, unknown_service with the binary name. Okay.
F: I support that. I'm fine with the function name being the default, or the fallback, but I think our documentation should encourage setting that environment variable wherever possible. Mike, does that sound reasonable to you?
H: Yeah, the other thing there, though, is the languages right now are defaulting to using the OTel resource attributes variable, and we also have OTEL_SERVICE_NAME or something, so it's kind of two different ways to handle the same problem. So I just think, even though the latter overwrites the former, we need to be consistent in that messaging, in my opinion.
F: Yeah, I support that. I mean, I understand why you would want to potentially be able to do both, but generally I think our documentation should prefer having the OTel service name as a standalone environment variable rather than, you know, commingled with the resource attributes.
F: No, this isn't a Lambda-specific problem. I think what he's referring to is that in the SDKs you can set the service name as a resource attribute. So there's an environment variable, I think it's OTEL_RESOURCE_ATTRIBUTES, is that right? And inside of that resource attribute string you can specify the service name.
H: There's two, because there's OTEL_SERVICE_NAME and there's OTEL_RESOURCE_ATTRIBUTES, and both of them can do exactly the same thing, which, you know, kind of goes against the idea of a convention. And it creates confusion, because one is kind of aggregating all the different attributes, like the version and all that stuff.
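For reference, both variables can carry the service name; per the OpenTelemetry environment variable specification, OTEL_SERVICE_NAME takes precedence over a `service.name` key inside OTEL_RESOURCE_ATTRIBUTES. The resolver below is a simplified stand-in to illustrate that precedence, not the SDK's actual parsing code:

```python
def resolve_service_name(env):
    # OTEL_SERVICE_NAME, when set, wins over a service.name entry
    # inside the aggregated OTEL_RESOURCE_ATTRIBUTES string.
    if env.get("OTEL_SERVICE_NAME"):
        return env["OTEL_SERVICE_NAME"]
    attrs = env.get("OTEL_RESOURCE_ATTRIBUTES", "")
    pairs = dict(kv.split("=", 1) for kv in attrs.split(",") if "=" in kv)
    # Mirrors the spec's fallback when nothing is configured.
    return pairs.get("service.name", "unknown_service")

name_a = resolve_service_name({"OTEL_SERVICE_NAME": "checkout-service",
                               "OTEL_RESOURCE_ATTRIBUTES": "service.name=other"})
name_b = resolve_service_name({"OTEL_RESOURCE_ATTRIBUTES":
                               "service.name=other,service.version=1.2"})
```

This is the "latter overwrites the former" behavior mentioned above, reduced to a few lines.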
A: Thanks for bringing up that topic, Anthony. We can keep talking about some of the implications of changes down the line for the next topic. I think we have two open spec issues, so, Tyler, I'm not sure if you have a particular order we want to take these in, or any particular topics based on where we're getting a lot of comments, good and bad.
A: So we have the cloud resource ID, and then I think Lee Miller raised this one. I'm not really, I haven't looked into it, but I figured we can talk about it a little bit as well. But the cloud resource ID is probably the most pertinent one.
F: Yeah, I think this one is pretty much ready. I forget, scroll down to the bottom. Yeah, I think I've resolved all of the open concerns on this one, so I think it's ready.
G: I was tempted to merge this one, but I just wanted to clarify something. We have a difficulty, and we are trying to talk to Elastic people about expectations regarding ECS, and I am afraid that what Christian suggested will become true: that we will later go back trying to align with ECS.
G: So I would like to wait one more day. I'll have some conversation with the TC, maybe poke Alex from Elastic, and then merge. I mean, we can always revert or update the changes, but yeah, I would like to wait one more day and then merge.
F: You don't have to, no. I believe Elastic took the previous OTel semantic conventions and, at least in this case, copied them over to theirs. So they have a faas.id.
F: Ultimately, what happened is you've got effectively two different repos that diverged, and now you're dealing with merge conflicts.
A: All right, well, we'll wait, I guess. Let's see. So I guess we can look at Lee Miller's stack issue real quick, since we have a little bit more time.
C: And I believe the Telemetry API provides some of that information, but it doesn't exactly line up with what you would see in X-Ray.
F: I think the main question I would ask is: what value does it provide? Is it something that a user would find beneficial for purposes of troubleshooting?
H: Yeah, I'd agree, and I'd say that if we're assuming the host is the Lambda environment, the very root level, then that's not going to add any value, in my opinion. But the runtime, and getting the cold start data and the initialization, the stuff that Alex has worked on with the collector, that is highly beneficial.
C: The cold start processor in the Lambda collector extension only functions with Lambda, and only with the Telemetry API. Are there similar mechanisms for Azure or GCP?
A: I think that's TBD. I know we've talked to Rohit about getting cold start information from functions, but I remember there being some sort of weird behavior where the information is available but maybe not accessible, if that makes sense. So it probably would be helpful to state out what scenarios we at least expect to produce information on, and then maybe see if that aligns. So cold start info is probably a pretty good one.
H: I think it's mainly the cold start. I haven't really got enough data to understand, because, like I said, there's five levels, or five spans, missing from what we get versus X-Ray, because X-Ray creates one for the entire Lambda environment, then one for the execution, or the runtime.
H: I'm not sure if those other things are really providing any value. I think it's primarily everything that we have now plus cold start data. And I believe OTel already captures this, but with cold start: a lot of customers think that cold start is a boolean, a true or false, and there is no property on a Lambda that's called that. Sorry, it's the latency there in the initialization stage that's the most important.
A: That was helpful. I'll ping English and just mention this is something we probably want to talk about a bit more, just to get a better understanding of what scenarios they have top of mind, things like that.
F: I mean, just to give you a little bit of additional color: she's probably coming from a perspective similar to what they're doing on the messaging side, where they are specking out a very comprehensive design of the way that they want a system to interact.
F: Even though, for many systems, it's not entirely possible to even generate that data. So, for example, when working with an SQS Lambda handler, the span representing the receive is entirely inside the AWS system, and so from the perspective of the Lambda, and most of our instrumentation, we would never be able to produce something like that. And so that would be,
F: Those, even if we did recommend them, and even if they were used: right now, none of the cloud providers allow sending spans that were generated internally outside of their system. So, like the X-Ray spans that Mike was referring to earlier, the only place to see those is within X-Ray. OTel can only view and send outside of AWS the spans that we generate.
A: Gosh, yeah, that's helpful. I definitely don't want to specify too much. For the cold start, though, it feels like that's something we should specify as a requirement, at least from these implementations: like, we expect cold start data to be provided in a certain manner.
A: I don't think we call that out in the spec today. Is that something we should also open an additional issue for?
C: We've got, there's a boolean for whether a request was a cold start, which can be kind of fuzzy, based on my understanding, because it's possible that it started as a cold start, but while another runtime instance was spinning up, a runtime just became available and the invocation was rescheduled onto that.
C: So I don't know how far into the weeds we can get in terms of specifying when information comes out on cold start, but we do have that boolean to indicate if we have information that leads us to believe this was a cold start span.
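The boolean in question is the `faas.coldstart` span attribute from the FaaS semantic conventions. A common way instrumentations infer it, sketched here in plain Python rather than any actual SDK code, is a module-level flag that is true only for the first invocation handled by a runtime process:

```python
# Module-level state survives across warm invocations of the same
# Lambda runtime process, so the first invocation is treated as cold.
_cold_start = True

def handler(event, span_attributes):
    global _cold_start
    # Record the inference; as noted in the discussion, this is a
    # heuristic, not a property the platform itself exposes.
    span_attributes["faas.coldstart"] = _cold_start
    _cold_start = False
    return event

attrs_first, attrs_second = {}, {}
handler({}, attrs_first)
handler({}, attrs_second)
```

This also illustrates the fuzziness C describes: the flag reflects whether this process has run before, not whether the platform actually paid a cold start penalty for this invocation.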
F: So the problem with cold start, I think, is there's effectively two different ways to present that information. One, you can mark a specific span as having been a cold start. The other is you can create a separate span that represents the duration of the cold start, and I think a lot of that is going to depend somewhat on what the platform provides in terms of capabilities. So, for example, with Lambda,
F: I don't think that Lambda exposed the option to really calculate the cold start prior to the Telemetry API extensions.
C: And even then, it doesn't really. The cold start processor that Alex created kind of infers that, based on attaching it to the first span that is processed by that collector instance.
F: Anyway, from that perspective, you don't necessarily know how long the cold start took. You just know whether it happened, a boolean, yes or no. So,
A: I guess my perspective is that how long it took is what the customer actually cares about. So maybe we should update this to, you know, keep a boolean if that's useful to filter on, but we should still have that additional latency captured, or something like that, whether it's a standalone span or a metric. I feel like this should be updated just to reset the expectation.
A: So we probably want to make some sort of, well, I guess if a processor is producing it, it's not necessarily part of the spec. That makes sense. So where should we make the requirements for at least replicating that behavior?
A: I think this is a bit of a chicken-and-egg issue, though, right? Because if you don't have something in the spec, then platforms may not be willing to provide it. But if it's something that's expected from the community and is a generally well-understood scenario, then they may be more likely to provide it.
A: For instance, I know we have contributors from every single platform, at least the major ones, right now, so it should be at least easy to start that conversation and see how it goes. But I know GCP and Azure are specifically looking to improve their function story and their function instrumentation and things like that, because they don't have anything like layers, or what have you.
C: I think, from a debugging perspective, at least in my experience, having a boolean is probably sufficient. Most often, when I go and look at whether this is a cold start problem, looking at just the overall request processing time of cold starts versus those that are not cold starts, you get a decent sense of what the cold start penalty is from that delta and your averages, and all of the information needed to calculate that already exists on the existing spans.
F: I know, from my perspective as a previous Lambda user, I would have really appreciated having the cold start information available on the Lambda context object that I received in the function.
F: I don't know, maybe do some logging, additional caching, I don't know. More from an observability perspective: if I'm creating a span, it's easier if it's available that way than trying to work with an extension. But I'm not asking AWS to change that now; I'm just saying it would have been nice if it was available.
A: The next item: it seems we have the spec PR still open. I guess Cossack just hasn't had a chance to resolve this conversation yet. But besides that, maybe just review progress.
A: Yeah, totally understand, just wanted to bring this back to attention.
A: Awesome. And I know we do have some minor fixes in progress. I think Martin Mars on our team is gonna do one in JavaScript, and hopefully that will be sometime this week or next. Small progress on the AWS account: the CNCF did respond. Apparently there is a dormant, either CNCF or OTel, AWS account, which they asked if we were okay just taking ownership of, which I saw no reason to say no to. So I did request the China account still.
A: There seemed to still potentially be a need, but hopefully we'll be able to take this kind of dormant account and start publishing layers from it, and things like that.
A: So no specific real update there, besides just an FYI that it's in progress.
A: We wanted our final PR by February 28th, so essentially next week is when at least our proposed PRs, or stacked PRs, should all be in, which I think we're roughly on track for. And then we will have at least, or rather at most, a one-month review process. But we already have a lot of the spec reviews and things like that, so hopefully we get to implementation sooner.
F: I think the only concern I would have is the lack of, you know, movement on the Azure or GCP side.
A: Yeah, I can see that. I mean, we've sort of been very Lambda-focused, so I just think having the contributors,
A: Getting them kind of consistently involved, is helpful. And then we'll be looking to produce the guidance for the documentation, at least for functions, or, you know, various platform functions, over the next, probably, six months. At least in my head that's kind of what I'm thinking. So I can understand the concern, I guess, but there is a small order of operations too; there's only so much SIG focus time.
A: But yeah, we'll be looking to do, I know, at least around Azure Functions, something with the end-user working group, which already has customers popping up asking around for guidance. So I have an action item to connect them with Rohit today, and I'm hoping that'll kind of stimulate some of this document creation, and then we'll be able to reuse whatever assets they come up with.
A: All right, well, I think we're good here. Thanks, everyone, for joining and participating. Yeah, I'll talk to you throughout the week. Bye.