From YouTube: 2021-10-21 meeting
A
It's been five minutes; I think we should probably start. Please add yourselves to the attendees list on the screen.
D
Hi everyone. I'm from India and I work at AppDynamics.
D
I'm from India and I live in Bangalore, though currently I'm living in my hometown due to it all. Yeah, that's about it.
F
Yeah, just want to warn you: we can see your Slack.
A
It doesn't share any secrets with me either anyway, so it's okay. I was trying to find this message.
A
So, someone was asking for help earlier and I thought we should probably bring it up. But yeah, thanks for the warning.
A
We can start. Looks like we don't have a lot to discuss today, so the first topic is logs. Tigran and some other people have been wanting to try it out, so we need to decide how we proceed with this.
A
We could release a special logs package, or release a 1.10 alpha like we have done with metrics, or we could just merge logs into the main branch and put it under an experimental namespace to indicate that it's not part of the stable API, that it can change at any time, and that it's not covered by the 1.0 guarantees.
G
I think in the link that Tigran shared, one of the points it makes is to not use "experimental" in the namespace. If you scroll down a little, it should mention it.
G
Yeah, sure, okay, thanks.
F
It's interesting, that last part that you mentioned, because I thought the collector did this. I know there are definitely packages named "experimental" in the collector.
A
Okay, so silicon, why don't you take this forward and propose something, and we will see.
G
Include those people who showed interest, see if they are fine with it, and what problems we might agree on.
A
So, okay, the next thing I want to bring up: I don't think we can solve this, it's a really complex thing, but I just want to put it on everyone's radar. Someone asked if they could create multiple tracer providers, because they have a monolith Python service where...
A
We had briefly discussed this before, and at the time we thought of allowing multiple tracer providers even if a single one is set as the global one. I suggested this and it did solve their problem, but only for manual instrumentation. As you can imagine, with auto-instrumentation we cannot influence other instrumentations to use one tracer provider for one codebase and another for a different codebase.
A
No, no, that's not a problem. Let's say you have a single Python service, but inside it, let's say it's a traditional monolith, it has a subcomponent that does authentication, another component that does profiles, and another one that does checkout or something. In the microservices world you would have each of those as separate servers, so each of those would have a tracer provider with a different, let's say, service.name resource attribute.
A
But
when
you
have
a
monolith,
then
you
want
to
use
different
tracer
providers
configured
differently,
specifically
with
the
service.name
attribute
and
those
different
components
right.
So
when
you
manually
instrument
them
the
telemetry
they
generate
when
it
goes
to
an
apm,
backend
apm
and
sees
those
three
components
as
different
services,
even
though
they
live
in
the
same
process.
A
So
so
that
works.
The
problem
is,
if
they
auto
instrument,
then
all
those
three
components
are
using
the
same:
instrumented
library,
let's
say
some
orm
right
and
and
the
instrumentation
uses
a
single,
the
global
tracer
provider
or
whatever
the
custom
one.
You
provide
okay,
but
in
this
use
case,
what
they
want
is
for
the
code
boss
to
decide
which
tracer
provider
to
use
dynamically,
which
we
don't
support
today,
and
I
don't
know
if
we
ever
will,
but
we
just
wanted.
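The monolith scenario described above can be sketched as a per-component provider registry. This is a minimal stand-in sketch, not the real SDK: the `Resource` and `TracerProvider` classes below are simplified stand-ins for the OpenTelemetry SDK types of the same names, and the component names are illustrative.

```python
# Minimal sketch of one tracer provider per monolith component, each with
# its own service.name. Resource and TracerProvider here are simplified
# stand-ins for the OpenTelemetry SDK types of the same names.

class Resource:
    def __init__(self, attributes):
        self.attributes = attributes

class TracerProvider:
    def __init__(self, resource):
        self.resource = resource

# One provider per component, so manually created spans carry that
# component's service.name even though all components share one process.
providers = {
    name: TracerProvider(Resource({"service.name": name}))
    for name in ("auth", "profiles", "checkout")
}

def provider_for(component):
    """Manual instrumentation picks its provider explicitly; as discussed
    in the meeting, auto-instrumentation cannot do this and falls back to
    the single global provider."""
    return providers[component]
```

Each component's spans would then report a distinct service to the APM backend, while auto-instrumented libraries would still use the one global provider, which is exactly the gap raised here.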
A
All right, any other thoughts on this?
G
Before we end this: I was looking at suppressing instrumentation, and I was wondering why we have both a suppress-instrumentation key and then also a suppress-instrumentation key for HTTP.
G
And since a lot of instrumentations do not check that suppress-instrumentation key, I was wondering.
A
I think the HTTP one was added as a way for different client instrumentations to hint each other when both are active. So if requests is active, and the underlying urllib3 instrumentation is active as well, and requests creates a client span, then urllib3 would create another client span, and we would end up with two client spans, which is weird.
G
Okay, so it's so that they don't produce double instrumentation, kind of thing.
A
Right, yeah. So we don't generate duplicate, not duplicate, but more than one client span when more than one client library that wraps another is instrumented. I think it was a stopgap measure anyway; I don't know if we want to.
G
I think that might influence the sampling decision, because that gets passed. Maybe. I remember following a similar discussion about why it was not supported.
B
I had brought this PR up at the last meeting, but what I wanted to highlight were these two scripts. We're adding an instrumentation for AWS Lambda to be able to automatically get OpenTelemetry on the Lambda, but as you can see, the one thing where we've deviated from regular instrumentation is that we've added this folder here called scripts, and these two files here, so I just want to explain quickly.
B
otel-instrument is the new entry point that we give Lambda. Lambda normally takes in a user's function and just runs it as a quick service. But if you wanted to instrument that, you'd have to download all the packages for OpenTelemetry and then do all the manual instrumentation. What we provide is something called a Lambda layer. What that Lambda layer does is give you all the packages, even the collector, and the Python code to be able to have all of it.
B
We take this package here and do two really important steps. We modify the PYTHONPATH to tell it where it can find the OpenTelemetry packages from the Lambda layer, because even though the Lambda service will do this later on, at the point when we want to auto-instrument it's too early and it hasn't been added to the PYTHONPATH yet. So we tell it where to find the OpenTelemetry packages, and then we also tell it where to find the runtime packages, meaning all the packages that OpenTelemetry wants to instrument.
B
Like
botoco
or
flask,
or
things
like
that
after
we
do
these
two
things,
then
we
use
the
regular
open
temperature
environment
variables.
As
I
was
saying,
and
we
configure
it
with
some
defaults
that
we
expect
users
would
want
they'd
want
the
otlp
exporter,
they
probably
want
these
things
on
the
resource
attributes
and
such
and
the
last
thing
that
the
script
does
or
the
last
two
things
that
it
does
is
one
it
patches.
B
...this variable here, _HANDLER, which is what Lambda was going to call to find the user's function. Instead, we save that in this original-handler environment variable, we tell Lambda to call our wrapper one, and then we go ahead and execute opentelemetry-instrument. Now, we were really hoping to have only this one script, but we found out that we definitely need the two scripts.
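The handler swap just described can be sketched roughly as below. `_HANDLER` is the variable the Lambda runtime reads; the `ORIG_HANDLER` name and the wrapper module path are assumptions about the layer's conventions for illustration, not verified against the actual scripts.

```python
# Rough sketch of the handler swap: remember the user's handler, then point
# the Lambda runtime at our wrapper. ORIG_HANDLER and the wrapper module
# name are assumed conventions, used here only for illustration.

def wrap_handler(env):
    env["ORIG_HANDLER"] = env["_HANDLER"]            # stash the user's handler
    env["_HANDLER"] = "otel_wrapper.lambda_handler"  # runtime now calls ours
    return env

def resolve_user_handler(env):
    """What the wrapper does at invocation time: look up the real handler
    so it can import, instrument, and invoke it."""
    return env["ORIG_HANDLER"]

env = wrap_handler({"_HANDLER": "app.handler"})
```

The runtime then invokes the wrapper, and the wrapper finds and instruments the user's original handler before calling it.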
B
This is because, when opentelemetry-instrument goes ahead and instruments everything, it instruments botocore, Flask, and in this case Lambda. When we run downstream and the AWS service finally ends up calling this one, it imports this package using this method here, imp.load_module.
B
Now, this is a really annoying method that I found out about, because the docs say this function does more than importing the module: if the module is already imported, it will reload the module. This is significant for us because, even though the otel-instrument script went ahead and instrumented the user's Lambda handler module...
B
...this call that is made by the bootstrap, when it calls this handler function, will remove the instrumentation, so the Lambda handler that the user has no longer has instrumentation. To solve this, we instead tell it to import this one, which is of no consequence if it reloads it, because the first thing that we do in the script is use the Lambda instrumentation and just call instrument. By this point it has already started in the new Python process, and no more reloading will occur.
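The reload pitfall can be reproduced with `importlib.reload`, which behaves like the "reload if already imported" clause of `imp.load_module` quoted above. The `fake_handler` module below is a throwaway stand-in for the user's handler module.

```python
import importlib
import os
import sys
import tempfile

# Create a throwaway module standing in for the user's handler module.
src_dir = tempfile.mkdtemp()
with open(os.path.join(src_dir, "fake_handler.py"), "w") as f:
    f.write("def handler(event, context):\n    return 'ok'\n")
sys.path.insert(0, src_dir)

import fake_handler

# "Instrument" it the way auto-instrumentation does: monkey-patch in place.
_original = fake_handler.handler
fake_handler.handler = lambda event, context: ("traced", _original(event, context))
patched_result = fake_handler.handler(None, None)    # ("traced", "ok")

# The bootstrap's imp.load_module-style import re-executes the module's
# source, which wipes the monkey-patch.
importlib.reload(fake_handler)
reloaded_result = fake_handler.handler(None, None)   # back to plain "ok"
```

This is why the wrapper module, not the user's handler module, is what gets imported by the bootstrap: reloading the wrapper is harmless, because it re-applies the instrumentation itself.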
B
These are just the two scripts that we wanted to share and store upstream, because they would serve really well in every Lambda layer as people auto-instrument their code on AWS Lambda with OpenTelemetry. And, you know, there are obviously things we set here, like the propagators that we want to have and the resource attributes, but this should serve as the model going forward for being able to instrument on AWS Lambda, and this method of calling the executable should allow us to.
A
This
looks
great
I'll,
add
some
additional
questions
on
on
the
pr,
but
the
main
ones
the
main
ones
are.
Okay
can
can
people
use
the
aws
lambda
instrumenter
without
the
script,
and
what
would
that
look.
B
Yes,
so
that's
a
good
question:
we
have
tests
in
the
instrumentation
package
here
and
actually
these
tests,
they
mock
this
whole
lambda
setup
that
I
mentioned
so
you
mock
and
execute
of
lambda
when
it
executes
on
lambda.
What
it
does
is
it
again
pulls
from
this
handler
variable
and
it
will
call
it
but
then,
like
I
said,
if
you
provide
this
wrapper,
it
will
call
the
first
script
and
it
this
is
the
one
here
and
what
it
does
is.
B
Where is it... this test, this one? Oh sorry, by default we set the exec wrapper to point to this function here, and all it does is import that script. But we have a test here at the bottom which removes it, so it does not use otel-instrument and resorts to manual instrumentation instead, and what you would do...
B
...is you instrument it, there's some talk about this optional feature here, and then you execute the Lambda, and then we assert that you still get the traces here, even though we didn't run either of those scripts that I was talking about. All we did was modify the user's Lambda handler code to instrument it, and we can still get traces here and we can still uninstrument.
A
Can you hear me? Hey, sorry, I was muted. Could you scroll up a bit to the instrument... yes, okay. So what is the custom event context, actually? Is this the Lambda instance?
B
So this is a feature that we were asked for; a custom Lambda can...
A
Right, so, just a suggestion: if we could add alternate ways to instrument the handler, something like what we have with instrument_app in the Flask instrumentor, we could have something like an instrument_handler that just takes a function and instruments it. So people instrumenting their Lambda handlers directly, sorry, manually, could just define their function, import the AWS Lambda instrumentor, and just call instrument_handler on it, without having to set all the environment variables and everything. Does that make sense?
B
No, no, they don't have to set this. This is set in the Lambda environment. Like, for example, this is my AWS Lambda, and then, when I set it up, I have to set this.
A
Okay. Do you intend to also publish the scripts to the contrib repo, or keep them in the Lambda one?
B
I think in the right one here, when it constructs everything, even right now, what it does is it moves the otel-instrument script to the root of the layer and then from there zips everything together into the layer. So that's what I would imagine it does: it pip-installs from upstream contrib, and then, when it builds the layer, it moves the scripts around to where they need to be, so that you can find them in the AWS Lambda environment.
A
Yeah, this looks great. Thanks for sharing; looks really exciting. Cool.
B
Yeah, thanks so much for your help. Like I said, I'll get it out of draft soon, and then I'm looking forward to your comments. Thanks.