From YouTube: 2021-01-28 meeting
C
So one of the things we do have, that is relatively simple to use: basically, you change the base class that you have in the default project. But I would like to talk with somebody from AWS: what's the plan regarding .NET? Can we incorporate that in the base class? Do you want to have this in a repo? That's the kind of question that I would like to go over.
D
So I'm not on the .NET developer work, so I don't know the details, but as far as I know, my team members said the donated work got blocked for some special reason. They said they cannot merge their code, like the AWS ID generator and the propagation, at that level. So we still don't know of any plan about the donated Lambda support, but I'm happy to hear that you have worked on that. That's all I know about it.
C
Yeah, basically, they changed the base class. We have some overrides that, according to the configuration, call our wrapper. And one of the things that I think we should be discussing is that we follow a convention about how we transform the Lambda context into tags.
C
I work with Jacob, who did the Java one, so it's matching that, but for .NET — the .NET one took over from that. But I remember hearing about this problem with context propagation for X-Ray, because it doesn't fit with what .NET supports. I didn't follow that one closely; I can circle back to check. One more question: is the one from AWS going to work with .NET, or is there another option?
D
I don't know, but based on our experience with Java and Python: when you write your Lambda function instrumentation, you have to propagate from the AWS trace ID, right? You have to propagate the transaction to your Lambda function span, so you still need the help of the AWS ID generator, because that can translate the trace ID for OpenTelemetry.
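The translation D describes — pulling the AWS trace ID apart and re-expressing it in OpenTelemetry terms — can be sketched roughly as follows. This is a minimal illustration, not the actual AWS ID generator; the X-Ray header shape (`Root=1-<epoch>-<unique>;Parent=…;Sampled=…`, delivered to the function via the `_X_AMZN_TRACE_ID` environment variable) is the documented one, but the function and its name are invented here.

```python
def xray_to_otel_ids(trace_header):
    """Split an X-Ray trace header into OpenTelemetry-style identifiers.

    A header looks like:
      Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1
    The OTel trace id is the Root field with the version prefix and the
    dashes removed (32 hex chars); Parent becomes the parent span id.
    """
    fields = dict(part.split("=", 1) for part in trace_header.split(";") if "=" in part)
    _version, epoch, unique = fields["Root"].split("-")
    return {
        "trace_id": epoch + unique,              # 8 + 24 = 32 hex chars
        "parent_span_id": fields.get("Parent"),  # 16 hex chars, may be absent
        "sampled": fields.get("Sampled") == "1",
    }

# In a Lambda runtime the header arrives via os.environ["_X_AMZN_TRACE_ID"].
```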
D
When we talk about the wrapper, I'm not sure if I understand correctly, so I want to share some thoughts. Let me share my screen. Based on my understanding, there are at least three wrappers you can build in Lambda.
D
Okay, so the bottom layer is the customer's original Lambda function. We will create a layer on it, which is the Lambda instrumentation.
D
We will wrap the customer's original Lambda function with begin and end segments; this layer is just for instrumentation. Then on top of that we will create another wrapper; we call that the execution wrapper. The execution wrapper can dynamically decorate the customer's original function with the Lambda instrumentation, so with this middle layer the customer doesn't need to change their software.
C
So I'm having a bit of trouble with the audio. I'm not sure if I — actually, I'm sure I didn't understand exactly what you are saying. Perhaps I can share the screen and show, basically.
C
All right, so I have a bunch of…
C
So basically we have our internal wrapper, and we override one function, and basically we make the call, if the telemetry is enabled, using the wrapper. So the only change that is needed here, for the user, is: instead of deriving from this class, that's the default.
C
They derive from this one instead, and then you immediately have the call to the Lambda instrumented. But this is purely manual; there is no auto-instrumentation here. If you have calls down the stack, then you need to do your manual instrumentation, you know, so this is pure manual. Since this is not using any of the hooks that we perhaps have on other layers, like WebClient and WebRequest from .NET, we don't have the extra instrumentation. So this is just the first level, if you want.
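The base-class pattern C walks through — override one entry point, route the call through a wrapper only when telemetry is enabled, and have users derive from the instrumented class — might look like this in outline. This is a Python sketch of the idea only; the real code is .NET, and every name here (`FunctionBase`, `tracing_wrapper`, etc.) is invented for illustration.

```python
def tracing_wrapper(fn, event, context):
    # Hypothetical wrapper: open a span, invoke the handler, close the span.
    span = {"name": "lambda.invoke", "events": ["opened"]}
    try:
        return fn(event, context)
    finally:
        span["events"].append("closed")  # a real exporter call would go here

class FunctionBase:
    """Stand-in for the default base class the project template ships with."""
    def handle(self, event, context):
        return self.run(event, context)

    def run(self, event, context):
        raise NotImplementedError

class InstrumentedFunctionBase(FunctionBase):
    """Derive from this instead of FunctionBase to get the call instrumented.

    Only the entry point is wrapped; calls further down the stack still
    need manual instrumentation — the 'first level only' caveat above.
    """
    telemetry_enabled = True

    def handle(self, event, context):
        if not self.telemetry_enabled:
            return self.run(event, context)
        return tracing_wrapper(self.run, event, context)

class MyFunction(InstrumentedFunctionBase):
    """What the user writes: only the base class changes."""
    def run(self, event, context):
        return {"ok": True, "echo": event}
```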
C
If you want to, you can do it manually for the functions themselves. This is the old format that doesn't have a base class: basically, you implement the functions directly, and since this depends on the signature, we expose that wrapper — the same wrapper that we use there — and it has a bunch of overloads that you can use to make the call and get instrumented.
C
You know, this is kind of the legacy, but it is also a way to instrument the Lambda, though you also get only the first level. So it's a very simple implementation. Right now we use our implementation of an OpenTracing tracer, but we could switch this to OpenTelemetry, and it seems relatively simple to do that. But before going ahead with the effort of doing that, I would like to know if there is a contact that we can discuss it with before going ahead with this effort.
D
So by reading this document, you would understand that customers build their class based on your class. They still have their original Lambda function, but once the Lambda runtime is running, it will dynamically generate a new class, and this new class will fuse your instrumentation with the customer's handler function. The new class, in Java, is generated at runtime, like an anonymous class.
D
So I think Isaac is very familiar with Python instrumentation. The Python auto-instrumentation needs to add a prefix before Python runs the customer's code, right? So through the Lambda layer — what we call the Lambda native wrapper — we can change the entry handler of the Lambda.
D
So this is the basics of the top layer of wrapper; we call that the Lambda native wrapper. That's the reason why, in our Python implementation, we can implement a near-perfect auto-instrumentation: we don't need to change anything — no need to replace their Lambda handler — because we have done everything in this Lambda native wrapper. So this is what I call the three layers of wrapper.
D
So the bottom layer is just for instrumentation: it wraps begin and end segments around the customer's original function. With the middle layer the customer doesn't need to do anything; it can automatically generate an anonymous class at runtime. The top layer of wrapper can hide everything from the user; there's no need to know what happens in the background.
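A rough sketch of the three layers D summarizes — customer handler at the bottom, a begin/end instrumentation wrapper in the middle, and a "native wrapper" entry point on top that looks up and decorates the real handler dynamically so the customer changes nothing. All names and mechanics here are illustrative, not the actual implementation:

```python
spans = []  # stand-in for the exported telemetry

# Layer 1 (bottom): the customer's original handler, untouched.
def customer_handler(event, context):
    return {"status": 200, "event": event}

# Layer 2 (middle): instrumentation wrapper — begin/end around the call.
def instrument(handler):
    def wrapped(event, context):
        spans.append(("begin", handler.__name__))
        try:
            return handler(event, context)
        finally:
            spans.append(("end", handler.__name__))
    return wrapped

# Layer 3 (top): the "native wrapper" entry point. The runtime is pointed
# at this function instead of the customer's handler; it resolves the real
# handler dynamically and decorates it, so no customer code changes.
def native_wrapper(event, context, real_handler_name="customer_handler"):
    real_handler = globals()[real_handler_name]  # dynamic lookup at runtime
    return instrument(real_handler)(event, context)
```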
C
One thing for .NET: we can't instrument in the same way as Java; we would need to change the runtime. That's why you see that limitation of not instrumenting the lower levels of the call, because in Java you can add a reference and you can hook yourself in to generate the code — that's how the instrumentation is done.
C
Last week you presented the doc and — I forgot her name — she said that she was going to try to share it with the rest of the group. Oh.
D
Yeah, I'm not sure why she's not attending this meeting; she had a discussion with me one hour ago. She said the doc is almost done but still needs to be polished a little bit. She said that at the next SIG meeting she will post that proposal. I'm not sure if you know that she sent something in.
D
For example, take this question he asked: is it OpenTelemetry supporting Lambda, or AWS enabling Lambda monitoring using OpenTelemetry? For this answer, I can see both, because the Lambda OpenTelemetry support is not only about instrumenting the Lambda function; we also need to think about how to launch OpenTelemetry in Lambda. Lambda is a library to be instrumented, and it's also an environment that OpenTelemetry needs to run in.
D
So besides instrumenting the user's Lambda handler function, we also need to think about how to launch OpenTelemetry in the Lambda environment. That's why we provide a collector extension layer and the Python SDK layer.
C
And unrelated to this, but about the operating of the group: I saw that there is an issue open in the OpenTelemetry community discussing the creation, or not, of a SIG. But I think, for the time being, even if that's not decided, we should have something on the README. Because the other day I had somebody asking me about this group, and I told them, "oh, go to the community repo," and the person didn't find it. I said, oh, then I…
C
I noticed it's in the calendar — that's how I knew about the meeting — but we should add at least something. I can do that later this week, but we should add a note to the community repo.
D
Yeah, sorry — this is my first open-source experience. I don't know how to hold the meeting, how to notify everyone and broadcast the information; all of these things are done by Alolita. So I think I can check with her, and if you can help us do that, that's better. I'll also tell her a little bit about this concern.
D
I have an idea — I talked about that before — about the OpenTelemetry Collector. Alex helped us create the initial version of a collector layer, but the problem is the size of that layer. I explained this before: in Lambda, because they use the cgroup memory property, memory usage is calculated from both RSS and the cache. That means, if our layer size is 100 megabytes, even if the RSS is only 10 megabytes, the Lambda metrics will finally say the memory usage is 110 megabytes. So that's the reason.
A
A custom build, yeah. What I did is I ended up building it just from the main collector repository — or the contrib repo — but I didn't try to strip out any of the functionality. I don't know if, for example, we could strip out some of the exporters, or maybe some of the receivers; it feels like that's usually the place where we could probably cut some of the bulk there.
B
I don't know if you guys have seen it, but there's a script, or a website, that produces a collector build if you want; that might be able to give you a build file. But basically, I think you override the main method in the contrib distribution, or whichever one you want, remove the parts you don't want, and recompile it. Hopefully.
D
Yeah, that's what we are doing now. In our AWS collector layer we customize the third-party exporters; we just leave around five of the whole set of exporters. But the thing is, from the customer's side, if they want to add their own third parties, it's very hard for them: they would have to know how to do that and modify the source code.
D
I have no idea how we can provide customers an easy solution — for example, just building the layer based on their configuration. I know that Golang is a statically compiled language, so it's very hard to do that, but if you guys have any experience or ideas — yeah, that would be a very useful, very important feature.
A
Yeah, one way you could do it is: you could just produce a different Lambda extension layer for each exporter and each receiver combination. That might create a lot of layers, but at least it would give you a smaller layer, right?
D
Yes, but the thing is — yeah, a simple idea is that we can use a template to generate the Go source code, right? But that's not safe; we are not sure if it can really pass the compile.
C
Yeah, but I think we could — then you need the source, kind of, but with the way that Go works with the build, I think we can have a tool to build the layer with the minimal stuff needed. But yeah, at least from where I'm standing, I'm thinking that's not a priority for me, you know.
B
Can you guys see the link I put in the chat? I think that's kind of what you're asking for.
B
And it will build a binary for you. I don't know if this is officially endorsed by OpenTelemetry yet, but it exists, because other people have the same problem. It's a little bit tough, yeah.
D
And I think — I believe at the next meeting we will share our detailed design, yeah. I don't know why Alolita hasn't finished that, but I will push her.
B
I guess this is a question for both Lei and Alex, but I was wondering if either of the things that you guys built was able to support metrics, and then — I guess more from a back-end perspective — if you are planning to support them, what is the plan for dealing with the cardinality?
B
So we're not just talking about Lambda — we're talking about containers or what have you — but essentially what we have to do is have some sort of identifier on all of the instances that are running, at the container level or what have you. I'm not sure how it works exactly on Lambda, but you would have to do either an aggregation somewhere, or you would have to be able to merge those series in the back end, and I was just wondering if you guys have looked into that. Okay.
D
Okay, I can say that I have a design doc for metrics in Lambda, but this is only for the AWS back end, which is CloudWatch. As far as I know, for the AWS side, what we are doing is server-side aggregation, right? We cannot send too many metrics to the back end; the back end does not accept that and will throttle it if we send too many requests, which would either inflate the customer's billing or throttle the back end.
D
So what we are doing is — really, it's not a common solution. Do you know that Lambda has an agent for logs? We can call that the log agent; that's the reason why, in CloudWatch Logs, you can see the customer's Lambda logs, right? The log agent will never be frozen, because it's not in the scope of the same freeze signal as the Lambda container.
D
So in this way we can make sure that the logs will never be delayed and never be frozen. So our design is: we will convert metrics to logs and send them through the log interface to CloudWatch Logs. Then CloudWatch Logs will convert the metrics back — from JSON log records to metrics. So we don't have to think too much about the aggregation and batching, because the Lambda log agent will batch the metrics-as-logs and send them to CloudWatch.
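The metrics-to-logs path D describes matches CloudWatch's Embedded Metric Format (EMF): a JSON log line carrying an `_aws` metadata block that CloudWatch Logs converts back into metrics. A minimal sketch of producing such a line — the helper itself is hypothetical, but the `_aws`/`CloudWatchMetrics` structure is the documented EMF shape:

```python
import json
import time

def to_emf(namespace, name, value, unit="Milliseconds", dimensions=None):
    """Render one metric as a CloudWatch Embedded Metric Format log line.

    A line in this shape, written to stdout from a Lambda, is picked up
    by the log agent and turned back into a metric by CloudWatch Logs.
    """
    dimensions = dimensions or {}
    record = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # epoch milliseconds
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [list(dimensions.keys())],
                "Metrics": [{"Name": name, "Unit": unit}],
            }],
        },
        name: value,  # the metric value lives at the top level
    }
    record.update(dimensions)  # dimension values live at the top level too
    return json.dumps(record)
```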
D
So basically, we avoided this problem. But for an external company, what I know is that they have to create a long-running service outside of the Lambda container. This long-running service needs to batch the application metrics from the instrumented function, so there must be middleware that aggregates the metrics.
B
Yeah — so, for instance, are there any plans to support, like, OTLP to the collector and then Prometheus or something like that?
D
It's very hard to do batching in the collector in Lambda because — think about it — there are two processes running in the Lambda container. One is the Lambda runtime, which is the customer application plus the OpenTelemetry SDK, right? That is one process. The second process is the collector. Once Lambda gets frozen, it's very hard to control how you flush the data from the SDK to the collector, because from the collector's side it does not know whether the SDK has already finished the flush, right? So that's the reason, in our design…
D
Lambda will give us a new feature called the idle event. With this — I won't say we can solve this problem now, but in the long term I'd say we can solve it. The theory is: you know that even Lambda does not know whether, at the next moment, it will be frozen or not, because Lambda cannot control the customer, right?
D
It will be frozen, so it cannot predict; it cannot tell you, "I'm going to freeze — before that, please flush away." It cannot do that, because it does not know if it will be frozen. But it does know that it has been frozen for, say, 60 seconds — one minute. So from the Lambda side it can regularly check whether the container status is frozen or not.
D
If it's frozen for over one minute, it will send the event to the container and thaw the environment. So from the OpenTelemetry SDK we can receive an event that says: okay, I have been asleep for a long time, please flush your data. Then the SDK can flush the data to the collector, but the collector still has no batching processing; this special collector will synchronously forward the SDK data to the back end. That's our long-term solution. In this way the aggregation and the batching still work in the SDK, by design.
D
What I described is the short-term solution, before Lambda provides us the idle event. If Lambda provides us the idle event, I think we'd use the same approach: we will register for this event, called the idle event, and when we receive it, the SDK will trigger the force-flush method — okay, right — flushing the metrics and the traces in the SDK buffer.
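The idle-event handling described above could be sketched like this — a stand-in processor with a buffer and a force-flush, triggered when the reported frozen time crosses a threshold. Everything here is hypothetical scaffolding, since the idle event itself was only a proposed Lambda feature at the time:

```python
class FakeSpanProcessor:
    """Minimal stand-in for an SDK processor holding buffered telemetry."""
    def __init__(self):
        self.buffer = []    # metrics/traces not yet exported
        self.exported = []  # what has reached the exporter

    def on_end(self, item):
        self.buffer.append(item)

    def force_flush(self):
        # Synchronously hand everything to the exporter and clear the buffer.
        self.exported.extend(self.buffer)
        self.buffer.clear()
        return True

def on_idle_event(processor, frozen_seconds, threshold=60):
    """Handle the (proposed) idle event: if the container has been frozen
    long enough, force-flush whatever is still sitting in the SDK buffer."""
    if frozen_seconds >= threshold:
        return processor.force_flush()
    return False
```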
B
Yeah, I think we have it in Python — I think Alex would also know — but I think the only other issue is that it will invoke the observers. So if anybody has, like, a value observer, value recorder — sorry: value observer, up-down sum counter, etc. — I believe, at least in Python, that if you do a force flush it will do that. So I guess it's okay, but it might take a while to finish, and it might also mess up the interval.
B
So if you're expecting it to collect every 10 seconds — or you're expecting your async instrument to go every 10 seconds, or whatever you've configured — it's going to change the values in the intervals, right?
A
I'm curious: is there any advantage that you've seen to batching and force-flushing versus just using, like, a simple exporter that would just flush?
D
Frankly, at the beginning, before we made this decision, we thought this issue could be easily resolved by using the simple processor, right? But there are two things. One thing is that the simple processor has no batching; we don't want that — we want to at least have some kind of batching.
D
At least now we can batch the data within one invocation. Think about it: in one invocation you call the AWS SDK 10 times, and those spans will be batched together, at least. That's one reason we don't use the simple processor: if we used the simple processor, each piece of telemetry data would generate one request to the back end, right? So the simple processor is not a good solution. The second reason, the most important: the simple processor cannot guarantee the call is synchronous. The simple processor, as far as I know, in the Java SDK also runs in another thread, so it also can be frozen.
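The alternative D argues for — batching per invocation, but exporting synchronously in the caller's thread so nothing is left pending when the container freezes — can be sketched as follows. This is an illustrative Python model, not any SDK's actual processor:

```python
class SynchronousBatchProcessor:
    """Buffer spans in the caller's thread and export them synchronously.

    No worker thread is started, so nothing can be left half-done when
    the Lambda container is frozen: each export either completed inside
    the handler's own call stack or it never started.
    """
    def __init__(self, exporter, max_batch=10):
        self.exporter = exporter    # callable taking a list of spans
        self.max_batch = max_batch
        self.buffer = []

    def on_end(self, span):
        self.buffer.append(span)
        if len(self.buffer) >= self.max_batch:
            self.flush()

    def flush(self):
        if self.buffer:
            self.exporter(list(self.buffer))  # one back-end request per batch
            self.buffer.clear()

# One invocation producing seven spans, batched three at a time.
requests = []
processor = SynchronousBatchProcessor(requests.append, max_batch=3)
for i in range(7):
    processor.on_end(f"span-{i}")
processor.flush()  # end-of-invocation flush for the remainder
```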
A
Simple enough, right. Although, I guess, in your first case, where you're talking about being able to still batch before you send it to the back end: if you're using the collector layer, for example, where the batching happens at the collector, it doesn't really make that much of a difference, right, if you're flushing it every time — there's not that much added latency. I guess that's what I'm saying.
A
Yeah, so I guess that's what I mean: if, instead of batching in the SDK, you're batching at the collector — and your collector is embedded inside your Lambda with the extension layer — I'm curious if you've tested whether or not there's enough of a performance difference that it was worth going the batching route or not.
D
It's not about performance. If we add a batching processor incorrectly, the user will see the data delayed if the environment is frozen. It's very easy to reproduce this issue; you can try it: just invoke the Lambda function once, and you definitely will not see any telemetry data in the back end, because the data is held in the batch processor.
A
So is that what you're saying — that, regardless of whether you're talking about the layer that includes the OpenTelemetry SDK or the layer that includes a collector binary, both of those would be frozen within the Lambda, even if it's not inside the runtime?
D
Yeah, because in the same container they share the same cgroup.
C
Yeah, and just to — I already mentioned this last week — in the case of that call that I showed earlier, instead of having a flush, we opted to send the batch when the root span is closed, right.
C
Yeah — no, but in that case, because we are receiving the call, the thread that we have — it's actually not the thread, because the call itself — the logical thread in .NET is not the physical thread, but the logical thread is the same, you know. So, since there is nothing being created outside of what the user starts, when the root span finishes, we send everything at once.
C
But you are right: if there was something that could be created outside of that, then we lose control completely, and then the flush doesn't happen.
B
I didn't follow what you were saying with the simple span processor and the freezing.
D
Oh, okay. So, at the beginning, long ago, I thought the simple processor was very simple — that it would not create a new thread — but, you know, the simple processor uses another thread. In Java, at least, if you use the simple processor, it will be running in another thread, and if it's in another thread, it might be frozen.
D
Okay, yeah — because my thread has already finished, but the asynchronous thread, the simple processor, we are not sure if it has finished or not. If the Lambda function thinks, "I'm already done," it returns the result to the Lambda service, and then Lambda just says, "okay, you can be frozen," and it freezes the environment.
D
In this way we can make sure that the path from the SDK side to the back end is synchronous. But make sure you don't create a new thread inside your exporter: if you create a new thread in the exporter, we cannot make sure that the path stays that simple. Fortunately, as far as I know, so far through my tests most of the exporters don't use multiple threads, because an exporter just transforms — translates — the data, transforms the format, so it doesn't need multiple threads.
C
Yeah, I was going to say that the receivers — usually what they do is rely on whatever is the, let's say, network stack that you are using, but as soon as they get the data, they are single-threaded, you know. So, yeah.
D
We had an internal discussion about this. You must be familiar with him, right? He helped us; he's a very good contributor.
D
He proposed to put the SDK-related code in each language's SDK repo, but not in the Lambda repo. But my suggestion is that we put everything in the Lambda repo except the instrumentation part, because my theory is that Lambda is not only a library to instrument — Lambda is also an environment. That's different from the other things; it's not like Apache Tomcat or aiohttp, which are libraries. If it's just a library, we just need to focus on the instrumentation, so the SDK repo is okay; but here we also need to think about how to launch OpenTelemetry in Lambda.
A
It makes sense to me. The commonality between the different languages here is probably going to be higher than the commonality between, like, different instrumentation libraries, for example, right? Like, I wouldn't expect aiohttp in Python to exist in a common repo for asynchronous HTTP servers, or whatever, for all the languages — I feel like that would just be a mess. But for the Lambda code, it feels like a lot of the same functionality will be replicated across the languages.
A
So it makes sense to have that in here, and I agree. I think some of the nice things we'll get out of it, with the CI kind of commonalities, will be really helpful. So.
D
Yeah, but I would also like to put a lot of the instrumentation into the SDK level, because anyway it's a library to be instrumented, right? But in my personal implementation phase I found that if I put the code into the Lambda repo, it's easier for the developer, because I don't need to wait for the release of the SDK. For example, the Python release period might be three or four weeks; I don't want to wait that long, because my code is not stable.
A
Yeah, we found the same thing with some of the contributions that we put in. We donated the OpenTelemetry Prometheus sidecar, but we ended up doing a bunch more changes in our Lightstep-controlled repo, just so that we can get the kinks worked out; then, once it's not changing anymore, it's easier to have it in a place that's a little bit slower.