From YouTube: 2021-04-08 meeting
A
I found I missed the invitation to join the organization, so I think that's probably the reason I cannot accept the Lambda repository invitation, so I'm asking about the recent documentation. Let me join the OpenTelemetry organization first; after that I can probably join.
B
You know, I've been spending so much time on the Python side of things that I've had zero bandwidth to spend anywhere else.
A
Yeah, because right now we have no team member taking part in the Python SIG. So if you have any good news, please tell us.
B
Any good news? Well, the 1.0 was released last week, and now we're on to, I guess, the next series of fixes that are happening. Yeah, that's about the only news I have from the Python SIG.
B
Well, the contrib packages are only instrumentations, so those are at 0.19b0, I think, is the version number. But the core repo — the SDK and the API, and some of the exporters — they're all at the 1.0 level.
A
So AWS plans to release the Python Lambda layer at the end of April. Can we use core 1.0 plus contrib 0.19?
B
Yeah, yeah, that shouldn't be a problem. I saw that you had raised an issue regarding the resource detector. That's all right.
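For reference, the version pairing discussed here (core at 1.0, instrumentations still on the 0.19b0 pre-release line) might be pinned roughly like this — a sketch; the exact set of instrumentation packages needed by the layer is an assumption:

```
opentelemetry-api==1.0.0
opentelemetry-sdk==1.0.0
opentelemetry-exporter-otlp==1.0.0
opentelemetry-instrumentation-botocore==0.19b0
```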
A
Yes, because you know we have a Lambda resource detector which will set some attributes like region, but I found — yeah, you know, at the end of last year I had coded everything in the Lambda wrapper, but now, because we upstreamed the code to OpenTelemetry, we have to make everything configurable.
A
So I found I cannot inject the Lambda resource detector. Now the basic functionality works, but it has no region resource attribute. I don't know how to do that, so I researched it.
A
Well, I asked this question before: how does Java handle that? Java has an SPI interface, so it works automatically — the user doesn't have to handle anything. But for Python, probably we can load this by entry point, so I read about it. This idea means that we should...
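The entry-point idea mentioned above might look roughly like this — a minimal sketch, where the group name `opentelemetry_resource_detector` and the `detect()` method are assumptions, not necessarily what opentelemetry-python actually standardized:

```python
# Sketch: discover resource detectors advertised via package entry points,
# instead of hard-coding them in the Lambda wrapper.
# The group name "opentelemetry_resource_detector" is a made-up example.
from importlib.metadata import entry_points


def load_resource_attributes(group="opentelemetry_resource_detector"):
    """Merge attributes from every detector registered under `group`."""
    attributes = {}
    try:
        eps = entry_points(group=group)        # Python 3.10+ selection API
    except TypeError:
        eps = entry_points().get(group, [])    # Python 3.8/3.9 dict interface
    for ep in eps:
        detector_cls = ep.load()               # e.g. a LambdaResourceDetector
        attributes.update(detector_cls().detect())
    return attributes
```

A layer could then call this at startup and pass the merged attributes into the SDK's `Resource`, so downstream distros can plug in their own detectors just by declaring an entry point in their package metadata.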
C
But even before that — I used a git-package tool; it's a tool to be able to just depend on a git branch in npm — so the Lambda layer is working. It works surprisingly well. I feel so dumb for working on Java before, because Java and Lambda just don't work well together; now I know what Lambda is supposed to look like. Yeah, the patching infrastructure is... I wonder if Python can maybe get some hints from it, I don't know. I haven't tried the Python one, but the JavaScript...
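For context, npm can natively depend on a git branch in package.json (the org, repo, and branch below are placeholders); separate tooling of the kind mentioned is typically only needed when the dependency lives in a subdirectory of a monorepo, which plain git URLs cannot target:

```json
{
  "dependencies": {
    "@opentelemetry/example-instrumentation": "github:example-org/example-repo#my-branch"
  }
}
```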
E
Alternatively — we've been taking a look into the .NET implementation this week, and that has just been a giant, giant headache.
A
Yeah, because it's really different from the other languages, either from the language perspective or from the Lambda side. For a long time I've been looking at the source code of Lambda .NET; it's pretty different from the other languages. For example, the wrapper — I mean the Lambda exec wrapper — the mechanism is different from the other languages like Java as well.
D
Yeah, I came here to see what's going on. I also kind of wonder if we're looking into — a couple of people have been asking how we scrape Prometheus from Lambda, generally. As a question: there are push-gateway-type solutions. I wonder if, for this SIG, Prometheus is kind of a long-term thing that I can suggest, or can we start looking into it just a bit?
D
I guess — yeah, so lots of people probably have no idea what I'm talking about. Prometheus is a pull-based model: what you do is serve an HTTP server, publish an endpoint, and then Prometheus comes in and scrapes that. It kind of doesn't work well with the Lambda model, because all of your Lambda functions are ephemeral functions — at the end of the day, they're not around for a very long time.
D
So with the push model it's just much easier, because from the Lambda it's easy to push the metrics that you want to push, and then once you're done, you're done. With the pull model, on the other hand, you have to make sure that you're pulling before the end of the execution.
D
Sorry — you have to pull... there's basically some lifecycle management that you have to do. So there are some of our customers...
D
...who just want to pull Prometheus metrics, because there are a lot of framework integrations — there are a couple of things that only support Prometheus metrics — so that kind of became a challenge. The typical model is having a layer doing the pulls — doing the scrape — and once in a while reporting what it actually scraped.
D
So we will probably be looking into something similar. I just don't know enough about the lifecycle events to be able to see if the extension will fit into that model, and maybe it would be nice for us to evaluate whether the layer would be able to support that, because what we want to do is have one collector layer that does everything. Potentially we can always extend...
D
...how the entire mechanism works, but if one layer just solves this problem, that would be the ideal solution. So I just came here to ask you if you have any context about this problem; plus, we should probably designate one person to take a look at that and suggest something, right?
A
All right, so this question is regarding Prometheus backend support. I've heard these words before — pull and push, two different modes for Prometheus — so I don't know the exact details, but just from the name it seems Prometheus metrics are actively pulled by the collector, right?
D
Yeah, so that's actually a bit — I can explain the whole flow. Users will be publishing some Prometheus metrics that the collector is going to come and pull from, but we support a push-like protocol, which is called remote write. So Prometheus has this API called remote write, which actually kind of looks like push. Once you scrape in the collector, the collector currently can speak remote write in the exporters.
D
It kind of becomes more of this push type of model, so all we have to do is make sure that before the end of the execution — sorry, at the end of the execution — we have pulled at least once. And we have to figure out: should it be at the end, or should it be — you know, that's also a question. Whether it should be configurable would be the other question. We should be able to scrape...
D
...at least, let's say, at the end of the function, and then the collector should be able to turn it into this remote write, and it will be able to push it to remote-write-capable Prometheus instances.
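A scrape-then-remote-write pipeline of the kind described could be sketched in collector configuration roughly like this — endpoints, ports, and the scrape interval are placeholders, and the exact component options in the collector version being discussed may differ:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "lambda-local"          # scrape the function's own endpoint
          scrape_interval: 1s               # aggressive, since executions are short
          static_configs:
            - targets: ["localhost:9464"]   # placeholder metrics endpoint

exporters:
  prometheusremotewrite:
    endpoint: "https://prometheus.example.com/api/v1/write"  # placeholder

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
```

The open lifecycle question from the discussion is when the scrape fires: a fixed interval may miss the end of a short invocation, which is why forcing at least one scrape at end-of-execution comes up.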
D
It would... probably I'll try to find someone on our team to take a look at this entire thing — how the layer works, what type of lifecycle we need to be considering, and whether it should be configurable where exactly we want to scrape, right? Because you want to be able to scrape here or there, or, with the push...
D
...propose at this point, but I'll make sure that somebody's looking at that, so whatever you do in the layer is compatible with that in the long term. That's the only risk: if we end up not covering that case, we may need another layer, which would not be great.
D
Yeah, that's true — OpenTelemetry metrics already have that; you need to do some extra work too. Yeah, with Prometheus it's more of — it's a different problem, because there's no "I'm done with publishing my metrics" type of thing. Prometheus is very good if you have a server running around for a very long time, because you periodically come and ask for the metrics — scrape the same metrics — and everything is cumulative, so you're just catching up with...
D
...what's produced over time. That just doesn't work well with ephemeral stuff like Lambda functions, because you produce and then maybe the instance will be gone the next time somebody executes it. So I'll find someone to take a look. Yeah, that's all I have.
A
Prometheus — that's the solution for AWS CloudWatch only: in every Lambda sandbox it will launch a log agent. This log agent will never be frozen, even when the Lambda sandbox itself is frozen, so this log agent always works. So we write metrics into the log, and this log agent will collect the log and convert it to AWS CloudWatch in EMF format — that's a special format for metrics. So yeah, actually...
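The CloudWatch Embedded Metric Format (EMF) mentioned here is a JSON structure printed to the log stream; a minimal sketch, where the namespace, metric name, and dimension value are made up for illustration:

```python
import json
import time


def emf_log_line(namespace, metric_name, value, unit="Count"):
    """Build a CloudWatch EMF record. In Lambda, printing this to stdout
    lands it in the log stream, where CloudWatch extracts it as a metric."""
    return json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),   # epoch milliseconds
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [["FunctionName"]],   # one dimension set
                "Metrics": [{"Name": metric_name, "Unit": unit}],
            }],
        },
        "FunctionName": "example-function",         # placeholder dimension value
        metric_name: value,                         # the metric value itself
    })


print(emf_log_line("ExampleApp", "Invocations", 1))
```

Because the log agent does the delivery, this path survives the sandbox freeze that breaks pull-based scraping.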
D
It's already available — that's already possible. The case I'm describing is more like: people have all these frameworks that they didn't instrument themselves and they can't change that code, and that framework is already producing only Prometheus metrics, and they want to find a way to be able to send those metrics to Prometheus.
D
So I'll tell you one thing — I mean, this is recorded, but — actually they're looking into natively supporting the collector at some point, because there's all this difficulty with layers and all that stuff. But, you know, we're considering it; it's just not clear yet.
A
Yeah, before this topic I want to introduce something that happened in these two weeks. It started because I had a comment on a PR — I was asking if we can use the AOC as the collector in the upstream collector extension; I mean, we put the AOC as a dependency of OpenTelemetry Lambda.
A
So the answer — yes, the answer there is no. We cannot do that, because we don't want to...
A
...have upstream use any non-vendor-neutral dependency, because the AOC is very AWS-styled. But from the Amazon side, Amazon will not allow publishing a Lambda layer without the ADOT brand.
A
So that's the reason we have to split it into two different layers: the upstream layer, which is OpenTelemetry Lambda — it can use the collector contrib or the collector core. But in our downstream at AWS, we will release another public Lambda layer, which will use the ADOT collector as the collector extension.
A
So in the upstream collector extension, I think it will — it corresponds to contrib. I mean, we can still put the AWS exporter and the Datadog exporter into the public Lambda layer, but anything ADOT-specific we cannot add.
A
There are two ways. One way is we put the contrib collector as a dependency of the collector extension, and then cherry-pick the part of the exporters that we want when we build the collector extension. The second way is we just use the core-repo collector as a dependency, and then cherry-pick the third-party components we want from contrib.
A
The results are the same. ... I see, yeah. So I prefer the second way, because I had a discussion with an ADOT collector developer. I asked why he does not use collector contrib as a dependency and then cherry-pick Datadog, AWS, whatever he wants. He said because it's very troublesome to resolve conflicts, so he uses the other way: he uses the core repo as a dependency, then cherry-picks the third-party components he wants from contrib separately.
A
And after that — it doesn't mean that we will build all of collector contrib into the extension; we will cherry-pick the code in another place. Yeah, here it is: for example, we add the contrib collector as a dependency, but we only pick this component — we only build this component into the collector extension.
A
So far, we think we can add lots of them, except the Prometheus one. You see, especially the Prometheus exporter is very — oh sorry, not the Prometheus exporter, the Prometheus receiver. If we don't add the Prometheus receiver here, that's okay.
D
By the way, about the Prometheus receiver: we want to do some improvements in terms of rewriting a bunch of it — there are so many OpenCensus imports and all that stuff in it. Do you have any specific size requirements or resource requirements that might help us in the long term? We want to improve both the receiver and the exporter. One thing that I've also seen is that you are using the regular pull-based Prometheus exporter.
D
Maybe you want to instead use the remote write exporter, which does this push thing, which will be required for the Lambda case. Because when you use the Prometheus exporter, it just serves another web server with an endpoint where Prometheus needs to come and scrape. Maybe remote write would be much better; I'll probably file some issues.
D
Maybe that's the better way to go. But we will also be planning to improve some of these Prometheus-related components. If it's too much overhead or something — if you have specific requirements for us — I would love to ask the team to take a look at those.
A
Because this collector is running inside the Lambda sandbox, we suppose that only OTLP gRPC makes sense, right? Do we need any other receiver? So that's the reason we removed the Prometheus receiver: the Prometheus receiver alone would take 40 megabytes. We don't know the reason, yeah.
D
Yeah, the Prometheus receiver, unfortunately, imports Prometheus — it reuses a lot of things Prometheus does — but the Prometheus libraries are organized not for external consumers but for the Prometheus server. That's why it just pulls in the entire world; maybe that's where it's becoming so big.
D
Yeah, that's what I was trying to explain earlier in the meeting: there are people who actually care about it in the context of Lambda. They want to be able to scrape Prometheus metrics from Lambda functions. Even though it's a difficult problem, we don't have to support it right now, but eventually we may take a look at that, especially if it's feasible. That's why I was wondering, hey, if you have any suggestions for us in terms of what the size or...
D
...how much resources it can use in terms of memory should be. This would have been a good time, I think, to have those conversations, but we can come back to this later — it's not urgent. We are not there yet to be able to support that in Lambda.
A
So that's the reason why we cannot add the Prometheus receiver into the Lambda layer: most users don't use it, but we can only publish the one layer. Yeah, so far — suppose a user really cares about the Prometheus receiver — we can still provide a solution: the customer can build their own customized Lambda layer by themselves.
D
Yeah, that's what I was asking: hey, will we ever have a separate story for Prometheus scraping? It will probably be the case because of the size limitations and everything. Anyway, this topic requires a person to be designated, and that person needs to make some decisions, I think — in case we care about Prometheus scraping.
A
Yeah, and we can raise this issue to the collector SIG — ask why the receiver size is so big. And there is a tool, the collector builder, that can build a collector binary based on the customer's configuration, so probably that's also a solution for the customer.
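The collector builder mentioned works from a manifest that lists only the components you want compiled in; a rough sketch — the module paths, versions, and exact manifest keys are illustrative assumptions and should be checked against the builder's own documentation:

```yaml
dist:
  name: lambda-collector
  description: Minimal collector build for the Lambda extension (sketch)

receivers:
  - gomod: "go.opentelemetry.io/collector/receiver/otlpreceiver v0.25.0"

exporters:
  - gomod: "github.com/open-telemetry/opentelemetry-collector-contrib/exporter/awsxrayexporter v0.25.0"
```

A manifest like this is how a customer could produce a custom layer binary that includes, say, the Prometheus receiver, without everyone else paying its size cost.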
D
Yeah, yeah — there's this project, I'm not sure if you've seen it. It's kind of a builder: you pick whatever components you care about, and on the fly it's going to generate you a binary. So maybe that's the way to go in the long term, but it's not going to fit well with our current approach. This is a very big topic to discuss, maybe, in this meeting — but, you know, the challenges are there at least.
A
Yeah, yes — but I don't have to try all of them; I just pick a part of them, for example.
D
One more thing that people do: they run the collector as a service type of thing. So, you know, they talk OTLP — running the collector as a standalone thing that only talks OTLP, and it becomes their ingestion point. So if we follow this model, are we expecting people to do something like that in case they want to export to different places? They can always run collectors somewhere else; they can use OTLP to be able to...
A
One way — when I worked on the design of the Lambda project — because you know that Lambda sandboxes will be frozen randomly and we cannot predict that, if we can have a long-running service outside of Lambda, that can solve our problem. But the thing is: who will pay for it if we set up that long-running service? Yeah — sorry, that's a question.
A
Cool — anything else?
A
Okay, so I'm not sure if you know that AWS has an ADOT testing framework. It's a test framework for verifying the collector and the SDKs, but it uses AWS backends, like X-Ray and CloudWatch, as the basic infrastructure.
A
Okay, here is a diagram to describe the relationship between upstream and downstream. The left side is OpenTelemetry Lambda upstream: it will use the original SDK and collector as dependencies, build the Lambda extension and SDK into a layer, and publish it to the CNCF account. Sadly, I don't know how to implement this CD yet, because we still have no backend to verify the upstream layer. But in downstream, we can use X-Ray and CloudWatch to run the integration tests.
A
The difference here is we will replace the collector with the ADOT collector and replace the SDK with the ADOT SDK. Actually, right now this applies only to the Java agent, because for the other languages' SDK layers we still use the original SDK — for example, we still use the upstream SDK — but anyway, we will build the layer in downstream. And for the CD, I'll just give a quick example here. For example, if we want to run an integration test, we have two kinds of folders: one folder is for building the Lambda layer.
A
Yeah, that's the basic idea, because the sample app and the build-and-deploy-layer parts are based on the developer's preference: maybe it's made with Terraform, maybe it's made with SAM — I mean, AWS CloudFormation — so we need to create an adapter between this test framework, the CD workflow, and the exact implementation.
A
So that's why yesterday I posted a PR about the CD convention: I hope that in upstream, every sample-app and deploy-layer folder would contain — should contain — a shell script. That's a one-click script: if you call it, it will automatically deploy the layer and deploy the sample app. That's the basic idea.
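The one-click convention described could look roughly like this — a sketch only; the stack name, the choice of SAM as the deployment tool, and the flags are assumptions, since each folder is free to implement the script with whatever tool its author prefers:

```shell
#!/usr/bin/env bash
# Hypothetical one-click deploy script living inside a sample-app folder.
# Assumes the AWS SAM CLI is installed and AWS credentials are configured.
set -euo pipefail

sam build                                 # build the layer and the sample app
sam deploy \
  --stack-name otel-lambda-sample \
  --capabilities CAPABILITY_IAM \
  --resolve-s3                            # let SAM create the artifact bucket
```

The point of the convention is the uniform entry point, not the tool: as long as every folder exposes the same script name, the CD workflow can invoke it without knowing whether Terraform, SAM, or raw CloudFormation sits behind it.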
A
Yeah, but it doesn't need upstream help, because right now upstream still has no CD implementation. But sooner or later it will help if we can build this uniform interface — that makes sense in the long term, and in our downstream we will reuse it. For this adapter shell script, I defined the basic convention, but I haven't posted it to upstream yet. First, let's implement the first one: this is the most useful one, and it also provides a better user experience.
A
There is a basic implementation for SAM — for AWS CloudFormation. I suppose that we only need to implement the two adapters so far. One...