From YouTube: 2021-10-14 meeting
A: Okay, I guess we have the virtual participants, so shall we begin? All right. Well, before we begin, we have a new participant here. Nikolai, welcome to this meeting. This is the Python SIG meeting. Nikolai has made several contributions to our project in the past, and he is now applying for membership.
A: Usually, with new people that come, we ask if they would like to introduce themselves: let us know a little bit about their background and what got them involved in this project. If you please.
G: And I've been programming for quite a while. I'm working at a startup company in Israel named Oxide, doing our best for the cybersecurity of the cloud, and we have found quite a lot of interest in the project, so we figured, might as well.
D: I can go next. Hi, I'm Leighton. I think you and I spoke before. I'm also one of the maintainers; I work for Microsoft. I've been on this project for probably over two years now, so I'm getting sick of it. I'm just kidding. But yeah, I've been around since the dawn of time, and a lot of the old stuff that you will see probably has a reason for it, and I will probably know why we did it that way. That's all.
D: I can go next. Hey, I'm Alex. I'm one of the approvers on the project, a former maintainer. I also work for Lightstep, and I've been around this project just about as long as Leighton, so forever. Happy to have you on board; it's a fun group to participate in.
F: I can go. Hi, my name is Aaron. I work at Google. I've been working on this project not quite as long as Alex and Leighton, but probably a little over a year. I'm an approver, and I've also been involved a little bit in the metrics SIG; I'm currently working on the metrics stuff in Python here. Also, Leighton and I are OpenCensus maintainers, so there's that too. Is anybody left? Nathaniel? Oh yeah, there's a few people.
K: I've been working for over a year on OTel Python. I work at Splunk and have been involved with other OTel projects before, mainly the Collector, but right now I mainly focus on Python.
M: Sorry, that's always a big revelation. Hi, my name is Nathaniel. I've been working…
D: All right, on that note, I don't know if Srikanth wants to introduce himself or not, or if we should move on with the agenda. It's all good otherwise.
D: Srikanth, did you want to introduce yourself, or we can just move on?
J: Hey, sorry, I was on mute. Hey Nick, Srikanth here. I've been participating in this project for about a year. I initially wanted to fix some broken links but somehow ended up doing a lot more than that. I'm also one of the approvers.
D: Nikolai is signing up for the contributor process. His recent contributions are related to an instrumentation that he added, and I asked him if he wanted to be a code owner as part of the process that we agreed on a couple of months ago. The component owners workflow works like this: it automatically assigns people to issues that relate to the changes they make for their instrumentations. But in order for their approvals to actually count, they have to become members of the OpenTelemetry CNCF community and go through that whole open-an-issue, get-two-sponsors workflow.
D: I think Nikolai has more to say about this, but it is kind of a long-winded process for someone who just wants to deal with their own instrumentation. So I was wondering: is this green check mark, this approval, really needed? Or is it sufficient to just get people to acknowledge and say that the PR is okay, and then the rest of us merge in the following PR?
D: I think for the code owners for the instrumentations, it was our process that we made up, right? Okay, sorry, yeah, okay.
A: Well, yeah, I'm okay with relaxing the requirements if that's more convenient.
D: I guess, Nikolai, was it kind of a big blocker for you, to not want to do that?
G: I guess there is some sense to requiring an initiator to become a member of the community, because you do want to be sponsored, you do want to be approved, you do want the community to be made of people who actually contributed, right? But I will say that's not necessarily connected. I have created an instrumentation; it's something that I am familiar with, code that I know, so I wouldn't necessarily need that, nor would any other member for that matter.
G: Also, maybe we should update the instructions to be more specific: that you need to open an issue saying you want to be added as a member.
D: Correct, yeah. So if we decide to keep doing this, we should add a link to the community membership doc that I linked to in our contributing guide. So it seems like the process is okay, then; it's just more like…
D: People should be able to figure it out themselves, instead of us pinging them all the time. So yeah, pretty straightforward then. If that makes sense to everyone, I'll probably just add an excerpt or something in CONTRIBUTING.md in our instrumentation repo, or the core repo, to link to community. So if that's fine, we'll just keep the same process.
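For context, the auto-assignment described above is usually driven by a small ownership file in the repo. The sketch below is a hypothetical entry only; the file name, path, and username are illustrative, not the repo's actual configuration:

```yaml
# Hypothetical component-ownership entry: issues and PRs touching this
# instrumentation's directory get routed to its listed owner automatically.
components:
  instrumentation/opentelemetry-instrumentation-example:
    - example-owner
```

The point of the membership requirement discussed here is that GitHub only counts review approvals toward merge requirements for users with the appropriate org membership, which is why the listed owner still needs to go through the sponsorship process.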
D: Nice. Moving right along: adventures in error handling, of course. Diego, do you want to take point on this?
C: Well, yeah, let me share my screen.
A: Real quick: remember the topic that we discussed several meetings ago regarding error handling? Okay, so I implemented a possible solution in this PR.
A: I had this PR open, and this PR only added the base safety class and the safety decorator, but it was not actually using them anywhere. So I am expanding this PR by trying to use these mechanisms in our code, and while I'm doing so, I am learning new things about error handling, what limitations it has, et cetera. So, to give you a quick recap of what I'm doing here.
A: I created a script, this one, that will find every public class and public function that we have in our API, so that I know what I need to decorate or subclass from base safety. This, by the way, is the list of stuff that we have.
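A script like the one described can be sketched with the standard library's `inspect` module. This is illustrative, not the actual script from the PR; `json` is used here only as a demo target:

```python
import inspect
import json  # stdlib module used purely as a demo target


def public_members(module):
    """Return (name, kind) pairs for public classes and functions in a module."""
    members = []
    for name, obj in inspect.getmembers(module):
        if name.startswith("_"):
            continue  # skip private and dunder names
        if inspect.isclass(obj):
            members.append((name, "class"))
        elif inspect.isfunction(obj):
            members.append((name, "function"))
    return members


print(public_members(json))
```

A real version would walk every module in the API package (e.g. with `pkgutil.walk_packages`) and aggregate the results into the to-do list mentioned above.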
A: I started with this class, BoundedAttributes, and I'm pretty much doing this: adding this as a parent class, changing the init to this private method, defining the stuff that's necessary for the safety mechanism to work, and decorating all of its methods with default values that I think should be right.
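The decorate-with-a-default pattern being described might look roughly like this. This is a minimal sketch; `safe` and the class shown are illustrative stand-ins, not the actual PR's API:

```python
import functools
import logging

logger = logging.getLogger(__name__)


def safe(default=None):
    """Wrap a method so any exception is logged and `default` is returned."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:  # never let telemetry crash the host app
                logger.exception("Suppressed error in %s", func.__name__)
                return default
        return wrapper
    return decorator


class BoundedAttributes(dict):
    @safe(default=None)
    def set(self, key, value):
        if not isinstance(key, str):
            raise TypeError("attribute keys must be strings")
        self[key] = value


attrs = BoundedAttributes()
attrs.set("service.name", "demo")  # works normally
attrs.set(42, "oops")              # would raise, but is suppressed
print(attrs)  # -> {'service.name': 'demo'}
```

Choosing the right `default` per method is the hard part discussed below: the returned value has to keep behaving like the real object further down the pipeline.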
A: I fixed all the API tests for this class, and now I'm doing the same for the SDK class. This process will be quite long, I imagine, because it'll require me to decorate every class and every function and then make sure that every test passes. A couple of things: since I'm working on metrics, I intend to do this in parallel, because I don't think I can just focus on getting this done; it would take too long to stop the metrics effort altogether.
A: I had a conversation with Aaron, and we're going to try to move in the direction of doing parallel work in metrics that does not require this feature yet. That's in practical terms.
A: Now, there are a couple of things that I would like your opinion on. This PR can just grow bigger and bigger as I decorate more classes.
A: Sorry, as I add base safety as a parent class and as I decorate more methods. At the end we would have everything covered, and we could have a very big PR for other people to review; that's one possibility. Another possibility is to follow the same approach that we've been using with metrics and logs: have a branch where we add stuff. To be honest, I prefer the first approach. It sucks to review a big PR, but having long-standing branches is also quite troublesome, because you need to update them from main very frequently, and that requires a PR and so on. So what do you guys prefer: a long-standing branch, or a big PR at the end?
K: You want to maybe try to get what you have for now in, the basic safety-related code, then maybe update one or two classes to use it and get that in, and then create issues for each package to use safety? Maybe we can all contribute, and we can maybe find new contributors to do it per package.
A: Okay, yes, that's an option. The only slight objection I have is that I've noticed I'm learning new things about this error handling mechanism as I decorate these classes.
A: For example, I just realized that the expectation that no single exception will ever be raised from OpenTelemetry code, that it will never crash an application, is, I think, not possible to accomplish. Because as we work on this safety mechanism, we also need to work on the default values that will be returned, and those default values will need to follow the same path that our current objects have been following. For example, if we create a no-op span, that no-op span should be able to go from tracer to exporter and work.
A: That's what the specification wants us to do, but it is not possible to guarantee that no application code will ever break, because some application code may depend on, for example, the value of the trace ID of some span, and we may decide to use a zero value for the trace ID of a no-op span. Or we have not yet defined whether propagation will be supported, and many other things.
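The zero-trace-ID scenario can be made concrete with a small sketch. The class names here are illustrative, not the actual SDK's: the no-op span satisfies the span interface, but application code that assumes a non-zero trace ID can still misbehave.

```python
class NoOpSpanContext:
    """Stand-in span context: the all-zero trace id is defined as invalid."""
    trace_id = 0x0
    span_id = 0x0
    is_valid = False


class NoOpSpan:
    """Satisfies the span-like interface but records nothing."""

    def get_span_context(self):
        return NoOpSpanContext()

    def set_attribute(self, key, value):
        pass  # deliberately does nothing

    def end(self):
        pass


span = NoOpSpan()
# Application code that reads the trace id still runs, but gets all zeros:
print(format(span.get_span_context().trace_id, "032x"))  # -> 32 zeros
```

If the application then uses that value as, say, a log-correlation key, nothing crashes, yet its behavior changes, which is exactly the limitation being raised here.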
So it's the same as when we had the discussion about backwards compatibility. I don't know if you remember: we were trying to define what backwards compatibility means, and I tried to present backwards compatibility just as a guarantee that the API won't change, in the sense that a public element won't disappear or the signature of methods won't change. But other folks consider that a behavioral change in our code should also be considered a backwards incompatibility, because it also may crash an application.
A: It is similar to the situation we're having now: if we handle an error and we create a different span, a no-op span, even when that span fulfills the API, its behavior may be different enough to cause the application code to crash. Right now the specification is very broad; it just says we should not cause an error if used incorrectly.
A: I do believe a lot more specification is needed there, to define precisely what is expected from the implementations on this front, because I have been noticing more and more limitations of this approach as I work on base safety. So, back to your suggestion: I'm okay with it. I am just a little bit afraid that we may merge this now, after having tried it on only a few classes, and when we try it on some other classes, we will discover things that may change our minds.
A: Regarding this approach: of course, the safest thing would be to test it on everything, but that is also problematic. I am willing to follow an intermediate approach here: maybe not two classes, but several more, maybe five or eight, to give us more opportunity to catch these issues before we merge this in. So it could be a PR that's bigger than average, but not as big as trying to apply this functionality across our whole API.
K: What do you say? I think, yeah, that makes sense: if you can identify some classes, some API surface that you think will be most vulnerable to this change, maybe we can do that. Have a few, maybe a dozen classes, or whatever you think makes sense. All right. Also, I guess these issues will only be triggered in cases where people's code is raising exceptions today.
K: So even though technically it's a breaking change, if someone's relying on an exception in a test case or something, in practice what we will be replacing is one exception with possibly another exception in another place in the code. So in practice, hopefully, we won't be breaking much.
A: Unexpectedly, well, yes. I think that we are either breaking or not, and so far I think we have been trying to avoid that.
A: I think it would be convenient for us to reconsider our definition of what a breaking change is, and acknowledge that we need to limit it a lot more, to just something that can be well defined and hopefully scripted, so that we can have a tool that runs an algorithm and tells us: okay, this is a breaking change or not. I'd be perfectly fine with bounding ourselves to consider a breaking change to be just something in the API, and lowering our promise to our users.
A: The promise that no bad thing will happen if they use a new version of OpenTelemetry, because I think we are just unable to do anything else. But that's a separate topic. So, the rest of the people here: would you agree with this approach of me trying this on several more classes, creating a slightly big PR, and then trying to get it merged?
A: Yeah, that's a good question. I will intentionally try the riskier classes first, I mean the most important classes: tracer provider, tracer, span, all the big ones. I actually started on a little class, BoundedAttributes, which is actually not part of the API.
F: Yeah, I don't know if I'm even convinced that some of these are still needed, like BoundedAttributes, for instance. This is mostly used internally. I think we do expose it externally, and maybe a few exporters use it; I think I use it in the Google Cloud exporter, for instance. But for the most part, the main instrumentation API is the thing that needs this, and BoundedAttributes is kind of outside of that.
F: The same thing probably goes for really simple data classes, like trace context and stuff like that. They definitely shouldn't raise exceptions, and I think we have that in a few spots, which isn't right. But yeah, the main instrumentation stuff: tracer, meter, span, things like that.
A: Yeah, actually, I felt the same way when I started applying these classes and decorators. I felt the same way when I realized: okay, what about BoundedAttributes? It's supposed to be a part of our API, and no part of our API should be unsafe, so in theory we have to make this safe. But at the same time, I feel like, well, this is something that's not even defined in the spec.
A: Probably I just did it because I'd rather fail on the safe side and err toward being more spec compliant.
A: I guess that if we find a class that we definitely can't make safe, we should raise that concern to the spec, to make sure that we are not being asked to do something that we actually can't implement, right? But yeah, I agree with you in that sense; I had the same feeling. So, where do we need to make…
F: …things safe, yeah. I mean, I think we've discussed this a few times, and I'm not sure if we're largely in agreement on the approach and how far we should go. Like, I remember we were talking about span, or start_span: it takes a name positional argument, and if we omit it, should it be considered an error? Do we want to allow that one to raise? Do we still need to reach consensus on that? What do other people think?
A: My personal opinion is that, yes, we should be safe against those errors. I think that because the specification doesn't give us any room not to do it, and we are also able to do it, in the sense that this implementation is able to do so. That being said, I'm not that…
A: I mean, my position is not that hard, in the sense that even if we don't use a mechanism that protects against that kind of exception, one raised when a bad argument is passed, but we do have some other mechanism that protects the inside of the code, I would consider that to be good progress as well.
A: So even though I think I have been a bit insistent on us protecting against that error, to give you a clearer message: we may ultimately decide not to do that, and I also think the specification is not that clear in that sense, which is something that may end up happening, right? We decide not to protect against bad arguments, but we introduce some other mechanisms that provide protection in some other part.
F: Yeah, agreed. So, I guess: what do we need to take this out of draft, so people can start reviewing?
A: Okay, good question. I'm keeping this in draft right now because lots of test cases are failing; I need to figure out what errors are happening. I was planning on moving this out of draft when all the checks are green.
A: When that happens, I will be introducing the code as it is right now, with this mechanism. If we reach an agreement beforehand on whether we should protect against bad arguments or not, I think that can happen in parallel. If we make the decision in the following days, for example, I can just be informed and change the code accordingly. I don't think it will be that impactful on my side; on the contrary, it should make things much simpler.
A: If we don't protect against that kind of error, it may save us from needing this base safety class altogether. But anyway, that's why I'm keeping it in draft. Once the checks are green, I intend to put this up for review.
D: Yeah, I think that work can be done in parallel. I think the result of the discussion we had with Ted two weeks ago was that we need to come up with a list, at least for the dynamic languages, of what we want to cover, right? But this PR can be done regardless; we need it anyway for the no-op functionality.
D: Don't let that block you; we can work on that in parallel. And to answer your question about the whole large-PR thing: from precedent, we really hate feature branches, or any other long-lived branch arrangement, so yeah, I much prefer this method over the other one.
A: Okay, all right. Well, thanks for your input. I'll be working on this in parallel with metrics. I've filed myself an issue for the SDK's first components, the meter and meter provider, and I'll be trying to work on this in parallel.
J: There you go. I wanted to understand what the performance implications of doing this change are. Would you consider doing some sort of testing there?
A: It's impossible not to introduce a performance hit, even if it's very small, because we will always need to add another frame to the stack, right? Because we need to wrap this in a try/except at some point.
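As a rough illustration of the overhead in question, one might measure the no-exception cost of a try/except wrapper like this. This is only a micro-benchmark sketch; absolute numbers vary by machine, and on Python 3.11+ the non-raising path of try/except is close to free:

```python
import timeit


def plain(x):
    return x + 1


def wrapped(x):
    # same work, wrapped the way a safety decorator would wrap it
    try:
        return x + 1
    except Exception:
        return None


N = 200_000
t_plain = timeit.timeit("plain(1)", globals=globals(), number=N)
t_wrapped = timeit.timeit("wrapped(1)", globals=globals(), number=N)
print(f"plain: {t_plain:.4f}s  wrapped: {t_wrapped:.4f}s")
```

A decorator adds an extra function call on top of the try/except, which is usually the bigger cost in CPython, so benchmarking the decorated form as actually shipped is what matters.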
D: I think the performance implications are a bit more important for metrics. I was talking to Aaron about this too; it'd be good to have some benchmarks for this, specifically for metrics, but I don't think that needs to be included in your first draft, at least.
A: Sure. The other thing is that, I mean, it has…
D: I'm just basing this off of what Josh said, and actually the other Josh as well.
D: I think both Joshes, MacDonald and Suereth, and oh, Riley too, brought up how Python might have some performance problems or something. I wasn't in that original discussion, but we've never really had to worry about performance before, whereas metrics is very, very focused on performance.
D: That's a separate topic that I don't want to take up the time for here, but yeah.
D: Cool, moving right along. So we did a release yesterday. It actually took quite a bit, because there's a change that removed cloning the repository from the contrib builds, and in doing so, we actually don't pass in the core repo SHA anymore. So we actually have to add the core repo SHA to the core repo build, which is hilarious, but I got things working. That's just another testament to how complicated and convoluted our builds are now.
D: It just keeps getting worse. So if you run into any problems, don't be surprised or feel like it's your fault: we have a complicated product, so don't feel too badly about that.
F: Cool. I think we were going to talk about this a little bit; I don't know if we have a lot of time for it, but I remembered, Nathaniel, you brought up that there was a change made in the contrib builds. Well, first of all, a PR was just merged that gets rid of most of the contrib builds, which should speed things up a lot, and then we were discussing whether we should remove them altogether.
F: Then the other issue is that right now we're using git URLs with pip, which re-downloads the repository for each build, for each dependency. So it's in a pretty bad state right now, I would say, and I don't know…
K: Contrib packages can just depend on the published PyPI packages instead of pulling stuff in from core, unless they need to depend on a new feature that's in core. In that case, for the specific package that depends on the new feature, the developer contributing that change can specifically and temporarily make it use git until the next release. That could speed things up quite a bit, I think.
K: No, no. So in contrib, for example, we have an SDK extension package, and I think it depends on the SDK and API. Today, when we need to install the SDK and API, we clone the core repo and then install from the local file system. Instead of doing any of that, just declare the dependency in setup.cfg.
K: Just do pip install, and pip will automatically pull in the API and SDK from PyPI like it would any other dependency, right? So do we even need to clone core in contrib, unless some package in the current development cycle depends on a new feature that's not been published to PyPI yet? In that case, we fall back to the current mechanism, but by default we always use PyPI.
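The default-to-PyPI idea sketched above would look something like this in a contrib package's metadata. This is a hypothetical fragment; the version pins are illustrative:

```ini
# Hypothetical setup.cfg fragment for a contrib package: depend on the
# published opentelemetry-api/sdk releases from PyPI instead of a git clone.
[options]
install_requires =
    opentelemetry-api ~= 1.6
    opentelemetry-sdk ~= 1.6
```

During development against an unreleased core feature, a single package could temporarily be installed from git instead, using pip's direct-reference syntax (`name @ git+https://…`), and switched back to the PyPI pin after the next release.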
I: So every time someone wants to release the API and SDK packages, they would have to go to the backup plan of using pip with GitHub; just something to keep in mind. Either that, or we get comfortable with releasing the opentelemetry-api and opentelemetry-sdk packages first, and not blocking that release on the contrib builds. I don't know; it's a bit of a chicken-and-egg problem.
K: Another thing I was thinking, though I'm not comfortable proposing it just yet: there are a number of PyPI-compatible servers, so we could run a local PyPI caching server in CI and route all pip installs through it, caching every single package. Then we could freely reference git URLs and they wouldn't be cloned again and again, because the server would cache them. But the downside is that we'd introduce yet another component and probably complicate local development further.
I: Yeah, I seem to remember Mario pointing out that the local dev builds are also really slow now with using GitHub, so I guess that wouldn't solve that problem.
K: I just spent like two hours with Poetry in my Splunk distro, trying to upgrade to OTel 1.6, and it just cannot for some reason, anyway. So, going back to the local PyPI server: I think that would probably work, because in that case we can reference the dev packages.
K: So if the current API or SDK package in core is, let's say, 1.7.dev and it's not published to the real PyPI, the caching server would serve our development packages, and we wouldn't even notice where it's installing from. But yeah, I'm not proposing it right now; I need to think more about it, but technically it shouldn't be a problem.
D: Hey, sorry, speaking of dev versions: did we decide before that we're not using dev versions anymore, and we're just keeping things at the latest release version until the next release?
K: Yes, I think it was just a mistake on my part. But personally speaking, I think keeping the current version in git is probably more convenient, because you can easily install just one package from git and everything else from PyPI, without it forcing you to install everything from git. That's helpful when you're developing locally. But yeah, not a big deal; we can go back to that versioning.
D: Do we have issues that are like, "just make our CI better"? I'm pretty sure we have some sprinkled around. So I think it'd be good if someone who has the time could consolidate all of them, identify the bottlenecks we're hitting, and then propose a solution. That'd be very helpful.
D: Was this even a topic? Okay.
D: Cool. Does anyone else have any other topics they want to talk about before we go into PRs? Looks like we only have one.
M: Yeah, I just wanted to talk about one part that I'm worried might be a little controversial on this PR. What this PR does is add instrumentation for AWS Lambda functions.
M: Oh, I can't share my screen right now. Sorry.
D: Okay, let me do it then.
M: Thanks, I really appreciate it. Thanks for helping me get the core one merged. Here's the PR for this one; it's in draft for something I'll mention very quickly. The only controversial thing about this PR is that it adds instrumentation for Lambda functions on AWS. So if customers end up using AWS Lambda and they have this instrumentation package, we'll be able to instrument the Lambda context and Lambda event, add instrumentation for that event, and have it end up as a trace, obviously exported like anything else.
M: It lets you call a script before your Lambda handler gets called, and what that script does is basically just call the opentelemetry-instrument command, set up the environment variables that we believe customers will need by default, and allow traces to be generated automatically, without you having to import the instrumentor and instrument things yourself. This is something we already have in other languages, and it actually exists in the opentelemetry-lambda repo, but we have a to-do to upstream it here to contrib.
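The wrap-the-handler idea can be sketched without any real dependencies. In this illustrative sketch the "tracer" is just a list recording span-like dicts, standing in for a real OpenTelemetry tracer, and all names are hypothetical:

```python
import functools

RECORDED_SPANS = []  # stand-in for a real tracer/exporter


def instrument_handler(handler):
    """Wrap a Lambda-style handler so telemetry is set up before it runs."""
    @functools.wraps(handler)
    def wrapper(event, context):
        # "start a span" carrying the handler name and the triggering event
        RECORDED_SPANS.append({"name": handler.__name__, "event": event})
        try:
            return handler(event, context)
        finally:
            RECORDED_SPANS[-1]["ended"] = True  # "end the span"
    return wrapper


@instrument_handler
def my_handler(event, context):
    return {"statusCode": 200}


print(my_handler({"path": "/"}, None))  # -> {'statusCode': 200}
```

In the real setup being described, the wrapping happens outside the user's code: a bootstrap script rewrites the handler entry point and runs opentelemetry-instrument first, so the user never imports anything.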
K: I think it sounds great. The script is interesting; we could probably think of generalizing it somehow, but maybe it's too specific. Celery, at least, is another use case where the instrumentation should ideally have knowledge about how the instrumentation is applied.
K: Because right now, if you look at the instrument command, it has a special piece of logic for Celery. Basically, what it does is change where the instrumentation is executed: instead of doing it at the usual place, it sets up a Celery hook, so it runs for each Celery worker.
K: We instrument all the packages inside each process. It's not exactly like this, but it sounds similar: an instrumentation package should have some mechanism to influence how and where the instrument command is run. So yeah, I think this is interesting.
M: Thanks, yeah, that's really interesting. I'll look at what the special cases are for Celery and see if we can do something like that for Lambda. I hope the script will get less complicated after I do these changes; that will let me get it out of draft, but yeah.
M: I think it's a neat feature that my colleague came up with, so we just want to share it here in the open source repo for people to look at. I've actually added tests that import the script and simulate what Lambda would do itself, and added comments everywhere. So yeah, I will pull it out of draft when it's ready. Thanks, guys, for your help.
D: Hey, Nathaniel, on the side: I've never seen this opentelemetry-lambda repo before. Is this owned by you guys?
M: Yeah, kind of. It's definitely owned by OpenTelemetry, but we asked CNCF, I guess OpenTelemetry, for a repo at one point to store all these functions and instrumentation, and they gave it to us. I think Alex is…
D: Trade-off, I guess. Cool. Yeah, Nathaniel, I think that makes sense. Is that what you wanted to do to take this out of draft, so we can actually look at it?
M: What I need to do to get it out of draft: right now, in this whole process I mentioned, where before we call your Lambda code it sets up OpenTelemetry for you, we actually copied and pasted the auto-instrumentation code and set it up as two different scripts, because we were having some issue with Python paths not finding it at the right time. We think we have a solution for that, to make it only one script.
D: Okay, cool, that's good. I think that's pretty much all the discussion topics we have today. Something I also wanted to bring up, and I think, Nathaniel, you're already aware of this, is that yesterday's release also accidentally released the 1.0 and 2.0 versions of the SDK extension and the AWS propagator.
D: I think for now, at least until that's addressed, for any releases we just have to be careful that anything that's 1.0 and greater is actually releasable, and is in the state the code owners want it to be in. But as long as we're mindful, I think we're good.
D: Cool. With one minute left, does anyone have any other topics they want to address?