From YouTube: 2021-09-30 meeting
A
Many participants we have today. Okay, yes, so today we have a special guest, Ted Young, who will discuss error handling with us. I say we give it maybe a couple of minutes for folks to join, but our lead is here, we're here. What's up, everybody?
A
Yes, nice. So I think you shall open this.
D
Sure thing, yeah. Hi everybody. I haven't been to the Python SIG in forever; it conflicts with another meeting, but I kind of miss it, yeah. Diego asked me to come by because it sounded like there was some debate about error handling and throwing exceptions from the API, and I do think that's a very important area where we're trying to get some uniformity in the spec.
D
But it's probably underspecified in some places, and I don't think the spec writes a lot of its reasoning down, like why things are the way they are in the spec. So Diego asked me to come by just to discuss how we've seen error handling across languages, what the intentions were in the spec, and how to deal with it. So I think, maybe, first of all: do you have any questions?
E
Yes, in fact. Well, thanks, thank you for joining. I have a question. Well, I opened up this issue, and...
A
People seem to have some doubts regarding whether we should raise an exception or not if, for example, a bad parameter is passed to a function that starts a span. I think that you added a comment here, but I don't know if you would like to talk about your opinion on this case.
D
Yeah, sure, I can just go through it. Maybe that's a good thing, to quickly go through why part of the spec includes not returning errors or throwing exceptions from the instrumentation API. Part of it is that we want to be like most logging libraries or, you know, observability code.
D
We don't want users to feel like they have to put guard statements around every single API call that they make. So the general rule of thumb there is that in instrumentation code, exceptions should be caught internally, and that information should be routed to some kind of centralized error handling or diagnostic tool, so that developers are able to get that information in some way and can see:
D
"Oh, I'm throwing exceptions here," and be able to track the problem down. But in production, the caller of the instrumentation generally can't catch the exception and then do something useful with it. So it's better to handle it internally.
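The pattern Ted describes, catching exceptions inside the instrumentation call and routing them to a centralized diagnostics channel, can be sketched roughly as follows. This is a minimal illustration, not the actual OpenTelemetry Python implementation; `set_attribute`, `FakeSpan`, and the logger name are all made up for the example.

```python
import logging

# Hypothetical centralized diagnostics channel: API calls never let
# exceptions escape to the caller; failures are reported here instead.
_diagnostics = logging.getLogger("otel.diagnostics")

def report_internal_error(exc: Exception) -> None:
    # Route the failure somewhere a developer can see it.
    _diagnostics.error("instrumentation error: %s", exc)

def set_attribute(span, key, value):
    """Sketch of a guarded API call: never raises to the caller."""
    try:
        span.attributes[key] = value
    except Exception as exc:  # swallow, report, keep the app running
        report_internal_error(exc)

class FakeSpan:
    """Stand-in for a real span object."""
    def __init__(self):
        self.attributes = {}

span = FakeSpan()
set_attribute(span, "http.method", "GET")  # normal call works
set_attribute(None, "http.method", "GET")  # bad input: reported, not raised
print(span.attributes)
```

The key property is that the second, incorrect call does not crash the caller; the error is only visible through the diagnostics channel.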
D
Now, that said, I think there's some real gray area. I don't think the spec does a good job right now of saying, for API calls that return objects, what the returned object should actually be if what the user did was hand it some malformed or incorrect configuration.
F
Hey Ted! Hey, hey!
F
Okay, sure. So my question is: that kind of philosophy of not, you know, crashing the user's application, right, does that apply in general, or only for things that would return an object or something?
D
I would say, in general, with the instrumentation API it shouldn't crash when somebody calls it. So someone, you know, calls set_attribute or log, or calls a metric; those are things that don't return any values, but they also shouldn't throw exceptions that the user has to guard against, right?
F
Okay, so if this applies to pretty much your entire API surface, saying "throw no exceptions" is a pretty catch-all kind of statement, and I understand that. For typed languages, you know, it's very strict in terms of, like, runtime, right? But it's very, kind of, almost unreasonable to have to guard against this kind of exception handling,
F
when, you know, the user uses our API incorrectly, right? And that's totally possible, right? So I think, at least for Python or untyped languages, it's important to define exactly what kinds of exceptions we will be handling, if that makes sense.
D
Yeah, I think what you're saying is, you know, the SDK shouldn't be blowing up or bubbling exceptions up that way, but there's just, like, basic...
D
"You used it wrong": things that in a typed language would be caught by the compiler, which says, "hey buddy, you just literally didn't pass the required parameter here." Should those situations, in dynamic languages, still result in an exception? And yeah, I think this is a place where the spec isn't totally clear.
D
I personally feel like those should actually be swallowed as well, because the thing about a typed language is you're using those tools to catch it at compile time, so that it's not possible for those exceptions to show up at runtime. To me, that's the difference: the thing we want to avoid is runtime exceptions. And the way these things can come up, for example, is...
D
Passing the wrong type of thing, like a user passes in an object where they should be passing a string or a number, and sometimes that can happen because they're passing in data that's being generated in some fashion. But I agree, it does feel like kind of a gray area there. Personally, though, I would recommend just saying: look, there needs to be a mechanism here for handling bad input, and the SDKs should just always use the same mechanism for everything, for simplicity's sake.
A
For example, in metrics, we check the name of the instrument that we pass, and an instrument name may have the right type, be a string, right, but it still may be wrong. So we still need to do something; I mean, we still need to make sure that we handle that exception even when the type matches. So I don't think it is unreasonable to expect us to handle those kinds of exceptions as well in Python, a dynamic language.
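Diego's point, that a value can have the right type and still be invalid, can be sketched with a simple name check. The regex below only approximates the spec's instrument-name rule and is an assumption for illustration, not the real SDK code.

```python
import re

# Rough approximation of an instrument-name rule: starts with a
# letter, then letters, digits, '_', '.', '-', at most 63 characters.
_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_.\-]{0,62}$")

def is_valid_instrument_name(name) -> bool:
    # A type check alone is not enough: the string must also be
    # well-formed, which is the gray area discussed above.
    return isinstance(name, str) and bool(_NAME_RE.match(name))

print(is_valid_instrument_name("http.server.duration"))  # well-formed
print(is_valid_instrument_name("9bad name!"))            # right type, bad value
print(is_valid_instrument_name(1234))                    # wrong type entirely
```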
D
Yeah, I will say, from practical experience,
D
having been in this corner of the industry for a while: a super common thing we see is that instrumentation is often called on odd or exceptional code paths, code paths that the application doesn't go down very often and that actually aren't tested, or aren't very well tested. And on one of those code paths, the user just, you know, incorrectly called the API, and now OpenTelemetry is blowing their stuff up in production, and then they get mad at us. And you could say, "well, you should test everything"; you're right, and they should.
D
But that's an example of where I've seen this actually show up in practice, where, you know, the observability system actually caused real material damage to, you know, a customer, or a user, I should say.
H
Yeah, well, I think that makes sense. I had similar concerns to what lyden had; more like, I wanted to understand where to draw the line. For example, there's TracerProvider.get_tracer, which accepts an instrumentation library name, and someone passes an integer instead of a string. I agree we shouldn't throw an exception there; we should maybe do a best effort at making it work.
H
I don't know if we need to draw a line somewhere, or if we need to catch it all. I don't even know if catching it all would even be possible in this case, because the AttributeError would probably be raised from outside the API boundary in this case; for example, sorry, the ValueError, or I don't know what the error is, yeah. Or, for example, there's some attribute on a span or something, and someone tries to call it as if it was a function.
G
Okay, that's also sort of where I am. I think, like this example here: if there's no name, that's different from name being None, versus it being the wrong type or something like that. If you go to the SIG notes as well... this might be maybe not the best argument, but do you mind going to the other tab here?
G
The doc; oh sorry, yeah. So this is just the built-in logging library, which I think is what we would be plugging into with the OTel logging. If you call logger.debug with no arguments, it's going to do basically the same thing: give you a TypeError, because you're missing the positional argument.
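The behavior G describes in the stdlib logging library can be checked directly: a call that is wrong at the Python level (a missing required argument) raises TypeError to the caller rather than being swallowed by the library.

```python
import logging

# The stdlib logging module's own behavior: Logger.debug requires a
# 'msg' argument, and omitting it is an ordinary Python TypeError.
logger = logging.getLogger(__name__)

try:
    logger.debug()  # missing the required 'msg' positional argument
except TypeError as exc:
    print(type(exc).__name__)  # prints: TypeError
```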
G
So I think, for me, that's where the line is. But I definitely agree we need to do something, because there's even, like, one place where we're raising an exception and it's not in the fail-fast path.
E
Sorry, just to...
A
Not only exceptions that happen inside of the function, but also bad arguments passed to it. I think the case of passing an integer instead of a string will produce the same error, will cause the same behavior, as not passing any arguments at all, because the same thing will happen: a no-op object will be returned. So I don't think we actually should see these kinds of exceptions as something different from the...
D
I personally think that's fair. Here's another example: what if someone just calls a made-up method, right? They just call span.foo. Should we silently swallow that and asynchronously produce an exception or error, or should that just blow up? Because that's just what Python is going to do by default, right? It's just going to...
D
Right. And then, if you're saying there are clear required arguments, should that, should that...
D
Right. Or is it instead more like: I would be tempted to say that if they called the API, like called a correct method, but then something about the input that they put into that method was invalid or missing, then there should just be one thing that OpenTelemetry does in that case.
A
Yeah, but there's a difference. If someone calls tracer.start_span with bad arguments, we know what to do, because we know what kind of object start_span would normally return. But if someone calls tracer.foo, I think it is okay to raise an exception, a normal AttributeError; just let the interpreter handle it, because we wouldn't know what kind of object that foo method would return. So there's absolutely nothing that we can do, yeah.
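The tracer.foo case can be shown in a couple of lines: Python already raises AttributeError for a call to a method that does not exist, and, as argued above, there is no sensible object the API could return instead. `FakeTracer` here is a stand-in for illustration, not the real OpenTelemetry Tracer.

```python
# A call to a made-up method raises AttributeError from the
# interpreter itself, before any library code runs; there is
# nothing the API could usefully return in its place.
class FakeTracer:
    def start_span(self, name):
        return object()

tracer = FakeTracer()
try:
    tracer.foo()  # made-up method: let the interpreter handle it
except AttributeError as exc:
    print(exc)
```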
D
That is, I think, the subtle... I think the example people are using, that's a good one, is: what if a required parameter is missing? So you call get_tracer and you don't pass the required parameter, versus passing a bad parameter. And personally, I would say, yeah, it should.

D
Maybe. I do think that things like get_tracer, I don't know, still seem to be a thing that someone could screw up real easily. So that would...
F
I have a question; might be a hot take, but is it possible we could just... I know that there are a lot of these usages of the instrumentation API where it's not, like, the hot path, right, and people don't really test this extensively.
F
But can we bank on the fact that this is maybe not the majority of the use cases, and just say "we'll solve this for you" or something? Because that's what we're doing for all of our other Python applications: we never guard against basic Python behavior, and we just solve this when customers have issues, and then we just point out, "hey, you're just doing this wrong," and then they solve it.
F
So yeah.
D
I do think this is an area where maybe the spec should be more clear. I mean, the main thing that I personally want to see in OpenTelemetry is just consistency, right? So this is the thing where I just kind of actually wonder: what are Ruby and JavaScript doing, right? And maybe we should get together and be like, hey, let's just pick how we handle the kinds of things that would get caught in a typed language.
D
I think maybe the rule of thumb you're looking for is: if we had types, these would be the things that would fail to compile, and for those things we should throw an exception, because they didn't really use OpenTelemetry wrong; they just did a silly thing.
D
The flip side is to say: just pass objects with default parameters all the time and always log an exception, and once you've got that basic mechanism in place, then you're good to go. But okay, so...
G
Because that's what the JS SIG is using, right? So it compiles down to JavaScript, and people can use it without using TypeScript, but for the most part, from looking at the code there (I've been working there a little bit), they're not guarding against this kind of thing in any way other than using TypeScript. We also have static analysis tools in Python; they're not quite as good, but I don't even think we could uniformly treat them all the same, to be honest.
D
Yeah, maybe that's the best rule of thumb to go with there: is it possible for the user to catch this stuff statically? If we can say to the user, "you can guard against these errors by using this super standard Python tool for checking stuff," like they're doing in JavaScript, where they're saying "if you use TypeScript, then you're fine," then maybe things that could be caught
D
that way, you could say, fall on the side of "we're just going to say you weren't using it correctly," because there's at least a solution for them if they're like, "I'm really scared; I really want to confirm that I'm not going to get a runtime exception." Maybe the answer is: the user needs to be able to positively confirm that there are no runtime exceptions that are going to be caused by their usage of OpenTelemetry.
A
Okay. So I think, so far, I've been the strongest advocate for very strict exception handling. At this moment, I think that even if we didn't end up handling bad parameters or stuff like that, and we handled exceptions that were raised inside of our code, that would still be great progress. I think that we as a SIG need to draw that line, as others have mentioned.
A
I wanted to bring up another topic, because it's also very important, and I found it a little bit confusing. That topic is something that was mentioned to me: the second point says that the API or SDK may fail fast and cause the application to fail on initialization. So I wanted you to give us some context on this, because I think this means that there are, like, two phases.
D
Yeah, yeah. The general best practice that we're trying to follow is: during, you know, production usage, when an application is in production, OpenTelemetry is not going to start causing problems with their production. But when a program is starting, so when you're creating your providers, basically... I'm pretty sure creating and setting up your SDK is pretty clearly delineated from using the instrumentation API during production. And when you're creating and setting up the SDK,
D
actually, we should synchronously be returning errors or throwing exceptions, rather than having it silently do nothing and then pipe that stuff to some asynchronous error reporter. For one reason: where do you even put that asynchronous error reporter if you have failed to create the SDK? Where does it even go? But also just normal development: during setup code,
D
people want to fail fast, and they want their setup code to be synchronous, and not have to do some weird asynchronous thing to catch that this stuff was set up. So that's why that clause was added in there: for setting up the SDK, just blow up if it was misconfigured. I don't know if that seems reasonable to people or not.
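The fail-fast rule for setup code can be sketched as follows: configuration runs synchronously at startup and raises immediately on bad input, in contrast to runtime instrumentation calls, which never raise. The `configure_exporter` function and `ConfigurationError` class are illustrative names, not the real SDK API.

```python
# Hypothetical setup-time check: setup code blows up right away,
# before the service enters production, instead of silently
# swallowing a misconfiguration.
class ConfigurationError(ValueError):
    pass

def configure_exporter(endpoint: str):
    if not endpoint.startswith(("http://", "https://")):
        # Fail fast: a bad deployment should not go through.
        raise ConfigurationError(f"bad exporter endpoint: {endpoint!r}")
    return {"endpoint": endpoint}

print(configure_exporter("http://localhost:4318"))
try:
    configure_exporter("not-a-url")
except ConfigurationError as exc:
    print("deployment stopped:", exc)
```

The contrast with the earlier runtime example is the whole point: the same project swallows errors on the hot path but raises them synchronously during setup.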
D
Yeah, I think the rule of thumb is also: you should not be touching the SDK outside of setup and teardown, generally speaking, right? For libraries and application code that's handling traffic, it's basically a smell if they're touching the SDK directly in those cases. But when you're setting up the SDK at the beginning, before the service has entered production: just in general, users of these kinds of observability tools don't want that service to enter production
D
if a component, including observability, is known to not be working. They want that deployment to not go through, rather than replace their fleet of services with a new fleet of services with observability turned off, because it just silently swallowed that stuff. So that's why that stuff should just blow up, and then they can do exception handling and all the normal stuff during setup.
A
Okay. There is one particular function that I wanted to bring to everybody's attention, and it is set_tracer_provider. set_tracer_provider is a function that is part of the API, and it's also a function that is part of setup code. So what's your opinion on this function being protected or not against exceptions?
D
Yeah, I think that's why number two says the API and SDK can throw during setup. You know, set_tracer_provider is in the API package, but it's actually part of the mechanism and implementation of setting the API up; it's not part of the instrumentation API that people are calling during,
D
you know, handling transit requests and transactions. It's just that you called it at the beginning. So yeah, I would say that thing should return an error or throw an exception as well.
D
No matter whether all of the asynchronous exception handling is something that's part of the SDK, which I think is where it needs to be, because you're going to have a lot of SDK-level stuff. For most of the instrumentation API, everything is pass-through to the SDK, so the SDK is the thing that deals with all of the error handling.
F
Yeah, so it seems like a lot of these cases require some pretty deep thought in terms of...
F
So, in terms of practicality, and whether we choose to do this or not: I think a lot of people agree that there is a fine line that we've got to draw, so we do have to do something. Well, what's the best way (I'm opening this up to the SIG) to do this properly, comprehensively, incrementally, and systematically, right? We could talk a lot about the theory of it, but how do we actually execute making these changes?
F
Not per se saying that we have to go through our entire API and list out what we do for each single method, but at least we can come up with some general rules, like what Ted was trying to do just now. I think that's the best way so far. What does everyone think, instead of...
A
Yeah, I agree; the "how" is important, right? You can discuss theory all day, but we need to be sure that we can implement this. Again, the prototype that I have is pretty much a class that can be used as a mixin, and a decorator that can be applied to functions.
A
So this prototype, if implemented, will require us to decorate all of our functions and to pass this class as a base class to all the classes that we actually want to protect. As far as I know, it is not a breaking change, so I think that's convenient.
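A decorator in the spirit of Diego's prototype might look roughly like this. This is a sketch of the idea only, not the actual PR: the decorator name, the fallback mechanism, and `get_tracer`'s behavior are all assumptions for illustration.

```python
import functools
import logging

_log = logging.getLogger("otel.errors")

def no_raise(fallback=None):
    """Hypothetical guard decorator: swallow exceptions raised by an
    API entry point, log them centrally, and return a fallback value
    (e.g. a no-op object) instead of crashing the caller."""
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                _log.error("%s failed: %s", func.__name__, exc)
                return fallback
        return inner
    return wrap

@no_raise(fallback="noop-tracer")
def get_tracer(name):
    # Illustrative body: validates its input and would normally raise.
    if not isinstance(name, str):
        raise TypeError("instrumentation name must be a string")
    return f"tracer:{name}"

print(get_tracer("my.lib"))  # tracer:my.lib
print(get_tracer(1234))      # swallowed and logged: noop-tracer
```

This also makes G's concern concrete: a test that expected the bad call to raise would now see the fallback value instead, which is the behavioral change being debated.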
G
But I think it could be a break. Somebody's written a test that depends on... they're testing that some bad path does something, and now they get a no-op instrument or something like that.
A
It'll be a behavioral change. I think that we can justify it by the fact that we are now implementing a fix, adding a missing feature in our implementation, so yeah, maybe it can be considered a non-breaking change, but I...
H
I think it's... yeah, I think it's justifiable. I don't think "breaking change" should cover scenarios like that. If it doesn't break your production app, I think we can consider it a non-breaking change.
D
Yeah, I don't know if returning a no-op is always the correct thing, I might just throw that out there, or the best thing to do. But I'm sorry, I didn't look at the scope of your PR, Diego, so I don't know.
A
No, it's all right; it's only a prototype so far, and I acknowledge that I have only given it some mental testing; there can surely be other scenarios that I have not yet considered. So I'm not trying to give you a full perspective on this. Okay.
F
Yeah, okay. So, as always, I'm pointing out we're already 40 minutes into the SIG, and I think there are some other topics as well. In terms of next steps: Diego already has a prototype out to try to solve this problem. I believe what we should do for that is...
F
Come up with that list and see, you know, whether Diego's PR could solve the majority, or most, or, you know, some of those points, in terms of making that comprehensive list. Diego, you've been kind of working on this already; do you think you want to take point on this?
A
I know, I know. A big part of the motivation is that when I opened the OpenTelemetry metrics API PR, I got a lot of significant feedback, especially from Joshua Seward from Google, that constantly mentioned stuff related to this. He constantly mentioned no-ops, and so I realized, okay, we need this. Yeah.
F
Great, okay, cool, awesome, yeah! So let's just do that and move on to the next topic, but...
F
Keep
an
eye
out,
you
know
for
a
discussion,
topic
or
an
issue
and
then
we'll
continue
on
there
sure.
D
Oh, I was gonna say, yeah, just one quick request; and thank you all, y'all are awesome, first of all. But one quick request: it would be great to, one, get some of this clarified in the spec, because clearly the spec should have some things added to it, like, you know, "set_tracer_provider should throw an error," stuff like that. And two, if you do come up with your principles, writing those down in, you know, the Python documentation would be...
F
Ted, yeah. Also, if there's a mechanism by which we could... so, if this is going to wind up in the spec, then we don't have to really coordinate with other typed and non-typed languages.
F
But if not, I think this might get lost, and nobody will actually know about this if they're not in the SIG. So I guess just keep an eye out for that, and maybe spread the word to other SIGs once we get a prototype or something, so we could be consistent, right, which is what you care about. So.
F
Yes, okay. I believe it is the docs issue.
F
Right, all right. So I think... oh wait, was it you who put this down?
H
Yeah, yeah, I added it; it's very small, it'll just take a couple of minutes. So right now we have two websites for documentation. One is the opentelemetry-python docs, which is the obvious one people will discover, but then all the instrumentations and all the connected packages are documented on a completely different website, which is the opentelemetry-python-contrib readthedocs site, yeah. So it's hard to discover the contrib website, and it can be confusing for people when they're trying to find the documentation for some package.
H
So ideally, it would be nice if we had one website that has documentation for all packages, and not divide that between core and contrib, because the core/contrib split is more for our benefit; for end users, it's just confusing. So that was it. If you have any opinions or thoughts, add them to the issue; I don't think we need to discuss it live unless it gets some controversial comments. Maybe we can discuss it next week.
H
Yeah, okay, but, I guess, if the maintainers of that project accept the entire documentation, I mean, we could do that, I guess. Okay.
F
Okay, so I guess people can just think about that, unless you guys have any, you know, conflicting arguments or ideas. That also leads to the other issue that was brought up last week, in terms of migrating the website docs back to the website repo. We kind of just said we would, you know, wait and see what other repos or other SIGs are doing, but they have been kind of pointing at us and asking us what our decision is.
F
So I think a bunch of SIGs are doing this via submodule, which is option number two; option number one was just migrating the docs into the repo. For my sake: I personally don't know what submoduling means or how that works. Does anybody have any more context on what the repercussions of that are, and what the maintenance consequences for that would be?
F
I
can
lick
the
dish
here.
Aaron
do
you
want
to
share.
F
Yeah, so this was the original issue, so they're asking for our opinions on what we're going to be doing. And this is the original issue on the opentelemetry.io page, so they're listing all the SIG decisions, right, and between the two it's like: number one, these docs will be pulled into opentelemetry.io via submodule; otherwise, the docs will be migrated back to this.
F
I see, okay, yeah. Okay, I'll probably have to read more on that, but I don't really want to take a lot of time on this.
H
I guess just one note: if they are going to submodule our repo inside the website repo, it shouldn't be any additional maintenance burden for us, right? If that's how it's going to work, then we just maintain our docs, and whenever they want to issue a new release, they just sync the submodule and release it.
F
Right, okay. Anyway, if you guys have opinions on this, please comment on this issue, because I think they're kind of pushing us for a decision soon, yeah.
F
Yeah, no worries. It's obviously also nice to know why anyone would want to do one versus the other, so yeah, cool.
H
I want to present it; I don't know if we have enough time, but maybe I can just go over it real fast and give everyone an idea. So this came up a few weeks ago when we were discussing the getting-started experience, and especially with Python it can be a bit confusing: there's a bunch of packages you have to install, and you need to run some configuration code, which is not always very obvious.
H
So the things I was trying to solve with this proposal are: one, make it very obvious and simple how to get started with OTel Python; make it very easy to configure the SDK for tracing, though that should extend to metrics and logs later; and make it very simple to instrument all installed packages.
H
For example, we have this distro thing, coupled with the opentelemetry-instrumentation package, that Diego tried to split, and for the most part did, but we still have some other things coupled between core and contrib. Like, the core opentelemetry-instrumentation package has a bootstrap command which needs a list of all the instrumentation packages we have that live in contrib, and the current state of the world is that when you change something in contrib, like changing a dependency version of an instrumentation package or adding a new package, you have to run this bootstrap command, manually copy the output over to core, and then create a pull request in core. So the developer experience is quite bad, but even if we automate it somehow, there's still this dependency, which is the wrong way around: core depending on contrib. So another, opportunistic, thing I wanted to solve was that problem.
H
Could you go back to the top, please? So I split this into two sections. The first one is the getting-started experience when someone is instrumenting with code, and the second one is when someone instruments and sets everything up without writing any code, with CLI arguments. So for code, could you scroll down a bit? Yeah, so for code I propose we add a configure_tracing method to the SDK, which is used to configure the SDK, and if you don't pass any arguments to it, there is...
H
Yeah, we can make that work; that gave us the green light. So yeah, it uses the default recommended values, which is the batch span processor, basically whatever we are documenting today, but we can go into those details when we start implementing. And then it allows you to override some specific components.
H
So instrument will basically hook into the opentelemetry-instrumentation package and call those internal things that we do in the... sorry. So the only way to instrument everything automatically right now is by using the opentelemetry-instrument command. But if you want to instrument with code, you have to configure a tracer provider, set up a tracing pipeline, and then import every single instrumentation manually and call the instrument method on every single instrumentation, right? So this provides a counterpart to the opentelemetry-instrument command. With the instrument method, it basically, behind the scenes, does the same thing: it uses entry points, loads up all the packages, and calls instrument on them. So with just these two lines, you get the same experience as the opentelemetry-instrument command, but with code, if you want to do it with code. So this gives us parity
H
with that experience. Any questions so far before I move ahead?
H
Okay, so I'll try not to go into detail, because we don't have a lot of time, but please go over it and look at the factories-versus-instances design decision, and we can maybe discuss that; if it doesn't sound good, we can always change it.
H
If you want, I can go over it really fast. So, factories: the concept is, a factory is a callable, and if it's a provider factory, you call it and it returns a provider instance. So, keeping that in mind, the TracerProvider class itself is a valid factory, but so is a partially applied class, or a lambda function, or any other function.
H
So
this
gives
users
a
lot
of
flexibility
into
how
they
instantiate
these
objects
and
if
they
want
to
use,
let's
say
a
third-party
object
that
whose
initializing
parameters
or
the
like
the
contract
for
init
method
are
different
and
would
fail
they
can
use,
they
can
wrap
it
up
in
a
function
and
then
you
know,
adapt
the
arguments
so
this.
So
this
gives
us
a
lot
of
flexibility
in
in
terms
of
like
people
can
use
anything
whatever
they
want
yeah.
So
moving
ahead
now
to
do
the
do
the
feel
I
bought
so
now.
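The factories-versus-instances idea can be shown in a few lines: a provider factory is any zero-argument callable that returns a provider, so a class, a `functools.partial`, and an adapter lambda all qualify. `DemoProvider` and `build` are stand-ins for illustration, not the proposal's actual names.

```python
from functools import partial

class DemoProvider:
    """Stand-in for a real tracer provider."""
    def __init__(self, sampler="always_on"):
        self.sampler = sampler

def build(factory):
    # The configuration layer just calls the factory; it never
    # needs to know how the provider gets constructed.
    return factory()

p1 = build(DemoProvider)                               # the class itself
p2 = build(partial(DemoProvider, sampler="ratio"))     # partially applied
p3 = build(lambda: DemoProvider(sampler="always_off")) # adapter lambda
print(p1.sampler, p2.sampler, p3.sampler)
```

The lambda form is what lets users adapt a third-party class whose constructor signature does not match, as described above.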
H
The
other
experience
is
getting
started
with
cli,
which
mainly
involves
two
commands.
Today,
one
is
the
open,
telemetry
bootstrap
code,
which
looks
at
your
virtual
environment
or
whatever
the
activated
python
environment
is
tries
to
detect
everything
you
have
installed
and
installs
corresponding
instrumentations
for
them,
and
the
other
one
is
open,
telemetry
instrument,
which
basically
does
the
same
thing
that
we
just
showed
above
with
code.
It
sets
up
a
tracing
pipeline,
then
instruments
all
libraries
that
you
might
have
installed.
H
They also need to install the opentelemetry-distro package, because without a distro, configuration doesn't happen at all, and that can be very confusing. So, looking at prior art and other projects, what I as an end user would expect is to install a single package, like pip install opentelemetry, which seems to be the most obvious choice to me; and once you pip install opentelemetry, you get the default experience out of the box. Everything is ready for you to get started with OTel.
H
You don't have to think about how packages are divided, how functionality is divided between packages, which packages you need and which you don't; you just install the opentelemetry package and you're good to go in five minutes. So the proposal is: we'll move these two commands to the... sorry, to the new opentelemetry package, and this package would internally depend on opentelemetry-sdk, opentelemetry-api, and opentelemetry-instrumentation, everything that's required to get started with OTel, right? In addition, it will have some extra requirements; like, one bundle can be otlp.
H
So the only downside to maybe including otlp by default is an unnecessary package installed for people who don't want to use otlp. I guess for HTTP it's not that big a deal; for gRPC it might pull in a lot of additional dependencies, but maybe even that is justifiable, especially if the spec recommends using gRPC by default.
A
Sorry, just a general question. So far, from reading your document, I think that you have this pretty much figured out. Do you want to continue the process of receiving feedback in this document, or do you want to open up your...?
H
I'd like to give everyone a chance to review this, maybe, and wait till the end of this week. Maybe I overthought something, or maybe I didn't think of something, or there are some fundamental design flaws that I didn't think of. So, if everyone gets a chance to do this by early next week, then I'll start with prototype implementations, and we can discuss the actual implementation on the PR itself, right?
F
Okay, let's give it until early next week, yeah, and then...
H
Yeah, we've only got one minute left, yeah, so maybe I can cover this real quick. Could you scroll up a little? So I'm proposing another small change to the opentelemetry-instrument command: the first three lines load and apply a pre-instrument hook and instrument the packages, and this is how it works today.
H
Yeah, so I just want to point out that this is one more change that I'm proposing; other than this, everything else is the same as it is today. So yeah, I think we are over time already. Thanks for giving me a chance to present this, and let me know if you have any comments or concerns. I'll wait till next week, and then we can maybe discuss the nitty-gritty details of the implementation. Awesome, cool.
F
Yeah, this looks good, yeah. Everyone, take a chance, take a look at this. We do have a bunch of PRs that we didn't get a chance to look at, but we will add them to either next week's agenda, or they might even just get addressed this week. Other than that, if no one has any other really pressing matters, we will see you guys next week, then.