From YouTube: 2020-10-22 meeting
A
Already lunch time for me, you know. Okay, yeah, we can get started. Hey Alex, do you think you can share your screen while I talk?

C
Yeah, sure, just...

A
Right now? Good, thanks, just sorted here. Yeah, again, add your name to the attendees, please. Aaron's always on point with this. So yeah.
A
Yeah, it's my favorite part of the day. Looks like Nathaniel has added an essay to the signals, yeah.

D
So I haven't written an...

A
Sounds good, cool. All right, so if you guys can see Alex's screen, we can just go right into it. Nathaniel, we could just prioritize your thing first. You want to talk a little bit about what you have in mind?
D
Thanks, yeah. I'll give an update on what I was working on and what I accomplished this week for moving things to the contrib. So the first topic was that I created PRs to just blatantly move all the packages, with the git history, into the contrib repo. Thanks so much to Alex, I saw he was merging them all last night, and so I think this is mostly agreed upon and finished, and it's a good path forward.
D
So initially I had made just one PR where I removed the packages and I removed references to the packages, but it started getting a bit complicated, because then I had to add commits on every single PR. There's about 40 separate packages, so 40 separate PRs to remove all of them one by one. So I just wanted to ask you guys your opinion on it.
A
Me personally, I wouldn't mind you just removing everything in one PR. My thoughts on having the one PR be split up really only apply to when you were adding it to the contrib repo, because that is the thing that we're using, that's the actual code base, right? So, right, I'm pretty comfortable with it.
D
So this is an example that I got running on my version of it. So in here, thanks Alex for this, the best case I saw was that, let's say, someone opens a PR in opentelemetry-python core like this. This curl command will make a call to the workflow dispatch API endpoint on the contrib repo and say: run this workflow remotely using these inputs.
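The curl call being described can be sketched like this; the workflow file name and input names are hypothetical placeholders, while the endpoint and payload shape follow GitHub's workflow dispatch REST API:

```python
# Sketch of the workflow_dispatch trigger described above. The
# workflow file name and input names are assumptions; the endpoint
# and payload shape follow GitHub's Actions REST API.
import json

def build_dispatch_request(owner, repo, workflow_file, token,
                           fork_repo, fork_branch):
    """Build the URL, headers, and JSON body for a workflow_dispatch call."""
    url = (
        f"https://api.github.com/repos/{owner}/{repo}"
        f"/actions/workflows/{workflow_file}/dispatches"
    )
    headers = {
        "Accept": "application/vnd.github.v3+json",
        # the personal access token each dev would have to create
        "Authorization": f"token {token}",
    }
    body = {
        # branch of the contrib repo whose workflow definition runs
        "ref": "master",
        "inputs": {
            # hypothetical input names; GitHub populates the values
            # from the core PR that triggers the call
            "fork_repository": fork_repo,
            "ref": fork_branch,
        },
    }
    return url, headers, json.dumps(body)
```

The actual trigger would then be a `curl -X POST` with those headers and body against that URL, run from the core repo's CI.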
D
So you can see the last line there: the inputs have fork repository, which GitHub automatically populates from the repository you're on. So an important note is that this works with forks, because if this is a fork of OpenTelemetry, it will reference the current branch where this is made out of, and this is again all run automatically. One disadvantage to this is that, as you can see on the second or third line of the curl, you have to have a personal access token, so this one would require all devs to set that up.
D
They'd have to go into their settings. It took me like two minutes, and the instructions were very clear, but you have to set up a token under your username, and you'd have to name it this, because when you fork the repo and make a PR, it will look for these secrets in your forked repo, not in the base repo.
D
I looked into finding another way, so that OpenTelemetry can provide this, but I think for security reasons forks never get any secrets from the base repo. So it seems like this is the best solution, which is a minor inconvenience. And the last point: if we open up the other link that I posted in the doc, it shows that on the core repo. So I cloned the repo and made it an original repo; it's not a fork.
D
So the key step is the third step here. It says: check out core repo at requested fork and branch, and if you drop down line one, this is a GitHub-provided action. So you can see here that in the `with` parameters on line three, it takes the repository where the PR was made. So this is my fork, which we saw earlier, and it takes the ref from where it came from. I was just pushing directly from my master, so it took the master.
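In the contrib workflow, that checkout step might look roughly like this; the input names are assumptions carried over from the dispatch call, while `repository` and `ref` are real parameters of the GitHub-provided `actions/checkout` action:

```yaml
on:
  workflow_dispatch:
    inputs:
      fork_repository:   # hypothetical input name
        required: true
      ref:               # branch of the fork to test against
        required: true

jobs:
  test-core-pr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2   # the contrib repo itself
      - name: Check out core repo at requested fork and branch
        uses: actions/checkout@v2
        with:
          repository: ${{ github.event.inputs.fork_repository }}
          ref: ${{ github.event.inputs.ref }}
          path: opentelemetry-python-core
```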
D
So one thing, if you just go back in your browser to the previous one, or if you click on Actions, that works too. And then, again, the one thing that I like about this workflow dispatch is, if you click below All workflows, the second tab, the one below that one, sorry, yeah. So, oh, I guess it doesn't show up, yeah. This is the right place, sorry, but yeah, I don't know why it showed up earlier; I'll show it later.
D
But on this page here, above or under the four results, there would be a banner that shows a run of the workflow dispatch test. So it allows you, if you make a PR in the contrib repo, and let's say you want to test that you didn't break the tests in contrib, you could supply it with inputs.
D
So it's this manual process, for sure; it's not automatic by any means. But you can at least check that your feature in the contrib works against your other feature in the core, by pulling it and running tests, and it'll give you something that you can run automatically there. So this is the best solution I found. You guys can tell me if we should make a PR that has this curl request and has this workflow dispatch implemented, but the benefits are that at least tests run on a contrib PR.
D
The disadvantages are that you have to create a personal access token and that it doesn't really give you any feedback. The best thing you could do in the PR is, someone could ask: did you check that the run that was automatically created in contrib passed? And then you'd have to go look for it, right? You'd have to look through these runs and say, well, I know here's the branch that triggered my thing; I can see it in the third step of the tests.
A
Yo, that's pretty cool, man, that's pretty cool. This works like this.
D
I don't know why it doesn't show up on Alex's screen, but I'll take a picture and I'll post it in the doc, of what I was talking about. Maybe there's a permission thing; I'll mention it if there is, yes.
E
Hey, so, Nathaniel, okay: is it not possible to run the workflow on the actual contrib repo?
D
Yeah, yeah, I'm just using this one because it's just not merged into the original one yet. Sorry, let me move this so that it... but yeah, it will run there, that's the goal.
E
Okay, so then there wouldn't be that checks issue, right? Like, you would be able to write back, if you wanted to, to the original PR.
D
The reason we can't run that is because, even if they get triggered, like the main contrib repo triggers this, they have to trigger it on your branch, which still exists on your fork, right? Because you forked OpenTelemetry and you make a pull request against opentelemetry core. Then, when contrib finishes its pass, which got triggered remotely...
E
Okay, but can't you leave it on the PR? Like, that's how, for instance, Read the Docs has one that will build your docs and then it will write back to your pull request whether it failed or not.
D
Okay, I mean, I looked into it for quite a while yesterday and I couldn't figure out a way to get it to work on forks. I got it to work on a non-fork and I got it to work on my original one, but specifically this API will not work on a fork, and so that's why I stopped looking into it.
E
Sorry, I was gonna say: did you look into, if you use the action as a library from the contrib repo, you can just build it directly in the CI run on the core repo? Sort of like that actions clone, or whatever it is, the actions checkout action from GitHub, but you make your own and then just run it directly.
D
So I thought about it. I don't know if this is what you mean: do you mean cloning the contrib repo during the workflow in the core PR and then running the tests that way? Yeah, so I thought about that, and I think it is possible, but I mean, I'm just worried it's...
E
Okay, I mean, it's not like a circular dependency, right? Because there's just one way: each contrib package depends on the core. You could just update it either to build off of the branch, or you could update it to use the local checkout.
D
I see, I see. Actually, I think, yeah, I don't think that one would be too hard. I know you could check out the repo, but then you'd have to run tox on the checked-out repo and then install it from somewhere in the top level. That could work, maybe, yeah. I can see that working.
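Aaron's local-checkout idea would be a step in the core repo's own CI, sketched roughly like this; the tox environment name is a placeholder:

```yaml
steps:
  - uses: actions/checkout@v2   # the core PR under review
  - name: Check out contrib next to it
    uses: actions/checkout@v2
    with:
      repository: open-telemetry/opentelemetry-python-contrib
      path: opentelemetry-python-contrib
  - name: Run contrib tests against this core checkout
    run: |
      pip install tox
      cd opentelemetry-python-contrib
      tox -e py38-test-instrumentation   # placeholder env name
```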
B
Hey, Nathaniel, this looks cool. Do we know how other projects are handling it? Other OTel projects, I assume, must have the same problem.
D
Totally. I didn't, besides Java. Java has all their stuff in the main repo, so they don't have the problem yet. Contrib, I don't think, has any CI testing, or sorry, .NET doesn't have anything on theirs, as far as I...
B
Then it's on you to go and fix contrib as well, yeah, and I think I...
D
I'm just worried that I'd have the same issue as with the Checks API, because with the Checks API I had to create my own GitHub app either way. It's the same way you would create, like, a LinkedIn app, and the problem was, or sorry, the GitHub app doesn't have access to forked repos specifically. So yeah, maybe the bot might have different permissions, but I'm just worried about that. That's the one thing I would point out.
B
Yeah, I think the bot should be able to handle it, but I've never built one, so I don't know. But intuitively it feels like a repo owner should be able to give a bot a lot of access around PRs.
B
Otherwise, this looks good. My only concern is developers having to add their tokens. For regular members, regular contributors, that might not be a big deal, but for one-off contributors it might be a big hurdle to do that.
D
Yeah, like I mentioned, totally, that's a fair point. I think it's a very simple process; all you need to do is you're asked to enter your GitHub password, you just say generate a token, and it does it for you. But yeah, your concern is totally valid.
B
Yeah, it might be a very simple process, but I wouldn't be surprised if it would discourage people from making small or minor PRs that they would otherwise... Totally, yeah.
D
Yeah, so if there's no more questions... I mean, now that I think about it, for sure, I think Aaron's suggestion is a great one, where you can just download the contrib on the core and then run it that way. And so that would just be like a checkout, and then you would just have to change directories to the checked-out repo and then run it against whatever just got downloaded there.
B
Yeah, I think that sounds like a great first solution, at least; it sounds simple enough to implement, and it fixes most of the problems. So my vote would be to go with that at first, and if we encounter problems, then look into other solutions. Maybe we can also bring this up in some cross-SIG meetings and see if we can collaborate with other things.
D
I think an action item for me will be that I'll create an issue and explain what I think would need to be done there, and then maybe in the coming weeks, if I have time again, I can go back and do it. I kind of time-boxed my move of this contrib stuff to this week, and so I have to get back to my other tests. But yeah, I think there's a good path forward there that we can pick up, and that at least will provide the maintainers...
C
You know, since you're talking about time boxes, again, I would really focus on just getting all those packages moved over, and I'm hoping I'll get to the remainder of the reviews today. And then just get the tests running against the contrib repo on its own, so that we can prove that all of those tests are passing and that's all good. And then the cross-repo testing stuff, we can probably, like you said, just create an issue and we can follow up on it.
C
You know, it's really like an optimization, I think. Maybe it would almost be a nice-to-have, but it's not a core requirement. The other thing is just around the docs. I think I mentioned this in one of the PRs that I reviewed: currently all of the docs are living in Read the Docs, and they're all against the core repo. But what about that?
D
Yeah, I just basically noted that we were basically ripping out the docs for those packages from the core repo, right, especially if they go live in the contrib one. I can see we can make a mock; we can do the same thing in the contrib, like bring in the package that creates the docs. But I don't know how much you guys like the idea of, you know, someone having to go to opentelemetry-python and then opentelemetry-python-contrib for the rest of the docs.
A
Yeah, I don't really mind having the docs in two separate places, right? It's not a big deal.
A
Sorry, I meant like, if you want the instrumentation docs, you go to the contrib repo, and then, yeah, I think that's okay.
D
Okay, sounds good, yeah. The last thing that I wanted to bring up here is what we talked about last meeting: if there's a problem with updating all packages at once. When I asked Anuraag on the Java maintainer side, he gave me this quote, that releasing takes fairly significant effort: driving the release process, writing the changelog, etc.
D
So I guess the goal is to reduce the overhead by not having it happen too often across many components, which made sense to me and led me into this point here afterwards. With how we have it right now, we're going to set it up so that the contrib repo is going to have all the instrumentation and exporter stuff, and the core repo is going to have the main stuff. Then, if contrib and core are both at version one, and contrib is pointing to version one, then to get updates that touch both repos, I'd...
D
Imagine the flow from now on would be: you update core over and over and over again, and contrib's still pointing to one, so it's fine. Once core is ready for a version two release, there are all the features that contrib will have to catch up on, right? It updates to version two, and then the next step of the flow would be to go to contrib and start updating maybe just a subset of the packages, right? Like, if you have Django and Flask, maybe only Flask needs an update to work with core two.
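The pinning flow described here would show up in each contrib package's metadata roughly like this; the version numbers are only illustrative:

```ini
# Sketch: every contrib package pins the same core release, so a
# "version two" contrib release just bumps these pins together
# (illustrative versions only).
[options]
install_requires =
    opentelemetry-api == 2.0
    opentelemetry-instrumentation == 2.0
```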
D
Then you would bump the package for Flask to point to version two, but you would also bump Django, even though Django maybe didn't have a change, to say: okay, now this one also points to two. So that all the packages on contrib are now pointing to the core repo, and they're all pointing to the same version of the core repo, and then you have it as one big release, like, okay...
D
All these packages now are part of the version two release, which points to core version two. And so I thought that was a good use case for updating all packages. For sure, the one downside, if you were to have individual packages being updated, like, let's say you have Django at version 2 and Flask at version 1, is that then the tests for those would have to be separate, right? Because if they each clone the repository, then they would have to clone the repository at different tags.
D
So right now it's nice, because we can run them all using the same clone. But if you have to clone it every time, it just might make tests slower, and maybe more spread out, right? Like, every workflow would have a job for every single run. So, if you guys had thoughts on that...
A
I don't think that's such a big issue; like, you don't have to clone individual versions. If it does break on 18, that means you have to change something on the package, right? So then it's like, oh, I'll just update the number to 18. But me personally, I don't have a problem with updating all the packages at once. I think the only downside I see is that you can't really explicitly tell if something has changed.
D
Yeah, makes sense. I just wanted to add thoughts here, so later on, you know, if someone comes along and they want to think about updating them individually, they can see this. But yeah, you make a good point: unless it breaks, it's fine, it can just use the clone from the new tag. So yeah, okay, thanks guys, that's all I wanted to do. I have my action items, so I'll get that finished up this week.
C
Versioning: an update from the maintainer meeting. So we kind of talked about this a little bit last week; I think Aaron had a question around what the semantic versioning for the different components of the API means once we get to GA. And I think the big takeaway from the maintainers meeting was: there is a plan currently being put together, and I think the owner on that is Alolita from AWS, to propose what the releasing process looks like and what versioning is going to look like.
C
I kind of talked to Elena about this a little bit this morning, and yeah, you know, it does just feel like there are some things that we could do to make sure that we're kind of future-proofing our own package version numbers. And one of the thoughts that we were bouncing around, and I'd love to get more feedback from other folks here, is, you know, what...
C
What if we did split the metrics API and the tracing API into separate packages, and then left the opentelemetry-api package as a package that has a dependency on both the metrics and the tracing API? That has a bunch of questions, because I'm sure there's a bunch of code that's kind of shared across both of them, and what does that look like? But I just wanted to bring this up and get some general ideas here.
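The split being floated might look roughly like this in package metadata; the split-out package names are hypothetical:

```ini
# Sketch: opentelemetry-api as a thin meta-package depending on
# (hypothetical) split-out signal packages, which could then version
# and GA independently.
[metadata]
name = opentelemetry-api

[options]
install_requires =
    opentelemetry-tracing-api == 1.0    # tracing could GA first
    opentelemetry-metrics-api == 0.16   # metrics still pre-1.0
```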
A
Yeah, like, if we don't do that, it's like tracing gets released for GA, but metrics isn't, but they still both live in, you know, the API and SDK offerings, like the package that is being installed when you pip install. So we'd have beta components that exist in a GA package, and that's just kind of strange.
E
Yeah, I think actually I may have misspoken last week. It looks like what's gonna happen is there's gonna be a release candidate for tracing first, and then release candidates for metrics later, and then the whole thing will GA at the same time. Is that right? I mean, I don't go to the maintainer meeting, so I could totally be off.
E
Yeah, from Morgan.

E
Yeah, I think there's a lot of confusion, and then you kind of have the same problem still with the API. Like, parts of the API are going to change even after its release candidate, right, the metrics ones, yeah. So I don't know if that really changes the picture.
C
Yeah, I guess I was, you know... if we don't separate the different APIs into their own packages, what does that mean for when we want to add new APIs? Does that mean that, for example, the logging API, when it does get released, is just going to be like a 2.0 release? Right, yeah, it's really weird, right? It's just tightly coupling two kinds of signals that may or may not be tightly coupled otherwise.
A
Yeah, the timing just doesn't work realistically. What Aaron said should be what we're doing, right? Like, we RC everything together and then GA everything together; that way we don't have any of these versioning problems. But I don't know if that fits with everyone's timeline, because who knows when, you know, metrics is going to be done.
A
We don't have an answer right now, so we don't know yet, which is annoying, because it's such a huge issue, such a big problem. And yeah, so we'll see.
A
I thought, yeah, I thought the question was gonna be answered in the blog post that Morgan wrote, but there's literally just an excerpt that says: next week we're gonna be doing our first release candidate for tracing, and then after that, it says, shortly after that the metrics spec will be frozen. And I don't know what "shortly after that" means, so I think they're leaving it vague intentionally. So we still don't really know.
E
I mean, we could make an issue, and I can point Morgan to it, or we could just tag him or whatever. Maybe he's thought about it more than us.
A
If not, then we can just go through the ones that we've had so far. If you have any PRs that you want us to call out, just feel free to add them; these are just the ones that Alex and I found. So the first one is actually already merged, from Owais. This is related to, you know, error handling and setting the status and status code.
A
So this one was totally fine. I kind of wanted to bring this up because I was talking to Alex about this too: we kind of have multiple ways to handle errors in the SDK, and we should be consistent across instrumentations in how we do this. Like, I was taking a look at the code this morning, and it's different depending on whether it's, you know, a server or a client instrumentation.
A
So I already created an issue for this; I just wanted to bring it up, I guess, if you guys have any ideas. But essentially we should be consistent in how we're dealing with errors and setting the status, or at least, if there are multiple ways, we have to be very strict on the rules, like: oh, this is what we do for server instrumentation, this is what we do for client instrumentation. So, does that make sense? Does anybody have any questions regarding this? All right, yeah.
A
So that's already being tracked in that issue; I just wanted to bring this up. We can just move on to the next one if no one has any questions. So yeah, this one, okay, yeah, so, cool. I put this on here just as a reminder; Alex and I still have to... we're already assigned to this, so yeah, pretty straightforward. So let's go to the next one.
A
So after Alex, after she addresses your comments, I'll probably take a look at it, because I don't even know if it's in a state where it's reviewable yet.
C
No, right now it's not currently doing anything with the code, unfortunately, right? So I think there's a little bit more work to be done, but I think, yeah. Do you want me to just assign you as a reviewer instead? Sure, yeah, okay, I'll just put you in the assignees; that way it looks more threatening when you're an assignee, yeah.
A
Ah, the whole assign thing, yeah, so anyways, okay.
A
Yeah, some more updates to the sampler spec; that was recent, it was like two days ago. So, you guys remember Alex's PR for, you know, passing in context instead of parent context? We've just gotta do it now for samplers as well.
A
This pretty much just changed the context, oh sorry, the span context of the parent, the parent's span context, to the parent context, and it just changed a bit of the behavior of the parent-based samplers. Pretty straightforward, so yeah.
A
I also wanted to point out that it took me a while to understand, but if someone is using a parent-based sampler and it's the root span, it is always sampled. And that was something where I was failing a lot of tests, and I was like, I don't know why. So yeah, while doing that, pretty straightforward, just need some reviewers for this.
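The root-span behavior described can be sketched like this; a hedged toy model, not the actual SDK sampler code:

```python
# Hedged sketch of the parent-based behavior discussed: a root span
# (no parent) defers to a root sampler, which here defaults to
# always-on, which is why root spans came out sampled in the tests.
class AlwaysOn:
    """Root sampler used by default: sample everything."""
    def should_sample(self, parent_span_context):
        return True

class ParentBased:
    def __init__(self, root_sampler=None):
        self.root_sampler = root_sampler or AlwaysOn()

    def should_sample(self, parent_span_context):
        if parent_span_context is None:  # root span: no parent to follow
            return self.root_sampler.should_sample(parent_span_context)
        return parent_span_context.sampled  # follow the parent's decision

class FakeSpanContext:
    """Stand-in carrying only the sampled flag, for illustration."""
    def __init__(self, sampled):
        self.sampled = sampled
```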
C
Yeah, I'm happy to take a look. I don't know if we can get a second approver on here to assign.
C
Thanks, awesome, thank you. This one, oh yeah...
A
Oh, right, yeah. So I think Owais took a look at this. He made some comments about the default implementation of get keys, and he typed up this entire thesis on what the getter should do. Oh wait, I just want to quickly go over it.
B
Yeah, I think there was some confusion. So this implements the new getter interface, which is basically a class that has a get and a keys method. And what I asked for was if it would be possible to provide some sort of a helper that instrumentations of the code can use to just pass in their existing get methods, and that would turn it into a valid getter implementation.
B
So we wouldn't have to implement custom getters everywhere. I think it was probably...
B
I didn't communicate too well, and the author thought I wanted a default implementation, and they implemented the getter to work with dicts by default. But that obviously fails mypy complaints, because it's a specific implementation and not an interface, and that's where they were getting tripped up. So I just shared an example of how what we can call the default getter in the API can be just an interface with a generic carrier, and then we can implement multiple specific getters for different types of carriers.
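The interface Aaron describes might look roughly like this; a hedged sketch, not the exact API code:

```python
# Sketch of a generic Getter interface with `get` and `keys`, plus a
# default dict-based implementation that instrumentations can share.
import typing

CarrierT = typing.TypeVar("CarrierT")

class Getter(typing.Generic[CarrierT]):
    """Interface a propagator uses to read values from a carrier."""
    def get(self, carrier: CarrierT,
            key: str) -> typing.Optional[typing.List[str]]:
        raise NotImplementedError

    def keys(self, carrier: CarrierT) -> typing.List[str]:
        raise NotImplementedError

class DictGetter(Getter[typing.Dict[str, str]]):
    """Default implementation for plain dict carriers."""
    def get(self, carrier, key):
        value = carrier.get(key)
        return [value] if value is not None else None

    def keys(self, carrier):
        return list(carrier.keys())
```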
B
And if a dictionary is being used by most of the packages as the carrier, then we can have a default implementation for a dictionary getter somewhere in the API, or a util package, or somewhere. So I think this example should solve the problem, but I haven't heard back yet from the author.

C
Awesome.
A
Okay, so, oh wait: for the different exporters that use their own getter implementations, they just have their own getters in the exporter packages, right?
B
All right, yeah. I'm actually only aware of one; it's the Celery instrumentation, oh...
A
Cool. What are we using for the default propagator? Like, I mean, the SDK propagators and stuff.
B
The default right now, I think it's... I'm not sure. I think right now, as far as I've seen, at least in instrumentations, we just use the dictionary type's get method in place.
A
So we would just have our own SDK getter kind of thing, yeah.
A
Yeah, yeah, probably just remove that; we'll just use that one, and we've just got to turn it into a class, I guess, according to Owais's interface design, right? Yeah, cool, all right, looks good, nice, yeah. So we'll just wait until he deals with that, so yeah. Let's do two more issues here. Yeah, I'm sure there's more PRs; we just didn't have time to get to talk about them. But does anyone have any other open PRs that they want to talk about, and stuff? Alex?
A
Sixty-one from the... oh sorry, no. Yeah: one, two, five, six.
A
Yeah, so it looks like Aaron has some requests for changes for this one. I personally didn't look at this one at all, but yeah.
A
I see, oh, this is like yesterday. Okay, cool, sounds good.
A
There are two other ones I want to talk about, but yeah, we could talk about Aaron's one first.
E
Okay, cool: if you go to the code...
E
And maybe just expand the top code, like the actual fix, so you can see. So this is in the worker thread in the batch exporter, the batch span exporter, and basically the problem is that the timeout isn't getting reset.
E
Where is it, from 132? If it's set to something really short, it'll just stay in that little while loop at the top. But I was kind of curious because, if they're setting the timeout to be the scheduled delay, so if, you know, they put 5000 milliseconds or whatever, if you do that in here, the only place that timeout is used is on 118 in the condition.wait, and that's just a max timeout; if you get notified, it'll break out of there beforehand.
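The fix being discussed amounts to recomputing the remaining wait on each loop iteration instead of reusing a stale timeout; a hedged sketch, not the actual exporter code:

```python
# Sketch of a batch-export worker loop where condition.wait() gets a
# freshly computed timeout every iteration, so a short schedule delay
# can't pin the thread in the loop. Toy model only.
import threading
import time

class Worker:
    def __init__(self, schedule_delay_millis=5000):
        self.schedule_delay_millis = schedule_delay_millis
        self.condition = threading.Condition()
        self.done = False

    @staticmethod
    def remaining_wait(schedule_delay_millis, last_export, now):
        """Seconds left before the next scheduled export, floored at 0."""
        return max(0.0, schedule_delay_millis / 1000.0 - (now - last_export))

    def worker(self):
        last_export = time.time()
        while not self.done:
            timeout = self.remaining_wait(
                self.schedule_delay_millis, last_export, time.time())
            with self.condition:
                # max wait only; a notify from export() wakes us earlier
                self.condition.wait(timeout)
            last_export = time.time()
            # ... export the current batch here ...
```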
A
You own everything, yeah, that's everyone. Dude, Aaron, you shouldn't filter by the ones you're assigned to; you should filter by the reviewers, because we...
A
Yeah, yeah, makes sense, but yeah. I just haven't personally taken a look at it yet; I don't really know what it's asking yet either, but hopefully I'll be able to shed some light after, so...
A
Yeah, okay, how about this, Alex? You want to just put down, like, this: ping him and just be like, forget about the tests, and just answer Aaron's thing, like, address Aaron's comments, and then we'll just merge.
A
Okay, the last one is the env variables in the OTLP exporter. This already looks pretty good to me; I was wondering what this was. It just needs more reviewers, I think.
A
I think that's pretty much it for the PRs. Oh, we still have Owais's PR for the automatic instrumentation of the OTLP exporter, I think, something like that. It looks like Diego requested some changes, and this was like a while ago. Oh, oh, we already commented.
B
Yeah, Diego requested some changes. I'll probably get to this early next week and try to address all the comments.
A
Yeah, all right, sounds good. So, all right, that's good, I'll just note that down.
C
And I know that Diego's been working on the bound instruments PR, which I can't see right now.
A
All right, yeah, I made some comments on that that he hasn't addressed yet. So, can you also tag it as the release? Oh wait, is this needed for GA? Don't even know. I mean...
A
Anyways, Alex, I didn't get a chance to...
C
Oh yeah, I was just pointing Diego at the implementation around the start time, and the time values were all different depending on the aggregator type, right? Which, right now, I think in this PR he's trying to use the same, yeah, yeah, the same one for all the aggregators. And yeah, I suggested that that might be part of what's kind of wonky here.
A
Yeah, okay, cool, sounds good. We're just waiting for him to do that. Nice, okay. We have like three minutes.
C
Three minutes, so we'll cram it in. I don't know if we've already kind of talked about this or not, but this is the status code thing: for those who haven't seen the change in the spec, the status codes in the spec have changed to only three status codes. There's error, okay, or unset, and I think this issue is just to tackle that.
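The three status codes from the spec could be modeled like this; the names follow the spec change being discussed, while the numeric values are only illustrative:

```python
from enum import Enum

# Sketch of the reduced status-code set described above
# (illustrative values).
class StatusCode(Enum):
    """The three span status codes the spec now defines."""
    UNSET = 0   # the default: no status explicitly set
    OK = 1
    ERROR = 2
```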
D
Thanks. I just wanted to ping and follow up on this one, about which namespace package path to add custom propagators under. So I was proposing opentelemetry.sdk.extensions, but just wondering if anyone's had any thoughts on it since then, about putting it there or putting it somewhere else.
A
I think this is something that maybe Alolita's guidelines were gonna address; I'm not too sure. But until then, is there some temporary place we could put it, or we could just leave it as it is until those are finalized.
D
We can leave it as is, because we still need contrib to be in a good place before I can even make a PR on that. But yeah, I just wanted to keep it in mind, maybe for next week, to talk about whether you guys agree with that location or not; that could become the default location for now, and then, if we need to, we can move it anywhere before GA or anything.
A
Right. Are you able to ask Alolita about the progress and stuff? Like, I know you guys both work at the same place, but I don't know if you guys converse or anything.
B
Brian Ashby from Splunk is on the call; he's a senior technical documentation writer, and he's looking into whether he can improve or help OTel Python documentation in any way. We...
A
Cool, yeah.
A
Yeah, sorry about that. Cool, anything else from anyone? All right, cool, we're out of time, see you guys next...