From YouTube: 2021-07-15 meeting
B: Fine, how are you? Going well, yeah. Just jumping right back into it; got a little bit of a vacation in the first half of the week, but yeah.
A: 'Cause I started.
B: Yeah, I came back and it's overwhelming how much stuff there is to do. Have you been somewhere far away, or not? Kind of Southern California? So, far away to me; climate-wise it felt like the opposite side of the world. But yeah, Josh, are you still...
C: Down in the Bay Area? I moved a month ago, to Mendocino County.
B: Okay. What's the... I think the temps there are pretty close. I was in Palm Springs for the first half of the week. What's Mendocino? I think you're in the hundreds, right? No?
B: Yeah, I don't think you're the only one who's thought that.
B: Robert, how about you? How's Poland these days? Or Estonia? Or no, Poland, I'm...
A: From Poland? I get this question all the time, yeah. So basically, you know, in Poland... I have an apartment there, so... but basically right now the weather is extremely crazy in Poland, at least for us. We got hail, we got a lot of storms, etcetera, and it's like 30 degrees, you know, Celsius. I don't remember how hot that is in Fahrenheit, but very hot.
B: Okay, hi there. Hey Steve, hey. I think... let me see if I can pull up the participants. Okay, I think we're probably reaching quorum; we've got five, and I see Anthony's on the call as well. Maybe we could wait for one or two more; maybe Aaron will show up. But everyone on the call is pretty tenured on this one, so if you need a reminder: add yourself to the attendees list.
B: Cool, yeah, welcome everyone. We can jump into looking at the project board to start off. I think it's been a pretty slow week. I know personally I've been out of town; I'm just getting back in today, so there's a lot, my inbox is quite full, but yeah. Maybe we can just jump into the prioritized list, specifically what we're trying to do for the next release.
B: There's been a little bit of movement. I think Anthony's PR here was completed; otherwise I don't think there's been too much other movement. That being said, I have started working on this, so now there's movement. So I actually wanted to maybe jump into the last issue. Yeah, Anthony's super happy. I was talking with Bogdan about this; Bogdan opened a lot of really great issues and has had a lot of really great conversation with me on the side.
B: But this is one where I think it's kind of important. It was brought up due to this oteltest change that Bogdan wanted to implement, which made the API require the SDK as a dependency, and that wasn't really desired; this issue was opened specifically to address that. Maybe this is also an implementation that we didn't want, but I think there's a really interesting point...
B: ...that's brought up here, and it's this end-user frustration that's captured in Bogdan's last statement. When Bogdan was testing with the oteltest package, things were working great, and then, when he went to production and started using the real SDK (in what was more of a test situation, but was actually production), it didn't work. Which isn't great; that's actually almost worse, because the test gave him false confidence that his code was correct. And I think that inspired me to say, yeah...
B: This is a really important thing that we probably need to address: making sure that our implementation of the SDK and of our testing library run through similar code paths. Obviously they can't be exactly the same, but we should try to make them as similar as possible, so users have the utmost confidence when they test with this. We went through a lot of conversation last week on how this could get restructured. I don't think there was a cohesive plan for it, but one of the things also was, you know...
B: We probably need two packages out of the oteltest package, and I think this kind of speaks to it. Maybe there's an oteltest package that lives in the SDK, which has more SDK things, and on the instrumentation side there's something more around the constructs that the instrumentation will need. But I think we should probably try to tackle this before we get our stable release out, because the oteltest package is going to be a stable package, so we want to make sure that we have something correct here.
D: The concern that I have is with things that are just instrumentation: what do they test with, and should they be testing against the SDK when the user may not use the same SDK? So I think there's still value in having that separate implementation in oteltest, but it would be very good, I think, if the span recorder could be used just as an exporter with the SDK, which I think gets you 99% of the way towards being able, as an application, to test with the SDK.
B: Yeah, I think you're right, and I think we're saying the same thing. But, just, I think that's a really important point: maybe we need to split this up, or make the functionality of that span recorder more universal. These are all really great ideas; I'm really excited to hear them, so yeah, I agree. Somebody needs to really dive into this one, I think, or multiple people, actually.
B: I think that's actually it for new tickets. For the other ones, there's still definitely some work to do, so I'm looking to try to put in some... I'm not going on vacation next week, so I'll be here to work on this kind of stuff, and hopefully put in some hours on this one so we can get a release out, hopefully in the next few weeks; that's my goal. The metrics support in OTLP over HTTP...
B: I think there's user demand for us to get a release out specifically so that they can get support for that again, and then also use the RC, which is, I think, good motivation for us. We want them to be using the RC release, so I think getting another RC out is going to be helpful if that's going to allow them to upgrade to be able to use the RC. That's what I'm thinking about. Anthony?
D: Yeah, I'm kind of of two minds on this. I would be reluctant to release another RC that I don't think is actually a release candidate, which is what I think we would be doing if we were to do it right now. I wonder whether we could release a 0.22 of all of the metrics packages, or whether we've broken anything between RC1 and now that they would depend on.
B: Yeah, that is an interesting thought. Is that something that you'd be willing to try, or is it just an idea?
D: Yeah, I can give that a try and see if I can make it work somehow. It should be a matter of taking one of the examples and ensuring it has no replace directives for the tagged versions it depends on, but a replace for the metric stuff.
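The replace-directive setup described might look something like this in an example's go.mod. The module paths are real opentelemetry-go modules, but the version numbers and the local path are illustrative only, not what was actually released:

```
module example.com/metrics-demo

go 1.16

require (
	// Tagged, stable releases for everything except metrics.
	go.opentelemetry.io/otel v1.0.0-RC1
	go.opentelemetry.io/otel/metric v0.21.0
)

// Only the experimental metric packages point at a local checkout;
// everything else resolves to the tagged versions above.
replace go.opentelemetry.io/otel/metric => ../opentelemetry-go/metric
```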
B: Right, yeah, and it would also give us a good opportunity to try out this new tooling that we have for the versioning. I think this is, you know, inevitably going to happen, so it's a good trial, I think. Yeah, I'm in favor of this; sounds great.
B: Cool. I think with that we're probably good on a little bit of the project roadmap here. Like I said, hopefully next week we can see some bigger numbers here; I'm planning to spend a lot of time on it in the next week. So we can jump into the agenda. First, Josh, do you want to talk a little bit about the metrics progress?
C: Yeah, so there are two kind of different threads here. Two weeks ago I shared a kind of early draft of, you know, I started from scratch with an API proposal that was mostly trying to focus on simplifying the godoc appearance of the API. I got some good feedback on that; that is #2044.
C: I continued working on it a bit after some feedback from Bogdan and a few others, and it's currently in a state where I'm pretty sure I could implement it. I started, sort of experimentally, branching that: copying it into place over the existing metrics API and deleting the old code to see what would happen. What that really means is trying to get the registry package, the global package, and the metric test package to work.
C: So I got that far, or at least I started to see what the problems were going to be. I definitely have some conclusions about which types from the current API could be pulled out. One example is metric.Descriptor; another example is metric.Measurement.
C: The measurement type is sort of only relevant when you're in an SDK, and these are types that are only going to distract the user. So I've started to understand which types we can remove. But then I also recognized that as soon as I started pulling types around, I started making a disastrous mess. So this is kind of me coming at this problem from the other direction: I've recognized some problems in the current code that are independent of my explorations of new APIs.
C: So now, could you, Tyler, go back to... I put an issue there. And then yesterday, starting in the morning and going almost all day, I spent time just tearing apart the code. I got to the point where I recognized that really I'm having problems because there's no meter provider abstraction that's really correct in the code, and as a result it's hard to swap in a new implementation, because of a few differences in just the code. I'm not explaining this very well right now, but things were falling...
C: So I started with, let me just try to solve some problems that are basically independent of any API change, and this fix here is one that I think we can do in several PRs. I don't want to dump one giant PR, so I made a draft of where I was last night. I would propose to split it, and I can describe right now, roughly speaking, what the pieces are; I also commented in that issue about what those pieces are. But basically, the high-level picture of this draft PR, and what's described in the issue, is that I want to get to the point where there's a first-class meter provider object, and every metrics SDK accumulator serves only a single meter. I started with this idea partly because I'm tracking the metrics SIG and where the SDK spec is heading, and we...
C: ...end up talking a lot about per-meter configuration. So, like, you're going to apply a configuration object to this meter or this meter provider, and not having a meter provider instance starts to feel like a mess to begin with. So what it means to create a new meter provider instance is that every metrics SDK is essentially dedicated to one instrumentation library, and this has nice outcomes.
C: I'm not sure if this is super clear; as I said it, it sounded a bit muddled. The idea is basically that the current implementation is one accumulator for the entire SDK, and what I'm proposing is that we have one accumulator per instrumentation library. It simplifies the OTLP exporter quite a bit. It's also going to help us implement schema URL properly, and it's going to help remove the instrumentation name and version from the instrument options, which just doesn't make any sense.
C: There's a bunch of cleanup that's happening here, so I just kind of wanted to talk about that, because I believe the next step would be for me to split my current PR into several, potentially sequenced, to accomplish the same thing in an easy-to-review fashion. I don't think you should try to review this PR, but I hope what I just described sounded like a good direction.
C: In order to get that moved correctly in the future, I have to remove the instrument option from the descriptor; I basically changed it, because there's going to be a cycle: if you start pulling things out of the main API, you can't refer back to the main API. So wanting to put metric.Descriptor in an SDK API package means I have to pull the instrument option out of it.
C: That's one of the types of things I'm doing here, and so you see lots of tiny little test cleanup fixes, and those get distracting when there's a bunch of them of different sorts mixed together. So I would probably sequence this as, say, one PR to move metric.Descriptor into a sub-package, where I think we will keep it forever, even when we talk about swapping in a new API or not; we're just going to pull types out.
C: It's one meter per instrumentation library. Well, that's... I mean, sorry, there's always one named meter per instrumentation library, but in the current code structure you have one implementation, and the instrument knows its instrumentation library name and version; there's no concept of a meter in the implementation. It's all put into the instrument, in the descriptor.
C: Another thing I've done that I didn't mention yet is that I've pulled the resource object out of the metric export record, so that the resource will be passed in a side channel all the way through to the exporter. Taking the resource out of the export record churns the code quite a bit; I would do that in its own PR, but I believe it's the correct thing to do.
C: So that means that most of this PR is just refactoring and moving things around, not really a functional change. What is actually changing in this PR is the controller: instead of having one accumulator-exporter pipeline, it still has one exporter, but it has one accumulator per meter, so that it has the same data.
C: The same code machinery, instead of applying to all the instruments in the process, now applies to all the instruments in one meter. And then, when you export, you're going to get one resource metrics object per instrumentation library, and the code is way simpler because you don't have to go regrouping things together. Every call to export has one instrumentation library name, one instrumentation library version, one schema URL, and one resource, so I put together a new struct called source data.
C: It's those four fields, and actually those four fields would apply to a trace exporter as well. This is, like, OTLP-standard stuff: instrumentation name, version, schema URL, and resource. Those would all arrive at the exporter and not be part of the export record.
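The four-field struct described above might look roughly like this. This is a speculative sketch: the struct and field names are assumptions (the transcript calls it "source data"), and the real PR may differ, but it shows the calling convention where each export call carries exactly one library identity and one resource.

```go
package main

import "fmt"

// SourceData is a sketch of the four fields described above that
// identify where a batch of metrics came from. The struct name and
// field types are assumptions; the actual PR may differ.
type SourceData struct {
	InstrumentationName    string
	InstrumentationVersion string
	SchemaURL              string
	Resource               string // stand-in for a full resource type
}

// Record is a single exported data point. Note that it no longer
// carries the resource or the library identity itself.
type Record struct {
	Instrument string
	Value      float64
}

// Export illustrates the calling convention: exactly one SourceData
// per call, so the exporter never has to regroup records by library
// or resource. It returns a summary string for demonstration.
func Export(src SourceData, records []Record) string {
	return fmt.Sprintf("%s@%s (%s): %d records",
		src.InstrumentationName, src.InstrumentationVersion,
		src.SchemaURL, len(records))
}

func main() {
	src := SourceData{
		InstrumentationName:    "example.com/http",
		InstrumentationVersion: "v0.1.0",
		SchemaURL:              "https://opentelemetry.io/schemas/1.4.0",
		Resource:               "service.name=demo",
	}
	fmt.Println(Export(src, []Record{{Instrument: "requests", Value: 1}}))
}
```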
C: Important or not, I would still suggest that currently the code has to do a bunch of extra work that won't have to be done, and this question of whether we combine the outputs of different instrumentation libraries into a single batch for the outbound request is, I guess, a separate question.
C: You might have to then bracket your exports with, like, a "we're beginning a call to batch some outputs", and then you call export per library, and then you say "okay, we're done", and then the OTLP exporter can issue it. I would probably prefer to just see, like, a pipeline of resource metrics get output, and then maybe some sort of custom batching logic go in there, or maybe not. I don't know.
F: So I'm not so worried about having multiple instrumentation libraries in a single export call, because that is perfectly allowed. In fact, as long as they share the same resource, you can put as many instrumentation libraries in one call to export as you want.
C: Yeah, we have that batching today, and I was just simply removing it because it causes regrouping to happen in the OTLP exporter. So the batching is still present, and what's missing is essentially a more sophisticated export routine that says: I am now about to export five different instrumentation libraries, I have five different checkpoint sets, and I have five different instrumentation library name/version/schema URLs, and then I have one resource object; and I don't know what the calling convention is going to be.
F: Yeah, the exporter should be getting a... well, I don't know what the current export API is on the metric side, but on the trace side the exporter gets a list, or a slice, of, I believe, essentially instrumentation library traces. So it's typically just one instrumentation library, but you can have multiple instrumentation library traces in each export call. Okay.
C: I can preserve that; I'll have to go do that. I will not try to ever propose breaking this again. So it'll be something like instrumentation library metrics, passing over a slice of those.
C: Yeah, okay. So that only changes the connection between the export API and the OTLP and other exporters, for the most part. I would keep almost all of this PR and just keep going, adding more to it. But thanks.
C: Oh god, if you've been missing the metrics SIG meetings, probably for good reason: there's been a lot of turmoil and discussion about multiple exporters. You know, there are sort of three different places in the metrics pipeline where you can insert, like, a fan-out operator saying "go to multiple places", and the answer is: yes, of course, more than one exporter; as long as they're the same, no big deal. But as soon as they have different failure semantics, or blocking semantics, or temporality...
C: So there's this idea of an accumulator, which is kind of called a measurement processor in the working draft of the SDK spec, and it's still there. The question about having one of them per meter or per library is a judgment call. I know at one point Bogdan was asking me to have one of those per instrument, because that way you have less contention, even within a library. I just think, like...
C: There are all good answers here. Having it be per meter gives you the ability to have different export intervals, but of course someone can come in and say: within a particular instrumentation library, I want this metric instrument to have a different export interval than that one. At some point I want to draw the line and say: look, you can do different instrumentation libraries at different intervals, or something, but maybe not different instruments.
C: I don't know; that's what's being discussed in the metrics SIG. So at some level this controller, that's where the change was happening here, is still where complexity may get thrown in or removed later. But currently the idea is that one accumulator can be collected atomically, so, like, you could have one per interval; I don't know, there are different ways.
D: The batching processor in the collector has logic for splitting up batches, so if you submit 500 and it's supposed to only send 200 at a time, it'll split it into three chunks. That logic is... yeah, especially with pdata structures, I've been elbow-deep in that recently, so if we could avoid that, that would be great. But I suspect we may end up having something similar if we're going to be pushing large chunks, as with traces.
D: It's slightly easier with traces, because you push a span onto a queue, and another process pulls spans off the queue one at a time, and once it hits its threshold it sends all of the spans it's got batched up. But with metrics, I can see we might have one key related to, you know, 50 or 20 individual data points, and if we're saying we want to batch at X data points, we may have to split them up afterwards.
D: Yeah, and even the collector doesn't batch based on, like, byte size of output; it batches based on data points, or metrics, or trace spans. So...
D: Yeah, we can easily count those and split on them. The actual size that goes downstream, like we had to do with the Jaeger agent exporter splitting, that is significantly less fun.
C: I've seen it all; I hope to avoid it. But I will definitely update my PR, or my drafts and forthcoming PRs, to keep the data together, so that the decision to split will be done inside the exporter, not inside the controller.
B: Okay, yeah, that sounds good. In the interest of time, I think we could probably move on, Josh, unless you were...
B: Cool, awesome, thanks for the update on that. I think there was some really great discussion there, and it should be interesting. Garrett, we have you up next for talking about...
G: ...the Lambda work. I just want to call out that the draft PR that I had, I've pretty much got the okay on our side to switch it to a regular one, so that's cool; that'll be happening soon. We've got a related issue that is kind of stemming from force flush stuff that doesn't normally matter, but in the context of Lambda, and freezing, it is pretty important.
G: We think we can figure it out for the most part ourselves, but at Anthony's suggestion we went ahead and made the issue there, and we're kind of diving into it. Neither me nor Blai, my mentor, are really Go experts, especially with concurrency stuff, so we're taking it slow for sure.
G: But ideally, if we could get that kind of finished, then the draft would be totally ready. But yeah, so that's that.
D: Really... yeah, and I managed to nerd-snipe Ana with this issue the other day. I think she's commented on the PR, not on this issue, but it looks like she's talked to some people on the Lambda team as well. They think that there should be a hook, or an event, that we can listen to out of the runtime.
D: That's already made it into the queue; but as a last step, we would trap this shutdown event and call shutdown, which drains the queue completely, ensures that nothing new can come in, and gets rid of everything that's gone out. So that should be a guarantee that all spans eventually get exported, even though some may happen after the Lambda got frozen and waited until it got killed.

B: Awesome, yeah, that's a great update there.
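The drain-on-shutdown behavior described above can be sketched with a toy processor. This is not the real SDK code; the types and names are invented for illustration. The point is the guarantee: after shutdown, everything already enqueued is exported, and nothing new is accepted.

```go
package main

import (
	"fmt"
	"sync"
)

// processor is a minimal sketch of a batching span processor with a
// bounded queue; spans here are just strings for simplicity.
type processor struct {
	mu       sync.Mutex
	stopped  bool
	queue    chan string
	exported []string
}

func newProcessor(size int) *processor {
	return &processor{queue: make(chan string, size)}
}

// OnEnd enqueues a span without blocking. Spans arriving after
// shutdown, or when the queue is full, are dropped.
func (p *processor) OnEnd(span string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.stopped {
		return
	}
	select {
	case p.queue <- span:
	default: // queue full: dropped
	}
}

// Shutdown prevents new spans from being accepted, then drains the
// queue completely so every already-ended span is exported, even if
// the export happens after e.g. a Lambda freeze/thaw.
func (p *processor) Shutdown() {
	p.mu.Lock()
	p.stopped = true
	close(p.queue)
	p.mu.Unlock()
	for span := range p.queue {
		p.exported = append(p.exported, span)
	}
}

func main() {
	p := newProcessor(16)
	p.OnEnd("a")
	p.OnEnd("b")
	p.Shutdown()
	p.OnEnd("c") // arrives after shutdown: ignored
	fmt.Println(p.exported)
}
```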
B: Okay, that makes sense. So is that what this issue is about? That force flush will export all the ended spans, but not necessarily all of them? It won't put up, essentially, a...
G: ...a semaphore saying "no more spans from here". So it actually won't export all the ended spans; they can be kind of stuck in a state where they're not consumed out of the queue, or into the batch, yeah. And so, although they're ended, if you call force flush real quick after that, they might not be exported. Yeah. So...
D: ...two places, right. So there's the batch, which is a slice of spans that are ready to be exported, and then there's the queue, which is a channel that the span processor's OnEnd method writes onto; so it throws it onto that queue. You could make it a zero-length queue and block on full queue, but then you're essentially using the synchronous span processor. So it's possible for a span to have been ended and put into that channel, and, since it's non-blocking, the goroutine immediately...
B: Ah, it's not in the caller's domain; it's in the batch span processor itself. What I'm thinking is we might be able to just add... I guess we can't, because it's in the trace pipeline.
B: Maybe we can add an option, with, like, a synchronous or a durability guarantee or something like that, where, if you export and it hits the, you know, goes to the batch span processor, it's guaranteed to be in the queue; you know, it'll block until it's actually in the batch, or something like that.
B: Like, I get that the idea here is that we wanted the batch span processor to be as performance-oriented as possible. So if you're using it in a really high-cardinality system, you know, not blocking on the fact that it has to get through a queue into a batch, just returning as fast as possible, which I think is in the specification; that was the design there, I haven't looked at it in a while. I have a question...
F: This instrumentation is running a collector very locally, like, it's not the same process, or not necessarily the same process, but it's within the same kind of local area, attached. Is there any reason why you can't just use the simple processor, the synchronous processor?
D: That's a direct monetary cost that our users end up paying, because we charge by the millisecond for Lambda. It may still be worth looking at the batch span processor with a queue length of zero and blocking on the full queue, because that will still batch up the sends to the collector.
B: Yeah, and I think you may not even need to block on full queue either; you'd just drop, you know. That's the other alternative, as long as the queue length is zero, yeah. But if the...
D: If the queue length is zero, then it does drop... and so, I think... I don't know, I'd have to double-check that it does. So, if you look at the enqueue function, it's got two branches: one where, if block-on-full-queue is set, it will do a blocking send; and if not, it goes into a select where it tries to send, and in the default case it increments the dropped-spans metric counter.
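The two enqueue branches described can be sketched like this. This is a paraphrase of the behavior being discussed, not the SDK's actual implementation; the names are illustrative.

```go
package main

import "fmt"

// bsp is a toy stand-in for a batch span processor.
type bsp struct {
	queue        chan string
	blockOnFull  bool
	droppedSpans int
}

func (b *bsp) enqueue(span string) {
	if b.blockOnFull {
		// Blocking branch: waits for room in the queue. With a
		// zero-length queue this behaves like a synchronous processor.
		b.queue <- span
		return
	}
	// Non-blocking branch: try to send; if the queue is full, count
	// the span as dropped instead of waiting.
	select {
	case b.queue <- span:
	default:
		b.droppedSpans++
	}
}

func main() {
	b := &bsp{queue: make(chan string, 1)}
	b.enqueue("a") // fits in the queue
	b.enqueue("b") // queue full: dropped, counter incremented
	fmt.Println(b.droppedSpans)
}
```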
B: Okay, I'll take your word for it; I'd have to take a look again. But yeah...
B: It doesn't sound great. Thank you.
G: Yeah, so, to add on to one of the layers: even through testing we were able to get it so that, even if it does enqueue, it's possible to get stuck where the span is neither in the queue nor the batch when you're calling force flush. We were just looking into it earlier, and Blai thinks he's found where, if your other goroutine, the one that's normally processing the queue, runs at the right time...
G: ...it can be trying to consume out of the queue, but it won't have the lock for putting it into the batch, because force flush has it. And so that's an interesting layer added on top that we're looking at too, but yeah.
D: Well, yeah, the problem with that is we would have to effectively hold that lock in processQueue constantly, then, and only release it when the timer ticks, which means that if you try to call force flush, you're then blocked waiting on that lock until the batch span processor's next timer tick, or until it next reads off of the queue.
B: Yeah, I've looked at this many times and there's been a lot of discussion about it, but I still think that there's just an algorithmic refactoring that needs to be done for this processor, and this kind of stuff is motivation for maybe taking another look at it. I'd like to make sure that we have use cases for all of the things that we're talking about here, and all the failure cases, so that we can try to reproduce them, because I think that there are just some concurrency patterns that aren't correct here, and I think we could probably try to redesign this a little better. I've looked at doing this in the past, and I've gotten about halfway through with just, like, refactoring through some channel pipelines, and it's just challenging to get right.
D: Well, certainly, at least you're going to have to wait until force flush returns. But the synchronous... when you end a span, it has to go through some amount of processing until it gets to a place where force flush will guarantee that it gets sent, and blocking there is one of the things we're trying to avoid here, so that we can end a span and immediately move on, and we don't spend a lot of time waiting at span end.
D: But that then leads to the potential that something else, concurrently, may be adding more onto the queue, right? Which then presents the problem that, if I've got GOMAXPROCS greater than one, I might end up in a situation where force flush sits there waiting for a long time while other stuff is still throwing things onto the queue.
B: Yeah, I don't know, I've seen a few patterns to deal with this, mostly around, like, passing state across channels to handle the locking, and then you can also do these, you know, essentially buffering-in-the-process approaches. I think you can also pass semaphores on the queue to, say, signal a force flush, and anything after that marker is, like, after force flush was called...
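The "semaphore on the queue" pattern can be sketched as follows. This is a hypothetical illustration of the pattern being discussed, not code from the SDK: a flush marker travels through the same channel as the spans, so everything enqueued before the ForceFlush call is exported before it returns.

```go
package main

import "fmt"

// item carries either a span or a flush marker through the queue.
type item struct {
	span  string
	flush chan struct{} // non-nil means "flush marker"
}

type processor struct {
	queue    chan item
	exported []string
}

// run is the worker goroutine that drains the queue.
func (p *processor) run() {
	var batch []string
	for it := range p.queue {
		if it.flush != nil {
			// Export everything that arrived before the marker,
			// then signal the ForceFlush caller.
			p.exported = append(p.exported, batch...)
			batch = nil
			close(it.flush)
			continue
		}
		batch = append(batch, it.span)
	}
}

func (p *processor) OnEnd(span string) { p.queue <- item{span: span} }

// ForceFlush returns once every span enqueued before it was exported.
// Spans enqueued after it carry no such guarantee, matching the
// semantics discussed above.
func (p *processor) ForceFlush() {
	done := make(chan struct{})
	p.queue <- item{flush: done}
	<-done
}

func main() {
	p := &processor{queue: make(chan item, 16)}
	go p.run()
	p.OnEnd("a")
	p.OnEnd("b")
	p.ForceFlush()
	fmt.Println(p.exported)
}
```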
B: There is a concurrency pattern that you're describing there. But, like, I don't know if it's an error to say that, once you call force flush, anything after that is not going to get flushed; I think that's just the way concurrent systems are going to operate. If you keep trying to send something after you call force flush, there's just no guarantee that it's going to get sent at the end. In that case, you really want shutdown.
E: I think... yeah, well, two things. One, I don't think that was quite what Anthony was saying; I thought you were talking about the happens-before relationship between span End and a following call to force flush: whether or not somebody should assume that, if the call to force flush happens after span.End returns, it will include that span and the work that it does.
B: Right, okay, that makes sense. Okay, I guess we have issues tracking this. These are not immediately obvious, but I'd like to, or would like somebody to, dive into this again, because I think that there is a solution here that could be engineered to make this work. I'm sure, looking for people who've worked at Google already, they've probably already solved this, but I'm not seeing anybody throw their hands up yet. Cool, I want to keep this moving.
B: We only have 18 minutes left in the meeting. Garrett, thanks for sharing this; hopefully we can get some eyes on it going forward. Go ahead.
B: Sorry, I just said "awesome, thanks". Okay, cool. Robert, you're up next, on the docs guidelines PR.
A: I hope that it's concise and easy... You can go to... I've made changes, like guidelines for creating docs that, in my opinion... I also talked with a technical writer at Splunk, who is also probably willing, in the future, to contribute to OpenTelemetry documentation, to make it somehow consistent and approachable both for devs as well as, for example, SREs, who first want to have some initial insight on GitHub, and then, if there's something more to read, for example in the code, they just want to have a hyperlink to godoc.
A: So he said that, in his opinion, each Go module or each basic component should have its own README, and it should be possible to get to it from the root README. It doesn't have to be directly: for example, for exporters we could have a dedicated exporters README, and then under that one for each of them.
B: Cool, yeah, thanks for adding this; I think these are some useful guidelines. One thing I want to ask is about the README itself, and why we wouldn't want to put this in the godoc.
B: I know that in the past Jana specifically called out that a lot of the people working with this are going to be developers, and their first place to go is pkg.go.dev, where this kind of information is really useful to have.
B: Maybe not. Okay, that seems to make sense. So how do we make a demarcation here, though? Because I think this is really useful, and I glanced over this, I don't know all the details you've maybe captured here, but, like, I think a README is a really useful thing for, like you're saying, the SRE or the operations side of things. But we probably want a demarcation for people that are going to be using the code.
B: You know, specifically here, like, it's really nice that it's contained there; but the documentation side of things, having, you know, specific code instructions, is also really nice. So how do we make sure that it's clearly documented where you'd want to include one versus the other?
A: I do not have the answers yet, for sure. I think that "how to use", like the examples, is the greatest place; this is where I always look and where I benefit the most, and a lot of packages have this. So that's why I just added a bullet point for examples. But for more details, I'm not sure; maybe we'll just know more if we create more READMEs, because right now we don't have a lot of them.
D: Yeah, I think this is definitely a thing we can learn by doing. It's a really good practice that I think we probably should have, and so, if we can get started doing it, then we'll learn more about what works for us, and hopefully we'll get feedback from end users as well about what's working for them.
B
Yeah, awesome, that sounds good. I know we moved away from this at some point because they were just becoming stale, but I think that if we, like you're saying, solidify on it, then we can probably iterate. So, sounds good, thanks Robert. I think you're up for the next one as well.
A
Yeah, the next one is basically the first PR. I think it should be reviewed and merged as soon as possible, because in one of my previous contributions, for the Sarama instrumentation, it turned out that I introduced a race condition, and basically this PR should fix it.
B
Yeah, I have seen this. Okay, I see what you're talking about, yeah. We should inverse this, okay; or, we should not merge this right away without a review. Yeah, yeah.
B
Yeah, Aaron, I can't believe it, so all these other people have already reviewed it. Yeah, no, not at all. Cool, yeah, I would love to get this reviewed, because, exactly, I don't want this blocking anymore. Cool, thanks. And then...
A
What was the next thing you had? And in general, I basically spent a few days reviewing some PRs for this repository, and I think for some of them it would be good if there were more eyes, just to not lose the attention of the developers who want to contribute, because they were actively responding to and addressing comments.
D
I really want to be able to spend some more time looking at what's in the contrib PRs, even if it's just to say this isn't something we can take right now, which I think is going to have to be the answer to some of these. But there are absolutely PRs for things that exist in contrib that need to be updated, and that I need to be better about reviewing.
B
Yeah, I think it's really easy to kind of blame ourselves, because for months now Anthony and I have been talking about this; it's just kind of been on the back burner. But maybe there's some more administrative structure we can add here. I think this idea, Robert, you seem to be kind of taking the reins a little. I talked in the past about maybe splitting out the trust hierarchy to provide more...
B
You
know
permissions
or
something
like
that
beyond
what
just
hotel
is
currently
offering,
where
there's
a
maintainer-
and
you
know
I
guess-
approver
and
then
collat
or
a
member
of
the
org,
or
something
like
that,
but
I'd
love
to
maybe
like
put
a
little
bit
more.
Maybe
we
can
do
like
a
trust
chain
here
for
parts
of
the
repo
other
people
can
get
additional
permissions
or
something
like
that
because
having
even
two
maintainers
try
to
go
through.
All
of
this
is
hard
to
keep
up
so
I'd
like
to.
B
Maybe
we
could
try
to
look
at
including
more
people
here,
but
maybe
in
the
meantime
I
know
robert,
if
you
have
specific
prs
that
you
think
are
ready
to
merge,
feel
free
to
slack
them
to
me.
I.
B
Yeah-
and
I
I
appreciate
that
as
well,
you
know,
I
think,
that's
also
really
useful
so
yeah.
A
B
That
said,
as
anthony
also
pointed
out
that,
like
we
have
a
limited
amount
of
time,
and
that
like
we
may
need
to
just
not
be
able
to
you
know,
accept
things
currently
is,
is
maybe
the
answer
to
some
of
these,
because
it's
just
it
has
been,
and
I
think
it's
it's
going
to
change
soon,
but
our
priority
in
the
project
is
to
get
a
stable
release
for
the
tracing
and
the
other
parts
of
the
project
in
the
main
repo.
B
After
that,
I
think
that
we're
going
to
try
to
be
more
productive
in
this
repo,
but
I
just
yeah-
I
don't
want
to
like,
like
you're
saying,
like
give
people
the
wrong
impression
that
we're
not
interested
in
their
contributions
right
now.
It's
just
that.
We
don't
have
the
time
to
review
it
right
now,.
D
Yeah-
and
I
know
ted
has-
has
talked
about
trying
to
find
an
additional
pool
of
people
to
be
responsible
for
instrumentation
separate
from
the
sdk
and
api
maintainers.
So
maybe
that's
you
know
as
he
kicks
off
that
effort.
That
may
be
something
that
that
helps
out
here
in
terms
of
having
multiple
levels
of
trust
and
responsibility
within
the
project,
I'm
all
for
it,
but
I
don't
know
that
github
gives
us
great
tools
for
managing
it.
D
I
know
it's
been
an
issue
in
collector
contrib
as
well,
where
they
use
code
owners
to
add
people
as
code
owners
for
individual
components.
You
know
so,
like
the
splunk
exporters
have
people
from
splunk
added
to
the
aws
exporters
of
people
from
aws
added
as
code
owners,
but
that
doesn't
actually
give
us
approval
capacity
and
ordinary
merge
capacity.
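The mechanism being described is GitHub's `.github/CODEOWNERS` file, which maps paths to users whose review is automatically requested on matching PRs. A sketch of what such component-level entries look like; the paths and handles below are illustrative, not the real collector-contrib file:

```
# .github/CODEOWNERS
# GitHub requests review from the listed users on any PR that touches
# the matching path. This only routes review requests; it does not by
# itself grant approve-and-merge permission, which is the gap noted here.
exporter/splunkhecexporter/   @example-splunk-dev
exporter/awsemfexporter/      @example-aws-dev
```

Making these reviews count toward required approvals additionally requires the listed users to have write access and a branch-protection rule with "require review from code owners" enabled, which is why code ownership alone reads as just a notification signal.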
D
So
at
best
it's
just
another
signal
that
yeah
there's
someone
from
this
organization
who
gets
notified
on
all
of
the
prs,
and
maybe,
if
the
maintainers
look
at
it
and
say
I
trust
you
know,
they've
reviewed
it
and
approved
it,
and
I
trust
that
they'll
put
a
rubber
stamp
on
it.
But
that
also
gets
to
you
know,
like
robert's
point
of
if
he
doesn't
trust
himself
to
make
that
call,
then
we
shouldn't
put
the
responsibility
on
him
either,
so
we
can
try
to
find
informal
ways
to
do
that.
B
Yeah,
okay,
so
maybe
this
is
just
a
request
for
comments.
Then,
if
people
have
ideas.
F
B
I
think
that's
what
my
mind
went
to
initially
as
well,
and
I
think
you
can
also
set
the
number
of
approvals.
It's
just
that
those
those
approvals.
I
think
I
don't
know
if
anthony
correct
me
from
wrong,
but
I
think
that
the
code
owner
stuff
doesn't
influence
what
those
approvals
are.
They
have
to
be
in
a
particular
group
or
something.
D
Yeah,
I'm
not
100
positive
on
that.
I
don't
know
why
we
weren't
able
to
do
that
in
collector
can
trim,
but
it
could
be
something
to
explore.
If
we
were
to
go
down
that
path,
though
we
would
have
to
go
back
to
the
discussion
we
had
had
some
months
ago
about
clearing
still
or
clearing
approvals
on
an
update.
So
if
we're
gonna,
if
we're
gonna,
automatically
merge,
when
we
get
two
approvals,
we
can't
have
one
approval
come
in
significant
changes
come
in
and
then
a
second
approval
come
in.
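Both knobs discussed here exist in GitHub's branch-protection settings. A sketch of the relevant fragment of the REST API payload (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`); the values are illustrative, not what the repo actually uses:

```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 2,
    "dismiss_stale_reviews": true,
    "require_code_owner_reviews": true
  }
}
```

`dismiss_stale_reviews` is the setting that addresses the scenario above: an approval is discarded when new commits land, so two approvals can never bracket an unreviewed change.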
B
Yeah,
that's
a
good
point
robert.
I
know
you're
also
like
really
good
about
this
kind
of
like
administrative
stuff.
I
don't
know,
if
maybe
you
want
to
put
together
a
proposal
around
this
with
kind
of
that
idea.
There.
A
I
mean
right
now
I
do
not
have
any
idea.
To
be
honest
also,
the
main
reason
is
right
is
also
that
I
think
that
such
you
know,
administrative
thing,
you
get
the
best
ideas
when
you
know
the
people,
and
I
do
not
feel
I
know
you
well
enough,
like
you
know
your
experience,
your
time,
how
much
you're
involved,
because
you
know
it
doesn't
make
sense
to
make
an
ideal
process.
If
you
don't
have
people
who
who
fit
into
the
process,
I
know,
but.
B
Thank
you,
but
thank
you
and
I
would
say,
don't
trust
me
is
the
only
thing
I
would
say,
but
okay,
that's
totally
fair,
and
maybe
I
can
find
some
time
to
take
some
more
to
look
at
this.
I
I'm
also
interested
in
this
kind
of
thing
so
yeah.
I
would
love
to
maybe
put
together
a
proposal
and
then
we
could
share
it
at
one
of
these
meetings
and
see
what
everyone
thinks
and.
B
Have
to
run
up
the
chain
to
the
tc,
because
I
don't
know
if
we
can
do
this
in
a
vacuum.
Well,
cool.
I
think,
with
that
we've
run
through
the
agenda.
I
had
some
more
issues
to
talk
about,
but
I
will
hold
off
on
them
because
they're
partial
and
I
don't
want
to
waste
everyone's
times
on,
I'm
just
waxing
poetic,
and
I
think
that
everyone
could
use
the
next
three
and
a
half
minutes
back
so
cool
everyone
thanks
for
joining,
and
we
will
see
you
all
virtually
over
next
week,
thanks
tyler.