From YouTube: 2023-02-09 meeting
Description
Instrumentation: Messaging
A
B
Oh yeah, what's the weather been like?
C
B
I used to live in Colorado, and that was the worst, because you'd go to work bundled up in a jacket, and by the time you got home you'd have to be in short sleeves, almost in shorts, yeah.
C
B
Yeah, all right, it's always those crazy temperature swings. It was also really hard to grow things there for that reason: they'd be freezing in the morning and dying of heat by the end of the evening. Yeah, yep.
A
A
B
But we can wait a little bit longer. We can jump in as well, because I'd love to talk to you about stuff.
C
B
It's like, okay.
B
Yeah, we were talking with Mike earlier in the week. It's kind of cool that you guys can actually connect to Zoom meetings on the room conference system. That's kind of cool.
B
B
Okay, cool. Let's see, we are three past, so we can jump in here. If you haven't already, please add yourself to the attendees list, and if you have something you wanted to talk about, add it to the agenda.
B
We'll start in just a little bit. Actually, I'll just wait in case somebody wants to add something there.
B
Perfect, okay, cool, jumping in. Looks like Aaron is also on this one, so I'll pass it off to you, Aaron, to talk about it.
C
So I just put an update this morning in the discussion about the cumulative histogram, and really what it comes down to is this.
C
C
C
C
Of the two approaches that I tried, the first was just having a pool around the different things that are highly allocated in the histogram, the ones that we really care about, which are the slices: the counts, the bounds, and also the slice of data points, and then having a manual release. What I found was that the benchmark is incomplete in this sense, in that it would always report the same number.
C
It would always report, like, six allocations per run, even though I could put atomic counters into the code path and measure that, no, it didn't do it every single cycle, but it reallocated memory on enough of the cycles that the allocation counter went up.
C
C
We could continue to explore that. The other approach that I took, which was really successful, was the collect-into approach. The idea is that instead of collect being something that returns a concrete memory slice, we have something that accepts a buffer that we write into, essentially turning it into something similar to the Write interface.
C
So
that
that
took
a
little
bit
of
of
finangling
I
could
pull
up
the
proof
of
concept
yeah
their.
But
let's
see,
if
you
go
to
probably
the
the
manual
reader
or
the
reader,
that
would
have
the
the
signature
there.
So
here's
collect
into
line
151
essentially
take
the
resource
metrics
from
the
output.
Put
it
on
to
the
the
input
of
the
function.
C
99% of the logic stays the same. The only real difference is, instead of just making a new slice, we check to see if the old slice has the capacity, and reuse that slice wherever we end up slicing it. That has the advantage that we have zero allocations if we're on the very happy path.
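The reuse-or-allocate check described here can be sketched roughly as follows. This is an illustrative stand-in, not the actual SDK code; the function name is hypothetical:

```go
package main

import "fmt"

// reuseOrAlloc returns a slice of length n, reusing buf's backing array
// when it has enough capacity and allocating only when it does not.
// This is the core trick of the collect-into approach: on the happy path
// the caller's buffer is recycled and no allocation happens.
func reuseOrAlloc(buf []float64, n int) []float64 {
	if cap(buf) >= n {
		return buf[:n] // zero allocations: reslice the existing backing array
	}
	return make([]float64, n) // cold path: grow
}

func main() {
	buf := make([]float64, 0, 8)
	out := reuseOrAlloc(buf, 4)
	fmt.Println(len(out), cap(out)) // 4 8: same backing array reused
}
```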
C
But there are other limitations to this, so I just kind of wanted to throw this out as: hey, there's new work to be done. We kind of need to make a decision on whether or not we want to introduce something on the reader and explicitly be able to achieve, like, zero-allocation code paths, or do we want to rely on something that is more probabilistic, in that we can use the pools?
C
We can maybe have zero API change happen, but it doesn't always guarantee that we will not be allocating memory and not incurring those costs of allocations.
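The pool alternative being weighed here, with no API change but only probabilistic reuse, would look something like this minimal sketch. The `dataPoint` type and helper names are illustrative, not the SDK's:

```go
package main

import (
	"fmt"
	"sync"
)

// dataPoint stands in for a histogram data point; the real SDK type is richer.
type dataPoint struct {
	Counts []uint64
	Bounds []float64
}

// pool hands out previously released data points. sync.Pool may drop
// entries at any GC, which is why this approach is only probabilistic:
// it cannot guarantee a zero-allocation path.
var pool = sync.Pool{
	New: func() any { return new(dataPoint) },
}

func getDataPoint() *dataPoint { return pool.Get().(*dataPoint) }

func putDataPoint(dp *dataPoint) {
	dp.Counts = dp.Counts[:0] // keep capacity, drop contents
	dp.Bounds = dp.Bounds[:0]
	pool.Put(dp)
}

func main() {
	dp := getDataPoint()
	dp.Counts = append(dp.Counts, 1, 2, 3)
	putDataPoint(dp)
	fmt.Println(len(getDataPoint().Counts)) // 0: contents were cleared on release
}
```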
A
B
So I think there's a few things I thought of; I've been taking a look at this, but one of the things I wanted to ask is: the resource metrics, does this data structure need to get changed at all, to do some sort of clearing? I'm thinking of Josh's prototype, where he had something like a reset, or, you know, a set-length or something like that. Does this need to be updated in that way?
C
No. If you actually go to the pipeline, you can see where I reset some of the slices, right there, so 166 to 170. That's me resetting the slice, it just scrolled off, that contains the scope metrics.
B
Right, and so is this done for, like, all the aggregators as well, though?
C
C
So if you look at, I believe, under the histogram, it does it in the aggregator, yeah.
C
It
does
I
I
do
similar
things
in
the
aggregator,
so
there's
a
trade-off
that
that
means
that
pushes
this
kind
of
complexity
into
the
aggregators
somewhat
this
is
this
is
a
very
rough
proof
of
concept.
This
is,
you
know,
get
it
working,
so
there's
plenty
of
opportunity
to
clean
this
up.
C
The other thing that I found out while doing this, and I need to write up a bug report on this, is, you know, our tests assert metrics. All of the data types are the concrete struct, but technically the aggregators...
C
C
No, from the metricdata types: like, a *metricdata.Histogram is also an aggregator, as well as a metricdata.Histogram.
C
C
I don't think it, like... we don't ever return a pointer at this point, but they are aggregators because of the methods, just because of how we've defined them, right.
B
Yeah, okay, I'd have to look at the bug report. But I think the question I had for this one, though, is just, like: this right here, the bounds, right? If there already is an allocation of this slice for the bounds, wouldn't we want to reuse it? Or is that something like, do we need an API to search for an existing bound?
C
C
It doesn't have any kind of exterior bound that's pooled anywhere. If it's already allocated into this struct, it will reuse it, and if it doesn't, then it doesn't. That is something that we could explore as well; we could have a hybrid model. One of the things that I pointed out in this model is, when you're doing the... I think it's in this line.
C
So
there's
some
point
where
it
needs
to
check
that
the
data
point
the
the
type
that
the
type
of
aggregator
that
is
that
was
passed
to
it
is
a
histogram,
and
if
it's
not
it,
Just
creates
a
new
histogram
data
type,
which
would
be
an
allocation
that
could
be
improved
by
having
pools
for
each
different
type
right.
Well,.
C
C
Might... no, but I gotta find it, so...
A
Sure, sorry, so...
C
Here
is
a
concept
interesting:
let's
see,
file
changes.
C
Yeah
aggregate
into
right-
oh
yeah,
right
here,
I,
do
about
a
check
if
that
is
a
histogram,
if
it's
not
I,
just
create
a
histogram.
At
that
point
we
know,
like
the
we
know
at
our
code,
point
that
that
is
memory
that
has
been
given
to
us
to
be
reused.
So
we
can,
we
could
put
whatever
data
type
was
in
there
into
the
appropriate
pool
so
like.
If
that
was
a
sum
that
was
underneath
this,
we
could
put
it
into
the
pool
and
then
we
could
get
it
from
the
histogram
pool.
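The check-and-reuse step being described, with per-type pools as the possible improvement, might be sketched like this. All type, pool, and function names here are hypothetical stand-ins for the metricdata types, not the actual SDK code:

```go
package main

import "fmt"

// Aggregation stands in for the metricdata aggregation interface.
type Aggregation interface{ isAggregation() }

type Histogram struct{ DataPoints []int }
type Sum struct{ DataPoints []int }

func (Histogram) isAggregation() {}
func (Sum) isAggregation()       {}

// histPool is a hypothetical free list standing in for a per-type pool.
var histPool []Histogram

// intoHistogram reuses the passed-in aggregation when it is already a
// Histogram; otherwise the old value could be recycled into its own
// type's pool, and a Histogram is taken from the histogram pool instead
// of allocating fresh.
func intoHistogram(agg Aggregation) (Histogram, bool) {
	if h, ok := agg.(Histogram); ok {
		return h, true // same type: reuse the memory handed to us
	}
	// Different type: the caller gave us memory to reuse, so its old
	// value could be returned to the appropriate pool here.
	if len(histPool) > 0 {
		h := histPool[len(histPool)-1]
		histPool = histPool[:len(histPool)-1]
		return h, false
	}
	return Histogram{}, false // cold path: fresh allocation
}

func main() {
	_, reused := intoHistogram(Histogram{DataPoints: []int{1}})
	fmt.Println(reused) // true: same type, memory reused
	_, reused = intoHistogram(Sum{})
	fmt.Println(reused) // false: wrong type, a new histogram is needed
}
```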
B
Pools
is
that,
like
once,
you
start
leaking
that
to
some
sort
of
third
party,
like
the
ownership
of
that
needs
to
be
the
person
who
maintains
I.
Think
the
pool
like
that's
that's
been.
The
big
problem
with
this
approach
is
like.
If
you
have
this
into
the
pool,
management
needs
to
be
done
by
the
person
passing
the
data
structure
into
the
pool
right
like
because
otherwise
you
have
two
pools
fighting
over
the
same
data
types,
which
is
guaranteed
to
have
memory
violations
yeah.
B
A
E
B
Okay, okay, well, I might have to leave and come back, but I think that's probably enough to get to the second point that I wanted to get to. Shoot, I can't even share a screen anymore. Oh, maybe you can.
B
Here, I'm gonna... I'll be right back. Let me just quit this really quick.
A
B
Well, sorry, that got all kinds of screwy. Can you all hear me? Yes, yep. Okay, welcome back; I love technology. Yeah, okay, cool. So I think I wanted to go back to that original interface that you had, or was it, like, the manual reader? Yeah, the collect-into. Let's try this one more time. This is it!
B
This
is
something
that
we
could
say,
like
you
know:
hey
user,
actually,
a
manual
reader's,
a
concrete
type
right
like
it's.
Yes,.
B
Reader
right,
okay,
all
right
so
yeah,
maybe
we
could,
unlike
the
reader
interface
like
we
have
this
like
produced
into
I.
Think
here
like
maybe
we
could
also
just
add
like
another
like
a
comment
or
something
to
the
user,
saying
like
hey,
you
might
want
to
check
and
see.
If
the
reader
implements
this,
you
know
collect
into
interface,
and
if
it
does,
then
you
can.
You
can
do
this,
so
this
seems
like
something
we
could
add
that
that
is
possible.
B
B
Oh yeah, I guess you don't, yeah, the collect-into is not... yeah. Where's the reader? I think the reader is defined here. Yeah, okay.
B
Like
yeah,
something
else
could
just
cast
that
the
thing
that
I
was
wondering,
though,
is
that,
okay,
so
that
that's
something
we
could
do
after
a
release
is
provide
some
sort
of
memory
optimization
for
her
readers,
but
the
the
other
approach
here,
the
sink
pool
in
the
I'm
based
the
push-based
reader
right
like
is
this
something.
What
needs
to
change
here
like?
Is
this
something
we
could
do
after
the
release
as
well
or
are
there
is
still
interface
so.
C
The
change
that
would
need
to
happen
is
there
would
have
to
be
some
public
API.
It
doesn't
have
to
be
attached
to
the
reader
interface.
That
is
an
explicit
freeing
of
the
the
memory
that
that
has
passed
to
it.
The
really
I
think
I
put
it
in
the
comments
somewhere
there.
There
is
a
release
function.
It
was
literally
just
a
fun,
a
free
freeform
function
that
took
a
resource
metric
and
just
iterated
through
and
released
any
histogram
data
point
slices
and
Bounds
and
count
slices
back
into
the
pool.
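A freeform release function like the one described, walking a resource metrics value and returning the hot slices to their pools, could look roughly like this. The type shapes are simplified stand-ins for the metricdata hierarchy, and the `released` counter is added here only so the walk can be observed:

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified stand-ins for the metricdata hierarchy.
type HistogramDataPoint struct {
	Bounds []float64
	Counts []uint64
}
type Histogram struct{ DataPoints []HistogramDataPoint }
type ResourceMetrics struct{ Histograms []Histogram }

var (
	boundsPool = sync.Pool{New: func() any { return []float64(nil) }}
	countsPool = sync.Pool{New: func() any { return []uint64(nil) }}
	released   int // illustrative counter, not part of the described design
)

// Release walks a ResourceMetrics value and hands the heavily allocated
// slices (bounds and counts) back to their pools, so the next collect
// cycle can reuse them instead of allocating.
func Release(rm *ResourceMetrics) {
	for i := range rm.Histograms {
		for j := range rm.Histograms[i].DataPoints {
			dp := &rm.Histograms[i].DataPoints[j]
			boundsPool.Put(dp.Bounds[:0])
			countsPool.Put(dp.Counts[:0])
			dp.Bounds, dp.Counts = nil, nil
			released++
		}
	}
	rm.Histograms = rm.Histograms[:0]
}

func main() {
	rm := ResourceMetrics{Histograms: []Histogram{{DataPoints: make([]HistogramDataPoint, 2)}}}
	Release(&rm)
	fmt.Println(released, len(rm.Histograms)) // 2 0
}
```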
B
Yeah, okay, so I think we're thinking the same thing. So, similar to here, where you have this produce, yeah, like the SDK produce-into, right? I think that, since this is also not exported, right, yeah, we could add this method to this producer holder. Well, it really is just, like, yeah, we just add this method, right, and then from the periodic reader... I think that once you have it there... I don't know why it's not... I guess you didn't update the periodic reader here.
B
Cool
so
where's
the
meat
and
potatoes
here
register,
yeah
I
think
collects
right
here
right
so
since
right
here,
this
is
like
this
produce
line
right.
We
could
also
update
it.
So
this
is
where
the
the
pool
can
be
held
right
to
the
periodic
where
you
could
hold
the
pool
from
this
this
level
and
go
ahead.
C
B
Oh yeah, not here, right, inside this; this is the private collect, right, is what I was thinking. So you wouldn't expose it: essentially, you would use a pool, and instead of calling produce, you would call the produce-into, or some sort of, like, internal-only version, where it would do that self-allocation. The only problem there, though, is that then you need to unify into this export pipeline, but with the export pipeline, you would be handing it a pooled resource, right?
B
So
essentially,
like
the
pool
would
look
like
this.
Like
you
know,
you
pull
from
the
pool
at
the
start,
you
send
it
into
some
sort
of
collect
into
not
the
public
collect
here,
yeah,
send
it
to
the
exporter
and
once
it
returns
I
would
then
you
know
essentially
free
it
put
it
back
into
the
pool
at
that
point,
and
this
is
I
think
why
I
was
I.
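The lifecycle just described (get from the pool, collect into it, export, return it to the pool) can be sketched as follows. Everything here is an illustrative stand-in, not the periodic reader's real code:

```go
package main

import (
	"fmt"
	"sync"
)

type ResourceMetrics struct{ Scopes []string }

type exporter interface{ Export(*ResourceMetrics) error }

type printExporter struct{}

func (printExporter) Export(rm *ResourceMetrics) error {
	fmt.Println("exporting", len(rm.Scopes), "scopes")
	return nil
}

var rmPool = sync.Pool{New: func() any { return new(ResourceMetrics) }}

// collectInto stands in for the internal produce-into: it fills the
// caller-owned buffer instead of returning a freshly allocated one.
func collectInto(rm *ResourceMetrics) {
	rm.Scopes = append(rm.Scopes[:0], "scope-a", "scope-b")
}

// collectAndExport is the private collect path: the pooled value never
// escapes past the exporter, which by the documented contract must not
// hold onto the memory after Export returns.
func collectAndExport(e exporter) error {
	rm := rmPool.Get().(*ResourceMetrics)
	defer rmPool.Put(rm) // freed back into the pool once the exporter is done
	collectInto(rm)
	return e.Export(rm)
}

func main() {
	_ = collectAndExport(printExporter{})
}
```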
B
We
were
thinking
the
same
thing
because
I
added
to
the
metric
data
type,
there
was
like
a
PR
that
update
the
documentation,
saying
like
once
you
return
from
the
export.
You
can't
hold
any
memory
state
that
this
just
had
right,
like
you're,
expected
to
not
hold
the
memory,
because
it's
going
to
be,
you
know,
volatile
after
you,
after
you
return,
I,
think
that
that
should
work
at
that
point
right.
B
Okay,
so
I
mean,
if
that's
the
case,
like
the
the
point
of
this
exercise,
also
I
think
this
is
great
work
by
the
way.
I
didn't
say
that
so
thanks,
but
I
do
think
it
from
from
this
approach,
like
both
of
these
approaches
could
actually
be
implemented,
and
they
all
could
they
could
be
done
in
a
in
a
post,
1.0
release
right,
okay,.
B
Okay,
I
think,
if
that's
the
case
I
would
I
would
we
can
probably
just
move
this
out
of
the
let's
just
Escape
beta
right.
B
Okay,
post,
GA
I
think
that's
where
we
want
to
go.
Okay,
perfect,
all
right,
I
think
that's
good,
so
I
think
we
could
probably
also
add
comments
about
the
the
planned
work
to
try
to
support
this.
If
that
makes
sense,
Aaron
and
honestly
I
think
at
the
collect
into
as
well.
We
probably
not
want
to
like
a
loot.
I
mean
it's
the
wrong
word,
but
just
add
to
that
I
think
having
it
as
a
separate
interface
that
a
user
could
try
to
like
check
is
also
a
useful
thing
in
case.
C
E
B
B
So one of the things that I think of is, like, with testing, it'd be nice to not have to allocate something. Or, I guess you don't really have to allocate it, you just have to declare it above, but it's such a small thing, and it's not, as they said, the main way this would be used. So, like, I don't know if that's a valid reason to not use this as the default. I don't know, I could see...
B
B
Especially if we have benchmarks showing reductions in allocations. How many allocations did you get in a benchmark for this method, the collect-into? How many did we end up doing, if it's, like, the happy path?
C
So
the
happy
path
as
the
code
stands
right
now,
there's
one
allocation
per
there's
one
allocation
per
histogram;
okay.
That
can
actually
be
improved
if
we
return
a
pointer
to
a
histogram
as
the
the
type.
C
It
makes
sense,
so
it's
literally
just
the
histogram
aggregation
that
is
being
allocated,
so
it's
just
the
the
container
of
the
the
like
the
the
name,
the
what
you
would
call
it
and
the
the
one
slice
of
data
points
right.
But
the
data
points
are
are
not
allocated
the
the
whatchamacallits
in
the
happy
path.
B
C
The one thing that I saw here is: right now we currently only reuse aggregators if they're the exact same type, because pipelines store aggregators in a map. I don't know if ranging over that map is consistent between run and run. I know that it's supposed to be not consistent between execution and execution, like, you know, entry into the binary, but from one collect to the next collect within the same, you know, process space, I don't know if that's the same; I don't know if it's consistent.
B
C
B
B
You know, if we do decide to go this route and update this, I think I'm on board, because I think it would improve the SDK, but it would delay, I think, our evaluation of its stability by a little bit. So I just want to make sure that everyone's okay with that trade-off.
E
B
Yeah, I agree. Okay, I will put this back into the beta project. Done.
C
Okay, all right. Then, I suppose, as soon as I can, I will kind of lay out... I kind of have a rough idea of the steps we need to take to get the API working.
B
Right, yeah, exactly. I think the scope is just going to be that collect change: change the API of the collect so that it accepts the thing it's going to be returning into, right? Okay, okay, yeah. And then we can also split off that pool work for the reader, the periodic reader, to its own thing after the fact; I think that's also something we do. Okay, yeah, okay, cool, I will go back to sharing.
B
Okay, awesome, yeah, thanks for looking into that. Again, that's, as you've pointed out, quite a lot of work. Cool, I think then we'll just jump to this metrics beta; that's the only other thing that we have to do. I do think there is an issue coming that I was still researching right before this meeting, about the map allocations.
B
So
maybe,
let's
kind
of
jump
to
that
here.
So
there's
this
issue
that
was
open
for
it's
called
memory
leaks,
but
it's
really
unbounded
memory
use
for
the
cumulative
attribute
filters,
not
chemo,
just
any
attribute
filter
that
holds
State
So
currently
right
now,
this
just
kind
of
give
you
a
background
on
the
pr
and
then
we'll
dive
into
it.
The
attribute
filter
is
when
it
Aggregates.
It
looks
to
see
if
it's
already
done.
B
...some sort of filtering before, by seeing if the attribute set that it is being passed has been seen. So I think this is just a cache that's going on here, but the problem is that this just results in an unbounded amount of memory, because this map is never cleared. So if there's ever a situation where you're using, like, a delta aggregation, which would drop this underneath it, or something like that, this will just continue to grow unbounded.
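A stripped-down version of this pattern, just to show where the map grows without bound. The `filtered` type, `filter` function, and `seen` cache are assumed names for illustration, not the SDK's:

```go
package main

import "fmt"

// attrSet stands in for attribute.Set; a string keeps it comparable here.
type attrSet string

// filtered caches every attribute set it has ever filtered. The cache
// saves re-running the filter, but nothing ever deletes entries, so for
// a long-lived filter memory grows with the number of distinct incoming
// sets.
type filtered struct {
	seen   map[attrSet]attrSet
	filter func(attrSet) attrSet
}

func (f *filtered) aggregate(in attrSet) attrSet {
	if out, ok := f.seen[in]; ok {
		return out // cache hit: skip the filtering work
	}
	out := f.filter(in)
	f.seen[in] = out // unbounded growth: this map is never cleared
	return out
}

func main() {
	f := &filtered{
		seen:   map[attrSet]attrSet{},
		filter: func(s attrSet) attrSet { return s + ":filtered" },
	}
	for i := 0; i < 1000; i++ {
		f.aggregate(attrSet(fmt.Sprint(i))) // every distinct set adds an entry
	}
	fmt.Println(len(f.seen)) // 1000: one cached entry per distinct set, forever
}
```

The proposed alternative in the transcript is simply to drop the cache and run the filter on every call, trading the unbounded map for a bit more CPU per aggregation.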
B
So the alternative that's being proposed here is that we just change the memory unboundedness to a slightly more computationally intensive operation each time it's doing this attribute filter, which I think seems fair until we have a way to clear this. Which, I think... I don't know, I...
E
B
This is a valid way to do this, but one of the things that was also pointed out is that, right now, this seen map is using an attribute set as the key, which is syntactically valid: the attribute set is comparable. But the problem is that, semantically, it is, let's just say, a gray area, because only sometimes will this attribute set compare correctly to another attribute set that is created independently. Actually, I have a demo of this.
B
B
Okay, cool, so hopefully you can all see this; I created just, like, this really quick demo app. So here I've got a set of attributes that are exactly the same: they have the same key, the same value, but these are with a slice of strings.
B
Here's
a
similar
one,
exact
same
attributes,
A
and
B
key
values
exact
same
except
the
string,
and
so
what
I've
done
here
is
then
I've
also
got
like
a
map
and
that
map
uses
this
slice
attribute
using
a
set,
and
then
it
also
does
the
same
thing
with
a
string.
It's
just
the
value
of
these
Maps.
They
could
have
been
the
same
effect.
Actually
I
probably
could
have
simplified
this
to
make
the
sense
put
this
up
five
minutes
evaluating
this
sorry.
B
So
what
I'm
looking
at
here
is
like
you
can
test.
The
equivalence
of
just
these
attribute
sets
right
off
the
bat,
and
that
just
essentially
says
like.
Is
this?
What
happens
if
you
look
at
the
equivalence?
It
looks
at
this
distinct
interface,
which
is
like
an
interface
of
like
this.
B
It
can
be
an
array,
but
it
can
also
be
a
slice
depending
on
the
size
of
this,
and
it
also
then
will
look
at
like
okay,
if
I
do
a
retrieval
of
this
map,
that
has
a
slice
as
one
of
the
attribute
does
it
return
the
exact
same
value,
which
should
be
true,
and
if
it
does
it
with
a
string,
does
it
return
the
exact
same
value,
and
so
hopefully
this
works.
E
B
You just see very inconsistent results. So, like, the string map get... this is, like, the key here: if you do a map lookup with an attribute that has a slice in it, it returns false, meaning it got a different value. If you do the same thing with the string, it returns the value. So it does the comparison correctly, like I say, syntactically; it still works, because the attribute set is comparable, but...
B
If
it's
a
contains
a
slice
it
does
that
comparison
on
of
a
it,
looks
like
a
pointer
value.
So
yeah.
That's
it's
not
a
great
idea
and
we
have
a
lot
of
problems
because
we
use
this
attribute
set
throughout
our
code
as
a
key
from
for
maps
and
like
caching,
so
there's
another.
B
In
coming
related
to
this
for
the
SDK
that
we
have
to
I
think
address,
I,
don't
know
what
the
answer
is.
I
wanted
to
ask:
if
people
have
opinions,
I
think
that
there's
a
way
I
I,
don't
know
I
think
it.
We
need
to
look
at
maybe
even
building
another
attribute.
That
is
completely
comparable
correctly,
based
on
value
not
based
on
the
reference
type,
but
I
didn't
know.
If
anybody
else
has
thoughts
on
this.
C
So
one
of
the
things
so
I
don't
there
might
be
a
problem
of
building
an
attribute
set.
That
is
completely
comparable.
Just
because
slices
of
how
we
compare
slices
and
go
or
reference
types
and
go
in
general.
C
But
one
of
the
things
that
J
MCD
had
done
and
some
some
engineers
in
lightsep
is
to
put
an
ID
in
to
put
basically
a
hash
of
an
attribute
set
within
the
the
within
the
attribute,
like
as
a
private
token
that
won't
let
it
be
comparable
in
a
map
per
se.
But
it
would
let
you
do
a
lookup
based
off
of
the
hash.
So
you
could
have
a
reference
of
of
a
hash
that.
B
B
If the hash is, like, a byte array, which I think is comparable, yeah, then you could use that as, like, a key, right?
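That idea, hashing the set down to a fixed-size byte array that is comparable and usable as a map key, could be sketched like this, using FNV from the standard library. The `hashKey` helper is illustrative; the design mentioned in the meeting would live inside the attribute package itself:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hashKey reduces an attribute-set-like list of key/value strings to a
// fixed-size array. [8]byte is comparable, so it works as a map key even
// when the underlying attribute values (e.g. slices) are not.
func hashKey(kvs ...string) [8]byte {
	h := fnv.New64a()
	for _, kv := range kvs {
		h.Write([]byte(kv))
		h.Write([]byte{0}) // separator so ("ab","c") != ("a","bc")
	}
	var out [8]byte
	h.Sum(out[:0])
	return out
}

func main() {
	m := map[[8]byte]int{}
	m[hashKey("k", "v1", "v2")] = 1
	// An independently built, equal set hashes to the same key.
	fmt.Println(m[hashKey("k", "v1", "v2")]) // 1
}
```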
C
Yeah
exactly
that,
that's
essentially
what
they
ended
up
doing
in
a
lot
of
different
cases.
Okay,
there
is
some
complexity
around
that
and
it
might
be
an
extension
to
the
I.
I'd
have
to
go
and
look
it
up,
but
it
it
would
have
to
be.
It
might
be
an
API
extension
on
the
on
the
attributes
like
it
adds
more
API
surface,
but
that
is
something
that
that
may
be
a
potential
solution
to
this
kind
of
problem.
B
Okay,
when
I
open
the
issue,
could
you
see
if
you
could
find
a
link
to
that?
If
it's
a
public.
C
C
We took some time to create a distro around one of the beta versions, one of the earlier versions of the SDK, so that we could add exponential histograms for a...
C
It
is
it's
in
the
public
one
and
it's
there
I
believe
the
that
Implement
that
Improvement
the
attribute
Improvement
is
in
that
too
so
I'll
double
check,
but
I
think
that's
that's
where
we
would
find.
B
It
okay,
if
you
could
send
me
a
link,
that'd,
be
ideal.
All
of
an
issue
to
I
think
address
this
or
to
document
this,
and
then
we
can
include
that
link.
I!
Think
if
there's
that's
a
great
solution,
another
one
was
just
to
look
at
our
slice
implementation
in
the
attribute,
because
I
think
that's
where
this
is
all
stems
from
and
I'd
have
to
verify
this,
but
I'm
pretty
sure
arrays
are
comparable.
B
So
if
we're
actually
storing
it
as
an
array
underneath
it
I
think
there
was
an
idea
to
I
didn't
look
too
deep,
but
if
it
is,
it
should
be
comparable
after
that,
so
I'd
like
to
figure
out
why
that
would
be
the
case
could
also
be
because
maybe
it's
stored
as
a
interface.
That
is
an
array
that
could
be
something
a
problem.
B
Ways
we
could
probably
take
a
look
at
this.
The
hash
is
a
good
idea
as
well,
so
I
think
if
it's
worth
exploring
all
of
them
yeah.
That
being
said,
I
think
that
the
pr
that
I
did
link
to
there's
an
underlying
bug
that
I'm
gonna
open
an
issue
for,
but
I,
think
it's
also
worth
looking
at
the
issue,
because
I
don't
know
if
we
need
it,
even
if
we
fix
the
attribute
comparability
like
that,
that
lookup
is
not
necessarily
a
good
idea
for
an
ever-expanding
amount
of
attributes.
A
C
Okay, agreed, yeah. You're absolutely correct; that, I believe, was just a cache implementation. Okay.
B
Yeah
I
wanted
to
run
a
bike
because
I
I
couldn't
remember:
if
I
did
it
or
you
did
it,
but
I
think
I,
I
could
and
I
was
like
I
think
this
is
what
it
is
so
I
don't
know:
okay,
cool
back
to
the
agenda
we
did
skip
over
this
I
wanted
to
kind
of
touch
base
on
this
metrics
API.
B
We
did
have
some
Riley
reached
out
and
he
said
that
he's
gonna
be
able
to
evaluate
from
the
TC
level
the
yeah
from
13th
to
the
17th,
or
something
like
that.
So
we
do
have
a
TC
coming
by
to
evaluate
our
metrics
API,
so
yeah
just
kind
of
a
heads
up
on
that
one.
So
that's
moving
forward,
so
I'm
pretty
excited
about
that
other
than
that.
I.
Think.
B
If
there's
still
this
open
question
about
logging
from
the
snow
up
implementation,
I
have
this
open
PR
shoot
well,
this
might
be
a
little
bit
of
a
I.
Don't
have
to
go
down
to
find
it
but
they're
the
pr
that
is
open
to
try
to
address
this
I.
Don't
know
where
it
went.
This
issue
about
logging,
it's
kind
of
a
bigger
one.
In
the
specification
man.
B
It's
a
good
good
path,
we'll
get
there
from
here
yeah.
Here
we
go
yes,
this
is
the
one.
This
is
the
spec
PR.
That
could
definitely
use
some
more
eyes.
It
removes
the
SDK
definitions
in
the
API
spec.
It
adds
a
no
op
part
to
the
spec,
so
this
clearly
says
the
no
option
to
do
no
operations
so
based
on
Josh
sureth's
comments.
I.
B
Something
that
others
languages
have
seen
as
well,
so
I
think
it's
worthwhile
to
other
languages
and
I
hope
that
it
touches
on.
But
if
you
have
time,
please
approve
this.
This
is
helpful
because
we
do
need
to
resolve
this
before
we
can
go
ga
for
our
API
but
yeah,
okay
other
than
that
we've.
That
should
be
everything
on
the
agenda
last
year
estate
emailed
to
some
of
us.
They
want
to
talk
about.
B
Okay,
one
other
thing
I
did
just
think
of
was
our
metrics
SDK
GA
project
board.
I
did
update
that
I.
We
talked
about
it
a
little
while
ago,
I
created
a
bunch
of
issues
to
verify
our
SDK
I
was
going
to
go
through
that,
but
then
I'm
also
realizing
especially
after
this
meeting.
We
have
some
more
changes
coming
so
I
think
probably
just
hold
off
on
evaluating
that
for
a
little
bit,
but
the
project
board
is
updated.
B
Cool
I
think
from
a
crossover
from
the
Go
Auto
instrumentation
seg,
there's
also
the
Insurgent
Edition,
that's
being
added
to
the
contrib
Erin
and
I
are
definitely
on
the
hook,
I
think
to
review
that
so
we're
looking
at
it,
but
just
kind
of
a
heads
up.
It
probably
isn't
going
to
be
perfect
when
it
gets
merged.
The
idea
is
to
get
it
merged.
B
It's
in
the
unreleased
modules
right
now,
so
we
don't
plan
on
releasing
it
and
then
iterating
on
cleaning
it
up
and
hopefully
creating
tasks
for
that
one
as
well,
so
just
kind
of
giving
a
heads
up
on
that
plan
for
the
Instagram
ER,
that's
3
000
lines
to
review.
B
B
So it sounds like Anthony's going to do that review on that one. Okay, anybody else have something else they want to talk about?
E
Not OpenTelemetry, but Russ Cox just put up a proposal about telemetry capture in the Go toolchain. If anybody has a popcorn deficiency or a salt deficiency, go check that out.
B
Yeah
Secrets
mentioned
that
one
a
few
times
I've
gotten
halfway
through
it.
It's
like
there's
a
lot
of
controversy,
I
guess
you'd
say
in
the
chat
rooms
around
that
one.
Sorry.
B
But
I
also
like
it
is,
it
is,
but
it
also
kind
of
isn't
like
in
the
same
sense
like
it
completely
is
because
I
think
it's
actually
trying
to
you
know
do
bug
reports
and
that
kind
of
thing,
but
I
think
it
does
kind
of
say
something
that
we
gloss
over
often
in
these
open
Telemetry
is
just
like
you
know.
B
E
Yeah, it's something that's very interesting as someone distributing, like, client instrumentation and things like that, where we don't have great visibility into how people are using our tools. But it's a hard problem to solve, and it's hard to get the trust needed to have users allow you to do that, so I wish them the best of luck. But, oh boy.
D
I've been going to... there's a Go, like, the language, runtime diagnostics working group. I understand none of what they say, but they're revisiting how they do runtime tracing, and it's this kind of crazy... it generates, like, thousands or hundreds of thousands of spans from all the random stuff that they have traced. They have their own tracing package in the Go library.
D
It's, like, runtime/trace, and it's really closely integrated with profiling, and they may be... I think the result might be something more like a collector receiver, but at least they're interested in OpenTelemetry integration, or, how the wider world... when I say that, I mean it somewhat loosely; like, maybe they'll define a stable data type that they'll be able to export this in.
D
Yes,
because
they
were
talking
about
how
they
might
want
this
to
be
a
stateful
protocol
because
of
the
large
amount
of
information
like
they
don't
want
to
send.
I
think
every
single
span
has
a
stack,
Trace
and
they're,
not
right
yeah,
so
they
don't
want
to
send
that,
like
with
heavy
requests.
They
want
to
do
something
more
like
send
my
Sig
symbols
and
then
dump
this
flood
of
of
data.
A
B
Yeah, do you know where this meeting is posted, by chance? Is it, like, on a calendar or something?
D
No, it's mostly...
B
D
Right, it would be... I don't even know... I'm not sure if the meetings are posted publicly or not, but I'm sure you could reach out to people who are on this issue thread if you're interested.
B
Perfect, yeah, I think it's really interesting. I do remember in our tracing package, we do somewhat interop with the tracing front, but it's, like, you know, enable and disable really quick; like, yeah, it's not great.
D
And, to be honest, they haven't really gotten into the discussion yet of export formats or other things; they're still trying to solve more basic problems, like: can we make the overhead of collecting this stuff at all reasonable? So I haven't understood the vast majority of what's being talked about.
B
All right, I think we're done here, then. Thanks, everyone, for joining; we will see you all asynchronously, or next week, same place, same time. Bye, everyone.