From YouTube: 2022-01-27 meeting
A
Yeah, how... so, are you feeling better, or just...?
B
I'm feeling better, but the kids... so, I don't know if you can hear in my background, the kids are watching TV, because they caught it right after us, of course, like it was going to happen. Yeah, and they're not cleared to go back to school yet, but my wife is, yeah. So it's been a weird last couple of days.
B
But the thing that really gets me is the school's like, "Oh, you tested positive: 10 days." Well, okay. So, for the first week, we're like, we definitely need to stay inside and not get anywhere near anybody, but afterwards it's like, we're going crazy. These four walls are not enough.
A
It's definitely true. Luckily, the fatigue for me has been debilitating, so I just go to sleep all the time, but yeah.
A
Cool, kind of a light turnout. I don't know if the Zoom got switched, but we can probably start up here. I want to kind of respect the time, and we've got some things to talk about too, so yeah, we can jump into this. I guess I'll also just assume people are going to be watching the recording if they're not here; although, yeah, we were just talking about people having sicknesses, so maybe everyone's out this week. But cool. To those just joining, please make sure you add yourselves to the attendees list.
A
If you have anything else you want to talk about, add it to the agenda and we'll jump in here. So, the first thing I wanted to bring up is this PR I was reviewing a little while ago. It sets up a benchmark GitHub Action. I really like the concept; in fact, there's a really cool timeline where you can watch benchmark changes. But in reviewing it, I was noticing that, okay, there's a response from six minutes ago.
A
I hadn't read this yet: we're including permissions to write contents to the repository. So it says that there's a chance that this could be written to the main branch, but I wanted to make sure that we're really clear as a community in understanding the security implications of providing these permissions. There's essentially a GitHub token that's going to be passed to this third-party action here, which I took a look at; I'm also not a JavaScript expert by any means.
A
So it's really hard to tell what's going on there, and I just want to make sure that we understand what this is actually opening us up to, and agree that this is worth it. It seems like I don't fully understand the security, or sorry, the permissions here, but we can dive into that.
B
So I have also briefly looked at it, and it looks like we can get some value without having the full setup there.
B
There is some value that we can get without giving it that permission, right? Mainly the idea that we can store results in the cache and have it error out whenever a benchmark goes beyond a certain threshold; I think that is 100% doable. And then, if we want to actually bring in the part where it renders a page and pushes it to GitHub Pages, which I don't even think we have turned on in this repository, then we can kind of step it up there, and I think we should probably...
C
From the description, though, it sounded like it would error out if it was unable to push to the GitHub Pages branch; the last part of that, the manual action, says we need to set it up, and it's currently failing, I think because of that, on this PR.
A
Yeah, I think that is because there's a more deluxe, Cadillac solution going on here, from my understanding from looking at this really quickly. So I may be wrong, but the idea I think Aaron's talking about is here, and what is enabled currently is this version of it. So there are two versions, and this version is where you're actually going to be pushing to this branch that needs to be created.
A
This GitHub Pages branch: essentially, if you turn on Pages for a repository, there's actually a secret branch that sits there and tracks those markdown files, I think is what they are. So what this is doing is pushing to that branch and creating, I think, a markdown file. And then there is just a bare-bones, minimal setup, where all the benchmarks are stored to this cache, and that cache is just the Actions cache; and, if you'll notice, I was looking at this as well.
A
There's no secrets token used here, because there's nothing that gets pushed back to the branch. So I think there are two options here, which is what Aaron's saying. Anthony, does that make sense?

C
Yeah, okay, that does make sense, and it would be a good place to start, then, I think.

A
Okay, yeah, I agree. I was looking at this and I thought the same thing, but I was...
A
I didn't go too far into this setup to see what the differences are. I just wanted to make sure that we paused on what was currently being proposed. So I'll try to capture this as a comment, saying that I think we should go with the minimal setup rather than the full deluxe right off the bat.
A
Yeah, I agree. I did think that was kind of a cool thing to see. Especially... I was looking at some of these; oh man, I'm going to get lost, but yeah, the GitHub Pages stuff was really cool, because it did have a really nice graphic.
A
I
think
it
was
actually
probably
up
here
showing
you
know,
trends
over
time,
which
could
be,
I
think,
pretty
useful,
but
I
am
extremely
hesitant
of
this
setup
just
because
thinking
through
the
security
implications
here
like
if
I
have
to
verify
this,
but
like
you
know,
if
there's
a
possibility
where
you
know
looking
at
the
permissions,
this
is
able
to
then
allow
this
action
to
rewrite
history.
It's
a
great
way
to
inject
code
that
we
don't
actually
review.
I
think
is
my
issue.
B
The other thing that we could do, as just a side project, is have a benchmark suite that is external to this. There are other hiccups there, but then you also completely avoid the problem of injecting code, because nobody is actually importing the benchmark suite.
A
That's a really good idea, in my opinion, and then we could even link to whatever GitHub page is created from this repo.
C
It would also limit us to benchmarking exported methods. I don't know how much of an issue that would be, based on what we're currently benchmarking, but, if we have a lot of benchmarks of internals, those wouldn't be able to be represented there, unless it was a fork of this repo.
A
Yeah, I mean, I think we also write internal benchmarks sometimes, just for internal packages, just for implementation stuff; I think those are useful to devs more than to users. But you're right, I do think you could probably check out this code into whatever repository we're going to benchmark from. I think there's some things we could look at. But, Aaron, what's the incremental change of doing this minimal version here? Is this just to prevent large regressions in performance?
B
Essentially. I don't know exactly where, but you essentially set a config file that says, if it's over 110% or 150%, or something like that, this action will fail. Okay, yeah. Just so that you don't have large regressions, like you said.
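For reference, the minimal, cache-only setup being described might look roughly like the following workflow steps. This is a sketch against the third-party benchmark-action/github-action-benchmark action; the input names are taken from that action's documentation and should be double-checked against the PR, and note that this mode needs no `contents: write` permission and no token, since nothing is pushed back to a branch.

```yaml
- name: Run benchmarks
  run: go test -bench=. -benchmem ./... | tee output.txt

- name: Compare against cached baseline
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'go'
    output-file-path: output.txt
    # Store results in a local file restored via actions/cache
    # instead of pushing them to a gh-pages branch.
    external-data-json-path: ./cache/benchmark-data.json
    # Fail the job on a large regression, e.g. 150% of baseline.
    fail-on-alert: true
    alert-threshold: '150%'
```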
A
Okay, I think... yeah, I think I saw that in the original PR. Yeah, something like this should still be possible, I think, using the cache.
A
That's the intent. Okay, I'll try to capture this as a comment here in a response. But I think what I'm hearing from the community is, at least, to start with the minimal stuff, and then either evaluate the possibility of using this approach, or maybe another repository, to handle this. Is that correct?
A
Cool. The next thing I wanted to bring up was performance enhancements. I've opened a few PRs looking at a few things in the tracing SDK over the past week, most of which have already been identified but are essentially just sitting there. So I take absolutely no credit for most of this stuff; it's more just getting it across the finish line. One of them is optimizing the eviction queue. This one's pretty straightforward.
A
I didn't want to merge this, because I remembered, I caught myself, that these two approvers work at the same place, so we need a third approval on this one. But it's pretty straightforward: essentially it's just changing the response, not pre-allocating something that isn't actually needed, and then making these types instead of pointers. So it's a pretty easy one to review, if you have some time. The next one was a little bit more involved, and I think maybe a little more worth the discussion. This is a change that was proposed
A
A while ago; we've had a few discussions on this. Bogdan came in and pointed out that the new start span config, really any config that we have right now, takes a pointer as an argument: all of the options' apply methods take a pointer, and this allocates that pointer to the heap, because there's really no understanding from the garbage... I'm sorry, from the compiler, that that pointer isn't going to escape the function it's passed to.
A
So what was proposed is to change this to just passing the type by value, and this PR does that: it updates our contributing doc, as well as all of the options and all of the new functions, to match this new style of returning the type itself from the apply method and essentially passing it through a chain. The compiler is able to figure out that this is something that doesn't need to escape to the heap, and it prevents a single allocation, it looks like, per config. I did a little benchmarking in the API.
A
This is kind of a useful thing, because things like the start span config are called every single time a new span is created; I think, even... no, not if it's not recording, but, if it is recording, this is a pretty commonly called function. So reducing an allocation here is pretty significant, I think, because it's in the hot path. That being said, it's comprehensive, and it's a lot of really similar changes.
B
So I promise, when I actually have some time to work (talking about the funny things earlier in this meeting), I'll give you a more detailed breakdown of this. I essentially ran benchstat on old versus new, and the one thing I'll note is: yes, there's one less allocation, but it actually takes longer to run; it takes more time per operation in the new start span config. So avoiding that allocation isn't actually saving us time, is essentially what I'm going to point out.
D
It's
not
saving
time
in
this
benchmark.
It
may
save
you
time
later
on
down
the
line
when
the
garbage
collector
has
to
run
right.
I
mean
it's
something
you
pay
up
front
in
a
benchmark
but
gets
amortized
over
time
when
the
gc
has
less
work
to
do.
A
Yeah, I'd love to see the data. Like you said, I know time-wise the commitment to working on it right now is a little tough, but yeah, I'd love to track that down, and build better benchmarks to explore the performance that we actually want to see, or to understand the full scope of this. So yeah, I would appreciate that; if I can't figure it out on my own, I'll try to take a look as well.
B
Totally agree. My theory on why this is, is because you have concrete versions of the slices underneath: it spends more time copying both the slice and the data in the slice, rather than the one allocated slice on the heap and just the pointers. But...
A
Yeah, I think that's always the case with the slices: when you have those arguments, those always have to escape, essentially. Which, yeah, that's a good point. I was thinking of splitting up some of these benchmarks that I was creating here, just to do it per option; and, again, this may be a good example of why we would need to do that. So yeah, I think that's useful.
A
It is also really interesting to see this memory reduction, and, based on how I understand computers to work, an allocation escaping to the heap is a pretty time-intensive operation. So I'm surprised to see that; but, again, like we said, we need to, I think, probably explore that. I do think also the allocations...
A
I wanted to verify this, but I'm not too sure if the allocation counters are reset when you reset the timer here, so this allocation may actually be coming from somewhere around... I don't know; I'd have to explore this thing that we were talking about a little bit more. But yeah, thanks, that's some good feedback. Keep this going.
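As an aside on the timer question: per the Go testing documentation, b.ResetTimer zeroes both the elapsed benchmark time and the memory allocation counters, so setup allocations made before the call should not show up in allocs/op. A small self-contained check (illustrative, not code from the PR):

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// b.ResetTimer zeroes both the elapsed time AND the memory
	// allocation counters, so setup work done before the call is
	// excluded from allocs/op as well as from ns/op.
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		setup := make([]byte, 1<<20) // large setup allocation
		_ = setup
		b.ResetTimer() // the setup allocation above is discarded here
		for i := 0; i < b.N; i++ {
			// nothing is allocated in the measured loop
		}
	})
	fmt.Println("allocs/op:", res.AllocsPerOp()) // allocs/op: 0
}
```

If the 1 MiB setup allocation were counted, allocs/op would be nonzero; seeing 0 confirms that ResetTimer clears the allocation counters too.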
I have one more thing that I wanted to talk about here, regarding performance improvements, but also regarding correctness. So I opened this issue.
A
This is kind of related to something that's been found before, and I think, yeah, this, or another part of the issue, is essentially saying that our algorithm for dropping attributes is incorrect: it's not compliant with the specification.
A
The specification states, I mean, I can read it here, but essentially, that the last one in is the one that gets dropped, and we're using an LRU method, so the least recently updated is dropped instead. That's out of compliance, which isn't great, so that needs to be fixed. And, in fixing it, there's also something that's been identified in other issues, which I thought I had linked here, but I definitely think we can go back and find it.
A
The actual attributes map itself, which is the thing that implements the LRU caching algorithm, is pretty heavyweight, so just replacing this with something that is a little bit more lightweight, I think, is ideal. And one of the things that I realized could be a big improvement here is just storing the attributes as a map, which is really simple, and it makes it a lot easier to just...
A
You know, the algorithm that we actually have implemented for setting attributes, checking if they exist, updating if they do, and then returning, is very bare-bones; it looks very much like Go, and the performance improvements are impressive. The thing that's important, though, is we need to make sure this is correct, and I think it is.
A
I definitely added some tests here to make sure that we're complying with the specification. There are really two things stated in the specification that we need to be compliant with. One of them is what was already read: we need to follow this last-in-dropped scheme. The other is that, if you're setting an attribute that has the same name as an existing one, it must not be considered dropped; it needs to update the value that already exists.
A
I think those are the two key things that I read out of the specification, as far as I know; if there are other things that we need to pay attention to, please comment. So there's definitely a test to cover both of those, and I think what we're doing here with capacity is also valid. The thing that is different, though, and it should become pretty obvious, is that the attributes we return are going to be coming from a map.
A
So
the
order
is
what
one
it's
unordered
or
two
it's
going
to
be
an
arbitrary
order
that
is
not
going
to
be
user
defined.
Nowhere
in
the
specification
does
it
say
it
has
to
be
ordered.
In
fact,
I
think
in
many
other
implementations
it
isn't
ordered,
but
it's
going
to
be
a
change
in
behavior
by
including
this
here.
A
This
isn't
a
complete
pr,
there's
still
changes
that
need
to
be
made,
because
obviously,
some
of
our
tests
assume
there's
going
to
be
an
order
in
the
produced
attributes
and
those
tests
have
started
updating.
There's
not
it's
actually,
not
that
many.
They
need
to
be
updated
to
handle
this
fact
that
the
the
ordering
is
going
to
be
not
deterministic
from
call
to
call
so
either
the
caller
needs
to
sort
them
if
it
matters
or
they
need
to.
You
know,
impose
their
own
order,
and
I
kind
of
like
that.
C
It does seem a little odd that there isn't a specification regarding order, because the data structure in OTLP is a list, right? It is inherently ordered. I think I'm fine with not implying that there's an order, or requiring that there be an order, in how we handle it, if there's no requirement for that; it just does seem a little odd to me.
A
Yeah,
I
think
you're
right
and
there's
a
relevant
pr
that
was
recently
merged
kind
of
adding
this.
Let
me
see
if
I
can
find
it
really
quick
on
the
spot.
A
Sorry,
it
may
be
too
complex,
but
essentially
it
was
saying
that,
like
there's
a
there
was
a
clarification
to
the
specifications
saying
something
like.
I
think
this
might
actually
be
it.
A
The
keys
essentially
they've
always
been
intended
to
be
a
map,
even
though
they're
stored
as
some
sort
of
ordered
list
in
the
otlp,
the
original
intent
was
to
always
have
them
be
represented
in
this
I
think
manner,
and
in
some
cases
this
is
related
to
the
fact
that
there
were
duplicate,
key
value
pairs,
and
this
clarifies
the
it
was
changed
to
clarify
to
make
sure
that
everyone
understands
that,
like
those
key
value,
pairs
need
to
be
unique,
something
that
was
kind
of
always
envisioned,
because
it
was
going
to
be
a
map,
and
so
I
think
that's
why
you
don't
see
anywhere
in
the
specification
order
being
specified,
because
I
think
in
the
back
of
many
of
the
authors
heads
it
was
originally
going
to
be
a
map
and
the
data
structure
just
worked
out
better
for
them.
A
I'm
not
exactly
sure
why
they
booked
the
slice,
but
that
might
I
don't
know
if
that
gets
gears
current.
Turning
for
you
anthony,
but
that
was
what
I
understood
as
well.
C
I
mean
this
is
actually
a
change
from
what
my
understanding
was,
which
was
that
it
was
a
list
of
key
value
pairs
precisely
so
that
something
like
a
streaming
sdk
could
emit
multiple
key
value
pairs
for
the
same
key
and
last
right-wing
semantics
could
be
used
to
resolve
that
to
a
map
like
data
structure
on
the
receiving
end.
A
Yeah, I don't know, so that's interesting. I think that's a little bit more relevant to OTLP, but, kind of bringing us back to our implementation here, I don't know: one, doesn't the SDK specification just say that unique keys need to be updated, that they can't be duplicates? I think that is something that is specified for the SDK, not OTLP. Well, obviously, I think they tried to specify it for OTLP too, but, in the SDK, they should not have duplicate values; I think that is specified, yeah.
C
I think it's fine for us, and it looks like the last paragraph of this PR does actually say streaming implementations can't enforce this unless it's on the receiver. Which, I mean, why is it a MUST, then? It really should be a SHOULD, not a MUST. But yeah, I agree; spec language aside, I think the intent here is to say: oh, you really should be treating this as a map, and then emit it as a list of key-value pairs, which is fine with me.
A
Okay,
yeah-
that
was
kind
of
my
interpretation
as
well,
which
is
also,
which
is
what
we're
doing
here,
is
treating
it
as
a
map
and
then
emitting
it
as
a
slice
that
slice
is
not
going
to
be
ordered
so,
okay.
That
being
said,
it
sounds
like
there's
enough
support
from
the
community
on
this
call
where
I'll
try
to
fix
this
and
turn
this
from
a
draft
into
an
actual,
compiling
and
test
running
pr.
But
yeah
thanks
for
the
feedback
on
that.
A
Okay
last
thing
I
had
wanted
to
talk
about
in
the
meeting
was
the
update
on
the
metrics
api.
I
know
we've
talked
a
lot
about
that
in
the
last
week
or
two
now
I
noticed
aaron
has
a
pr
and
then
I'm
guessing.
This
is
the
pr
for
anthony
about
yeah
proof
of
concept
for
the
sdk
version
limitations
do
either
of
you
want
to
jump
in
and
I'll
hand
the
mic
to
you
essentially.
B
Anthony,
do
you
mind
taking
your
findings.
First.
C
Sure
yeah,
this
is
pretty
simple
to
go
over.
Basically,
it
proves
that
go
build
tags
workers
intended.
I
added
build
tags
to
all
of
the
go
files
in
the
sdk
and
everything
that
depended
on
the
sdk
and
for
the
most
part
everything
just
worked.
C
There
was
some
weirdness
with
the
ci
process
where
running
it
with
go
116
locally.
It
didn't
give
me
any
errors,
but
it
did
when
running
it
by
github
actions
to
fix
that.
I
just
inserted
some
placeholders
with
not
go
117
build
tags
into
the
package,
so
there
was
at
least
one
go
file
that
would
be
there
and
then
it
wasn't
effectively
an
empty
package
with
no
tests
from
the
116
perspective.
C
I
think
the
the
description
lists
a
couple
outstanding
issues
which
are
there's
there's
one
file
that
when
is
it
not
a
generated
file
that
one
generated
with
117
wants
to
remove
the
build
tags,
but
when
generated
with
116?
Doesn't
I
still
don't
understand
that?
C
But
we
can
figure
that
out,
and
the
other
is
that
I
had
to
remove
the
go
mod
tidy
step
from
pre-commit,
because
when
running
with
go
116,
it
will
explode
on
mod
files
that
have
go
117
in
them.
I
don't
know
how
necessary
it
is
to
have
mod
id
in
pre-commit
or
if
we
put
that
in
pre-connect
pull
it
out
of
ci.
C
But
long
story
short,
it
really
does
seem
like
it
will
be
easy
to
allow
the
api
to
work
with
117
and
118
and
restrict
the
sdk
to
118.
Using
this
approach.
A
Cool
yeah,
I
think
that's
great,
to
hear
yes,
okay,
so
with
that,
how
are
we
gonna
restrict
our?
I
guess
we're
not
talking
about
the
sdk
today
we're
gonna
talk
about
the
api,
but
aaron.
I
don't
know
if
you
have
some
ability
to
talk
through
this.
B
This
is
kind
of
the
start
of
taking
what
was
discussed
in
the
linked
issue
right
there
2526,
I
think,
and
putting
it
into
actual
words.
We
said
at
the
last
sig
that
we
would
rip
out
all
of
the
api,
and
then
this
is
an
implementation
of
that.
It's
closer
to
the
six
packages.
B
The
only
thing
I
did
was,
I
actually
put
the
individual
like
sink
or
async
float64,
underneath
instrumentation
that
just
made
things
a
little
bit
cleaner
and
it
also
is
a
good
cookie
crumb
towards
what
is
an
instrument
and
what
is
not.
B
The
things
that
are
broken
are
the
sdk,
of
course,
because
that's
what
we're
changing,
but
also
there
was
bridge
open,
which
I
believe
is
supposed
to
rely
only
on
the
api,
so
that
I
think
we
could
fix
before
releasing
I've
not
had
time
to
go.
Do
that
and
then
there's
the
internal
metric
package,
which
is
either
rewritten
with
this
or
gets
removed,
I'm
not
sure
which
I
didn't
dive
too
much
into
it.
You
have
the
actual
sdk
changes
which
you
know.
B
Those
need
to
be
fixed
that
I
didn't
not
expect
to
pull
in
with
this,
but
kind
of
a
follow-on
pr,
maybe
one
that's
branched
off
of
this
even
and
then
the
exporters
which
actually
will
rely
on
the
sdk,
not
necessarily
the
api.
B
And
then
we
have
a
couple
examples
that
have
in
them
either
the
bridge
or
the
exporters,
so
you
can't
really
fix
those
until
the
exporters
are
fixed.
So
I
think
this
kind
of
a
this
was
my
exploring
that
we
can
actually
make
this
work
as
an
api.
B
I
also
included
a
no
op
implementation,
which
I
don't
think
is
I'm
not
sure
if
it's
actually
called
for
by
the
spec
in
any
way
shape
or
form,
but
we're
gonna
need
that
if
we
have
a
global
package
anyways
and
then
so
technically
we
have
an
sdk
that
is
implemented.
I
guess
in
that
we
have
a
no
op
sdk,
that's
not
really
helpful,
but
but
it
also
goes
and
removes
4000
lines
of
code
of
what
was
there
before
so
yeah
nice.
A
Yeah,
I
think
that
this
is
a
great
first
step.
I
haven't
made
it
all
the
way
through
this
and
I'll
be
honest.
I
still
need
to
complete
the
the
reading
of
this
issue,
but
from
what
I
can
see,
this
is
a
great
start,
thanks
for
getting
this
together.
B
Some
of
them
are
like
this
creates
an
instrument
I
don't
know
where
or
where,
where
we
really
feel
like.
We
need
to
put
in
kind
of
the
description
of.
Why
are
there
different
instruments?
B
Oh
I
do
want
to
call
out.
There
was
one
other
change
that
I
made
from
that
you're.
Actually,
looking
at
right
now,
I
made
the
interfaces
that
we
use
to
group
different
instruments
together
as
unexported
in
interfaces,
because
they
don't
actually
provide
any
functionality.
Nobody
is
nowhere
in
the
sdk.
Are
we
expected
to
call
the
asynchronous
mechanism
that
has
an
interesting
side
effect
in
that
you
actually
have
to
embed
that
particular
interface
into
any
concrete
type?
That
is
meant
to
be
an
asynchronous
interface.
B
So
if
you
could
go
to
one
of
the
no
ops
like
to
scroll
down
a
little
bit
for
the
non-recording
instrument
to
be
a
synchronous
instrument,
it
actually
has
to
embed
instrument,
synchronous.
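The embedding pattern being described can be sketched like this (illustrative names, not the actual package's identifiers): the grouping interface has an unexported method, so concrete instruments satisfy it by embedding the interface itself, leaving the embedded field nil. That is safe exactly as long as nothing ever calls the method, which is the promise discussed below.

```go
package main

import "fmt"

// Synchronous groups synchronous instruments. Its unexported
// method provides no functionality; it exists only so that
// concrete types must deliberately embed the interface.
type Synchronous interface {
	synchronous()
}

// nonRecordingCounter is a no-op instrument. Embedding Synchronous
// (left nil) is what marks it as a synchronous instrument. Calling
// synchronous() on it would panic (nil embedded interface), but the
// API promises the method is never called.
type nonRecordingCounter struct {
	Synchronous
}

func main() {
	var inst Synchronous = nonRecordingCounter{}
	fmt.Printf("%T is grouped as synchronous\n", inst)
}
```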
A
...the method, yeah. Yeah, it's the same thing we use for all of our configs as well.
B
I
think
we
are
safe
to
do
this,
because
anybody
can
embed
that
so
any
sdk
implementation
can
embed
it
and
because
we
are
promising
that
it's
not
called,
and
it
has
no
side
effects.
B
Then,
as
long
as
the
implementer
doesn't
actually
call
that
for
any
reason
whatsoever,
then
it
will
never
panic,
because
that
that
will
be
a
calling
a
function
on
a
null
receiver.
Essentially.
A
Yeah, okay, yeah, that sounds good. The other thing that I noticed you pointing out was this register callback versus the startup callback, right? This has always been kind of an issue for us, essentially, with the instrument catch-22, right?
B
Yeah
yeah,
so
I
I
kind
of
give
a
little
example
of
how
I
thought
it
might
work
if
we
have
a
closure
here.
It's
created
in
the
second
line
of
this
example.
B
The
funk
context
counter
the
reason
why
you
have
to
have
that
counter.
There
is
because
that
closure
will
not
have
access
to
the
counter
from
the
top.
It
doesn't
exist
so
you're
in
a
catch-22
of
you
need
this.
B
A
Yeah, the thing that I remember about this, if I remember correctly, is some of the thinking: like, the host instrumentation counter is defined above as just a variable, and that is just like...
A
To
not,
you
need
to
have
air
declared
above
as
well.
I
I
don't
know
if
that
would
work
here.
I'd
have
to
look
into
this,
though
I
know
it
was
hard
and
it's
not
clean.
B
What
we
would
have
to
then
have
is.
We
would
basically
have
the
register
callback
function
at
the
meter
level
right,
but
then,
like
what
happens,
there
is,
if
you
have
to
pre-declare
it
and
then
come
through.
You
essentially
have
to
have
something
that
is
similar
to
how
I
have
registered
callback.
B
I
guess
it's
worth
calling
out
in
the
initial
proposal.
There's
new
callback
and
then
you
register
callback.
I
or
I
change
it
to
register
callback.
The
reasoning
for
that
was
there's
nothing
to
do
with
those
callbacks
after
you
instantiate
them
at
that
point
they
are
already
registered
in
the
meter.
C
Josh has one or two changes proposed in the spec that would address this issue as well, because, right now, the spec requires that a single callback is associated with an instrument when the instrument is created; but the spec changes he has would allow registering callbacks after instrument creation, and associating a callback with multiple instruments, or multiple instruments with a callback.
C
And I think, from the spec meeting this week, the direction it was going was that we were going to try to finalize the SDK spec next week and then consider these. So I would expect that we should push hard for these to be merged as soon as possible into the spec, and treat that as the first version of the spec that we're going to target.
B
Yeah,
I
agree
if
you
look
right
now
line
399,
which
is
at
the
bottom
of
this,
says
that
a
callback
function
that
that's
under
the
the
api
must
accept
these
parameters.
So
you
know
a
description
a
unit,
but
a
callback
is
a
must.
B
So
technically
this
api
is
out
of
spec
in
that
way.
But
honestly
it
makes
it
a
lot
more
difficult
to
it
kind
of
makes
it
more
difficult
to
use
to
actually
follow
the
spec.
C
Yeah, and this wouldn't break any existing implementations. I did keep taking a callback in instrument creation, because that's still a way to associate instruments with callbacks; there seems to be support from other languages for moving to this model as well.
A
Right,
okay,
so
I
think
with
that
said,
the
proposal
that
aaron
has
together
would
satisfy
this
specification.
So
I
or
is
that
not
right,
aaron.
B
It
would
satisfy
this
if
this
gets
accepted,
it
wouldn't
satisfy
the
specification,
as
is
written,
currently,
okay,
cool.
A
I
think
that's
something
again
like
anthony
said,
I
should
probably
review
this
yeah.
I
definitely
should
review
this.
Okay,
that's
on
my
list
of
things
to
do
now.
C
Yeah,
I
think
we
move
forward
by
targeting
this,
as
if
it's
going
to
be
the
spec
and
make
sure
that
all
of
us
are
vocal
in
the
specs
saying
that
it
needs
to
become
the
spec.
B
Yeah
agreed,
I
think,
the
next
steps
on
this
is.
I
will
explore
the
two
sections
that
I
identified
as
can
be,
can
be
done
without
the
sdk
and
then
I'm
trying
to
work
with
josh
as
much
as
possible
to
get
some
kind
of
prototype
for
the
api
stabled,
the
stabilized
right
or
for
the
sdk
stabilized,
and
then
I'm
not
sure
how
we
want
to
kind
of
stage
these,
because
if
we
accept
just
this,
we
basically
cannot
make
a
metrics
release.
B
Until
we
have
an
sdk
and
exporters,
it
would
just
be
in
a
broken
state.
We
could
probably
look
at
adapting
what
our
sdk
is
right
now
to
the
new
spec,
but
I
think
that's
a
lot
of
that
might
be
a
lot
of
work
for
diminishing
returns,
especially
if
we're
planning
on
ripping
it
all
out.
Anyways.
A
I'm
okay
with
it
being
in
a
broken
state
as
long
as
we
have
a
clear
plan
and
timeline
as
to
like
when
it
would
be
fixed,
but
I
think
that
maybe
also
kind
of
like
what
you're
saying
if
you
have
the
time
and
we
could
get
like
a
pr
built
from
this
work
in
progress
that
would
show
the
completed
state.
I
think
it
would
be
helpful.
But
I
I'm
okay
with
just
saying
we're
going
there
yeah.
C
What we could do as well is move the metrics API modules into a new module set that could be released independently. Just let it be known that this is purely an API: you can start looking at it, it's got a no-op implementation if you want to start trying to write code against it, but there's no actual SDK here yet.
B
We would have to go and turn off a lot of tests and whatnot in our CI, because all of the SDK is broken.
C
Yeah,
there's
probably
a
lot
of
skipping
tests,
there's
probably
a
lot
of
stuff
in
contrib.
That
would
need
to
be
disabled
or
alike,
so
maybe
not
the
best
approach.
But
it's
a
thought
if
we
want
to
get
an
api
out
in
front
of
people
earlier
than
we
have
an
sdk
ready.
A
Yeah,
I
I
think
we
need
to
figure
this
out
like
you're,
saying
aaron
a
good
next
step,
but
I
think
we
also
need
to
kind
of
have
a
good
understanding
that
we're
going
to
merge
the
api
first.
So
let's
focus
on
that,
I
think,
and
then
we
can
maybe
talk
next
week
about
what
the
next
phases
of
the
sdk
are.
A
Okay,
cool,
that's
everything
on
the
agenda.
I
would
like
to
open
it
up
in
case
somebody
has
something
else
they
want
to
talk
about
that
isn't
on
the
agenda.
A
Otherwise,
we
can
jump
off
the
call
and
get
back
to
the
real
world
yeah
there.
Any
other
good
user
stories
out
there
from
the
past
week.
A
Okay,
cool.
That
sounds
good.
I
think
that
I've
got
some
action
items
aaron's
got
some
extra
items
looks
like
we're.
Making
some
progress
so
again
appreciate
all
the
work
that
everyone's
doing.
If
you
have
some
time,
reviews
are
always
welcome.
Otherwise,
I
guess
we
can
call
here-
and
I
will
see
you
all
next
week
or
online
via
slack
bye.
Everyone.