From YouTube: 2022-03-24 meeting
C
No, but maybe you need metrics to determine how much sleep you're getting.
B
All right, well, it's four, so we can start. Welcome, everybody. As usual, please add your names to the attendees list.
B
Okay, so I'll start with the topic regarding issues with metrics. So, as you know, we have a metrics implementation that can perform the most basic metric operations, including exporting metrics. I went through the SDK specification yesterday and made a list of all the things that are still pending. I found 21 things that need to be implemented.
B
Several of them are things like the handling of timeouts, for example. As you may remember, there are some functions, like shutdown and force_flush, that per the spec need a timeout, and we still have to implement that.
B
I would like to have all of this sorted out before our next release, which I hope can be a release that has the underscore before metrics removed, so that we have a release that we can share with other people. So yeah, I wanted to let you know about this. I'll be updating the issues and the dashboards that we have to include this information, but yeah, that's pretty much the topic that I wanted to share with you.
E
Hey, isn't the SDK spec still mixed status?
B
It shows the status, let me show you.
B
So yeah, it's mixed, but MeterProvider is stable. Well, this section is...
B
As you see, Meter is stable, yeah, metrics are stable, so most of the stuff here is already considered to be stable, so...
B
Yeah, that's a good point. What we can do is keep exemplars private, and, well, you know, we haven't implemented any exemplars yet, but yeah.
A
I think if we were gonna remove the underscore, I'd be a little more comfortable if more people tried it out first. Sure, instead of just guaranteeing a stable API, we make sure we have something that's pretty solid.
A
I don't know if it would be good to do like an RC, like we did for tracing, and try to actually get people to use it.
B
Yeah, that's a good point. In fact, we can do that right now, I think. And oh, that's also a topic that I wanted to discuss: this PR that I opened about aggregation.
B
Yeah, this one. I think you mentioned that it would be good to survey the change.
B
So, regarding this concept of an experimental release: I don't think there's anything like a release that can be considered an experimental release, right? There are no experimental releases; experimental releases are just releases. What we can do is pretty much tell people, I guess in the otel-python Slack channel, that we have this, it's located there under the underscore.
B
Please give it a try. And we can do it right now. We can do that right now, yeah, before the...
A
Yeah, I think it would be good to announce something. There have already been quite a few bugs opened by people trying it out, which I think is great. Okay.
B
But so far I have been skipping changelog entries for all the metrics PRs that I have been adding. Does everybody think the same?
A
So I'm thinking it's actually really valuable right now. I understand what you're saying about it being private, but for people who have tried it out, it would be good if they have a list of changes since their locked version, to make it easier for them to upgrade and know what breaking changes to expect, for people who are trying out the experimental version.
B
If you prefer, we can keep adding to the changelog, yeah. We can start. Good, right, okay! So, yes, that's what I wanted to share with you regarding these issues.
A
Any comments? Thanks for going through all the requirements, Diego. Maybe at the end, if we have time, we can sort of go over them and maybe triage them.
B
The thing is, it's very hard to read right now, but if you want to, we can do it.
A
And then just discussing, on all the issues, whether some of these are already resolved or open for interpretation.
B
While I was reading through the spec, I tried to be as pedantic as possible and to include here pretty much everything that could be considered to be not implemented, even if we are being kind of super strict. But yeah, I think there may be some things that we could consider accomplished or implemented right now, but...
B
Yeah, we can discuss that when it comes up. Yes, all right, any other issues or ideas related to this?
B
Any PRs? I guess we can go straight to these issues that we have here, all right. First one: this one's about instrumentation versioning.
F
Yeah, we briefly talked about this in the last meeting; I'll repeat it again. So there was a person who opened up two PRs: one was bumping up and one was bumping down. So it would be great if we can provide some sort of guidance for the contributors or instrumentation authors on how we want to, you know, support... like, what major versions do we want to support, some sort of policy around that. For example, let's say...
F
This author was bumping up the pymemcache from 1.x to 2.x and 3.x. I think 1.x is from more than four years ago, so there are already two major versions since then, and they also dropped support for that, I think. So yeah.
F
In that case, it would make sense to drop that, but then they again updated the PR to support 1.x as well. So initially they started with dropping 1.x, but after some conversation they started supporting it as well. So it was certainly not clear for them after they opened the PR.
F
This has happened, so I'd like to, you know, have some sort of guideline that would help them.
F
Yeah, something like that. It doesn't have to be very, very strict, but even some sort of generic guideline. Let's say there's some library which has, again, take the example in this case, right: let's say there's the pymemcache library. Imagine it has three different major versions, and we initially started supporting 1.x. Now it's a very old version that's not supported anymore. So do we still want to support that?
F
What's the criteria? Like, when do we... For example, there's a policy, right: we will support Python major versions until six months after they're dropped from the main site. So something similar to that, so that, you know, we have some solid guideline on how long we want to support some particular major version of an instrumentation.
B
The question is if we should, if it is convenient for us, to drop support in an instrumentation. Does it cost us anything? I can understand that, okay, if we have an instrumentation, let's say for library X, right, and if, to provide instrumentation for library X in the new version, we have a conflict with an old version, we can stop supporting the old version, right. But unless that happens, there's not much benefit for us in doing that, right?
F
Yeah, that's a very good point, but let's say we want to stop supporting it, right? At some point we will be releasing stable versions, right. So what will we be doing for that? Can we suddenly drop the support after we reach stable?
B
What happens is that, okay, let's say that we are in that situation, right: there's one library, version one, it's old, and supporting it conflicts with supporting the new library version that everybody is using. That could be a relatively easy choice to make, right: okay, we just decide not to support version one anymore. But the complicated situation is when there is more than one library version that is important to support, and they are both conflicting.
B
Let's say there is version two and version three: both of them are important and being used, and both incompatible.
B
We could pretty much say: okay, we don't even have to wait for any particular defined time, right.
B
We can just drop support for it right now. But what I'm most concerned about is not that, but the other particular situation that I just described. And considering the fact that, if we implement things like that, by having a bunch of ifs and elses in our code that allow us to do different things depending on the version that is running, we don't actually need to define a policy, right, because we can keep supporting things forever.
B
So I think the question is more like: how can we support conflicting versions of libraries at the same time?
F
I mean, we already do that in very few instrumentations, with the logical sub-branches, as you mentioned, okay. But at some point I think we need some mechanism like that, right, and maybe at some point using ifs and elses might not be enough. Right now it might be that we get away with it, but maybe we will see in some instrumentation that you can't achieve that with the logical branches for each version. But yeah, I don't know, I don't have solid examples for now. Yeah, I mean, we could...
B
...define an arbitrary amount of time. That is, saying: okay, we consider version one of this library to be too old to be supported; this is the last release; next release, the support for this version of this library will be removed. And that can be it, right.
F
Yeah, well, I think I haven't thought through what the implications would be for users. Yeah.
B
I can add this as a comment here, so we can think about it; we don't need to make the decision right now. But there was also a thing that I mentioned here, and this may save us from having to make that decision.
B
It's a more complicated approach, which is to have branches for every instrumentation, so that we have as many branches as supported versions we want, and we release like that. So at the same time we may have opentelemetry-instrumentation-x 1.2.3 and opentelemetry-instrumentation-x 2.2.3, or something like that. So at the same time we have version one and version two of the same instrumentation released.
B
And they belong to the... to the branches. Sorry, to the...
B
I think it would require the users to install packages with different names, and those names should match the versions, right. So they would install opentelemetry-instrumentation-x1 if they are using library version one, x2 if they are using library version two, and so on.
B
It's a much more complicated approach because it requires... I mean, it can present some difficulty for the users, I think, because it will require them to know which version they are using, so that they can install the right package. And at the same time, we already have released packages without the number in their names, right. So what would be the meaning of those packages from now on? But yeah, it's just another option that can be used.
F
Okay, so what I was thinking is that they will be having, like, one matching version that they're using in the project, right. So these projects without the version numbers, they will become convenience packages for all these packages with the version numbers in the name.
F
So at any point... we already have a dependency check in the instrumentation, like when they call instrument, so it will be taken care of there: which one of these major instrumentations is activated, that will be handled there. That's something that I had in mind.
B
Okay, right. So I can add these comments to the PR. Please add yours as well, so that we can write this down and keep the discussion going. If someone else has any other ideas, please make sure that they are heard here.
A
Yeah, sure. So this has been open for a while; I think Diego is the only person who's left any comments here. I don't know if other folks have had a chance to read it, but essentially there's the spec PR, which is adding multi-instrument callbacks.
A
So, for instance, if you read a file like proc stat, which has measurements which would apply to a bunch of instruments, we want to do that in a single reading of the file. We don't want to have to read the file in each callback. So we just need to decide what the API for this is going to look like and update our API. I outlined a few potential approaches here, and we sort of just need to make a decision and implement it, all right.
A
Yeah, and I imagine there are probably other possible approaches too, so if you want, we can go over them.
A
I don't know, you could share... just scroll down a bit to approach one. Okay, so in this one, the meter would have a method, register_callback, in which you specify a single callback and all the instruments that can be recorded against in that callback. And then, basically, the way it would work is that on the observable instruments we would add an observe method, which emits a measurement given the parameters, and that measurement is associated with that specific instrument.
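A toy sketch of this first approach as described; every class and method name here is illustrative, not the proposed API:

```python
class ObservableInstrument:
    """Hypothetical observable instrument with an observe method."""
    def __init__(self, name):
        self.name = name
        self.measurements = []

    def observe(self, value, attributes=None):
        # Emits a measurement bound to this specific instrument.
        self.measurements.append((value, attributes or {}))

class Meter:
    """Hypothetical meter with a register_callback method."""
    def __init__(self):
        self._callbacks = []

    def register_callback(self, instruments, callback):
        # One callback is bound to all the instruments it may record against.
        self._callbacks.append((instruments, callback))

    def collect(self):
        # One invocation per callback: e.g. a single proc stat read
        # can feed several instruments at once.
        for instruments, callback in self._callbacks:
            callback(*instruments)

# usage: one callback observing two instruments from one "file read"
cpu = ObservableInstrument("cpu.time")
mem = ObservableInstrument("mem.usage")
meter = Meter()

def read_proc_stat(cpu_inst, mem_inst):
    # Pretend these values came from a single read of the file.
    cpu_inst.observe(1.5)
    mem_inst.observe(2048)

meter.register_callback([cpu, mem], read_proc_stat)
meter.collect()
```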
A
I think maybe what I was getting at with that is that we would have just this meter.register_callback as the only way to bind instruments to a callback. But I think the spec actually says that the create-observable-whatever should be able to accept a single callback as well. So I don't think that point is valid anymore, based on how the PR changed.
A
The next approach, yeah, so this one: basically, we would update the observer callbacks to accept a function. So in this form, the proxy observer is accepting an observe function which takes as parameters the measurement... sorry, the instrument and the measurement.
A
But yeah, the other thing about this one is that there's no generator anymore, so you don't have to yield the measurements. I don't know, I could go either way on that. I think it's sort of more idiomatic to have the yield in Python, but yeah.
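The two callback shapes being weighed could look roughly like this; Measurement here is a stand-in tuple, not the SDK type:

```python
from collections import namedtuple

Measurement = namedtuple("Measurement", ["value", "attributes"])

def generator_callback():
    # Generator style: the SDK iterates the yielded measurements.
    yield Measurement(10, {"cpu": "0"})
    yield Measurement(20, {"cpu": "1"})

def function_callback(observe):
    # Function style: the SDK passes in an observe hook instead of
    # consuming a generator.
    observe(Measurement(10, {"cpu": "0"}))
    observe(Measurement(20, {"cpu": "1"}))
```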
A
Yeah, this last one I don't like very much. This is what was proposed in the original PR for Go: essentially you'll just have a callback, and then you'll have observe functions which just accept the values directly, and those observe functions are on the asynchronous instruments.
B
Yeah, all right, great. I'll take this into consideration when I'm making the list of pending issues. Yes.
A
So, is this issue assigned to anybody yet? This issue is not assigned, I think. Okay, I don't know: does anybody on the call want to tackle this, or even just help by adding which approach you like?
B
I can definitely add my vote here soon, but okay, the question is: what would the responsibility of the assignee be? To make the decision, or to implement this, or what?
A
Okay, cool. It looks like Alex will look at the issue and vote. It'd be good if everybody could, and there may be other approaches, so feel free to suggest something else. Nice.
B
So we could go through all of this. It's very hard to read right now; it's pretty much copied and pasted from the spec.
A
Maybe a single issue with a checklist would be good, and then, as we identify the ones that we want... like you said, these are as pedantic as it can be, so then maybe we can argue for certain ones of them. I think you could just click on the create-issue button in the thing to create them, just so we don't have to create, you know, 20 issues. Sure.
A
Okay. Oh, and there are already some issues for some of these too, so...
B
What I'm worried about is that this is pretty much taken from the spec, but without that context it's kind of hard to know what the problem here is, right.
A
If nobody else does, I think we could also just look at the metrics project board really fast and try to... let's take a look.
A
Yeah, so we have a lot of unassigned... I mean, not a lot; we have 13 or 12 of them, so I think you're... Oh, you already did that one, Diego, right?
A
Cool. So, of the SDK ones, I haven't been marking them with the 1.10 thing, but yeah, that...
B
This is you and I discussing if you should use super, okay? I can move this into another function so that we can use super. I can do that today, so I'll let you know when this is implemented, so you can check it out and make sure that now it's good enough, and you can resolve the conversations. Cool.
B
It's a mechanism that is required in lots of places, in lots of functions: in shutdown, in force_flush, in many places. But I think if we can find that one single solution, we can pretty much apply it everywhere, and that would definitely help a lot.
E
Yeah, I didn't want to disrupt this flow, man. I'm kind of heads down with the logging stuff right now. So if we're trying to get these issues out for the release next month, I could pick up some stuff that's a bit smaller.
A
Yeah, yeah. I think we should err on the side of leaving them unassigned, so that they don't appear to be taken and then we don't have time to work on them or whatever. So we can see if there are some smaller ones. Alex said this reminds me of the timeout for the tracing signal.
C
It reminds me of the long-open issue that we should have one solution to implement timeouts for all of the exporters, eventually.
A
Yeah, I mean, okay.
A
Yeah, so this is a pretty big issue, because it would probably apply to that as well.
B
Well, maybe it could at least give some insight on how they implement the timeout.
A
Oh yeah, yeah, I mean, we do have the timeout in the batch processor and the periodic reader, I believe, so those ones don't block forever.
C
This was the general mechanism we tried to implement for the tracing signal.
A
I mean, now that we support Python 3.6 plus only, we could even use coroutines for this, to avoid creating a bunch of threads.
A
Since that literally just has timeouts already... I guess we could also create, like, a thread pool, which... I believe the submit function has timeouts.
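For reference, the thread-pool idea might look like this. One detail worth noting: the timeout lives on the Future returned by submit, via result(timeout=...), not on submit itself; export_once is a made-up stand-in for a blocking export call:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def export_once():
    # Stand-in for a blocking export call.
    return "exported"

def export_with_timeout(timeout_s=5.0):
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(export_once)
        try:
            # The deadline is enforced while waiting on the Future.
            return future.result(timeout=timeout_s)
        except FutureTimeout:
            # Deadline exceeded; note the worker thread may still be running,
            # and the executor's shutdown will wait for it to finish.
            return None
```

That last caveat (the worker cannot be forcibly stopped) is part of why coroutines were floated as an alternative to spawning threads.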
A
Cool, let's see if there are any small tasks on the metrics board that we can assign to people, yeah.
C
Good, good. There's an issue about adding support for the exponential histogram. Is that something we are looking at doing before the next release, or is that something that could be pushed out?
A
There was a user issue that's pretty small, which maybe, if you're looking for something small... it's on your right, Diego, the top one, and then in the unassigned SDK column. Oh, sort...
A
Third one down now, consume measurements, yeah. So this was opened by a user, but basically, if you have an async callback, we just need to try/catch before we call it, so that it doesn't crash the SDK. Yeah, you can...
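The fix described is essentially a try/except around each user-supplied callback, sketched here with illustrative names rather than the SDK's internals:

```python
def run_callbacks_safely(callbacks):
    """Collect measurements so one failing user callback cannot crash the rest."""
    measurements = []
    for callback in callbacks:
        try:
            measurements.extend(callback())
        except Exception:
            # A failing user callback is skipped; the real SDK would log
            # the error rather than silently ignoring it.
            continue
    return measurements
```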
B
Okay, all right! I can go through all this list that I showed you, creating issues; I can update the board, and pretty much send it to you.