From YouTube: 2021-10-12 meeting
A
You can add your name to the list of attendees. Maybe we can wait another half a minute or so.
A
Okay, I think we can start. Okay — what does it mean, "yellow system resources"? Okay, yeah. So, we'll go through the agenda. It's very simple this time, so we may have time for some discussion once I give you the quick update. We released 1.2 beta 1, which is the first beta in the 1.2 series. We did four alphas, and now we've switched to betas.
A
The intention is that we'll continue to do betas for probably another month and a half, and we'll switch to RC at the moment the spec gets into better shape, because the spec is still experimental.
A
So once the spec is marked as feature-freeze, then we can switch to using the term RC, because then we won't be adding new features. We do not have an exact date for when that would happen. I'm still hoping it would be in November, but if there is any change I'll keep folks updated here. So yeah — I just realized I don't think I included this PR. There are some changes; in fact, I also missed the AddMeter with wildcard support. So these two missed the boat.
A
I did the snapshot before these PRs were added — or, I think this one is still not merged, but this one was not in — so we'll address it; we'll include it in the next release.
A
So, having done the first beta release, I'm just listing the major issues we have to tackle in metrics. Views still requires two more features: changing aggregation, and exemplars. We have an issue open for this, and Alan has a draft PR for the ability to clean up, and then multiple-reader support. And there might be some OTLP fixes, because it's been a long time since we updated the proto files, so there might be something we need to take care of.
A
When we pull in the latest proto files — there were some histogram additions as well — we need to see whether we need to make changes in the actual exporter. And then, even though it's low priority, it might be a nice-to-have for a lot of customers: we may need some investment in the instrumentations. I think we have HttpClient and ASP.NET Core, and that's it. We should be at least covering all the instrumentations we have for tracing: gRPC, maybe Redis, and ASP.NET and SqlClient.
A
Those are nice-to-haves, if we have time for that. So, if there are no questions, I want to spend some time on the cleanup issue, because that seems the biggest one. Everything else is really specced out — we just need to go and implement it — but this seems a little bit trickier; it might require changes in the structure itself. And I intentionally did not list the other performance optimizations, because there are a few optimizations which we wanted to do and measure, but those should not really affect the end user directly. It should all be internal optimization, so it should have less impact. So I did not explicitly list it here, but we already have an issue for it. So if there are no questions, I think I'll let Alan walk through the PR. Even though it is a draft, we can just quickly go over it and see if there is feedback right away.
B
Yeah, sure. Do you just want to continue sharing your screen? We'll just kind of walk through it. So, for some context, just in case people aren't aware: the idea here is that before this PR, the number of metric points was capped at 2000. A metric point is the unique set of keys and values for a given metric — capped at 2000 for every single metric.
B
So this PR still preserves the 2000-metric-point cap, but it makes those points reusable if they go stale. Up to this point, in my initial stab at this, I've tried to maintain the use of most of the data structures that we already have in place.
B
So I didn't touch those so much. What I'm attempting here is — I've introduced this free list that... yeah, maybe this is a good place to start. I've introduced this metric point status on the metric points. Originally it's Unset, but as it gets used and metric values get updated, the status changes to UpdatePending. After a collect, the status gets changed to NoPendingUpdate, and then on the subsequent collect...
A
However, you don't really mean "update pending" there — it really means "no collect pending", because an update happened but we haven't snapshotted and exported it. So it's more like "collect pending", not "update pending".
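The status lifecycle being discussed can be sketched roughly as follows. This is a hypothetical illustration only — written in Java rather than the project's C#, with the "collect pending" naming suggested above; none of these names are the actual SDK types.

```java
// Hypothetical sketch of the metric-point status lifecycle discussed above.
// UNSET -> COLLECT_PENDING on update; back to NO_COLLECT_PENDING after collect.
enum MetricPointStatus {
    UNSET,              // slot has never been used (or was freed for reuse)
    COLLECT_PENDING,    // an update happened since the last collect
    NO_COLLECT_PENDING  // snapshotted/exported; no update since then
}

class MetricPoint {
    MetricPointStatus status = MetricPointStatus.UNSET;
    long value;

    // Hot (update) path: record a measurement and flag the point for collect.
    void update(long delta) {
        value += delta;
        status = MetricPointStatus.COLLECT_PENDING;
    }

    // Collect path (single-threaded in this discussion): snapshot and clear the flag.
    long snapshot() {
        long snapped = value;
        status = MetricPointStatus.NO_COLLECT_PENDING;
        return snapped;
    }
}
```

A point that stays in NO_COLLECT_PENDING across collects is the "stale" candidate the PR wants to reclaim.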
A
And when you say we remove a point, you are basically marking the metric point with some status so that it can be reused — but we are not touching the dictionary, right? So if a particular key combination arrives and then doesn't arrive for a long time, which means the metric point is probably gone, the dictionary would still have an entry. So we still need to modify the dictionary to either remove it or—
B
Yeah, so I've actually done that — I haven't pushed it up yet. Okay, yeah — the place that I would do that, you're just about there; scroll down just a little ways. So this is the take-snapshot method here of the aggregator store, and when a point is a candidate for removal, that's where we free it up, and in there I am also now removing the entry from the dictionary. So the big issue left to be figured out — again, this is all using the same data structures, and it's all being done in place.
B
Cijo, you'd suggested maybe there's a way that on collect it can have a separate buffer that it can do all of its work on. In this PR so far, I'm doing it all in place, so both the update and the collect operate on the same underlying structure. So there are some concurrency issues here which I haven't gone in and actually dealt with yet, but if we continue down this path, I would have to introduce some concurrency control.
A
Including the update path, right? Because this is the snapshot, but it should be locking against the update path — so when we do the actual update, we would be forced to take some locks.
A
Yeah. But like I said, I never really fully explored the idea of using two buffers. One thing I'd note — I mean, I explored that idea for a completely different reason, not for cleanup purposes.
A
The idea there was: you have two buffers, or two arrays — call them active and inactive. Active is the one where the fresh updates are made, but when we do a collect, we swap the active one to the second one, so that all further updates happen on the second one. Then the collector that is collecting has exclusive access, without worrying about any fresh updates.
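The buffer-swap idea above can be sketched as follows — a minimal, hypothetical illustration in Java (the project is C#; all names are invented). It uses a coarse `synchronized` for clarity, whereas a real implementation would aim to keep the hot path lock-free, and it deliberately leaves open the reconciliation problem discussed next.

```java
// Sketch of the active/inactive double-buffer idea: updates land in the
// active array; collect swaps the arrays so it can read its buffer exclusively.
class DoubleBufferStore {
    private long[] active;
    private long[] inactive;

    DoubleBufferStore(int capacity) {
        active = new long[capacity];
        inactive = new long[capacity];
    }

    // Hot path: record into whichever buffer is currently active.
    synchronized void add(int index, long delta) {
        active[index] += delta;
    }

    // Collect path: swap buffers; the returned (now-inactive) buffer sees
    // no further fresh updates, so timestamps/values stay consistent.
    synchronized long[] swapForCollect() {
        long[] snapshot = active;
        active = inactive;
        inactive = snapshot;
        return snapshot;
    }
}
```

Note the unresolved part: values accumulated in the other buffer while it was inactive still have to be merged back after the next swap, which is exactly the reconciliation concern raised below.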
A
But when we do the swap again, we have to make sure that all the things which got updated in the other one, while it was inactive, get reflected back. I think I probably wrote some of this down... yeah, I forget where I wrote it down.
A
No, not here. It's very difficult to find where I put it. Anyway, if I find it, I'll share it as a comment here. But again, it's not fully refined. My goal at the time was to avoid any fresh updates while a collect is going on, so that you'll always get an accurate timestamp — because for all the things which you snapshotted, there won't be any fresh updates at all.
B
I haven't fully thought it through in my mind either, but yeah. On the last comment you just made — I don't know yet, but I think that during that reconciliation, once the collect is done, if we had that separate active/inactive thing, I anticipate we would need some locks.
A
Yeah. Another thing I wanted to ask generally: as of now, we are using dual data structures. One is the actual data structure where we store the metric points, which is the array created statically at construction time; and then, to do the lookup, we have a different data structure, which is the ConcurrentDictionary. One side effect, or one inefficiency, here—
A
I only started paying attention very recently to the places where we are potentially wasting memory. So that was something we might be able to solve, but I just don't know yet how to do it. Maybe while you're working on it, I can also share some ideas.
A
Maybe another idea is: we completely get rid of the metric point array, and instead purely use the dictionary. When the dictionary is constructed, I think we have an option to provide the initial capacity, so that would somewhat achieve what we have here: at the construction time of the store, you'd create a backing array within the dictionary capable of holding as many items as you anticipate — if it is 2000, then 2000.
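The "dictionary-only" idea can be sketched like this. Java's `ConcurrentHashMap` stands in for .NET's `ConcurrentDictionary` here purely for illustration — both accept an initial-capacity hint at construction; the class and field names are hypothetical, and 2000 is just the cap figure from the discussion.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: size the map up front so its backing table is allocated once at
// construction time, mimicking the pre-allocated metric point array.
class MetricPointMap {
    static final int MAX_METRIC_POINTS = 2000;

    // initialCapacity is a sizing hint, not a hard cap, so the 2000-point
    // limit still has to be enforced explicitly on insert.
    final ConcurrentHashMap<String, Long> points =
            new ConcurrentHashMap<>(MAX_METRIC_POINTS);

    boolean record(String tagsKey, long value) {
        if (!points.containsKey(tagsKey) && points.size() >= MAX_METRIC_POINTS) {
            return false; // over the cap: drop the measurement
        }
        points.merge(tagsKey, value, Long::sum); // atomic accumulate
        return true;
    }
}
```

As noted next in the discussion, the hard part of this design is not the updates but the collect: iterating the live map while updates continue.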
A
I mean, I tried that, but I could not make much progress, because for collect there is no easy way to iterate over the dictionary. If you remove the metric points array, then the updates are happening on the same dictionary we are trying to take a snapshot of. I couldn't find any way — even for ConcurrentDictionary.
A
There was no way to get a snapshot of all the values, because while we are collecting there could be more updates, so I wasn't sure whether we would get an accurate one or not. So that's—
A
It might allocate something, but it's only on the collect, so it's not as bad. But yeah, that's something we can look at — because the original reason I put in a separate data structure was that the collect becomes very straightforward: you just go through the metric points without worrying about the dictionary, and the enumerator we have knows how to skip any metric point that is not being used.
A
So that was the idea behind the separate data structure. But this PR made me really think about it a bit more, because one key issue I find now is the duplication: the string array and the object array — the key-value pairs — are now duplicated in the dictionary and in the metric point, yeah. So when—
A
You have to clean up from both places, and do the same when we add it back. So that's something we want to discuss. Michael is also here — Michael, do you have any, I mean—
A
Have you ever had any thoughts about eliminating this metric point array completely, and instead sticking with just the ConcurrentDictionary, with its value being — instead of a pointer or index within this array — directly the metric point? Then we create the ConcurrentDictionary with the initial size up to what we want, and during collect we take a snapshot by doing a ToArray or something, so we get an independent copy of the things, and we can collect on that list without really clashing with the existing ones. So — any obvious concerns you see if we go down that path?
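The "snapshot by copy" proposal can be sketched as follows — again a hypothetical Java illustration (the project is C#). Note the caveat raised earlier in the discussion: copying a concurrent map while it is being updated is only weakly consistent (entries mutated mid-copy may or may not be reflected), and the copy allocates on every collect.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: copy the live map's entries into an independent structure at
// collect time, so the collector iterates a stable copy while updates
// continue on the live map (the ToArray-style idea mentioned above).
class SnapshotCollector {
    final ConcurrentHashMap<String, Long> live = new ConcurrentHashMap<>();

    Map<String, Long> collect() {
        // Independent copy: mutations to 'live' after this point
        // do not affect the returned snapshot.
        return new HashMap<>(live);
    }
}
```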
C
What I was just typing is: why don't we make the batch-to-process store the index instead of an array of metric points? If we just store the index there — just the "i" parameter, basically, of the metrics we're looping over — then we could always get it again as a ref. If you scroll up a bit, like line 104, that kind of makes sense.
C
Yeah. I'm not totally sure we need MetricPoint to be a struct; it might be better suited as just a class, since it's stored in an array anyway. So even though we're creating these structs, we're putting them in arrays — allocating them on the heap anyway, essentially. I haven't done enough analysis to know if there are other places where we get some benefit from doing that. I think my only—
C
It might be better just to make it a class. It would slow things down a little bit, because every time you new one up, it's going to construct it on the stack and then allocate on the heap and copy it over — but we only do that once, so it's probably not the end of the world. I don't know; I'll do a little bit more analysis.
A
Yeah, okay. Can you just leave comments — "we should be able to use this", "restructure this thing"? Because the key concern, or key problem, we want to solve is: we want to reuse a metric point if it is stale, and we don't want to remember it forever like we currently do. The current design doesn't allow for removal.
A
So that's what Alan is trying to do: put some marker on each and every point to indicate whether it is being used or is potentially reusable by something else. As long as making it a class doesn't cause us to allocate — because with the struct, we don't have to store or create anything once the initial array is done: the array is on the heap, and it has enough space to store as many metric points as we need.
A
So if we can do a similar thing with a class, we should be fine. I'm not really worried about that, though — I'm only worried about whether we can reuse it rather than recreate it. Because it's typically the case that a user starts with some particular key-value combination, it may be absent in one collect cycle — so we go and clean up everything — and in the next one they come with the same key-value pair again.
A
So we would be allocating it again. So we should be somewhat conservative in how we remove it, and hopefully we should be able to reuse it. If a metric point which was active comes back, we don't want to spend the entire effort of recreating it; if we can just mark it from stale back to active, that would be much better than creating a whole new point.
A
But yes, we need to see what the best way is to orchestrate this whole thing — at most taking locks within the snapshot path, and not in the hot path. Okay, yeah — thanks for sharing all these ideas. I'll also take a deeper look and put some comments, and hopefully we can come up with something.
B
Just to share one other kind of naive idea that came to my mind: I think it works okay for delta metrics, though I don't think it would necessarily work well for the cumulative case. For delta metrics, it's conceivable that when an export happens — when we have this active/inactive list — we just start from scratch.
B
When the new collect interval starts, we just forget everything, basically. I guess the downside to this approach would be that at every single collect interval, any very frequently recorded metrics would need to be re-established in the active list.
A
I see, okay. So when you said completely start from scratch, did you mean just creating a fresh metric point array and not touching the actual dictionary? Or did you intend to clean up the dictionary as well? Because if you are not wiping the dictionary, then whatever index it had in the original structure, we might be able to use the same thing, so we don't need to reconstruct the dictionary itself.
B
Actually — yeah, no, I was thinking that the dictionary would start from scratch.
A
Yeah, so — I think I put a comment somewhere in the benchmark: one of the biggest contributors to the performance cost is this lookup, and the allocation. Basically, you get the keys as a read-only span, so you have to copy the keys and values into—
A
We currently use a thread-local string array and object array, and then we do the lookup. If it's not there, we have to do the insertion, and in that insertion we do take an explicit lock — somewhere down here you would find that, if it is not there, we actually take a lock.
A
So if you clean up the dictionary, then we'll be taking that lock forever. With the current structure, you take the lock when you see a key-value combination for the very first time, and you don't remove it from the dictionary — it's always there.
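The lock-on-first-sighting lookup pattern described above can be sketched like this — a hypothetical Java illustration (the project's ConcurrentDictionary `GetOrAdd` behaves similarly to `computeIfAbsent` here; all names are invented).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: the read path is lock-free; internal locking happens only the
// first time a tag combination is seen, when a slot index is reserved.
class PointIndexLookup {
    private final ConcurrentHashMap<String, Integer> indexByTags =
            new ConcurrentHashMap<>();
    private final AtomicInteger nextIndex = new AtomicInteger();

    int lookUpOrAdd(String tagsKey) {
        Integer existing = indexByTags.get(tagsKey); // hot path: no lock taken
        if (existing != null) {
            return existing;
        }
        // Slow path, first sighting only: computeIfAbsent locks internally
        // while reserving a slot in the metric point array.
        return indexByTags.computeIfAbsent(tagsKey, k -> nextIndex.getAndIncrement());
    }
}
```

This is why wiping the dictionary every interval hurts: every key would keep hitting the slow path, paying the lock cost in the hot path "forever", as noted above.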
A
So if we can somehow keep the dictionary as-is, so that we don't have to pay the cost of taking the lock in the hot path, but somehow clean just the array — the array can be cleaned up. But of course, I'd have to think a little bit more about whether that's even feasible, because cleaning up the dictionary would be, I believe, very expensive.
A
I mean, Reiley did mention some time back that we may need a custom data structure, as opposed to just using the ConcurrentDictionary, because it's a very general-purpose thing and we have a very specific need. We know that we need to clean up, and we know that the collect can happen from a single thread only. So, based on all these requirements—
A
We might have to redesign the data structure. But I was hoping that we would be able to get away with just using the ConcurrentDictionary, at least for the initial release, and then design a different data structure which takes into account all our special requirements.
A
But yeah — if it's absolutely needed, we'll have to spend some time on that. Anyway, keep offering ideas, either as a comment or in the description somewhere, so we won't lose track of them. Even if the person who proposes an idea can't see a workaround, someone else might. So just share the ideas, even if they're not perfect — we might be able to keep the performance intact—
A
—while still being able to forget the points. I think the other thing you mentioned, for cumulative — I don't think it's a high enough priority. Even though we want to forget cumulative points at some point, it's not really a big issue, because based on my understanding, the existing Prometheus client does the same: in Prometheus it's always cumulative, so it always remembers all the points since the beginning of the process.
A
So it should not be a major concern. I would prefer to optimize the case for deltas, because that's where the user explicitly says "I'm only interested in deltas" — they kind of expect us not to remember anything beyond a certain point. For cumulative, they are explicitly asking us, "hey, do the cumulative aggregation," so it can be forgiven even if we keep track of everything since the beginning.
A
2000, yeah — whatever magic number we come up with. And potentially, we will need to allow this to be customized, likely using views: for each metric, the user can configure with a view how many maximum time series they want to keep track of.
A
Yeah, I don't know whether the Prometheus library allows setting this limit. It strongly encourages people not to have high cardinality, but I don't think there were any limits like that. But I worked on application-side metrics where we do allow the user to specify the maximum number of time series they want to keep track of, and if you exceed it, we start dropping the data. So, irrespective of what we do here for the cleanup and reuse—
A
We still quite likely want to expose this via a public API for the user to change, because we cannot really predict what the right default is. I put 2000 in just as a magic number, right? I don't think 2000 is the right number; I don't think 200 is the right number either. So quite likely we'll have to let the user configure it, most likely using views.
A
Diagnostics around it — you mean, like, from the SDK side we'll give some hints on how many you are using? Yeah, something—
B
Like that, either via logs or—
A
Okay, yeah, I think we should be able to do that. For metrics specifically, we wanted to try a lot of other things to let users know what's going on. One of the things we were considering was: whenever we drop a metric because we hit the limit, we increment a number — "metrics dropped due to data point limit exceeded", or something — and when we shut down—
A
We just emit a warning message if we have dropped a non-zero amount of telemetry. I think we added that for traces very recently, in the batch export processor.
A
If we drop some telemetry because the buffer is full, we keep a count of it, and we log it in the shutdown phase. So maybe we can do something similar for metrics.
So whenever we drop, we keep a count of how many we are dropping, and at shutdown we just print it — I mean, not print it: we emit it as a warning-level event, so people can subscribe to it and see whether they are actually hitting the limit or not, and based on that they can tune it accordingly. But that would still require us to first expose this as a configurable thing, not a hard-coded one. Yeah, that's a good thought. By the way, I don't think the spec would be coming up with anything this year—
A
—for this, so we may have to do something on our own. Because based on what I see as pending in the specification repo for marking metrics stable, there's no mention of the ability to decide how many metric points to keep; it just has some guidelines. So we have to make a decision.
A
Should we expose it? If the decision is yes, then we might be exposing something which is not part of the spec today — and then what if the spec adds something in the future? So we have to expose it in such a way that we won't be too restrictive, so we can adapt it to the spec's needs if the spec ever exposes it. And if we choose not to expose it, then we need to decide what the default would be.
A
So I think we can come back to it — these are things we can do even in the RC state. We should be able to come back to it once we fix the fundamental cleanup and reuse part. So I encourage everyone to take a look. I think the description should have some — yeah.
A
It should describe the reason why we are doing this — or maybe I should link to the spec, because the spec has a very detailed explanation of why we want to do this. Maybe it's a good read for anyone who wants to.
A
Oh, it's this section — yeah, it has all the details on why remembering — or rather, why not forgetting — is a bad thing. So I will just link this here, so if you are trying to look at the PR you'll have some more context.
A
Okay, yeah — anything else to be discussed right away? Oh, one question I wanted to raise: there are not many people here, but I just want to ask — if anyone has cycles, please take one or more instrumentations and let us know you're working on it. It's not required by the spec, and there won't be any semantic conventions marked stable by the end of this year.
A
But the idea is: if you have a lot of instrumentations for metrics, just like we have for traces, it would be much easier for users to get started. They just enable the provider and all the instrumentations, and they get all the metrics for free without doing anything major. So it's lower priority.
A
But if anyone is willing to take some cycles and contribute, it should be much easier than the tracing part, because for tracing we had to deal with some legacy tracing and do some mapping. For metrics it's very straightforward: you create a meter, subscribe to whatever callback we have, and emit some metrics out of it.
A
Okay, I think that's all from my side. I intend to do another release when Michael is done with the Prometheus exporter — this part is already done — and Reiley is doing some cleanup for the exporter, so once we have that, we should be able to release the next one. I won't be working on these two features this week, so they won't be part of the next beta, but that's fine — we still need the primitives to be in great shape.
A
Okay, yeah. I'm also creating some issues for logs as well — a lot of people are asking for it. Basically, I just created one issue to tackle all of them. And it doesn't look like it works as-is: I added the basic example here, and I already realized that it doesn't work as-is, because if you do not do the—
A
Yeah — include the FormattedMessage; otherwise nothing actually gets exported. So I just tricked it by setting it like this. We need to fix it, but it's not critical — if anyone has free cycles, they can go ahead and modify the exporter to deal with all these things.
A
I did not want to mention your name — I know you're working on the metrics thing, and quite likely that's taking all your cycles. For logs, I am hoping that someone else from the community will be able to step up, because there were a lot of people really asking for it, and some of them mentioned they might be able to contribute.
A
So let's just wait for a week to see if anyone else has bandwidth to work on the log exporter for OTLP, so that you can stay undistracted from the logging work while you are on metrics. It's totally up to you — if you want to go ahead and do it, that's fine. I just want to see whether there are other folks who — I don't know, yeah.
A
The number of likes, or upvotes, on this issue was a big surprise — a lot of folks. And I think there were a lot of asks in the Slack channel as well for OTLP log support. So maybe there are other folks who are really motivated to make it better, and we can let them do that. If that happens, that's nice; if not, we'll come back and do it. Either way, this is not going to be stable by the end of this year — but metrics is, based on the current plan.
B
Yeah, fair enough — I'll wait a week. That boolean — I submitted that PR a long time ago, so it has probably changed.
A
In the meantime — yeah, that's fine. If you follow the current example — the example I intentionally pushed does include the FormattedMessage, so you would actually see something. I was running it when we did the release on Friday, and it did okay. However, if you do not do the formatted one, what you end up getting is "unknown attribute". We can take a look at it offline — but anyway, just sharing an update on the log exporter. So, nothing else—
A
I think we can end early and see you again next week. Meanwhile, I encourage everyone to look at the PR for removing metric points, which would probably be the most complex — and the most interesting — one in the whole metrics work right now. All right, thank you — see you next week. Thanks, AJ.