From YouTube: 2021-09-30 meeting
C
Let me share my screen. I'll try to make this short. First, a heads-up: moving forward we won't have the Tuesday metrics meeting. In case you missed the announcement, the Tuesday metrics meeting is going away, so we'll only have the Thursday meeting, and on Tuesday we'll reserve 20 minutes for metrics. If there's anything outstanding, we can use the OTel spec meeting.
C
So there are fewer things to do. I hope that by the end of this year we can also remove the Thursday metrics meeting, so we can just use the Tuesday meeting and give you a heads-up on the current progress.
C
So if you have anything that is not listed here, my ask is: please read the issue and discuss it in this forum, to make sure we don't miss anything outstanding. Coming back to the open PRs: I think multiple PRs got merged, and the spec just released 1.7, I believe.
C
No hurry. So, for min/max: I think we discussed this last time, and we will merge this one, knowing that the collector will need some time, probably a month according to the estimate from Bogdan. But that will be the collector's thing, regardless of the spec work. Once this PR gets merged, I'll work on the SDK spec update so we include the min/max configuration. So, do you think we're ready to merge this?
C
I think Bogdan is saying he wouldn't block us, but he wouldn't give approval either. But it seems to be okay, right?
D
The thing I'm thinking right now is that we've already merged the exponential histogram, so if we wanted to make a release of just the exponential histogram, we could do that. But I feel like, for the bulk of the people present, it's probably more important to get min/max.
D
So I guess I would support merging and releasing, and I think it's going to be a tough collector release cycle.
C
Okay, and the next PR I have is this one.
C
And for people who haven't seen this document before: this is not a spec; we're trying to give some suggestions here. That means, if there's something we think is important, we want people to at least understand that part. For example, we want people to take care about memory usage instead of ignoring a cardinality explosion and blowing the application up. Then we'll put it here, but only as a suggestion for the moment.
C
Eventually, I think we can get the feedback and see if something turns out to be super important; then let me change the wording, remove that section from the recommendation, move it to the spec, and say this is a "must" or a "should". That's the thinking, so please take a look. And I want to go back to the board; there are some issues I want to talk about. So, this "attribute limit is not supported" one.
C
I think we already made it clear in the spec that currently the attribute limits do not apply to metrics. I'll show you the spec, and I want to get some consensus here: are we ready to move this issue out of the stable-release scope and just move it to after GA, or should we still keep it and do something about it?
C
We haven't figured out, if we want to limit that, what the behavior should be: do we drop something randomly, or do we allow people to define some rules so we can drop the less important ones? You can see here we're saying the attributes belonging to metrics will not respect the common rules for attribute limits. And I replied here: in this guideline I put something as a suggestion, and I didn't put it as a requirement.
A
Sorry, this is limited to tracing, right?
C
The attribute limits as currently specified are intended to cover the entire spec, so all the signals. But for metrics we're saying, okay—
C
We don't have a good understanding; we need further analysis. That's why we don't put that limitation in, and here's the reason: if we get some dimension that goes over the limit, do we drop the data? What does that mean? Do we just combine things, so we don't have this dimension anymore? And some people might say: can you give me a configuration, so I can say I care about the HTTP status code but I care less about something else, so you can drop the other dimensions?
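The idea floated here — let the user mark which dimensions must survive when a limit is hit — could be sketched roughly like this. This is a hypothetical helper, not part of any OpenTelemetry SDK; the names `enforce_limit` and `keep_keys` are made up for illustration:

```python
# Hypothetical sketch: when a metric's attribute set exceeds a limit,
# drop the keys the user marked as least important instead of dropping
# the whole data point.

def enforce_limit(attributes, keep_keys, max_keys):
    """Return attributes reduced to roughly max_keys entries.

    Keys in keep_keys (e.g. "http.status_code") are always retained;
    other keys are dropped first, in sorted order for determinism.
    """
    if len(attributes) <= max_keys:
        return dict(attributes)
    kept = {k: v for k, v in attributes.items() if k in keep_keys}
    for k in sorted(attributes):
        if len(kept) >= max_keys:
            break
        if k not in kept:
            kept[k] = attributes[k]
    return kept

attrs = {"http.method": "GET", "http.status_code": 200,
         "client.ip": "10.0.0.1", "user.agent": "curl"}
reduced = enforce_limit(
    attrs, keep_keys={"http.method", "http.status_code"}, max_keys=2)
# "http.method" and "http.status_code" survive; the others are dropped.
```

This only illustrates the priority question raised in the meeting; what the actual drop rule should be is exactly what was left open.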
C
"I don't care about those, but please never drop my HTTP verb or status code." So in this suggestion I mentioned explicitly that the definition of "efficient memory usage" is very subjective, and the application owner will know. So I gave some suggestions, but I don't think this is a very clear, good answer for the spec. That's why I put it here, in the guidelines, rather than the spec.
A
I agree on that, but if you're not limiting that, I believe you will end up in the same situation, just later.
B
With that, do we have any means of following up with further analysis? Like, do we have any users we could reach out to, to see what they think the default should be?
C
I think so far there are three languages making faster progress on metrics, and they should have customers. I know for a fact that on C++, now that we do have customers, we're not hearing many different requirements. We can collect those requirements, but I figure we wouldn't have enough time to consolidate all of them and still hit the metrics stable timeline.
D
There are a number of GitHub discussions discussing variations on this topic, though. Like: are there maybe two categories of attribute and/or resource — the required, mandatory ones and the optional nice-to-haves? And if we had a specification of a boolean we could set for the different attributes, then maybe we could have a proper default behavior. That, maybe, is what we've been discussing in various issues; it comes up both for resources and for attributes, so there is some history here.
C
Yeah, and many languages put some solution there: they allow people to specify what should happen if they exceed certain limits. I know in .NET people just write an internal log and drop the data by default, and you can change the limit — you can say "I have more memory" — so you can change the actual upper limit.
C
Okay, there are other things I want to get some ideas on, so we'll talk about this one.
C
And my guesses were the same: when we reach the cardinality limit, we start to drop something, or we can do something specific, like letting the user specify which dimensions are more important to them. And when people use some efficient aggregation — like they record the delta value and export the delta value — then they only have to remember things that happened in the last collection cycle.
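The memory point here can be sketched with a toy aggregator. The class name and shape are assumptions for illustration, not a real SDK type; the point is only that with delta temporality all state can be discarded at each collection, so memory is bounded by one cycle's attribute sets:

```python
# Sketch (assumed names, not a real SDK class): a delta sum aggregator
# that forgets everything after each collection.

class DeltaSumAggregator:
    def __init__(self):
        self._current = {}  # attribute-set -> sum for this cycle only

    def record(self, attributes, value):
        key = tuple(sorted(attributes.items()))
        self._current[key] = self._current.get(key, 0) + value

    def collect(self):
        # Hand back this cycle's points and reset: no state is retained.
        points, self._current = self._current, {}
        return points

agg = DeltaSumAggregator()
agg.record({"route": "/a"}, 5)
agg.record({"route": "/a"}, 3)
first = agg.collect()   # {(("route", "/a"),): 8}
second = agg.collect()  # {} -- nothing carried over from the last cycle
```

A cumulative aggregator, by contrast, would have to keep every attribute set it has ever seen, which is where the cardinality concern bites.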
C
Okay, this one. I want to get feedback from Josh here explicitly, because he has talked a lot about Go's memory model here.
C
So the question here is: if people report cumulative and nothing happens in a collection cycle, do you report the same thing, or can you omit it? And also, do you have to report the metric at all, or can you just skip this cycle, and then next cycle, when you see some update, report the data? It seems Go lets the user configure that.
D
Yeah, that's true, I put in that support too.
D
I guess I would say that option fell out of trying to support both a statsd-style and a Prometheus-style exporter: those two exporters behave quite differently, in a way that's almost the same question as whether you need to keep state in order to do a cumulative conversion. But not quite, because of this distinction for gauges, as to whether you need to send an old gauge or not.
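The statefulness distinction described here can be illustrated with a toy delta-to-cumulative converter (hypothetical, not an actual exporter): serving a Prometheus-style cumulative view from delta inputs forces the pipeline to keep a running total per series, while a statsd-style delta path can forward each increment and forget it:

```python
# Sketch: the state you are forced to keep when converting delta
# measurements into cumulative output.

class DeltaToCumulative:
    def __init__(self):
        self._totals = {}  # series key -> running cumulative total

    def push_delta(self, key, value):
        """Absorb one delta; return the cumulative value so far."""
        self._totals[key] = self._totals.get(key, 0) + value
        return self._totals[key]

conv = DeltaToCumulative()
conv.push_delta("requests", 4)  # cumulative so far: 4
conv.push_delta("requests", 6)  # cumulative so far: 10
```

Note `_totals` never shrinks: every series ever seen stays resident, which is the memory cost a pure delta exporter avoids.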
D
Why is that? Most of the open-source world just doesn't have the infrastructure to deal with deltas, and if it did, there might be more of a reason to put a standard option in to support delta. The Go SDK does support delta OTLP export, but that's more because Lightstep does as well, and I wanted to see it and promote it. And that's what you get from a statsd library, if you're used to it, and it's more capable of supporting cardinality. That's all.
D
You know, all of that was designed before there was a view API, or a reader interface, or an exporter interface. So I would say this is probably something you specify on the exporter or the reader, essentially. And, let's see, it could be a view configuration setting.
C
Yeah, so I'm guessing the exporter can give some information when it is being registered on the metric reader. So the reader will understand whether the exporter supports cumulative, delta, or both, and what its preference is; and, if it's cumulative and there's no update, whether it should send the same data or can just skip it. All of these are exporter-specific scenarios, and probably for OTLP we need to spec that out. I know there are open issues about people wanting OTLP by default.
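The registration-time negotiation being proposed might look roughly like this. It is a sketch under the assumption that the exporter advertises a supported set and a preference; none of these names come from the spec:

```python
# Sketch: reader picks a temporality based on what the exporter
# declares at registration time.

def negotiate(exporter_supported, exporter_preference):
    """Pick the temporality the reader should produce for this exporter."""
    if exporter_preference in exporter_supported:
        return exporter_preference
    # Preference not usable: fall back to anything the exporter handles.
    for t in ("cumulative", "delta"):
        if t in exporter_supported:
            return t
    raise ValueError("exporter supports no known temporality")

negotiate({"delta"}, "delta")                      # -> "delta"
negotiate({"delta", "cumulative"}, "cumulative")   # -> "cumulative"
```

Whether "skip unchanged cumulative points" is a second, separate capability flag is exactly the open question from the discussion; this sketch only covers the temporality part.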
B
For one of the projects that maintains a Go SDK, we have an option to set this, for the reason that some vendors don't work well with sparse metrics.
C
Okay, that's all the outstanding topics I wanted to talk about; those are the items I found as I scanned through the board, I think.
E
I had a question that the last conversation kind of made me think about. It's tangential to the memory-mode thing, but just the discussion around that being configurable at the exporter level versus the view API: something I noticed between the Java and .NET SDKs currently is that aggregation temporality in Java is configurable via the view API, and in .NET we've done it at the exporter level.
E
I'm just curious, just for my own education: is that something we plan to converge on, or is that just a difference in how Java and .NET have approached this?
C
My thinking is that at the instrumentation level we need to give people flexibility, so they can decide which temporality they use, instead of the exporter deciding. If you have 10 different instruments and, whatever view you configure, the exporter just treats everything as cumulative, that's not enough; the exporter can state its preference, but if the exporter supports both, then I think we should support per-view configuration.
E
Yeah, so then your thought is that the plan for .NET would be to do something similar to what Java has done.
C
Yeah, my thinking is: the default is just providing a convenience. The exporter is basically saying, "if the user does not specify anything, then I have a preference" — or "I don't have a preference; give me whatever makes sense, whatever is more efficient, or whatever the SDK would like." But if the exporter is saying "I can only support delta," then there is no reason for the user to specify cumulative in the view configuration.
C
In that case, we probably should either give some internal error or, during setup, just fail the SDK initialization. And if the exporter is saying "I support both, with whatever preference," and the user specified in the view API "I want this one to be cumulative," then, I'm pretty sure, that should be respected.
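The resolution rule just described could be validated at SDK initialization along these lines. This is a hypothetical function, not the spec'd behavior; it only encodes the three cases from the discussion (no view request, unsupported request, supported request):

```python
# Sketch: resolve the temporality for one stream, failing loudly at
# setup if the view asks for something the exporter cannot produce.

def resolve_temporality(view_request, exporter_supported, exporter_default):
    """Return the temporality to use, or raise at SDK initialization."""
    if view_request is None:
        return exporter_default  # user didn't ask; use exporter preference
    if view_request not in exporter_supported:
        raise ValueError(
            f"view requested {view_request!r} but exporter only "
            f"supports {sorted(exporter_supported)}")
    return view_request          # exporter supports it; respect the view

resolve_temporality(None, {"delta"}, "delta")  # -> "delta"
resolve_temporality("cumulative", {"delta", "cumulative"}, "delta")
# -> "cumulative": the exporter supports both, so the view wins
```

Raising at initialization (rather than logging and exporting silently converted data) matches the "fail the SDK initialization" option voiced above.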