From YouTube: 2021-10-19 meeting
A
Okay, let's start. Thank you, everybody, for joining. So, yeah, let's jump into the actual agenda. The first item belongs to Tigran, regarding schemas, please.
A
Oh, actually, Tigran is not around today. Well, I was going through his actual PRs. Basically, long story short: there was an OTEP that he created, 152, to define how schemas around semantic conventions could work, and now he's trying to bring the majority of that work into the actual specification.
A
The first one is trying to define the core; the latest two are just trying to add additions on top of that. We already accepted the OTEP, and it was merged a long time ago, so I could say that it's mostly acceptable stuff, but we need eyes on them. So please review them.
A
Okay, the next item is related to that: schema evolution. John, are you around?
B
We generate the semantic convention constants as a part of our build, based on whatever the latest published version of the semantic conventions, and hence the spec, is. But we realized that this is definitely a problem if the semantic conventions change: if names of the constants change, or if values disappear altogether. That means that if a piece of instrumentation is compiled against the older version, it will not work against the newer version, because the constants won't be there.
B
So
there
are
a
bunch
of
different
possible
solutions
to
this,
but
I
just
wanted
to
bring
this
up
and
bring
it
especially
up
with
people
with
with
language
teams
that
haven't
started,
implementing
a
generation
of
scientific
constants
convention
constants
that
this
is
something
we
need
to
figure
out
and
in
the
maintainers
meeting
yesterday,
ted
called
out
that
he
thinks
that
this
should
be.
B
So in Java we're still, you know, ruminating on different possible solutions to this, but it ends up being pretty messy no matter what solution you choose. Among the options we have considered so far: generating a new artifact altogether, with a new name that is based on the version of the semantic conventions that the constants are from.
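For concreteness, the version-named-artifact idea might look roughly like the sketch below: each published semconv version ships its own constants class (or artifact), so code compiled against an older version keeps resolving. This is a hedged illustration; the class names, version numbers, and attribute set here are hypothetical, not the real `opentelemetry-semconv` artifacts.

```java
// Hypothetical sketch: one generated constants class per semconv version.
final class SemanticAttributes_V1_6 {
    static final String HTTP_METHOD = "http.method";
    static final String NET_PEER_IP = "net.peer.ip"; // imagine this is later removed
    private SemanticAttributes_V1_6() {}
}

final class SemanticAttributes_V1_7 {
    static final String HTTP_METHOD = "http.method";
    // NET_PEER_IP is gone here; instrumentation compiled against V1_6 still
    // works only because the old version remains available as its own class.
    private SemanticAttributes_V1_7() {}
}
```

The cost, as noted above, is a proliferation of near-identical artifacts, which is why this option is described as messy.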
B
So that's also going to be messy. Another possibility we've kicked around, although we haven't attempted implementing it, is to generate files that represent the diffs between the semantic convention versions, using some sort of interface hierarchy or something; again, we haven't actually implemented it, but some way where we actually just generate diffs. So if there are no changes, then we don't have to generate any code at all, or we just generate an empty file with no changes in it.
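The diff-generation idea could be sketched as follows: given the constant maps of two published semconv versions, compute what was added and removed, and generate nothing when the diff is empty. All names here (`SemconvDiff`, the sample keys) are hypothetical, not a real codegen tool.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of "generate diffs between semconv versions".
final class SemconvDiff {
    // Constant names present in the new version but not the old one.
    static Set<String> added(Map<String, String> oldVer, Map<String, String> newVer) {
        Set<String> out = new HashSet<>(newVer.keySet());
        out.removeAll(oldVer.keySet());
        return out;
    }

    // Constant names that disappeared in the new version.
    static Set<String> removed(Map<String, String> oldVer, Map<String, String> newVer) {
        Set<String> out = new HashSet<>(oldVer.keySet());
        out.removeAll(newVer.keySet());
        return out;
    }

    // An empty diff means no generated code at all (or an empty file).
    static boolean isEmpty(Map<String, String> oldVer, Map<String, String> newVer) {
        return added(oldVer, newVer).isEmpty() && removed(oldVer, newVer).isEmpty();
    }
}
```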
B
There
was
one
other
point
I
wanted
to
make,
and
I
don't
remember
what
it
was
anyway.
I
just
wanted
to
bring
this
up
and
make
sure
that
the
languages
are
thinking
about
this,
considering
how
they
want
to
do
it.
I
guess
the
other
thing
we
want
to
do
is
we
probably
want
to
pull
out
the
the
semantic
conventions
constants
from
the
actual
java
api
sdk
repository
and
maybe
move
them
into
the
instrumentation
repositories.
B
This
is
more
appropriate
or
potentially
move
them
into
a
completely
separate
repository,
which
will
be
in
sync
with
the
spec.
Much
like
we
just
moved
the
protobuf
definitions
or
the
protobuf
bindings
into
their
own
repository
anyway.
So
food
for
thought.
We
don't
have
a
final
final
answer
to
this
yet,
but
we're
still
trying
to
figure
it
out.
A
Thank you so much for that. Does anybody want to comment on how this is working in other languages?
B
Yeah, I thought Tigran was going to create an issue in the spec for this, and it hasn't been done yet, I don't think. Or maybe Ted.
A
Right, right, yeah, I forgot about that. So probably that's also a good way to let people know. Also, everybody gets a reminder that there's an instrumentation SIG meeting today, so maybe attend that if you're interested.
A
Okay, thank you so much for that, John. Next point: metrics, with a 20-minute time box. I guess this is for actual discussion, but since not everybody has been part of this group, I don't know how members of the team feel about giving us an update before you actually discuss what has been happening.
D
Yeah, so the current focus is getting the metrics SDK to feature freeze. Originally, we were planning to get this to feature freeze by the end of the previous month. There were some delays, so we are trying to get to feature freeze no later than the end of this month, and so far there are three languages doing prototypes and actively giving feedback to the metrics SIG.
D
We can do it now or we can do it later. So if we're going to take time, I want us to be able to make a quick decision: do we put this out of scope for now and add it later, since it's not a breaking change anyway? And the second part is: if there is agreement, but there are minor things, infrequent merge conflicts, or any small comments that we need to respond to, please respond as quickly as possible.
D
And, by the way, there are some important features, like the bound instruments and support for the base-2 exponential histogram. Some of these features were discussed during the metrics SIG meetings, and we decided that we're not going to include them in the initial stable release.
E
Josh asks if anyone opposes calling it "min max sum count," and what I think we should do, if there's no opposition, is just call it "min max sum count histogram." I think it's good for the name to imply the type of data point that's going to be output from this. It's a bit of a mouthful, but it's super clear and unambiguous about what it does.
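For concreteness, the data such an aggregation accumulates can be sketched as below. This is a minimal illustration of the min/max/sum/count idea under discussion, not the actual OpenTelemetry SDK classes.

```java
// Illustrative sketch: what a "min max sum count" aggregation tracks per stream.
final class MinMaxSumCountAccumulator {
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;
    private double sum = 0;
    private long count = 0;

    // Each recorded measurement updates all four statistics.
    void record(double value) {
        min = Math.min(min, value);
        max = Math.max(max, value);
        sum += value;
        count++;
    }

    double min() { return min; }
    double max() { return max; }
    double sum() { return sum; }
    long count() { return count; }
}
```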
F
In the chat, Bogdan just suggested "min max histogram": all histograms always have sum and count. So I guess that's sensible; it's also shorter.
E
It's good enough. So I think, with "min max histogram": yeah, histograms always do include sum and count, but that name suggests that it's emitting them.
D
Then I would suggest that we use "trivial histogram," because this seems to be a very specific term people can search for, and also it is something that will catch people's minds, instead of "min max sum count." If later you want to add something, then you'd want to change the name, or you're stuck with the name forever.
F
We
we've
discussed
that
actually,
instead
of
using
degenerate
histogram,
we
chose
trivial
yeah.
Thank
you.
Tyler.
G
Okay, I am kind of confused by why "min max sum count" doesn't work. You know, adding "histogram" does imply an additional category of things you're going to impose: that it should be producing a histogram. I think that "min max sum count" is descriptive of what is there and what isn't there. I think that "min max histogram" is also descriptive, based on what Bogdan's saying as well: min max histogram, min max sum count.
G
Yeah, I understand, but what I'm saying is that may not always be the case. It's really tough to tell the future, and unless this specifically contains buckets that are a histogram binning of data, you don't know how this aggregation turns out in the end. People can also use this to turn it into an average, and they could just surprise you, because they wrote their own final representation of this.
G
Incorrect,
I
I
mean
I
agree,
but
I
think
somebody
training
getting
out
as
a
histogram
would
be
surprising
to
your
expectation.
What
I'm
saying
is
that
other
people
new
to
the
project
with
different
understandings
and
different
perceptions
of
like
how
they
want
to
see
data.
That
may
not
be
surprising.
C
Another option is to make min, max, and maybe even sum in the future, optional parameters or optional fields on the histogram itself, so this would become just configuration of that aggregation. So we have only the histogram aggregation, which has the capability of being configured to produce min, produce max, produce sum, and stuff like that.
E
Yeah, so we could omit this altogether and count on people configuring the existing explicit bucket histogram with zero buckets, and that would produce the same type of thing. With zero buckets... zero buckets plus one...
C
Bucket
one
bucket,
whatever
plus,
enabling
enabling
mean
and
max
or
whatever
we
call
it,
enable
min
max
as
a
property
for
doing
this
and
enable
some
disable
some
or
whatever.
We
call
it
because,
as
we
know
in
the
data
model,
we
want
this
to
be
optional.
So
I
feel
like
it's
more
or
less
an
option
on
the
histogram
to
produce
or
not
to
produce
this.
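The equivalence E and C are describing, that an explicit-bucket histogram with an empty boundary list degenerates to a single catch-all bucket and so (with min and max enabled) carries exactly the min/max/sum/count information, can be sketched like this. It is an illustrative toy assuming upper-inclusive bucket bounds, not the real SDK implementation.

```java
import java.util.Arrays;

// Toy explicit-bucket histogram: zero boundaries => one catch-all bucket,
// i.e. effectively a min/max/sum/count aggregation.
final class ExplicitBucketHistogram {
    private final double[] boundaries;   // sorted upper bounds; may be empty
    private final long[] bucketCounts;   // boundaries.length + 1 buckets
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;
    private double sum = 0;

    ExplicitBucketHistogram(double... boundaries) {
        this.boundaries = boundaries;
        this.bucketCounts = new long[boundaries.length + 1];
    }

    void record(double value) {
        // Find the first boundary >= value (upper-inclusive bounds assumed).
        int i = Arrays.binarySearch(boundaries, value);
        int bucket = i >= 0 ? i : -i - 1;
        bucketCounts[bucket]++;
        min = Math.min(min, value);
        max = Math.max(max, value);
        sum += value;
    }

    long[] bucketCounts() { return bucketCounts; }
    double min() { return min; }
    double max() { return max; }
    double sum() { return sum; }
}
```

With `new ExplicitBucketHistogram()` (no boundaries), every measurement lands in the single bucket, so the bucket count equals the total count and the only extra information is min, max, and sum.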
E
So I'd like, to the extent that it's possible, to keep them; they're cheap to produce. I think the only reason they're optional, at least in my head, is because there's no transformation available from cumulative to delta, and so...
C
The
default
behavior
can
be,
the
default,
behavior
can
be
always
produced
and
people
can
disable
them
if
they
want.
I
I'm
not
arguing
about
the
default
behavior
default.
Behavior
can
be
to
produce
them.
Okay,
I'm
just
saying
that
conceptually
it's
only
one
aggregation,
so
is
the
histogram
aggregation
that
corresponds
to
the
data
model.
That
has
some
arguments,
parameters
to
produce
or
not
to
produce
some
of
the
things,
and
by
default
we
can
define
what
what
we
do.
E
Yeah, so you're definitely right, and I brought that up in one of the comments in this PR: this is really just to improve the ergonomics, to make it simpler to configure, but you could configure this type of aggregation already with what's available from the explicit bucket histogram.
E
You know, if this is essentially syntactic sugar, it just makes it easier to configure this type of aggregation if it proves to be common. So, maybe going with your suggestion, we remove this, and if there's desire for simpler configuration of this later, we could add it. But for now, omit it.
E
Yeah, okay, so, just so there's no ambiguity: what I'm suggesting is getting rid of this min max sum count aggregation altogether and adjusting this PR so that, if you go and look at the code changes, the only difference would be that the min and max fields are added to the existing explicit bucket histogram aggregation. But there are no...
C
Other
changes,
correct.
That's
that's
fine!
There
is
only
one
small
discussion
that
I
want
to
have
if
we
produce
this
by
default,
but
I
believe
there
is
not
a
big
problem
to
produce
them
by
default.
I
don't
think
you
will
add
any
extra
overhead,
but
somebody
has
to
to
test
that.
E
Yeah, my thought on that would be that computing which bucket the data belongs in is always going to be more expensive than computing the min and max. So, in terms of overhead, it's going to be minimal.
F
So, therefore, if we agree to make min and max options for the histogram, the only remaining discussion is the name we use to configure a view. That's just the bare minimum; is that correct?
F
Okay. I mean, I was imagining that we'd have some sort of view alias, saying that to get a min-max-sum-count kind of behavior there's another name to use. But you're saying: just use the name "histogram" and configure it with less than or equal to one bucket, and you'll get what you asked for.
C
Yeah, I mean, I don't think we should ask users to explicitly add that one bucket; I think it should come by default, or whatever we call it. So then they don't specify any buckets at all. Essentially they just say "histogram" (new histogram, or whatever in the language), with no buckets at all, a nil slice of buckets, and that's it.
E
So it seems like there's a bit of consensus on this. So this is kind of a change in the scope of this PR: from adding a new aggregation, to not adding a new aggregation and instead reducing the scope and only adding these min and max fields. Should I just adjust this PR in place, or should I open a new PR?
E
No, the data model spec has already been updated to include these optional min and max fields. So this is just updating the SDK to be in sync with it, and then the changes to the protos are coming later, once we can figure out how to support optionality.
D
So here, based on the prototype feedback from Java and .NET, we tried to document the behavior, and I guess the question here is: do we allow a per-view-level temporality setting? And the initial response from most people would be...
D
We
can
do
that
later.
We
shouldn't
try
to
address
it
now.
It
seems
there
are
a
lot
of
folks
asking
for
this
feature.
So
to
answer
jack
your
question,
I
guess
we
we
need
to
see
like
who
who
really
need
the
preview
setting.
D
Like
in
in
the
donald
say,
I
believe
allen
was
asked
for
this
several
times
it
seems
new
relic
has
a
desire
to
support
that.
I
also
heard
from
the
javascript
that
people
want
to
say.
Maybe
I
can
take
all
the
instruments
based
on
the
type,
for
example.
If
it's
histogram,
then
I
want
to
specify
how
they
should
be
exported,
whether
delta
or
accumulated.
E
So I can address Alan West's case; he's a co-worker of mine, and we've been talking about this offline. We at New Relic don't have any interest in configuring this at a per-instrument level. As long as there's a way for the customer to configure it for all instruments, because that's what we're interested in, that passes our requirements.
E
I personally had interest in the view API configuring the temporality there, because there wasn't another mechanism to do it: at the time, there wasn't the ability to configure temporality at the exporter level. But, you know, I think that that's a cleaner way to configure it, and so that's my preference, and I don't think I'm out of line when I say that's Alan West's preference as well.
C
I,
what
I'm
hearing
so
far
is
people
usually
want
all
deltas
or
all
cumulative
based
on
the
back
end
exporter.
This
is
the
clear
use
case
that
we
have.
We
have
new
relic.
We
have
prometheus
on
the
other
side,
so
until
we
have
very
good
reasons
to
add
this,
at
the
view
level,
I
would
say
just
drop
that
support
for
the
moment.
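The exporter-level alternative being favored here, where each exporter declares one temporality preference that applies to every instrument, could be sketched roughly as below. The enum, interface, and exporter names are hypothetical illustrations of the idea, not the actual SDK API.

```java
// Hypothetical sketch: temporality chosen per exporter, not per view.
enum Temporality { DELTA, CUMULATIVE }

interface MetricExporter {
    // The SDK would honor this preference for all instruments.
    Temporality preferredTemporality();
}

// A delta-preferring backend, like the New Relic case discussed above.
final class DeltaBackendExporter implements MetricExporter {
    @Override public Temporality preferredTemporality() { return Temporality.DELTA; }
}

// A cumulative-preferring backend, like the Prometheus case discussed above.
final class CumulativeBackendExporter implements MetricExporter {
    @Override public Temporality preferredTemporality() { return Temporality.CUMULATIVE; }
}
```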