From YouTube: 2020-10-15 meeting
B
All right, you need to put some pictures on your walls. All right, they're literally on the floor behind me, just not on the wall yet. We got a house back in February and, you know, plus side to COVID, we get to take our time moving in. But, you know, at the same time.
C
Okay, I think we should start. Reihan is here from AWS and I'm excited to start talking more about collector components here. So, would you like to lead this off?
D
So the thing is, we wrote the AWS ECS container metrics receiver, but in my PR we have some attributes for task- and container-level metadata, and I was setting them at, like, the metric level. But I got a suggestion to make them resource attributes, since they are not changing for each request or data point. So I made them resource attributes. But now I see, like, the CloudWatch EMF exporter, which is also written by our AWS CloudWatch team.
D
So when I looked into the other exporters, like maybe the Honeycomb one, the New Relic one, and other ones, they are collecting all the resource labels as well as, like, the metric labels and making them, maybe, dimensions. So I am here for a suggestion: what do you think, as an expert? Could I go back to my old code and make them metric labels, or should I just talk to the CloudWatch team, like, hey, maybe you should follow the common pattern others are doing. We should also consider our...
C
Yeah, so if I understand the question, it's that we are producing data that includes metric events or metric time series with labels, and then these resources are a separate part of the protocol. You have an exporter that receives this OTLP data: should you consider those resource attributes as metric labels? Is that right? Okay. Does anybody here want to comment?
E
Is this from the exporter standpoint, or is this in the collector pipeline? Sorry, I missed that.
D
I would say maybe all the exporters are implementing this, like, combining these two things. I don't know where else we can combine them, but from my understanding, as of today, the exporters are doing so, I guess.
E
At New Relic we like to annotate that kind of stuff and make sure that there's a lot of information there. The cardinality thing is not as big of a concern for us, but having worked in Prometheus, I know that that's like a cardinal sin as well. So, having operated a Prometheus database before, that's a big problem eventually. And so, yeah, I think it's kind of just gonna have to be a per-exporter thing. Like maybe in a situation like that "up" metric, Josh, that we were talking about a little while ago, that could include a lot of the resource attributes, but yeah, maybe not every metric.
C
Yeah, I have mixed feelings, because there's also a question about cardinality, but sometimes even when cardinality is not a problem, just the sheer number of labels becomes expensive. So last week we were looking at some of the configuration in Prometheus where, for a Kubernetes pod, there are 15 or so metadata labels, and those are all available for you to apply through relabeling, which means the user is expected to go in and configure which of those 15 labels they want. And that's in the sort of pull-oriented situation.
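(Illustrative sketch, not from the meeting: the pull-oriented Prometheus configuration being described looks roughly like the following. The job name and the particular metadata labels kept are arbitrary choices for the example.)

```yaml
scrape_configs:
  - job_name: kubernetes-pods          # example job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opted in via an annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Promote a couple of the ~15 discovered metadata labels to real labels.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
      # All remaining __meta_* labels are discarded after relabeling.
```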
C
What we're doing here is talking about how to push metrics. So we are making this decision about which of the seven, or which of the 15, metadata labels to include as real labels versus which to drop, and it's hard, and I'm not sure the user wants to make that decision either. It's easy to say let's just include them all, and then at least the OTLP data has 15 resource labels and as many user metric labels as the user gave.
D
So, if I understand correctly, throughout the pipeline we have two types of labels, and it's true for the whole pipeline: metric labels as well as resource attributes. Now it's up to the exporter how they will treat them, like give the customers the option of which labels you want to select, and maybe the exporter should combine all of them and search for the fields which the customer actually wants to export as dimensions, right?
C
Yes, and I think one answer is just to configure that in the receiver, or in some sort of configuration file, which of those you'd like to receive. It's worth pointing out that there's another type of metadata or attribute available here, which is the distributed context, or the correlations, or the baggage, whichever terminology we have. And this issue has been presented in tracing as well: should you record those dynamic distributed-context attributes or correlations on every span?
C
I don't think we've answered that one yet, although the question has been asked a bunch of times. So the same could be asked for metrics, and I think the answer is probably no. We know that those are going to be expensive and we expect them to be high cardinality, so by default we probably don't want to record distributed correlations or baggage with metric events.
C
Yeah, I think that's right. So the clients will send all their resources, and then potentially you'll configure a metrics export pipeline to use those resources as labels. But you might also then want to do some relabeling to cut down on the numbers, yeah. You know, we might end up talking about all these semantic conventions PRs in a few minutes, and I looked at them all just earlier today and I find them all to be great.
C
I think the decision is that you should keep your resources separate, and that your exporters should keep them that way, and then the receivers should make this decision on their own. A simple decision is to include them all. A simple decision is to include none of them. But probably the right answer is somewhere in the middle, and it's going to require more configuration and more learning, I think.
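(Illustrative sketch, not from the meeting: a "somewhere in the middle" answer could look like an exporter setting that selects which resource attributes become metric dimensions. The exporter name and field names below are hypothetical and do not correspond to a real collector configuration.)

```yaml
exporters:
  examplebackend:                       # hypothetical exporter name
    endpoint: https://example.invalid
    # Hypothetical option: only these resource attributes are flattened into
    # metric dimensions; everything else stays on the resource.
    resource_attributes_as_dimensions:
      - service.name
      - cloud.region
```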
D
Okay, so, but it is common, like, the exporter will get the dimensions from the resources as well as, like, the metric labels. This is the common practice and the expected scenario, right? Yeah, thanks.
C
Thanks. Cool. I'm thankful also that you went through some of the other receivers, or some of the other exporters, and did a little audit of that. It feels like we ought to have a higher-level policy stated about this somewhere in our specifications.
C
I sort of filled in a list of the active PRs that we could talk about for this agenda, but I would encourage anybody else here to fill in agenda items that you would like to discuss. I didn't get around to adding issues about collector receivers and exporters for metrics, but I feel like that is probably the hottest topic we should be talking about, especially now that there's been so much good progress on semantic conventions.
C
I wrote this question: what does GA for metrics mean? And I think it's important for us to answer that, because tracing is approaching a GA very soon and we have a lot of unanswered questions in metrics. I'm not expecting metrics to GA, but we should be thinking about what it means. One of the things that it means, sort of in the short term, is that we're deferring to tracing for some of our timeline problems. And I think what it means is that we're going to end up with the term "attribute" and we're going to throw away the term "label". I've personally stated I don't think it's my favorite term, but I think it's the right outcome for OpenTelemetry.
C
So if you feel strongly about this, you should be in this issue here, making noise about it. I've started to accept it; I don't feel strongly enough to decide it. So that's where we are on that. And so what GA for metrics might mean is changing terminology, just outside of tracing, but it also means finishing up some set of issues, and we should have a roadmap and a sort of milestone for that defined, which we don't have right now.
C
Oh, I forgot about views as well.
C
No one's really working on views. To me, what views are is a configurable SDK. I think, John, you posted a really good issue on it; I forget the number, it's been a while, it's probably buried at this point. I think we should define views to be a minimal set.
C
Because if we're going to drop labels, it should be an option in the client, and that's the typical thing that you do with views. So there is some minimal subset there. I think it's okay to have that be a blocker for GA, but I think it probably means that we're going to be not GA for, like, many months. Many, many months here.
E
Yeah, that's still the current plan: tracing will reach RC, and then we want metrics to reach RC, and then once we're satisfied with the quality of both, we'll take them both to GA together.
C
I see. So maybe views is gonna really wrench up our plans to get to GA. Or sorry, I haven't been using the term RC, but okay, RC. If views is a requirement, we haven't even prototyped that in any existing SDK, like what we really mean. We've got some prototyping of views, but we don't have a spec, or like a standard subset defined, as far as I know.
C
Yeah, and I've seen a lot of... yeah, that's the reason why you said it the last time you said it. I feel like there's an intermediate point in the evolution of these SDKs, which is where you have all the machinery, like, you have a class that implements the necessary functionality to do this and that type of configuration.
C
But this thing that I think of as views, like, takes that many steps further and gives you the ability to, you know, write a YAML file that, like, magically sets up everything we've talked about. Whereas today, setting up everything we've talked about requires knowing exactly how to do it with one of these existing pieces of machinery. So, like, in the Go SDK I have a reducing processor: if I'm going to reduce label dimensions, I need to install the reducing processor.
C
I have a basic processor that can be configured for cumulative or delta. Like, I have all these components and I can string them together for a dedicated purpose. But as for having a configurable SDK, where I can write YAML sections to define which aggregations I get and which labels I get, which is what people want, that's steps ahead of us. So it's gonna definitely impede an RC or a GA, if that's the requirement, like for trace, if that's what we've decided.
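(Illustrative sketch, not from the meeting: the declarative views configuration being described might look something like the YAML below. No such file format existed in any OpenTelemetry SDK at the time; every key and instrument name here is invented for illustration.)

```yaml
# Hypothetical SDK "views" configuration, not a real format.
views:
  - instrument: http.server.duration          # select an instrument by name
    aggregation: histogram                     # choose the aggregation
    labels: [http.method, http.status_code]    # keep only these labels
  - instrument: messages.sent
    aggregation: sum
    labels: []                                 # drop all labels, one series
```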
E
I do think that it's a pretty important part of the system. I've been walking through it, especially in the Python SDK; some of the people internal here at New Relic have started looking at it, and it's, like, the one place that we do have the implementation of a prototype views setup, and so it's kind of interesting to watch them go through that process.
E
How that actually, like, translates from making a measurement, through an accumulation, down through the processing and building out an aggregation, those are really well defined.
E
I think platforms, that kind of thing, but like, we've kind of just also built in a lot of really good sane defaults, and maybe you can even change some of those defaults, and what Josh is talking about, like, changing out all these different base aggregations. But at the end of the day, the user's going to come along and say, like, well, that's cool, but I don't want any of these labels, I want to collapse these metrics into the same thing, I want to change my aggregation for this.
C
And I've said, yeah, more configuration. But I've read some OpenCensus-instrumented code and it's very much API-type configuration: you say, for this instrument I want this particular aggregation. And I believe that users, the actual authors of code, are less in the position to know exactly what should be aggregated.
E
So maybe, I think there's some really great work to be done here. It seems a little daunting setting up this side of it, but I think for that reason, maybe, Morgan, there needs to be some more hard conversations around, like, this sort of GA planning and tying things together with metrics, and what is the minimal viable thing?
E
For traces, that's why we're at an RC. And I think, like, starting literally next Monday is when we turn that focus to metrics and begin to have those hard conversations. To be honest, I've been attending the metrics SIG meetings, but I'm usually sort of doing something in the background, so personally I don't always have the greatest context for metrics, but that'll narrow.
C
We could narrow the problem to just what's needed for OpenCensus. I mean, it may be that those building blocks and the mechanisms I've talked about are just not, you know... Like, is it possible to configure an SDK based on the OpenCensus API? And the answer might be yes. I haven't studied it, though. That's part of the problem here: it's just a big topic, you know, and all the additional questions that come up.
C
When you start talking about varying your aggregation... I would hesitate to say that we should, actually. We have other issues that have not been addressed that are probably standing in the way of RC or GA, which are, like, we talked about raw data points, and we've talked about, you know, some of the next issue items on the agenda, like this: we've still got this open question about histograms, which is, you know, maybe making progress, but it's not moving very fast.
E
Yeah, yeah, I know. I'm excited to. And Andrew, yes, Andrew and we've made really good progress on tracing, so I'm excited to put the same focus on metrics and then get everything looped in together.
C
Essentially, it looks a lot like the Circonus histogram, and I was remarking, before many of you had joined the meeting here, that I made a mathematical error in my analysis last night. But the general point stands, which is that I'm intrigued somewhat by these base-10 histograms that are much like Prometheus and much like Circonus. And that's all I've got: I'm interested in that. It's in recognition that it's not...
C
It doesn't have the sort of strongest performance as far as relative error, but it does have the sort of most compatibility for Prometheus users, which is the prime goal here, at least for me. So, anyway, I've updated this issue, and I think this is the one that is probably the greatest challenge for us to get to RC, ignoring that views question, which is hopefully something we can work our way around before GA.
H
I think the conclusion is pretty positive, in the sense that I clarified with them that the DDSketch result from, say, a DDSketch library in the client could be represented by a standard exponential bucket histogram.
H
The extended exponential histogram. And also in the sense that the front-end DDSketch-produced histogram, represented in the standard exponential histogram and shipped to the back end, the back end will be able to restore the original DDSketch that was in the front end. And also we clarified that, to make this easier, it would be nice if in the protocol we have a single number, like an enum kind of number, to declare that this histogram was produced by DDSketch or something else.
H
We can support DDSketch, the Circonus histogram, and HDR, and possibly whatever future things we may have, and we have an auxiliary single number to declare who produced this, as a hint to the consumer and the back end. When the back end receives it and knows who the producer was, the back end can properly, correctly restore it, with whatever special processing it may have, to minimize the error.
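(Editor's restatement of the compatibility claim, added for clarity and not part of the meeting: DDSketch with relative-accuracy parameter $\alpha$ places a value $v > 0$ in bucket $i = \lceil \log_{\gamma} v \rceil$, where $\gamma = (1+\alpha)/(1-\alpha)$, so bucket $i$ covers $(\gamma^{i-1}, \gamma^{i}]$. A standard exponential-bucket histogram with base $b$ uses buckets of the same form $(b^{i-1}, b^{i}]$, so the sketch transfers without loss whenever $b = \gamma$, or more generally when $\gamma$ is an integer power of $b$; the single "producer" hint then tells the back end which sketch to reconstruct.)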
C
Yeah, I understand. Would anyone else like to speak on that topic?
G
From the Datadog perspective, like Yuki said, we just discussed it, and as long as the DDSketch will work, we're gonna forge ahead. We have, like, four teams and a handful of engineers on the implementation itself.
C
Fun. And is that based on an exponential bucketing representation, or is that based on this more customized one that was discussed back in the issue?
G
919, I guess. Like, so it will be an implementation of DDSketch literally, but my understanding is that as long as there's awareness, through this enum, of which type it will be, it will be exported correctly to a back end that expects it.
C
I have to admit I got confused when I read through this link here, and I haven't been able to put the brain cycles into trying to understand exactly what the ideal case is versus what the okay case is, and what's lost by choosing these different bucketing representations. I'm trying to understand what DDSketch is, if there are so many options. Really, like, what's the core? And I felt like you have...
G
I mean, Charles is certainly happy to comment in the PR, but if you'd rather just talk about this, I can set up a meeting with him and you after this.
C
I don't personally feel like that's necessary for me; as a group, maybe. But for me, what would be good to see next would be, like, a draft protocol document with that, you know, and comments on exactly how you expect these values, those hints, to help a translation back into the native representation be done more correctly. I think I can imagine it, but I'd love to see it.
H
Done, exactly. So I'm here, I'm volunteering to just revise the protocol to...
C
Yeah, so it sounds like your goal is to introduce an option, aside from explicit, to be exponential with linear sub-buckets, and I'm not sure whether that extra complexity is valuable to us or not at this point. I recognize that we could support all the options with this new field, but I'm not sure that we really want to. I'm not... That's the question: is exponential good enough?
H
Well, the benefit is that, basically, the log-linear or the hybrid bucket is a large family of things, the HDR histogram being the most popular. Yeah, you just pointed out the Prometheus one is kind of... If you have enough linear buckets, you can represent the Prometheus one in this format too. So I made an error, but I did try to point that out.
C
I'm going to correct my error after this meeting, yeah. So yes, I think it'd be great if you're volunteering to sort of draft that up. The only thing that crossed my mind, in addition to what was discussed just now, is that there's still a call for the exact values of max and min to be included anyway. That's a sort of minor point, but it would, I think, resolve one of the major outstanding issues for us as a group and get us to the next protocol version.
C
Yeah, right. So in the proto repo, yes, make a draft to the 0.5, where currently we have only explicit boundaries. Yes, please. Okay.
C
Excellent, cool. Well, that's progress, and I'll try to understand more about how DDSketch and exponential work together, because I felt like there was something about DDSketch that was different from exponential, and that's what I'm not understanding right now. But I won't say any more; we already talked. I skipped ahead to talk about label versus attribute: if you feel strongly, go post on the issue.
C
I see. So yeah, that's kind of the core of this issue: on the semantic level, it seems like no big deal to have these sort of structured values be specified, especially when they have a natural scale or conversion to string, but Bogdan keeps getting caught up, when we talk about it, on the potential implications, either for performance or, like, correctness, I guess, if you have these map-valued or these list-valued attributes. But were you really referring to the terminology, like whether it's the word "value" or...
A
Yeah, this was exactly what you were talking about. Okay, yeah. My personal opinion is that I think we can make it work, but Bogdan obviously has a thousand years more experience than I do working in a metrics system.
C
Right, and I feel like I'm trying to appease him with very clear wording about how I don't want to change the protocol, and I think it's okay to say that, for performance reasons, metric labels have to be stringy, but that it's just not a problem to have, like, the API sort of allow this and just treat it as unspecified behavior, or something like that, when the user tries to annotate a metric with a list value or something like that.
C
It's still meaningful; it's just ridiculously expensive if you think about how you're gonna implement it. So we can just say that, to avoid the ridiculous expense, we're not going to try to compute unique label sets for map-valued attributes or for list-valued attributes; we're just going to call those unrepresentable, and you'll get... I think what we should do is then specify what the undefined behavior should be. Currently, I have no idea what happens in my own implementation.
C
If you try to do this, because it is possible for that value to appear in the metrics API, it could crash, it could... and I just don't know. Anyway, the point is, I don't really know; we should say what happens, and maybe that's what we have here anyway.
C
I'm gonna go say more about this issue before Tuesday, and hopefully you all will read it or say more yourself. There are four open PRs about semantic conventions; I wanted to move on to this. The first one is almost ready to go. Aaron, do you have any more to say on this, if you're on the call?
C
All right, so this is going to merge immediately; it's maybe ready to go right now. The other three I've approved myself, although I will say I'm not an expert in any of these areas, so I didn't give the most scrutiny to the exact details of, you know, function-as-a-service metrics, but I read through these documents and they all look good. So I've got links to those. Would Justin or Chris or Graham or any one of you like to speak about these? Justin?
F
I'm not sure what to say about all of them at the same time. It'd be great to get them...
C
Reviewed, yeah.
F
There was an earlier decision that we made, around when we were doing the HTTP spec, that we would try to kind of overload a single instrument, the value recorder instrument, so that we could get duration and count, and that we could facet to get error counts out of it. Also, some of these semantic conventions have proposed some separate counters that I think we could roll together into a single value recorder.
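(Illustrative sketch, not from the meeting: a minimal stand-in showing why a separate "messages sent" counter can be superfluous next to a duration value recorder. The aggregation type below is defined inline for the example; it is not the OpenTelemetry SDK's implementation.)

```go
package main

import "fmt"

// minMaxSumCount is a stand-in for the default "value recorder" aggregation.
type minMaxSumCount struct {
	min, max, sum float64
	count         int
}

func (a *minMaxSumCount) Record(v float64) {
	if a.count == 0 || v < a.min {
		a.min = v
	}
	if a.count == 0 || v > a.max {
		a.max = v
	}
	a.sum += v
	a.count++
}

func main() {
	// Record one send duration per message sent: the count component of the
	// aggregation already equals "messages sent", so a dedicated counter
	// instrument adds nothing.
	var sendDuration minMaxSumCount
	for _, seconds := range []float64{0.012, 0.034, 0.009} {
		sendDuration.Record(seconds)
	}
	fmt.Printf("messages sent = %d, total send time = %.3fs\n",
		sendDuration.count, sendDuration.sum)
}
```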
C
To me, this raises the question about views that John raised at the beginning, which is: semantically, we can define what these mean without saying whether they have to be done as instruments or whether they can be done as views. So potentially we should... I think I'm in support of writing all the semantic conventions.
C
Now, we can even go back to the HTTP conventions and add those conventional names for these sort of somewhat redundant, you know, things, and just go with it. Later in life, when we have views firmly established, maybe at that point you can start dropping out the literal instruments and start generating or synthesizing those, even downstream. But for now it would be a lot more straightforward to just specify them.
C
Well, just to take this example here: I create a specific counter to track the messages sent, but there's also a duration measurement which corresponds one-to-one with messages sent. So I think what I'm trying to say is: we can have a semantic convention for the name of the metric that is messages sent, as well as the name of the metric which is duration of send, and it's okay for us to have these implemented as two separate instruments now, and then later on...
F
Maybe... I wonder if I have been thinking about these semantic conventions with kind of the wrong mental model. I've been thinking that these are really sort of instructions to instrumentation writers about what data they need to be providing when they're writing instrumentation.
C
I think you're right. The scenario that I described requires some degree of coordination between, like, releases: okay, so first we release a views functionality, and then we revise the plugins to take advantage of that, instead of having separate instruments, maybe in a later release.
C
Another way to look at the mental-model question that you just posed is that these could be seen as documents for the vendors, or for the systems that are going to use this data. It doesn't matter to me whether there are two instruments generating that name or one; it matters to me that I'm going to use a count.
A
Is what we're writing here kind of what Justin had in mind, which is: this is how instrumenters should do stuff, and then, if you're a vendor with an exporter, here's some ideas about how you could use a view, or not, if you're okay with whatever the default aggregations are, but here's how you could use a view to decompose the data into what your back end expects, and the meaning of the data that might come out of it?
C
Yeah, I see how there's a contradiction here, John; it's hard to express. Maybe we could... well, I think we need to keep working on this. I was going to ask Aaron, since we've got this other parallel PR that's about to merge: we did a similar thing with this limit semantic convention, where this is something that you can derive as a sum of all the usages; in some cases it's true that the sum of all your usages equals the limit.
I
So I wrote all of them except for limit, for what it's worth, so it might just be because I added that later and I didn't add it. But I think when it lists an instrument, it's definitely for the instrumentation writer; they know what they're doing. And then, if it's just like a general guideline, I suppose that's more for, like, a back end.
A
Okay, fair enough. Even better than that: get some instrumentation folks to actually use it and see if they know what they're supposed to do.
I
Sorry, yeah, yeah, no worries. I think also what Justin was saying with regard to this duration one is that if it's a value recorder, then you'll get, well, currently, min, max, sum, count, so you'll get the count as part of that metric explicitly.
C
Right, well, that's the premise that started this long ago with HTTP metrics, and I was part of that decision, and I think it's questionable, it's still questionable, as to whether that's the right way to go, which is exactly why this question appears here about superfluous counters. We can call this the superfluous counters issue. I don't know what we should do about it.
J
To give my two thoughts about this: it seems like we built these metrics so we have the ability to, you know, derive this information, and I can see the arguments for why we wouldn't, but I would say, why wouldn't we use that if we were able to derive it? And I would say that it would be more on the consumer to, you know, create the two different views if necessary. But if the information is there, I don't know why we wouldn't derive that instead of creating two separate...
C
Aggregates. Well, I think that the benefit of having these as semantic conventions is that we're going to export this data into these other systems that don't have these notions, so that we at least know how to represent these semantic conventions when we're forced to. That's how I'm thinking about it. So once we do have a views mechanism, you can define these superfluous counters as derivatives, and then you don't ever have to implement them.
F
So I think the way that I'm leaning, after some of the thoughts that have been shared so far, is that we should not make these superfluous counters. We should make use of the extra features of the min/max/sum/count aggregation, since we have them, and we should put some information, maybe in the main README of our semantic conventions for metrics.
A
I would say my only last two cents, because I agree with everything that we're saying, is that one of the kind of primary guiding lights of writing instrumentation is to do the least you can. And so if we can minimize the number of instruments we create, that is going to minimize the number of things that have to be added onto the user's application without them necessarily knowing about it.
C
Cool, I think that was the end of the agenda. Wait, Andrew has placed an item about tomorrow's spec scrub. Would you like to talk, Andrew?
K
Yeah, just a quick FYI: there are, like, I don't know, maybe 30 issues that have the metrics spec label on them. We triaged them a long, long time ago, but some of them are not fitting so much with our meanings of what P1, P2, P3 mean for everything else in the spec, so we're just going to scrub them again tomorrow. So I just need representation from somebody who can help with that re-evaluation of those issues. I assume it should go quick; it's a half-hour meeting.
C
All right, I'll be there. Thank you, Andrew. Is there anybody else that would like to speak or attach an agenda item before we go? Otherwise, I think we've reached the end.
C
Going, going... Thank you all. I think the biggest issue is still this histogram one, and I appreciate you guys volunteering, and we'll keep talking about it. Thanks, everybody.