From YouTube: 2020-05-21 meeting
Hello, everyone. I'll probably wait a couple minutes more.
I am projecting the set of issues that are still open, tagged "metrics", in the specification repo, and I put on the agenda these top three ones here, which I think are all the new topics that we've been discussing. And then I think we've been a little bit bikeshedding in the last few meetings about metric naming, but it's one of the major outstanding topics that's kind of holding us back at this point, so I expect we'll spend a lot of time talking about that. And Ted, who's on the line here, has submitted an OTEP which some of you have already commented on, so I expect that'll be the bulk of the conversation. I was hoping that we could go in reverse order, though, talking about these issues, numbers 607 and 608.
So, as you know, we have six instruments defined in the current spec, and we've debated many, many alternatives to that number. The proposal that was in OTEP 88 listed up to 10, at least. This proposal here is for an absolute value recorder, which would be the same as a value recorder but take non-negative inputs only, and the argument being made is that this helps give us an instrument that can be used to define a rate from its sum, even though it's meant to be used for what we're calling instantaneous values at the moment.
I hope that was a fair summary.

Yes, I think that was a good summary. You had a couple of questions; I tried to answer them.

Yeah, the major one was: why would you not use a counter, because, you know, it's an absolute thing?

Last time I've been... for four days... Leighton, I think you are not on mute and you are talking to someone else.

It does look that way. It's not that way.

Okay, continue, please.

Okay, so that was probably the most important question.
The argument that I had was: if the default, natural way is actually to see a distribution of that, or it's an instantaneous value, I would not abuse the API and use a counter just because it gives me that opportunity. So that was my major argument, and I think that was one of the design principles, yeah.

Sorry, yeah. So anyway, I would like to hear others' opinions. For the moment I'm not a hundred percent convinced the async version of it is useful yet... I don't know; but for the synchronous version I already have something in mind. I think you, Josh, mentioned something that you saw about that thing. Personally, I did not see that example; you mentioned something you saw in the runtime metrics, or something like that, during the last meeting.
Well, I have an issue, number 607, to discuss next; I don't think that was related to this one here. And I wanted to state... I think I put that out as sort of a devil's advocate question, about counter or up-down counter. There's something strongly discouraging me, I think, from wanting to add instruments, but I am seeing some value here:

looking at the API and saying, how do I get a gauge out? That's what they want, and who am I to argue that they should not get what we call a gauge, which means a single number which is the last value that was used or recorded. And one of the benefits I see of your proposal is that it gives us a chance to make two different defaults, because we have two different instruments: we have value recorder, we have absolute value recorder.
If that's the name... I think there probably is a better name, but let's suppose there are two instruments. One of them can default to histogram and one of them can default to gauge, and I think that's pretty... well, I think users will understand that. I would choose absolute value recorder as a histogram: it can become a rate naturally. And then the sort of unrestricted value recorder could become a gauge, and it can be negative if it needs to be. That's my thought.
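[Editor's note: a minimal sketch of the pairing being proposed here. "AbsoluteValueRecorder" was only a proposal name; neither the instrument names nor the defaults were final.]

```go
package main

import "fmt"

// Hypothetical defaults under the proposal being discussed; neither the
// instrument names nor this mapping were adopted as-is.
type instrument struct{ name, defaultAggregation string }

func main() {
	proposals := []instrument{
		{"ValueRecorder", "last value (gauge); inputs may be negative"},
		{"AbsoluteValueRecorder", "histogram; non-negative inputs, so its sum defines a rate"},
	}
	for _, p := range proposals {
		fmt.Printf("%-22s -> %s\n", p.name, p.defaultAggregation)
	}
}
```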
We can definitely do something like that. My proposal did not have that in mind; I was just looking at... I have a clear example: I know people who would like to see that as a sum, because they don't want to pay the cost of building a histogram, or exporting a histogram, out of this, and if it's positive-only, it will be great to give them the rate anyway.

I think, if that's a need... because usually people coming from Prometheus will look for a gauge. We're going to give them immediately the answer of up-down counter as the gauge when they will use plus and minus, like increment and decrement; but when they ask for "set", right now we can give them the up-down counter, or the up-down sum observer, but we don't have a... I mean, the value recorder is not exactly what they expect.
They would expect one value out of that, which is the last value, which we don't have right now. And remember, we had this discussion a couple of weeks ago when we met, and I was pointing out to you that probably the default aggregation for that may be better to be one value; but you had a good point that, in general, a min/max is more useful on the backend side.
Approvals and stuff, by the way: I just got all the approvals; I will send the PR for code owners tonight.

So, okay, so we move forward by more people giving us input. I think you and I have sort of all said what we need to say, and others should have an input. I think that, after getting some input on this issue...
The name "absolute value recorder" is a lot, a lot of typing that I don't like; but if anyone has an idea, let's put that there. I think we should move on here, since there's a large number of people.

607 was less... was much easier, just much simpler, I would say, as far as a change goes. There's nothing in the spec that says you can't do this presently, and I want to say that you must... that you can do it, and what the SDK must do.
You are observing GC stats, and the GC stats give you a slice of all the GC events that have happened since the last time you called ReadMemStats. And so the slice is individual events, which are individual timings, but you saw them in an observer, and there's no observer instrument that's suitable for that. And so the idea is: you just use a value recorder, for example, to record the duration of your garbage collection, and then, if you use that from within an observer instrument callback, it will be guaranteed that the same collection interval will cover that data, so that you'll get your synchronous observations and your asynchronous observations in the same report.
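[Editor's note: a rough Go sketch of the pattern just described, recording individual GC pause timings from inside an asynchronous callback so they land in the same collection interval as the callback's own observations. The recorder type is a hypothetical stand-in, not the real OpenTelemetry API, whose signatures have changed across versions.]

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// valueRecorder is a hypothetical stand-in for a synchronous instrument.
type valueRecorder struct{ name string }

func (v valueRecorder) Record(d time.Duration) { fmt.Printf("%s: %v\n", v.name, d) }

var (
	gcPause   = valueRecorder{name: "runtime.go.gc.pause"}
	lastNumGC uint32
)

// observeGC is meant to run inside an asynchronous observer callback, so the
// synchronous Record calls are covered by the same collection interval.
func observeGC() {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	// MemStats.PauseNs is a circular buffer; replay each pause recorded
	// since the previous callback as an individual timing event.
	for n := lastNumGC; n < ms.NumGC; n++ {
		gcPause.Record(time.Duration(ms.PauseNs[n%uint32(len(ms.PauseNs))]))
	}
	lastNumGC = ms.NumGC
}

func main() {
	runtime.GC() // force at least one collection so there is something to report
	observeGC()
}
```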
I've already merged a PR that does this in Go, and I have an intended use for it, but I'd like to put it in the spec.

Please go ahead. I have only one problem with this. I'm repeating one of the optimizations that I would like to achieve at one point, which is having the ability to export different metrics at different interval cycles, and this causes one of the problems with this, because I cannot know that that synchronous instrument is driven by a callback, right?

I actually addressed that in this comment here, because it is a corner case.
I would be happy to say that they are derived events, or derived metrics, essentially: that they are a side effect of the observer. There's no way for the SDK to know that you're going to do that from the observer, and I'm okay with that. Potentially you could have extra configuration of some sort, but I don't think it matters.
I would ask: how urgent is this? Do I have some time to think more about this issue?

It's not urgent. As it turned out in the Go SDK, there was really nothing preventing us from doing this; all I had to do was reorder two lines of code, because in the existing code it was collecting the synchronous instruments and then calling the asynchronous instruments. All I did was reverse that, and I get what I want.
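[Editor's note: a minimal sketch of the two-line reordering described here; the real Go SDK internals are more involved.]

```go
package main

import "fmt"

// sdk is a toy stand-in for the SDK's collection loop.
type sdk struct{}

func (s *sdk) runObserverCallbacks()   { fmt.Println("run async callbacks (may Record into sync instruments)") }
func (s *sdk) collectSyncInstruments() { fmt.Println("collect synchronous instruments") }

func (s *sdk) collect() {
	// Before the change, collectSyncInstruments ran first, so values
	// recorded from inside an observer callback missed the current
	// interval. Running the callbacks first fixes that.
	s.runObserverCallbacks()
	s.collectSyncInstruments()
}

func main() { (&sdk{}).collect() }
```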
I just thought it'd be worth saying in the spec.

But if we add this ability, when you say "collect" and I give you a couple of metric names, and I want only these to be collected, I think you'll have trouble, because I would put in the spec something like: we offer you the ability to have variable collection.

We've already specified that every callback has a one-to-one relationship with an observer instrument... or sorry, not one-to-one, one-to-many potentially, but there's one callback per instrument. The synchronous instruments don't have that property, so there's no way then to collect this. The synchronous thing is still a synchronous instrument, and there's no other way to do that with the other synchronous instruments; there's still no callback associated with this thing, it's just a side effect.
Let me... as I said... please... okay, we'll move on from that. Give me some time; I need to think.

It's good that you covered that corner case, and I hope everyone understands the reasoning that I have behind allowing that to happen.

Yes, the spec that just merged this week, in PR number 601, does say that the requirement for observer instruments is that every instrument has exactly one callback, so that the SDK could do the thing that you're asking for, yeah.
I don't think we've seen any actual SDKs do this yet; that's sort of one of the future steps I expect people to work on.

Okay, moving on, then. I think now... this discussion about metric naming has consumed the entire meeting for two meetings in a row now, so I expect it'll just happen again, and I don't know that we have anything else to discuss. So here it is: I found this issue; there are lots of conversations already, and a couple of documents.
Ted Pennings is on the call here and has written a draft document that some of us have reviewed, and so I thought... it was just finished this morning, so I don't know that enough people have been able to review it; but this is basically the current, sort of most important thing for the metrics SIG here: to sort this out.
Would anyone who is listening... actually, Ted, would you like to speak?

So, Josh, Josh: is it me, or does everyone hear George not very well, with the interruptions?

Yes, I heard a few interruptions from you too.

Okay, perfect. Josh, I think you may do something.

Okay, okay, I'll try, I'll try.

Thank you, thank you. Ted, go ahead.

Yeah, so we've talked about this a bit, and I took a first pass at codifying some of our discussions.
I think there's a challenge in the way that we articulate this: there's this desire to have general guidelines and general specifications, but it's really hard to do that without providing examples, and it's really easy to get pigeonholed into, like, example land, which I think this current document went a little bit too far into, in that regard.

But in general, the things that I'm trying to convey here are that we want to namespace all the metrics, and that we want to be very thoughtful about what those metric namespaces are, so they're very discoverable in the future, and that we want to give some thought to standard metric names and standard labels. And there are some examples here in the doc.
I think that some of these examples will persist into a more general convention document, and then I think that we'll have some kind of appendix or glossary or something, a separate document, that will describe some of the metrics that are important; but we don't want to bog down the overall generic specification with those details.

So, Ted, to give you a bit of context: we do have what we call semantic conventions, where we will define, let's say per namespace, the metrics that we consider canonical, and the canonical names for labels and stuff like that. So that's there.
The other feedback that I had was... I was very confused: I know labels have a key and a value, and I just see one value here, and I don't know, does that represent key-dot-value, Prometheus style? I don't know what you mean when you say, for the metric, for standard labels, something like "metric.free"; I mean, is that the key or the value of the label?
Yeah, that's a good point. I was confused about that, and that's something that I would like to see better. And I think, most likely, we should standardize on the names of the keys, to have a pattern, to say: hey, the name of the key should be relevant in the scope of the name of the metric. For example, I would not repeat "cpu": if I have a metric called cpu.usage, I will not repeat the key to be "cpu".

"cpu.state" doesn't make too much sense, because it's already scoped into that metric, cpu.usage, so everyone knows that the label refers to the CPU anyway. It's feedback; we can argue whether that may or may not be the case, but these kinds of rules I would like to see documented a bit, and the patterns that we use. And then a lot of these examples are very good, probably, yeah. So that's my feedback,
when I read this.

Yeah, I think that's all very applicable feedback; I don't think I really disagree with any of that, that kind of makes sense. I do wonder how exactly we would want to construct those label namespaces. You're right that I thought about them in a more one-dimensional way, so I don't know how we would want to structure that in terms of key-value. For example, with the memory labels that are visible right now, there's four: what would you imagine those would look like?

Those are not... okay: they are memory labels. I would say the key is called "state", because that's how Linux calls them, for example, and the values are "free", "resident", "shared", "private", or whatever it is. For example... you just arrived, yeah.

That was the feedback that I had on this document at first, which is that... well, yeah, labels should have values. So this, for example, "cpu core": I think Bogdan was suggesting that the label should just be named "core" and the value should be a number.
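[Editor's note: an illustrative sketch of the naming pattern being discussed, where the label key is scoped by the metric name rather than repeating it. These names are examples only, not the conventions that were eventually adopted.]

```go
package main

import "fmt"

// record is a stand-in for emitting a metric data point with labels.
func record(metric string, labels map[string]string, value float64) {
	fmt.Println(metric, labels, value)
}

func main() {
	// The key is "state", not "memory.state": the metric name already
	// provides the namespace.
	record("memory.usage", map[string]string{"state": "free"}, 512e6)
	// Likewise "state" and "core", not "cpu.state" and "cpu.core".
	record("cpu.usage", map[string]string{"state": "idle", "core": "3"}, 0.70)
}
```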
I didn't think we were going to talk about namespacing of labels in this conversation; in fact, I was hoping we would never talk about it. But can I bring up one thing from the memory metrics real quick, that we were just talking about? Because I think there's a pattern there as well, which is that the memory labels, whatever the exact syntax of the label name and value, that example included both "resident" and "private", and private is usually a subset of resident. And so that shows an example of a place where you can't just add those up and get something meaningful. And I think I remember, the last time we talked,

we talked about how all of the streams for a given metric should be meaningful if you aggregate away the label, right? Can you scroll down a little bit? Yeah.

I left a comment about that, because it wasn't clear to me how you would derive, like, memory utilization and so forth, based on all those measures.
I think, Quentin, you went deeper than us; we stopped at a higher level, but this is very good feedback.

Yeah, I mean, I just want to bring up, at the high level, that it's important that any recommendation says, you know, for label values, it should be meaningful to add up all of that.
My internet's terrible, so maybe this isn't working, but... the spec says something about how asynchronous instruments make a set, so that you have essentially a complete set, and you can use them to define ratios, Ted. So you could say: I'm going to have an asynchronous observer, it's going to fire every minute, and I know that I can only have one observation per label set, per instrument, per interval, and that means that I can compute a sum and divide it to compute a ratio.
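[Editor's note: a small sketch of the property being relied on here: an observer reports at most one value per label set per interval, so the values collected in one interval form a complete set that can be summed and divided into a ratio. The names and numbers are made up.]

```go
package main

import "fmt"

func main() {
	// One observation per label set for the same interval, e.g. from a
	// memory-usage observer firing once a minute.
	observations := map[string]float64{"used": 6e9, "free": 2e9}

	var total float64
	for _, v := range observations {
		total += v
	}
	fmt.Printf("used ratio: %.2f\n", observations["used"]/total) // 0.75
}
```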
So I just want to make sure that was stated: we should be able to compute ratios when you use an observer.

Correct. And I think we had a good explanation of when do we, as Quentin pointed out, use a label versus when do we encode something into the name of the metric: whenever an aggregation is meaningful across that label; for example, a sum across that label gives you the total, or gives you some interesting information,

then we choose to have a label. And I think this is yet another example where the general rules there at the top should be expanded a bit and pointed at this rule, so that if someone finds another example of this, they can just point and say: hey, you don't respect that rule.
So, in the case that Quentin named, it would be that you probably want a different metric name for "private", because it's not something you can sum together with the others.

I think it's probably "resident", actually, that would be the one that represents the combination of the other ones. I'm not super familiar with the way Linux does memory accounting, but I think resident is the one that is a superset of the others.

Yeah, I think that's another thing: we should probably start to understand these terminologies in Linux and stuff better, and make the right call, yeah. And probably invite some other people to help us, if we know people more specialized in these kinds of things.

That also seems very OS-specific, right? The types of memory that you can have are going to be very different on Windows than on Linux, so maybe that means it doesn't belong in the doc.
You disagree?

Let's see, let's see. I mean, we definitely need a memory metric that belongs to this; I mean, both OSs have memory. Now, if these labels are Linux-specific or Windows-specific, maybe that's true; maybe we should have a more Linux-specific metric and a more Windows-specific metric. But there should be a general one that covers both, like memory usage and memory total, for example; both are defined very clearly, like: how much memory am I using, and how much total memory do I have available?

Well, then I would immediately say: does that include virtual memory, or only physical? Like, you certainly care whether the memory is in a swap file or not, right? I think Windows certainly has swap files too. So for each of these metrics, some care should be put into figuring out what is a meaningful cross-platform set of labels.

Correct, yeah.
Definitely. And I think it's really important that, in a cross-platform way, people be able to build dashboards that have sort of the same results and same meaning, regardless of what the underlying measures are. I think that level of abstraction is really important for people, like, as a user who doesn't want to worry as much about how the values are measured and recorded.

Maybe the answer is that there should be both OS-specific and generic labels, because maybe you could tag... you know, there are three different streams that say type equals virtual, and then one of them says, you know, kernel equals slab for one of them, and kernel equals page cache for another one, etcetera, etcetera. Like, you could have both the OS-specific and the generic in the same metric. That may be an option.
I think this document tried more to propose some naming conventions, not some specific examples, and I would probably like to remove those metrics from this example doc, from this document. I think the purpose of this is to show people how we construct names in a conventional way, and then there is going to be another document in the specification where we define clearly: okay, for memory, these are the names; for CPU...

I was just trying to use memory as an example, to make a point. I think the general goal would be that we should figure out... you know, the thing that might be at the top of this document would be: metric names should be OS-agnostic. Some kind of rule like that, right? Standard metric names should be OS-agnostic, and remember, memory is an example of how there are OS-specific things that might creep into an otherwise generic metric.

Yeah, so, Quentin, that would be a good comment.
Please, please add it to this PR, if you don't mind, just to keep track of that.

Sure. Also, another short comment: the number there should be 108.
Okay, everybody: do we need to talk about namespaces for labels? I have a...

Nope.

Okay. If you use the same label on different instruments, is it up to the user's good judgment to make sure that it means the same thing, if they're ever going to put the same label onto a dashboard? We're never going to talk about this again, are we?

I think, personally, the way labels and metrics are structured, a label belongs to a metric, naturally, because it's part of that metric, so it's already scoped, by being part of that metric: their namespace is the metric name, literally, logically, whatever you want. It's not very well stated, but that label being part of the metric means it's scoped inside that metric.

So if I refer to something like, let's say, "state" in a CPU metric, I know that I refer to the CPU state, and if I have a "state" in a metric called memory, state means the memory state, not another CPU's. Yeah, I feel I don't need to do more namespacing than that; I think naturally they are namespaced by the metric.
No, I think what you are trying to achieve... that's why we're talking about it, though, right?

Yeah, but I think, in that example, what you will use is attributes from the resource, which are associated with the metrics, which are describing the process. So, for example, if you have a process... I don't know how you describe it, but you have information about the host, the PID and everything. And if your regex refers to these things, that's how you will do the regex to look for metrics for a specific process.

Well, at some point... I don't think you can say that resources answer this question completely, right? You're not going to have a separate resource for every disk, right? You might have several disk metrics that have /dev/sda as the value for a label, right? And so, like...
Maybe a process should be a resource, but I think many are going to say each CPU core should be its own resource; like, there's some limit to that.

No, no, I'm not saying that. I'm saying the process, in that example. But if there is another example... I don't know... it depends on the dashboard; it completely depends on how the dashboard is built. I mean, we clearly know the labels: by the nature of the protocol, the labels are namespaced to the metric, correct, because they belong to a metric, right?

I mean, whether or not they're notionally part of a metric, there can still be conventions that all of the metrics will use the same set of labels; like, this is a spec, not a protocol, yeah.
I think that becomes relevant in some runtime-type metrics; like, for GC, it would be useful if the GC metric always has, like, a count label to describe that, and then there might be other sorts of variants of that, with, like, runtime memory allocations and so forth.

What if we at least said: if you are going to have metrics that are sort of like siblings, like disk read and disk write, or something like that, and you have a device name label, for example, make it the same label across all your metrics, at a minimum, right?

That's definitely going to happen. That's definitely going to happen.
Yeah. I think we're just having hesitation about putting anything into the spec that says that we're going to somehow validate anything. And I don't want to, because you can clearly make the wrong decisions and use the same label on every metric, and it might not be meaningful to query metrics in that way. But I kind of don't want to go down a path of trying to talk about this and specify anything about this.

Yeah. I could imagine some rules; I don't think I want to talk about them.
Maybe this falls into the two-document discussion, where there's this primary document of guidelines and conventions that we're talking about, and then there might be another, future document, that I don't think we want to talk about right now, or even in the next couple weeks, which would be, like, a glossary of recommended metric names and labels.
I can see some ways that we can update the syntax here to make it more clear to everybody else, because many of us think of labels as key equals value, but not everybody in the world does. Like, Datadog treats a label as just a string, and there's a convention that a colon divides label keys and values, but it's not always the case. So in this example here, my understanding is that you meant "cpu core" equals 3, in this example, yeah?
So I think that's a little better. I think that created a bit of confusion, but we understand now what you meant, and that's all good.

Well, that conversation trailed off faster than I expected. I'm not sure that we have any agenda items now... a bit of the proto?

Oh yeah, there are lots of open questions about OTLP, and I guess that now is a perfect time for us to just take this time and talk about it. Would you like to open that discussion with a little background?
Yes, with only one concern: I have to leave in 10 minutes. But let's start. So there are a bunch of discussions, and the major issue... if you can present, Josh... you're presenting... if you can present the major issue that came from Tyler.

Yeah... the problem here, it reminds me... okay, so this is in the proto...
I couldn't hear, but what I was trying to say is: the major ones are the ones around monotonicity and temporality, and once we resolve those, I think the rest of the issues that Tyler mentioned there, like not having different ways to define buckets in histograms, or stuff like that, are no-brainers; it's just, like, laziness on our side to not define all of the possibilities. So, yeah. But I think that the major one is the temporality part, and that interacts with these metric kinds, and I think that's the most important one to be solved.

Yeah, I'm pretty confident that once we have that solved, the rest of the items are trivial compared with this one, yeah.
So, my perspective on that: it's this text I'm highlighting here. The current proto, which came out of OpenCensus, had this sort of enumeration of instrument types, or aggregation types, and, first of all, we don't have a gauge instrument, and it's not clear exactly how we would map these. So we're basically trying to figure out how to change OTLP to much more closely match the API specification.
And I see this as basically a terminology problem. We've used the terms cumulative and delta, and those are meaningful words, but I think that they're a little overloaded at this point, and they confuse people; they want to get rid of them. And there's also this term, distributional, or instantaneous, that we've started using, or at least I have. The core distinction we're trying to bring out is that, if a measurement applies to an interval in time, then it's just different than a measurement which is just a point measurement. So we look at "bytes read" as something that you might consider a cumulative, because it's something that, if you look over a period of time, you're going to sum up, and something like latency is not a number that you would add up in the same way.
So we have many words that can describe this distinction. Instantaneous, or distributional, refers to the kind of data you get out, whereas additive maybe defines these things that are summed over time; or cumulative, maybe; or another one is "interval". I just used that word, and I'm starting to like the word interval, just so you all know. So we need to have two words that distinguish the type of aggregation that you're getting, to tell you whether it's something that you can subtract to get something useful, or whether it's something that you can average to get something useful, is one way of thinking about it.
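[Editor's note: a worked toy example of the distinction being drawn, using the transcript's own examples of bytes read and temperature.]

```go
package main

import "fmt"

func main() {
	// Cumulative "bytes read" at two report times: subtracting adjacent
	// points yields a meaningful per-interval delta (and hence a rate).
	c1, c2 := 1000.0, 1600.0
	fmt.Println("bytes read this interval:", c2-c1) // 600

	// Point-in-time temperature at the same two times: subtraction is
	// meaningless, but averaging is useful.
	t1, t2 := 61.0, 65.0
	fmt.Println("average temperature:", (t1+t2)/2) // 63
}
```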
A latency distribution is not instantaneous, because it's built over time... is the decision... this is where this problem comes from: everything we're doing is built over time, and there's this other dimension of the debate, which has to do with whether you're going to report one interval, which is just the most recent measurements, or whether you're going to report all the intervals since the beginning of time.
I would point out to you that there is an exception with the time part, which is: there are some measurements that are not associated with time. For example, the classic example that I have in my mind is CPU temperature. So, CPU temperature: you aggregate it... so, even though you produce a distribution of the values... so there are two ways of producing this distribution. If you report this every second, for example, and then you produce a distribution of the values over a minute, then yes, this is associated with an interval.
But if I'm using the observer pattern, I'm reporting the temperatures and I'm just building a distribution of the values that are right now, at this moment, this snapshot in time. These are not associated with an interval, and I think it's important to distinguish between these two cases, and I think that's probably where some of the confusion is coming from. So I think there is this third option, besides the two things, I mean: am I reporting values from the previous reporting to this one, or from the start of the process?
Instantaneous... which I don't think I convinced you of, that that's the case, or not, but...

That's something... I don't want to say that I'm convinced either way right now; it's just, I feel like I have to think about this more. I haven't figured it out. When I was trying to work this out earlier... I've had a few conversations with a few of you about this already... I ended up writing down a grid with eight boxes in it, because we have two kinds of instrument,
we have two types of exporter, and... well, I'll get my paper in front of me now, because there's one other dimension. And I need to understand how we're using these words in each one of those cases. I think there's an exhaustive list that we can generate, that probably has eight things in it, and once I've gone through that and thought about it more, I might be ready to agree with you. Clearly there's something special about aggregating over the time dimension; it's different from the spatial dimensions.
That's the problem here, and the goal is to indicate what types of aggregations are meaningful for the backend. And you don't want to start subtracting numbers that don't make sense, and you don't want to start averaging numbers that don't make sense, because those are possible risks right now: you don't want to average numbers that were summed, and you don't want to subtract numbers that weren't added, I guess.

But I don't find the argument convincing that, once you combine data, it becomes somehow a different type of data. I feel like there's a range of time that's part of the output that tells you this; so, like, you know the range of time that's covered by your measurement. And I would prefer to think that, when you have an observer instrument that fires one value, that's the one value that you've fired, that you've measured, at the end of the interval; it is a point that applies to the interval.
I
don't
particularly
care
that
it
happened
to
be
one
second
from
the
end
of
the
window.
I
that's.
I
guess
I
don't
particularly
like
that
distinction,
so
so
the
only
problem
talking
about
this
is
that
this
instantaneous
histogram
versus
cumulative
histogram.
So
so
the
thing
is,
the
thing
is:
if,
if
part
of
your
aggregation,
there
is
something
that
naturally
grows,
like
account
number
of
observations.
Okay, you have the number of observations, and that increases over time; usually that cannot go down, from the perspective that you have more observations, correct? And there are two cases here. If you are reporting something like... you have a synchronous instrument and I'm giving you latencies: the number of observations will always increase, even if you reset the thing and you report deltas, or one-cycle things, and it's natural that, when you do sums of these, you get a valid result.
Okay, I had five observations in the previous cycle; I have three now; in total, in this larger time window, I have eight observations, correct? And it's meaningful; it makes sense to do that. But now, if you have a distribution of, for example, the CPU temperature, and the label that you have... let's say you don't have any labels. So what you do is you measure eight points, which...

Of course: you have eight cores, and you measure the temperature of every core, and you build a distribution out of these eight numbers, and you always report these eight numbers. Actually, now it's a bit unclear if the sum of the number of observations makes a lot of sense. Does it make sense, what I'm saying? Because it's not like I had eight plus eight observations; actually, it doesn't make any sense to combine these two histograms that easily. I just have to do an average.
As you said, I have to apply the average at each point and then see the whole result, versus: I can combine them and calculate the average. So in the first case, if I sum the histograms, I can calculate the average of the sum, and it's fine; in the second case, I have to calculate the average of these two and then do an interpolation between them, kind of.
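[Editor's note: a toy illustration of the two cases just contrasted. Delta summaries from successive intervals merge by addition (five observations plus three gives eight), while snapshot distributions such as per-core temperatures do not; for those, only averaging the points is meaningful.]

```go
package main

import "fmt"

// summary holds the count and sum of a set of observations.
type summary struct {
	count int
	sum   float64
}

// merge is valid for delta summaries: counts and sums accumulate.
func merge(a, b summary) summary { return summary{a.count + b.count, a.sum + b.sum} }

func main() {
	prev := summary{count: 5, sum: 50} // previous cycle
	cur := summary{count: 3, sum: 36}  // current cycle
	m := merge(prev, cur)
	fmt.Println("observations over the window:", m.count)           // 8
	fmt.Println("average over the window:", m.sum/float64(m.count)) // 10.75

	// Per-core temperature snapshots reported every cycle are different:
	// adding the two counts (8 + 8) does not mean sixteen observations,
	// so merging by addition is not meaningful there.
}
```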
Well, my audio broke up and I didn't really hear all of Bogdan there, so sorry, I couldn't answer him. As I was saying, I think this issue is really tricky, and I haven't thought it through myself enough, so I don't think I should speak anymore. Would anyone else like to ask questions or comment on that discussion?
I'd say that this issue is slowing us down. The only people that have been discussing this are myself, Tyler and Bogdan, and it's probably the most important thing right now. So I'd say the three of us are all thinking about this, and I'm confident that a week from now we'll have a better time trying to resolve this, if we haven't done something sooner than that. So at least I think I'll have thought it through by then, yeah.
I think that's probably a good way to approach the situation. The temporality, I think, and the monotonicity are the big blockers right now for getting the OTLP... and just to put perspective on it: OTLP is the protocol that's going to be transporting data from the exporter to the collector, and it's also used in many other situations, so it's kind of a blocker in itself on some development, especially on, like, the views API, as to what can be supported and what needs to be supported.
So getting this nailed down is kind of a top priority for us right now, I think, if you have any thoughts on that. The two big issues that Josh and I are kind of talking about: the temporality, that's number one; and then, as a secondary thing, there's a monotonicity question, and I think that one's a little bit easier.
It's just a question... so I think, once you get the temporality, you need to include a monotonicity question. Mainly, the question being stated is: is the data that's being transported in OTLP monotonic? Because, if it is, the backend may be able to do optimizations, or understand that it can calculate a rate on it without any sort of artifacts due to jumps, or understanding jumps as artifacts.
So these are things, I think, that we understood as possibly being needed. There was a question over whether we want to call it monotonic or we want to call it absolute input to the instruments, and I think we've settled on monotonic, because OTLP will be describing values that come after an aggregator, not the input to instruments.

So just keep that in mind when you are reviewing these kinds of things. Outside of that, there's a whole host of other issues in the linked doc that we were talking about; some of them, especially towards the bottom of the ranked remaining issues, are not on the table, specifically like enum types. I think that was kind of a one-off thing that Bogdan wanted to talk about.

Sorry, I said "the linked issue"; it's the linked doc from our notes.

Yeah, no big deal.
The other thing was: there are some things that can be done asynchronously, mostly the exemplars and histogram buckets; we could probably start adding support for those asynchronously. And then I think the rest are kind of more conceptual questions that are not going to be as blocking. Well, there is a question about the min/max and count; there's still no support directly for that, but I think that's more of a contentious issue than it is an actual practical issue, so probably not worth wasting too much time on that, right?
I do think so, yeah. If you have a chance, especially on number one of those issues, it's definitely something we can get some more eyes on, and that would be helpful.

So, just to summarize the first-class support for min/max: right now we have this notion of a summary. It's like the legacy of many, many metric systems, where you say you can report different quantiles, and it seems a little unusual to say I'm going to report my min and my max by reporting quantile equals zero and quantile equals one. And maybe we should be considering a type in this protocol for each concrete aggregator.

The histogram and exemplar bucket thing: this is one where I actually... well, I wrote the issue; I don't know that we should. This is like getting ahead of ourselves. I don't think that the way these exemplars are done is actually very useful to many people.
What I see happening is what I did 12 years ago, and, like, I don't think you ever want just one example; you can't get any statistics out of one example. That's what I'm thinking of when I say this: I'd like to be able to have many examples, and somehow have a way to use those examples to generate statistics in my backend. So... interesting. We can talk about it. Go ahead, Chris.

Yeah, so we have a use case for exemplars.
This was originally in OpenCensus, for Stackdriver. I don't know how much other backends use this, but at least in Stackdriver we want to be able to link from the last span that generated each metric, and in the case of histograms this means per histogram bucket. But if it's just something like a straight value, we still want to be able to link from the last recorded value to the actual span.
We actually have a pending intern project now, for somebody to add the exemplar spec here, so hopefully in the next week or two we'll have an OTEP that people can talk about.

Great, yeah, I'd be interested in seeing... well, one of the applications that I've always had in mind is as a way to attack cardinality. So you've got some metrics observations, or some recorded values, and you're going to do a reduction of dimensionality before you export, so you're pre-aggregating in the process and you're dropping a bunch of labels.
I'd like to be able to have many examples be included in the output, from which I can estimate those lost labels using statistics. But it means that I need to have more than one example per... whatever it is I'm trying to estimate, yeah. So I don't know how that fits in the protocol; I haven't really thought it through. I just know what's there is not good enough for that, gosh.
It almost sounds like a different problem; like, if you want to record, say... maybe you do something like an exact aggregation, yeah. Because if you're trying to reconstruct statistics that don't exist in the aggregation, then I don't know, like, a way in general to solve that with exemplars. I feel like maybe the better solution is just to use a different kind of aggregation.
Yeah, that could be. I will retract my statement, then, and look forward to seeing this work.

Oh yeah, so that's worth discussing briefly: Google is sending us an army of interns, and there's going to be a lot of emphasis on metrics.
I've been in touch a little bit with Nick Taylor, who's going to be on the Google end, managing a little bit of that. I haven't heard much more at this point, but if anyone has info... we're looking forward to this.

I have info, yeah. So interns are starting to work, mostly on tracing projects.
Some will be working on instrumentations that generate metrics, because I don't think there's anything in the metrics spec that's blocking them on doing that. We are blocked on doing any metrics exporter implementations, and so I have some FTEs at Google who work on metrics, or who work on other parts of OpenTelemetry, like Sergey, for example, who have said they're willing to help out with reviews, or anything they can to speed along the metrics spec. And so, Josh, just because I know you brought this up on Monday in the maintainers meeting:
how can we best use those people to speed things up? Is it just reviews of the pending OTEPs?

Well, at this point we merged the API spec that was pending; that's nice. There aren't any, like, documents that I'm aware of that are out, except the one that we just reviewed earlier, Ted's document on getting metric conventions together. I don't think the conventions will block... like, they'll theoretically block development, but they can, like... these are interns:
it's going to take a while to get ramped up with the code and APIs; like, the conventions are something that they can probably relatively easily change further along the process.

Yeah. I think we're probably going to need quite a lot of work now to update the libraries that have metrics to the latest standards.
So we've got six instruments now, and there were three in the last version of the spec. And Go is now up to date; we're about to release that, so we're sort of ready there. But I don't know the current state of all the other implementations; that's something that we could probably be discussing in the maintainer meeting on Mondays.

In the case of C++, we may have some of them actually do, or assist with, the implementation, just because of more firepower behind it.

Yeah, okay, all right.
So thanks for the update, Morgan. Yeah, I don't know if we have another agenda item. We didn't talk about views, but Bogdan and Tyler and I have sort of discussed that we shouldn't really talk about views until we finish the OTLP discussion. So if you have curiosities about OTLP, or want to think about it over the next week, please do, and let's try and have...
I hope we are not saying the same thing a week from now, which is that we need to think about this more. And, Morgan, just to answer your question directly: I don't know what we need, but I actually think that metrics is pretty good right now, and, looking at the spec repo, there's a lot of lingering tracing questions that are not being answered right now as fast as I think they should be, like sampling and some other stuff.

So, okay: when I said that logs were distracting from spans and metrics, I meant both spans and metrics.
Okay, I guess I didn't see that as a blocker. I do have an outstanding... it's six months old at this point, but it was kind of stable, and I was waiting to get through this API work and then re-implement. Like, we've just changed the Go SDK quite a bit in the last month, and I'd like to get back to that document. That is still on me, and I will be working on that.
But that's a fair question; I won't try to answer it. We'll put it in the spec, and we'll write it, and we'll have reviewers review it.

Cool, okay. Well, it's five o'clock, at least where I am. It's been lovely, everybody. Let's do this again next week, but next week it'll be at 11 a.m. Thank you. Thank you all, and see you later.