From YouTube: 2020-09-25 meeting
B: I've been on call this week and it's been kind of a bad one, so I'm a little bit sleep-deprived, or out of it, and I didn't put much time into the agenda today. In case anyone else has items they'd like to talk about, now's a good time to update that.
C: So, Josh. We also have a Josh at New Relic who's here to kind of talk about that first action item on the agenda. I didn't know if we wanted to... oh, I guess we're three minutes in, so I'll just make some introductions. Josh is a former engineer here at New Relic and now an EM working in our telemetry pipeline here at New Relic. I'll just leave it to you at that point, Josh, for the expectation of how much involvement you're going to have in the project, because I don't want to... I'm going to mess it up, I guarantee it, so I'll just send it over.
D: Sure, yeah, thanks for the intro. Yeah, Joshua Galbraith, engineering manager for the metrics team in our data platform group. I was formerly an IC, but everyone makes mistakes, right? So here I am. I guess I wanted to be involved as much as possible, sort of to help out on the project management side: anything that I can do to help coordinate work, organize things, prioritize, etc. That's what my first agenda item is about. And, you know, I'm honored to be part of this project. I'm somebody who used OpenTracing and OpenCensus pretty early on, like five or six years ago, and I'm really excited about this work, so I'm looking forward to working with all of you.
D: I know there were five or so that were P3, which feels like a lot of work. I'm told that the RC is maybe five weeks away, with a GA in November, and so I just wanted to see what folks' thoughts were around maybe prioritizing those in a more granular manner than just the three buckets, whether there's a milestone or anything in GitHub that they're tied to, and how I can help out. Yeah.
B: Great. I, for one, welcome you to the project and would welcome any kind of help; we can figure out the best way to achieve our shared objectives. Very welcome.
B: I know that every once in a while we look through the list of issues. Once in a while I write that in the agenda, so here was one where we kind of looked down the list, and it's usually not a great use of this hour with so many people on the call. So it might be a good idea to get another meeting up and running once in a while.
B
For
that
we've
done
that
with
the
hotel
spec
for
tracing,
I
know
so
that
would
probably
be
a
good,
a
good
way
to
move
forward
on
that
and
also
I
know
that.
There's
a
lot
of
work
actually
implied
by
many
of
those
issues
and
the
more
project
management
we
can
get
to
kind
of
track
and
prioritize
and
keep
the
pressure
up
on
others
will
free
more
of
us
to
actually
do
some
of
those
tasks.
So
that'd
be
also
appreciated.
C: Josh, yeah, really related to that: there's a meeting tomorrow morning, and maybe Bogdan also knows this answer, or Morgan's on the call as well. Is that supposed to be a prioritization meeting tomorrow morning, and if so, is it also related to the spec work?
E: Yeah, it's mostly focused on the tracing spec. Bogdan's there, Andrew's there; Andrew runs it. It's mostly focused on just looking at whether there are any new issues, changing the priority of ones based off of our current status, and just sort of getting the tracing spec done. We don't tend to go too deep into metrics, but I'm guessing starting next week we will, because tracing will be done.
G: It's important to come tomorrow, then. Okay: this week is supposed to be the last week for tracing focus and stuff, unless there is an emergency, so yes, most likely. Because it's the end of the week, it would be good for some of you to come, Josh.
F: Yeah, you find the Zoom, the CNCF Zoom. So then I've got it.
G: Perfect. It doesn't matter; what's important is that it's open for everyone, but it's mostly focused on going through all the spec's unassigned issues, seeing if we can assign them, and doing some real prioritization and stuff. Also, you wanted to mention something about having another meeting for different topics; I'm all about that.
G: The OTLP work went very well since we met more regularly and discussed it, so let's prioritize. If there are any other topics we want to discuss more aggressively, we're happy to have an extra meeting for this. I know it's going to be only, at maximum, four or five meetings, because that's when we want to finish. Yeah.
E: So there you go, Tyler: previously it has not been metrics-focused, it's been very tracing-focused, but starting with this one it's going to get metrics-focused.
C: Awesome, cool, yeah. I think we'll try to have Josh show up, and you may see my face as well.
E: It's on the OpenTelemetry calendar, and it's got the calendar Google group on it. So if you have access, if you're subscribed to either of those, it's on your calendar; if not, then you should be.
B: Agreed, cool. So tomorrow I'll show up; I assume it's at eight o'clock Pacific. Okay, I can do that; I made a note. I think then we should keep trying to use this time to go through the questions and topics that most need discussion. For the last few weeks that's been this topic that is both a question of protocol and a question of default API and SDK behavior: talking about ValueRecorder.
B: That's what I hope to get talked about today. And then there's sort of a separate track with the semantic convention work that's been ongoing, and we should review the state of all that.
B: I think we're sort of missing items, and maybe should cover more during the issue scrub, perhaps, or some other meeting. We can propose a review of the ones we have that we've potentially missed. For example, there's an issue in the proto repository asking about raw values; I'm not sure that it's front and center, but that's one of the remaining issues for OTLP, so we should do some of that review.
B: In the past few weeks we've talked about a potential proposal to use DDSketch, sort of having it become essentially a recommended standard for us, and I think we started to see a little bit of pushback on that. So I proposed another issue in the last week. I want to talk about both of those in the next...
B
I
don't
know
20
minutes
or
so
at
least,
and
then
hopefully
people
want
to
want
to
talk
about
some
other
conventions
to
figure
out
what
what
else
we
can
do
or
need
to
do
to
get
those
prs
merged
and
then
hopefully,
there's
an
otlp
discussion
that
that
maybe
bogdan
wants
to
like
think
about
right
now.
B: So, as for the DDSketch questions: we were digging deep, and I think we should go ahead with that, but it might be worth everybody seeing, before we do, this one that I myself wrote up. This was in response to some of the discussion in 919.
B: I agree that DDSketch is pretty good; we should go with that as a standard. To that end, I think we've been asking questions of Michael, who's on the call, and maybe we should... but Mike will take it from here.
I: The new issue opened the other day... we don't mind either way. Any HDR implementation... I mean, HDR would have to be implemented, and DDSketch is an implementation of HDR. So I appreciate that there are many ways to implement this. We're very happy with DDSketch, and we're also very happy to contribute it. However, I don't know how these things work in the open source community, admittedly, but however you guys want us to contribute it, we are happy to contribute it.
B: Yeah, so I think we've also noticed that it's probably more than just one question.
B: There's a question of what we end up doing in the collector, which is sort of where the high-value calculations occur, and potentially a lot of the long-term storage can be based on some of that aggregation that you do at the collector level. And it's hard to get even one of these approaches to work with all the code involved, so maybe we wouldn't want to force ourselves into a corner of having to do something like that in every one of the SDKs. So maybe all that really matters to you is that your customers can run the collector with DDSketch as the default output mechanism.
B: That means that at least, you know, Datadog won't have to do the work to get data into its DDSketch format as it crosses that sort of network boundary into your system. But it does mean that if the defaults that OpenTelemetry has in place are not outputting DDSketch, you're going to have an extra conversion somewhere, and I suspect that that conversion cost is not...
B: ...you know, first in mind for most people in the room here, and that what is more important is getting us to GA. I think maybe, if we relax what we can call a histogram, that'll help. So that's what I've just said in the issue, and in the other issue. I felt that doesn't excuse us from this question of how we convert back into some of the legacy representations, like if you are using Prometheus.
I: So, on the Prometheus question: we did specify the DDSketch-to-fixed-buckets representation, but it doesn't define the bounds of the buckets. So in the exporter... I mean, there's human knowledge involved when people export a Prometheus histogram, and we can't derive that human knowledge, but given that knowledge, certainly we can merge into it, and I think that's reasonable, hopefully, and the only realistic solution.
I: So in the Prometheus exporter, one could define their buckets, or, for a timer, define a default, say that nothing longer than 60 seconds matters, and this code would merge it in. It's fairly easy code, but nonetheless it will merge into big buckets and output what one expects.
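The merge step described here can be sketched as a small routine that folds fine-grained buckets into a coarser, user-chosen set of Prometheus-style bounds. This is an illustration of the idea, not code from the actual exporter; the bound values are hypothetical:

```go
package main

import (
	"fmt"
	"sort"
)

// mergeBuckets folds fine-grained histogram buckets (upper bounds with
// counts) into a coarser set of Prometheus-style upper bounds. Each fine
// bucket's count lands in the first coarse bound that covers it; anything
// past the last bound goes into the +Inf overflow slot.
func mergeBuckets(fineBounds []float64, fineCounts []uint64, coarseBounds []float64) []uint64 {
	out := make([]uint64, len(coarseBounds)+1) // last slot is +Inf
	for i, upper := range fineBounds {
		// first coarse bound >= this fine bucket's upper bound
		j := sort.SearchFloat64s(coarseBounds, upper)
		out[j] += fineCounts[i]
	}
	return out
}

func main() {
	// A timer where "nothing longer than 60 seconds matters": two coarse
	// bounds (1s, 60s) plus the overflow bucket.
	fine := []float64{0.25, 0.5, 2, 8, 120}
	counts := []uint64{1, 3, 4, 2, 5}
	fmt.Println(mergeBuckets(fine, counts, []float64{1, 60}))
}
```

Counts are only ever added together, so the merge loses resolution but never samples.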
B: Sounds good. Anybody want to talk about this topic?
I: Just real quick before we go on: you mentioned that there's some complexity in the collector for various cases, but there's also the fact that we're doing the aggregation here in process, rather than sending raw values to the collector, to minimize the overhead between the client library and the collector. So here you have the client library implementation, but it represents any sketching, if we all want to do it in process; it does represent that unit of work also. And I think doing it in process was the right decision.
B: Okay, so I think what you just said was that we'd like to have this compression inside the client, and inside the collector, and inside the server; we want to merge distributions so that we're sort of keeping them compressed, as a distribution, the whole way along.
B: I guess that's where this simple story that I told in that issue, about how we could just say, let's just let it be a histogram, a simple summary, starts to fall apart: when you start to combine different histograms from different places together without some awareness of the loss of precision that happens when you do that. That's what these dedicated merge algorithms do, and maybe, by forcing everybody to represent their data structure as a histogram, we could be...
B: Unfortunately, I guess that sort of points to there being no great and perfect answer here. Does anybody else want to say something?
J: Hey, Colin here from New Relic, just bombing in out of nowhere. (Hi Colin. Hi.) So I do kind of like the generic buckets approach, simply because of the variety of different histogram formats that are out there in the wild. DDSketch is pretty great at compressing things down, but if you have something like Prometheus histograms, it still seems sort of heavyweight to represent them.
J: So, for example, if you have DDSketch as the bucketing algorithm, you're going to know, going down the stream, that you can merge them. And maybe the spec has a certain known enumeration of bucketing algorithms that are supported; that way it's sort of easy to encode on the wire.
B: The idea being that, if we arrange it so that the client and the collector are all using the same algorithm, and we keep them in their sort of natural encoding, so that they're compressed in a way that those merge algorithms work with correctly, then we're all better off. So we can write a protocol that does that. There's still this question of what the SDK should do by default.
B: I think that...
G: And we may give the user one way to set the entire SDK, for all the recorders: the recorder records values, and unless you do anything, you get this sketch. For example, we may give a global setting so you don't have to set it for everyone, or something like that; a very small thing. But as a pure default, maybe we should fall back to the simplest thing.
B: What that reminds me of, the only thing I want to say, is that sometimes users have come up to us and said: I wanted a gauge.
B: Actually, what I wanted was last value. So there were already some users who were kind of looking for a gauge, not a histogram; when you move away from histogram towards min/max/sum/count, you're even further from what they wanted, in some sense. That's my only response. So, I know that in the original... well, anyway, the spec currently says min/max/sum/count, primarily because it's sort of the simplest, smallest answer we could come up with; it's unobjectionable. Go ahead, Michael, please.
I: Not to drift at all, I was just going to say that by including one of the bucketing options, whether it's DDSketch or fixed-size buckets or whatever, we're not proposing removing anything. You can always drop the extra data on the floor. It does cost extra on the wire or whatever, but the default being slightly bigger provides optionality and doesn't remove any functionality.
G: Yeah, my two cents here: with explicit fixed buckets, it's going to be hard to come up with a default that works for everything. That's one of the most common problems with that; the fixed-bucket solution is very metric-specific, and even then you may have trouble choosing the right buckets. But let's assume you have enough traffic and you choose enough buckets: you'll get a good approximation. Still, explicit buckets are hard to do; linear buckets...
G: Maybe we can do something with linear or exponential buckets. Exponential buckets may actually work: say we start from 0.1 or something like that, and we go up to whatever value, 100 buckets or something like that, or 120 buckets. That is very easy to calculate; you just do byte manipulation and bit manipulation and stuff to calculate things.
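The "bit manipulation" being alluded to can be sketched for base-2 exponential buckets: the bucket index of a positive value is just the unbiased exponent field of its IEEE-754 representation, so no log call is needed. This is a hypothetical helper illustrating the trick, not code from any SIG repository:

```go
package main

import (
	"fmt"
	"math"
)

// base2BucketIndex returns the index of the base-2 exponential bucket
// containing v (for v > 0): bucket i covers [2^i, 2^(i+1)). It reads the
// 11-bit exponent field of the float64 directly instead of calling Log2.
func base2BucketIndex(v float64) int {
	bits := math.Float64bits(v)
	return int((bits>>52)&0x7ff) - 1023 // unbias the exponent
}

func main() {
	for _, v := range []float64{0.1, 1, 2.5, 100} {
		fmt.Printf("%v -> bucket %d\n", v, base2BucketIndex(v))
	}
}
```

Finer resolution (more buckets per power of two) would use the top mantissa bits as a sub-index, but the shape of the computation stays the same: shift, mask, subtract.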
I: That's something we considered in the conversion to Prometheus just now, while we were speaking: where we just assume that Prometheus buckets are growing exponentially, and we choose an enormous maximum bucket and convert the DDSketch. We decided not to go that way, but it's actually legitimate also. I agree.
B: I sort of wanted to throw out a new idea that's occurred to me in this context as well, from reading the Circonus histogram ("circllhist", I guess; I don't know how they say it) paper. That approach sort of blends the exponential with the human-readable problem and finds a solution that is sort of a middle ground, I think in a really nice way. The problem with pure exponential strategies...
B: ...is that you end up with these numbers that are not human-readable. Like, how many steps of three are there between one and ten? You end up with a number that's somewhere around three and a half, but it's a very long real number. The Circonus histogram approach says you're going to have a 0.1, a 0.2, a 0.5, a 1, and then a 2, a 5, a 10...
B: So the bins are slightly different in size, but it makes for human-readable numbers, and then you have a single byte, which gives you 128 values, and so you get, like, 64 powers of... anyway, some number of them per decade. It's actually a really nice scheme, because it gives you these human-readable boundaries, so it would actually translate back into Prometheus really well. So that's my new favorite, even though it does have a worse relative error than some of the other schemes.
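A toy version of the "human-readable boundaries" idea, assuming a simple 1-2-5 ladder (this is an illustration of the concept, not the actual circllhist encoding, which packs a two-digit mantissa and an exponent into bytes):

```go
package main

import (
	"fmt"
	"math"
)

// humanBound rounds v (v > 0) up to the nearest 1-2-5 style boundary:
// ... 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, ... The bins differ slightly in
// relative width, but every boundary is a number a human can read, which
// also makes translation to fixed Prometheus buckets painless.
func humanBound(v float64) float64 {
	exp := math.Floor(math.Log10(v))
	decade := math.Pow(10, exp) // size of v's decade
	switch m := v / decade; {
	case m <= 1:
		return decade
	case m <= 2:
		return 2 * decade
	case m <= 5:
		return 5 * decade
	default:
		return 10 * decade
	}
}

func main() {
	for _, v := range []float64{3, 7, 42, 950} {
		fmt.Printf("%v -> %v\n", v, humanBound(v))
	}
}
```

The worse relative error mentioned above shows up here too: the 2-to-5 bin is 2.5x wide, versus a constant ratio for a pure exponential scheme.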
G: How many buckets does it have? In terms of memory, is it very inefficient, or...?
B: Well, in this representation here you could represent any of these schemes using explicit buckets, but that's where the extra compression of these schemes sort of gets interesting, right? The discussion in the issue with the authors of DDSketch was very detailed about how we'd have four numbers for the positive range and four numbers for the negative range; it's complicated, but it is compressed.
B: The circllhist is a little bit similar, and it ends up using one byte: one byte for the exponent. That encodes a zero bucket, and then about half of the remaining values are negative and the rest are positive, and so on. And then it would be like 0.0001, and then a 2 and a 5, and then times ten: 1, 2, 5; times ten again: 1, 2, 5; something along those lines.
B: I don't want to propose a single standard anymore, and I think, to get us to GA fastest, we should just work with explicit buckets for now, and then each SDK can choose the histogram that works best, except for this problem with Prometheus. So again, I don't know if you recall, I had OTEP 117, which was saying not just MMSC, min/max/sum/count, but min/max/sum/count/last, which is sort of an interesting little-bit-of-everything approach.
B: ...default. So, have I derailed us? I mean, I worry that I've derailed us: talking about DDSketch for a few weeks and then making a proposal that we not talk about DDSketch.
B: I think everyone's going to be served by having this type of compression in the collector, and we're going to get a lot of pushback if we mandate something for every SDK, because there are ten choices but none of them are available in all the languages. So I don't know what to do, but we should also move on, I think, in terms of this hour. Okay, let's see.
B: Justin and Aaron both have things to say about semantic conventions, and maybe Graham; I want to turn it over to one of you. Aaron has a big PR open, and I think there is also, spec-wide, some attention needed for semantic conventions, and I'm interested in hearing from any of you.
B: True, too: James is here, because we're in the afternoon call. Congratulations, Josh, for remembering that. Yes, James is here and knows a lot about the collector, and the Windows angles of metrics as well. But basically, what I was trying to do, maybe not quite saying it, is delegate somebody here to kind of own getting us to GA with all the...
B: ...semantic conventions work tied up. I've appreciated both James and Aaron for their expertise in actually knowing the exact metrics that all the collector instrumentation plugins are using, but I also think we're looking for somebody to sort of proofread our spec and tie it all together. Which could be... I don't know. I think, Justin, you volunteered, actually.
C: Yeah, so I volunteered last week, and then I actually brought Joshua Galbraith from New Relic here to do it, so I'll actually kind of hand it over to him.
B: And I think you, Justin, and Graham can fill you in, Josh, about all the in-flight work that has happened and is ongoing. Then there have been a few other PRs in the OTEPs repo, about naming conventions, and then some of the things that we need to talk about, like character sets and maximum size limits and such as they refer to metrics, are pretty important, but are sort of also being specified at the OTel level, across metrics and traces and so on.
B: So there's really a question of tying that all together. But currently, the big piece of work that Aaron has been leading is this one here. My impression was that it had a few pieces of feedback that still needed to be addressed; that may be unfair or out of date.
K: No, yeah, it's fair. It's just a lot of work. (I understand, yeah.) Yeah, there are still a few comments left, but I think I know where to go with all the comments that are there.
B: Cool. You're able to resolve those comments yourself, aren't you, Aaron?
K: The majority of them I did. I didn't want to... there was a discussion there on load, and I just want to make it clear, because I'm getting some mixed signals: I was trying to use it as a counter-example of one that should be specific to the operating system. Does that seem correct? I'm not trying to introduce it into this PR, just use it as an example of one that should be OS-specific.
B: All right, it sounds like we don't really have any real disagreement here at this point. Great. I don't actually have a personal list of all the things that are outstanding for semantics, so we can keep tracking that, maybe.
G: Can we... last time when I looked, which was three or four days ago, a bunch of the things were using the instrument types. Can we also put the aggregation type from OTLP in that table, to clarify some of the things for people?
G: Indeed. But some of the metrics, especially the system metrics, we may do directly, because we scrape them, so they are already aggregated. Most likely we will not apply any aggregation, so we would like to inform the users how they will look in OTLP, correct? They can apply different aggregations later if they want, but this is how we're going to scrape them. And if you look, I think...
G: All of them are observers, which in general means a scraping mechanism. And for a bunch of them, in the collector, we scrape and we generally generate the proto directly, because the collector is a pipeline for that proto. So we don't go through the observer patterns, which I understand.
K: Me as well. (Go ahead, Aaron.) Sorry, I was just going to say, and this is a minor point, but these tables are getting pretty wide, because I just added a description column as well. So I think that would require reformatting it in a different way, which is fine, but I'm not really sure how to do that.
B: I think it's okay to have wide tables. I could also imagine having it be like a two-line field for instrument type slash aggregation type. But just to be clear, I think it should be something you could compute by looking up the default aggregation for each instrument and applying that mapping; it's just nice to have it in one place. (Yes, I agree.)
K: Okay, cool. How about temporality in those tables?
K: Yeah, that's right. My only concern is the utilization one: I was thinking about it a bit, and I'm not sure if you could convert from cumulative to delta, and vice versa, for utilization.
B: And I also have this minor spec issue, or PR, open that talks about changing the wording from "non-additive" to "grouping" and from "additive" to "adding", and so on. What I'm trying to do with this is actually put in a few more paragraphs to try to really strengthen this understanding we've built about the difference between adding and grouping, and how the question that you just asked, about cumulative versus delta, doesn't make sense when you have one of these grouping measurements, basically.
B: Okay, I made a mistake when I merged upstream, so don't merge it right away; I'll fix it, and then I guess I can merge it myself.
B: Does anybody else want to take an agenda item, lift it from this list here? We've actually spoken about... we haven't spoken about this last item here. That's my alarm going off; I'm going to mute for a second. Do you think...
G: All of them are our issues so far, so I don't know... no, no, all of them should be issues.
K: Please go ahead. Sorry, I don't see the UCUM item on the agenda today. I don't know if Tyler wanted to talk about that again today, but I did update this PR with UCUM, and after doing it practically, I do wonder if it should be the case-sensitive or the case-insensitive variant. Yeah, that's the main one I have, because it would change things a little bit. Some of the units are actually different; esoteric ones, but some of them are actually different.
B: Interesting. Well, I'd be happy to talk about units; we've got some time left here. But I interrupted just earlier to say that I think Bogdan just asked for help: after the issue scrub tomorrow, maybe we can have a list of things that we are specifically asking for help with, rather than a vague suggestion; that might help. Aaron, or anybody, would you like to talk about UCUM and any outstanding questions, such as case?
C: I think there was a question directed at me, which I probably did a poor job of answering; just a caveat. Yeah, that's an interesting question. I've been thinking a little bit about the units. I've also not taken any action to actually write any specification related to the open issue for units right now. So I guess, if you're looking for direction, I don't have it; that's my short answer. But I would love to understand...
C: ...your feedback. One of the things I was thinking about, based on the units that you're including there: I think it really is important that you included units. That's a really good point, because making sure that we have a consistent idea of what's going to be sent down in the semantic conventions is going to be really helpful.
C: The question is just how the UCUM would percolate back up through the SDK into the interface that the end user is actually going to use; that's the open question right now. I think I haven't heard any opposition from people saying that they don't want UCUM to be implemented in the protocol.
C: The only question is how far up the chain that should actually get percolated, and then how it should be presented to users. Josh, in that issue, made a really good point that having the extensibility of the unit system is a pretty critical thing, and I totally agree. I wouldn't want, in the Go world, to have to input "ns" for nanoseconds when there's a default type in Go that actually is nanoseconds, in the standard library, right? And so, like...
C: I think there's kind of a gap to bridge there. And then, yeah, I think you also bring up another question, which I think is actually a really good question for the protocol: if it supports UCUM, what format is it in, the case-sensitive or the case-insensitive format? I don't actually know the answer. Sorry, I might be giving you more questions than actual answers.
G: Case-sensitive and lowercase, if no... no, wait, wait, wait. Yeah, in this case, because the sensitivity may matter. Does it matter? It matters between them, correct?
C: Yeah, it does. There are definitely some interesting cases where it matters, especially also in the prefixes; it'll have a different form if case sensitivity matters. I didn't see anything in the comments of the OTLP issue that actually specifies that, but... yeah, I don't know.
K: If we do say that, for instance, it's case-sensitive, and you did screw up the letters and you thought it was case-insensitive, it wouldn't have the same meaning.
C: For units that are case-sensitive, are there gotchas, or are these the things that people just normally expect to be case-insensitive?
C: Yeah, I think that's correct. Sorry, it's been a little bit since I've looked at this, but yes, there are gotchas in there, Justin, that are, I think, not intuitive, to try to resolve collisions. Yeah.
K: Yep, and there's also, and this is kind of esoteric, but for pascals: if you have the case-sensitive variant, it would be capital P, lowercase a, and then for case-insensitive it would be "PAL". So you actually have an extra letter for that.
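The divergence Aaron mentions is real in UCUM: each unit has both a case-sensitive and a case-insensitive code, and they are not always related by simple case folding. A small illustrative lookup (a hand-picked subset, not a full UCUM mapping):

```go
package main

import "fmt"

// ucumCaseInsensitive maps a few case-sensitive UCUM codes to their
// case-insensitive counterparts. Note pascal: "Pa" becomes "PAL" rather
// than "PA", so naive upper-casing would produce the wrong code.
var ucumCaseInsensitive = map[string]string{
	"s":  "S",   // second
	"ms": "MS",  // millisecond
	"Pa": "PAL", // pascal: gains a letter in the case-insensitive variant
	"By": "BY",  // byte
}

func main() {
	for cs, ci := range ucumCaseInsensitive {
		fmt.Printf("%s -> %s\n", cs, ci)
	}
}
```

The takeaway for the spec discussion: whichever variant is chosen, it has to be named explicitly, because a converter cannot recover one variant from the other mechanically in every case.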
C: I'd try to use case-insensitive things if I can, but I don't know enough about the details of this scenario to really weigh in on that. My philosophy about this whole standardization of units, though, is that it ought to be, firstly, a semantic convention and then, secondly, a set of convenience utilities in each language that makes sense for the idioms of the language.
C: So in that sense, I think that's actually a good distinction, because the way that Aaron's writing the specification is to make a clear definition of what the units are, right? And so, if we're just going to agree that, in the specification, if you see a unit, you could interpret it with UCUM, whether the case-insensitive or the case-sensitive variant, whatever Aaron sets the precedent with (no pressure there, Aaron), then yeah.
C: I think that's probably just a good way to do it, because eventually it'll come down to the implementation with OTLP, and that, I think, could probably be a separate issue. But just in the specification, if we use UCUM to define units, then we're going to be compliant eventually in that process.
C: That's... I think, yeah, I think you just summarized the strategy that I'm hoping to have specified eventually. So, from my perspective, I think this needs to be a semantic convention. The thing that Bogdan just suggested, I think, is excellent, and I think that issue needs to be treated as a higher priority than P3.
G: You can comment there, and... okay, I don't know if you have the power, but I would suggest, Tyler or Josh: can you remove the priority, or remove the required-for-GA label, so it starts to be re-evaluated? Because P1, whatever... yeah, somebody did that.
B: I wanted to follow on just about the units. I don't want to belabor this question too much, and I don't think many people will be too worried about how we decide to encode pascals; really, the timing units are what matter most, I think, probably. And so I noticed a related issue when I was working through some code with the Go SDK recently, where there are places where we have to keep a map of, like, a unique instrument: this instrument...
B: ...has the same name as that instrument, and has the same metric instrument type, and they have the same number type, so it's the same as that instrument. And because of this open PR, this spec about the accumulator, there are words in here about what the SDK's responsibility is as far as instrument registration.
B: So it's more than just the accumulator at this point, and I realized that I have to treat unit as part of what's unique about an instrument, because I don't want to start mixing measurements that are in different units. So there's some question about this issue, the PR, number 880; I don't want to go into it now, but there's something there about that.
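The registration concern can be sketched as a uniqueness key in which unit participates in instrument identity. The type and field names below are illustrative, not the Go SDK's actual types:

```go
package main

import "fmt"

// instrumentKey sketches the idea that two instruments are "the same"
// only if name, instrument kind, number kind, AND unit all match, so
// measurements recorded in different units are never silently mixed.
type instrumentKey struct {
	Name       string
	Kind       string // e.g. "ValueRecorder"
	NumberKind string // e.g. "float64"
	Unit       string // "ms" vs "ns" yields distinct instruments
}

type registry map[instrumentKey]bool

// register reports whether k was newly registered; a second registration
// with the same key (including unit) is rejected as a duplicate.
func (r registry) register(k instrumentKey) bool {
	if r[k] {
		return false
	}
	r[k] = true
	return true
}

func main() {
	r := registry{}
	a := instrumentKey{"latency", "ValueRecorder", "float64", "ms"}
	b := instrumentKey{"latency", "ValueRecorder", "float64", "ns"}
	fmt.Println(r.register(a), r.register(b), r.register(a))
}
```

Omitting Unit from the key is exactly the bug described here: a millisecond stream and a nanosecond stream would collapse into one instrument.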
B: I think we really need to make sure that, if we intend for the collectors to be combining streams of metrics where the same metric name and type has measurements taken in different timing units especially, those get corrected. That's the functional goal I think we should have, and the only practical one, I think; we're not going to be turning watts into horsepower or something like that, that doesn't matter.
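The "corrected across timing units" goal amounts to normalizing to a common unit before merging. A minimal sketch, assuming UCUM-style timing codes; the conversion table covers only the handful of units this discussion names:

```go
package main

import "fmt"

// toMillis normalizes a duration measurement from a UCUM-style timing
// unit to milliseconds, so streams recorded in "ns" and "ms" can be
// merged without mixing scales. Non-timing units are rejected rather
// than guessed at (no watts-to-horsepower conversions here).
func toMillis(v float64, unit string) (float64, error) {
	factors := map[string]float64{
		"ns": 1e-6, // nanoseconds
		"us": 1e-3, // microseconds
		"ms": 1,    // already milliseconds
		"s":  1e3,  // seconds
	}
	f, ok := factors[unit]
	if !ok {
		return 0, fmt.Errorf("no timing conversion for unit %q", unit)
	}
	return v * f, nil
}

func main() {
	ms, _ := toMillis(2, "s")
	fmt.Println(ms) // two seconds, expressed in milliseconds
}
```

A collector doing this at merge time would pick one target unit per metric name and fail loudly, as above, on units it cannot reconcile.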
B: Yeah, I think... I've had trouble at times understanding the very precise meaning in the words Tyler uses, sometimes. So I think we're looking to you, Tyler, to help us finesse the words, to make sure that we can do what we want later.
C: Yeah, yeah, I apologize; I haven't been able to do anything this week. I've been prioritizing the Go tracer API right now, but I'll try to make sure that I get something in during this next week's cycle. Sorry about that.
B: No worries; this is moving as fast as we make it. So I'm going to keep working on this one that we're looking at now as well.
B: I think we made it through the agenda. Does anybody have anything else they'd like to add?
B: Or I can go look at the Kafka problem that I'm supposed to be on call for. Cool, all right, thanks everybody; see you next time. We need... we need enough comments. Bye-bye.