From YouTube: 2020-09-17 meeting
A
Hi everybody, just getting started here. I thought we might go over the items we discussed last week, since, as far as I know, those are the only issues that have been worked on by anybody in the last week, so I think we should be able to start. Then... sorry, I feel a little bit confused by this reboot.
A
I afterwards talked to two people about doing that: Justin Foote, who's here on the call, as well as Aaron Abbott, both of whom have been involved in the current set of specs. And I think this is probably a situation where the more the merrier. I know Aaron has been focused on these pretty key PRs, like the one we're looking at here, 937. Justin's also been involved in several efforts to standardize timing, metrics, and so on. So what I was hoping to find was just a person to be the owner of any loose ends, making sure that there's sort of a coherent story in our specs. Like, we've got semantic conventions related to labels, we've got semantic conventions related to metrics, we've got general guidelines, we've got conventions about character sets, and so on. I feel like all of that belongs under the umbrella of semantics, and I was hoping to see someone take ownership there.

So, given the PRs that Aaron has been writing, it seems like perhaps asking Justin to own that sort of coherence and ownership of the semantic conventions would be a good thing. I just wanted to pass that along. I think I've talked to both of you about it. I don't think there's any danger of duplicate work happening here, so any work you could do to improve our spec for semantics...

That'd be great. With that out of the way, I think we have this quite lengthy discussion happening on this PR here. I've been involved in at least a couple rounds of this feedback. Aaron is doing a great job of responding to all the issues that are brought up. I think it would be nice to talk through this now, since Aaron's here on the call, and see what, if any, of these issues are causing us to be blocked, or whether we can resolve them and get this merged.
B
Yeah, I would say that usage one, the usage convention, is a little bit... Aaron, would you... are you...
A
...here, and would you like to give us a comment on this? Yeah? Can you hear me? I can hear you right through... I don't think...

So, if anybody else disagrees with that, I think we could probably agree not to discuss this in depth here. Just... you're smiling. What do you...

Consistency about plurals and non-plurals, various minor things. Nobody's talking. All right, we're going to move on. I'm going to say that this seems pretty close, and everyone's kind of chuckling. I've recently been added to the technical committee, so I now have powers to approve these.
A
Something's funny and everybody's smiling, but nobody wants to talk, is what I think is going on. Your audio was coming up for a little while.
D
Aaron, while I guess Josh is kind of working through things, I don't know if there was anything you had in particular you wanted to call out about that PR.
B
So there's some discussion on the usage convention, which is supposed to be something where, if you sum across the label set, you would get a limit. So, for instance, CPU time for a given interval: if you summed up all the different labels, you would get the total amount of time, and then you could calculate the utilization easily from that. And it seems like in a lot of cases it's actually not possible to do that.

So I'm not sure if that convention is good to add; there's a lot of discussion there. But basically, I think there was a suggestion to expose the limit as a separate metric, so that utilization could be calculated, or converted from utilization back to usage and vice versa.
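A minimal sketch of the conversion being suggested here, assuming usage and limit are exported as two separate metrics; the function and variable names are illustrative only, not any OpenTelemetry API:

```go
// Sketch of the suggested convention: with usage and limit as separate
// metrics, utilization is derivable, and the conversion is reversible.
package main

import "fmt"

// utilization is the dimensionless ratio usage/limit, typically in [0, 1].
func utilization(usage, limit float64) float64 {
	return usage / limit
}

// usageOf recovers the absolute usage from a utilization and a limit.
func usageOf(util, limit float64) float64 {
	return util * limit
}

func main() {
	const limit = 2 * 1024 * 1024 * 1024 // hypothetical 2 GiB container limit
	used := float64(512 * 1024 * 1024)   // hypothetical 512 MiB reading

	u := utilization(used, limit)
	fmt.Printf("utilization = %.2f\n", u)                  // 0.25
	fmt.Printf("usage = %.0f bytes\n", usageOf(u, limit)) // back to 512 MiB
}
```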
D
Yeah, okay. I can't remember, Josh, going into this in the implementation. That seems to make sense to me. I don't have as much context on that as Josh. The other one I see is James, also in that conversation.
A
...us now? Josh, I can hear you. Okay, yeah, I think I've caught up now. We were in the middle of this thread here. I think I wrote down what I thought, which is that perhaps we should, in the semantic conventions, include a limit, and then it's possible that for some metrics the limit is implied by a sum over all labels, and for others it might not be. I think that's particularly true for CPU.

Like, CPU limits: you don't really know what your limits are. The number can be greater than one, because you might have more than one CPU, and that's generally a case where you don't know it. So for that one, I think we're going to want to have a limit, but I think for some, like memory and disk, it's reasonable to say that there's some label that adds up to 100.
B
I think James is out there adding a counterexample here, for memory at least.
A
But then also, you know, this question of James's here I can answer for Go: the Go memory counters are not necessarily set up to do this 100% calculation. The numbers don't necessarily add up to one hundred percent, because of timing involving garbage collection, and there's a legacy issue in the golang repo about it.

So we chose to not make them add up, and that implies that we would want a limit, like a total-available-memory limit. But I tend to think the way people use these is that they're going to have their process output whatever memory stats it has, and then their container is going to output a limit, and then they'll probably graph something from the runtime memory usage against the process limit, which is like a container property.
A
Okay, I'll try. So my position here was that I think sometimes there is a reason to have a limit, and sometimes there's not. And as far as the semantic conventions, we just need to define what those mean, and as far as the SDK spec, we should probably recommend which is appropriate in certain situations.

At least we can break this problem up: into something about the semantics saying what it means to be a limit and a utilization, and, for the SDK spec, we could say that, you know, the default runtime instrumentation package should output memory as a limit and a usage, or as a utilization of the usage. We could make that part of the SDK spec.
B
Okay, do you think it's okay to just have one of those? So, say, just usage and limit?
A
Well, that is the convention I'm more familiar with, or more comfortable with, but utilization is an easier property to monitor, and you don't have to join two metrics together to get the utilization if you have a single metric for it. So I guess I was supposing that there could be different use cases and different settings.
B
Okay. I remember you mentioned having at least two of those three, right?
A
I think that's kind of a minimum configuration, yeah. I don't think it's terrible to have all three, and I think there's a future world where the views, some sort of views API, can be used to calculate utilization automatically. And it's particularly true, we know, for timing: when the usage is a time, then we can report the utilization using timestamps directly; we don't need a third or second value.
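A sketch of that last point, under the assumption that usage is itself a duration (e.g. cumulative CPU seconds): the collection timestamps already bound the interval, so no extra metric is needed. All names are illustrative, not part of any spec:

```go
// When the usage metric is a time, utilization over a collection interval
// follows from two cumulative readings and their wall-clock timestamps.
package main

import (
	"fmt"
	"time"
)

func cpuUtilization(prevCPU, currCPU time.Duration, prevT, currT time.Time) float64 {
	wall := currT.Sub(prevT)
	if wall <= 0 {
		return 0
	}
	return float64(currCPU-prevCPU) / float64(wall)
}

func main() {
	t0 := time.Now()
	t1 := t0.Add(10 * time.Second)
	// 4 CPU-seconds consumed over a 10-second window -> 0.4. Note this can
	// exceed 1.0 on a multi-core host, which is why a CPU "limit" is slippery.
	fmt.Printf("%.2f\n", cpuUtilization(0, 4*time.Second, t0, t1))
}
```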
B
Okay, I can read over this whole thread again and see... I can at least specify the limit, and then, like you said, choosing or writing the SDK spec to have two of the three is sort of independent.
A
Do we have... we have Michael here? Do we have Bogdan, perhaps? Maybe, maybe not. Oh, Bogdan, please, you were saying? We also have...
E
I was going to say, we also have Homin Lee, whom I just want to quickly introduce, because he can perhaps answer a lot of questions that I haven't in the past. He's the team lead on the team that developed sketches. Hello there.
A
So the issue, just to bring everyone up to speed, is that we've kind of been going back and forth on what to make the default aggregation for this instrument we call ValueRecorder. Over the course of the last year, we've gone through a number of iterations, including histogram as an option with fixed boundaries.

There's min-max-sum-count, which is one where it's just sort of a basic mergeable summary. There's the classic Prometheus summary, which includes the non-mergeable quantiles in it as well. And then it's possible to imagine just raw data, or an exact aggregation, and we don't have support for raw data in OTLP. That's actually the other topic that I wanted to bring up at some point.

Part of the reason we're having that trouble is that we don't have any sort of ideal aggregations in the spec. DDSketch has been brought up, but it's worth noting in that issue that there is some amount of question as to whether perhaps DDSketch is just... yeah, go ahead, sorry.
D
I just wanted to... okay, see if we can improve this.
A
It keeps working. So, at the very end of this issue, 919, you'll notice that Armin from Dynatrace has kind of thrown up a little bit of an objection, saying that, you know, HDR histogram and other dynamic histograms are still also quite popular, and if the goal is to get something that can be rapidly deployed or agreed upon, perhaps we could be standardizing something not quite DDSketch, but perhaps a more flexible way to encode variable histogram buckets.

You know, if you'll scroll up a little bit, Tyler, I did list what the steps would be. That was the action item that we took last week: in order to standardize DDSketch across OpenTelemetry, what would we need?

It kind of means having a library, a protocol, an aggregator, and an exporter support that, so it's sort of like three or four parts per language. And I think part of me is thinking that this is on the borderline of being a post-GA item, just because it looks like it's going to take a long time. But then you see this other sort of feedback from Armin at the bottom, and it starts to look like there could be some disagreement over whether we should be trying to standardize this or not.

I know in OpenMetrics there is essentially a histogram data point that has multiple options for its buckets: you can have linear buckets, you can have log-linear buckets, you can have custom buckets. And one idea that's come out of this, at least just now, for me, is that you could have DDSketch buckets as well.

DDSketch buckets are a particular way to assign buckets that uses this gamma function and the equations that are part of the DDSketch paper. Because then, potentially, it would be possible to use the HDR histogram, or the, you know, Dynatrace histogram, or any of these other options, if they all fit into the same data structure of some buckets with various ways of indicating the bucket boundaries.
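For reference, a minimal sketch of the gamma bucket mapping from the DDSketch paper, to show it as one more bucket-boundary scheme alongside linear and log-linear; this is only the logarithmic index mapping, not a full sketch implementation:

```go
// DDSketch's gamma mapping: for relative accuracy alpha, gamma equals
// (1+alpha)/(1-alpha), and bucket i covers (gamma^(i-1), gamma^i].
package main

import (
	"fmt"
	"math"
)

// bucketIndex maps a positive value to its gamma bucket index.
func bucketIndex(value, gamma float64) int {
	return int(math.Ceil(math.Log(value) / math.Log(gamma)))
}

// midpoint is the representative value for bucket i; it is within a
// relative error of alpha of anything in that bucket.
func midpoint(i int, gamma float64) float64 {
	return 2 * math.Pow(gamma, float64(i)) / (gamma + 1)
}

func main() {
	alpha := 0.01 // the 1% relative error discussed on the call
	gamma := (1 + alpha) / (1 - alpha)
	for _, v := range []float64{1, 10, 1000, 123456} {
		i := bucketIndex(v, gamma)
		fmt.Printf("value %g -> bucket %d -> estimate %.2f\n", v, i, midpoint(i, gamma))
	}
}
```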
A
That's a potential option here. In the past few weeks, we've gone over this a few times, and Michael has been very helpful, but I think one of the questions that was most unanswered about this was whether, if this became the default, the classic Prometheus use case is going to have trouble. The classic problem being that histograms, when they enter Prometheus, are turned into individual time series, one per bucket; there's this "le" label.

...like too many series, just because they could use different bucket boundaries. And the question is: what will the experience be like for a Prometheus user, and how will we help Prometheus users? Say you're the OpenTelemetry collector and we are remote-writing Prometheus data to a Prometheus backend: how do we help the Prometheus backend condense that number of time series back down? Again, I think we've been talking through that; Michael has talked about showing us some example code to convert from OpenMetrics into some other format.
E
So, for that, Homin might be able to talk about it. That's actually the one thing that's on another team and is in flight right now: we have hands on keyboard, and we are committed to getting you sample code on this. I just don't have that today, but that's on our aggregation team, not on our sketching team, unfortunately. Okay.
C
Hey Josh, real quick: how do we get the... the last...
A
So, okay, that has been enough to make... I know my audio is not really working right now. There's a question about histograms that we're going to ask Homin, and then there's a question about DDSketch and last value. I can answer the second question; we've talked about this a number of times.

There has been this issue where I found myself trying to reconcile this concern by adding last value to the sketch, like you could have min, max, last, sum, and count, in addition to all the other data. And I know that debate... so there are just different ways you can answer that question. Michael has confirmed that Datadog's position on this is similar: that if you want to use a last value, you should use an instrument that has those semantics.
E
No, no problem. And tell me if I'm wrong here, but there's this async instrument, right, that you can record the value of on whatever schedule you want. The ValueRecorder will record values over time, and in that case the aggregation is implied by its semantics, right? You could always just record something every x seconds with the async instrument and treat that as non-aggregated: as a gauge, as a last value, as a raw point, whatever the semantics of "last" really is, whether it's an aggregation or not.
A
Just to add to that, this has come up: if you were going to use a ValueRecorder... which is like, if you're thinking in terms of classic Prometheus or statsd, this is the gauge instrument where you say "set", and that becomes the last value, and then at some point later you're going to read the last value.
E
Right. So I think, for me, until it comes up from people using it and feeling limited... you know, I don't have any real stories about where recording over an hour period, but only reporting the last value, is useful, compared to using the ValueObserver instrument and doing it asynchronously.
A
Homin, is there another question about not just last value but min and max value? So if somebody has configured a ValueRecorder and they've got, you know, a 10-second interval: do they care to know the exact max and exact min, since those could be inferred approximately from some sort of histogram? And as much as we're trying to come up with a default, it's going to work for a large number of cases.
B
Yeah, I mean, I guess precisely for the reason that we do have a relative error, for the max, we like keeping the max explicit, especially for the cases where the one-percent-high error is higher than the max or lower than the min.
A
So, inasmuch as I earlier proposed that... because potentially a solution here is that we converge on the histogram data point, and we just create new variations for how to specify those bucket boundaries. You know, you've got log-linear, you've got linear, you've got custom, you've got DDSketch, like, with a gamma factor, potentially. But then you also end up kind of wanting to add min, max, and/or last to the histogram, so the histogram is sort of becoming the catch-all summary, which I don't think is objectionable. But again, it can create confusion.
D
Yeah, and I think that ties to earlier conversations around OTLP. Bogdan's position on that was overhead of transport: if you're not going to include min-max-sum-count on a histogram or something like that, then you just have overhead versus a very specific message payload. That was, I think, the argument against it, but I think that's been a very loud voice in that decision, and the idea that we need to be able to have some way to transport these should, I think, also be equally loud.
A
I feel like we lack a specific proposal that's going to move this forward, and that's sort of an unsatisfying place to leave it.
E
I think your proposal, where one can choose the algorithm, makes sense in the abstract. I do wonder if the people who instrument are going to want to deal with that level of detail. Either way, we can deliver code for DDSketch, whether it gets used as an option or as the default. I did have tactical questions about that, if this is a good time. Sure, I think it is, okay.
E
So, last time we talked about this, you needed a well-documented protocol, which is in flight; a conversion for Prometheus, which is in flight. One of the things you mentioned is the implementation of the sketch algorithm in Python, Java, JavaScript, Go, and .NET, and I wanted to make sure that I understood the requirement here a little bit, because in the Datadog agent we have...

We always assume that there's an agent in play, which means that we can send raw points from our client libraries, instead of building the sketch in process in the application and shipping sketches. And so, for us, the implementation is slightly different if we can depend on the OTel collector always being there, because then we don't have to re-implement the algorithm itself in each language.

Right, so, well, we would have client libraries for each of the languages, but they would just ship raw points to the collector. The collector would intake all the points and build a sketch, and then export them wherever you want. So the DDSketch algorithm would only have to exist in the collector, and the client libraries would just ship the raw values to the collector.
D
Present, I think, is the way to answer that.
A
Well, I think there's an opportunity to refine this a little. Like, we have talked about, and this is the other missing piece from OTLP, how do we express raw values? This has come up not just for OTLP and this question of how to do sketches; it's also come up for the statsd receiver that's going into the collector, where you're getting individual histogram data points and there's no way to put a raw point onto OTLP.

At this point, I think, you know, as much as we're trying to find a default for ValueRecorder...

There are reasons why we don't like histograms with fixed boundaries, and there are reasons why we don't like raw values: they're sort of expensive. But I guess what we're trying to get to is a point where each language has some good default, and I think raw values with a collector is a pretty good default, especially if the collector can do sketch. And then no collector, with a sketch, is another good configuration, where in that case we do want the library to run inside the SDK.

So I think we don't want to require you to run a collector, and I think it's okay, as long as we have a good default for the languages. If they don't have sketch, then you're going to sort of say, you need the collector to get this performance back, and that way we can say either you're going to have to support raw values or sketch values.
E
Yeah, okay. So, in my experience, and maybe others have a different experience, your code running where there could be an agent doesn't want to depend on an API, because it often transacts over HTTP and it represents a blocking call. So you want to make a UDP call to a local collector, which pushes things out to the internet, right, where you do have a TCP call.

There's this issue with sending raw data points to an API, because it's extremely high points-per-second, even if it's the same cardinality of time series. So, by that argument, if you expect OTLP to send something to an endpoint, you would still want it in process. I'm maybe talking to myself at this point; I apologize.
D
So, yeah, you're not the first to ask that question, for sure. And you also have to keep in mind that there are certain vendors that are starting to use OTLP itself as well, and so having the ability to send raw data... I know specifically Amazon has been in here. I don't know who's on the call, but they had specifically...
D
...that's always been the way that we anticipated OTLP to actually work: to send some sort of processed, or, you know, minorly pre-processed, data to the collector or to whatever backend. Just adjusting the other... okay.
D
Yeah, yeah. I think that it's going to depend on the user, though, because there are definitely, I think, people that disagree, or want to, you know, perform their own algorithms and want the raw data as raw as they can have it, and then others where, you know, network overhead is going to define the performance of their metrics platform, yeah.
E
It's not just network overhead; whatever your intake is has to churn CPU.
E
Okay, so, for my takeaway here, in terms of what would be accepted as a PR: would you like to see us implement this in process, or implement it in the collector and send raw points in the client libraries?
A
To see an in-process implementation... oh god, my video, my audio's... yeah, I lost you a little bit there. Sorry, okay. I think there's a prerequisite: we don't have a way to put raw data points into OTLP.

I think we should get that sorted out. Once we have that, then there are many available default configurations, but what I'm hearing might be good is, if there's a DDSketch library that you can run in process, good. I think there's a pretty big part of OpenTelemetry's performance story, which came all the way from OpenCensus, about this: we expect aggregation to happen in the process for performance reasons. And so, if it weren't for DDSketch, let's say, then probably you'd fall back on a histogram, and the question here is whether we prefer to encourage you to go to a raw data point when there's no sketch available, or whether we'd prefer you to go to a histogram when there's no sketch available. And I think there are several places in this thread that we're looking at where a very practical concern has been raised, like, if we don't...

If we mandate that DDSketch has to be in every process, it's going to be a long time before we get this. And we also do know, like, PHP keeps coming up as the one where there's just no state anyway; you can't do aggregation when there's no state, therefore we've got to put raw points out. Same with the statsd receiver in the collector: we need to do raw points.

So possibly the outcome here is that we're going to recommend sketch as a good default when it's available, and we're going to recommend raw points as a good default when that's available, but watch out for performance. I think the only question is: is there ever a case when we recommend a plain old histogram with fixed boundaries as a default? Because, at some point, there's no value in having a default if there are 15 different defaults that we recommend.
E
For plain histograms, we really can just offer code in the exporter, so that, when it sees the DDSketch, it will convert and export, if that's what people want. It wouldn't be the default in OTLP, but it would be something people could configure easily for their use.

But I do think it behooves all of us, or at least that's what Josh was saying, and I agree with him, that, you know, Prometheus is a fairly widely adopted standard, and making it easy for users who expect that Prometheus contract makes total sense to me, and we can support it.

...would just collapse the buckets and export a plain histogram from the exporter.
A
So I don't know the answer to this question, but I do wonder how many users out there are going to look at this proposal, if it were written, and say: I don't care about sketch, I don't want raw points, but I was happy with histogram the way it was. But I think, if that's the case, you should configure it, since you're probably going to want to specify your boundaries anyway.

So I think that's the argument. You know, the only other consideration that I want to bring up before we move on is that, early on, there was a proposal to make the default the min-max-sum-count, because it is simple. So there needs to be a case why that's no longer what we think, and I think the case goes like this: the reason why min-max-sum-count felt appealing was that perhaps we were coming from a world where there was no aggregation.

So, in a world where you're just going to send data points, you can buffer min-max-sum-count for a long period of time and just send a fixed number of points.

But the same is true of a histogram. So, once you begin having the ability to aggregate, rather than sending min-max-sum-count, which is four values, you can send a histogram, which is 20 values or 200 values; it's still fixed. And so I think the argument is that, because we're assuming there's aggregation, min-max-sum-count is not necessarily the default we want. We could even remove that from the spec; we could just have...
D
Yeah, from the New Relic perspective, the min-max-sum-count is a pretty useful metric to have; it's something that we support. I'd be a little bit hesitant to remove that from the specification. I'd also be hesitant to remove it as, like, a supported conversion type from...
D
...DDSketch. Like, there's a conversion from the sketch to the min-max-sum-count, so if you did want to reduce it down to a very minimal set of statistics, it's still possible. Yeah, I don't know, just wanted to point that out.
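A hedged sketch of that conversion, assuming gamma-indexed bucket counts as in the DDSketch mapping above: count comes out exact, while sum, min, and max are approximate (within the sketch's relative error) unless tracked separately, which is why keeping an exact min/max alongside the sketch keeps coming up. The types here are illustrative:

```go
// Collapse bucketed sketch counts into a min/max/sum/count summary.
package main

import (
	"fmt"
	"math"
)

type summary struct {
	min, max, sum float64
	count         uint64
}

func toSummary(buckets map[int]uint64, gamma float64) summary {
	s := summary{min: math.Inf(1), max: math.Inf(-1)}
	for i, n := range buckets {
		rep := 2 * math.Pow(gamma, float64(i)) / (gamma + 1) // bucket representative
		s.count += n
		s.sum += rep * float64(n)
		s.min = math.Min(s.min, rep)
		s.max = math.Max(s.max, rep)
	}
	return s
}

func main() {
	gamma := 1.02 / 0.98 // alpha = 0.02
	fmt.Printf("%+v\n", toSummary(map[int]uint64{10: 3, 50: 1}, gamma))
}
```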
E
Yeah, the DDSketch... the reason we were talking about keeping min/max with the DDSketch is that the one percent relative error means that you could potentially have a value that's one percent larger than the max value, right? Which is hard to reason about when you see it on a graph, among other things.

So, like Josh said, it could certainly be optional. To Tyler's point, we keep it in our implementations of DDSketch, and we can implement it that way; I think that's reasonable. The one we do need... I know we're talking about min-max-sum-count, but we do actually need the sum, because the sums of points are unbounded, I mean, you get it.
A
There was some question, and now I know that we're in the weeds here, there's some question about floating-point versus integer counts. As it relates to... I guess, mathematically, there's a question, but then there's also this question because, when you're dealing with statsd data, there's this well-known sample-rate sort of functionality, and it means that individual counts can be non-integer.
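The statsd mechanism being referenced, as a tiny sketch: a counter sent with sample rate r is scaled up by 1/r on receipt, so rates like 0.3 produce fractional count contributions, which is why floating-point counts come up at all:

```go
package main

import "fmt"

// contribution is what one received counter event adds to the stored count
// when the client sampled at the given rate.
func contribution(sampleRate float64) float64 {
	return 1 / sampleRate
}

func main() {
	fmt.Println(contribution(0.1)) // 10: still a whole number
	fmt.Println(contribution(0.3)) // 3.33...: not representable as an integer count
}
```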
A
Do you think that's worth debating? There's actually a precedent in OpenTelemetry not to support floating-point counts, and I've found issues about how we might have a second count, like a second floating point as a multiplier. But I think that nobody is interested in this topic.
A
Okay, we have talked through this for a lengthy period. I think the takeaway I got from this is that there's a sort of idea emerging of a single data point, which is histogram plus summary, effectively. It includes min-max-sum-count; it includes variable buckets that you could specify either through explicit customization, or there could be a different way to specify it: log-linear, linear, custom, and then DDSketch could be one.

It almost sounds to me like we could drop the whole notion of a Prometheus summary and actually turn Prometheus summaries into this type that we're talking about. If you know the count, you can just fake some boundaries exactly where the quantiles were and fake some counts to make it all look about right. So it sounds like we could drop summary and synthesize a histogram to equal a Prometheus summary. If people are nodding, that's...
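A rough sketch of the "fake some boundaries" idea: given a summary's quantile values and total count, fabricate explicit bucket bounds at the quantile values, with counts sized so the cumulative distribution matches. Purely illustrative; no such conversion is specified anywhere yet:

```go
package main

import (
	"fmt"
	"math"
)

type quantile struct {
	q, value float64 // e.g. {0.5, 0.120} for a 120ms median
}

// synthesize returns bucket upper bounds and per-bucket counts such that
// the cumulative count at each bound matches q * totalCount.
func synthesize(quantiles []quantile, totalCount uint64) (bounds []float64, counts []uint64) {
	prevQ := 0.0
	for _, qt := range quantiles {
		bounds = append(bounds, qt.value)
		counts = append(counts, uint64(math.Round((qt.q-prevQ)*float64(totalCount))))
		prevQ = qt.q
	}
	return bounds, counts
}

func main() {
	b, c := synthesize([]quantile{{0.5, 0.12}, {0.9, 0.31}, {0.99, 1.4}}, 1000)
	fmt.Println(b, c) // [0.12 0.31 1.4] [500 400 90]
}
```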
F
Great. Well, sorry to interrupt: we are currently debating within OpenMetrics whether we make summaries mandatory for this exact use case or not, so I can take this... okay, I can take your preference back to OpenMetrics. Just as a warning: please don't finalize this just...
F
Awesome. Lower than some other things, I fully agree. I will also take the question of onboarding, maybe you, Josh, and one or two others, as a preview thing, back to the call...

...the next one, next Tuesday. And so let's maybe talk on Wednesday next week. Okay, yeah, just hit me by email or something, and, you know, I'll send you a calendar...
E
Okay, great. And in the meantime, we will just, in the interest of time and letting you get to the rest of the agenda, we will get you the commitments that I've made, you know, on implementing at least the DDSketch side, whether or not we end...
A
With the time remaining, there were two more items on the agenda. Tyler, would you summarize the current status of the units question?
D
Yeah, I can jump into that. So we kind of talked about this last week. I don't know... well, I definitely know that I haven't been able to work on it; I've had reduced bandwidth this week, living in the Pacific Northwest, and so, just kind of to recap the idea... yeah, and I guess Josh is more central, but it's still pretty bad down there. I'm blessed.

I guess the idea is that the units, right now... it's a commonality issue and a commonality question across the OpenTelemetry implementations.
D
OTLP is the transport for all current metrics right now, and the way that it expects the units associated with any metric it's transporting is to be in a format that it can interpret, and it currently uses a thing called the UCUM standard, as a design principle, or as a way to encode those units. This is a standard that comprehensively has a lot of units, as well as prefixes, encoded in a machine-preferenced format using ASCII, and the question is: how do you provide compatibility across OpenTelemetry?
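For a feel of what UCUM's case-sensitive ASCII codes look like next to the human spellings they replace; the mapping direction (human label to UCUM code) is only an illustration of the SDK-interface question raised below, not an API anywhere:

```go
package main

import "fmt"

// toUCUM shows a few case-sensitive UCUM codes of the kind an OTLP unit
// field would carry.
var toUCUM = map[string]string{
	"seconds":       "s",
	"milliseconds":  "ms",
	"bytes":         "By",
	"kibibytes":     "KiBy",
	"percent":       "%",
	"dimensionless": "1",
}

func main() {
	fmt.Println(toUCUM["bytes"]) // "By"
}
```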
D
Last week, there were a lot of questions as to the comprehensiveness of this issue in particular, which we can kind of talk about in the next eight minutes, so maybe we'll only scratch the surface again. As well, I think there was actually a very big, yet lingering, issue that was brought up: the compatibility of similar units within the OpenTelemetry implementation itself.

Josh has got this great response here, which I haven't been able to kind of respond to, but I think you make some important points about just the implementation side of things. One of the things that I was hoping for this issue to resolve is having a standard be codified in the specification as to what the base encoding is going to need to be.

I think the important takeaway from that is, you know, I proposed in this issue that we standardize, and we codify, the fact that UCUM will be the standard way that we transport metrics, or communicate metric units, but it doesn't necessarily mean that it's the interface that is presented to the user. And because of that, if there's going to be a gap between what the user sends you and what you're going to be transporting, there needs to be some sort of... I don't know, compatibility, or requirement of your implementation, to actually make sure that that is isomorphic, I guess is the way to say it, and to make sure that it is encoded correctly. And on top of that, I think there needs to be a well-defined user interface.

I feel bad, because I didn't get too deep into Josh's response here, but he does make some good points. What I've gathered from it is just the idea that you don't really want to be, you know, accepting strings in the UCUM format, because it's not a really natural way for humans to speak units, and I think it's even less a way that coders, or developers, like to speak units. And so making sure that you don't restrict that is, I think, an important design goal. My response... I didn't get enough time on this, but it's also just like, I want to...

I want to point out that this issue seems to be, comprehensively, a lot of issues in a single issue right here. But the one thing that was needed, that Tigran was asking for in the resolution, and why I was bringing it up last week, is that we need to make sure that the standard that the proto is actually implementing with UCUM is going to be a good enough base representation of units, because changing that after the GA date is not possible.
D
So
if
they
need
to
change
the
encoding
of
the
units,
it
needs
to
be
done
prior
to
the
ga
day
and
then
included
in
this
issue
is
all
those
other
things
I
was
talking
about
is
presenting
this
in
a
way
that
gives
guidelines
to
sdks,
as
john
was
asking
for
last
week.
It
also
resolves
the
issue
that,
if
you
send
something,
if
you
have
two
different
instruments,
this
is
something
that
was
identified
last
week.
D
As
the
major
issue
that
I
was
kind
of
realizing
and
they're
reporting,
similar
metric
events,
say
cpu
cycles
or
latency
is
latency
numbers
but
they're
using
two
different
multiplicatives
of
a
unit
so
say
microseconds
versus
nanoseconds,
or
you
get
the
idea
that
the
prefix
is
actually
different.
They
need
to,
I
think,
resolve
eventually
down
to
the
same
metric
event
or
be
compatible
in
the
same
metric
events.
Currently
right
now,
given
the
distinctiveness
of
a
metric
is
encoded
based
on
the
unit
and
in
some
languages,
that
unit
is
just
a
string.
D
The
encoding
is
going
to
be
different
and
they're
going
to
be
considered
two
different
instruments
or
two
different
metric
events
that
are
not
relatable,
I
think
is,
is
a
problem.
So
I
think
that
we've
uncovered
another
issue
that
needs
to
kind
of
get
resolved
in
this
resolution,
but
for
right
now
I
think
the
issue
that
needs
to
get
resolved
before
ga
specifically
needs
to
be
this
fact
that
the
otlp
is
transporting
with
uc
and
we
need
consensus
that
that's
a
viable
thing
going
forward.
G
I'm still not... it's still not clear to me what it means to say that OTLP is transporting UCUM, since it's just some ASCII. Like, what is the... how do we encode the semantics, or do we, in OTLP, to even say that that's a thing, when there's no way to, unless we introduce some crazy enum that would have to capture all the possible values? Like, what is it? What does it actually mean to say that OTLP is accepting UCUM?
G
So, if I... I don't know why, but if I'm inventing a new HTTP library, God help me, and I decide to instrument it however I want, not following the OpenTelemetry semantic conventions, all of the APIs and SDKs will still support capturing that data and delivering it wherever it needs to be delivered. So this is where I feel like there's a difference between the semantic conventions and saying that OTLP will only accept UCUM, because I don't understand what the behavior is.
D
I'm not entirely following, I don't think, which is, I think, my shortcoming, but I think what I'm wondering about is... so the idea is that these normative statements in the specification, which we've laid out in the semantic conventions, are ways to provide not only functional abilities, things that are recommendations, but also compatibility: things that are normative requirements.
D
I
think
that
that
is
kind
of
similar
here,
where
the
otlp
is
is,
like
I
mean
yeah,
you
can
put
whatever
you
want
in
the
units
field.
If
you,
you
know,
you
don't
feel
like
ucum
is
your
cup
of
tea
and
you
want
to
create
your
own
units
and
sending
them
there,
but
then,
because
of
that
in
the
fact
that,
like
there
is
a
compatibility
requirement
that
if
you
want
to
be
compatible
with
the
downstream
systems,
you
need
to
send
us
valid
data.
G
You're saying... well, I guess what I'm saying is, I don't think that's what the semantic conventions in general say. I think the semantic conventions are a strong recommendation, but in no way normative; like, there's no MUST in the semantic conventions. And so my question is: are we going to add a MUST to OTLP, or is this just a MAY or a SHOULD?
D
So
you
do
not
understand
it
in
the
sense
that
why
it
couldn't
be
compatible
if
they
don't
send
it
or
like
you
know,
I'm
not
following
why
you
don't
understand
well,.
G
If I send, quote, "feet", quote, which I don't think is in UCUM, at least I hope it isn't, but maybe it is, I don't know, or "feete" with an E on the end, because I like Justin Foote's name pluralized: what's going to happen? Like, what is the behavior that I should expect? Because if we don't define that behavior, or we don't say we're going to throw this data away, or... I just feel like, then, what is the point of specifying it?
D
Well, I mean, in the same way that, I don't know, as you propagate things down the OpenTelemetry pipeline, having semantic conventions provides insights and provides usability of the data, specifically on the vendor side of things. I think it's kind of the same story here: as you propagate metrics down the pipeline, eventually they get to the collector, and the collector can handle them if they conform to these ideas of what our standards are, and so...
G
You're making a stronger statement here than the semantic conventions, and I'm wondering whether... yeah, I'm wondering whether we should be making a stronger statement here, and, if we are making a stronger statement, what the actual meaning of that stronger statement is to an end user who's writing custom instrumentation and isn't looking at our semantic conventions at all.
A
...that data, and say: I couldn't validate this unit, therefore it's a unique unit for me, there's nothing else like it. It's not a problem; it's still got numbers. And I think the only... I mean, the benefit of having well-known units is that you can extract the magnitude and convert between magnitudes.
D
No, I want the other side of things. So, yeah, I'm fine saying "unspecified behavior", essentially. Or, if you want to go with the stronger case of saying that, when you send bad data down the pipeline, the collector may not throw anything away, but it may not be able to consolidate it, or may not be able to recognize it as a metric compatible with other metrics: that's fine by me as well.
D
But
what
I'm
trying
to
say
is
is
that
the
job
implementation,
the
go
implementation,
the
ruby,
the
python,
the
erlang,
the
php,
that
you
know
all
these.
All
these
other
implementations,
if
they're
sending
down
metrics
and
they
all
are
similar
metrics,
where
the
same
metric
instrument
for
a
particular
system
and
yet
they're
recording
with
different
units.
D
Okay, yeah, so we're going to get to that, but the idea is, as that transport is going through, it needs to be in whatever format we need, right? And that could be our own; we could define what that format is. Like, we could say that you can only transport time in the format of nanoseconds, you can only transport it... or whatever. But the idea is, you use the UCUM, at least the subset of it that we actually care about.
D
And, yeah, and so then, once you make that agreement, you can build off of that. So, if you know that you have to transport in some sort of format later, then you can say: okay, from the SDK level, you need to provide an interface that is usable from the user's perspective, and that can eventually build units that are of this particular format, in a usable way.
D
Yeah
I
mean
so
that
was
kind
of
like
the
one
of
the
open
questions.
Here
was
the
extensibility
of
that
unit
system
and
I
think
it's
an
open
question
like
do
you
want
people
to
be
sending
their
own
units
you
know.
Do
you
want
a
generic?
I
don't
know
what
the
units
are.
This
is
an
unrecognized
unit.
I
think,
if
that's
an
open
question,
I
don't
think
it's
a
question
that
I'm
trying
to
resolve
prior
to
ga
and
I
think
that
the
extensibility
of
it
should
should
be
open.
D
I
I
I
would
fall
into
that
camp,
but
I
think
it's
an
open
question.
I
you
know
I
was
asking-
and
I
opened
this
issue
specifically
to
get
feedback
on
that.
If
there
are
people
who
think
differently,
I
would
like
to
to
understand
that
better.
D
Yeah, I mean, that's a valid point. So, yeah, I think that's totally a valid point, and it probably should get captured, I think, in the issue, if you could make a comment, because, yeah, like, I didn't think about it that way. I just thought about the underlying standard, because it's similar to the standard, like you're saying: it's going to be breaking to the user interface, yeah.
G
I'm of the opinion that I don't think this is a real issue, and we're being overly academic on this, but I'm happy to be vetoed, or overridden, if people disagree with me. I think, given that, as long as there are standard semantic conventions across OpenTelemetry, standard instrumentation is going to use the same units, and it just doesn't matter. So I actually don't think it's a big... I don't think it's a big deal that we do that, personally.
A
I'd just like to make it so that, if one server is reporting microseconds, and then I upgrade to a new runtime and I start recording nanoseconds, I can make my graphs do the right thing. Like, that's basically it: I'm worried about timing units, and everything else doesn't matter. I mean, maybe there's decimal versus binary prefixes on byte counts; that's the second one. But, for the most part, I think it's a non-issue. I agree with John.
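The one conversion the speaker cares about, as a sketch: recognize time-unit codes and scale readings to a common base, so a switch from microseconds to nanoseconds doesn't break a graph. The unit table is a hypothetical fragment, not a published registry:

```go
package main

import "fmt"

// secondsPer maps UCUM time-unit codes to their size in seconds.
var secondsPer = map[string]float64{
	"ns": 1e-9,
	"us": 1e-6,
	"ms": 1e-3,
	"s":  1,
}

// toSeconds normalizes a reading; ok is false for unrecognized units, which
// a consumer could pass through untouched rather than discard.
func toSeconds(value float64, unit string) (float64, bool) {
	scale, ok := secondsPer[unit]
	return value * scale, ok
}

func main() {
	before, _ := toSeconds(1500, "us") // reading before the runtime upgrade
	after, _ := toSeconds(1.5e6, "ns") // the same duration, now in nanoseconds
	fmt.Println(before, after)         // both 0.0015 seconds: the graph stays continuous
}
```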
A
This
is
just
not
a
big
deal,
so
maybe
we
can
find
a
way
to
specify
things
so
that
when
users
provide
made
up
units
that
it
just
sort
of
works
and
they're
just
made
up
units,
it
just
requires
some
fancy
wording,
perhaps
sorry
to
downplay
it
though
it's
real
issue,
it's
just.
It
doesn't
feel
too
important
to
me.
D
Yeah, I want to make that clear: I don't think it's a top-priority issue, and it hasn't been on my plate. But, for the people that do care about this, I don't want to, you know, diminish their opinions on this, because it is a compatibility issue, and it does actually impact the usability of OpenTelemetry.

Well, at New Relic, we have a requirement that the units that we send in are of a particular unit value. That's not...
D
I
know
prometheus
also.
There
are
recommendations
on
particular
unit
values
and
how
you
encode
that
in
a
naming
schema,
those
are,
I
think,
the
only
ones
that
I'm
aware,
I'm
not
too
sure
about
statsd.
I
think
it's
similar,
I'm
also
looking
at
richard
correct
me
on
that.
One.
F
...which is, obviously, whatever... like, they have the kilogram as the basis, for historic reasons, I guess; there's something in UCUM.

Obviously, there is no OpenMetrics police, but, as per the standard, it's base units, and that's a MUST, not a SHOULD.
A
Okay, well, that points out that this is more serious than I'm making it, so I'm glad that I'm not the one in charge of this issue. Tyler, it sounds like you care a lot, and I know you can word this in a way that will work out.
A
Yeah, we've lost half the people on the call, so I think we just have to end it. There was one more item that didn't get done this week at all: I'm going to have a one-on-one with John, hopefully, to talk about the SDK specs stuff, and I don't know if you'd prefer this week or next, but we can talk offline.