From YouTube: 2020-08-27 meeting
C
Where you are — I'm actually camped out in an RV trailer that someone has let me use in Mendocino County.

C
A little cold, but it's been okay. There are actually windows above me; it's kind of weird, but very nice. Well, I hope.
C
That's true — Graham was going to follow up. I'm sorry to confuse Justin and Graham there; they've both been working on this issue for quite a while, so it felt like we had made some progress. Let's try and remember that.

E
Yeah, I think the idea here was that we had some consolidated understanding that we wanted to not go into some sort of circular decision: to remove the specified labels and just say to use the span ones, because that linked the spans with the labels. Sorry.
E
I'm going to do this an injustice, but I'll give it my best shot. Because of that, we wanted to try to abstract a lot of those concepts into a standardized attribute dictionary, which was a proposal; Justin then opened another PR related to this and how to address it.

E
But then I think the idea was that we didn't want any of that blocking this from moving forward, and we wanted to have some sort of resolution. Looking at Graham's last few comments on the remaining outstanding issues, it looks like Tiger kind of got what he was looking for. I think there's some follow-on work he was looking for that it sounded like Graham was okay with, but Bogdan—

E
There was a remaining comment that he was looking for you to respond to, which — I imagine you're totally not busy with tons of other things, but...
F
I think we talked last week, or two weeks ago, I don't remember — but there was the idea of how do we make sure, or what is our strategy, in terms of the default labels that will be used? Because we had this conversation: if we put all these attributes there, most likely all the instrumentation will use all of them. How can we make sure that we don't have, by default, very high cardinality?
E
Yeah, okay, I'm fine with that. I think, yes — if you think this is a serious concern, we probably need to get into the weeds, in a follow-up, on just how we're going to deal with that cardinality. I think that's a good point; there's a lot to that discussion, and we've had it a few times in this meeting, so I don't want to dive into it again. Maybe we can just do it that way: if that's the case and you're okay with that, can you maybe make a comment on this PR — just give me approval, saying, like, "cool, let's make an issue" — or have somebody else make an issue, and we'll follow up with that.
C
The comments I've been getting relate to sampling. We've had this idea that potentially you can generate metrics from span events, which sounds nice. But the comments keep coming in saying that this is going to cost us too much, because, in order to compute the metrics that we've talked about — which include a response timing as part of the metric — you have to actually attach a real span and record, and that means that you're no longer sampling the way you might expect for tracing.

C
And this basically says that we should potentially drop the idea of relying too much on automatic generation of metric events from spans, and instead focus on proposals like 739, which give us a much clearer roadmap for establishing metrics out of middleware. I'm thinking: instead of having the same instrumentation generate your span and your metric, you're just going to have metric instrumentation at a few places where you also have span instrumentation. At least then we can avoid the cost of constructing spans for every request.
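A minimal sketch in Go of the middleware approach described here: the handler records a duration metric directly, so the measurement exists whether or not a span is constructed or sampled for the request. The `durationRecorder` interface and `withMetrics` helper are illustrative stand-ins, not OpenTelemetry API.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// durationRecorder stands in for a metric instrument (hypothetical).
type durationRecorder interface {
	Record(seconds float64, labels ...string)
}

// logRecorder is a toy implementation that prints each measurement.
type logRecorder struct{}

func (logRecorder) Record(seconds float64, labels ...string) {
	fmt.Printf("http.server.duration=%.6fs labels=%v\n", seconds, labels)
}

// withMetrics records request duration in the middleware itself,
// independent of whether tracing samples this request.
func withMetrics(rec durationRecorder, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r) // span instrumentation, if any, lives elsewhere
		rec.Record(time.Since(start).Seconds(), "method", r.Method)
	})
}

func main() {
	ok := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	http.Handle("/", withMetrics(logRecorder{}, ok))
	_ = http.ListenAndServe(":8080", nil)
}
```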
E
Yeah, I think you're right. I think that's also kind of the right way to frame it: to think about 739 as, "if there is going to be a metric that's going to be generated, this is the form" — this is what that metric will actually take the shape of. But the whole coupling between the span and the actual metric content itself — that, I think, is a follow-up, or a totally — yeah, a separate pipeline discussion as well.

E
Yeah, the span-to-metric-event generation stuff, I think, has been pretty explicitly called out in this PR as "this is not this PR," so I think that's well documented at this point. How that actually gets picked up — yeah, that's probably also going to be related to the question that you just had, for the follow-up issue, of what attributes you want to generate, because it may be a subset of even the subset at that point. But yeah.
G
But do we need a formal proposal — something like an OTEP — saying that we would like to generate metrics and spans, and we can hash out — have that argument there?
F
There was a discussion in the sampling group about this. I showed them some ideas about, instead of having IsRecording be just a boolean, having an enum which says "I'm only recording for metrics" or "I'm recording all the things" anyway. There are ideas, but I think, yes: an OTEP, or some people getting together and writing one.
E
So I think that probably the best way to start is to have an issue, right — maybe even just asking the question in the issue, because an OTEP is a proposal for a solution. So unless somebody wants to immediately solve the problem — like, they already have an idea for the solution — and open an OTEP, I think we should probably just start with an issue: to some extent, a question.

E
Is that something that you could do, Justin, or is that something for another somebody?
G
Yeah, I certainly don't mind doing it. John Watson did a kind of proof of concept of something that made metrics and spans at the same time, but I don't remember what actually came of that, or whether it turned into an issue or a proposal. All I did was just write some code in our fork — nothing done formally.
C
Great. So, to me, I think the most important thing we should be talking about is the OTLP changes and getting that release out. Do you want to give us an update?
F
If you do that — but it has the downside that you need to do some extra computation to decode the integer — so, to calculate the integer. Essentially, there is an algorithm which is reasonably fast, but in my benchmarks I still got better performance overall by just using the fixed size, which is essentially encoding the integers as eight bytes, always, without having a variable length.

F
So, anyway — also, my experiment did not involve any network cost, which arguably may increase a bit; but based on my experiment, we were like 500 nanoseconds better for the experiment that I had, and only a thousand bytes more, which is around one MTU.
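A sketch of the kind of micro-benchmark described above, comparing varint against fixed eight-byte encoding with Go's `protowire` package. The 500-nanosecond and one-MTU figures are from the speaker's experiment, not from this code; the workload here is an assumption.

```go
package encoding

import (
	"testing"

	"google.golang.org/protobuf/encoding/protowire"
)

// Hypothetical workload: large values such as nanosecond timestamps,
// where varint saves little space but still costs CPU per integer.
var values = func() []uint64 {
	v := make([]uint64, 1024)
	for i := range v {
		v[i] = 1598486400000000000 + uint64(i)
	}
	return v
}()

// BenchmarkVarint measures variable-length encode and decode.
func BenchmarkVarint(b *testing.B) {
	buf := make([]byte, 0, 10*len(values))
	for i := 0; i < b.N; i++ {
		buf = buf[:0]
		for _, v := range values {
			buf = protowire.AppendVarint(buf, v)
		}
		rest := buf
		for len(rest) > 0 {
			_, n := protowire.ConsumeVarint(rest)
			rest = rest[n:]
		}
	}
}

// BenchmarkFixed64 always writes eight bytes: more wire size, less CPU.
func BenchmarkFixed64(b *testing.B) {
	buf := make([]byte, 0, 8*len(values))
	for i := 0; i < b.N; i++ {
		buf = buf[:0]
		for _, v := range values {
			buf = protowire.AppendFixed64(buf, v)
		}
		rest := buf
		for len(rest) > 0 {
			_, n := protowire.ConsumeFixed64(rest)
			rest = rest[n:]
		}
	}
}
```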
C
A little more motivation for us — but I'm so motivated to see us snapshot 0.5 that I would sort of take anything at this point.
F
Perfect — now I can hear you. And I just commented on that OpenTelemetry one.
E
An opinion? No, I don't — I'm kind of in Josh's camp at this point.

E
It seems — I don't know; I'm reading the response as well. I'm realizing I'm not quite following all of the trade-offs here, but it seems like quite a small optimization, based on what I was just hearing. And that's probably — I don't know — that's probably not fair, but I don't know.
E
I'm going to be honest with you: I'm going to delegate this to somebody who's more performance-oriented.

E
I don't know — the request for the idle latency, or the I/O requirement there, seems like it may be something to follow up on, but otherwise I would default to that discussion. I mean, the type of the actual number doesn't matter too much to me, as long as I can transform it into some sort of integer.
F
That is not something that I'm using. So I already started to move the collector to the new things for this, because, whether I accept this or not, the things that will change will not be user-visible. So I don't have to do anything.
C
So I think we should be able to release this protocol change, like, this week — tomorrow. I suppose the changelog is written; it's just a GitHub release. It doesn't have to immediately land in the collector — nothing's going to break. Would you agree?
D
So — would a new release still have the current version of OTLP? No.

D
So, Bogdan, one of the guidances you gave was that we should separate out the implementations, right — for histogram and—
F
In the free discussion, I was trying to explain to them that histograms will go in right away; summaries may be delayed.

F
Correct, so we cannot merge it right now — right, right. But we should leave the PR there, and that's why I asked to split the PR into histogram and summary: because histogram can be merged right away. So you will still have the gauges, the counters, and the histograms; it will not yet have support for the other.
D
Okay — so, for the summary then, Bogdan, when do you expect the changes? Later this month, or — yes, I'm just trying to understand the timing.
F
So, yeah, to answer that, Josh: yes, I did my best to review. Let me look again after this meeting, and I'll get you my result.
C
Thank you, yeah. I think one of them definitely needs to be split, if it hasn't been already — and that made sense: the histogram and the summary one. I think the other one, with scalars, is basically ready, and there is a note about the Prometheus remote write protocol not honoring monotonic — and that is, I believe, the true state of the world as far as Prometheus remote write goes. So I think it's all okay.
F
Okay — maybe, maybe. I don't know; I was just asking different questions there. I will read the stuff.
C
So those two PRs were linked below, and I believe Alolita has had a chance to speak about that sort of urgency. The next topic that I wanted to get into was, first, a PR of mine that we've discussed, where John has raised a question about what the SDK specification is going to say as far as requirements versus sort of recommending that we have common names for the various components, and sort of similar architectural diagrams.
C
I'm happy to speak — I mean, I think you basically said it right. Actually, I think it's slightly more nuanced than what you said: you said different names for the components, and I'm actually asking whether we need to have the same components at all, or whether we can specify the requirements in terms of inputs, outputs, and configurability, and not in terms of the internal components.

C
Yeah, I don't really want to see anyone have to rewrite anything. So I guess I'd imagine that it would be more of a sort of rewriting of — sorry, not rewriting: renaming of — things that already sort of must exist. But I kind of see your point. Essentially, it sounds like, in the current Java code...
C
...basically, every instrument is its own entity and there's no real crossover. But don't you imagine that when you're outputting, you know, say, an OTLP stream, there's a moment where you go and gather all the instruments and get their current state? And I think it's useful for us to call that step — the step that does that manipulation — the processor, because its output is a set of, you know, metric aggregations.

C
So I can imagine that that concept exists in the code, and that if you were just to call it "processor," then that would be — well... I think the concept exists, but it's not a thing. It's, like, a function or a method that's called on something — on, I think, the meter, the batch... I don't remember exactly the details anymore, but it's more of a function on a different thing, not, like, an entity that has that function. Does that make sense — do you know what I'm saying? Yeah, I guess so.
C
I think so — because also, in the Go code that I'm sort of basing this model text off of, there was originally a piece of code that was called "SDK." Since it has all the entry points from the API, it's sort of easy to think of it as the SDK. But, you know, as we're talking about taking various different approaches and configuring various different export pipelines...

C
...it becomes useful, I think, to think of the SDK as these components in a pipeline, and then I think having common names helps, if only so that we can describe OpenTelemetry in similar terms. So I almost think — it sounds like this thing exists; it's just, you know, not an object or a class or something, it's more of a function — which seems okay to me.
I
Yeah — and I think, since everything is nouny in what you've written, rather than verby, it gets tricky to figure out how to map the nouns into the verbs, or vice versa.

I
Okay — like, for example, if we were to create a little checklist, like we have for tracing, about all of the features of the SDK, I would have to put minuses in front of everything that you've written down so far, except the actual instruments themselves. — Got it; well, I don't want that.
C
And I do suspect that there are people on this call who were nodding their heads when you said that, because we will have a checklist, and people other than me want to see this finished. So we can look forward to that, probably after tracing and context and such get finished.

C
So the analogy that we had from tracing was that there's this concept of a span processor that is given as a sort of component-level description, and I think there, at least, people have imagined that there's a sort of catalog of span processors that you can get that will do various things — and that the goal of OpenTelemetry is not only to separate the API from the SDK, but also to make a kind of toolbox of SDK components that you can piece together. And that's the span processor.
C
So, as for whether we specify "you must have these configurations" versus "you must have an interface that can be plugged in to support these types of configuration in practice," or something like that — I guess that's the debate. I don't feel so strongly that I'm willing to, you know, hold this back on it.
I
Well, along those lines — I think all of the components that are called out in tracing, if I'm remembering correctly — like the span processor and the exporters and the samplers and things like that — those are all extension points where there are many implementations, and that's what opens stuff up. Would you envision that's true about the accumulator?
C
I do think of that as the SDK — but, you know, I can't imagine it being something that's configurable.

C
As for whether it would be different code bases, I'm not sure. We've talked a little bit about sampling metric events, and there's a sense in which you need to sample them before they get put into an aggregator. So you could imagine wrapping an accumulator with a sort of pass-through accumulator...

C
...that, like, does sampling for you. I'm not sure that's what actually makes sense, but it's an idea. And I think you can imagine having — I guess I'm thinking of a multiplexing implementation of not just the accumulator or processor or exporter, but all of those; they just come with different costs. So, sure, there's probably some reason to have multiple accumulators receiving events from an instrument, and you can imagine a multi-SDK, essentially, that does that.

C
But you can also imagine multiple processors being put alongside each other in parallel, or multiple exporters put alongside each other in parallel, and I think it comes down to a question of what you actually want — because there are useful configurations of all of those. It's probably just going to cost a lot more if you have to duplicate your processor for two exporters when, in fact, one processor and two exporters would make more sense.
C
So I do think of the accumulator as the SDK, and maybe that would be an acceptable rewrite for you. But there is still this important processor component, and there's also, I think, an important idea of this controller, of which there are two: it's not necessarily a pluggable interface as much as it is a standard piece of code that you offer to coordinate pulling and pushing metric data.
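A minimal sketch in Go of the two controller shapes described here, assuming hypothetical interfaces rather than the actual OpenTelemetry SDK types: a push controller drives collection on a timer, while a pull controller would call the same Collect from a scrape handler instead.

```go
package sketch

import (
	"context"
	"time"
)

// Accumulator owns the live aggregators bound to instruments.
// (Illustrative interface, not the real SDK.)
type Accumulator interface {
	// Collect snapshots current aggregator state into the processor.
	Collect(ctx context.Context)
}

// PushController coordinates a periodic collect-then-export cycle.
type PushController struct {
	acc    Accumulator
	export func(ctx context.Context) error
	period time.Duration
	stop   chan struct{}
}

func NewPushController(acc Accumulator, export func(context.Context) error, period time.Duration) *PushController {
	return &PushController{acc: acc, export: export, period: period, stop: make(chan struct{})}
}

// Start runs the push loop. A pull controller would instead invoke
// acc.Collect from an HTTP scrape endpoint handler, on demand.
func (c *PushController) Start() {
	go func() {
		ticker := time.NewTicker(c.period)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				ctx := context.Background()
				c.acc.Collect(ctx)
				_ = c.export(ctx)
			case <-c.stop:
				return
			}
		}
	}()
}

// Stop ends the push loop.
func (c *PushController) Stop() { close(c.stop) }
```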
I
To go off on a slight tangent: are there other languages that have implemented metrics, and what did theirs look like? I would love to hear from those people about what their metric implementations look like, and whether it matches your description here of what Go has done or not. I mean, I haven't described what the Java stuff does, but are there other implementations in other languages that do or do not line up with what you have written? I haven't heard from anyone aside from you and I, really.
D
I mean, I can definitely add: we added the C++ implementation for metrics, both the API and SDK, and yes, it does emulate what Go has implemented — with the aggregator and all the others, the controller, processor — you know, the whole design as it stands in the spec.
K
For folks on the call — I was going to say, for Python it's kind of like the Java situation: it's not really following the exact same names.
C
To that — yeah, it's this diagram, roughly speaking. What I was planning to do in my SDK document was basically to describe the interfaces between these components and explain how you can achieve a change of behavior by changing one of these components.
C
Right — I've made other versions of this diagram that became sort of difficult to follow, which did actually have aggregators and controllers. The controller is sort of sitting underneath all three of those, and then the accumulator ends up owning the aggregators, and it copies them when the processor collects. So, basically, the processor has to make its own copies if it wants to keep the state of the aggregator, and then, by the time the exporter sees it, it becomes the accumulation, and so on.
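A sketch of the data flow just described, under the same component names; all of these types are illustrative stand-ins, not the real SDK. The accumulator snapshots its live aggregators at collection time, the processor keeps its own merged copies if it wants cumulative state, and what the exporter finally reads is the finished accumulation.

```go
package sketch

// Aggregator accumulates measurements for one instrument.
type Aggregator interface {
	// Snapshot returns a copy of the current state and resets the
	// accumulator-side delta, modeling the "copy on collect" step.
	Snapshot() Aggregator
	// Merge folds another snapshot into this one.
	Merge(other Aggregator)
}

// Record pairs an instrument identity with its live aggregator.
type Record struct {
	InstrumentName string
	Agg            Aggregator
}

// Processor keeps its own copies, keyed by instrument, so it can
// maintain cumulative state across collection intervals.
type Processor struct {
	state map[string]Aggregator
}

func NewProcessor() *Processor {
	return &Processor{state: map[string]Aggregator{}}
}

// Process receives one record per instrument per collection.
func (p *Processor) Process(rec Record) {
	snap := rec.Agg.Snapshot()
	if prev, ok := p.state[rec.InstrumentName]; ok {
		prev.Merge(snap) // fold the new delta into cumulative state
	} else {
		p.state[rec.InstrumentName] = snap
	}
}

// Accumulation is what the exporter sees at the end of the interval.
func (p *Processor) Accumulation() map[string]Aggregator {
	return p.state
}
```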
H
They're individual batchers, right? No — so the accumulator has its own, in Java.
I
...issues right now. I understand the reasoning; I just think it actually is a very different description from what Josh has written down, and that's what I'm worried about. I think that it does all the same functions, and that's why I'm pushing for a specification for the SDK that describes the functions that are being performed — not the components, and not, like, how many accumulators there are, or what the cardinality of accumulators might be in the system.
D
Yeah, that would be very helpful, actually — to actually functionally define that — because that was something we struggled with as we implemented it: you know, we took the spec, and then we had to look at the implementations across languages also.
F
John — probably what we need to do a bit is: in our batcher, we do have the capability of computing delta and cumulative. I think Josh wants us to split that functionality, which I'm happy with: the batcher will always produce delta, and then we just have a different layer, called processor, that is able to produce cumulative from delta.
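A sketch of that split, assuming a precomputed series key: the batcher always emits deltas, and this separate processor layer folds the deltas into cumulative totals.

```go
package sketch

import "sync"

// DeltaToCumulative turns per-series delta sums into running totals.
// Illustrative only; a real processor handles full aggregator state.
type DeltaToCumulative struct {
	mu     sync.Mutex
	totals map[string]float64 // keyed by series identity (name+labels)
}

func NewDeltaToCumulative() *DeltaToCumulative {
	return &DeltaToCumulative{totals: map[string]float64{}}
}

// Process consumes one delta and returns the cumulative value so far.
func (p *DeltaToCumulative) Process(seriesKey string, delta float64) float64 {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.totals[seriesKey] += delta
	return p.totals[seriesKey]
}
```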
C
What I wanted to say is that I see the cardinality question — about how many accumulators there are — as certainly relevant to this document. And there's been, you know, a PR from the Google Summer of Code student with configurable collection intervals; if I were to implement that, I would probably run a separate copy of the accumulator that I've got today for every different interval, or something like that. Actually, I'm not sure of that, but I think that would be the natural approach — and so maybe having one accumulator per instrument is totally compatible with this diagram.

C
Issue number 818 is about getting the stuff out of OTEP 119 into the specification, and there were some topics that have been discussed — essentially, what the correct instrument type is for these synthetic metrics.

C
There was a link in the document — there are two, but yeah.
C
So — there was a discussion about ValueRecorder being the appropriate instrument for those utilization metrics, even though the OTEP says something else; and then Aaron had run into some questions about whether that same argument wasn't appropriate to use in other cases. And that's where this debate has ended at the moment.

C
To use this whole new thing that we've built, we have to start having stable names, and I'm pretty comfortable with the names in OTEP 119 after working through it — I think I like it. There are still differences between what OTEP 119 says and what the current collector does for its host metrics receiver. I want all that stuff to be fixed, and so we've got to finish this.
C
It looks like we replaced system.cpu.time, and it's not using .usage anymore there. I thought they all used usage, so maybe I've been mistaken.
A
I think they found they could not go by the library for this, and produced the output directly.
C
Yeah — actually, in the current host metrics plugin for the Go contrib repository, I reverse-engineered all the metrics that were being computed in the same host metrics code and used the same names, so that at least they're consistent with each other. But I think the spec that's about to happen here, from 119, will change those slightly.
K
Josh, I wanted to ask — I saw your response on the issue, and it seems like what you're saying is: if you have some observer for the time or the usage or whatever it should be, and then you take the rate of that, you get the utilization, right? — Yes. — So if that's the case, then I guess — I mean, you've argued about this before, but do we need both?
C
We can't automatically generate, like, a disk utilization without knowing a limit, and I think the whole point of this utilization was so that you don't have to separately encode a limit and do a join downstream: there's just one metric that tells you utilization, so you can monitor it directly. So that's good. It's just that time is special — so should we stop encoding the CPU utilization and always derive it? I think that's asking too much, but we could, eventually.
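As a worked sketch of the derivation under discussion: utilization is the rate of a cumulative usage counter over the collection interval, divided by capacity. For CPU, the capacity is elapsed wall-clock time times the number of CPUs, which is why time is "special" and needs no separate limit metric; for disk, the limit has to come from somewhere else. These function names are illustrative.

```go
package sketch

import "time"

// cpuUtilization derives utilization from two readings of a cumulative
// cpu.time counter: rate of usage divided by available CPU-seconds.
func cpuUtilization(prevUsage, currUsage, interval time.Duration, numCPU int) float64 {
	if interval <= 0 || numCPU <= 0 {
		return 0
	}
	used := (currUsage - prevUsage).Seconds()
	capacity := interval.Seconds() * float64(numCPU)
	return used / capacity
}

// diskUtilization needs an explicit limit; there is no implicit
// capacity the way elapsed time provides one for CPU.
func diskUtilization(usedBytes, totalBytes uint64) float64 {
	if totalBytes == 0 {
		return 0
	}
	return float64(usedBytes) / float64(totalBytes)
}
```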
F
You have three here. You can also compute utilization for — I think for swap, you can also compute it.
C
Like, maybe universally you can generate utilization from usage — and I would still probably say that it's the easiest. I mean, you have to keep that state again either way, so whether the client has to do it or the server has to do it — it's a lot easier, in the world we're in today, to just have the client compute utilization.
K
I mean, that's true, but it's just taking the rate of anything, right? So wouldn't — like you mentioned for OTLP, you could have an interval temporality. So couldn't you have an instrument that measures intervals, or measures rates directly? It seems like that's the only way you could implement it without storing state.
C
Yeah — this could be an endless discussion. I mean, you're right: we can encode deltas, and at some level that's why deltas are sometimes nice; on the other hand, cumulatives are sometimes nice for other reasons, and it's just hard to always choose one. Okay — so, just—
K
Go ahead, please. Yeah — I think it's good to keep utilization, just because they seem useful, and I can't speak about all backends, but it seems like it'd be nice to just export them directly, so people won't have to worry about it. I'm just not sure about the UpDownSumObserver, because, just going by what I read in the metrics API, it said if you plan to aggregate these values across — like, globally...

K
...maybe then it might be good to have it represented as a sum; but if you want to get, like, a distribution or a summary, then the ValueObserver would be better. — Are you okay with my comment, my pending comment? — Yeah, I'm okay with that one, but there are some other ones. Like, for instance, if you scroll down a little, I think there are some UpDownSumObservers that aren't—
C
...utilization. It's not the usage ones — the utilization ones we've discussed, and I think we can all agree that utilization should be ValueRecorder or ValueObserver. It's the one at the very bottom of this file, which is, like, network connections or something like that, which I—
C
—really kind of don't feel strongly about. The thing I wanted to say in that issue, but ran out of time to type, was something about how we've had several debates in the past over what the default aggregator is supposed to be, and it really is being discussed in the context of a process that's recording or observing some measurement — what's the default for a process — by the time the data gets aggregated at a collector from multiple hosts.
F
So, for connections — for connections, what was the problem with these?
F
But if you look at this in the backend, what you're most likely going to do — you will say you cannot look at this as a histogram. Let's put it the other way: because it's just a single value, in order for you to produce a histogram, you'd have to ask what the distribution of the number of connections is across ten machines, or something like that. You need to put another—

F
You need to add another dimension — another thing into this game — in order to produce a histogram, because by itself the value is not a histogram.
K
I could see that for, like, a regular UpDownCounter; but since this one is asynchronous and you're just taking measurements — you're basically taking samples, right? So if you wanted to say, at any point in time, how many connections would I have? Or, what's the distribution of the number of connections as I measure them?
F
But across multiple machines, it's not our job to do that aggregation. We just inform the backend that here is the one value for this machine, and the backend should be smart enough to say: if you aggregate across different labels, where one label identifies one machine, you build me a histogram. So I think that's the next aggregation: if you see this as an aggregation pipeline, that's the second aggregation that you do. Now that you have these as measurements, you use them and you build another aggregation.
C
Yeah — and I was trying to say roughly the same thing earlier: we're only in the business of specifying default aggregations for the process itself, and these write points, sort of; later on, you're going to talk about aggregating points again, and often we talk about grouping them. And I think mostly what people want to see, in this case we're talking about, is: okay, we are computing a sum, but we're going to group it by some attribute, and then I will see a number of sums.
C
Then we can close the API one — then we can close your PR, or at least move forward — and I'll take on this question about clarifying the instrument choice. I do hope we can talk a little bit about the last topic, which is important, because we still have a disagreement over what to do with ValueRecorder.

C
We also have no summary data type in OTLP, and I think we still have some sort of disagreement about how the default aggregation might not include the last value if it's including a histogram or a DDSketch. Sometimes people want histograms and some people want last values, and that's why I at some point proposed this min/max/last/sum/count idea.
B
For this one — for the ValueRecorder — is there a case for measuring things synchronously and needing the last value? Or is it only a matter for the ValueObserver — yeah, the ValueObserver — where you need the last value?
C
I think you're asking the right question. I've seen different people answer it differently, unfortunately, so I think we're faced with some disagreement over whether that makes sense or not. And I think people have code that's been written in a particular way, and it almost suggests that they'd have to rewrite their code to get the behavior they want, and that's going to offend some people — like, "darn it, I just have a number; I finished some synchronous request and I want you to observe it."
C
If it just included last value — I think that's what you just said — then you'd have everything, and that's almost the same as my min/max/last/sum/count proposal, really, which didn't have a histogram but is, I think, a step in the direction of more information for not much more cost. So I like it — but I think you'd also probably agree that's a minority of requests. You know, like, a small number of people care about this, but it tends to be — I'm—
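A minimal sketch of that min/max/last/sum/count aggregation: a fixed-cost summary of synchronous recordings that preserves the last value without paying for a histogram. Illustrative only.

```go
package sketch

import "math"

// MinMaxLastSumCount is a fixed-size aggregation of recorded values.
type MinMaxLastSumCount struct {
	min, max, last, sum float64
	count               uint64
}

func NewMinMaxLastSumCount() *MinMaxLastSumCount {
	return &MinMaxLastSumCount{min: math.Inf(1), max: math.Inf(-1)}
}

// Update records one measurement.
func (a *MinMaxLastSumCount) Update(v float64) {
	if v < a.min {
		a.min = v
	}
	if v > a.max {
		a.max = v
	}
	a.last = v
	a.sum += v
	a.count++
}
```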
B
There
I
in
my,
maybe
maybe
my
naive
opinion,
but
even
there
one
could
set
off
a
value,
an
asynchronous
metering.
Every
hour
right,
you
wouldn't
have
to
do
a
synchronous,
hour-long
recording,
like
the
only
reason
to
use
this
is
because
you
care
about
the
temporality,
but
but
maybe
that's
overassuming
again,.
C
So then, just to make this concrete: I think there is a proposal that we settle on DDSketch, which is min/max/last/sum/count and variable-boundary histograms. I think it makes a lot of sense. The thing that concerns me the most about it is that it doesn't map well into Prometheus — and that's even if we can set aside this concern about those who want last value from their synchronous instruments, which will require an alternate configuration. After we set that aside...
B
Right — so what I was thinking there: we obviously work with OpenMetrics all the time, and we have a conversion from OpenMetrics to DDSketch, where we just figure out which of the variable-size buckets each Prometheus value would fit into, and we merge them that way. We're limited to the fidelity of the OpenMetrics format, but it works well.
C
Right,
so
I
think
I
for
when
I
first
took
from
you
there.
The
idea
is
that
you'll
just
convert
back
into
a
standard
histogram
and
you
might
keep
right
now.
It's
fixed
right.
C
C
So then it's the collector's Prometheus exporter — or, sorry, yeah, it's the Prometheus remote write, or the Prometheus scrape endpoint; both of those would have this behavior. But at least then you're configuring it in the collector's YAML file, not in the SDK of the individual processes out there.
F
Yes — because the problem is: if you have variable buckets, you can handle non-variable buckets. So it's an easy problem from OpenMetrics to your model. From the model you have — the sketch model — to OpenMetrics is more complicated, because data will come with variable buckets, and we need to transform them into fixed buckets.
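A sketch of the harder direction just described — re-binning counts recorded against variable boundaries into a fixed, Prometheus-style boundary set. Each variable bucket's count lands in the first fixed bucket whose boundary contains its upper bound, which loses fidelity but preserves the total count. The `rebin` helper and its layout are assumptions, not collector code.

```go
package sketch

import "sort"

// rebin maps counts from variable bucket upper bounds onto fixed
// boundaries; the extra final slot is the +Inf bucket.
func rebin(varBounds []float64, varCounts []uint64, fixedBounds []float64) []uint64 {
	fixedCounts := make([]uint64, len(fixedBounds)+1)
	for i, upper := range varBounds {
		// First fixed boundary >= this bucket's upper bound; if none,
		// SearchFloat64s returns len(fixedBounds), i.e. the +Inf slot.
		j := sort.SearchFloat64s(fixedBounds, upper)
		fixedCounts[j] += varCounts[i]
	}
	return fixedCounts
}
```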
B
So
the
issue
philosophically
and
not
coming
with
code
for
a
moment,
is
that,
if
the
shape
of
the
data
changes
significantly
over
time,
then
the
prometheus
format
loses
compatibility
over
time
anyway.
So
we
are
talking
about
a
dd
sketch,
that's
fundamentally
within
the
same
reasonable
bounds
right,
which
is
a
more
constrained
problem.
At
least
it's
not
that
any
sketch
will
merge
nicely
into
these
same
prometheus
buckets
because
prometheus
buckets
don't
do
that
over
time.
C
This sounds good to me. I think we need to clarify some of these details in the text, and also I think we're out of time, so we should call it. Just to remind you all: I'm very excited about 0.5 — looking forward to that. Thank you all. Thanks — bye, everybody. Thanks. I'm going to make an action item for you, Michael, to help try and follow some of this stuff.
B
Yes,
thank
you
and
open
questions
that
you
have
phrased
in
your
words
would
be
helpful
for
me
as
well.
C
I think that that idea — now that I know the Prometheus write system pretty well... What's glaring to me is that the scrape format, which is logically equivalent to OpenMetrics, includes timestamps — and timestamps are a real thing — while Prometheus remote write does not have reset timestamps in it, and I'm not sure exactly why. Well, that's the big problem. The second problem is that you don't know the difference between a counter and a gauge, or whether a counter is monotonic.
F
Yeah,
that's
that's
true.
My
biggest
problem
is
different
one.
I
think
I
think,
having
collector
as
an
intermediate
step
on
a
pool
based
mechanism,
we're
not
gonna,
give
it
right.
We're
never
gonna
have
because
the
pool
mechanism
that
you
prometheus
uses
is
that
to
determine
a
bunch
of
other
things.
If
the
process
is
up
and
running
and
a
bunch
of
other
things,
we
don't,
we
cannot
so
so.