From YouTube: 2020-10-23 meeting
B: I'll also add that I did not take any time today to prepare any additional agenda, but I see a few items and I suspect we can have a fruitful conversation just with those items, and then in future weeks we can expect more organization. Maybe this is the week we said we'd start doing this. I don't know — I could.

B: I could start by passing this to Andrew or Morgan or Josh — any of you who have been in this room for a few meetings now and have talked about doing some organizational work here. Would you like to speak?

D: I have a proposal to help with the organization: to bring the same type of status updates that we have for the specs — segmenting, tracking of issues, prioritization, assignment of P1s and such — which I'd be happy to add as a regular, time-boxed agenda item. That way we also have time to talk about other things related to it. Would that be a helpful, desirable thing for the SIG, for me to go over?

C: Absolutely. Andrew, the process you did for tracing was amazing, and it brought a lot of order to the chaos of any open source project, and certainly this one. If we can continue that — if you're willing to continue that, now with a focus on metrics — seriously, that would be great.

B: All right, sounds good. Let's see — I see this first item on the agenda; we could just start there. Reihan, would you like to start with your item here?

E: Yeah. In our last week's meeting we discussed an agenda item — the very first one from last week, I guess. We want to convert some of our resource attributes to metric labels, and it's a pretty common use case for many of our customers. We discussed in that meeting that we wished to have it in the exporter.

E: But the case is: say we implement it in one of the exporters, and then a customer tries to use another exporter where we don't actually have the feature to convert the resource attributes to metric labels — then, in a sense, we are blocking our customers. So we had some internal discussions — Honua was there from our side, I guess — and we thought it might be a good idea to implement it as a processor, and we figured out that the metrics transform processor is kind of a good candidate.

E: The maintainer, Tigran, is hesitant, so I just want to discuss it more here. My strong feeling is it should go under the transform processor; I just want to hear from you guys — what do you...

B: ...think. If I could frame the question a little bit: I've recently started looking into this, and it seems like there is a metrics transform processor, a resource processor, and a span attribute processor, and I think we might have a disagreement — there could be different opinions in this group — about whether resource attributes should just automatically be applied as labels. That's the first question.

B: I want us to jointly discuss it. I had this conversation just yesterday with the team at Amazon — or AWS — working on the Prometheus receiver, where we looked into how Prometheus treats essentially the same concept. The resource values are the ones found through service discovery; they are presented to the relabeling step with double underscores, the user actually has to select them, and they end up exported as labels anyway.

B: Well, we're talking about this in the context of a collector processing stage, right? So if you have an OTLP exporter in a client SDK, my assumption is you're going to put the resource in there, and then, when you see this in your own — let's say in a Prometheus exporter in the client library...

F: I think the filtering of the resource, or changes to the resources, happen right now in the resource processor. So, based on the current design, this may happen in two steps: removing things that you don't want to be exported may happen in the resource processor, and making the resource attributes labels on the metrics may happen in the metrics transform.

B: Yeah, sure, sure, but I'm saying: if you get an OTLP data structure which has labels and attributes — sorry, label/attribute terminology problem — which has resource attributes and labels, and you're exporting to Prometheus, I believe the thing we should be doing is taking the resource attributes and making them labels as we export to Prometheus. And it's easy, so...
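The transformation Josh describes — resource attributes becoming labels as metrics are exported — can be sketched in a few lines. This is an illustrative Python sketch, not the collector's actual Go code; the rule that existing metric labels win on a key collision is my assumption, not something stated in the meeting.

```python
def apply_resource_to_labels(resource_attrs, metric_labels):
    """Merge resource attributes into a metric's label set at export time.

    Existing metric labels win on key collision (an assumption), since
    the exporter should only *add* resource information, not overwrite
    what the instrumentation set explicitly.
    """
    merged = dict(resource_attrs)
    merged.update(metric_labels)
    return merged
```

For example, a metric with label `method=GET` on a resource with `service.name=cart` would be exported with both keys as plain labels.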
F: But also, another thing: this should not be a decision for the user, necessarily. The only decision for the user could be: do it completely or not — do it or don't, things like that. That being said, I don't think it should be a processor, because then the user would have to configure this to happen.

F: I think it should be part of the exporter. And I just found another use case very similar to this. My solution — because it's not only Prometheus; statsd, for example, as well: if we export statsd we want to do the same, and if we export...

F: ...to other things, like Datadog, we probably want to do the same. So the solution for this was not a processor but a helper — what we called a consumer metrics wrapper. It implements the consumer metrics interface and accepts a base one; it does the transformation and calls the next element. So, by having this wrapper, the Prometheus exporter, for example, returns as part of the exporter something that also implements this interface.
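Bogdan's "consumer metrics wrapper" — a helper that implements the same consumer interface, does the transformation, and calls the next element — might look roughly like this. This is an illustrative Python sketch; the real helper lives in the Go collector, and the class and method names here are hypothetical.

```python
class ResourceToLabelsConsumer:
    """Wraps the next metrics consumer: folds resource attributes into
    each metric's labels, then forwards to the wrapped consumer."""

    def __init__(self, next_consumer):
        self.next_consumer = next_consumer

    def consume_metrics(self, resource_attrs, metrics):
        # metrics is modeled here as a list of (name, labels, value) tuples
        converted = [(name, {**resource_attrs, **labels}, value)
                     for name, labels, value in metrics]
        # resource attributes are now folded in, so forward an empty resource
        return self.next_consumer.consume_metrics({}, converted)


class RecordingConsumer:
    """Stand-in for an exporter sitting at the end of the pipeline."""

    def __init__(self):
        self.received = None

    def consume_metrics(self, resource_attrs, metrics):
        self.received = (resource_attrs, metrics)
```

Because the wrapper and the exporter share one interface, any exporter (Prometheus, statsd, Datadog) can opt in by wrapping itself, which is the point Bogdan makes about it being a common helper rather than a pipeline processor.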
E: Here I am trying to understand — maybe I don't know. I'm trying to understand the difference between putting it into a processor versus coming up with a new concept, an interface for consumer metrics conversion or whatever we're calling it here.

E: What configurability do you want? For example, for our cases: by default we have maybe six or seven resource attributes, but a customer may only care about one or two, and some customers care about all of them. So how do...

F: So that's the optional part — the part of this thing that the user can configure. But as for the transformation: whatever ends up being passed to the exporter, everything that is in the resource, I want to become labels. The user can configure the pipeline to remove some of these things prior to calling the exporter, but once you hit the exporter, everything that is in the resource becomes a label.

F: Does that make sense? Let's not discuss all of this right now — I can draw a picture for you if you want, or we can talk on Gitter, to give you more details about how I see this.

E: I think I got the high-level idea, but I still want to make sure I'm clear — maybe I'm not getting it. One thing: how are we resolving the issue? Say we're talking about the Prometheus exporter — say we've implemented this for the Prometheus exporter, but a customer wants to use another exporter where that feature is not supported, for example the AWS CloudWatch exporter. Then, in a sense, we are blocking some customers from exporting to CloudWatch because of not having this.

F: The wrapper sits — because we need this in core. We will have to have it in core because it is needed by... exactly. So it's a common option for all the exporters. That's exactly what Wesley told me here.

B: So we've talked about this sort of all-or-none option, and I think what we're saying is that all the resources should pass through somehow: if a resource is kept separately from a label, then keep it separate, and if it's not, put the resources in the labels. And then the next concern is that sometimes there are too many resources and the user doesn't want them all — it's too expensive or whatnot. So now we have a resource processor that drops labels. I wanted to...

B: ...stop there — sure, okay, great. All right, I've pulled up some links beforehand so I'd be ready to share. I'm looking at the Prometheus configuration document for service discovery — let me get this Zoom stuff out of the way. Kubernetes pod is one of the fairly common ones, and you can see here, this is the documentation per pod, and these are called available meta labels. These are the labels that are given to you.

B: At the moment a target is identified, they're applied with double underscores at this first stage in a Prometheus pipeline. The double underscores mean these are not going to be reported by default, so they're available for relabeling only, essentially. And you'll see — it's like 20 of them, and some of them, like this one here, pod label underscore label name, are really prefixes, so there are going to be a bunch of them.

B: The next page here that I have is the Helm chart for the community-owned Prometheus deployment on Kubernetes. This is a recommended configuration for running Prometheus on Kubernetes, and you'll see this is the section where the default relabelings for Kubernetes pods are decided. So this is the rule that's going to take those 20 or so pod meta labels and turn them into actual real labels, which means these are the ones that are going to pass through — and I will note that it is not all, and it is not none.

B: It is six or seven or something like that. The ones that are labeled "app" are passing through the entire key/values — they're just stripping off a prefix — and then some of these are actually changing names a little bit. So there are sort of four or five label attributes here which are recommended for metrics.

B: You could have the Prometheus receiver just take those meta labels and put them all in, then a resource processor that strips the ones you don't want and keeps the ones you do, and then an exporter that just sends them all out. And I think there's a standard component which says: give me the semantic-convention-recommended attributes from the set of all attributes, which is 20 or so — so not all, not none. I'm not sure if I was clear about this, but...

F: I think, Josh, you were very clear — for me at least; I understood everything you wanted. There is a small difference between us and Prometheus: Prometheus by default has a "no" for this, but we by default have a "yes". So the difference is: for us, all the resources are by default kept as part of the metrics, while for Prometheus everything is just available but you have to explicitly enable it.

F: It's a flavor — we can discuss which one is better and whether we want to change the entire behavior — but apart from that, we have exactly this functionality already available. As part of receiving things we have a discovery mechanism; when you run in Kubernetes we have a processor that appends pod ID and a bunch of other information like that into the resource; we have a resource processor that can drop things; and then, after we do all this, we hit the exporter.

F: I think the exporter shouldn't have any other keep-or-drop logic, because we already have a chance, in the resource processor, to keep or not keep parts of the resource. Whatever you have there, you just turn into labels, and that's it. Yeah — having defaults and such, I completely agree with that. We are lacking in documentation and in explaining to people all the capabilities. We probably have way too many capabilities in our pipelines, but that's a problem there.

B: I've actually wondered whether we could somehow use the Prometheus struct — the sort of YAML or JSON struct — as a language for relabel configs. It's an established thing: there are six fields, you can set them, they have various defaults, and they have well-defined behavior. So I wonder if we could reuse that technology. Maybe — but that's sort of an aside.

F: I would encourage you: please file an issue for this in contrib, and refer to the component called the metrics transform processor — that it should follow this...

B: ...config. Yeah, that would be a good idea; I'm gonna ask someone to volunteer to file that. It also occurs to me that we could have a behavior that just says: drop things with double underscores. Then you could just pipe the Prometheus service discovery labels in as resources and have them dropped, because there's a broad rule that says drop double underscores.
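The "drop double underscores" rule Josh proposes is a one-liner in spirit — Prometheus's convention is that `__`-prefixed labels exist for relabeling only and are not reported by default. An illustrative Python sketch (the function name is hypothetical):

```python
def drop_meta_labels(attrs):
    """Drop attributes whose names begin with a double underscore,
    mimicking Prometheus's rule that '__'-prefixed labels are
    available for relabeling only and are not exported by default."""
    return {k: v for k, v in attrs.items() if not k.startswith("__")}
```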
I: And because we do think that that's the way to implement it — specifically for the resource attributes and labels discussion that we had around the Prometheus processor.

I: Yeah, Bogdan, if you can comment on the issue and actually provide us a framework of what you have in mind, we can definitely help implement it. Again, let's clarify — that would be super helpful, especially for non-OTLP exporters. So do you envision, from a common-feature point of view, where this should reside? Because I'm not sure it's an exporter per se. Let's...

F: Right — we just need that to be implemented. I will show you how I see things in the issue; once you post the issue somewhere, I will point you to how the structure should look. And this is also important for logs, and even for traces going to backends that don't support resources. So it's not only for this — it's for all of the signals.

B: Actually, no, you're right — I forgot about that. It's at least one of them, though. Yeah — shall we move on? Alolita, you've got the next one here.

I: We are actually looking at a couple of use cases for supporting summary in the OTLP definition, to have full compatibility with the Prometheus data definitions. Again, this is something that existed earlier and was taken out, and today the summary type just gets dropped in OTLP. And — sorry, Bogdan, what were you saying?

I: So there's really no clean way, if you will, of being able to guarantee that summary-type metrics are passed through from OTLP to the Prometheus data format.

F: I would encourage the following. Personally, I would like to see a proto proposal for this, because we need the proto change — and maybe consider how we name it, because naming may be a problem. The current proto definition very easily supports adding a new value there, so it's just a matter of what this aggregation would look like.

F: I know Prometheus is very important, and OpenMetrics also has this, so I think there is no way we can avoid supporting it. Even though we don't produce it in our SDKs over OTLP, we have to carry it, because we may get data from Prometheus, or from other protocols that have a notion of quantiles and summaries. So we probably should not drop this support at the OTLP level — but there is probably a separate discussion about whether we should support producing these summaries.

I: Yep — so, Bogdan, there are obviously two related discussions, because quantile support also needs the summary data type, and that's again a requirement we even have from CloudWatch metrics being interoperable with OTLP. So that's something that, from both a Prometheus and a CloudWatch point of view, we need OTLP to be able to support, to have that full transformation. So, in terms of a proto proposal, do you ex—

F: ...pick up, like, the histogram definition that we have here. Yeah — pick up one of them, look at histogram, for example, and just propose one for summary. Okay. And I don't see any other issue. Go on, Josh.

B: There's an alternative I posted in that issue that I at least want us to look at or consider, which is that even within Prometheus the summary data type is sort of almost deprecated. I posted this issue — so, Morgan, if you wouldn't mind just scrolling to the bottom there — but basically, when Prometheus is done processing a summary, it turns into several time series, and we can just map the Prometheus summary into several time series. It's not an efficient representation.
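The mapping Josh describes can be sketched as follows. This is an illustrative Python sketch: the `_sum`/`_count` suffixes and the `quantile` label follow Prometheus naming conventions, but the function itself and the dict layout are hypothetical.

```python
def summary_to_series(name, summary):
    """Expand a Prometheus-style summary into individual time series:
    one for the sum, one for the count, and one per quantile."""
    series = [
        (name + "_sum", {}, summary["sum"]),
        (name + "_count", {}, summary["count"]),
    ]
    for q, v in sorted(summary["quantiles"].items()):
        # each quantile becomes its own series, distinguished by a label
        series.append((name, {"quantile": str(q)}, v))
    return series
```

As Josh notes next, this is not an efficient representation: one summary point fans out into 2 + N series, where N is the number of quantiles.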
B: It's not one point for one summary: it's one point for the sum, one point for the count, and one point for every quantile. The advantage of doing this is that we never have to add a new point type to the protocol, and we leave it in this sort of deprecated state where it works, but it's expensive — you get the output you expect. And then the problem would be if you had a Prometheus receiver and a Prometheus exporter, meaning you're getting scraped by another Prometheus.

B: You'd expect that data gets scraped by one Prometheus and then gets scraped by the next, and it's going to see a summary. But by this time we've put it through OTLP, so it's going to come out as a sum, a count, and a bunch of quantile time series. So there's a little bit of code to reconstruct a point at that time.

B: I guess the thing is that summaries aren't very elegant anyway — I think that's the reason I suggest this. And then there are several related topics, for me.

B: There's been a call for — I mean, the old idea of min/max/sum/count, which is kind of the summary with only two quantiles, zero and one. I like min/max/sum/count, but even when we were doing that, in the first half of this year, it was sort of a hack to have to express my min and my max as quantile zero and quantile one. So I've wanted — at least, it's been mentioned a number of times...

B: ...in certain threads, to have a way of putting the actual max and the actual min into a histogram, because I kind of want us to converge on just one data point type. So summary has this quantile stuff, and the other idea that I've posted here is that you could kind of represent a summary as a histogram: you multiply the count by each quantile that you're going to use, and you synthesize buckets.

B: So if you have, you know, 100,000 requests and you have a p50, which is a 0.5 quantile, you're going to output a histogram bucket saying that there were 50,000 less than halfway through your range. It is perhaps a little bit less hacky than the last approach I gave. I just wanted to mention that possibility.
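That synthesis can be sketched in a few lines — a quantile q at value v implies roughly q × count observations at or below v, matching the 100,000-request, p50 → 50,000 example above. Illustrative Python; the cumulative-bucket layout is my assumption of what the synthesized histogram would look like.

```python
def quantiles_to_buckets(count, quantiles):
    """Synthesize cumulative histogram buckets from summary quantiles.

    A quantile q observed at value v implies roughly q * count
    observations <= v, so each quantile becomes one cumulative bucket
    with upper bound v.
    """
    buckets = []
    for q, v in sorted(quantiles.items()):
        buckets.append((v, round(q * count)))  # (upper_bound, cumulative_count)
    return buckets
```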
B: I put that into a different issue, and the reason I mentioned it in the first place was that we've got a statsd receiver, which has to support the statsd format, and that has this sample rate built into it. Once you have sample rate built in, it's like: if I have a sample rate of, you know, 0.33333...

B: ...and I get one event, I have to say that that is about three. But if my sample rate is, I don't know, one in 23 — some weird number — every individual event turns into a floating point number. So there's a question about whether there's a histogram that supports counts which are floating point. But if you have a histogram which supports floating point counts, you can use it to support sample counts from statsd.
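The sample-rate arithmetic Josh walks through (one event received at rate 1/3 represents about three original events) is just the reciprocal. An illustrative Python sketch; the function name is hypothetical:

```python
def sampled_weight(sample_rate):
    """A statsd event received with sample rate r represents about 1/r
    original events, so each received event contributes a floating-point
    count of 1/r to its bucket."""
    if not (0.0 < sample_rate <= 1.0):
        raise ValueError("sample rate must be in (0, 1]")
    return 1.0 / sample_rate
```

For awkward rates like 1/23 the weight is not an integer, which is exactly why the bucket counts end up needing a floating-point representation.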
F: That's a possibility, but I think we have a problem. The problem is: we will never be able to replace the whole Prometheus world. Prometheus will still exist, and OpenMetrics — which they just kind of released; I think this KubeCon they're finally gonna announce it — has it. So it's gonna be hard for us to ignore the fact that it exists. I would probably just have it there and call it legacy or deprecated summary, or something.

F: No, no, no — don't use it, don't produce more of these, and keep it more like the OpenMetrics/Prometheus world, because that's probably the main place we get it from. If possible, make it compatible with statsd or other protocols — that's something, Alolita, you should look into: look at other protocols that support this summary and what they have, so we can ensure that we can support the others as well. But name it in a way that says: okay, this is just for supporting some legacy things that we don't control.

F: Josh, I feel like there is no chance we will escape from this. I like your proposal, but I don't think there is a chance, and I really want to make sure that we're never gonna produce these, because they are bad — but there is no chance we can escape, from the collector's perspective, having to propagate them.

I: Yeah — I mean, again, it's an overall interoperability requirement. It may not be something that OTLP wants to support, but on the other hand...

F: I think the biggest problem you have to resolve is the one with quantiles zero and one — whether those exist or not, and whether they represent min and max. I just want to have that clarified.

B: So my perspective is: min/max/sum/count was a proposal written very early into the spec, and it was my idea — a pattern I've used in production in the past — but it's not necessary. And I think we've basically come to the point now of forgetting about min/max/sum/count and only wanting to have a histogram.

B: That's what I would like — to go all in on histogram. But I think that min and max are there for special cases that some protocols care about, and if you're going to try to get everyone to agree that all distribution points can be represented as histograms pretty well most of the time, then some protocols are going to end up being converted into this histogram, and it would be nice if we could get min and max correct — that's my thinking.

B: So it could be, like, histogram with two extra fields just for min and max, which are optional. Maybe — because there's the question of missing fields and optional in proto in front of us, so that complicates matters.

B: I mean, I think Bogdan's right: we should just do a summary and move on — a summary in OTLP.

A: Okay — but then, I guess, the follow-up question is: how would it change, really, there?

B: There was nothing wrong with what it used to be. I think the problem used to be min/max/sum/count, so we can delete the concept of min/max/sum/count across OpenTelemetry and then bring back the summary the way it was — it's exactly a one-to-one mapping from Prometheus. I don't know how common it is to use quantile zero and quantile one with a summary data structure, but the reason why it was so problematic is that we had specified...

F: Let's forget about that, and let's make sure that this summary is something we don't produce from our own stuff — there are gonna be discussions about that, correct, so that we keep it Prometheus-like. And talking about Prometheus and OpenMetrics, by the way: at some point I would like to start discussing maybe the enum thing that they have — there is another type called enum in OpenMetrics — but not right now. Let's hold our horses and focus on the current Prometheus, not the future.
B: Sounds good. There's a label — actually, there is a missing release in the repository. I think there's a tag but not a release, and we probably could fix that. The current version was released in August or September or so; it's tagged 0.5, and I think almost nothing has changed since then, so it's about the same today.

J: Okay, so that's clear — so I can correct my PR. Yes. There are other things I'd like to discuss besides this. Firstly: in master there's an IntHistogram and a DoubleHistogram, and in the discussion we're thinking about adding a third where the count could be double. But isn't this too many?

F: The only difference between them is the sum, and some people — including myself — believe that it has better precision for high numbers if you use the integer sum as a hint for some of the cases. But again...

F: ...a oneof is super slow in proto, and it's super bad. So if we have only one oneof in the whole thing, which selects the type, that causes like two allocations for every sum. Think about it: you have a combination of a thousand label-value points — then you'll do two thousand allocations for nothing, just to put an int there. So it's very bad.

B: Josh — yeah. At certain points over the summer I made a number of proposals that we went through before we ended up here, and the variations were that you could have a single histogram data point type that has, like, just two fields — because you're looking at a struct that has about 10 fields in it, so by the time you compare the cost of wasting one, maybe a pointer's worth of data in that struct —

B: — it's really small, the amount of overhead. So you could just have two fields and only use one of them. But then you have this case where you have to look at the value type and decide: am I looking at integers or doubles? Myself, I'm sort of thinking that we could probably, for the most part, get by with just DoubleHistogram.

F: Okay — if we have doubts about this, apply the rule: remove it. Remove the int-versus-double histogram split and just have double. Remove the int version, keep the double, and I'm fine with that, even though it's a small breaking change. I don't think anyone has started to use it, because it was pretty new, so we can forget about this. Okay.

F: For counters we really have cases — like the beats router and stuff. I think it's a bit more serious, and even Prometheus and OpenMetrics have that.

J: Yeah — I meant for the bucket count: for each bucket there is a count. Shall we support double in the count, or just ints? Oh — for the bucket counts.

F: Is that not a whole number? Or do you do sampling and you do some — I mean, that's the number of measurements that belong to that bucket.

F: Let's put it this way: I bet I can help you with that — with protos I'm pretty good at understanding these things, so I can help you if you have questions. Put down whatever you think and I can propose alternatives, so we can keep backwards compatibility and such. So that means...

B: I think it means, in this case, that the explicit bounds are going to stay where they are, and if we have new, different bounds-type options, we should put them in a new place for backwards compatibility. In other words, there would probably be a oneof — one of either explicit or linear or exponential, as in OpenMetrics. If we wanted to change that, we'd be breaking something for the explicit-boundary case, which I'd rather not, because we are using it as well.

F: A oneof is not wire information, so you don't break the wire for that. I explicitly tested that, and thought about it when I decided not to add it. So it's possible to add more things and put them into the oneof; it's going to change the generated code a bit, but the wire format is backwards compatible.
F: In this case I'm personally leaning more towards putting both and paying an extra eight bytes than doing the oneof. We can discuss it in the PR, and we can measure some of the performance aspects, but we will make it backwards compatible and we will make it extensible. So that's...

J: It's logically a oneof, but not physically — yeah.

F: So, for example, in the collector — because we know it's not a wire change — we are actually thinking of dropping the oneof whenever we generate our stuff. Even though it's there in the proto, and we may think about leaving it there in the code, in a collector, where we really care about performance because we have a huge amount of traffic...

F: ...we actually think about applying a patch, whenever we fetch the module, to remove this oneof, because it's essentially the same thing — but we're gonna treat it in our generated code as a oneof anyway. Tricks for performance — things we can discuss there. I'm very happy to help you with understanding the trade-offs and such.

F: That's a separate PR — I would do this PR separately from the other one. Not saying no — again, not saying no — but what I'm trying to say is: let's try to do granular, focused changes so that they can be accepted faster. We've seen that if you come with a complete restructuring of things, it may be harder, because over one small change people will get very busy and we will not improve the entire world because of that one breaking change.

F: So let's talk about it separately. I think Stackdriver had an experience with this: they put min and max inside the histogram — it is actually in the proto — but if you try to send them, they will refuse your message, because they say they don't support that. It's a very good experience to learn from.

F: Okay, it is, it is — so proto3, just a month ago, announced support for — how did they call it? — they just added new support for optional.

F: ...is back. Yes, the optional keyword is back. I'm not sure if it's fully supported, fully implemented, but we can discuss that when we do the PR for min/max or other fields, if you really need them.

J: Well, optional — one more: how about the sum? Currently sum would be required, as it stands in v1. I think it should be required, correct? What technically, strictly speaking, is a histogram is just the histogram — but it just so happens that many implementations also gather the min, the max, and the sum.

F: For example — I don't know — if you are exporting a histogram for message size, like the HTTP request size as a histogram, because you want to see the distribution of the sizes that you are sending, you most likely want to look at the rate as well anyway.

J: My incorrectly placed PR — I was also adding the number of linear sub-buckets for the exponential histogram, as an optional field. If you don't have it, you can leave it as 0 or 1, which would be the same. The purpose is, just in the scope of the exponential histogram, to support two ends of the spectrum. One is: if we want the CPU-optimized histogram, we do the log-linear, which is very fast compared to the pure log.

J: The producers include HDR histogram, Circonus histogram, and DDSketch. When you create a DDSketch there's a fast mode; fast mode is actually a linear interpolation — there are linear sub-buckets in there. And on the other side, if you have spare CPU but you want to squeeze out the last bit of memory, you go with the pure log.

J: Basically, your number of sub-buckets would be one: you always have one bucket in each exponential range. So I'm proposing we support those two options and nothing in between, at least for now. "In between", as in the DDSketch balanced mode, which is a cubic approximation of the log.
J
J
B
I have a sort of related comment that's a little bit of a step back, if you don't mind. I followed you, but I started a new question.
B
It's related, though, and I wanted to ask it. What you're talking about is that we're beginning to add parameterized encodings for histograms, where, instead of having the explicit boundaries, which are relatively expensive to encode, we're going to have something that takes a smaller number of bits, certainly, but also probably has some kind of parameter. In the case of the exponential histogram there's a parameter which is the base of the logarithm, or the exponent, and DDSketch has a gamma parameter. So there's usually a parameter.
B
D
B
That realization is one that I've just sort of discovered is present in the Circonus histogram, which is one of those cases that has the linear sub-buckets on the log-linear exponential strategy. The key there, of course, is that it's a decimal base: they're using base 10 because it's natural in some sense, and that's also the sort of default that the Prometheus project chose, to keep your buckets on power-of-10 boundaries.
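A decimal log-linear bucket of this flavor can be sketched as keeping the power-of-ten exponent plus the two leading decimal digits, i.e. 90 linear sub-buckets per decade (a hedged illustration of the general base-10 idea, not the actual circllhist code):

```python
import math

def decimal_log_linear_bucket(value: float) -> tuple[int, int]:
    """Base-10 log-linear bucket (illustrative): the power-of-ten
    exponent plus the two leading decimal digits (10..99), giving
    90 linear sub-buckets per decade."""
    if value <= 0:
        raise ValueError("positive values only")
    exponent = math.floor(math.log10(value))
    digits = int(value * 10.0 ** (1 - exponent))  # two leading digits, 10..99
    return exponent, digits
```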
B
So I'm starting to wonder: do we really want to add more ways to parameterize, and more different ways to encode, histograms? The sort of closing point is that I've mentioned the Circonus histogram a few times, and it comes up again when you talk about the CPU cost. It's expensive to call logarithm, and they've published an algorithm that allows you to do it without any logarithm calls. So it's sort of a nice balance for me, and I'm starting to think it's maybe our best choice.
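The general flavor of such logarithm-free tricks is to read the exponent straight out of the IEEE-754 bit pattern instead of calling `log`. The snippet below is a base-2 illustration of that idea only (Circonus's published algorithm is base 10 and is not reproduced here):

```python
import struct

def base2_exponent(value: float) -> int:
    """Unbiased base-2 exponent of a positive, normal float, read
    directly from its IEEE-754 bits; no logarithm call involved."""
    bits = struct.unpack("<Q", struct.pack("<d", value))[0]
    return ((bits >> 52) & 0x7FF) - 1023
```

For a positive normal double this agrees with `floor(log2(value))` while costing only a shift and a mask.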
B
However, we know that there are patents involving it, and now there's this weird situation. I'd like to have a conversation about whether maybe we could get Circonus to donate this to us, but it's both a technical conversation and a legal conversation, and I don't know how to have this conversation.
B
I did have a private conversation with one of the authors at Circonus on this, and I have to say it's a possibility. So I don't know how to break into this chicken-and-egg problem. If Circonus looked good to everybody, we could be talking to Circonus about maybe a patent grant, but it's complicated, and that's not necessarily our fastest path to a good solution here. Yeah.
J
My point is that, as soon as you get out of the explicit-bounds world, you are bound to have parameters, even for a single linear range. For the linear buckets you need to define the offset (where do I start?), how wide my bucket is, and how many buckets I have; that's three parameters for linear. For exponential, you definitely need to define the base.
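Those three linear parameters fully determine the bucket index; a minimal sketch (the function name and the underflow/overflow convention of returning `-1` and `count` are assumptions for illustration):

```python
def linear_bucket(value: float, offset: float, width: float, count: int) -> int:
    """Index of `value` among `count` equal-width buckets starting at
    `offset`; -1 means below the range, `count` means above it."""
    if width <= 0 or count <= 0:
        raise ValueError("width and count must be positive")
    if value < offset:
        return -1
    index = int((value - offset) / width)
    return min(index, count)
```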
J
I don't want to say people can't agree on a base; in DDSketch the base could be arbitrary, like 1.1, 1.2, or 1.01. So in my proposal the base is there also; it's inevitable. And just by adding the linear sub-bucket count, just one more parameter, you support a large family of log-linear histograms that are good on the CPU-optimized side, whether it's the Circonus one or not.
J
If we can't get Circonus to donate, there's also at least DDSketch: its fast option is exactly log-linear, except it's base two. But for a given precision, if the precision is good enough, nobody really cares. If you get one-percent precision, nobody cares what base you are using.
F
B
J
B
Yes, thank you, everyone. I think we've run out of time, so we'll do this again next week.
C
Yeah, and so the.
C
B
C
One final reminder for everyone (I'm guessing there's also a blog post that's going to say this): the OpenTelemetry governance committee election is next week. Everyone should have gotten an email for this. Even if you got the email, you do need to formally register via the link in the email, so please do that. Registration rates are already looking really good, but there are still a few more days, so please register. You can technically register during the vote, which I think starts on Monday, but it's a lot easier for everyone if you do it beforehand.
C
D
There's a meeting after the triage meeting tomorrow to talk about GA items. Perfect. So Bogdan, Morgan, it would be beautiful if you guys are also there. I'll be there. Okay, thanks.