From YouTube: 2021-07-21 meeting
A: Okay, let's get started. I think we're three minutes past. We had one item on the list, which was giving an update to the, you know, team on the changes that we've been making to the target allocator. We renamed the load balancer to a target allocator, because it's more appropriate in terms of what it's doing. And again, Alex and Rahul?
C: Yes, so as mentioned, we actually changed the name from load balancer to target allocator, because that is more apt for what we are building. "Load balancer" may be misleading to users, so we didn't want to use it, and we decided to go forward with "target allocator." So that was one change we made in the first PR, which we actually filed today. And other than that...
C: We also changed a configuration option. David had actually suggested that we use a "mode" instead; I think it was used before, so we had gone forward with that and were using "mode." But then Jana suggested that it may again be kind of confusing for the user, because there's already a "mode" option in the actual CRD, which could be confused with the target allocator mode.
C
So
that
was
one
reason
we
wanted
to
change
it.
Another
reason
was
that,
since
we
are
only
supporting
one
algorithm
now,
so
we
are
only
supporting
least
connection
as
of
now,
so
maybe
in
the
future,
we
can
go
forward
and
think
about
how
we
actually
want
to
manage.
It
maybe
have
separate
binaries
for
each
algorithm,
so
that
would
actually
not
need
a
separate
mode
option
or
if
you
are
having
multiple
algorithms
in
the
same
binary,
we
can
actually
add
it
back
with
a
different
name.
C
So
for
now
we
actually
just
went
forward
and
are
using
the
enabled
option
which
actually
lets
the
user
choose
if
they
want
to
enable
the
target
allocator
option
or
not,
and
this
would
only
work
in
stateful
setup.
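A minimal sketch of what that toggle might look like on the collector CRD (the field names here are a best guess from the discussion, not the final schema):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: example
spec:
  mode: statefulset      # the existing CRD "mode" field mentioned above
  targetAllocator:
    enabled: true        # simple boolean toggle instead of a second "mode" field
```

Keeping the toggle a boolean sidesteps the mode-name collision Jana raised, at the cost of needing a new field later if multiple allocation algorithms are ever exposed.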
C: So yes, yeah, that's about all of it.

D: [inaudible]

C: We have also added the two options with respect to the actual container image. We have the design document and the PR almost ready, so we'll be filing that pretty soon as well, and we'll be linking it to the PRs as...
A: Any concerns around this? Because I know that you had provided some feedback earlier, but I think it's in line with what you had in mind. We just kind of, you know, focused it to be a bit more generic. Initially we were using the least-connection target allocation as the, you know, default algorithm.

B: Yeah, I think a true/false "enabled" doesn't extend; we can't extend it with multiple options in the future. But also, if that's what we want to start with, then that's also fine. It's just a detail.
A: So, Rahul and Alex: do you guys need another review, or are you good to go on the PR?

C: We are actually good to go on the first PR. I mean, both the PRs kind of got approved internally, and we will be filing them. I mean, the first PR is actually already filed upstream, so I'll just link it in the document and...
A: Does this need re-review again? I guess it does.

E: This one was just created, right? Yeah. So the first one should be the controller in the operator that will create the target allocator, and then the next one will be the actual target allocator implementation itself.
A: Okay, so the next PR, Rahul, I guess you guys are filing later today.

A: ...allocator. What are the other topics that folks wanted to cover? David, anything on your end, or Vishwa?

A: Are there PRs on your end which you are waiting to have merged? Again, you know, we're in the midst of the, you know, trace GA work that's ongoing, so we'd really like to make sure that all our PRs don't get neglected [inaudible].
A: Okay, cool. Were there other topics we wanted to discuss? I know that there are some long-term discussions that we wanted to have, and one of the things that you might have seen on the metrics backlog (let me pull it up) is related to the redesign of the Prometheus receiver, and, you know, really evaluating whether the receiver can be optimized.

A: Is that something you're interested in, David? Or Vishwa, your team?
A: So right now, I think, the idea was that, you know, the receiver is primarily used for scraping for all of the collector, right, and that has been... hi Josh, thanks for joining. So that has been moving again. Josh had proposed, as well as others: why don't we move to using a more standardized service discovery mechanism in the collector, and not use the Prometheus receiver as it stands for all discovery and scraping, right?

A: So at this point the idea is: could we evaluate the receiver to see what changes need to be made, if any? Maybe the receiver is good enough, but there's also another issue: it actually has quite a heavy footprint today, right. It's 40 MB when you actually load the receiver itself, and we'd like to optimize that down to the minimum functionality that it needs to have as a receiver for supporting Prometheus.
A: You know, ingestion, but not necessarily doing everything; other than, you know, having the common functionality pulled out of it and perhaps modularized into more general collector modules. So that's kind of the thinking right now. The question is, you know, again, it would be good to have a more detailed discussion, maybe with some folks, you know, who are interested in working on the receiver.

A: We'll obviously take a look at it, but if anybody else is interested, we could have another deep dive into the code and, you know, also discuss the design, if there are possible, you know, redesign areas.
E: And Emanuel is already working on converting the Prometheus receiver to the pdata pipeline. That...

E: ...away from using OpenCensus, which hopefully will help with its size, if we can eliminate that dependency. We have to have pdata, so we can't really get rid of that; but if we can go directly from Prometheus to pdata and cut out OpenCensus, that should minimize its footprint, or at least reduce it a bit. And I think one of the other concerns that we've had with it is testability: the current tests that it has are fairly brittle.

E: When you try to go make changes, there are widespread changes to the tests for fairly small changes to the implementation. So that's something I think we'll want to look at improving as well.
A: Yep, absolutely, and, I mean, that's a very good point. That is something that we're definitely looking to build out, especially the OpenCensus part: swapping out all the OpenCensus dependencies will optimize this a bit more. Let me just share the backlog items that we have, and this is where, you know, we'll add more detail in terms of what exactly this part entails. So, as you can see, this is the metrics, you know, phase-two backlog.

A: The first phase only has the OTLP conversions for the collector, which are required for, you know, any other work to be done; but you can see here we're kind of tracking the Prometheus receiver redesign, you know, and exploring exactly these issues. Looking at how to remove...

A: ...I mean, use OTLP pdata directly instead of using the OpenCensus proto as a converter in between; and then also, you know, completing the rest of the backlog for Prometheus support that we had itemized in our group backlog, and, primarily, you know, really swapping out all the OpenCensus dependencies for native OpenTelemetry.
A: So those are, you know, some of the key areas, and we'd really like to have, you know, anyone who's interested kind of working with us on this as we tackle the Prometheus receiver redesign.

A: So that's something that Emanuel has already started to look at. And then there's another area which is related, which is ongoing, which is actually being started by Jay; I don't know if he's here, but definitely he'll be joining in for the collector SIG. Jay Camp, from Splunk: he's starting to look at an implementation of, or an evaluation of, what a potential design for a general service discovery model would look like, in discussions with Bogdan.
G: That's on issue 816. I think that's moving slowly; there are some blockers there to get the OTel Go SDK in. But briefly, I would comment on the one you're looking at now: I've promoted this idea in the past and I'm still behind it. There definitely is interest from Lightstep, in particular having to do with the Amazon Metric Streams product. When we get these streams of metrics, that's great...

G: ...but we aren't getting all the resources that we want, and we would love to have, essentially, a protocol from OpenTelemetry for publishing resources that looked just like metrics data. And I've mentioned it in this context in the past: it's like we can...
G: ...take the Prometheus receiver's service discovery code, split it into its own standalone app that just pushes out OTLP with resource data, and that would be potentially very useful to us. We definitely have people at Lightstep who are looking into doing this as a sort of special case for now, but it would be something we could imagine generalizing. At the beginning it's just to get AWS resources attached to AWS metrics, yeah, and...

A: Yeah, yeah, I mean that's a good use case, Josh. I mean, Metric Streams has other nuances, and obviously it's not really a push, so, well...

G: ...a place where we can run fewer of them, or control how it's deployed separately from collectors, and so on.
A: Potentially bringing in new sources, yeah, that's a good point. And Josh, I mean, again: the pros of having a specialized Prometheus receiver are that it's optimized for the Prometheus pipeline, which is, you know, heavily used, and it stays unique enough. But then there is (and this is just thinking out loud) more of a wrapper, you know, around being able to call specialized receivers, which could serve as the, you know...

A: ...general service discovery module, right; and then there is enough granularity in terms of actually having specialized receivers which are invoked for, you know, the types of sources that are being scraped, right. So again, it can be a, you know, design which considers a more modular approach, also because I think the Prometheus receiver does what it's supposed to do today.
A: You know, for supporting Prometheus, except for, you know, some of the optimizations that Anthony was also referring to: if we, say, removed all the OpenCensus dependencies, that would reduce the size somewhat, plus other efficiencies that we could actually, you know, gain. But anyway, I mean, this is an effort that Jay has started to look at. I'll chat with him and see if he can join in here, because I think in general it's a related topic.
F: Yeah, so thank you, Anthony, for talking about the pdata changes I'm making. And thanks, David, [inaudible], Alolita, Anthony and Tigran, and Bogdan and others for helping me review them.

F: Essentially, they should be getting in pretty soon, and after those are complete I actually think we should profile. You know, we should do CPU and memory profiling for just the receiver itself, because right now we're just firing shots in the dark, without actual data on what's consuming the most. So Kuang, some other folks on my team, well, the rest of my team and myself: we're going to focus on that after all the pdata conversion is done.
F: We shall do a bunch of profiling and, you know, we'll report the results. We have infrastructure for continuous profiling, so, you know, we'll run that, produce pictures, and send PRs with optimizations.

A: All right, cool. So, Josh, how do we also better understand the requirement that you mentioned? You know, that should be something that's factored into the general design doc, because I'd definitely love to see a design before actual implementation.
G: Well, I'm not sure we're talking about exactly the same things. I try and stay tuned to what's being discussed here. The thing that I was raising was really a little bit, potentially, more general than just the Prometheus receiver, and I think...

G: ...you are justified, and right, to say that we should focus on the sort of integrated use case first. I do have one person at Lightstep who's talking about how to do this stuff inside of our system; you know, so it would be done behind our ingestion point, but it could be done using an OTel collector behind...

G: ...if, in the future (that's kind of the vision) you would have a service discovery publisher, and you'd have a sort of join operation that can do that somewhere in your data pipeline. I don't think we should hang up Prometheus development on that. You know, we've got to stay tuned and keep in touch with that effort.
A: Yeah, agreed. I mean, yeah, I completely agree with you. So, I mean, we'll kind of... I'll touch base with Jay and we'll figure out, you know, what some of the dependencies are on the work that he's starting to do; we'd like it to be a collaborative effort. And Brian and others from the Prometheus community: definitely welcome your review and your feedback.

A: You guys are kind of our experts, so I really would love to get feedback as we optimize the Prometheus receiver especially; so, you know, looking forward to that discussion. There was another topic that I wanted to ask you about, Josh: the histograms discussion that is ongoing. Did you want to give a quick update on that?
G: I don't have as much feedback as I was hoping. I'm struggling with this right now; I feel like nobody's talking. It's like me in an echo chamber, other than people who are extremely close to the topic: like, authors of histogram implementations are the ones talking, and I was trying to find a consensus in the wider community, and I'm almost beginning to doubt my own recommendations from a month ago.

G: Because there are implementations that are good, and widely available, and could be taken off the shelf today, that give us histograms; and all OpenTelemetry is really created to be is a kind of vendor-neutral, cross-vendor protocol. So the question is: originally I said, to myself and on that issue that I posted, we should choose one histogram, because there will be a tremendous amount of data pipeline engineered to handle it, across all the vendors and all the open-source ecosystem.
G: And we have this pretty strong consensus among Prometheus developers, as well as some of the vendor histogram people (I'm calling them the people who are experts), all pointing at this exponential histogram, and it's a pretty simple protocol. And yet we don't have any, like, ready-to-go, off-the-shelf libraries that create this new protocol, and it hasn't even been kind of finished yet. The Prometheus team's developers have a prototype; you know, Dynatrace has a prototype; we have lots of prototypes. But it's something where it's not clear.
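For context on why the exponential histogram protocol is called simple here: under the OTEP 149 scheme, bucket boundaries are consecutive powers of a base derived from a single scale parameter, so mapping a value to its bucket is one logarithm and one ceiling. A minimal sketch of the idea (an illustration of the scheme, not any SDK's actual API):

```python
import math

def exp_bucket_index(value: float, scale: int) -> int:
    """Bucket index for a positive value in an exponential histogram.

    The base is 2**(2**-scale); bucket i covers (base**i, base**(i+1)].
    A larger scale means a smaller base and finer buckets.
    """
    base = 2.0 ** (2.0 ** -scale)
    return math.ceil(math.log(value, base)) - 1

# At scale 0 the base is 2, so 5.0 falls in bucket (4, 8], index 2.
print(exp_bucket_index(5.0, scale=0))  # prints 2
```

Because every producer at a given scale agrees on the boundaries, merging two such histograms is elementwise addition of counts, which is part of why the format appeals across vendors.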
G: So my recommendation a month ago was that we should choose one, and I'm starting to doubt that, and I'm curious what people in this room think, although I know there's a position here. I like the Prometheus proposal; I like the exponential histogram; I will be glad to support it. But the question I'm having now is... so on my issue we got, you know, both OpenHistogram being discussed as well as HDR histogram: two popular, existing, well, you know, well-represented-in-the-world histograms.
G
These
are
have
been
around
for
a
long
time.
Why?
Wouldn't
we
use
them?
Why
wouldn't
we
just
say:
go
ahead,
any
histogram
is
good
enough.
Of
course.
The
answer
is,
if
you
end
up
putting
data
into
a
prometheus
pipeline
and
prometheus,
won't,
accept
htr,
histogram
or
open
histogram,
then
we're
left
converting
yep
data
and
that
both
leads
to
inefficiency.
It
leads
to
more
code,
at
least
higher
maintenance.
It
leads
to
you
know,
you
know
cpu
cycles,
and
so
that
was
why
I
said
we
should
choose
one
by
starting
to
look
like
choosing.
G
One
is
making
us
go
very
slowly
and
I
wonder
what
people
would
think
if
we,
if
I
was
to
reverse
my
opinion
completely
and
say
I
think,
open
telemetry
should
support
more
than
one
histogram.
Essentially
we
have
this
nice
agreed-upon
exponential
histogram
that
the
prometheus
developers
and
some
of
the
histogram
experts
have
have
talked
about
in
otep148,
I'm
sorry
149
and
then,
whichever
one
it
is,
and
then
we
have
these
other
log,
linear,
histograms,
two
of
them
that
have
been
around
for
a
long
time,
open,
histogram
and
hdr,
histogram
and
gosh.
G
If
we
had,
if
we
if
hotel
were
to
say
for
its
sdks,
any
histogram
will
do,
and
I've
literally
posted
that
issue
a
long
time
ago.
Any
instagram
will
do
because,
because
at
some
level
we
don't
care
and
and
it's
fine
if
javascript
chooses
the
best,
histogram
and
and
java
and
and
go
chooses
the
best
histogram
in
their
own
languages
and
java,
we
might
end
up
with
hdr
histogram.
It's
been
so
established
for
so
many
years.
It
has
so
many
options.
G
It
has
so
many
performance
options,
but
you
wouldn't
choose
hdr
histogram
in
a
language
that
didn't
have
a
very
strong
implementation.
You
might
choose
a
dd
sketch
or
whatever.
So
what's
the
I
don't
know,
I'm
a
little
bit
on
the
fence
now,
because
after
posting
that
issue,
I
didn't
get
a
lot
of
feedback
other
than
from
the
same
people
that
I'd
already
gotten
a
lot
of
feedback
from
and
so
yeah
I'm
starting
to
think
what?
If
and
I
haven't
posted
this
yet
and
I'll
open
it
to
anyone
who
wants
to
talk?
What?
G
If
we
had
a
long
linear
protocol
with
parameters
that
could
support
both
open
circle,
open,
histogram
and
hdr
histogram
as
another
option,
it
might
exceed
our
delivery
of
sdks
but
requests
fire
a
conversion
stage
and
in
the
collector
and
then
it
would
also
require
vendors
who
don't
want
to
run
force
their
customers
to
run
that
conversion.
G
To
tell
the
customer,
you
have
to
use
the
one
that
we
support,
maybe
a
vendor
could
say
I
don't
support
circle,
history
or
open
instagram.
I
do
support
the
other
app
or
it's
going
to
lead
to
vendors.
Just
supporting
multiple
histograms,
which
actually
I'm
starting
to
think
is
not
such
a
great
big
deal.
A: No, I mean, that's a very good point, and I think, Josh, you know, some of us have been (at least I have been) following your OTEP, and the discussion around it, from the sidelines, but definitely very interested. Because, you know, as you know, even from AWS's side: we support both Prometheus histograms, obviously, as well as CloudWatch histograms, which are their own...

A: ...you know, implementation, and we'd obviously like to get both supported, right. Because, again, until CloudWatch changes its implementation, or supports, you know, other histogram ingestion as well, it's incompatible with the Prometheus histogram, for example, today. So, given some of those considerations, you know, being able to support multiple formats is good, but that doesn't mean that it has to be natively, you know, OTel-natively, supported. As you said, if there is a...
A: ...format that is, you know, natively supported, then there's a transformation, you know, layer required, which will kind of do all the magic but then also be fully compatible. So, I mean, it's an implementation approach at that point. I mean, it goes without saying that we should be supporting multiple types of histograms, but, you know, how we do it is a different layer, so...
H: ...is that several of these things you can convert between. Because, like, a log-linear histogram and an exponential histogram, you know, you can lossily convert between those reasonably okay. No, it's not perfect, but you can do it. That's not the case for HDR histogram, or anything which varies buckets over time. Yeah, because one of the problems you have as well is that, say I start an HDR histogram at the start of my program, and my program's been running for a week: the histogram's buckets have now been shaped by that week of data.

H: So that means that it's fundamentally not very useful for Prometheus, and anyone who basically wants to try to get useful data, because you've got all this old data which has decided the size of the buckets; so new information is basically discarded, or ignored, or not as important. But anything that's... you know, it doesn't matter how you select your buckets: as long as they're fixed, you can convert. Anything with dynamic buckets is more problematic.
G: I'm not sure... what I was trying to say is, I don't entirely believe that HDR histogram couldn't be adapted, but I don't want to try and force Prometheus to believe that either. So, like, anyway, I recognize that some particular uses of histograms are really not ideal for cumulative data, is what it comes down to.

G: I think (and that may be a reason why, just to add some nuance to what I just sort of said earlier) perhaps, if there were several histograms, you would say that there's the native one, or the built-in one: like, everyone must support this histogram. And then there's a sort of optional histogram, and the collector can convert it to the one that we all support, and vendors may support the optional histogram if they want to avoid conversion costs.
G: That would be kind of like two classes of histogram: first class, second class. It's just, I think it's feeling extremely contentious to me to say we are never going to accept OpenHistogram, because I just feel like that's an unpopular thing to say in the world, when we've got this off-the-shelf, ready-to-go histogram that a lot of people find useful.

G: Oh, we would. I think accepting multiple histograms means providing a histogram converter in the collector. But the question is something like: what is the final state? Is it okay if half the default SDKs produce one histogram type and the other half produce the other histogram type? I feel like...
G: ...I don't think that's a good situation. I think every SDK must have one implementation of the default, standard exponential histogram, because vendors shouldn't have to support everything, and you shouldn't have to run a collector; you should be able to choose an SDK that supports the one histogram. But as a matter of getting out there, and letting users who are familiar with the existing histograms keep using what they've got: I don't feel strongly that telling a user "you're never going to use HDR histogram" is right, like...

G: ...if it's been working for you, why not let you use that? And so, can you take your SDK, your OTel SDK, and pull in the HDR histogram aggregator and just start writing metrics? Because that might get us an SDK a lot faster.
H: But think of it this way: HDR histogram is a histogram internally, in its implementation. Externally it produces quantiles, and, like, if you want to split out quantiles as gauges, you can; that's the summary, in Prometheus terms. But it's only a histogram in terms of internal implementation, not how it looks externally, unless you're going to poke around the internals, I guess.
G: Yeah, I still think this is, like, beyond the level of... well, the technicals are sort of less important at this point. I see what you mean, Brian: so this is why some histograms are unacceptable to some audiences. But what I'm realizing is, it feels like a really difficult decision to make, to say that, just because some histograms are unacceptable to some audiences, OTel will only accept the one type of histogram.

G: Oh yeah, that's a good question. Yeah, honestly, I've been confused, and I'm fine; I think I can finally explain everything now. So: a log-linear histogram is a lot like an exponential histogram.
G: But there's this additional linear thing going on. So, for me, it's easiest to explain with OpenHistogram. The exponential factor in an OpenHistogram is 10, so you've got 1, 10, 100, 1,000, 10,000; you know, and it's got 0.1, 0.01, 0.001, and so on.

G: When you go log-linear in a decimal-based system, your subdivisions are linear between one and ten. So you get 1 and 10, but you also get 2, 3, 4, 5, 6, 7, 8, 9; and then between ten and a hundred you get 10, 20, 30, 40 ... 80, 90. And two of the reasons why people like that base are that, you know, you can convert SI units without loss (you can factor in a mega prefix or something like that, and you don't lose precision)...
G
You
can
out,
you
can
factor
in
a
mega
or
something
like
that
and
you
don't
lose
precision
and
because,
if
you're,
if
you're
setting
slos
like
I
always
round
millisecond
number
or
10
second
number,
like
your
histogram-
is
exact
up
to
that
decimal
boundary.
So
that's
why,
anyway,
in
binary
numbers
work
but
yeah
in
binary,
which
is
where
the
floating
point
thing
comes
up
more.
Its
share.
G
Histogram
is
also
log
linear,
and
it
took
me
a
long
time
to
figure
this
to
really
get
this,
but
it's
log
linear,
in
the
sense
that
each
there
there
are
ranges
between
you
know
two
and
four
there's
ranges
between
four
and
eight
there's
ranges
between
eight
and
sixteen.
G: Those are each a bucket, and then they subdivide linearly. So the way you can imagine an HDR histogram is: you count leading zeros, and that tells you what bucket set you're in, and then they're linear, based on some number of bits. So you could have three bits of linear precision, and that means that your binary powers are every two-to-the-third: so you've got one and eight, and then you're going to do linear between one and eight, and then you've got eight to sixty-four, and you're gonna...
G: So either way, you end up with these round powers of ten, or powers of two, between your exponential powers of two or powers of ten. That is why "log-linear." It makes it sort of like when you first learned about logarithmic plots in school: what you saw was exactly a log-linear histogram. That's, like, the standard way of presenting it in scientific literature, because it's natural for humans.
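The log-linear bucketing described above can be sketched in a few lines. This is only an illustration of the idea, not the actual OpenHistogram or HDR histogram code; both libraries add many details beyond this (negative values, zero handling, bit-level index tricks):

```python
import math

def log_linear_bucket(value: float, base: int = 10, subdivisions: int = 9):
    """Return (exponent, slot): the log part picks which power-of-`base`
    range the value falls in, and the linear part subdivides that range
    into `subdivisions` equal-width slots (e.g. 10, 20, ... 90 in [10, 100))."""
    exponent = math.floor(math.log(value, base))
    lower = base ** exponent
    width = (base ** (exponent + 1) - lower) / subdivisions
    slot = int((value - lower) // width)
    return exponent, slot

# OpenHistogram-style decimal layout: 35 is in decade [10, 100), slot [30, 40).
print(log_linear_bucket(35.0))                          # prints (1, 2)

# HDR-style with 3 bits of precision: linear steps of 8 between 8 and 64.
print(log_linear_bucket(20.0, base=8, subdivisions=7))  # prints (1, 1)
```

With base 8 and 7 subdivisions this mimics the three-bit layout Josh mentions, where the range eight-to-sixty-four splits into linear steps of eight.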
G: The thing that Prometheus is really concerned about is not letting tiny buckets build up over long periods of time, and you could say that about an exponential or about a log-linear here; they're sort of independent. And so, anyway, it's...
H: Yeah, like, I think that log-linear versus exponential, that can be converted, and it wouldn't be unreasonable to say: hey, you can support it, but be aware, if you transfer between them as a vendor or whatnot, there's going to be some... yeah, you're going to eat a little bit of loss, you know, maybe five, ten percent of precision, or accuracy, I should say. Not the end of the world.

H: The issue is more if you're using, like, HDR histogram's resize function, because then that's where things start to get funky: the buckets actually stop being static and can no longer be aggregated, yeah. It's not the five-percent loss; it's the "this data is now useless" sort of loss, which is, yeah, more of an issue.
G: Yep, all right, I totally agree. So I think... this has been helpful to me. Thank you for bringing it up, Alolita. Thank you, Brian. I plan on posting something soon.
A: Yeah, I think, Josh, that, you know... I mean, as you can see, even in the discussions that we've been having, you know, in this workgroup especially: we definitely think that, you know, again, settling on exponential as a default, and then being able to have an efficient transformer, is the way to go for implementation. Because we have to be...

A: ...you know, clear about what we support as a default, and then, you know, support as many other formats as possible with the transformer, right. And we'd like to...
A: ...well, I'll take an action item to make sure... I mean, we've already been, you know, getting the principal engineers involved on the CloudWatch side, so we'll get them, you know, directly onto this discussion. Should they join this group, or... which SIG would be the right one, maybe? I mean...

G: This has its own audience, but, yeah, there's a relevant issue: 1776.
G: My plan, sort of, for the next step was to post a summary of kind of what I just said to you all in this room on 1776: to say this doesn't look like "one is the right choice" anymore to me, but rather a first class and a second class, with just accepting that there will be loss; vendors can choose to do it, or they can put the collector in front to convert, and so on. Because I just feel like the enemy of the good here is the perfect one histogram. So I'd rather have a few good choices at this point, given where things have gone.
A: I mean... so I'll definitely take an action item to get the CloudWatch, you know, histogram implementation written up in more detail and, you know, kind of posted.
G: Then what I have to offer, from Lightstep's perspective, is that we are using OpenHistogram internally; that's our one true internal histogram. So what we've done is build a converter that converts from OTel's explicit histogram into circllhist, and we've got one to convert from DDSketch into circllhist. So we've already done histogram conversion, and we can show you all what that looks like, and that could be plopped into a collector at some...

G: ...you know, with some modification and so on. So I'll follow up with both the conversion (just a sort of proof of concept) and look forward to some CloudWatch information, and then I'll post on 1776 this week.
A: Sounds good. I mean, that's a good step forward. I mean, happy to unblock; we've just been focusing in on the Prometheus implementation and compatibility. So, Josh, I'll definitely get that information, you know, for CloudWatch posted, because there is a great interest in making sure that there is compatibility, yeah.
G: The question I'm trying to figure out, that I'm trying to answer now, is: is there one sort of, like, parameterized histogram that's kind of a catch-all for a bunch of other histograms? Like, I know there is one that can describe both OpenHistogram and HDR histogram at some level. If CloudWatch fell into that same bag, it'd be pretty cool.
A: Yep, I agree totally. Okay, so this is a good... this is a good discussion. Again, Josh, thank you for going into the details, because we are super interested in, you know, ensuring that the implementation is decided sooner rather than later. How we provide full compatibility and full support for all types of data and histograms is definitely very important, especially in the Prometheus world; so, you know, that's something that should absolutely be supported by default. All right, any other topics? I don't think we have any others.
F: So one thing we'd like to do is have basically full parity with the Prometheus server, yeah. And, you know, to accomplish that there are a few things. One (I think I asked about this, but I didn't get too many results): is there some form of, like, a design doc for the Prometheus remote write exporter?
A: Yes, there is, there is. Yes, yes, we had submitted... I mean, since we built the Prometheus remote write exporter, we do have a detailed design doc. I can definitely...

G: Wilkie wrote one and shared it with us. I don't remember where or when, but I've seen it.
F: So, you know, after that's in, I plan on, like, getting an actual Prometheus server, making it scrape from the same endpoints, consuming the data, and then comparing.
I: Okay, yeah, I just started, yeah; sorry for jumping in, like, in the middle. There's a final question that I have from my team. We are curious about, you know, setting the collector up internally to start playing with it, specifically with the Prometheus receiver; we're very interested in this specific part. And we were wondering about milestone one: I know that you started working on the phase two, but I still see items in the first-phase milestone.
A: And, you know, the idea is that Carlos will continue going through all the phase two items and, you know, complete those before metrics go stable, right, because that's a dependency for metrics stability in the collector, yeah. So, I mean, again, happy to... you know, just ping me if you have any questions; we can walk through it together, because there's a whole bunch of items that are in flight, or we've completed them, or, you know, different folks are testing with them. So...
A: All right, coolness. All right, I think we can end, then; we have 10 minutes to give back to everyone. Thanks, thanks, guys. So thanks, everyone, for joining. Thank you. Yeah, thanks.