From YouTube: 2021-05-18 meeting
A: Hey, can I just hijack, like, two or three minutes at the start of this and ask a few more organizational questions? I'm rather new to this space. I've been following the metrics data model meetings. The API meetings are at 1am in my time zone, so I haven't been attending those, but yeah.
A: I'm wondering how to contribute a bit more than just, quote-unquote, listening in on those meetings. From what I understand, now that the metrics data model has been declared stable, I think there might be some work that needs to be done in terms of the experimental APIs and such. So I wondered whether there are resources for me to find out more about what is pressing and where I can actually help out.
E: I think Riley might be the best one to answer this, so I'll just say two things in general. One is: feel free to jump on issues and comment, and in open source, if you sign up to do work, no one will stop you, generally. It's one of those deals. They might yell at you for how you do it, at least if it's me, but no one will stop you from signing up to contribute work on the API stuff.
E: I'm sure Riley has a list of things that are kind of easy to pick up. I know, from a data model standpoint, one of the things is: we got the protocol marked stable, and we have the specification in the data model SIG. The question I want to ask is: do we think people can write an exporter for metrics, or an importer for metrics, against the API? If the answer to those two questions is yes, I want to get that marked stable relatively soon. I don't think the answer right now is yes, so that's what we're going to be doing a little bit of in this meeting, although we also want to talk about that histogram bucketing OTEP. So if you're interested in either of those two pieces: reading through that protocol, asking "could I implement an exporter?", and just opening bugs saying so...
E: That would be ideal. Riley, do you have any other thoughts there?
C: Yeah, so I think, first, ask yourself: are you willing to spend a lot of time, or do you have a limited amount of time, and what do you think you're good at? Depending on your answer, you might want to help on the spec part, or you might want to help on the implementation, like some prototype.
C: So on the spec part, whether it's the data model or any other thing in OpenTelemetry, there are SIG meetings like this, and during these we'll have discussions. One simple way to contribute is to join those discussions with ideas and feedback, and...
C: ...help them. Ultimately it comes down to the real things that you deliver to the community, and I think those are either the issues or the PRs. So look at the topics, the issues and PRs, and see if you can pick something. You can start by contributing to others' PRs with comments, or by helping them identify the gaps, and later, if you have more confidence, you can send a PR yourself. Nobody is going to stop you; we will welcome those contributions.
E: All right, that was a great discussion. I should actually list that as our first agenda topic.
C: And by the way, check the OpenTelemetry community page. There's a repo, opentelemetry community, and it has the basic rules: how you can become a member, how you can find sponsors to join the community, and how you should get started. And each repo has a CONTRIBUTING.md markdown file giving you the basic steps to get you started.
E: If someone wants to take notes on what we said, to summarize it: I was unfortunately typing for half of what Riley said, so I apologize, I don't...
E: "I will take the minutes." Okay, thank you, thank you. Okay, oh, I'm not presenting yet, so let me start presenting. It's been seven minutes, so I think we have folks here. Thank you all for coming. If you haven't added your name to the agenda document, please do so.
E: I have to make this a new Chrome tab, where everything goes to hell. OpenTelemetry specification... here we go. Okay, cool. So thanks, everybody. We just had that meta discussion on how to contribute. As you arrive, please make sure you fill out the attendees list, thank you, and let's get started with blocking issues and triage.
E: So, from the standpoint of where we stand on blocking issues: well, first, before I go to that, I want to call out that I think we have marked the protocol as stable, meaning we're going to make no more breaking changes to it.
E: The next step is to get the data model specification to a state of stability where the bits that we have in it are marked stable, and we start moving to a model where, like we're going to evaluate today with the histogram bucketing algorithm, we're going to have an OTEP, take that OTEP, work in some experimental way to define, you know, concepts and ideas and things, and then bring that into the specification as a stable, full proposal.
E: So the first question is: can I take, you know, metric output and convert it into some other format that I need to record into? And then the second question is: can I implement an importer? So from Riley's, like, API standpoint, it's: can the API generate OpenTelemetry protocol correctly? And then the other question that I want to ask is, you know, if we were to import...
E: ...you know, collectd metrics, or StatsD, or Prometheus, or OpenMetrics: do we feel like we have enough written down in that document that it's clear how to import it into OpenTelemetry? And then, is it clear how to export it? Those are kind of the two primary concerns that I think the data model spec document needs to address.
E: So with that said, when we look here, we have this notion about requirements for removing labels safely as required for GA, and I know that Josh has a couple of pull requests here, and even an OTEP that I think is somewhat related to this. Is that correct?
F: Yeah. So I felt, from a few of the recent meetings, that there were still open questions about why UpDownCounter versus Gauge, or why the default aggregation for one would be sum and for the other would be last value, and so I tried to spell it out in a much longer form. The first feedback I got was that it's too long.
F: So I can go edit this, but it's definitely trying to justify why we made these decisions about the two different types, UpDownCounter and Gauge, and then how attributes sort of mean different things when you apply them to those two fundamental types. I would like people to review and approve.
F: PR 1646 merged, which has the basics of start time, and then I have two more PRs that were neglected and that I need to update, which talk about temporal alignment and then re-aggregation. And I think this OTEP is really just, like: "wait a second, I'm seeing more confusion than I expected, so we should just review that," I think.
C: For the metrics exporter, I think Victor is working on some end-to-end story, like the SDK exporter prototype, so that might be able to cover this one. For the metrics importer, I wonder if anyone is working on that; if not, can we identify someone?
F: I've been working on metrics import for a long time; I've been doing Prometheus import exclusively since August of last year. A lot of the work that we've been doing on the start-time specification has been because we know what the problems are for import. So I'm pretty confident that we have that story. I'm not sure the documentation is perfect.
E: Yeah, so what I'm looking at: okay, to answer Riley's question in a different way, if we go read through this document, there are some to-dos in here, but those are around other out-of-scope use cases, so I'm not as worried about that. If we look through the document, and we look at...
E: Yeah, I wanted to be careful with that term, because I don't want it to necessarily mean the receiver. The other thing that this could be is alternative APIs, so, like, Micrometer. If Micrometer decides not to go through the OpenTelemetry API but to directly generate OpenTelemetry protocol, is there enough description of the protocol that they could do that?
C: I see that, and Bogdan probably mentioned something like this: people who write an adapter or something, to take the data from, like, Micrometer, and then you can process it in the OpenTelemetry SDK.
E: We need to outline these. Like, for resources, for example: does this need to be outlined in a metric-specific way, or do we just call out to things in the rest of the specification? For temporal alignment, Josh has been kind of driving this; same with external labels and resources. And when it comes to stream manipulations, these are kind of optimistic, best-effort guides for people who need to deal with concepts in OpenTelemetry that might not be in their back ends or in their metric systems.
E: So what I'm suggesting, when it comes to getting this document marked as stable, is that I would like folks to basically take a second read through the document, with the lens of someone writing either an exporter, or a receiver, or the SDK itself: to look through here, see if there are unanswered questions, and open bugs.
E: I think there's a lot of stuff that Josh is handling here. My opinion on this resources bit is that there are likely to be some things we need to call out for resources, but for now we should try to just defer to the specification of what resources mean in general, in all of OpenTelemetry, and if we have to clarify metric-specific aspects, we can. And then these stream manipulations are kind of optimistic, on demand, if we feel we need to flesh something out. So that's kind of it, plus external labels.
F: So I think that resources and external labels are really the same thing, and now we have schemas. I think we can actually just remove "resources" and remove the word "pending" there; I don't think there are any real questions that need to be addressed in this document. The external labels question, though, is pretty real for importing Prometheus data, and I mentioned it in the spec meeting an hour earlier.
F: We have this pattern: any time we're talking about putting attributes on a piece of data that describe the collection that happened (so sampling probabilities, dropped spans or dropped attributes), external labels fit into the same category. This is about the collection path, not about the data. And what I'm hoping we can do (I have a long list of things I'm supposed to be doing, so I'm not there yet) would be an OTEP that says: here's how we can use a schema to give metadata about your attributes without changing our protocol.
F: So we don't have to actually, physically, put a field into our attributes to say "are you identifying? are you one of these special collection-type attributes?" We just put it in the schema. And then, if you don't care about the rich data model, you can just see key-values, that's it. The external labels look like resource labels because they're just key-values, but if you want to know meaning, you're going to parse the schema, and you're going to see: oh, "dropped attributes", that's a collection-path attribute.
F: Oh, an external label named "prometheus replica": that's a collection-path attribute. And I don't think we need to say anything more. And then, in the Prometheus working group, there's a question about how I join my secondary service-discovery attributes using resources, and that's, I think, an independent question. It's related to resources, but it's not part of the data model. It's like...
E: ...an operational concern, effectively, about resources. So I'm going to call out: I think what you're talking about is what I refer to as "observability of observability," and I think there's going to be a whole effort in OpenTelemetry around that as people adopt it and run into problems, and I want to call that out of scope for v1. Literally all I'm worried about is: can you generate correct OpenTelemetry metrics on the happy path? Can you get stuff out? And then this observability of the observability we can work on over time, yeah.
F: We could just say it's a concept for the future, yeah. There are sort of two levels here: if you just want to see key-values, it doesn't matter; external labels are just key-values. And the Prometheus working group explicitly confirmed that, yes, we're using them in two ways: external labels might be real and they might be collection path, and that's where we can leave it for now. Cool.
E: Cool, okay. So with that said: folks who have the opportunity, read through here and ask yourselves if there's enough detail that you could, you know, write an exporter or adapt things, and open bugs. That's kind of what we need: just a little round of hardening.
E: Are there bits in this document that are unclear, but that we're too close to the subject matter to see right now? Let's kind of get things stabilized. There's also this temporal-alignment discussion that I almost wonder about: does it need to be here, or somewhere down here, right?
E: It is a manipulation; I think we can call it that, yeah. And for everything under stream manipulations, I'm going to write an introduction describing how these are meant to aid people in using the metrics, but they are not meant to be a set-in-stone specification. This is literally, specifically, around that use case of helping people write exporters.
B: Yeah, so I wanted people to comment on: if I were to, quote, "write an exporter," what is within scope, right? Because obviously the simplest exporter could just be based on how the API and SDK go: just map to the protobuf and send it out. But are you also expecting some experiments, or some code, or some concept around, you know, delta versus cumulative, the temporal alignment and other alignments? Do you include that?
F: As part of the export, delta-to-cumulative is kind of a key requirement for any Prometheus consumer, so I think of that as being the first requirement that has to be met in an SDK: the default output from a counter should be cumulative. But I think we're trying to avoid being really rigorous about architecting an SDK for the entire, you know, organization. Like, a Go SDK does not have to be the same as a Python SDK, and maybe in the future there's a reason for that to happen, but that's very far out. I think we're hoping right now just to say the API works, the data model comes out correctly, and we're still working on what the SDK has to do to meet the spec.
E: Yeah. To answer your question, Victor: if you try to implement an exporter, and you want to pretend like you're exporting to Prometheus, and you want to take a crack at what dealing with the delta sum looks like, then that, I would say, is within scope of the data model. Do we give you enough information to know how to deal with that scenario?
E: If that makes sense: because the algorithm proposed is not meant to be the best, it's just meant to be enough to give you an idea of where to go to implement what you need for your use case. And, unfortunately, for delta-to-cumulative specifically, I want to call out that I think the right algorithm is really dependent on your architecture. We can go for an eighty-percent-decent case, but likely there are always going to be people who want to customize. So, anyway, does that answer the question?
E: Okay. One thing I didn't have a chance to do today: the last time I looked was on Friday, and I didn't have a chance to check bugs that have come in since Monday. So if anyone's aware of one, call it out, but so far I think this list still represents where things stand.
F: Okay. And then I have a PR open about temporal alignment, which I can get back to, just to spell it out a little bit, yep. And then there was a follow-on to that, which is just "here's re-aggregation"; now it's just everything we've built up, and it's also in stream manipulations, and...
E: We'll be very specific for people reading our notes. Cool, all right, so we're going to call time shenanigans on that. I guess no one had any other topics they wanted to talk about, so I want to go into the histogram bucketing then, if that's all right.
E: Oh, this was closed in favor of the OTEP, right? So I clicked the wrong one.
E: Okay, so, effectively, I'll give a quick recap of what's going on here. Wrong one; this one. This is a proposal to add a new bucketing type for histograms. Right now we have explicit buckets, where we define what a bucket is: it goes from, you know, minus infinity to 0, then 0 to 10, then 10 to 100, then 100 to infinity, or whatever, and then we keep track of counts in the histogram within those bounds.
E: So this is a proposal where we actually have exponential buckets, as opposed to explicitly defining a min and a max. So, the exponential bucket...
E: Does anyone have any recommendations for how to view this? Should I view it as a file, maybe, so it's easier to read? That works, okay. This is not the one that had the picture that I liked when I was reading up on this.
E: I'm sorry that I forget where that was, but the idea here is: you use an exponential bucket scheme, where you start with, you know, something of base two, and each subsequent bucket is exponentially greater than the previous one. It's just that much wider, because you're using an exponent to calculate the bounds.
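As a rough illustration of that idea, with a base-two reference and a scale parameter controlling how finely each power of two is subdivided, the index arithmetic might look like the following. This is a hedged sketch, not the OTEP's normative definition; the function names and boundary convention are invented here.

```python
import math

# Hypothetical sketch of base-2 exponential bucketing. With scale s, bucket
# boundaries are powers of 2**(2**-s), so a positive value's bucket index is
# floor(log2(value) * 2**s). A higher scale means narrower buckets.
def bucket_index(value: float, scale: int) -> int:
    return math.floor(math.log2(value) * (2 ** scale))

def bucket_bounds(index: int, scale: int) -> tuple:
    base = 2 ** (2 ** -scale)  # growth factor between successive boundaries
    return (base ** index, base ** (index + 1))
```

At scale 0 the base is 2, so the value 5 lands in bucket 2, which spans [4, 8); at scale 1 the base is the square root of 2, and the same value lands in bucket 4.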
E: Okay. So there are a lot of nice details in the explanation. For example, these are good for long-tail distributions, because it's a lot easier to represent those long tails and actually get some level of granularity there. Instead of just saying that anything from 200 up to infinity is in the same bucket, you actually start to divide it. In any case, I'm a big fan of this capability. I think it's relatively new in the metrics space; it deviates from a couple of protocols, but I think in a really good way.
E: Let's talk about the big open question. On the pull request, it was kind of asked: with this exponential bucketing, do we specify, or allow people to specify, a different reference base, and how that base scales with each bucket to define its size? Or do we agree on a reference base of two? Because then, when it comes time to deal with different histogram buckets (sorry, histogram exponential buckets), we should be able to merge them mathematically: if everything's base two, even if you have different base scales, there's a way we can calculate how to merge buckets together.
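A minimal sketch of why the fixed base-two reference makes that merge work: halving the scale merges each pair of adjacent buckets, so two histograms at different scales can both be downscaled to the coarser common scale and then added bucket by bucket. The dict-of-counts representation and function names here are illustrative assumptions, not part of the proposal.

```python
# Hypothetical sketch: merging base-2 exponential histograms at different scales.
def downscale(buckets: dict, from_scale: int, to_scale: int) -> dict:
    """Coarsen bucket counts; each scale step halves the bucket resolution."""
    assert to_scale <= from_scale
    shift = from_scale - to_scale
    out = {}
    for index, count in buckets.items():
        # Arithmetic right shift floors toward -inf, matching bucket geometry.
        out[index >> shift] = out.get(index >> shift, 0) + count
    return out

def merge(a: dict, scale_a: int, b: dict, scale_b: int):
    scale = min(scale_a, scale_b)
    a = downscale(a, scale_a, scale)
    b = downscale(b, scale_b, scale)
    for index, count in b.items():
        a[index] = a.get(index, 0) + count
    return a, scale
```

Merging a scale-1 histogram into a scale-0 one simply collapses its bucket pairs first, then sums the counts.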
F: The central discussion point at the end of the OTEP thread is that fixing the reference base to two is a decision that restricts the possibilities, but also makes things mergeable in a pretty strong way. So I think everyone's leaning in that direction; to me it looks that way, at least, and I support that.
E: Yeah. Should I go back to the comments on here? It's less easy to read what's going on, but right here the idea is: should we just specify a reference base of two, go forward with that option, and not start with a configurable base?
F: I agree with that. It's really hard to see this in the thread, and I kind of wish UK would update it. I think it's hard for non-experts to follow this discussion when they come into it every two weeks. So I think it makes sense; I'd like someone to help clarify this for everyone else, but maybe I'm just asking for too much.
E: Yeah, yeah. But for this discussion I wanted to try to frame it, so I came up with a simple description and didn't draw a picture. For folks who have questions: I'll just take a general survey of the room. Put a plus-one in chat if you don't know what the hell we're talking about.
E: And then put a plus-two if you don't care about the decision, if you think it's not important. And then put a plus-three if you're just on board with "let's go fix base two, add the base scale straight away, and then accept the OTEP."
F: Okay. I mean, there are legitimate questions about how, you know, there's no best answer for histograms, and maybe we're going to end up realizing, three or four years from now, that there was a better choice, and we can introduce a new bucket data point, and by then we'll know a lot more about the cost of converting them and what people really need. Maybe. So I think just getting something to start with is so important, because we have nothing right now.
F: I think, well, it's just mergeable now; we could just merge it. I'm not sure how we say someone should decide it's done.
E: All right. So in terms of action items, I guess the authors of the PR are not here, right?
E: So I guess the question is, what I'd like to get from this meeting is: is someone here willing to kind of shepherd, or steward, or, what's the word, mentor (I don't know what language we use nowadays), but, like, mentor this PR through getting it into the specification, right? Beyond just accepting the OTEP, because I think we all agree that this is the right way to go.
F: UK... I think I probably will answer that: I can do that. I'm reading the chat right now, and there is a fine question about the Prometheus histogram.
E: Okay, so my answer to that would be: I'm going to ask a second question. Do we think we can answer the Prometheus integration question as we adapt the OTEP into the spec, as part of the spec work?
F: I don't know that there's enough of a spec in Prometheus to do that, even. What I do know is that I've had...
F: I have some experience with, like, a generic histogram conversion, where you have two histograms with different exponential bases and you need to write some code to convert one to the other. It will be a little lossy, but it's not impossible. And that is, I think, what you were getting at earlier: to get this into a collector, we're going to have to deal with histograms, and I'm concerned that what we really need is a collector release that can work with this data and output Prometheus histograms. That probably means having code to take, you know, a 100-bucket histogram and turn it into a 12-bucket histogram, because that's all Prometheus can deal with right now.
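The kind of lossy conversion being described, collapsing a fine-grained histogram into a handful of explicit boundaries such as Prometheus expects today, might be sketched like this. It is a made-up illustration that assigns each source bucket's count to the target bucket containing its midpoint; a real converter would split counts proportionally by overlap and handle the open-ended edge buckets.

```python
# Hypothetical sketch: collapse a fine histogram into explicit target buckets.
# src_bounds has len(src_counts) + 1 entries; for target_bounds [t0, t1, ...]
# the target buckets are (-inf, t0], (t0, t1], ..., (t_last, +inf).
def to_explicit(src_bounds, src_counts, target_bounds):
    out = [0] * (len(target_bounds) + 1)
    for i, count in enumerate(src_counts):
        midpoint = (src_bounds[i] + src_bounds[i + 1]) / 2
        j = 0
        while j < len(target_bounds) and midpoint > target_bounds[j]:
            j += 1
        out[j] += count  # whole source bucket attributed to one target bucket
    return out
```

For example, collapsing four buckets with bounds [1, 2, 4, 8, 16] onto the boundaries [3, 10] yields three coarse counts.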
F: If you gave Prometheus too many histogram buckets in today's histogram, it would crash, so histogram conversions need to be done. But that actually is not the biggest open question here. It's more: what should the SDKs do to generate histograms? That's where, at some point, I don't think we've ever answered this question: can you take an off-the-shelf sketch algorithm that produces exponential histograms and use it? And the answer is probably no, because none of them produce the base-2 reference-base thing, so DDSketch won't work off the shelf.
F: None of the algorithms work off the shelf for this. How do we actually go about implementing it?
F: Yeah, I think some prototype, probably, and I don't really know exactly what's needed. But, I mean, there's an implication that there's a way to bound the number of buckets in a conversion, for example by adjusting the reference scale, or whatever the base scale is. Is that something we should be demonstrating? And then, still, I just don't really know how we're going to pin down what the SDK is going to do. Big open question.
F: At some point I wrote an issue in the specs saying "any histogram library, any histogram sketch will do," hoping that we could just adopt the best available library that does dynamic histograms. But if we fix the reference base to two, with the base scale as a variable, then not just any histogram algorithm will do.
G: Quick question about this: with the base set to two, are we compatible with DDSketch and circllhist and other stuff, or not?
F: There's a lossy conversion between all these histograms, and there's definitely not a way to get circllhist to convert into a base-2, you know, log-scale histogram without that. But the code I mentioned, which converts from one to another, essentially has to handle that type of conversion. So: I've got a base-10 scale and I've got a base-2 scale, and I'm walking through and adjusting weights to make it work.
F: circllhist uses a base of 10 and is very unconfigurable, by design. And what base did DDSketch use?
F: It's an adjustable base, so maybe DDSketch can be adapted; I'm not an expert on that.
F: Yeah. Michael Gerstner has shown up to this meeting a bunch, and he's not here today, but I will shepherd this by reaching out to the people on the OTEP. How's that?
F: I am signing up for a lot of this here, but Lightstep really cares about histograms. It's probably the biggest thing that we're missing today, and the thing we want from OTel the most.
E: Okay. And if you want to shed some of these: some of these action items are kind of independent. If you want to shed some and get some help, let me know, and let's see if other people are interested in components of this work, because it's definitely bigger than one person. So, in that sense, Josh, I'm going to rely on you to figure out who and what the next steps are, and how to organize it all. Yes.
F: Okay, but don't feel like you have to do it all. Agreed, thank you. I won't do it all. Right, okay: I'll merge the OTEP and ask UK and someone from Datadog to comment and help us with next steps.
E: Awesome. In terms of the shaping of the Prometheus histogram, do we know who's responsible for that, so we can include them? Because I think that should also happen, if we can.
F: Björn had a talk about it; it's less than a year old. I haven't seen it really come together in code yet, so I'm not sure what the state is. Okay, it was definitely a design.
E: Oh, that's a good idea, that's a good idea. Okay, all right. So, before we call time shenanigans on this topic, is there anything else we think we need to look into here? I think, on the state of the OTEP: you know, we have recommendations we're going to provide, but really, merging the OTEP just means that we're going to start investing in this work, right?
G: Yeah, but one last question about this, and I think it was asked in the chat as well, which is the interaction with the new histogram Prometheus plans to add. Does anyone believe that they will listen to us? What should we do there, or should we wait for them and just copy their stuff?
F: I asked a question along these lines at one point, to see what kind of thinking they were doing about the OTLP histogram versus theirs, and it seems like Prometheus is so specific about things like staleness markers. It was a very Prometheus-y design, aimed at how they would store it in their database, which is not necessarily how you put it in a protocol. So I'm not sure there's a problem, but I'm not an expert.
E: I think it's a task that we need to look into. I'm happy to take that work and kind of look into it as well, or find someone else who might be closer and have them look into it. But to answer your question, Bogdan, here's my thinking, and I want to hear whether this resonates with other people in the community: I think we have a proposal on the table that's really good.
E: The real question we should ask, with this new bucketing that we're going to plan to use and consolidate on, is whether these lossy conversions are absolutely a no-go for converting to different types of histogram buckets. So if we define this histogram bucketing solution and people who use circllhist are unwilling to adopt it, that's my sign that whatever Prometheus does will not be compatible with what we do, right? So, in practice...
E: Yeah, I don't want to prevent us making progress on our APIs and our community if they're not ready. That said, I think whatever we do, we should try to make sure it's compatible. But, like, if Datadog and the circllhist folks don't like what we proposed, I think that's a sign that we're never, ever going to get Prometheus to accept what we propose either, and it won't be good, right?
G: Yeah. So, as long as we have circllhist and DDSketch buy-in on this somehow, then I think it's a good thing.
E: Yeah, I like to think of OTel as the metric spec that cares about anything not Prometheus. And it cares about Prometheus too, but anyway, okay, maybe I shouldn't say that out loud, I don't know. All right, let me get back on track. From the standpoint of histogram buckets: any other last-minute concerns? That was a good one to raise, thank you.
E: Cool. Anyone else have any other topics they want to talk about?
E: I will pull up the data model specification again. We had talked about walking through different components and different things. You know, just to call out a few of the action items we have: looking at the current data model specification, making sure it abides by the use cases, and opening bugs.
E: We have a few different topics that I don't think we have fully fleshed out. The "up" metric is an example, around Prometheus compatibility. Josh is going to work on fleshing out re-aggregation and temporality. We have a... oh yeah, let me get rid of that; that's me being stupid. No, that's done; that's already done.
E: I know. You know what happened? When I did a search on the proto, I mistyped things, so I didn't think it existed. Let me close this. Okay, we still have a question on exemplars, for later. Gauge histogram is a thing we have to deal with at some point, that's around Prometheus compatibility, and then raw aggregation is another thing that has been brought up to talk through.
G: There's also something that we probably don't have, things we don't have issues for: they have this "unknown" type, and they also have the StateSet, the state-something type.
E: Like, okay, is this like... in Stackdriver, or Cloud Monitoring, we have string-based data types. That's, depending on who you are, okay: I know for a fact they work, and I also know for a fact that OpenTelemetry won't be able to push them. So, like, there's a specific use case they're used for, and only that use case is allowed to do it. So it probably looks like they don't work, but there's a metric that's allowed to have string values that you can push, yeah.
G: It doesn't matter, but we should have an issue for how to deal with unknowns coming from OpenMetrics, and StateSets, yeah. The only problem I have, the biggest problem I have with StateSet, actually...
G: I don't know if you read their spec, but they say, yeah, it is correlated with other metrics. But, you know, I was trying to clarify: does it mean that whenever I receive a StateSet, those labels are associated with every time series that I receive in that request? No, no, it's whatever the user does. Oh, okay, so...
E: Interesting. All right, I think this is probably going to be a really good topic to walk into, and, in the interest of trying to prioritize some Prometheus compatibility things, that sounds like a good one for us to talk through. I can open the issue. Bogdan, do you think, would you have time to put together, like, the major concerns that you have with StateSet metrics, just a set of open questions for us to walk through?
G: I don't have any questions. I think the only question for us is: do we want to have a standard enum type, or do we do the traditional thing and express them as gauges? Okay, that's something that we just need to convince ourselves about: whether we want to have an enum or not.
E: Okay, all right. That sounds like enough for me to prep this for next time. On that: okay, anyone else have anything else that they think we need to focus on in the next steps? I want to call out, specifically, that there already seems to be, like, a sub-working-group of this working group working on multivariate metrics.
E: So I'm just going to give a quick status update on that, because it's exciting and awesome. There is a proposal to have a columnar representation of metrics in the protocol, so you can either choose what we have now or choose the multivariate form. The multivariate form gets some nice compression if you can actually generate multivariate metrics. That work is happening: if you join the OpenTelemetry multivariate channel on the CNCF Slack, you can see it. All experimental right now, yeah. Did I cut someone off a couple of times?
E: Oh, I think it's Josh, okay, anyway. There's a CNCF Slack channel called multivariate metrics, if you want to see folks doing experimentation there; it's pretty cool. I don't know when it's going to hit this data model, but if you're interested in all that, pay attention. I think it's very exciting, and also still a little bit away. Okay, cool. I don't want to hold you longer than necessary, so sorry for taking all but five minutes. Thanks, everybody, and we'll see you next week.