From YouTube: 2021-05-25 meeting
C
Okay, yes, we can start. Thank you for joining us. We have the first item of the agenda. As you may remember, we used to spend some time triaging the issues. I did triage the new ones; there were only a few of them, so it's all fine. There is, however, one issue, related to metrics, that I have no idea about: it's about allowing exemplar-only metrics. So I will probably ask somebody from the metrics group to go and triage that, since I have no idea.
C
Sounds all right anyway. So that's the one; yeah, it already has the metrics label, of course, but that's all. The next item is about Riley. Well, as you know, he's not here today, but he was mentioning that the metrics spec may be ready earlier than originally planned, which is very good. There's a link to the document itself there. But I was wondering if somebody from the actual group could give us some details.
C
As you can see, we have a few dates there. By the last day of May they will be doing an experimental metrics API specification release; then by the 18th of June they will be doing the same for the SDK portion, along with a few pre-release packages for Go, Java and .NET by the end of July.
C
The API specification will be frozen; that's the plan. By the end of August the metrics API will be stable and the SDK portion will be feature-frozen, and finally, by October...
D
Yeah, I've been looking at the OTEPs repo a bunch, and there are at least 12 that were filed last year or older, and I find it difficult to make sense of them. A couple of them have content that's useful but haven't been touched, and I think we need to start over with some of it. There are three about sampling: one's mine and one's fresh, but the other two are old, and it really leads to confusion.
C
That sounds good to me, by the way. Yeah, I was wondering about that a long time ago, so I think it's good to just close them. We can just create new issues for them. I'll take the action.
C
The next one is about samplers. This is something we already talked about: whether samplers are expected to be used by multiple tracer providers. We came to the agreement that we should actually allow users to do that, but just tell them that when they do, it's at their own risk. I think that's what is blocking it; maybe if you could just check it.
C
If you haven't checked that, please do that after the call. I think it's important to either reach an agreement or, you know, either merge these or make some change, or whatever is needed. I'll check that.
E
So, as the interface, it doesn't matter what requirements you have; you can have specific requirements at the implementation level. Correct? Like, you can say this implementation of the sampler cannot be shared because it's not multi-threaded, or for whatever reason cannot be shared between different instances and so on. So I think it's up to every implementation to decide if it's shareable or not between different tracer providers. In my opinion, no matter what we do in the interface, there will be implementations that will or will not be shareable; it depends.
E
So, that being said, I think this issue started because we wanted to pass a resource to every shouldSample call, and my whole point is that it's a very limited use case where a sampler needs to access the resource. So somebody can simply implement a sampler that accepts the resource as part of the constructor, and it has the requirement that it cannot be attached to different tracer providers, or something like that.
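As a sketch of that constructor approach, here is an illustrative toy only; the class and method names are hypothetical and are not the actual OpenTelemetry SDK API. The resource is captured at construction time, which is exactly why such a sampler should not be shared across tracer providers with different resources.

```python
# Hypothetical sketch: a sampler that receives the Resource in its
# constructor instead of in every should_sample() call. Because the
# resource is fixed at construction time, this particular sampler must
# not be shared across tracer providers with different resources.

class Resource:
    def __init__(self, attributes):
        self.attributes = dict(attributes)

class ResourceAwareSampler:
    """Samples only spans from a provider whose resource matches a service name."""

    def __init__(self, resource, allowed_service):
        self._resource = resource          # captured once, at construction
        self._allowed = allowed_service

    def should_sample(self, trace_id, name):
        # The decision can consult the captured resource without the
        # resource being passed into every call.
        return self._resource.attributes.get("service.name") == self._allowed

sampler = ResourceAwareSampler(Resource({"service.name": "checkout"}), "checkout")
print(sampler.should_sample(trace_id=0x1234, name="GET /cart"))  # True
```

The trade-off matches the point made above: the interface stays resource-free, and the sharing restriction becomes a documented property of this one implementation.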
C
I see. Is Christian around, by the way? No, he's not here, sadly. Yeah, if you could just go ahead and write that in that issue, because it's stale; I'd really appreciate that. And by the way, what you're saying totally makes sense.
E
I see multi-threading as a must, because we have the tracer provider making concurrent calls; that's how the application works. It can start spans at the same time across multiple threads, so it has to be multi-threaded. But the capability of being associated with one tracer provider or with multiple tracer providers can be up to the implementation. And I think right now in the spec, if I'm not wrong, we don't specify anything about processors and exporters, so essentially we let that be the case for them as well.
C
Okay, let's add a note to the PR; hopefully we can iterate on it soon. Thank you for the feedback, I will add a note in a bit. The next one is interesting as well: it's about status. This is a PR we have all known for some weeks now. Nikita started that one. There were some questions from Yuri and Sergey regarding whether it's good enough or not, and whether we should actually go for the hooks way. Yuri, Sergey, are you on the call?
F
That's probably my general feedback: if we disagree with pull requests but we are unable to provide a better, or at least an alternative, solution, because we don't have capacity or whatnot, should we block non-ideal solutions which are still open-ended? We can iterate on them. Or should we just block them and wait until we have more time for an ideal solution in the future, maybe sometime?
F
I can offer my personal opinion that, yes, we should fix that, but, well, yes, I do. I look at it from the instrumentation or SDK point of view, not from the specification point of view. We produce an SDK, our end users have struggled with that, and our instrumentations have bugs in that area. So I want to fix that for our end users.
E
Money answers all questions. No, no, John, I know for you it's another perk, but I will not share that with Nikita; I will not tell him when he can take vacation.
C
Myself, I think it's a good change; I think it's a good amendment, and I think it's valid to ask Sergey and Yuri for alternative proposals.
E
But yes, I read their concerns, and their main concern, if I read correctly, was that it requires a state, which I don't really buy. It is a state, don't get me wrong, but it's a state that is not retrievable, which means it doesn't necessarily need to live in memory; it can live on the back-end side. And right now, the fact that we have the same span ID is the same problem: we still need to have this state in the back end.
E
If you are implementing a streaming solution, it groups everything by span ID, because you will get every individual event separately; so that state will not be something on top of whatever is needed right now. So, Nikita, I think you should explain that this is very similar to the fact that there is a span ID that needs to be combined in the back end, same as today: there is already state that needs to exist there, and we do not force any implementation to keep that state in the client.
F
Yeah, I will reread that, and if I get the same impression, I will clarify that. Yep, thank you; but anyway.
J
So the only worry I have is: have we actually tried to use this solution in a number of cases, and actually vetted whether it's correct or not?
J
It seems fine to me in this particular case, but what I worry about is: are we solving some edge cases while creating more edge cases, and so on and so forth? I think that's why there's some nervousness.
E
So, Nikita, maybe we can spend four or five minutes on this and you can summarize. I was rereading some of the comments, and a bunch of things are about confusion: why can only Ok overwrite Error, but Error cannot overwrite Ok, and so on and so forth.
E
Which one happens first? In the case of auto-instrumentation, I think auto-instrumentation usually, in 99.99% of the cases, happens after the user code. So, for example, on the server side, auto-instrumentation creates the span and will end the span, and the user interacts with that span and can set the status. So you don't have any problem there, because if the user set something, the status set by the auto-instrumentation will be dropped, or will not overwrite whatever the user set. Correct?
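The precedence rule being debated (Ok may overwrite Error, Error may not overwrite Ok, and anything may overwrite Unset) can be sketched as a toy; this is an illustration of the rule under discussion, not the actual SDK code.

```python
# Illustrative sketch of the span-status precedence rule discussed above:
# Unset < Error < Ok. An update is applied only if it outranks the
# current status, so a user's Ok survives a later instrumentation Error.

UNSET, ERROR, OK = 0, 1, 2  # precedence order

class Span:
    def __init__(self):
        self.status = UNSET

    def set_status(self, new_status):
        # Only a higher-precedence status wins.
        if new_status > self.status:
            self.status = new_status

span = Span()
span.set_status(ERROR)  # instrumentation records an error
span.set_status(OK)     # user explicitly marks the span Ok
print(span.status == OK)  # True: Ok overwrote Error
span.set_status(ERROR)  # a later Error does not win
print(span.status == OK)  # still True
```

Under this rule the ordering question raised next still matters for Unset-only overwriting, but not for Ok versus Error: Ok wins regardless of who sets their status first.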
F
I am a little bit worried that, currently, if we say we can only overwrite Unset, meaning that the status can be set only once, then we actually leave it to a maybe accidental implementation detail that the user will always have a chance to set their Ok before our instrumentation does. Can we actually guarantee that? Can we rely on that?
J
Right. I think the thing that's lacking currently in our implementation, that would make this easy, is that the application developer effectively uses the same kind of tracer as the instrumentation. We haven't...
J
We've discussed something like an application tracer, but it isn't the case right now that the application developer does something to identify themselves as the user versus instrumentation.
E
The way we designed the tracer right now is not going to work, unless you put all the operations on the tracer. The problem is, the span that we are discussing is 99% of the time created by auto-instrumentation; the user just interacts with it, so the user will not have a chance. That's what I'm trying to say, exactly.
J
That's it: when a user is making these API calls, we don't know that it's the user doing it. They would have to do something extra, like say "Ok or Error, and I'm the user", and that makes it more complicated for the end user, because the default would have to be instrumentation.
E
I think what will help a lot here, Nikita, for everyone, and I think this is the last thought, is if you can put in some examples; or maybe there already are some, I need to reread it. But have some set of examples of where this can happen: hey, in the case of a Java app where auto-instrumentation does this and the user does that, this is why it's not working. Something like four or five examples that will explain why one approach works and the other doesn't.
C
Perfect, perfect, thank you so much, really. Thank you for all the time and the patience. Very good. We have action item 1.1. The next one is... no, sorry, the next one is a PR that you have booked on yourself. Yeah, if you could iterate on that, so we either move on, or merge it, or something. Just a reminder, nothing important.
C
The next one is the instrumentation ecosystem OTEP; similar situation, we had a lot of approvals there.
C
There remains one last comment regarding whether we should decide something before merging this, because in his mind going from an OTEP to the specification should be a mostly mechanical change where you don't actually change anything; but probably that's not always the case. Either way, we have to decide on some time window. Nikita, you could probably just go with three weeks, which is what the...
G
This part of the specification is going to be marked as experimental, right? So it is completely open for revisiting; you can change the duration, not a problem.
C
As soon as you have that, I will merge right away; well, me or anybody who has the power. Thanks so much. The next one is the OTEP for probabilistic sampling for telemetry events. I think it's a very interesting OTEP, but it doesn't have enough reviews. This is not an urgent matter, of course, but there was some interest in reviving the sampling SIG, and a comment was made on this one recently. So maybe it's a good time for doing that; I don't know whether people have some appetite for that.
D
I certainly do. I've been having some conversations with people at Lightstep who remember how Dapper did sampling, and there's a discussion happening. I don't actually want to push that on the community here, but having the ability to do basic root sampling with a probability, and to encode the inclusion probability on the entire trace, is the first step, and we definitely want to talk about that. So I'm not sure if a working group is needed, but having reviewers and a discussion happening, perhaps in Slack, would help. Why do you need that?
D
So we want to be able to count spans as they arrive. Lightstep has a product feature where we count spans, call them streams. Even if they're nested in a sub-tree of a trace, each one of them gets counted as the inverse probability of their root, and when spans are unsampled this is really easy to do. So there's sort of a twin goal here: we want to cut down the rate, the volume of data being reported, and we still want to be able to approximately count it.
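The inverse-probability counting described here can be sketched in a few lines; `estimate_total` is a hypothetical helper for illustration, not a Lightstep or OpenTelemetry API.

```python
# Minimal sketch of the "adjusted count" idea: each sampled span is
# counted as the inverse of its inclusion probability, so the totals
# approximate the unsampled population (a Horvitz-Thompson style estimate).

import random

def estimate_total(spans, probability):
    """Sample spans with the given probability and return the
    inverse-probability estimate of the total span count."""
    sampled = [s for s in spans if random.random() < probability]
    # Each surviving span stands in for 1/probability original spans.
    return len(sampled) * (1.0 / probability)

random.seed(7)
spans = range(100_000)
print(estimate_total(spans, probability=0.01))  # close to 100000 on average
```

This is the "twin goal" in miniature: only about 1% of the spans are reported, yet the adjusted count still approximates the true volume.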
D
Dapper did it only at the root, and this has caused some confusion in the document. What I was trying to say is: there's a concept of inclusion probability, a concept of adjusted sample count, and a concept of unbiased sampling. We're going to use these over and over again; there are lots of ways to use them, and what you just described is a totally valid sampling scheme for traces. I wrote that as an example into my document, and everyone read it and said: that's not what I want.
D
So I want to just get this first one merged, just to establish a concept that we're going to keep using, called sampling probability, or sampling count. Next we're going to talk about a spec for tracing: propagating the trace probability and encoding it in spans. We may also talk about doing the same for metrics, which is a separate topic.
E
There is also a part of this, probably mainly my fault, and related to Josh's problem, which is: how do we have a deterministic sampling algorithm that uses the trace ID? For example, if we want to not use the plain trace ID but use a hash, we need to define that hash function so that it can be implemented across SDKs, and maybe even across back ends, because they may want to know different things from it.
E
So there are a bunch of things that we need to do, and I think it's probably the last part that we have not finished or polished.
D
Yeah, I'm okay with a working group; I mean, I can try to come. But I feel like there's been a lack of coherent proposals. You just mentioned something about deterministic sampling, and yet the W3C gave us this sampled bit, which was meant to be used just to convey whether the sampling decision was positive or negative; and yet...
D
There is an entire OTEP about probability, like what you just described, and it's pretty confusing to see all these different proposals; they're all very, very different. It's not clear that that mechanism, which is, I think, OTEP 135, gives you probabilities, which is what I'm looking for. It gives you complete traces and it gives you less than 100% of the data, but we want a little more than that.
E
Yeah, and it also gives you what Google had with the inflationary thing: a guarantee that you can increase sampling and ensure that everything underneath, like the subtree, will be sampled, if you have a deterministic algorithm, anyway.
D
But it's sort of saying that I want to have a minimum probability for my subtree, and if I get an incoming trace, whether it's sampled or not, I need to know its probability. Because if it was sampled, I'm going to keep sampling; and if it wasn't sampled, I need a conditional probability equation to adjust my own and decide whether to maybe start sampling at my root. I remember that it was awful, but that's what Dapper did, and someone's going to come and ask for that.
E
No, no, but I removed it.
E
I did it differently. The bit is enough to know if the parent sampled or not, and if you have a deterministic algorithm across all the SDKs, then, based on the deterministic algorithm that Google used, you are guaranteed to sample whenever you have a higher probability than the parent. So you don't need to propagate the probability just for that; you need to propagate it for the counting problem, which is true, but not to solve that problem.
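A minimal sketch of that consistency property, assuming every SDK shares one deterministic score function; the concrete hash below is an illustrative choice, not a standardized one.

```python
# Sketch of deterministic trace-ID sampling: every SDK maps the trace ID
# to the same score in [0, 1) and samples when score < probability.
# A child whose probability is at least the parent's then samples every
# trace the parent sampled, with no probability propagation needed for
# that consistency guarantee.

import hashlib

def score(trace_id: int) -> float:
    # Shared deterministic hash of the 128-bit trace ID, mapped to [0, 1).
    digest = hashlib.sha256(trace_id.to_bytes(16, "big")).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def should_sample(trace_id: int, probability: float) -> bool:
    return score(trace_id) < probability

# Consistency property: whenever the parent (p=0.1) sampled a trace,
# a child with a higher probability (p=0.25) samples it too.
for trace_id in range(1000):
    if should_sample(trace_id, 0.1):
        assert should_sample(trace_id, 0.25)
```

As noted above, propagation is still needed for the counting problem (knowing the parent's probability to compute adjusted counts), just not for making child decisions consistent with the parent's.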
A
Yeah, and there's the area of tail sampling, which a lot of people are asking for, and which would need a sampler in the OpenTelemetry Collector. There were some discussions about enhancing it, or changing it, or making some revolution there, with several ideas on how to tackle that. And that was one of the reasons why we wanted this working group. So I feel that we have several things to cover here, and maybe it's...
D
Worth it. My proposal actually incorporates the idea of tail sampling, by allowing for a zero inclusion probability. Because when you have tail sampling, you often buffer things up to the point where you have to throw them out, and you didn't know when you started what the probability was. The probability changes over the duration of your tail sampling, and that's why you need to go back to the root and assemble a trace: because you can't properly compute the correct probability, because of your tail sampling decisions.
E
So let's have that group. But in order to solve that problem, we probably need people from the Collector and some other people, not only from the SDKs. So yeah, I think it's a good thing. Ted, I know you led that, so maybe you want to start it.
C
Sweet! Thank you for that. And the last item we have in the agenda is about... Josh... well, actually, we have two more. Josh, yeah, please.
K
Yeah, sorry, I was just muted and texting. Okay, so, real quick: basically, the data model SIG has been putting together a document around the OpenTelemetry protocol. This document is designed so that people can write metrics exporters; it's designed so that people can write metrics importers, or receivers if you will; and it's designed to precede... I was hunting for the word; there we go.
K
That's a word. To precede the API and the SDK, the actual SDK exporter, right: this is how you get metrics into OpenTelemetry. It's a little bit more refinement on the protocol. Okay, now, the document lives in the specification and it's marked as beta.
K
Okay. What I want to know, and want to talk through, is the process of making it stable. We had changed the process around OTEPs and stability and that sort of thing, and I feel like this is something that perhaps we weren't doing correctly.
K
So the real question is: what do we do going forward? I have some tentative proposals. We have one open pull request from Josh MacDonald, which I think represents the last major component for this bit of the specification, before I'd like to try to declare what's in there stable and ready for folks to use. Again, we already have the protocol buffer definition.
K
So I'm not sure what process we want to take here, but what I'm proposing is: we get this last PR from Josh MacDonald merged, and then I'd like to write an OTEP that will mark it as stable and go through some round of review there. The SIG is currently going through a process of just looking through it and trying to use it for little implementations and prototypes, to see.
K
Am I asking questions? Okay, my question is: does this sound reasonable? Does anyone want to see anything else done? Like, if I make an OTEP to mark this as stable, post Josh's last PR, what do I put in the OTEP? Do I put the whole specification that exists in our spec? Do I just put a proposal to mark it as stable? And then, secondarily, going forward...
K
I don't think I'm doing it justice. I think what we have specified is actually pretty darn good and pretty stable. What I need here is...
K
There are aspects of Prometheus that we've never mapped into OpenTelemetry, and what I'd like to do is make those require a formal prototyping, implementation and OTEP phase before we just throw them at the specification. So I want to take what we have today, which is basically what the current protocol means, and stabilize it. If you look at the document that I have linked, there are a few TODO sections that again are going to be filled out (this last PR removed some), and they are more around how to make use of the data, not around what the data is.
K
If you look at the definitions of, like, histogram, or number data point, those kinds of things: that's not changing. That's stable, that's fine. I think we've done enough research into that to know this is what we want as our foundation.
K
What I want to do is figure out how we make changes to that foundational model as a stable thing. So I want to get what we have marked as stable, and then I want future work to actually happen in OTEPs and in prototypes. That's kind of the goal here. So how do we take this current specification as it exists and get it marked stable, so that then we can do prototypes in a prototype-y way?
K
There's a group of people looking at, you know, histogram bucketing, and there's a proposal for a new histogram bucketing mechanism. I want to find a way to enable those prototypes to happen against something that's stable, instead of just continuing in a beta phase forever. Let's get this thing marked stable and let's move to this OTEP model, so that we're operating the way we want the process to go, going forward.
K
...make those breaking changes. Agreed, agreed. I think the prototypes could reveal a different way of representing data, and we're just as unsure as we've always been with everything; I mean, that could happen with traces too.
D
Yeah, I'd like to add just a little bit of confidence here. You know, when the project started, we were trying to design APIs and SDKs, and as we moved along over these two years, the Collector just became a driving force for us, and we had to design a data protocol that was to meet the industry. It had nothing to do with APIs and SDKs at that point; it had to do with transferring meaning about telemetry.
D
And so part of these OTEPs that I'm writing is because we have prototyped this and put it in production, and I'm looking at the stream of, you know, staleness markers and saying: I don't know what to do with these now. And that's why there's a PR. So I was on the fence last night about putting that last PR into an OTEP, because there are options, and I will accept responsibility.
K
Okay, but the point being: for the push-based SDK protocol, we have committed, and the data model SIG said, we're not going to break what we have, and we think it represents exactly what we need from our metrics API right now. We're fine with it; we don't see it changing right now. As we start adapting other metrics protocols and pulling stuff in, there might be additions that come, and that's fine: we're going to do it in a non-breaking way, and we're not worried about breaking those things.
K
I can give you examples if you want to know some of the stuff we're discussing, but that's basically where the SIG stands today: we want this current model to be stable, to not change, and to map as much as we can into it. There are going to be a few things that don't fit, which we're going to have discussions about and possibly add, but those will, or should, be prototyped and go through an OTEP. And I gave you the first example, the histogram bucketing thing: there is a non-breaking way.
K
We plan to represent this, and we're prototyping that now, right? But that's not... I want to get the current data model marked as stable, so that as we release the SDK and the API for metrics and go into beta, you have this stable data model you're working off of, that people can target, and everything else is an experiment and it's very clear. That way we don't run into what we had before; I know that we're trying to move to a different model here.
K
So, if I made it sound like we're not sure about the data model: no. The three main types that we have right now, we're very sure of. That doesn't mean there won't be additions, but these three types we're very sure will be stable. I think metrics are more complicated than traces, just because you have a lot more math that people like to throw at them.
G
We know additive changes can happen; that's completely fine. So I guess, if you feel certain, I personally would not be against marking this particular data model part as stable. If you would like to be more cautious, but still recognize the fact that you achieved a certain milestone and that it is now worth having implementations based on that milestone, maybe we invent a status which is in between experimental and stable. We call it...
G
I don't know, "beta"; we don't have that formally defined anywhere, but we can have a beta state, right? That's still a recognition that you have achieved this milestone, and now people can go ahead and implement whatever is in the specification, yet it still gives you the flexibility, if there is something uncovered, something unknown, to make a breaking change if necessary. That's also a possibility, depending on how confident you feel about having, or not having, the need to make that breaking change.
K
The data model talks about an event stream conceptually, it talks about a time series conceptually, and it's meant to guide you in how to map between the two; it's a guide. If you look at the wording in there, it is not firm; these are all kind of suggestions for how to deal with problems.
K
So the problem with the data model document is that it's both a specification and an aid for implementation, and the majority of the document is this aid for implementation. We're basically getting to the point where we feel it covers enough of the scenarios we've had to deal with in implementations today, with the statsd receiver, the Prometheus receiver, et cetera, that we think it is ready for people to read. But I want to get the document itself marked stable, so it's very clear.
K
Actually, that's an option here: I can go through section by section and define each section as stable or unstable, the way we have in the other documents, and that's fine too. I can even clarify in the data model document which sections are more about guiding you in mapping between your, you know, event model and your time series, and which ones are specified, like "here's what the data model means". That's fine; that's another option.
E
The other option is to split the document into two: one that is the pure data model document, the explanation of what a Sum is and so on, which we definitely want to mark as stable; and the other one we just call a guideline, or whatever, for mapping. By the fact that it's called a guideline, even though you say it's stable, it's just a guideline: it's not something that users should take as a specification, and it can be changed, because it's a guideline. That's another option.
K
Yeah, I think we might do that. Again, if you look at the verbiage in the document today, it is very clearly not using "MUST" in a lot of what it says, and then there are sections where it really does use "MUST", because, you know, there's a MUST there.
E
I would still split the document, because people will ask: tell me, what's your data model? And we give them this document and say: this is the explanation of the data model, same as OpenMetrics. And then there is a guideline for how to map different concepts onto this data model, which is not necessarily related to the data model per se; it's just concepts, or things about the data model.
K
Okay, all right, that's fair. So we can split out the "how to implement a receiver or exporter" part from the actual data model spec itself. Yeah, I think initially what I'll do is just do that with headings and sections, send that as a PR, and then we'll go from there. Does this need a formal OTEP review process to get the stable marker on it? I'm just curious.
E
So, in an ideal world this OTEP would have been done before we started all this; but now, retroactively trying to follow the process, I don't necessarily feel confident and comfortable doing that, because it's just unnecessary work. It should have been done before, like when we started two years ago: we should have had an OTEP saying, hey, we want to support metrics, here is a proposal for the data model.
E
I think Tigran did something similar for logs, very, very close to what we want: he started with the OTEP for logging, why logging is important, blah blah, and a bunch of things; and then from there derived the data model, and from the data model derived a specification. So I think that's the right process. But now, going back to write an OTEP just to mark it stable: it's just a word. What would you write in an OTEP to convince me that it is stable or not? What, exactly?
K
Gotcha, okay, all right. So if I send a PR to split this up, I'll do the split first and then the stability second, but that PR just goes through the normal spec process? Yes.
J
I do want to point out, I think our process is a little fuzzy when it comes to proto changes, in the sense that you've got OTEPs, you've got the spec, and you've also got the proto repo. We don't, generally speaking, want to do a lot of experimental thrashing about in the proto repo, but that's where we tend to do the work, and it is actually a little fuzzy what gets written down in the spec versus what goes into the proto repo, and where the source of truth is.
G
That's a good point. My perspective is that the proto repository is part of the specification: it's the more formal part of it, the machine-language part of it, but it is actually part of the specification. And if you look into the protocol section of the spec repo, it links to the proto and basically says: go look at the definitions there. That's the source of truth for the protocol, and I think data-model-wise...
J
I mean, if you look at what's written down for logging, for example, it pretty much describes the data.
K
Yeah, and that's what we did with metrics. The fact that we cover the use cases of how to import data in and out gives you the flexibility to really dive into how to use these things. Because if you just say "this is the start time, it means this": okay, well, I'm dealing with this format, what the hell do I do? How do I map it in? That's where we spend most of our verbiage in the data model spec.
C
Okay, yes, we're gonna say goodbye. Yeah, Joshua, Anthony, yeah. We are having the next release next week, early June. Let us know if there's a problem with that or if you need something from us, but yeah, next week it should be happening.
B
No, next week I think is fine. We're looking to make an RC for the Go SDK soon, and if we're ready this week I would like to do it; but there are also changes for the schema URL that we might want to land in there. So I think we'll have a discussion in the Go SIG about whether we want to get those in and anticipate the upcoming spec, or wait until the spec is out; and if it's coming out next week, that may be fine.