From YouTube: 2020-08-14 Spec SIG
A
Hi everyone. Okay, well, we're not all here. I'm desperately hoping that Bogdan will arrive; I pinged him half an hour ago to remind him, so we'll see, because a substantial portion of what's been going on in the last week has to do with a bunch of PRs flying around OTLP.
A
I have drafted half this agenda; the rest of you may have added some stuff, so this is great. I think we should talk about what's taken place with OTLP in the last week. There was a meeting on Tuesday; it's on the public calendar, but only a few of us arrived.
A
I feel like there's one person missing from this list, but I don't remember who it was. In any case, I wrote these notes, and a lot of the discussion has already taken effect via the PRs that Bogdan has been merging. I've captured a list of two outstanding high-priority topics here. Bogdan, I wonder if you would lead with some thoughts of yours, since you've taken over on this OTLP work.
E
So yeah, I can give updates. I don't know if everyone saw, but there was a PR up for a couple of days with a request to remove summary for the moment, before we better understand the implications and the relationship with temporality. The main idea was that quantiles, and even min and max, have a problem: from cumulative you cannot get delta.
E
You cannot get delta, so the decision of whether you send cumulative or delta is critical for these properties, because you cannot infer one from the other. Even if you have an agent that runs on the same machine, or anywhere else, you cannot do that.
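The point can be made concrete with a toy sketch (illustrative values only): a delta sum and count can be recovered by subtracting two cumulative snapshots, but the min, max, or quantiles of the interval cannot.

```python
# Values observed up to two report times t1 and t2 (cumulative reporting).
window1 = [5.0, 1.0, 3.0]            # everything seen by t1
window2 = window1 + [2.0, 4.0]       # everything seen by t2

cum1 = {"sum": sum(window1), "count": len(window1), "min": min(window1)}
cum2 = {"sum": sum(window2), "count": len(window2), "min": min(window2)}

# Sum and count over the interval (t1, t2] fall out by subtraction:
delta_sum = cum2["sum"] - cum1["sum"]        # 6.0, i.e. 2.0 + 4.0
delta_count = cum2["count"] - cum1["count"]  # 2

# The interval min is min(2.0, 4.0) == 2.0, but both cumulative
# snapshots report min == 1.0: the delta min is simply not recoverable.
print(delta_sum, delta_count, cum1["min"], cum2["min"])  # 6.0 2 1.0 1.0
```

The same argument applies to any order statistic, which is why the cumulative-vs-delta decision constrains what summary can contain.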
E
But the idea was that we want to make a release of the proto, probably Monday, that will include everything we are a hundred percent confident will be in the GA version. For summary, we don't know yet what it will look like. That's essentially the summary of the summary. There is one specific thing, though.
E
There is the value recorder part, which we haven't decided, and that decision will also influence the protocol. Right now we've moved everything to that issue, and we need to resolve value recorder before we make a decision about what OTLP will support for this.
E
Mostly, that's my update. There is still an ongoing issue I've started to look into, which is oneof versus encoding the values. So far I've gotten very close results; I will publish something tomorrow and ask everyone to give their opinion on what to do. There is a small performance penalty, but not that much.
F
No, I just think it's a great way to progress this going forward, so yeah, I think that sounds good.
E
Again, that doesn't mean I've made a decision to completely remove summary or anything; it's just temporary, right. It just needs some polishing and some more thought, and if I understood correctly from Josh, the goal is to have something that we can declare stable even if it's incomplete. To achieve that goal, I had to remove it until we put more thought into it and make a decision.
A
Yeah, I think this is good. If we can't have value recorder, let's at least get everything else figured out and move forward. I also think we're making the right call here on the performance question. It's so much more important to get something working in the next month or two than to have an endless debate. What we're comparing is semantic clarity versus performance, which is almost like comparing apples and oranges.
A
So
it's
like
really
hard
to
make
a
fair
comparison,
and
I
think
a
lot
of
the
problems
really
stem
from
the
go
protocol,
compiler,
which
can
probably
be
addressed
with
some
custom
code
and
eventually
we'll
probably
get
there,
but
there's
a
downside
of
waiting
at
this
point.
So
I
think
we're
doing
the
right
thing.
E
Yeah, so far with the gogo proto, the only thing we trade off is one allocation versus semantics: one extra allocation for the oneof, versus clear semantics and potentially even saving some space. Anyway, I will publish more results tomorrow when I finish all my experiments; I just wanted to give an update that I'm working on it and it's not finalized yet. Also, for everyone:
E
You can see in the PR section that I've tried to randomly assign a person to every PR, and that person from the TC is responsible for making a go/no-go decision and merging that PR. So we're trying to push hard to increase the velocity here, and that impacts metrics as well. That's why I gave this update to everyone.
A
Thank you. I had fallen behind on that, but I accept this responsibility. I feel that this first bullet here actually deserves more conversation, but I think we should move forward with the agenda and maybe come back to it. This is issue 636; if you haven't already read it, it contains a lengthy summary of the back and forth we've been going through.
E
Yeah, so next, when I have some cycles, I would like to actually split that into two issues: one that talks about value recorder and one that talks about value observer. And just recently, I think.
A
We can end the debate about value observer. There was a technicality I had hung myself up on, and you've talked me out of it many times. Let's recognize that no, I don't think there's any debate left on value observer: if you change the label dimensionality, you should also change the aggregation. That's all that came out of the meeting on Tuesday. Here I am going back to this topic.
A
At the meeting on Tuesday there was a proposal that we might consider not using min max sum count. When that came up, for me it was a reaction to the world where, if you have a timing measurement, every value becomes a measurement, which becomes a packet on the wire, which is very expensive; min max sum count is a much cheaper way to do that. But in this world, where we have heavy clients that can do aggregation, a histogram is not much more expensive than a min max sum count.
A
So maybe we should think about a histogram representation. The problem there is that you have to set boundaries, and that's not always correct. We actually had Michael Gerson-Haber on the call from Datadog, who recommended we think about DDSketch as a default for value recorder. I'm thinking that's viable; there are some questions about how to encode them, and we're going to get some feedback from Datadog on that, but it's viable.
A
The next one I put up here is, I think, a very serious concern that we all need to think about, and I don't know the answer. We've got a protocol change coming.
A
The OTel Collector currently uses the OpenCensus protocol internally for a lot of its receivers and exporters, and there's some sort of change about to happen over the next two months in which we change protocol versions. People are going to upgrade collectors, all the SDKs have to change, and there's a matrix of compatibility here. It's very unlikely that anyone's going to get anything to work unless we're extremely careful about this, and I don't know exactly what to do.
A
It suggests to me that we should be gating collector releases, but also somehow gating them with SDK releases, and I don't know how we will achieve that. I don't have any more thoughts, other than I'm looking to anyone with responsibilities, including Odin, to help here. We also have maintainers for the individual languages who need to pay attention and make sure we're not continually releasing stuff that doesn't work, because that's happened already, and it seems even more likely to happen in the near future.
E
Do you have some ideas about the problems that may happen internally in the collector? I don't think it matters how we represent the data internally, whether we still go through OpenCensus or whatever until we finish the transition, as long as the boundaries of the collector, which are the incoming and outgoing interfaces, are right. So I think a critical point is once we make a stable release of OTLP metrics.
E
The collector should immediately change the boundaries, and we do a release of the collector. That's where we talked about stability inside the collector: we may have different data flowing around until we make the complete transition, but I would start with the boundaries; that's where we should pay a lot of attention. So if you have any other problems in mind, any other things that may cause issues, let me know.
F
It's not the collector alone; it's the collector in combination with all of the languages that send to the collector, right? Because if you upgrade the boundaries, but all of the languages are not aware, not paying attention, and not ready to make that upgrade as well, then you're going to have compatibility issues between the languages and the collector.
A
And we aren't declaring it stable yet. The point of having unstable protocols is that we can break things, but once we get to the point where it's about to work, we're still going to have broken things if we don't coordinate an SDK release with the collector's. I don't know that we have a version identifier in the protocol as it stands today, but that might be a first step.
E
I think this is a good discussion to have. My hope was that on Monday we freeze a minimal part of the protocol, and from that moment we treat everything as backwards compatible and do the right things that you do in any distributed system, which means we need to support deprecated fields. The only difference is that maybe we remove deprecated fields in, say, three months, not in the two years you would usually take in a GA.
C
Bogdan, is there some way you could actually consider semver versioning for the spec? It does help us in coordinating our testing, you know, as we build components in parallel, and everybody else does too.
A
So how do we avoid... okay, I accept what you're saying about the collector: it's an internal matter which protocol is being used, and if all the conversions between the two formats are correct, it shouldn't be a problem. But there is going to be a day when a new collector lands, and there's going to be a day when a new protocol snapshot or release is made.
A
If those two are not the same moment, maybe there's going to be a user who tries to upgrade the protocol and gets some old collector, or tries to get a new collector with an old protocol, and since these are not versioned, it's not going to work. That means that if I have my Go code running today and the collector changes, it's not going to work tomorrow. Is that unavoidable?
A
Should we, say, turn off the OTLP receiver to help prevent this from happening, or something? Or do we just make lots of announcements saying: this is the compatibility matrix, and it's very unlikely to work unless you pay attention to it.
E
And after the stable release, if we add new things, and you are emitting from SDK to collector and the collector does not know about, let's say, summary, and you are emitting summary, you may not get that into Prometheus, for example, because the collector does not yet know how to transform it into a Prometheus summary.
A
So I guess the minimum concern is that a Prometheus exporter is going to get used; that's pretty common. Some client written to 0.4 of the proto has summaries; in the Go code it's min max sum count, and it gets turned into summary, and so you end up with this data. It may already be broken, in which case it's sort of the same situation, but the problem is users keep showing up and saying it doesn't work.
E
I completely understand all of this; I'm just worried about it. This is the part where an unstable protocol is used in production.
A
Yeah, so I think the best I can imagine, or the easiest, is to just make a lot of announcements saying this isn't ready for use yet. If you send data to the satellite, sorry, the collector, in OTLP format, you'd better be beyond this version. And there's going to be a point when the collector has released but the SDKs have not, most likely, although it could happen the opposite way.
A
If we publish a protocol and update the Go SDK tomorrow, then you're going to have the Go SDK sending data that the collector doesn't recognize. I guess we just have to make it very clear that, until all the components are released and you can upgrade everything and get it working, you have to pay attention. Okay, I guess that's the best we can do.
F
Two things, not to beat the dead horse too much, because I think those are really good options, and I think Bogdan laid out a really good balanced system there. Like you know, we're still pre-GA.
F
We
need
to
have
some
sort
of
like
movement
forward,
but
at
the
same
time
like,
I
think
that,
having
more
meta
questions
like
this
around
like
how
are
we
going
to
provide
stability
once
we
actually
get
to
a
point
where
we
think
we're
going
to
be
stable,
and
I
I'm
a
little
bit
worried
about
the
idea
that,
like
well
we'll
have
defined
a
protocol.
That's
just
extensible
enough
that
we'll
just
add
on
only
stable
elements.
I
think
that
that
may
not
serve
in
the
long
term.
F
I think you brought up a good point about having deprecated fields and a life cycle, but that should also probably be a documented thing, because I imagine there are going to be more people involved. The other thing is that we've already had reports from users outside of the project who are using OTLP.
F
So I already know that there are use cases outside of this project, and I think it may behoove us to start thinking about this sooner rather than later.
F
Yeah, I think that... I don't... go ahead, sorry. I mean, maybe I'm beating around the bush, but that was kind of what I was thinking in the back of my mind. I was framing it around the idea that if it's such a huge impetus right now, it probably doesn't make sense to do it, but maybe in the future it needs to be explored as an option.
E
That being said, again, I'm happy to help the community in any way, from the collector's perspective, that will benefit everyone and not completely break everyone.
F
I just wanted to ask another, more of a dumb, question to Bogdan, to Alina's point: can we use semver in the versioning of the proto, instead of just having a v1 and then going to a v2 or something like that? Are there patch or minor releases that we could increment, kind of like all the other languages? What do we get with that?
F
I don't know of a test run of versioning like that; that's something we're going to need to do in the long term. Sorry, go ahead.
C
Didn't
we
would
get
interim
you
know,
alignment
on
on
builds
like
especially
if
there
is
a
difference
between
the
sdks
and
the
collector
release,
for
example,
mapping
to
the
a
very
specific
version
of
the
spec.
E
Okay, that's a separate problem, but having the version inside the protocol, I don't think... So what I'm understanding from Tyler is that he wants to embed the version in the package of the protos, or something like that. Correct, Tyler?
E
Having a v1 inside the package also gives us another ability, which is, in the future, the ability to go to a v2, knowing that that's a completely breaking change and everything is broken. It would allow us to have two receivers and two exporters, one for v1 and one for v2.
C
But is that adequate as you pick up steam and make the much faster changes that you're making to the spec?
A
Yeah, I was going to say the same. Say 0.5 includes everything except summary, or the value recorder output type; we're still calling that v1 in the path of the protocol namespace. Now we add some kind of summary thing, something we can debate later; now it's 0.6, but we're still calling it v1.
A
If
I
could
know
the
difference
between
0.5
and
0.6
at
least,
I
could
maybe
start
to
detect
like
a
problem
like
if
my
collectors
doesn't
know
about
that
0.6,
it's
gonna
they're
going
to
be
fields
that
are
unknown
to
it,
and
that
will
be
a
problem.
Even
though
it's
not
a
factor,
it's
not
a
breaking
change.
It's
like.
I
need
to
know
when
my
my
collector
is
out
of
date.
Okay,
so
if
the.
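As a sketch of the kind of check being wished for here (hypothetical: OTLP carries no such version field today, which is exactly the gap under discussion), a collector that could see the client's minor version could warn instead of failing silently:

```python
def parse(version):
    """Parse a 'major.minor' version string into a comparable tuple."""
    major, minor = version.split(".")
    return int(major), int(minor)

def check(collector_version, client_version):
    """Warn when the client speaks a newer minor version than the
    collector understands; same major is assumed wire-compatible."""
    cv, sv = parse(collector_version), parse(client_version)
    if sv[0] != cv[0]:
        return "incompatible: major version differs"
    if sv[1] > cv[1]:
        return "collector out of date: upgrade it or downgrade the client"
    return "ok"
```

For example, `check("0.5", "0.6")` would flag the collector as out of date, while `check("0.6", "0.5")` is fine because minor additions are additive.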
A
With semver, maybe we could, but this is all just working around this sort of uncertainty over the next couple of months, I guess.
E
No, no, and that's not going to work, George. The point I have is that it's very easy with protobuf to detect an unknown field; protobuf tells you if it sees unknown fields. So now we can debate whether, if we see unknown fields in the future, the behavior should be that we just don't accept the data, and we'd better tell the user:
E
"Hey, we received unknown fields, we don't know what to do with them; you'd better upgrade the collector," for example. That's a possibility we can still implement without encoding the semantic version inside the protocol. The protocol itself is semantically versioned, but the version is not encoded inside the protocol.
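The mechanism described here follows from the protobuf wire format itself: every field is prefixed with a tag encoding its field number, so a receiver can always recover the numbers even for fields it has no schema for. A minimal stdlib-only sketch (the field numbers and `KNOWN_FIELDS` set are hypothetical, not the actual OTLP schema):

```python
def read_varint(buf, i):
    """Decode a protobuf base-128 varint starting at buf[i]."""
    result, shift = 0, 0
    while True:
        b = buf[i]; i += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i
        shift += 7

def field_numbers(buf):
    """Yield (field_number, wire_type) for each top-level field,
    skipping over each field's payload."""
    i = 0
    while i < len(buf):
        tag, i = read_varint(buf, i)
        num, wtype = tag >> 3, tag & 7
        yield num, wtype
        if wtype == 0:            # varint payload
            _, i = read_varint(buf, i)
        elif wtype == 1:          # fixed 64-bit
            i += 8
        elif wtype == 2:          # length-delimited
            n, i = read_varint(buf, i)
            i += n
        elif wtype == 5:          # fixed 32-bit
            i += 4
        else:
            raise ValueError(f"unsupported wire type {wtype}")

KNOWN_FIELDS = {1, 2, 3}  # hypothetical: numbers this receiver's schema knows

def unknown_fields(buf):
    return sorted({n for n, _ in field_numbers(buf) if n not in KNOWN_FIELDS})

# A hand-encoded message with fields 1 (varint) and 9 (length-delimited):
msg = bytes([0x08, 0x2A,              # field 1, varint 42
             0x4A, 0x02, 0x68, 0x69]) # field 9, bytes "hi"
```

Here `unknown_fields(msg)` reports field 9, which is exactly the signal a collector could use to tell the user to upgrade. Real protobuf runtimes expose this directly (e.g. unknown-field sets), so no hand-rolled decoding is needed in practice.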
F
Yeah, I think that would help a lot. We've actually covered a lot, and I think we've come to some really important and hard questions.
A
This is not just a theoretical question; it's a practical question, because we're trying to start using this stuff. For example, a customer wants to know if they should start sending us data as OTLP as soon as you cut 0.5, and I'm worried about the potential that 0.5 gets some breaking change, so that now I've told a customer to send me data and I'm going to have to support, like, two versions of the protocol.
A
And this is just for custom integrations where they're not using an OTel SDK; they're just going to put some data into the format we asked for. I just want a stable protocol right now, and I think as long as we're not going to change 0.5, it'll be fine, but there's still the risk that the performance argument doesn't work out. But I guess what you said is that you're going to settle that performance argument before we cut 0.5.
E
That's right. I'm not going to release 0.5 until we fix the performance. To be honest, I'm not going to touch that; this is going to be the big breaking change we have, and no more breaking things after that.
A
No, I don't either. So, I put this here mostly to remind myself that I have been putting off working on an SDK specification for lots of reasons; it's partly procrastination, partly not having certainty about OTLP.
A
That is my fault, so I feel increasing pressure to do this, because it'll let us close a lot of those spec issues. I did have a draft.
A
But I also don't need to be the person blocking here; if anybody else wants to stand up and write something, feel free. At least if we finish some portion of 347, there'll be a document into which others can add. Right now there's just no document, so it's a problem. I'm going to get something done myself.
E
If you want to spend one hour, maybe just extract the skeleton of this, with the different titles and a bit of content. In an hour or so, I think we can get that merged in a day, and then we can parallelize even faster. I don't know, it's just a suggestion.
A
A good idea: I'll create a skeleton PR and merge that first. I like that; I think that's an even better approach. I will take that on; maybe tomorrow, I can do that tomorrow. Okay, now, this next area, about semantic conventions, is one I've tried to keep my hands off of, and I think Graham and Justin have been doing a lot of this work.
A
I'd
like
to
hand
it
over
to
the
two
of
you
to
talk
about
the
two
unmerged
brs
and
then
I
don't
know
if
erin's
on
the
call,
but
we
did
merge
the
otep
about
standard
system,
metrics
conventions,
and
I
wanted
to
figure
out
what's
next,
if
we're
going
to
put
some
spec
wording
together
for
that.
I
Yeah, sure. My PR has several approvals. I think Bogdan had some thoughts on it that I tried to answer; I don't know if there's any more discussion needed, or if it can just be approved and merged.
E
To be honest, maybe I'm too naive, and maybe I don't trust people, but I believe that somebody who reads this is going to add all 12 of them. I'm just trying to make sure that won't be the common case. I don't know how to do it better; I'm just skeptical of this.
E
None of them are required, none of them are optional, but I feel people will have the mindset that having more data is better: "I'm just going to add all of them." Maybe I'm just coming from a different world, or have worked with different people, but that's been my experience so far. If you tell people that these are optional but available, they will add all of them, because more is better.
J
So, I think Tyler had a fantastic rebuttal to that, which is that people are going to add them anyway, and if we don't specify what they should be called, they're just going to be random things that get added. It's better to specify them than to not specify them and have them be willy-nilly, whatever the author decides to throw in there.
H
But I don't see this as a bad thing. Applying as much information as possible at the instrumentation layer makes a lot of sense, because you can never upscale your cardinality downstream; you can never add that data later. It has to get added at the instrumentation layer and then be filtered downstream.
E
So
because
of
that,
that's
that's
where
I'm
worried.
That's
that's
where,
where,
where
my
worry
comes
into
place-
and
that's
where
I
don't
know,
the
default
case
will
be
very
expensive.
Let's
put
this
way,
and
that's
that's
something
that
I
don't
know
how
to
balance.
I
maybe
maybe
another
option
is
to
actually
come
back
to
the
api.
I
have
that
recommended
labels
and
and
and
all
the
labels
available
and
or
I
don't
know,
I
just
want
to
make
sure
that
that
any
random
person
comes
picks
up
an
integration
without
changing
any
configuration.
A
I
yeah
this
debate
is
never
ending.
I
guess
I
I
feel
like
I'm
on
the
sort
of
justin
and
graham
side
of
this
as
well,
because
I'd
prefer,
if
we
could
just
make
it
easy
to
configure
dimensionality
reduction.
I
think
the
whole
point
of
open-top,
like
from
the
start,
was
the
separation
of
api
sdk.
Let
the
instrumentation
people
write
what
they
want.
A
Let's
make
the
actual
sdk
perform
the
way
we
need
it
to
and
so
like
an
environment
variable
to
set
default
like
it's
the
operator's
concern
which
dimensions
you
look
at
from
your
monitoring.
It's
not
the
instrumentation
authors,
but
there
is
a
real
performance
question
here.
So
that's
what
in
in
whenever
press,
I
say,
let's
just
make
it
extremely
easy
to
configure
dimensionality
control.
E
A
I
think
not
the
person
that
writes
instrumentation-
that's
I
I
think
my
opinion
has
evolved
a
little
bit,
but
it's
sort
of
like
the
idea
being
that
there
are
three
parties
in
the
system
here.
There's
the
author
of
the
instrumentation
there's
the
operator
who's
like
writes
the
main
function
or
starts
the
command
and
sets
up
the
environment
and
then
there's
the
vendor,
and
maybe
the
vendor
wants
to
control
it.
But
we
need
dynamic
configuration
for
that
and
I
think
it's
you
know
it's
the
operator
who
knows
the
topology
of
the
network.
A
They
know
which
dimensions
are
important
to
them.
They
can
set
up
which
ones
they
want
to
compute
metrics
on.
That's,
I
think,
an
ideal
but
you're
right
that
the
out-of-the-box
behavior
will
be
expensive
and
there
was
a
topic
discussed
last
time
which
was
like
if
you're
going
to
do
dimensionality
reduction,
there's
two
places
you
could
do
it,
you
could
do
it
before
the
accumulator
or
you
could
do
it
after
the
accumulator
and
it's
it's.
It
creates
this
large
map,
if
you
put
it
inside
the
accumulator,
but
it
makes
a
very
simple
reduction
later
for
me.
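The two placements can be sketched with a toy accumulator (illustrative only; the names and the dict-based accumulator are not the actual SDK API). Reducing before accumulation keeps the map at kept-label cardinality; reducing after means aggregating at full cardinality first and then merging down:

```python
from collections import defaultdict

KEEP = {"method"}  # hypothetical: the dimensions the operator configured

def reduce_labels(labels, keep=KEEP):
    """Drop every label key not in the configured set; return a hashable key."""
    return tuple(sorted((k, v) for k, v in labels.items() if k in keep))

def aggregate_before(measurements):
    """Reduce BEFORE the accumulator: the map only ever holds one
    entry per kept-label combination."""
    acc = defaultdict(float)
    for labels, value in measurements:
        acc[reduce_labels(labels)] += value
    return dict(acc)

def aggregate_after(measurements):
    """Accumulate at FULL cardinality, then merge down: a very simple
    reduction step, but the intermediate map can grow large."""
    full = defaultdict(float)
    for labels, value in measurements:
        full[tuple(sorted(labels.items()))] += value
    reduced = defaultdict(float)
    for labels, value in full.items():
        reduced[reduce_labels(dict(labels))] += value
    return dict(reduced)

data = [({"method": "GET", "status": "200", "host": "a"}, 1.0),
        ({"method": "GET", "status": "500", "host": "b"}, 2.0),
        ({"method": "POST", "status": "200", "host": "a"}, 3.0)]
```

Both placements produce the same result for an additive aggregation; the difference is purely the size of the intermediate map, which is the performance trade-off being discussed.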
A
That's an implementation detail; let's get started. It's a performance-relevant implementation detail, but it is a detail. It might be that doing dimensionality reduction before the accumulator is enough of a performance win that we can ignore this problem.
E
So, for me, how I would move this forward: first of all, we need to understand, for the random dev coming along and picking up this HTTP instrumentation, are we okay that they're going to pay a bit of extra cost, because they will have 12 labels and there is going to be some extra cost? There is no question about that.
E
Let's
assume
we
give
them
awesome
ability,
easy
ability
to
to
to
reduce
the
labels,
and
we
implement
it
very
carefully.
We
we're
gonna
test
that
josh
proposal
before
after
that
we
have
that.
Is
that
an
acceptable
behavior?
Is
that
something
that,
when,
when
an
user
comes
and
picks
up
this
http
client
instrumentation
by
default,
they
will
have
pretty
poor
performance
unless
they
tune
something.
Even
that
means
a
line
of
code
or
anything.
Is
that
something
acceptable?
Is
that
something
that
we
want
to
do.
F
Well,
okay,
so
you
got
to
have
the
other
side
right,
because
you're
you're
framing
the
concept
and
like
that's
a
poor
performance
thing,
but
like
for
vendors
like
new
relic
that
really
like
to
express
that
and
show
a
curated
view
of
a
lot
of
these
metrics.
It's
important
to
have
these
annotations
so
not
having
them
is
a
detriment
to
like
our
product,
and
I
think
that
that's
kind
of
an
important
thing
as
well.
F
Yeah, no, I hear you; I'm just trying to find the optimum. Where can we find resolution? Would it be appropriate, maybe, to add a disclaimer, so that the instrumenter is fully cognizant of the fact that, when you choose the labels you're implementing, this is going to have overhead on the downstream operator as well as the vendor? Is that an appropriate way to resolve this?
E
For New Relic customers, maybe, that may not be a problem, but for me, for example, when I was working at Google, giving this to me would have been a big problem. Again, I'm trying to raise this as a possible issue, and these are the implications I'm thinking about; that's why I was reacting this way. My reaction, indeed, is more like: by default, we should do less. If you want more, the data should be available, but we should not do more for you unless you ask for more.
E
I don't know; for example, another approach is that maybe we ship an SDK where, for the canonical HTTP metrics, we install specific views. We don't respect the default views; we install specific views for these metrics that will have fewer labels. That's an alternative as well.
E
Yeah, so, that being said, I would like everyone to think this way, and I would like us to think about the big problem, not just this specific case. Definitely, I want them all to be available. My question is: what is the default? What is, by default, going to happen to the user? That's, for me, the biggest concern, not the fact that they exist, not the fact that we record them. The other thing, by the way:
E
I
would
like
to
point
it.
Maybe
maybe
it's
premature,
but
we
should
think
about
this
until
we
spread
too
much
too
many
instrumentations.
Some
of
them
may
go
to
correlation
context
because
we
don't
want
them
to
be
recorded
in
spans.
We
don't
want
them
to
be
recorded
in
the
in
the
metrics
as
well.
Every
time
and
correlation
context
is
one
way
to
to
put
them
into
the
request
and
then
spams
will
pick
them
and
and
and
metrics
can
pick
them
just
giving
you
a
more.
A
If correlation context is going to get used, I want a way for it to stop being used. We don't have any kind of hop limit or TTL anymore, and that's what I would want. But I feel like that's off topic; I thought we lost that, but I don't know.
E
Do you know what we had in OpenCensus? We actually called it a TTL, but we allowed only two possible values, local or global; we didn't allow the TTL to be three or anything. That's actually close enough for me. Yeah, let's not discuss that now, but I would like people to think about that as well. I'm not necessarily saying we should not have all of these; I'm just trying, again, to raise it.
A
That
there's
a
proposal,
so
I
think
in
terms
of
the
go
sdk,
because
that's
the
one
I
work
in
mostly-
and
I
know
that
we're
pretty
careful
to
make
sure
there
aren't
any
allocations
or
those
very
minimum,
numbered
allocations.
When
you
put
12
labels,
it's
really
not
much
more
allocation
than
putting
four
labels.
It's
still
more
processing
costs.
You
can't
deny
that
so
I
think
it
might
be.
If
we
really
needed
to
dig
in
here
would
be
to
do
some
benchmarking
of
like
what
is
the
cost
of
12
labels
versus
four
labels.
A
And
then
what
is
the
cost
of
12
labels
that
you
then
reduce
down
to
four
labels,
because
there
is
some
cost
there,
and
I
I
think
that
it's
probably
going
to
be
not
so
bad
at
cost
and
go,
but
it
could
be
different
in
another
language.
I
know
there's
been
some
concerns
like
in
ruby
like
it's
very
expensive,
and
I
don't
I
don't
know
ruby,
so
I
can't
think
through
it,
but
it
might
be
a
concern
in
some
languages
to
think
about
so
there's
12.
F
I don't know. I like the pragmatism in that thought, but I'd probably caution against it for that exact reason, because there's an element of "well, what about this language, what about that language" that opens that pipe. And to draw the question back: Bogdan originally said something like, there actually are only, I think, three recommended based on context; really, there's only one.
F
Not all of these are actually recommended, I think, by the specification. So that's why I keep going back to the idea of thinking incrementally: how can we get this into the specification now, and how can we make progress on it as we go to GA? Is something like a disclaimer, saying "when you're building this out, keep all these things in mind, and maybe do benchmarking," enough?
F
E
For me, because this is the first case where we have more of them, I think it's worth spending a bit more time thinking about this. Personally, I'm fine with merging this as it is if everyone is okay, but I would like to think about how we would use correlation context in this case. Maybe some of them should be in the correlation context.
E
Secondly, I would like to do some benchmarks, as Josh pointed out, to see whether having all of them aggregated by default is not the end of the world, or not that bad. That being said, this is what I would like to do before we call this GA. Is this good enough, and is it a very good start? Indeed it is. Maybe I'm overthinking, and maybe I'm thinking more about how we want things to be when everything is there. Maybe I shouldn't put it this way, but this is it.
F
Okay, yeah, and that definitely has its place, for sure. So as action items for Graham: I heard you say in there that you're ready to merge this as is; I also heard you say that you want benchmarks. What are the next steps for Graham? I mean, if we merge.
E
If we merge this as is, I would immediately file issues to say we should benchmark this before GA, and a couple of other things like that. So I would still do, before GA, all of the things that I told you I would do before GA. It's a matter of: do people feel better if we merge it right now and file issues about this, versus we treat this as the first one and we debate in that PR?
A
F
I don't know, that's kind of what I was suggesting as well. I like that approach, yeah. I think that's a really great option because, honestly, like you just pointed out, I don't have nearly as much context in Ruby or Erlang; that's black magic to me, right? But the people who are writing the language SDKs have a lot of really good domain-specific knowledge, so it seems like they would be the best people to make these sorts of recommendations.
F
E
One thing that may be important, and maybe I don't know if we should care about this, is that the default behavior across languages should be kind of similar. If by default in Go I get only one label, and in Java I get 12, I would say that we kind of fail as a project.
E
I think we should try to keep things mostly in sync in terms of the default behaviors for some of these things. I really have to run, but yeah.
A
So Bogdan, don't run yet, please, we've got to talk about these code reviews in the collector. Since I'm not a maintainer or approver there, I'm kind of powerless, and this request from Alolita is really important. They've got extremely high-velocity interns; oh, they're great, I mean they're on the call, so everybody, you're doing great. I want to make sure that we get attention on these PRs.
A
I can promise to review all of these PRs today, but I can't promise to approve them, because I don't have the right permissions. So I just want to make sure: could you, or could you deputize someone, to pay attention to these, especially 1544 and 1525?
E
So wait a second, wasn't that the one I commented on a lot? I'm just confused about this. I left 10 or 15 comments on an issue that got closed; I think it was 1524.
B
Right, and 1544 addresses the issue you have in 1524. The reason we closed that is because my partner is no longer working on this project. So.
E
A
Sorry, yeah, sorry about that. I have been pleased with the size of the PRs I was reviewing in the Go contrib directory; there was nothing as large as this 2500. I haven't looked at this yet, but I'm willing to, so I'll do my best to take a first pass through these today.
A
E
I will also review, don't get me wrong, and I provided a lot of feedback on that 1524 because I saw it first, and I was expecting to get answers when the issue got closed. Anyway, I'll review 1544, I think it's 1544. So Josh, can we split it a bit? Can you review the contrib one, and I will focus on the core ones? Okay, and once that is approved by you, I will check and review it, though not that carefully, because I will trust you, and then I'll just do mine.
A
Okay, thank you, all right. Everyone: Bogdan is off the call, but if anyone else wants to speak about these topics, especially Alolita, or one of Yang or Eric or Connor, you're welcome to.
K
I can speak about something, a note on the SDK implementation. We had a question; it's a little bit language-specific, but it applies to both the C++ SDK and the Go SDK, so I think it's general enough to mention.
A
K
Okay, go for it. So we just had a question about unused SDK aggregators. If you could click that link real quick; yeah, I linked it right there.
K
If you look at this, it's a function in the metrics SDK for C++ that assigns aggregators based on instruments. You'll notice that only two aggregators, the counter aggregator and the min-max-sum-count aggregator, are ever mapped, and this leaves out the sketch, histogram, exact, and gauge aggregators, which go unused. There's a similar issue in the Go SDK as well.
K
So we had a question about whether this will be changed in the future. The comment at the top says it's currently not being used in the pipeline, but I guess the assumption is that it will be used down the line. The question still remains whether the other aggregators will get used at all.
A
So this is a great thing that the SDK spec ought to be answering, but it's not written yet, so part of this, I think, is just confusion from not having written those documents. But you're right. We discussed changing the default for value observer to the last-value aggregator, but this is what a views API is supposed to help us with: it's that we would be able to change the aggregator.
A
My thinking on this is that there's a piece of code that you can use to configure which aggregator you want. We've called that the aggregator selector, and the default would be something like the code we're looking at here. Then the configurable views SDK would allow you to change that using a YAML file or some much more detailed configuration struct.
L
Sorry, also to add on to that: the Python implementation already has some sort of views prototype that we're using, so we're going to be working on it iteratively. As Josh said, we actually moved the aggregator-selection function into this views file, and it's now called get default aggregator. So these things that you see here will only be used for instruments that don't have views defined for them; it creates a default aggregator.
L
But as you all said, there are mechanisms in place where we can actually configure which aggregator to use for each instrument. So this is removed and just used for the default case now.
A
Good, that's good. You know, the Go SDK is the one I've been using to explain things to others, but some of the detail in the Go SDK is not necessary to replicate. I noticed you had this; I mean, the file is named ungrouped processor.
A
We have renamed it; the name "ungrouped" left the Go SDK code base long ago, and we're calling it the basic processor. That's also where the conversions from delta to cumulative happen and such, so there's a lot of different code there. But there's still this notion of a selector that's called to choose the aggregator; there's a basic one that you can choose, and then there's a theoretical views one, but we haven't done that in Go. So what you're doing sounds fine.
K
Okay, cool, that pretty much answers it then. We just wanted to bring this up, but thank you.
A
Yeah, the value observer is going to change to last value, we know that, and then there's still an open question about value recorder that was discussed at the top of this hour. I would like to point out that it's five o'clock, at least in my time zone, and I think we are approximately finished. Did we miss anything? What was there at the end? That's pretty good, we kind of got to the end here. I was going to point out this PR of mine that has merged.