From YouTube: 2021-04-13 meeting
B
Okay, cool, I think we should probably get started. I don't know if any more folks can join in, so again, the focus is going to be on trying to get the metrics data model marked stable, or at least understand our timeline. That said, real quick I want to make a reminder: please add your name to the attendees list. And then, secondarily, there's the histogram OTEP, which is still out there, ready to be reviewed and looked at. Yeah.
B
I haven't made any public comments myself, but I have actually reviewed it, so I will do that — I'll make my comments relatively soon — but please remember to take a look at that; it was mentioned last meeting. Okay, with that, let's step into our blocking bugs and PRs for stability. The first one I want to walk into — and we might need Bogdan for this, because it's his PR — is the label-to-attribute change.
B
And I just want to understand next steps, because there are a few concerns I want to call out on the PR. What this does is finally make the change from having labels as a thing to having attributes as a thing. And — where's the comment — yeah: since, according to the maturity definition, we only allow breaking changes every three months, do we want to downgrade this to alpha? And I think it was Harold who mentioned that this is a deprecation, not a breaking change.
D
It is breaking, right? Yeah.
B
Well, if you look at what's listed as breaking and what's listed as deprecation here, we're not super consistent in terms of what goes where. I can't look at this and say what I would list as breaking and what I would list as deprecated, one versus the other. In this case, I think labels is still there but marked as deprecated, so it doesn't actually break until it's removed.
C
Yes — it feels disingenuous to me; it's breaking. We can't say we're not going to make any breaking changes if we're going to break things in three months in a planned way. Shouldn't we just call it breaking now? Because anyone building against the head of the proto is going to break — or, sorry, should break their code, by fixing it to use the new fields.
B
Immediately it comes down to what we're going to label the version that doesn't have the deprecated fields, right? So if we were at 1.0 and we made this change of deprecating something and moving to something else, and we were to release a version that no longer has the deprecated fields, we would actually have to bump a major version according to semver, because that would be a breaking change.
B
But if you leave the deprecated fields — or deprecated fields and messages — there, you actually haven't broken people, because they can still send them, and you still abide by them: the deprecated field is allowed, we just don't really want people to use it anymore. That's kind of the difference there, right? So if we're going to mark this thing as stable with deprecated fields, then the question is: what happens when we remove those deprecated fields? Are we declaring stability for the deprecation as well? So I hear what you're saying: yes, people will break if they're using the old fields. But we are abiding by the deprecation policy, and so it doesn't fully break the deprecated APIs until three months later. So I think, in the sense of the comment, we should give people the three months to move off the old fields, leave them there as deprecated, and not have them be breaking, to allow people to avoid churn. And personally, I'm still comfortable marking what's not deprecated as stable.
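For reference, here is a minimal sketch of the schema pattern being described: the old field left in place and marked deprecated alongside its replacement. The message shape, field names, and field numbers are illustrative, not the actual opentelemetry-proto definitions.

```proto
// Hypothetical data point message showing the deprecation pattern;
// not the real OTLP schema.
message NumberDataPoint {
  // The old field stays in place, so existing senders keep working.
  // `deprecated = true` is the standard protobuf field option.
  repeated StringKeyValue labels = 1 [deprecated = true];

  // The replacement field; receivers prefer this when both are present.
  repeated KeyValue attributes = 2;
}
```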
B
Like, I think we could do that, right? But there's this question of what the protocol version definitions look like going forward. You know: are we releasing a 1.0, or are we releasing a 0.8? My assumption is we're still releasing 0.x's.
E
I think once we hit 1.0, that period should probably be a lot longer than three months. Yes, we should be making major version releases much more slowly, but we should be following the same process: notice that it's deprecated, provide an alternative, give people time to move over to that alternative, then break them with a major version.
B
Do we have consensus around that? Like, we agree that we should give people time to — I assume so, since I don't hear major dissent — give people time to kind of migrate in this PR. But what are the next steps around it? Do we feel like it's ready to go and merge? Does anyone have major concerns? The second concern that I should pull up here is this concern around —
B
Oh, it's not actually on the PR. There was a concern around the performance implications of this change and being able to run benchmarks — I have dedicated a whole separate section here to benchmarking and benchmarking concerns. But I guess my question is: what are the next steps for this PR to go through? Does anyone have any major concerns with this going through now, as is, with just changing the documentation of the change to be a deprecation instead of breaking, and having those deprecated fields still on the protocol?
G
So, Josh, we kind of talked previously about a model for how to move — this is regarding what the process would look like for deprecating and breaking protocols — and it involved us potentially checking on the collectors and other downstream components. Maybe we should pull back up that thing that we wrote and make it more well-known, yeah.
B
I actually have that on my — you can't see it here, but on my notes, my list of things to do — to write up that OTEP.
B
I think that process is not in consensus yet, but the theory there is: we have this three-month policy, and I think Anthony just mentioned three months is not enough once we're 1.0, and my opinion is that the three months needs to be wiggleable based on usage and adoption and migration. So if the collector is relying on this old — sorry, if there's a whole bunch of users relying on the old deprecated behavior — we need to wait a reasonable amount of time for them to migrate.
B
What
is
a
reasonable
amount
of
time,
and
so
the
idea
here
is
that
we
figure
out
a
way
to
define
a
reasonable
amount
of
time
with,
like
some
kind
of
measurement
or
estimate
of
community
adoption
of
of
the
old
versus
the
new.
When
we
do
that
migration
and
yes,
I
want
to
write
that
out.
I
don't
think
we
need
to
write
that
otep
for
this
pr
to
go
in
that,
like
do
you
agree
or
not,.
B
Yeah, yeah — I... yes, I need to find more time in the day to get everything done. Yes, if you want to write that up, that's another thing that I think would be super valuable. We just need to make sure — I think what I'm hearing from people is that there's a hesitancy to make changes that break things, because there's this unknown of, like, how far the change ripples through the ecosystem, that kind of thing.
B
I think if we can define a process that people are comfortable with, and then follow the process, and kind of promote that process such that everybody understands what's going on — how long things take, when things can happen — and that we all kind of agree to it, that will crisp this up and get rid of some of the uncertainty where nobody wants to click the merge button.
B
I'm sorry for signing up for it and not doing it, but yeah — we should write that OTEP, and the OTEP needs to specify how to do these kinds of changes, what they look like, and how to evaluate when it's okay to actually make the breaking change going forward. So we know — as opposed to what we have now, which is just "three months," and no one's really comfortable with it.
B
Okay, cool. Next steps on this PR: again, I don't really want to block it on having that process agreed to across the whole ecosystem — I think that's going to take a lot of time for us to discuss. Does anyone feel very strongly that we shouldn't submit this PR?
H
Right, that's one thing for me. The biggest question, by the way, Josh, is the transformation to string and all that discussion — I was on vacation, so I don't know what happened in the past — but that's the biggest unknown that I'm not 100% confident on.
B
This sig has pushed that problem to that sig; that was where the discussion lies. So the idea would be: the metric data model sig doesn't specify any kind of to-string by default, and it's up to exporters to figure out how to stringify.
B
Yeah — what I'm suggesting, though, is that those backends can solve the problem. We will solve the Prometheus problem as a community, and if that solution works for all those other backends, they can also use the same thing. But we're constraining the problem to kind of get something out quicker.
C
Can I add — I think one of the key properties that we discussed in the moment of that conversation was the notion that if a user mixes labels or attributes with different types and the same name, either it's unintentional or it's incorrect behavior; like, you must try to avoid this condition.
C
Then it kind of doesn't matter what happens. And I feel like we keep encountering the same type of situation across this data model, where we're saying "you should not do this, and when we see data that has done this, we're just going to pass it through." It's essentially a stopgap, because we have no other options, so we're treating things that pass through as if they can't logically contradict each other. And so attributes had better never have ambiguous type.
C
To me, it seems like there are really only two valid outcomes. If you have a conflict of attribute type on some metric — like there's a string-valued one and an integer-valued one — the two outcomes are very obvious: one is you drop one of the points, and one is you add them together, because it's a sum point, for example. We can just pick one of those, and we can provide a function or processor to give you the other meaning.
C
We
can
give
you
a
processor
to
coerce
labels
into
strings,
and
if
you
do
that
processor
before
you
export,
then
the
data
model
prescribes
what
you
must
do,
which
is
to
add
those
points
together.
So
we
can
give
the
user
a
way
to
select
the
behavior
they
want
and
we
can
choose
one
of
the
defaults.
I
think
there
aren't
very
many
outcomes
here.
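As a rough illustration of that coercion idea — not the actual collector processor API, just a sketch with stand-in types — coercing every attribute value to its string form and then summing points whose attribute sets collide might look like:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Point is a stand-in for a sum data point: typed attributes plus a value.
type Point struct {
	Attrs map[string]interface{} // values may be string, int64, bool, ...
	Value float64
}

// coerceAndMerge coerces every attribute value to a string and adds together
// sum points whose attribute sets become identical after coercion — the
// "add those points together" behavior described above.
func coerceAndMerge(points []Point) map[string]float64 {
	merged := make(map[string]float64)
	for _, p := range points {
		keys := make([]string, 0, len(p.Attrs))
		for k := range p.Attrs {
			keys = append(keys, k)
		}
		sort.Strings(keys) // stable ordering so equal attribute sets collide
		var b strings.Builder
		for _, k := range keys {
			fmt.Fprintf(&b, "%s=%v;", k, p.Attrs[k]) // string coercion
		}
		merged[b.String()] += p.Value // sum points sharing a coerced identity
	}
	return merged
}
```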
H
Yes — except for some it's good, but for others it may not be possible. But I don't understand this performance concern — I'm not worried; as you pointed out, Josh, everyone, I think, would be able to fix that.
I
I've got enough perspective to share on this. Yeah — at InfluxData we're trying to support metrics, logs, and traces in kind of one storage engine, and this problem of typed attributes also exists in logs and traces.
I
So from our perspective, we're dealing with it anyway. So I kind of celebrate the opportunity to add types to metric labels — or calling them attributes, whatever. That's all.
B
Let's deal with it. And I think the key here is: I don't want to treat the problem generically across every single backend, where we have these discussions where nobody knows every backend. What I want to do is solve the problem for a specific backend and come up with a solution — and then, if we want to take that solution, use it for another backend, and adapt it, make it more —
B
— generic, that's fine. But when we've tried to have this discussion generically, across every possible backend people can imagine, it's not productive. So I want to actually nip that in the bud and have that discussion somewhere else, targeted at a specific backend. That's basically what we decided to do here; if anyone disagrees with that as an approach, feel free. I learned that from another senior engineer who taught me a lot about how to write code quickly.
B
Anyway — okay, any last comments here on next steps? Otherwise, ideally, I think we change the changelog to mark it as deprecated instead of breaking, and then we try to get it through. Sound reasonable?
B
The main reason I want to get it through quickly is that it's a deprecation that will have to ripple all the way through the APIs and SDKs, and I think it's super critical that we get the APIs generating data like this to begin with. So, yeah — any last-minute concerns people want to discuss?
B
Okay, let me actually just write in the notes the fact that we —
A
The typed-attribute-to-string problem must be dealt with somewhere.
B
All right, next up in blocking bugs and PRs: we have this benchmarking bug — and thank you, Victor, for looking into that and finding a few things that changed the shape of the data we were using for the benchmark. We were actually triggering what I would call a worst-case scenario in the Go protocol buffer implementation that was bad with some changes we made; that's really easy to do with benchmarks.
B
You know, changes against the previous release. So I started writing some of these requirements here for what we feel we need from benchmarking in the proto library, to make people feel comfortable with what we're designing — and that we're not designing something that is inherently slow.
B
So, you know, one of the most important languages in our ecosystem is Go, because the collectors are in Go, so I think at a minimum we need a Go benchmark. Tigran actually has a personal project that benchmarks the protocol library, and I think he'd be happy if we just took that and kind of lifted it up into elevated status. It is Go-only, but it does a good job of synthesizing data and shoving it through.
B
It basically tests serialization — taking a live instance of a protocol buffer and writing it into binary — and deserialization — taking the binary and turning it back into the instance in the language — and just measures the performance of that. So what I wanted to do was brainstorm some of the things that we feel we might need from a benchmarking suite. It's acceptable to say that we feel we don't need one.
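Concretely, the measurement being described maps onto a standard Go testing benchmark over any generated message type; a minimal sketch (the harness structure is assumed, not Tigran's actual code):

```go
package benchproto

import (
	"testing"

	"google.golang.org/protobuf/proto"
)

// benchRoundTrip measures marshal (live message -> bytes) and unmarshal
// (bytes -> live message) for a generated protobuf message. msg is a
// populated sample; newMsg returns an empty instance of the same type.
func benchRoundTrip(b *testing.B, msg proto.Message, newMsg func() proto.Message) {
	raw, err := proto.Marshal(msg)
	if err != nil {
		b.Fatal(err)
	}
	b.Run("marshal", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			if _, err := proto.Marshal(msg); err != nil {
				b.Fatal(err)
			}
		}
	})
	b.Run("unmarshal", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			if err := proto.Unmarshal(raw, newMsg()); err != nil {
				b.Fatal(err)
			}
		}
	})
}
```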
B
I just want to call that out as an option. But I want to walk through some ideas. So, in any case, at a minimum, I think we need Go and the collector to have some kind of benchmark, to know what encoding and decoding of protocol buffers looks like, because that's its bread and butter, right?
C
One of the reasons this was so hard when Josh and I ran those benchmarks last summer was that you have to have compiled protocol buffers and an alternate, and it's really hard to keep that in the repository — it's often done on a branch, and then you make an experimental change. So we don't have a way to version our protocol buffer artifacts to test against the past very easily, and that's one of the reasons there's an experimental subdirectory of the proto project: Bogdan was using that one, and it was so hard to build protobuf artifacts at that time.
B
I
want
a
metric
that
has
five
labels
and
30
values,
and
this
this
name
or
something
right
in
a
yaml
file
config,
and
then
I
have
a
binary
which
will
run
a
prototype
benchmark
against
a
defined
version
of
the
protocol
spec,
and
I
literally
compiled
different
binaries,
and
it
reads
this
config
for
what
the
data
shape
needs
to
be
right.
B
You
run
the
binary
and
the
binary
outputs
how
long
it
took
for
it
to
like
churn
through
that
data,
and
so
we
actually
specify
here
is
the
file
for
the
data
that
you're
going
to
be
reading.
And
writing
here
are
the
types
of
metrics
that
you
need
to
generate
for
encoding
and
decoding
right,
and
then
we
actually
run
separate
binaries,
and
this
allows
us
to
be
flexible
with
what
languages
people
want
to
test
and
the
protocol
buffer
versions,
because
you
only
have
to
link
one
proto
version
into
your
binary.
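A sketch of what that shape config could decode into on the Go side — the format and all field names here are hypothetical, just the "five labels, 30 values" idea written down:

```go
// ShapeConfig is a hypothetical schema for the YAML file described above;
// each entry tells the generator what synthetic metrics to produce.
type ShapeConfig struct {
	Metrics []MetricShape `yaml:"metrics"`
}

type MetricShape struct {
	Name        string `yaml:"name"`         // e.g. "http.server.duration"
	Type        string `yaml:"type"`         // "gauge", "sum", or "histogram"
	Labels      int    `yaml:"labels"`       // labels/attributes per point, e.g. 5
	LabelValues int    `yaml:"label_values"` // distinct values per label, e.g. 30
	Points      int    `yaml:"points"`       // number of data points to synthesize
}
```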
B
You just build a different binary for the other version, and so now your code — it's a little bit easier to deal with your code. The downside here is that you'll actually be running a different binary for each performance test, which means it's going to be a little bit harder to compare apples to apples between the two binaries. We're going to sacrifice a little bit of comparison quality — or stability in the test environment — for this flexibility.
B
But that was actually going to be my proposal for what we do here. And then, if we specify the YAML format for what these things read, we can also write a separate binary that generates that YAML data — a slew of representative example YAML-y things that we use to synthesize protos and then churn through. I can —
G
So I've been doing a little bit of this for both C# and Go, so I acknowledge that that's a particular problem, but I think that problem can be solved in many different ways, and I'll give two scenarios that I've come across. For C#, it's possible to just — and we talked about this a little bit in the C#,
G
you know, the .NET sig — it's potentially possible that we could just build a binary of just the protos and then keep those as NuGet packages, so that you could then dynamically link the appropriate versions that you want. So that's one approach. The approach that I took with Go, with Tigran's particular version, is that I actually compile all the protocols into one singular binary, and the only change I make is basically to update the proto file's namespace.
G
So I can actually have multiple namespaces for the different versions of the proto, and that's just a really quick change — we could make it programmatic, where we just change the namespace to the appropriate version. We could build all of the protocols into one binary and then do what we wish with that as well. And I'm sure there are other approaches too. So —
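In Go, that namespace trick falls out of import aliasing: two generated copies of the proto, regenerated under different Go package paths, can be linked into one binary side by side. A sketch with made-up import paths:

```go
package main

import (
	// Two generated versions of the same proto under different package
	// paths; these import paths are hypothetical.
	metricsold "example.com/bench/internal/gen/v0_7/metrics"
	metricsnew "example.com/bench/internal/gen/v0_8/metrics"
)

// Both versions coexist in one binary, so a single run can benchmark the
// old and new encodings back to back under identical conditions.
var (
	_ = metricsold.ResourceMetrics{}
	_ = metricsnew.ResourceMetrics{}
)
```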
B
Yeah — I mean, the key thing here, though: I want to call out two things that are important with namespacing. One is that the code isn't going to be exactly the same. So if you compile a binary and you're relying on static linking, and we add a new field, and we want that field to be representative in the data generation, you need to actually change the code of the benchmark to fill out that thing. Well — I know that Tigran's thing does that, and that's what you're saying, but in previous benchmarks —
G
No
so
so
at
least
I'll
give
you
my
experience
with
what
I
recently
did
with
tigran's
test
suite
right.
So
so
I
had
to
kind
of
check
out
different
versions
of
the
proto,
so
what
I
did
was
I
just
basically
built
diff
or
compiled
in
different
versions
of
that
proto
in
the
same
binary,
and
I
have
the
code
to
read
and
write
appropriately
kind
of
similar
to
what
you
mentioned
josh.
G
There's a higher-level specification of how many data items you want, how many labels you want — some shape and form of the data — and then, for a specific protocol, there's some code that will encode and decode to that format. All of that is built into one binary; the different protocols just happen, in this case, to have different namespaces, so the benchmarks are run as one unit now.
B
Yeah, yeah — and I think that makes a lot of sense. So I know Tigran's code works really well for Go, and you did it in Go and .NET — did you choose different solutions for the two, or is it basically the same-looking everything?
B
Okay. I mean, there's the question of whether we need to have all the languages. So let me ask this quickly: do we need the same set of data — the exact same protocol buffers — going through Go, C#, and Java, compared to the previous protocol, or other languages, you know, Python, all of these, to evaluate the protocol or not? Do we want it to be exactly the same data, or is it okay if it's just the same within a language, to compare the previous to the next version?
G
If you're asking me personally, I think the common denominator is actually the bytes being transmitted — that has to maintain compatibility across all languages. So then all we're benchmarking is: given some specification, how long it takes, and what the performance is, to get to the same bytes; and then, for the decoding, how long it takes for that set of bytes to be decoded into the proper language.
B
This is one of the ideas I had for a benchmarking suite: we literally write a server that deserializes the proto, does something to make sure the implementation doesn't just cache the raw bytes, and then re-serializes the proto. In Java, that means you literally have to touch every single string, which is a pain in the ass, but anyway.
B
So
so
you
basically
deserialize
proto
re-serialize
the
proto
as
a
server
in
a
bunch
of
different
languages,
and
then
the
we
make
the
benchmark
the
benchmark.
Is
I
compile
that
server
against?
You
know
otlp
version,
one
otop
version
two
and
there's
some
other
process
which
basically
creates
these
bytes
and
just
feeds
it
through
the
server
and
we
measure
how
long
serialization
deserialization
take
in
those
servers
like
through
the
course
of
shoving
data
at
it
right.
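A sketch of that echo-style harness in Go — unmarshal, walk every string via protobuf reflection so a lazy implementation can't serve cached bytes, then marshal again. This is generic over any generated message, not a real OTLP receiver:

```go
package main

import (
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protoreflect"
)

// touchStrings recursively visits set fields, accumulating string lengths so
// every string field is actually decoded and read (maps skipped for brevity).
func touchStrings(m protoreflect.Message, n *int) {
	m.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
		switch {
		case fd.IsMap():
			// skipped in this sketch
		case fd.IsList():
			list := v.List()
			for i := 0; i < list.Len(); i++ {
				if fd.Kind() == protoreflect.MessageKind {
					touchStrings(list.Get(i).Message(), n)
				} else if fd.Kind() == protoreflect.StringKind {
					*n += len(list.Get(i).String())
				}
			}
		case fd.Kind() == protoreflect.MessageKind:
			touchStrings(v.Message(), n)
		case fd.Kind() == protoreflect.StringKind:
			*n += len(v.String())
		}
		return true
	})
}

// echoRoundTrip decodes raw into msg, forces a full walk of the decoded
// data, and re-encodes — the deserialize/re-serialize server body.
func echoRoundTrip(raw []byte, msg proto.Message) ([]byte, error) {
	if err := proto.Unmarshal(raw, msg); err != nil {
		return nil, err
	}
	var touched int
	touchStrings(msg.ProtoReflect(), &touched)
	return proto.Marshal(msg)
}
```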
G
I
I
guess
the
question
is:
how
much
do
we
think
the
server
implementation
affects?
You
know.
You
know
our
performance
as
well
as
to
what
degree
of
you
know
down
the
pipeline.
Do
we
want
to
actually
go
so
at
least
in
the
in
tigrans
and
in
my
c-sharp
one?
We
were
primarily
only
focused
on
just
the
proto-buff
portion,
so
our
code
basically
just
takes
the
protobuf
implementation
of
the
language,
specific
language.
We
fill
in
the
thing
use
protobuf
to
generate
the
bytes,
so
that's
part
one
and
then
the
second
part
is.
G
G
B
B
— OTLP data, which, you know, in this case would be taking the proto and synthesizing it, but also going through gRPC, if that's what you're doing, if that's how you're exporting — and all of that kind of matters in the end. So from a protocol standpoint, you could consider it relevant; from a practical standpoint, it's more about what's the fastest thing we can implement
B
That
gives
us
the
best
notion
of
of
the
performance
of
what
we
have
where
we're
comfortable
making
changes
and
saying
like
this
change
is
okay
and
of
sufficient
performance
that
I'm
not
worried.
I
destroyed
the
ecosystem
right,
that's
that's
what
we
need.
So
from
that
standpoint,
just
testing
protos,
I
think,
is
fine.
B
It's
more!
What
I'm
asking
in
general
right,
because
again,
I
think
this
is
a
task
that
we
all
need
to
work
on
to
some
extent
and
like
get
components
of
this
done
is
what
which
direction
do
we
want
to
go
with
this
design,
where
we
feel
like
we're
going
to
get
the
best
coverage
and
the
best
bang
for
our
buck?
B
I'm specifically asking about and targeting multiple languages, and that might not actually matter. It might be that we decide we only want two supported languages — and that's reasonable — and then we can go with a more aggressive design that's quicker to get out the door, like what you already have, Victor: the fastest possible way to get this thing out the door, less flexible for other people who want to add new languages to evaluate — and that's fine.
B
We
don't
care
because
we
get
enough
signal
enough
information
that
we're
comfortable
releasing
this
right.
So
that's,
basically
the
the
meta
question
I'm
asking
is
like.
C
Is that the minimal requirement? Sorry — go ahead. Yeah, I don't want to belabor this; I don't think we should spend too much time on it, because it sounds like we're going to wait until this benchmarking is done to declare something stable, and then we're going to forget about this benchmark and never change the protocol again.
B
Well, I wouldn't say we're never going to change the protocol again. I think someone's going to make changes to the protocol, and they're going to want to evaluate the change — otherwise, I think it's not worth making this benchmark; we just use Tigran's results, make it a one-off, done. Because then, who cares?
B
For
the
next
month,
yeah,
that's
that's
one,
one
of
the
things
I
want
to
talk
about.
So
how
about
how
about
this?
Let
me
make
this
proposal
what,
if
we
take
what
victor
and
tigran
already
have
and
use
that
to
evaluate
things
in
the
near
term,
so
we're
not
blocked,
and
I
think
it
it
sounds
like
victor,
and
I
can
can
talk
about
this
offline,
we'll
work
on
trying
to
get
something
into
the
proto
repository
at
a
minimum.
H
I'm curious about one thing: have we considered the implications not only for performance but for semantics? Here's what I mean: for example, in the case of a number data point, which right now is an int or a double — what does it mean if the user does not send any of —
G
Question: wouldn't that just be, as Josh was mentioning, an error condition? We just drop it.
H
But they don't know if it's in that oneof. The encoding — the binary encoding — does not know that you are part of a oneof unless you have a definition of the oneof. No.
B
No, I know, I know — but it's a field that you don't know. So the question is: what should the receiver do when it gets a field that it doesn't know, in any scenario? Basically, the collector shouldn't be able to operate on the data itself in that case, because it can't see the data; and there's the theory that the collector should just pass the data along, as long as it doesn't have to unpack it to the point where it needs to understand it.
B
It
should
be
able
to
pass
it
on
like.
I
should
be
able
to
deserialize
that
proto
or
deserialize
the
header
of
that
proto
and
then
continue
to
send
the
proto
on
that's
the
definition
of
protocol
buffers
on
the
wire.
However,
looking
at
the
go
implementation,
I
don't
know
if
that's
really,
if
that
actually
works,
I
think
that
breaks.
B
B
The validations are on the header, effectively, not inside there with the unknown field. The idea is: if you have a service between two servers, and these two servers are up to version Y, and this server in the middle is on version X, but all it does is pass through based on the header and route — that's fine, right? Both of these will see their pieces of data, this guy can be a little bit behind, and it should be okay. That's the design of protos; I don't know if that is how it works in Go.
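This is checkable, for what it's worth: the current google.golang.org/protobuf runtime keeps fields it doesn't recognize in an unknown-fields set and re-emits them on marshal. A smoke-test sketch, assuming two generated schema versions exist:

```go
package main

import (
	"bytes"
	"fmt"

	"google.golang.org/protobuf/proto"
)

// checkPassThrough simulates the middle server: decode bytes produced by a
// newer schema into an older generated type, re-encode, and verify nothing
// was dropped. Unknown fields are preserved by default in the Go runtime.
func checkPassThrough(newBytes []byte, oldMsg proto.Message) error {
	if err := proto.Unmarshal(newBytes, oldMsg); err != nil {
		return err
	}
	reEncoded, err := proto.Marshal(oldMsg)
	if err != nil {
		return err
	}
	// Byte equality is stricter than necessary (wire ordering isn't
	// guaranteed in general), but works as a simple round-trip check.
	if !bytes.Equal(reEncoded, newBytes) {
		return fmt.Errorf("round trip changed the payload")
	}
	return nil
}
```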
C
There were no points there. And I think one day we'll be having a detailed conversation about semantic conventions for reporting dropped points and dropped metrics and so on, because our systems are doing that all over the place, for all kinds of reasons — but not today, yeah.
B
Okay — so I think we've gathered enough requirements here for the crazy thing we want to build, and for now we're comfortable just using what Victor and Tigran have worked on, right? Cool.
B
Any
major
disagreements
we'll
talk
about
that
bogged
in
your
concern,
I
think,
is
like
again
we're
gonna.
Take
that
into
the
other
discussion,
so
cool.
We
only
have
15
minutes
left,
which
means
we
only
have
five
minutes
for
some
of
these
discussions
around
aggregations,
rebuilding
from
deltas
and
safe
label,
remover
removal
josh.
I
know
you
were
waiting
on
me
to
write
this
spec
and
yeah.
I
was
out
for
a
bit
last
week,
so
I
apologize
I
didn't
get
to
it.
Is
there
anything
that
we
need?
What.
B
The document's fine. So — oh, you know what, I didn't have it here; there's one other PR. Okay, I did write a PR, and I want to call that out. Where is it?
C
You sent me a link to it. I know this one.
B
Yeah — it's under "specification." So this one targets the bug that Victor has opened around instruments.
B
Basically — so the single-writer PR is blocked right now, and I want to know how to make progress on it.
B
We'll
talk
about
that
in
a
second,
but
this
is
outlining
instruments
a
little
better,
and
so
please
take
a
look
at
this
because
I
think
the
most
important
thing
is
I
added
a
picture
of
how
an
instrument
could
lead
to
different
aggregations,
which
leads
to
different
metrics
and
how
it
calls
out
specifically
how
open
telemetry
tries
to
design
instruments
that
have
a
specific
known
aggregation
that
lead
to
a
specific
known
metric
stream
and
opens
the
discussion
around
views
effectively
or
what
views
were
so
I'd
like
to
get
I'd
like
to
get
some
review
on
this,
because
you
know
I
I
think
instruments
versus
metrics
remains
a
kind
of
confusing
piece
of
open
telemetry
and
I
just
wanted
to
draw
a
picture
and
then
put
verbage
around
it,
and
I
think
my
verbage
absolutely
sucks
and
the
picture
might
be
bad
too.
B
I specifically call out in this picture that it doesn't have to be the case if you're going through some sort of view-ish thing in the future. From a data model perspective, we are not tying these things together, and I want to call that out in the data model. So the API has full flexibility to do whatever the hell it wants.
C
I like this picture, Josh. One of the things that I felt was most essential in all the prototype designs we've done to date was giving a clear guideline — a clear understanding, or specification — of what default you're going to get
C
If
you
just
choose
one
of
these
instruments,
which
which
has
been,
I
feel
like
the
data
model
comes
into
question,
and
it's
sort
of
like
the
data
model
is
what
links
the
the
choice
of
instrument
with
a
default
outcome,
and
that's
why
there's
more
to
it
than
just
saying
we
can
do
anything.
We
want
to
choose
a
default.
B
Yeah
and
if
we
need
to
you
know,
expand
on
the
notion
of
defaults,
I
think
there's
a
place
for
it.
Where
did
I
put
that.
C
Yeah,
I'm
sorry
sorry,
sorry,
I
I
think
of
there
being
like
essentially
four
different
streams
in
otlp.
You've
got
gage
histogram,
some
and
and
non-monotonic
some
and
then
we've
got.
You
know
three
or
four
five
six
instruments,
and
the
point
is
that
they
actually
map
back
down
to
four
different
metric
otlp
stream
types
yeah,
because
temporality.
B
This diagram — okay, yeah — is not meant to be exhaustive; it's only meant to show the fan-out, effectively. Super cool. So, cool, all right: I want to call that out as a thing I added to help with that instrumentation — the blocking bug around instrument versus metric — to help people understand that last bit.
B
So
these
two,
I
think,
have
no
progress
and
that's
that
again
is
on
me
and
then
the
single
writer
pr
has
a
lot
of
open
questions
around
it
and
we
don't
have
a
ton
of
time
to
talk
here.
The
question
is:
is
anything
in
this
blocking
for
making
the
protocol
be
stable
and
in
as
much
as
this
pr
is
necessary
for
the
delta
to
cumulative
discussions
that
are
blocking?
B
I
think
I'd
like
to
make
progress
on
this
pr.
What
is
the
bit
that
needs
to
get
a
little
bit
driven
home,
and
this
is
I
I
asked
bogdan
if
he
was
going
to
be
here
specifically
to
talk
about
this.
Where
is
it
I
man?
The
way
this
threads
is
so
bad.
C
If you view the changes, it's easier to —
B
Yeah, okay. So there was a back-and-forth between Tigran and Josh — the other Josh, not me.
B
Yes — so, effectively, we had a discussion in this data model sig around an error scenario: somebody reports a metric stream, and that metric stream has a name, a set of attributes, and a specific data type, like a histogram; and someone else reports the same metric name, the same set of attributes, but a different metric type. What do we do?
B
We
discussed
in
this
sig
that
we
would
treat
them
as
separate
streams,
and
so
the
type
was
identifying
and
that
an
exporter
could
decide
how
to
handle
that
and
adapt
those
streams
back
together
if
it
wanted
to
reinterpret
points
or
whatever
the
hell
it
wanted
to
do
in
the
exporter.
But
from
our
standpoint
we
treat
them
as
completely
separate
streams,
and
we
also
call
out
that
that's
effectively
an
error
scenario
where
someone
is
using
the
different,
like
different
instruments
to
report
the
exact
same
metric,
okay,
so
that's
kind
of
what
we
talked
about
previously.
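Written down, the identity rule under discussion is roughly the following (types are illustrative — the only point is whether the data type participates in the stream key):

```go
// StreamKey is an illustrative identity for a metric stream under the rule
// described above: two writers that disagree on Type produce two distinct
// (if suspect) streams rather than one conflicting stream.
type StreamKey struct {
	Name     string
	AttrsKey string // canonical (e.g. sorted) encoding of the attribute set
	Type     string // "gauge" | "sum" | "histogram" | ...
}

// Dropping Type from this key is exactly the alternative mentioned below:
// a type mismatch then becomes an error to specify, not a separate stream.
```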
B
However,
in
the
discussion
here,
I
think
we've
gotten
kind
of
confused
around
what
that
consensus
was
so
I
kind
of
want
to
open
that
up.
We
only
have
five
minutes,
so
we
can't
talk
about
it
here.
We're
gonna
have
to
talk
about
it
on
the
pr,
but
I
just
want
to
remind
everyone.
That's
what
the
original
discussion
context
was
that
led
to
the
metric
data
type
being
part
of
the
identity
and
then
get
an
idea
for
how
to
make
progress
on
this
specification.
B
I
am
perfectly
fine
with
us
like
specking
out
this
differently,
where
we
say
the
metric
identity
does
not
include
the
data
type
and
then
having
a
specific
like
call
out
for
what
to
do
when
the
data
types
don't
line
up,
that's
totally.
Fine.
All
I
want
to
know
is
how
to
make
progress
on
the
specific
pr.
So
if
people
could
take
a
look
and
make
their
comments
and
what
they'd
like
to
see
that'd
be
ideal,
so
I
didn't
save
enough
time
to
actually
talk
about
it.
So
apologize.
B
C
Temporality, I thought — and I think I promised to write something, and I still am promising to do that, but I didn't do it in the last week.
C
It
was
this
question
about
missing
start
times
and
how
to
handle
it,
and
I
realized
that
we
have
options,
as
I
thought
through
it
a
bit
and
fyi.
I
I've
been
trying
to
answer
some
of
these
questions
in
the
single
writer
pr
by
writing
code
because
I
think,
ultimately,
the
the
purpose
of
this
spec
is
to
write.
Business
is
to
say
how
a
collector
plug-in
should
work,
and
so
I
was
trying
to
answer
it
from
that
angle.
C
You
do
end
up
with
temporality.
Having
this
this
option,
you
could
allow
a
zero
value
and
define
it
as
being
unknown
and
let
the
processor
reconstruct
things.
So
I
wanted
to
I'll
write
that
up,
I
guess,
but
in
some
level
what
I've?
What
I'm
seeing
is
that
we
need
a
like
a
plug-in
that
can
do
re-aggregation
and
then
all
forms
of
data
manipulation
can
be
defined
as
re-aggregation,
and
one
of
them
will
be
how
to
reconstruct
time
stamps,
for
example.
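As one concrete example of such a re-aggregation, reconstructing missing start times under the "zero means unknown" convention might look like this sketch (stand-in types; the stitching policy is an assumption, not the spec):

```go
// point is a stand-in for a delta data point in a single stream.
type point struct {
	StartUnixNano uint64 // 0 == unknown, per the proposed convention
	TimeUnixNano  uint64
}

// fixStartTimes reconstructs unknown start times as the end time of the
// previous point observed for the same stream.
func fixStartTimes(stream []point) {
	for i := range stream {
		if stream[i].StartUnixNano != 0 {
			continue // sender supplied a start time; leave it alone
		}
		if i > 0 {
			stream[i].StartUnixNano = stream[i-1].TimeUnixNano
		}
		// For i == 0 there is no earlier point to stitch to; the first
		// point's start stays unknown until a policy decides otherwise.
	}
}
```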
C
So I'm not sure if that means it's — it seems like it's actually second priority to discussing re-aggregations, as you say.
B
Oh — oh, that's a great question: what is acceptable performance? And the problem here is, I don't know if we have an answer. So who defines what acceptable performance is? I think the real answer is the TC, so that would be more of a question for Josh, Bogdan, Tigran — folks on the TC — because I think they're the ones who will actually set that standard.
B
I
don't
know
what
the
standard
is
off
the
top
of
my
head
and
it
could
be
one
of
these
like
what
is
what
is
what
is
good?
You
know
it.
When
you
see
it
right,
it
could
be
one
of
those
philosophical
discussions.
We
don't
actually
have
a
set
of
requirements
for
what
that
standard
is
at
all
that
I
know
of
so
to
some
extent,
if
someone
on
the
tc
says
hey,
this
is
a
problem.
I
want
you
to
look
into
this.
We
look
into
it.
B
I
don't
think
that's
a
good
ongoing
thing,
so
we
should
come
up
with
a
standard,
so
that's
also
something
we
should
have
a
discussion
around.
I
can
try
to
raise
that
in
some
of
my
other
101s
with
people
to
try
to
get
an
idea
of
what
what
the
requirement
really
is
around
performance
or
if
we
want
to
specify
one
or
like
what
we
should
do
here,
but
effectively
at
a
minimum.
We
should
measure
it
and
the
tc
will
say
yes,
no
kind
of
a
thing
right.
G
Yeah — so, for Josh and Bogdan, just information: if you look at the bug, I have some more information for you guys from a benchmark perspective, to give you some measurement of the performance difference from 0.4 to 0.8 and the impact of the oneof as it relates to Go. So you guys can look at that and see if that's within your — you know, if that's okay, basically.
C
For
the
record,
I
did
these
benchmarks
very
similarly.
Last
summer
I
was
in
favor
of
the
same
change
that
we're
doing
now
so
and
tigran.
I
view
tigre
as
a
gatekeeper,
and
he
is
doing
that
right
now
and
I
think
it's
us
it's
up
to
us
to
convince
him
that
this
is
okay.
B
So
I
think
that
means
next
step
is
I'll
schedule,
a
meeting
with
tigran
and
just
talk
to
him
specifically
in
person
to
do
a
high
bandwidth
discussion
on
this,
or
we
can
try
to
invite
him
into
the
sig
for
specific
discussion
after
we
have
all
of
our
changes
done
and
we
can
talk
about
performance
and
concerns
one
of
the
two
but
I'll
ping
him
directly.