From YouTube: 2021-06-08 meeting
A: Yeah, my limit's about 95 Fahrenheit, at which point I will just hide inside and not leave my house, or go to another air-conditioned location.
A: All right. If everybody has had a chance to open up the notes, please add your name and any agenda items that you may or may not have. Let's see, OpenTelemetry... am I sharing the correct window? Can anyone tell me?
A: Oh, Victor, I have your PR to discuss up at the top. Oh, okay, yeah. I was actually going to try to cover that first, if that's okay.
A: Okay, I think the first thing we want to discuss here is the histogram bit flag. I think it was Jonathan who said... throwing Victor under the bus... yeah, go for it, Victor. Do you want to tell us?
B: Well, yeah. I tried to limit the scope, because I think we talked a little bit about future growth and all that, and adding this bit flag across potentially different number types, potentially at the higher level where we have the one-of on the metric, is going to be a bit complicated until we first figure out what we want to do for the most immediate issue, which is just the histogram portion.
B: So I put out a PR just to start getting some feedback and comments on, basically, a proposal for doing that based on our past conversation; I think it's #309 or something like that. It basically just proposed a simple bit flag in the histogram number type itself, and I got some feedback from Tigran and so forth. So, a couple of areas: one thing I didn't understand is how we do the versioning per se.
B: And then, if we just add a flag by itself, old clients would not know about it. So if we were to do a...
B: ...flag by itself, then old clients won't know about it. That means we still have to encode what we currently have in a way that would still be compatible going forward. So one easy thing, given where we are right now, is that we have the count.
B: What that will do for the older clients, essentially, is that they will know there's no data point, but they don't necessarily know whether that's because there's no recorded data value or it's some other error. So presumably they would just treat it as bad and ignore it, if you will. For newer clients coming in, they'll see this count equals zero, so they may then say: hey, there's a new feature flag.
B: We could check the flag to see whether it is indeed bad data, or it is, quote, one of these no-recorded-value things. I don't know whether the term "no recorded value" is the correct one, but I don't think NaN is the correct term. So we can debate that naming as we go.
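For reference, here is a minimal sketch of the behavior being described, assuming a hypothetical flag constant. This is illustrative Python, not the OTLP-generated classes or the actual flag name from the PR:

```python
# A minimal sketch, not actual OTLP code: how old and new consumers might
# diverge on count == 0 once the proposed bit flag exists.
FLAG_NO_RECORDED_VALUE = 0x1  # hypothetical bit value from the PR under discussion

def classify_histogram_point(count: int, flags: int) -> str:
    if count == 0 and flags & FLAG_NO_RECORDED_VALUE:
        # A new client checks the flag: the producer explicitly signaled
        # "nothing was recorded this interval", not a malformed point.
        return "no-recorded-value"
    if count == 0:
        # An old client never checks flags, so count == 0 is ambiguous:
        # no data, or some other error. It would likely ignore the point.
        return "empty-or-unknown"
    return "normal"

print(classify_histogram_point(count=0, flags=FLAG_NO_RECORDED_VALUE))  # no-recorded-value
print(classify_histogram_point(count=0, flags=0))                       # empty-or-unknown
```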
B: Right, right. And then, moving forward on that, I think the best thing we could do is just basically test it, because, without changing the protocol, we just need to pass in some count equals zero and see if an existing component is going to fall over or not. I didn't see anything in the spec that necessarily says count equals zero is not allowed.
B: So presumably, if you followed the spec prior to this, you would still have to somehow figure out how to handle count equals zero across our protocol; not just this one, but across a whole bunch of others: the number data type, the histogram data type, and the summary data type per se. So that's kind of where I am. The next question is: are we taking this further?
B: Taking the flag concept further than just histogram, are we going to also introduce this elsewhere? The next step, probably, if we think this is the right approach, is that we'd potentially introduce a flag to, let's say, the number data type and then also the summary data type, along with the histogram data type.
A: So yeah, there's a second flag. Next steps... yep, there's a second flag for histogram, which is, sorry, "allows negative measurements". Now, that one's hard, because we already...
A: We already have something, and if we start allowing negative measurements in sum, right... if there's an old client with Prometheus, they will start exporting count and exporting...
A: ...right, as a non-monotonic, yeah, non-monotonic cumulative, whatever the hell math term we use for that, meaning it doesn't have negative values, right? Whereas we are allowing negative values. So that's an interesting one, because I like this count-equals-zero solution; that's a really elegant way for us to be able to record this, right?
B: The approach is: if we are good with doing flags, we'll continue to build on using flags per se, right? Because I think, if we did not use flags and instead used individual fields per message in the protobuf messaging, there's some concern about sizing. We did some checking on the size and stuff, and there's a potential six percent growth in size. There is a concern about flags, though.
A: So what we're doing is expanding the spec. If you see this flag, we can allow the count to be zero, and we can update the spec to say that. Right now, when it comes to sum, we say "sum of values in the population; if the count is zero, then this field must be zero", which is interesting, because we also said the count can't be zero... or I guess it's just non-negative. Anyway, "the value must be equal to the sum of the sum fields and buckets of the histograms provided".
A: "Sum should only be filled out when measuring non-negative discrete events", right? So we talk about this specifically here in this note. Now there's this question: if you see a zero, it means that sum would have exactly this behavior; but if you see that second bit, then we could say that sum has a different behavior, and update the spec to specify that. The downside is that existing clients will treat sum the same way no matter what, because they won't look at the bit. But I think that's fine from an evolution standpoint of protocol buffers.
B: Yeah, anyway. So I just threw this out there as I was thinking about the versioning issue with the protobuf. You know, we don't have a version of the protobuf.
A: We do... it's just... it's weird! It's in this URL here, right, which is... hold on... but it's not...
A: It is, but it's hard to describe. First of all, this is in a package that has a version associated with it, which means, if you're using reflection, protobuf reflection or the like, you will actually see it. The other place where we have it, I believe, is that the service is under this package.
A: No, no, no. So if you're relying on the gRPC side of this story, okay, the version-two protocol buffer will actually be a different type than the version-one protocol buffer.
A: You can't send the version-one protocol buffer to the version-two endpoint, okay? Now, they might be the same; they might be compatible at runtime. But from a practical standpoint, there's a client-level integration that prevents that from happening. So, from a v1 to a v2, theoretically we have this notion of a versioning, and at the runtime level there's that negotiation with gRPC to say "I'm talking to this service", and I believe it's fully namespaced, so you get that v1 and that v2 in it. So we have this macro level.
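A small sketch of that namespacing point. The v1 method path below is the real fully qualified OTLP metrics route; the v2 path is purely hypothetical, since no v2 exists:

```python
# The version lives in the protobuf package, so a hypothetical v2 message is
# a distinct type and a distinct fully qualified gRPC method, not an in-band
# version field.
V1_EXPORT = "/opentelemetry.proto.collector.metrics.v1.MetricsService/Export"
V2_EXPORT = "/opentelemetry.proto.collector.metrics.v2.MetricsService/Export"  # hypothetical

def route(method: str) -> str:
    # A server dispatches on the fully qualified name, so an old server
    # answers a v2 call with "unimplemented" instead of misreading bytes.
    return "v1 handler" if method == V1_EXPORT else "unimplemented"

print(route(V1_EXPORT))  # v1 handler
print(route(V2_EXPORT))  # unimplemented
```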
A: And we've never made a v2, right? And we also want to avoid making a v2 as much as we can.
B: You know, "not a recorded value" and so forth... Because we don't, quote, have versioning, practically, at this moment, I'm wondering if maybe we should reserve the flag to denote, rather than a particular event, a way of interpreting data. So imagine what we have right now, where we have the flag of no-recorded-value: what if we said the flag here is "interpretation one", or whatever...
B: ...whatever that term is, right. And if you see the flag of interpretation one, then you know, based on that, that if you see a count of zero, you can treat it as "no recorded value"; along with, if you see the sum, you can treat it as inclusive or not inclusive. So, basically, we use the flag to know how to interpret.
A: Kind of curious what people think of that? I have some thoughts, but I'm going to hold them for a moment.
A: Theoretically, everything should be against the same version, so you should be able to version well above where this flag lives. From a collector standpoint, if you're talking about handling batches of ingestion from an SDK, like "I got a metric batch", you can actually look at that batch holistically as a series of metrics. So you could record the version number way up at the top, once; in your notion of interpretation, you could record that high up, and we could issue an error if the version is newer than something we handle, if we want to allow breaking changes in that fashion of "you need to know how to interpret this data".
A: If we have a single writer, I think that means any kind of version semantic you might want actually belongs way up at the top, at the highest possible level, because we know one writer is going to be producing this, so we're not duplicating that data over and over again. So the first thought is just: if we're going to do some sort of versioning thing, I want it super high up in the hierarchy.
A: The second thing, though, is that I feel the abstraction really detracts from the readability of the data model. I like what you're trying to get at; I just don't know if the cost is necessarily going to be worth it, for two reasons. One, I'd like to avoid breaking changes in versions if we can. So if we make a new interpretation and you don't have access to it, what do you do?
A: Do I die, or do I just interpret it the way I'm interpreting it now, with us, as a committee, deciding to always do things in a future-compatible way so that that stays correct?
A: I think we want to do the latter, to make sure we aren't breaking clients as things evolve, just because I think the alternative is a lot worse for users. And then the second bit is: when I see this "interpretation one" thing, how hard is it for me to figure out how to interpret it and read it in code?
A: I like the name that you gave this: "we didn't get a value here", "this is not a recorded value", "this is an error value". Let's give it a crystal-clear name, so people know how to interpret the thing they're getting, because I think if we do this interpretation thing, it's going to be hard to consume the format, and there are only so many times I'm happy to make it harder to consume.
A: I really like what you're trying to do, and I'm trying to think through other alternatives; I'm not coming up with anything great. I think this sum issue is kind of our biggest deal right now. Anyone have any thoughts? It's kind of a quiet audience today.
A: Yeah, I'm going to start calling on people, that's what I'll do, just to get some more opinions and advice here. All right. So, from what I'm hearing: let's give people time to process, give people time to read your proposal, and I'm going to ask for two things in terms of action items. Would you mind writing this down in the PR?
A: Yes, like, specifically what old clients will see and how they'd interact with it. And then this action item: we should call out that we're going to do a test of how count equals zero is handled. (Yep, I could do that.) In the interest of making progress, instead of looking at how to use this flag on other data points...
A: Let's, for now... and tell me if you're okay with this, because we might find a micro optimum, or a local optimum instead of a global optimum. But let's just look at how to deal with sum (yeah, sure)...
A: ...as the next thing we do with the flag, and come to an agreement that we all accept. I think there's enough complexity there that if we come up with a solution people feel comfortable with and like, that'll be great.
A: Okay, sorry, we couldn't make a decision today, Victor, but we tried, you know. All right, so, moving on to exponential bucketing: I'm just calling out that this is an OTEP that's sitting here.
A: I think most of us have agreed that we'd like to see it; there are a few action items on it. I want to call out: what do we think? I don't know if anyone's here who can actually make progress on next steps, any owners of this OTEP, but I just want to call out that it's sitting here. If you have comments on it, please comment.
A: OpenHistogram has been re-licensed to Apache 2... interesting, okay, cool. Anyway, I'd like to make progress here. If anyone here is an owner of this, please make progress. I think it has enough approvals that we can start getting something merged and get a discussion going around exponential bucketing.
A: Cool, next one. We have an open PR for min/max fields on histogram, and this is from March 15th.
A: What it doesn't say is whether or not... I guess it doesn't have to, because these would be gauges in Prometheus. If anyone has any concerns over this, please raise them. I think we decided not to move forward with this in the near term, but I'm curious whether anyone else wants to see it.
C: I believe at least max is extremely useful for histograms, and I believe it is something everyone should present on their dashboards instead of p90, p95, p99.
A: Yeah. From an exporter standpoint, if your backend doesn't support histograms with min/max, you can still export separate time series for min and max with these; it's just that we transport them all as a bundle. I think the discussion previously was: should we have a group, a family of metrics, where you report a histogram and a min metric and a max metric for the same name, as opposed to putting it in the protocol that min and max are optional.
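A hedged sketch of that exporter fallback. The HistogramPoint shape and the .min/.max series suffixes are assumptions for illustration, not an actual exporter API:

```python
# When a backend has no native min/max on histograms, the optional fields
# could be re-emitted as separate time series under the same name.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class HistogramPoint:
    name: str
    count: int
    sum: float
    min: Optional[float] = None  # optional fields, as in the PR
    max: Optional[float] = None

def explode_for_backend(p: HistogramPoint) -> List[Tuple[str, float]]:
    series = [(p.name + ".count", float(p.count)), (p.name + ".sum", p.sum)]
    if p.min is not None:
        series.append((p.name + ".min", p.min))
    if p.max is not None:
        series.append((p.name + ".max", p.max))
    return series

print(explode_for_backend(HistogramPoint("latency", 10, 42.0, 0.1, 9.9)))
```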
A: I agree with you that this is super useful, and I would be okay with it being bundled into histogram, because I don't think we want to go through the process of metric families with the current structure of our metrics.
C: It's kind of like a package: if you start measuring something and you use a histogram, so you start measuring a distribution, then you get this out of the box and you don't need to think about whether you will get them or not; in the end it will be exported and it will always be there. Also, I believe a counter would be very useful as well, like how many...
A: Cool, yeah, that's already there. So it has a counter, it has a sum, and it has exemplars, so you can actually record specific instances of things. In fact, this is such a stale PR that I think it doesn't show the up-to-date histogram boundaries, but yeah: we have a count of how many items are in it, we have a sum, you have your buckets, and then... was there something else besides exemplars? I don't think so.
A: So, right, the counter-proposals to this: one was that we can put min and max in as exemplars that are meant to always be there, which no one really liked, but it is an option that is compatible with what we have today.
A: We could have this metric-family idea, or we could just put them in histogram. I would say, if you're a fan of just throwing them in histogram, please comment on the PR and approve. If we get enough approvals, I'll ping Josh and get him to update it, and we'll get it merged, because I think it's a non-breaking change and it's incredibly useful. Also, if you look, they're marked as optional.
C: Okay, actually, I would have a question about min. If we could, I would like to document some kind of use cases or justification for min, because I believe it's kind of a constant debate in the metrics space: why would you record min at all, what is the use case? And usually the answer is "because my manager asked me to do that".
C: I have some kind of very weird use cases where setting up an alert on min would make sense, but that's about where I would be.
A: Are you assuming negative measurements? I would argue that when there are no negative measurements possible, then, yeah, I'd agree with you: don't fill out min. But when there are negative measurements, you have the same problem that you have with max, where you don't know what the lower bound of your buckets is.
C: I might be mistaken, but I thought the OpenTelemetry histogram will not support negative measurements.
A: It does support negative measurements. Okay: if you allow negative measurements, then you can't produce sum. And so...
A: That would be it; this would be like the opposite. Min only makes sense when you have negative measurements, because otherwise your min... right, okay, it could be...
A: Okay, yeah, there are a bunch of dumb nuances, and the whole reason for that negative-measurement thing is just so we know whether or not sum can get exported to Prometheus on its own.
A: Okay, cool. So please comment on this; I don't think we need to talk more about it. I mentioned the three concerns with it. Personally, you know what my bias is: I approved the CL, I'm a fan of it. Please go approve it if you're a fan of it. If we get enough approvals in that repo to get the thing merged, I'll ping Josh to update it and we'll get that through.
C: Yeah, I'm not a hundred percent sure I'm in the right meeting with this question, but what are the currently supported, or officially supported, transport protocols for OTLP? I know gRPC was always there, but I have seen HTTP mentioned a couple of times in different places as well. Will that be an officially supported transport protocol for OTLP?
A: The answer is yes; in terms of timing, I'm not sure when. But let's talk through it first: you have gRPC binary, right... sorry, you have gRPC, you have protobuf over HTTP, and then you have JSON over HTTP, if I recall correctly, and those are three distinct things.
C: Okay, okay, that's great. Just to give some background:
C: I am asking this because I also work on Spring Cloud Sleuth, which is the distributed tracing library for Spring, and we had issues where users complained that they tried to use the Java OTLP exporter and were not able to use it out of the box, because the Java OTLP exporter does not give you a client or a gRPC transport out of the box. And this is because there are dependency issues: the client needs to know which gRPC transports OTel supports.
C: So it's kind of weird, and we would like to give an out-of-the-box user experience to the users, which means that we will need a default transport or default client for OTLP. With HTTP, I believe that should be easy; with gRPC, I believe that is borderline impossible.
E: Sorry, I just want to point out that the specification actually doesn't require that you support all three of the protocols that Josh just listed; it requires that you support one of them, and it's not defined which one. So keep in mind that in one language it may be that HTTP JSON is supported, and in another it's gRPC with the protobuf.
A: One call-out is that the HTTP JSON format is still experimental. So if you're talking protobuf over HTTP, then you're fine; if you're talking JSON over HTTP, then it's still experimental.
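Summarizing the three transports as discussed; the dictionary keys here are illustrative labels, not official configuration values:

```python
# The three OTLP transports mentioned in this exchange. The JSON status
# reflects what was said in the meeting, not necessarily the current spec.
OTLP_TRANSPORTS = {
    "grpc":          "protobuf over gRPC",
    "http/protobuf": "protobuf over HTTP",
    "http/json":     "JSON over HTTP (still experimental at this point)",
}

for name, desc in OTLP_TRANSPORTS.items():
    print(f"{name}: {desc}")
```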
A: So, you know, it depends on which one you can do. And in terms of the gRPC-related issue: personally, I'd like to dive into that as well. I think you should push on this regardless; I just want to dive into what, specifically, the gRPC-related issue is and see if we can fix it on the Java side.
C: Literally two seconds, like, one, two: I can paste the issue into the document so that you can see it here.
D: I'll fix this to be the right one. And to add some context here: I believe the collector does support both the gRPC binary one and the HTTP one, and the reason we need both, I believe, is, number one, some languages might require a proxy, and HTTP makes proxying easier. Number two is, I know Ruby doesn't have good support for gRPC protobuf, and Ruby currently has the HTTP one, and the GitHub folks are thinking about whether they could contribute back a better library for HTTP JSON.
A: Well, no. So with this particular issue, I know what we did: the Google Cloud exporter has the same issue, effectively, that if you take a dependency on gRPC, you can actually break users who depend on different versions of gRPC. So in the Cloud exporter we have two modes of consumption. In one, you depend on our exporter and we try to use the gRPC client version that you're using as well; we're not using the compile-time-only trick to force the version that you have.
A: We actually force a version-conflict resolution in your dependency resolver, just so it's clear that there's an issue, even when it's not obvious, because freaking dependency resolution in Java has been my bane for ten years. Anyway, the second thing we do is we completely shade the exporter, so that gRPC is completely hidden; we have our own copy of that entire dependency tree.
A: So, from the Java standpoint, I think maybe pushing on that bug to have a shaded version of the gRPC exporter might be useful. Also, having other protocols that you can export means you have different sets of dependencies, because, again, it's pick-your-poison with exporters: how big do you want it to be?
A: What dependencies do you want to use, and how efficient do you want it to be? Users should have options. So yeah, that's why I wanted to ask and push on that a little bit, and I think that's going to be a problem in most languages as well, the whole dependency-hell thing. So, good call-out.
A: Okay, let's move on. I wanted to have a quick discussion on reporting dropped metrics. This was the thing that we had discussed in the last data model SIG and said we'd talk about today. I think the bug was called "reporting dropped metrics", not "reporting dropped metric data", which is what I kind of expanded it to. So what I did was: we asked a question around sum and dropped metrics...
A: This was an issue where we might not have delta-to-cumulative sum conversion, or, if we do, we might have out-of-order points and have to drop metric points, because the collector can't handle them or might not have implemented time alignment, all this kind of junk. So we have the question: how do we report dropped metrics, or dropped points, or dropped attributes? I took some notes on the current state of the world.
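As a concrete illustration of one of those failure scenarios, here is a minimal sketch, not collector code, of a delta-to-cumulative conversion that has to drop out-of-order points and could count what it drops:

```python
# An out-of-order delta cannot be applied safely, so the converter drops it
# and (the question at hand) could count and report the drop.
def delta_to_cumulative(points):
    """points: (time, value) deltas; returns (cumulative series, drop count)."""
    total, last_time, dropped = 0.0, None, 0
    out = []
    for t, v in points:
        if last_time is not None and t <= last_time:
            dropped += 1  # out of order: cannot safely accumulate
            continue
        total += v
        last_time = t
        out.append((t, total))
    return out, dropped

print(delta_to_cumulative([(1, 5.0), (2, 3.0), (1, 9.0), (3, 1.0)]))
# ([(1, 5.0), (2, 8.0), (3, 9.0)], 1)
```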
A: First, tracing: the OpenTelemetry tracing specification, to preserve memory and performance, allows dropping data when the user goes over some limit. So if I specify too many attributes for a span or for an event, or if I specify too many events per span, there are configurable limits in the SDK where, once you go over them, it actually drops anything additional that you add, and it will report that it dropped some things.
A: Okay, metrics. I think, because identity and attributes are so closely correlated, we don't drop attributes; that's just not a thing we do. We kind of focus on aggregating data to get efficiency. This would be like using histograms instead of reporting a crap ton of buckets, or using a gauge to pull a value every once in a while instead of recording it very, very frequently and sucking up memory. So there are different sets of controls for handling efficiency in metrics.
A: So I tried to focus on the ones that we plan to use, where data could get dropped, and what we want to do to report it. For histogram: I am recording a bunch of local aggregations, and I have this field called count for how many of them I've gotten.
A: Okay, for sums. For synchronous instruments, it's possible that we didn't see any values recorded during a particular reporting interval, because the sum runs in line with, say, HTTP requests, and maybe we didn't get any during this reporting interval. So it's possible we have no values for a sum; it's possible we know about a metric and there were no values, and we could report that there were no values for that metric during this interval. That's a possible thing.
A: We can do that; it's related to Victor's proposal on histograms, but this would be specific to sums. For an asynchronous sum, there's this issue where I go to call the callback for the sum, and some kind of exception or related language feature has an error retrieving that sum at that particular point in time. It's possible we want to report that to someone upstream, saying: hey, we know that this sum exists, we tried to grab the value for it, and we got nothing; there was an exception.
A: So that's something we might want to report. Similarly for gauges: gauges are, right now, only async, so it's possible an exception occurs when we try to go grab the value, and that could get reported. The View/Processor API is getting fleshed out; it's possible, with views and processors and things like that, that you can actually drop metrics and say "I don't want this to leave, I want this to go away". It's possible that we'd want to actually report that.
A: I don't know if I like that idea at all, but I'm calling it out as a thing that can happen. And, lastly, there's an exemplar portion of the data model that isn't really fleshed out in the API yet. Exemplars are this notion of, for a histogram, recording specific points, so I know that this point existed and that point existed, and I try to record the trace ID and the span ID of that point within the histogram.
A: That way I can correlate my metrics and my logs. And there's going to be some kind of sampler for exemplars, to try to grab interesting ones, that kind of junk; there's a whole bunch of exemplar work to do. Exemplars are interesting because we can record the number of them we've dropped. If the user says "please record this exemplar", that's the one area where we can be picky about how much memory we want to use and say: sorry, you gave us too many exemplars.
A: We can't keep all this data; we're dropping some. Cool. So I think exemplars are a more interesting use case for drops, and for telling users when we had to drop them, because I think it's also one of the places we're more aggressive. So, from an API/SDK standpoint, those are all the possible issues that we have.
A: From a collector perspective, it gets even more interesting. There's a memory limiter where, as it's consuming metrics and data, if it's out of RAM, it will just drop incoming data completely to prevent the collector from crashing when it's overloaded. So you'll literally get spotty metrics pushed upstream, because it's just dropping things on the floor. It does the same with traces.
A: I don't think it's realistic for us to report anything more than is already reported by the memory limiter itself, because if we try to get a high-fidelity metric drop count out of that, we're probably allocating enough memory that we're not saving memory; it's a catch-22. Okay, then there's an attributes processor, where users can configure adding and removing attributes. Do we want to actually record how many were added and removed?
A: That, again, I'm not sure about; I personally am leaning towards no, I'm just calling out something that happens. Then there's this metric transform processor, where you can do whatever the heck you want to metrics. And lastly, from an integration standpoint, we have pull-based metrics, where we can literally know a metric doesn't exist: we go and try to pull it, we can't actually access the resource, and we want to report that this metric wasn't there. Okay, so these are kind of the...
A: The idea here is: there are failure scenarios that can happen during metrics collection and aggregation, and, to the extent that we can observe our observability, or observe our telemetry, what are we providing upstream in the data model for people to use to get insight into mistakes and things that are of concern?
A: For example, error metric points whenever we have issues doing one of these asynchronous measurements: is that a thing we want to provide, yea or nay? Or should the SDK be reporting the number of measurements that participate in a sum? So, for an asynchronous sum, or, it's not as important for the synchronous sum, where we're trying to count something as requests come in: do we want to actually record the number of points that were included in that?
A: That feels like its own sum; that feels like something you'd be reporting in a separate metric, but I'm just calling it out as a thing. A thing that looks like what we did in tracing is: should the SDK report the number of dropped exemplars? Because, again, I suspect we'll have to have a limit on exemplars for efficiency's sake. That one I feel more strongly about.
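A minimal sketch of that bounded-exemplar idea: a fixed-size reservoir that keeps a drop counter the SDK could report. The class name and the keep-first policy are assumptions, not the actual SDK design:

```python
# Exemplars are the one place the SDK can cap memory and still report what
# it had to throw away.
class ExemplarReservoir:
    def __init__(self, limit: int = 8):
        self.limit = limit
        self.kept: list = []
        self.dropped = 0  # reportable alongside the metric point

    def offer(self, value: float) -> None:
        if len(self.kept) < self.limit:
            self.kept.append(value)
        else:
            self.dropped += 1  # over the memory budget: count it, drop it

r = ExemplarReservoir(limit=2)
for v in (1.0, 2.0, 3.0, 4.0):
    r.offer(v)
print(r.kept, r.dropped)  # [1.0, 2.0] 2
```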
A: Are there other scenarios that are missing here, that weren't called out in this forum? And then, lastly: should the collector be reporting any of the manipulations it does on metrics and things that it sees? I think the most important thing, from the collector standpoint, is if it gets misaligned, or has to time-align.
A: We probably want a way to report that. But otherwise, if the user configures "I want to drop this label", that's not really something where we should say: hey, you configured it to drop a label, so we're going to tell you that you dropped a label. That doesn't seem quite as useful. But I'm just throwing this out here for open discussion.
A: This is the collection of things that I found. I think there's a lot of discussion, and kind of thought-churning, that needs to happen here around what we want to do and prioritize. Anyone feel strongly about any particular thing here that needs to get discussed relatively soon?
A: Yeah, I feel like there's going to be some cross-correlation there. I know, internally, we tend to use both, because it's, you know, spray and pray: which service will actually be able to take my error reporting and give me an error message? Because if I'm having trouble with metrics, maybe logging's still up and I can get that out the door; and if I'm having trouble with logging, maybe metrics is up and I can get that out the door.
B: There's also, in one of our backends on our side, the concept of durability. In other words, when you have these network failures and we start to, quote, log (either log or metric) these errors, you're going to run out of memory. So we actually store that stuff to disk, so that the next time you come back up, we actually have the information that we then send back out. And that may be on us.
A: Yeah, I think there's a proposal on the collector around this offline-buffering thing that is worth talking about from an SDK standpoint.
A: I'm of the opinion that the SDK needs to get data out of process as fast as possible, for resiliency's sake, because you need to get that information out of the process in case it crashes. If you're doing a lot of processing in-process and you have an actual crash scenario, specifically for logs, and you don't get the data out of process to somewhere else, you're dead; it doesn't make it.
A: You can't wait for shutdown or any of that kind of junk; you're literally crashing. So "get it out of process as fast as possible" is kind of my opinion around anything log-related, but that's because I was one of those hipster Java Chronicle people back in the day. For those of you who knew Chronicle, you're awesome; for those of you who don't, you should look it up, it was really good. The second thing around that: say I'm a metric consumer, okay, and I'm consuming metrics.
A: There's this notion of "I failed to grab a gauge because of some internal exception". I will now be missing a metric point, because we're a push-based model. However, the SDK knew that there was a failure and could report it, and I could consume that in the push-based model: I know there's a failure because the SDK is still operating. It hasn't failed; the application failed in some fashion, but the SDK is still running. Is it worth it to try to get that data point out?
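A sketch of that error-point idea, reusing the hypothetical flag from earlier; all names here are illustrative, not an actual SDK API:

```python
# When an async gauge callback raises, the SDK is still healthy and could
# emit a flagged, valueless point instead of silence.
FLAG_NO_RECORDED_VALUE = 0x1  # hypothetical, as in the sketch above

def collect_gauge(callback):
    try:
        return {"value": callback(), "flags": 0}
    except Exception:
        # The application-side read failed; push a flagged point so a
        # downstream consumer sees "error here", not just a gap.
        return {"value": None, "flags": FLAG_NO_RECORDED_VALUE}

def healthy():
    return 42

def broken():
    raise RuntimeError("backing resource unavailable")

print(collect_gauge(healthy))  # {'value': 42, 'flags': 0}
print(collect_gauge(broken))   # {'value': None, 'flags': 1}
```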
A
So
me
as
a
consumer,
can
see
this
failure,
data
point
and
and
process
it
in
the
same.
You
know
like
I'm
getting
these
time
windows
that
I
expect
of
metrics.
So
I'd
get
a
point
that
says
there
was
an
error
as
opposed
to
just
seeing
nothing
and
then
I
have.
I
can
either
assume
that
the
whole
thing's
down
or
I
have
to
do
like
more
investigation
figure
out
what
happened
as
opposed
to
just
getting
an
error
point.
I
think,
in
my
opinion,
that's
probably
the
most
useful
thing
to
talk
about
right
now.
A: Given the metric API/SDK, we can take that into the API/SDK discussions, possibly, around whether we want to get this error reporting out; we can provide something in the data model for that. To me, that's valuable, because it's information. If I log it, yes, that information can get to the user, but it also means that my downstream processing doesn't have that information, which makes things a little bit weirder in the downstream processing.
A: Other people's thoughts?
D: I agree with you, Josh. I think, of the open questions you listed, number two is a little bit different from all the others. If we remove number two for now, then all the other questions seem to me like the heartbeat for the metrics system; at Microsoft, people call these canary metrics: basically, the metrics that you will always get from the system.
D: Even if you don't have any custom metrics. These are things where I think it would be nice to have some data exposed by the SDK and the collector that people can optionally subscribe to. For example, the SDK could probably have something called otel.sdk.<something>, like built-in metrics, or something like that, and by default we're not sending that.
D
But
if
people
want
those
like,
like
course,
information
they
can
subscribe
to
this,
and-
and
this
would
also
align
with
the
other
like
like
tracing,
I
know
in
the
exporters
people
are
talking
about
hey.
If
I
got
the
exporter
stocking
data
there,
there
might
be
different
scenarios
like
the
sponsor
might
decide
to
drop
the
data,
because
network
is
not
available,
it
has
to
retry
later
or
the
exporter.
Just
get
a
very
firm
like
response
from
the
server
and
telling
the
data
doesn't
make
sense.
D: Yeah, so my proposal would be: we categorize these as heartbeat, canary, whatever, coarse metrics, and we can use the same model across ingestion, the SDK, and also the collector. And, coming back to your question two, I think the total number of measurements is not specific to sum. Imagine you want an average, and people want to do an average on top of average numbers: they will also need this total number of measurements.
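A hedged sketch of the opt-in heartbeat/canary idea: a few internal counters under a reserved namespace, exported only on subscription. The otel.sdk.* names are illustrative guesses, not defined semantic conventions:

```python
# The SDK keeps internal counters and only surfaces them for subscribers.
self_metrics = {
    "otel.sdk.exporter.sent_points": 0,
    "otel.sdk.exporter.dropped_points": 0,
}

def record_export(sent: int, dropped: int, subscribed: bool) -> dict:
    self_metrics["otel.sdk.exporter.sent_points"] += sent
    self_metrics["otel.sdk.exporter.dropped_points"] += dropped
    # Off by default; only surfaced for users who opt in.
    return dict(self_metrics) if subscribed else {}

print(record_export(100, 3, subscribed=False))  # {}
print(record_export(100, 0, subscribed=True))   # both counters visible
```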
A: If we add it to sum, we've effectively added it to gauge, because they use the same number data point. I don't know if I'm actually proposing that that's worthwhile to add, because, when it comes to sum and gauge, I really don't know if the number of measurements that went into it is going to be...
A: ...super significant in practice. Especially given that, when you're looking at metrics, there's real time, where you're kind of looking at the most recent data, and then you have this notion of downsampling, which happens for historical data so that you can actually look at your data over a period in a realistic time frame. And given that that happens so frequently, that we do it all the time, and that people aren't really keeping track of the number of counts that go into those downsamples...
D
There
yeah,
so
my
question
would
be
word
blueprint.
Average.
D: Let's see... and if people want to add something new, like a unique count or something, we'll encourage them to have yet another type instead of trying to leverage one existing type, right? Yeah, okay, then it sounds to me like a low priority, and I assume we think it can be an opt-in field; it's not going to be a breaking change.
A: Okay. For that, yeah, I guess the main question, then, is: what kind of heartbeat metrics do we want? Like, if...
D: ...if we cannot catch up with the data, and performance sucks, we're not able to tell how badly it sucks; so, basically, we're telling people we're screwed, but we don't know how far away we are from the correct state. Number two is when we're a little bit screwed, but we're still able to reserve some energy to calculate and set the expectation: we can tell we're dropping 90 percent of the data, or this is the total number that we got.
A: Yes. So, okay, for next steps here: I think we have two things to push on. One is the work that Victor is continuing with the histogram bit flag, and the second thing is, if we could, getting some of this discussion moved into the PR around metric data and the metric-data failure scenarios that we want to report.
A: What I want to know is whether or not we need additional information components in the data model and the protocol for this kind of observability, just to refine things before we stabilize the API/SDK. So, for anything the API/SDK wants to report here, I just want to make sure that we have a place in the data model to report it.
A: It's possible that we just report these as their own set of metrics, like the heartbeat metrics you suggested. But if we want any kind of component that's similar to what traces have, I'd like to have it here. And, at a minimum, I'd like to have an OTEP or a semantic convention or something for what these metrics will be and how we report them, so that it's clear to people that, yes, these will exist; yes, SDKs should produce them; and here's what they'll be.
A: Does that sound good? Okay, so next week, maybe, we can discuss the follow-on from that, and if anyone else has anything else, feel free to add it to the agenda. Thank you all; sorry we ran a little over. Thank you. Thank you, Josh. Thanks, Sarah.