From YouTube: 2023-03-14 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
B
Okay, let's start. I guess I don't see Trask here, but I will start with my item. Hopefully he will join later — usually he has a lot of stuff to, you know, do on his calls. So the first one is just a small for-your-information. It's a small PR, and it's basically just a clarification that Christian Neumüller sent, and the idea, pretty much, is that, as I said before, it's at the clarification level.

B
So basically he was mentioning that capabilities is just — basically, this will just be reflecting what we have in all the SDKs at this moment.

C
Cool, you can drive? Sweet. So this is a PR that's been out for a bit. It has a few comments, but I think it needs a bit more attention. So basically we're trying to provide clear guidance on what semantic conventions actually enforce, and the scope at which they enforce them.

C
So what this attempts to do is go in and say, like, specifically: here are the set of fields that semantic conventions will apply to, here's a set of things it doesn't apply to — and try to be more clear and specific about that. There's two areas of contention I think are worth calling out here.
C
Number one is, like, what does "enforce" mean? In, like, semantic conventions — or, conventions, they're not enforced, okay. Well, we enforce upon ourselves this notion of stability, so I went through yesterday and reworded everything to talk more about how stability is enforced and what stability means, and so it's now framed around stability.

C
So, for example, this is not attempting to provide stability for new values, just for names, right? It's providing stability for the names of things and, like, the expectations there. So I guess what I would say this is, like, you know: who's the audience of semantic conventions? Is it instrumentation authors, or is it, you know, consumers of OpenTelemetry instrumentation? So again, trying to clarify: the goal of this is, if I am a person writing alerts or dashboards or sophisticated tooling using OpenTelemetry semantic conventions —

C
— how should I think about what my data will look like, what will remain, and what can I expect to be there? And so this takes an approach of: let's make sure those tooling vendors — that provide default alerts, that provide metrics, that provide dashboards, that provide whatever — are very, very, very flexible, and then let's be more rigid with our own instrumentation. That's kind of the divide. I may have failed at some wording here, but I think those are kind of the two primary things.
B
Yeah, I see a lot of comments, but does anybody want to raise a specific point here?

B
Okay, in that case, let's please continue the discussion offline if there is no feedback at this time.

B
Okay, I don't see Trask yet — is Trask around? Maybe he's on vacation, I believe. All right, yeah, sorry, yeah, okay. I don't see you, Milan, either, and I am guessing nobody from the semantic conventions group — or the HTTP part, at least. ("I'm here.") So would you mind, like, giving, like, an update? I mean, usually Trask — you know what he does, you know, like: he brings a list of the PRs that are still being discussed.
C
That's — so I brought what I think is one of the most important ones. There's another thing that I want to raise; I opened a collector issue about this. This is around — again, I'm gonna diverge a little bit from HTTP stability and just talk about stability in general semantic conventions.

C
We're concerned about schema URL and the applicability of schema URL, so just as a heads up. Again — can I take notes and talk at the same time? Probably not; I can't take good notes, yeah. So we're doing kind of three things right now. One is we're starting to invest in making sure there are schema URL transformations, so we understand whether that schema URL can apply and whether or not we can rely on it for stability in semconv.
C
So Liudmila is working on the collector and trying to do translation in the collector. I am picking up and trying to do the same kind of thing in the SDK for Java, trying to make sure that we can do some sort of translation of what the schema URL requires there, and then we're gonna come back and make sure that that is a stable foundation for us, for semantic conventions, going forward.
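The kind of translation being discussed — upgrading data recorded under one schema version so it can be read under the next — can be sketched roughly as follows. This is a minimal illustration with a hypothetical in-code rename table; in reality the renames are declared in the schema file that the schema URL points to, and this is not the collector's or Java SDK's actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaTransformDemo {
    // Hypothetical rename set for one schema version bump. A real
    // implementation would load these from the published schema file.
    static final Map<String, String> RENAMES = Map.of(
            "net.peer.name", "server.address");

    // Apply the renames to attributes recorded under the older schema.
    static Map<String, String> upgrade(Map<String, String> attrs) {
        Map<String, String> out = new HashMap<>();
        attrs.forEach((k, v) -> out.put(RENAMES.getOrDefault(k, k), v));
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> old = Map.of("net.peer.name", "example.com", "http.method", "GET");
        // Renamed attributes appear under their new key; others pass through.
        System.out.println(upgrade(old).containsKey("server.address"));
    }
}
```

The point made in the meeting is that whether such transformations can be applied reliably end-to-end (collector and SDKs) is exactly what determines whether schema URLs are a usable stability mechanism.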
C
If we find out that it's not, then we still think we move forward; we're just conservative about name changes. We look more like the HTTP specification: we don't allow name changes, for example, without actually breaking the version number. So that's one thing. The second thing is that the ECS discussions are proceeding. There's a set of PRs around ECS convergence with OpenTelemetry, and I think the next step now is that we're reaching out to ECS to be like, hey, here are the things that diverge between OpenTelemetry and ECS.

C
When does it make sense to use what we have, and when does it make sense to use what you have, in some of these scenarios? Specifically, in OpenTelemetry we have net.peer and net.host; in ECS, I believe they have both server.* and client.*, and then they have source and target. And so there's an open question of which one is more friendly to users, to understand what the hell you're talking about.

C
If you're recording a span, you know, is it easier to understand that net.peer is where something came from, or is it easier to understand that, you know, client means I send it somewhere else and server means I'm taking it in? That's kind of one of the things. I can find the specific issue for you and put it in the notes when I'm not actively talking.
C
So that's number two. Number three — let me grab the projects real quick, because I want to make sure I get the most important one.

C
Oh — the exception-to-error change. So this is number 3198. Let me copy that in.

C
Yeah, there's also an issue that we found related to this. So the specification today actually states that every language SDK is required to document the layout of how errors show up when they log errors with the span API. So when you have that add-error event — or, I forget what the helper method's called, I think it's add error, add error event or something — record...

C
When you write recordException, the layout of that is required to be defined by language SDKs. However, we've never actually enforced that, so we actually don't have anywhere specified, like, what these exceptions look like — and that was another thing that was uncovered as part of this issue, that I think we should address.
C
ECS takes a similar approach to us. If you look at stack trace, it's just the stack trace of the error in plain text. So, yeah, they also copped out the same way we copped out. What I'd like, though — I feel like we should be documenting what we did, since we asked for SDKs to do so.

C
Yeah, like, I think part of this PR — or part of this thing — is the renaming to error, but then part of it is, I think we want to make sure that when we align with ECS, we actually move their definition of exception to match our definition of exception, which includes the SDK-specific "here's what I produce."

C
Yeah, let me — here, Carlos, I'm going to send you a link to this. So effectively — well, actually, you can click on it. Where were you?
C
Let me get you that link quick. So Jack has this link to recordException, which talks today about the exception semantic conventions. The exception semantic conventions are here — yeah, let me put it there. And if you look inside of this, right — so, what we have today — scroll down to, like, attributes and such — all right, so right now we call it exception instead of error. So what would happen as part of this PR is we change to error.

C
But if you scroll down to error.stack_trace, right — we actually specify that this is a string in the natural representation for the language runtime, the representation to be determined and documented by each language SIG. That is a difference from what ECS does: ECS is just a free-text string, you can put whatever the hell you want in there; we're more strict. What we don't have, and what I'd like to fix, is a location for every language SIG to document their exception format.
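For Java, the "natural representation for the language runtime" mentioned here is the JVM's own printStackTrace output captured as a string — the same thing the Java SDK records into the stack trace attribute. A minimal sketch (illustrative, not code from the meeting):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackTraceDemo {
    // Render a Throwable in Java's natural representation: the JVM's
    // printStackTrace output, captured into a String.
    static String naturalRepresentation(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    public static void main(String[] args) {
        String trace = naturalRepresentation(new IllegalStateException("boom"));
        // First line is "java.lang.IllegalStateException: boom", followed by
        // "at ..." frames -- the format a language SIG would document under
        // the proposal being discussed.
        System.out.println(trace.startsWith("java.lang.IllegalStateException: boom"));
    }
}
```

Exactly this kind of per-language format is what the speaker wants each SIG to write down somewhere discoverable.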
C
Yes, that's what I said. Like, C++ is a great example: they have a stable, you know, stable implementation of OTel. They may or may not have exceptions, but when they do, there should be, like, a document of what we provide, and it should be in—

C
No, we don't. What — okay, let me discombobulate my thoughts here. So, three things I'd like from this particular PR — or not PR, because it's an issue. One is people's thoughts around renaming from exception to error: is that an acceptable change? Comment on the bug. The second thing I'd like to do is get this semantic convention —

C
— marked stable, so that we can actually document these things and have it clear that we're actually providing a level of rigor above and beyond what ECS does. And I think it's important for us to document what exceptions look like, for tooling vendors.
J
So that will take time; we cannot just mark it stable. There are interop-story questions, like C++ calling back to Java: what do we do with the call stack? Do we have a mixed-language call stack, or is this per language? So it's a very complex problem, and ECS has been trying to solve that for a decade. I anticipate that it will take time.

C
If that's the case, then maybe the second thing would be: I want to get this document marked stable so we know how errors show up in events as part of the HTTP semantic conventions. Right, yeah — like, there are key pieces of OpenTelemetry that need to stabilize as part of semantic conventions, and recording errors in spans is one of them. That, I think, is just fundamental to our whole operation, right?

H
Yeah, and the other thing is, like, in the JavaScript one it said it's based on what V8 returns, which is fine for Node, because Node runs on V8 — but for a browser, not all browsers represent it in V8 format. So there is no way to guarantee that.
D
I was just going to bring up this exact point. Most of the specifications for JS are written targeting JS, but this particular one ignores the fact that JS is a language and not a runtime.

C
All right, so it sounds like the consensus here is, for now: maybe we can leave some documentation from SDKs, but that section should definitely not be stable. The only thing we could stabilize is that we call it stack trace.

J
Yeah, yep. And maybe we can give some recommendations saying the stack trace is a free-form thing, but for languages that can achieve XYZ, then this is our recommendation, they should be doing this — but we understand that there are cases where it's impossible for people to achieve it, due to the underlying dependency or implementation, and that's okay.
A
I have a question: what is the current status, Josh, with the proposal coming from the Pulsar guy — from the Pulsar person that—

C
That's — which proposal?

K
I've spoken with Asaf about it. He's looking for performance any way he can get it, and I have another meeting with him this week. I'm not sure that it was at all related to semantic conventions.
A
If you meet with him, maybe it's incredibly important to talk to him about it. So we have one problem compared with any other SDK in terms of deletion, given the fact that we have views. The problem here is: if somebody installs a view, a final time series — a final combination of attributes — may come from multiple different recorded combinations, which means, if you want to delete that, you need to keep track of which input combinations are generating the output combination. Does that make sense?

K
I think this is probably down into the weeds and not the general discussion topic. You know, deletion of time series is really not the primary objective in this case. It's really just to make it possible to have higher performance by not buffering, and so on.

A
Perfect. The reason why I brought this up is because at last week's meeting Josh complained that we should work more with these people, so I'm putting effort into this.
C
Josh, if you can bring back to this group, like, things in the spec that are missing on metrics, that we need to build — because I do think they have a very high-performance use case, but I do think that as we integrate with particular open source APIs, it's going to be common that we need those features. So, yeah.

K
Yeah, I will do that. I think my general statement for the group is: I don't think that what they're asking for is what they actually need, and we'll figure it out.
I
I added a couple of things here. So there's been a number of conversations that have opened — some in the Java projects, some on Slack, some in the specification — and, you know, all different things that Pulsar has expressed interest in. And, yeah, I guess what would be interesting to me is, like, what are the nice-to-haves and what are, like, the high-priority items that would really allow them to move forward with OpenTelemetry in a serious way.

K
So this conversation has turned into our wish list for things that metrics should be doing, I think — is that correct? I would be glad to circle back in a week with some more concrete lists. I don't see a synchronous gauge on your list, Jack, for example, and I think that's been my top request.

I
That's my top request too. It's definitely the most requested. These are the items that I've heard Pulsar —
K
— express interest in, specifically. Yeah, and as I was saying, I think what they're asking for is not quite what they need, and the streaming exporter idea is a way to get them the ability to end time series. If you can just stream out your metrics, you don't need to have a start and end — that's kind of what they're telling us, and we need to look more closely at it. I think the streaming exporter — you know, when I was talking with Asaf, it was around this idea that they have 20 million time series sometimes, and, you know, if they need to export those 20 million time series, then the overhead per series has to be extremely low in terms of allocation. And so I think — you know, sorry, go ahead. No?

K
You've described it pretty well: the idea that they have these 20 million time series means that they're going to try to stream them out without any copies, and that's effectively, I think, their most important request. And it sounds to me like it could require additional exporter support. I also questioned whether it could be done through the MetricProducer API. It just seems like this is a low-level question that is going to end up being as much an SDK performance question as it is, like, a spec-level question.
I
You have to reuse objects — so, you know, you have to mutate them and reuse them each collection — and you can get to zero allocation with the existing APIs that we provide, none of this streaming stuff. But, you know, there are some dangers, like—
K
That's good to hear; watch out for — I remember proposing effectively the same thing to Asaf, and I've got the same type of prototype myself, in my Lightstep Go metrics SDK. It uses a different export interface between the core SDK and the exporter, which allows the exporter to hand back the buffer of memory that it used the previous round, so that buffer is temporary.

K
But the last output from the collection goes into the same buffer that you reused, and therefore there isn't a great deal of copying. So we'll talk to him about that as well.
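The buffer-handback idea described here can be illustrated with a rough sketch (hypothetical interface names, not the actual Lightstep Go export interface): the exporter consumes the batch synchronously, so the SDK can reuse the same batch object for the next collection cycle instead of allocating a fresh one each round.

```java
import java.util.ArrayList;
import java.util.List;

public class BufferReuseDemo {
    // Hypothetical export interface: the exporter must finish with the
    // batch before returning and must not retain a reference to it.
    interface ReusableExporter {
        void export(List<long[]> batch); // entries are (timestamp, value) pairs
    }

    static final class Sdk {
        private final List<long[]> buffer = new ArrayList<>(); // reused every cycle

        List<long[]> collect(ReusableExporter exporter, long ts, long value) {
            buffer.clear();                  // reuse last round's backing array
            buffer.add(new long[] {ts, value});
            exporter.export(buffer);
            return buffer;
        }
    }

    public static void main(String[] args) {
        Sdk sdk = new Sdk();
        ReusableExporter noop = batch -> {};
        List<long[]> first = sdk.collect(noop, 1, 10);
        List<long[]> second = sdk.collect(noop, 2, 20);
        // Same list object both rounds: no per-cycle batch allocation.
        System.out.println(first == second);
    }
}
```

The danger mentioned in the exchange is exactly the contract in the comment: if the exporter retains or asynchronously reads the handed-back buffer, the next collection cycle corrupts it.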
K
Well, but that's not exactly what they're talking to us about. Like, you know, my Lightstep metrics SDK is super optimized for that case, because Lightstep was using statsd and I had to optimize it — perhaps more than I expected to, let's say. And I'm not sure I could swap in the OTel Go community SDK and make them happy right now; it's super optimized for statsd.

K
This is a case where someone wasn't happy with statsd and they went and built an in-memory metrics data structure, so this is a separate problem.

K
Bound instruments are still the answer to that particular type of problem, and I'm still looking at one to two or three percent of my CPU gone to metrics lookup, so it sucks. I could definitely fix that one to three percent with bound instruments, but it's diminishing returns, and that's an API change, so I haven't even started thinking about it.
I
Yeah, I got rid of bound instruments in the Java implementation, because the performance difference was down to just the lookups — there's no additional allocation from using bound versus non-bound, because I optimized them all away. So you—
K
I've recently pushed a change to my SDK that uses fingerprinting, so that there's no allocation to produce your map key, and it also now has an option to ignore collisions — so I'm not even going to test whether the attributes are equal when my fingerprint matches. And that's, again, to get rid of an allocation and to not pay the cost of comparing attribute sets. And it's still too high — still too much cost, in my opinion. So actually, it's a problem.
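The fingerprint-as-map-key idea can be sketched as follows (illustrative only — the hash choice and names are assumptions, not the actual SDK code): hash the attribute key/value pairs into a primitive long and use that as the aggregator lookup key, optionally trusting the fingerprint and skipping the attribute-equality check on a match.

```java
import java.util.HashMap;
import java.util.Map;

public class FingerprintDemo {
    // FNV-1a over the key/value strings: a primitive fingerprint, so the
    // hot path needs no composite key object per lookup.
    static long fingerprint(String[][] attrs) {
        long h = 0xcbf29ce484222325L;
        for (String[] kv : attrs) {
            for (String s : kv) {
                for (int i = 0; i < s.length(); i++) {
                    h ^= s.charAt(i);
                    h *= 0x100000001b3L;
                }
                h ^= 0xff; h *= 0x100000001b3L; // separator between fields
            }
        }
        return h;
    }

    public static void main(String[] args) {
        Map<Long, long[]> sums = new HashMap<>(); // fingerprint -> counter cell
        String[][] series = {{"topic_group", "g1"}, {"result", "ok"}};
        for (int i = 0; i < 3; i++) {
            // "Ignore collisions" mode: trust the fingerprint, never compare
            // the full attribute sets when it matches.
            sums.computeIfAbsent(fingerprint(series), k -> new long[1])[0] += 5;
        }
        System.out.println(sums.get(fingerprint(series))[0]);
    }
}
```

A real zero-allocation implementation would use a primitive-long-keyed map rather than boxing into `Long`, and would keep the sorted-attribute ordering stable; the sketch only shows the lookup shape.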
K
The problem is, we have runaway metrics usage, and people who haven't been thinking about the cost of their metrics for too long. But nevertheless, I think customers and users are always going to demand that we optimize our metrics SDKs beyond where we thought we would need to.

I
Yep. But, you know, backing up a step to address the point that — I forget who made it — I think it would be useful to come up with a general wish list of, you know, additional metrics enhancements that we've all been talking about but have punted on. Things like synchronous gauges and finishing exemplars, and a couple of other things, come to mind. If we could have some renewed focus on that, I think that'd be good.
C
Yeah, especially given that I expect all the user feedback is coming in now, from stabilization having hit and people really getting into the meat of it. So we should get that list out and prioritize. I think Pulsar is just a really good example of the kind of open source engagement we'd like to sponsor going forward. So if we can prioritize them a little bit, that'd be good — and get all the other lists in there too.

K
Yeah, I will set up something with Asaf in the Slack; he's asked to meet today. Bogdan, if you're available, we'll work on it.

A
Perfect — ping on Slack to find the time.
G
One of the things I wanted to ask — maybe Jack as well, this is going to be relevant to you — is the advice, or the hints, API, since we're talking about, you know, a punch list for metrics.

G
What I'm noticing from that PR that you have out is that we may actually need some API changes in the metrics implementation in Go, so I'd probably say that would be a high priority as well. I don't know if others think the same.

G
But adding it specific to an instrument kind does not exist right now; you can add an option or a parameter that would apply to all, like, synchronous instruments or asynchronous instruments. And so, yeah, I think we may need to go back and ask — I didn't see it changed — like, does it go down to instrument-level specificity? It's just going to make the API really loaded.
I
That's kind of the idea that was in my head. So, you know, if an API can be structured in a way where the hints can be specific to particular instrument kinds — great. But if not, then, you know, I don't think it's the end of the world for an API to allow you to specify, like, a hint for explicit bucket boundaries on a counter instrument, where it'll rarely if ever be applicable. It's just the type of thing that's ignored.
A
It can still be useful. For example, if you know that something could be a counter or a histogram in the backend — because, whatever, you may think some people may want to do the distribution and some may not — then, if somebody sets up a view with, say, an explicit histogram, without specifying the buckets, you can inherit the buckets from the hint, for example. You can do something like that.
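The inheritance idea just described can be sketched like this (hypothetical names — the hint/advice API under discussion was not finalized): a view that selects histogram aggregation but leaves the boundaries unset falls back to the boundaries supplied as an instrument-level hint, with the SDK default as a last resort.

```java
import java.util.List;

public class HintFallbackDemo {
    // Hypothetical: bucket boundaries attached to an instrument as a hint.
    record InstrumentHint(List<Double> explicitBoundaries) {}

    // Hypothetical view: histogram aggregation, boundaries optional (null).
    record View(List<Double> boundaries) {}

    // Resolution order: view wins, hint is the fallback, SDK default last.
    static List<Double> resolveBoundaries(View view, InstrumentHint hint, List<Double> sdkDefault) {
        if (view.boundaries() != null) return view.boundaries();
        if (hint != null && hint.explicitBoundaries() != null) return hint.explicitBoundaries();
        return sdkDefault;
    }

    public static void main(String[] args) {
        List<Double> sdkDefault = List.of(0.0, 5.0, 10.0);
        InstrumentHint hint = new InstrumentHint(List.of(0.01, 0.1, 1.0));
        View viewWithoutBuckets = new View(null);
        // The view chose a histogram but no buckets: inherit them from the hint.
        System.out.println(resolveBoundaries(viewWithoutBuckets, hint, sdkDefault));
    }
}
```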
I
So I think that's a strange example, because I think if you can represent something as a histogram or a counter, you should use a histogram instrument and assume that people want the distribution by default. And then, if they only want the count of it, they can downgrade to using a sum aggregation instead. And so, like, you know, I think it wouldn't be advisable to encourage users to use a counter instrument and include explicit bucket boundaries.
K
When it comes to defining my definition for synchronous gauges: take a histogram and turn the aggregation into last value. That's as close as it gets. Yeah — all a synchronous gauge is defined to be is the same as a histogram instrument where you've changed the aggregation to do last value. I've implemented synchronous gauge in my metrics SDK exactly that way: it's a hint that says, take my histogram and turn it into a gauge.
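The definition given here — a synchronous gauge as a synchronous instrument whose aggregation is swapped from histogram-style accumulation to last-value — can be shown with a toy sketch (illustrative only, not any SDK's actual classes):

```java
public class LastValueDemo {
    // One synchronous record() path, two interchangeable aggregations.
    interface Aggregation { void record(double v); double result(); }

    // Histogram-style: accumulates every measurement (sum shown for brevity).
    static final class SumAggregation implements Aggregation {
        private double sum;
        public void record(double v) { sum += v; }
        public double result() { return sum; }
    }

    // Gauge: keeps only the most recent measurement.
    static final class LastValueAggregation implements Aggregation {
        private double last;
        public void record(double v) { last = v; }
        public double result() { return last; }
    }

    public static void main(String[] args) {
        Aggregation sum = new SumAggregation();
        Aggregation gauge = new LastValueAggregation();
        for (double v : new double[] {3, 7, 5}) { sum.record(v); gauge.record(v); }
        // Same measurements, different aggregation: accumulation vs last value.
        System.out.println(sum.result() + " " + gauge.result());
    }
}
```

This is the sense in which "take my histogram and turn it into a gauge" is just an aggregation swap on an otherwise identical synchronous instrument.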
A
Correct, but based on all the examples that I've seen in the issue for synchronous gauge, they're kind of histograms. And based on the logic that Jack had — which is, okay, if something is either a counter or a histogram, use a histogram, use the more generic one — then the same argument can be made for gauge and histogram: use a histogram all the time.

K
I think I have a little bit of a philosophical take on that. If you take a look at the histogram events — and I want to make a connection to the gauge histogram, which is also out there for us — take, let's say, a delta-temporality histogram over 10 seconds, and look at all those observations. Take that same set of observations and make a gauge histogram out of it: you're going to get the data structure that Prometheus describes as a gauge histogram, by removing the time dimension.

K
That's how it's like a gauge to me — in the sense that every one of those histogram measurements could have been a gauge, and you're just saying: I don't want the expensive histogram, I only care about the last value. I don't know if that was a circular definition, but it's the one I think about.
I
Well, so, like, one reason a gauge is not the same as a histogram is because with gauges you can't do spatial re-aggregation; with histograms, you can. So that's, like, a clear difference between them. And, you know, if I'm thinking about a use case for a synchronous gauge: imagine you have a thermometer and you're synchronously receiving readings from this thermometer.

I
You're pushing those readings to your synchronous gauge rather than asynchronously observing them. So, like, I think the use cases for a synchronous gauge are the same situations as for an asynchronous gauge, but, you know, rather than observing something, you're receiving measurements synchronously.
C
So specifically, I want to back up what Jack's saying, because the Java Flight Recorder hooks in Java are a good example. Java Flight Recorder is event-based: you basically say, hey, give me events, and I'm going to record them, right? When we want to record gauges today, we actually have to create and allocate our own memory, and then allocate an asynchronous gauge to read this memory that we created. The whole point of the metrics API —

C
— is that it does it for us with synchronous things: allocate this memory for me, for my synchronous gauge, and let me just fire the event when I get it, so I don't have to do that extra work. Just from the standpoint of, like, integrating with JFR, I think that's a useful use case, and, man, it would have been nice for that implementation. Perfect.

A
Perfect. Josh, this is a good example, because if we are optimizing for that, I would say that then we need to support push counters, for example. Like, let's call them push, not synchronous — but let's assume you have a push counter: somebody pushes you a counter value. Yes.
K
This is, like, a whole OTEP worth of discussion, and we're just, like, loosely having it right now. I thought we were gonna wait till we had a 1.0 to talk about synchronous gauge and API hints, and an OTEP is the vehicle to do that. I want to stop this conversation now, personally, because Asaf joined us to talk about Pulsar, and we're never going to finish this conversation about synchronous gauge. It should be written down in the discussion issue.

K
So, if I may turn us back to the prior bullet point: we were sort of freestyle talking about what it is that Apache Pulsar really needs and really wants, and I will now introduce Asaf Mesika — I don't know your name — who's been working on this for a while. I'll pass it to you and we can start here.
E
Okay, so I've been working on integrating OpenTelemetry into Apache Pulsar for, I think, the last eight months — okay, because Pulsar is quite a huge code base and OpenTelemetry has quite a steep learning curve. But basically, I would say that for Pulsar there were two major issues that I wanted to solve. One: Pulsar has, like, four different, I would say, metric libraries, and I wanted to consolidate them into one.

E
The second is something not typical: it means that for metric purposes you have many attributes. So today — PayPal users today, for example — use something that can range from 30k topics in a single broker up to 100k topics. And also, the community is now hard at work on making Pulsar — a single broker — support up to one million topics per broker.
E
So even at 30k topics, it's a problem. So what I thought about doing, basically, is introducing a new concept called the topic group — so minimizing it from topics to topic groups. Topic groups are supposed to be normal cardinality: so if topics is 100k — or let's say you have 30k topics — then topic groups should be something like a thousand, maybe 10k at most; that's, like, a reasonable cardinality you can handle.
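The cardinality reduction being described can be sketched as follows (hypothetical grouping rule — not Pulsar code): map each topic attribute to a topic-group attribute before aggregation, so tens of thousands of topics collapse into a bounded set of series.

```java
import java.util.HashMap;
import java.util.Map;

public class TopicGroupDemo {
    // Hypothetical grouping rule: everything before the last '-' is the
    // group, e.g. "orders-eu-17" -> "orders-eu". Real rules would be
    // configured by the operator.
    static String topicGroup(String topic) {
        int i = topic.lastIndexOf('-');
        return i > 0 ? topic.substring(0, i) : topic;
    }

    public static void main(String[] args) {
        Map<String, Long> messagesByGroup = new HashMap<>();
        String[] topics = {"orders-eu-1", "orders-eu-2", "orders-eu-3", "billing-0"};
        for (String topic : topics) {
            // Aggregate at group granularity: 4 topics -> 2 series.
            messagesByGroup.merge(topicGroup(topic), 1L, Long::sum);
        }
        System.out.println(messagesByGroup.size());
    }
}
```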
E
Okay — so at first, by the way, I thought that I was gonna do the aggregation of the attributes, from topics to topic groups, completely dynamically. And then I realized after a while that it's not good, because then it means that a count for a topic group can suddenly go down — so it's not a counter anymore. So I've decided to give up that idea and simply have the topic groups be constant, and so that changes what I needed. At the beginning, I mentioned that I needed streaming.

E
The observer pattern for doing streaming — because I wanted to be able to do aggregation on the fly without the impact of getting a list of metric points and then producing a new list of metric points; that would kill the performance. So that specific one I don't need anymore. But in terms of performance, there are two things that still remain. One: the current implementation that Pulsar has today —
E
— essentially implements everything, including the write back to the HTTP response, in Pulsar's own format. So basically, it's almost zero allocation: it simply reads the data from variables stored in memory — that's the current snapshot — and writes it directly into a Netty byte buffer, which is off-heap, and then writes it into the HTTP response, to the socket, essentially. So it's zero allocation, and, as you can imagine, the Pulsar broker is super sensitive to latency — the latency is measured in a few milliseconds.

E
Okay. And they have summaries, because it's a very old project. Summaries are implemented using a streaming sampling library called Apache DataSketches — I don't know if you've heard about it. So even that efficient implementation of sampling costs too much CPU, and they're happy to move to histograms, because even the five percent CPU that you can see in the flame graphs costs too much. So it's very sensitive to garbage collection.
E
The second thing is — I think I also mentioned it to Jack, and I will open an issue — once I removed the aggregation part, aggregating from topics to topic groups, I'm still left with the filtering. The filtering is essential, because even if you've aggregated to topic groups, sometimes you need to be able to dynamically say: okay, this topic group is problematic, I need to see a few of the topics in here — okay, so open that group — and you need to do it dynamically.
E
This ability means that I need to be able to do a push-down predicate: I can give, like, a function that says, okay, given an instrument name and the attributes, I can decide if I want that or not, and it will simply not be reported out into the metrics — basically, it will not be reported from the MetricProducer, essentially. So that is the second part. It's an internal API today, but I plan to do it as a wrapper on top of MetricProducer, as a workaround, until I have the push-down predicate.
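The push-down-predicate wrapper being described can be sketched like this (hypothetical minimal types — the real MetricProducer API differs): a wrapper drops series whose (instrument name, attributes) the predicate rejects, before they ever reach the exporter.

```java
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;
import java.util.stream.Collectors;

public class PredicateFilterDemo {
    // Hypothetical stand-ins for a metric point and a producer.
    record Point(String instrument, Map<String, String> attributes, long value) {}
    interface Producer { List<Point> collect(); }

    // The wrapper: applies the push-down predicate during collection.
    static Producer filtered(Producer inner, BiPredicate<String, Map<String, String>> keep) {
        return () -> inner.collect().stream()
                .filter(p -> keep.test(p.instrument(), p.attributes()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Producer raw = () -> List.of(
                new Point("pulsar.messages", Map.of("topic_group", "orders"), 10),
                new Point("pulsar.messages", Map.of("topic_group", "billing"), 4));
        // Predicate: only the problematic group is reported.
        Producer verbose = filtered(raw, (name, attrs) -> "orders".equals(attrs.get("topic_group")));
        System.out.println(verbose.collect().size());
    }
}
```

Note that a wrapper like this filters at collection/report time only; as the discussion below points out, the more expensive question is filtering cost on the metrics update path.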
I
Just a quick follow-up question on the idea of filtering. You know, would it be sufficient if a metric reader could define some sort of predicate and say, hey, only read the data from the meters — from the scopes — that are interesting to me, and only read the specific attribute series that are interesting to me? Would defining that type of API be sufficient to filter down, or do you need to, like, dynamically change which attributes are retained, you know, for instruments, at runtime?

I
So, by default, record at the most granular level — record everything — and then, most of the time, filter out and only get the higher levels, the higher-order aggregations. But sometimes, you know, dynamically change it, and dig in and collect the lower-level details. Yes — those have been aggregating the whole time.
K
Exactly — it sounds like a request for dynamic views, in a way. I mean, we already have views that can express filters, and I'm not sure there's any difference between what he's describing and the common case. There will be a filter down to, say, the group level, and then once in a while you turn on the verbose logging mode and record topic-level metrics — is that roughly what I'm hearing? If you had dynamically configurable views, I suppose you could get that behavior.

K
What I'm actually hearing — an important takeaway for me — is that you have, like, a really intensive benchmark that you're describing, and I would rather hear it described from the application level as simply a metrics benchmark, where you have, you know, a million time series and this many attributes and this many updates, and you're going to measure the cost of the metrics library, including the update code path as well as the collect code path.
K
You're saying to me that both of those want to be as close to zero allocations as possible, and also that you want to dynamically change the views. For me, the most important part here is that the cost of applying the filter in the metrics update path is extremely expensive, and probably the thing that's most sensitive in this discussion.

K
Could we, as a group, make a standard metrics benchmark and start to look at, you know, measuring these things? When I first implemented views, my attitude was: they're too complex, they're just here for correcting problems. But what you're saying here is that they're also for performance, and now I need to get my view-application code path — the one where I filter attributes — to be as fast as the most performant code path. That's what I'm actually hearing here, but that's complicated. I'm not saying it's impossible, no.
A
Yeah — so, Asaf, summarizing the chat: you want to aggregate at fine granularity, but report either as a group or verbose, based on certain conditions, and the condition is: by default, report as groups; if something special happens, report verbose for the specific groups. Is that a correct summary of what you are suggesting?

A
Let's put it this way — let's talk about specifics. So we want to have the aggregation at the topic level, but reporting can be done at group level or verbose, based on input from somewhere else — an external trigger. Correct?

A
Forget about CPU usage for a second; philosophically, I'm trying to understand the problem, and then we can propose solutions. But if I have the fine grain, right — the fine granularity — I can then derive the groups. Yes?
E
Yes, but none of this feature exists at all today, not even remotely, because when you change it dynamically — that's the part. I thought, even if I have dynamic groups — dynamic views, sorry — it's almost impossible to do it in views, because it keeps turning on and off, and—

A
You'd change that — that's what I'm trying — I think what you need is a different thing. I think you need your own smart view implementation. What you will do is: you'll aggregate at a fine granularity, at the topic level, and then the view gets all the inputs and also controls the output. So then, if you build your own view here, on top of the other view that's doing the aggregation — you're not going to do the aggregation — then you can control it.
E
Yes, that was my intention in the beginning, and I wanted to put it, as is, in between them: getting the metric producers, then streaming it, then doing the aggregation, then filtering, and then exporting. But again — because I saw it's quite complicated and I don't have the streaming ability at all — I thought, okay, let's reduce that: let's remove the aggregation for now. I would have a fixed aggregation level called topic groups, and that would be the solution for the aggregation part. The filtering part I have to have, no matter what.
K
What you're describing, Josh — you can only record at the fine level of granularity and define two views: one which retains all the attributes, and one which, you know, essentially does the coarser grain and drops some attributes. But, like, you know, either way, at export time you still don't always want to export all of your millions.

K
Slack me if you're interested in talking with Asaf in this type of way at some point in the next week, I think, so that we can have a little bit more focused presentation in the spec SIG. And we have to talk about exemplars, because I really don't want to see people computing two views when really the aggregate is available — and when you have the exemplars, you can see the topics and you can estimate the topic-level metrics.