From YouTube: 2022-06-24 meeting
Description: cncf-opentelemetry meeting-2's Personal Meeting Room
D: Hello. I haven't quite mastered the John Watson approach of eating during the meeting, so I'm scrambling to eat before the meeting.
D: Cool. Well, let's jump into this one.
D: So, yeah. Oops, I'm sorry. I had asked Anurag for his thoughts when we chatted with Matayish on Monday. Well, I guess technically it's after midnight there, so it's really Tuesday pretty much everywhere except here. So we were chatting about whether log processors should just go ahead and mimic span processors. We weren't sure.
D: And for the most part I think they can, and it also probably saves an extra copy, because you can construct the immutable data structure right away.
A: Yeah, I think it probably depends on which fields on a log record you want to make mutable. Span, or ReadWriteSpan, has a lot of its surface area mutable, but not all of it.
A: Well, I think the extent to which you can just wrap kind of depends on whether each of those fields... you know.
B: The user being able to mutate things themselves, like in the API, is the main reason why we have to copy in the span case: the user might keep trying to mutate it, or something like that. With logs, if log processors are the only place they can be mutated, regardless of what's exposed on that ReadWriteLog, then it should be much easier to just wrap the log data, because it won't be mutable anymore.
A: Yeah, I brought this up in the log SIG this week, and I think what Anurag suggested makes a lot of sense. The log SIG, I think, had a similar feeling. We were trying to remember why log data is immutable in the first place, and I think that was kind of just a copy-paste thing from the span processors, and, you know, the onEnd aspect of the span processor. But yeah.
A: I guess the realization is that without having logs mutable, the whole idea of a processing pipeline falls apart. It's not really useful; you can't do anything useful in the processing pipeline except pass it to an exporter immediately. And wrap it, I mean, like...
A: Yeah: pulling things out of context, stamping on baggage, trimming attributes that you don't want. Or, I guess you can't... maybe that still wouldn't be possible. Actually, maybe that would still be the type of thing you'd have to do in an exporter. Presumably we want to only have setters for primitive fields, and additive methods for complex types like attributes: so you can add an attribute, but you can't remove attributes; you can set, maybe, the timestamp, but not necessarily... well.
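That setter policy (plain setters for primitive fields, additive-only methods for complex ones) could look roughly like this. The class and method names here are illustrative stand-ins, not the actual OpenTelemetry log record types:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the policy discussed above: primitive fields get
// setters, complex fields like attributes are additive-only (no removal).
// Not the real OpenTelemetry SDK type.
final class MutableLogRecord {
    private long timestampEpochNanos;
    private final Map<String, String> attributes = new HashMap<>();

    // primitive field: plain setter
    void setTimestamp(long epochNanos) {
        this.timestampEpochNanos = epochNanos;
    }

    // complex field: you can add an attribute, but there is no remove method
    void addAttribute(String key, String value) {
        attributes.put(key, value);
    }

    long getTimestamp() {
        return timestampEpochNanos;
    }

    Map<String, String> getAttributes() {
        return Collections.unmodifiableMap(attributes);
    }
}
```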
D: Oh yes, this one I kind of wanted to... so, Anurag, the interesting thing here is that in autoconfigure you can both set the sampler and create one of those sampler wrapper things, and those two pieces of autoconfigure...
D: ...don't work that well together, in that with setSampler, the wrappers don't work on top of the one that you set. And I was just trying to think: is that a common thing across other things? Say, for exporter: you can set the exporter; also, do we not wrap?
A: You can't really set the exporter, because exporters are an argument to a batch span processor or a simple span processor, and you add one of those.
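A minimal sketch of that wiring, with toy stand-in types rather than the real SDK classes: the exporter is a constructor argument to the processor, and the processor is the thing that gets registered, so there is no separate "set exporter" step.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy model of the wiring described above. Names are illustrative,
// not the real OpenTelemetry SDK classes.
final class ToyBatchSpanProcessor {
    private final Consumer<String> exporter;
    private final List<String> batch = new ArrayList<>();

    // the exporter is supplied when the processor is constructed
    ToyBatchSpanProcessor(Consumer<String> exporter) {
        this.exporter = exporter;
    }

    void onEnd(String span) {
        batch.add(span); // buffer finished spans
    }

    void forceFlush() {
        batch.forEach(exporter); // push the batch through the exporter
        batch.clear();
    }
}
```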
D: I see, right, right. I think, because what I'm doing in our distro is, like, I set it to none, and I set the exporter.
A: They could do the wrapping thing, where you take an existing sampler and return a new one by wrapping it or just replacing it. And then there are the configurable sampler providers: you provide a named sampler, and you provide a configuration that says, by default, use this named sampler.
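The two customization paths being contrasted here can be sketched like this. The interfaces are simplified stand-ins for the real autoconfigure SPI, not its actual signatures:

```java
import java.util.Map;
import java.util.function.UnaryOperator;

// Simplified stand-ins for the two paths discussed above: a named sampler
// provider that configuration selects by name, and a customizer that wraps
// or replaces whatever sampler was resolved. Not the real SPI types.
interface ToySamplerProvider {
    String getName();       // the name that configuration refers to
    String createSampler(); // builds the sampler (a String stands in here)
}

final class ToyAutoConfigure {
    static String resolveSampler(
            String configuredName,
            Map<String, ToySamplerProvider> providers,
            UnaryOperator<String> customizer) {
        ToySamplerProvider p = providers.get(configuredName);
        if (p == null) {
            throw new IllegalArgumentException("unknown sampler: " + configuredName);
        }
        // customizers run on top of whatever the configuration resolved
        return customizer.apply(p.createSampler());
    }
}
```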
B: Yeah, for that specific use case, with the AWS distro. And at the same time, I mean, every SPI does run into ordering issues at some point, so it's not like that means we shouldn't implement ordering stuff. But if the specific user doesn't care, then yeah, the configurable sampler, and defaulting to it, would be the more pragmatic thing to do.
D: Is there a way to avoid it if you don't use the tracer provider? I'm trying to think what the general rule would be for avoiding sort of shooting yourself in the foot, if you don't use the tracer provider.
A: Yeah, this deterministic trace sampler just seems like a great candidate to be a sampler provider. It's this custom implementation that they provide; they want to make it their default. But do they want to make as strong a statement as saying: not only is it the default, but we're going to make it really hard to use anything but this? And I don't think that's what they want.
D: Yeah. So, I mean, does it make sense to put a Javadoc on this guy to sort of recommend against using it via autoconfigure?
B: Oh, all right. Because it's not only setSampler; it's anything you can set, really, and we're not going to add that to every single method on the tracer provider. But on this method we can document the danger of overriding that, especially if you're a distro without super strong opinions. Which, I mean, some people might... I doubt Honeycomb wants those opinions, but other people might need the opinions and would want that.
A: Oh, no, that's what I was just thinking about: did we make a mistake by having some of these be setters and others be adders? But let's let it become a problem before we worry about it. Yep.
D: Oh, they did already; the users are on top of it. And I think Jack... let me bring up Jack's comment.
A: I just added this because of a lack of other topics. I got Josh Suereth to review this; pretty happy about that. It is the API addition, though, so I just want to make sure that Anurag and John are on the same page.
D: Do you want to share and walk us through the API changes?
D: I agree that would be better, as opposed to the customizer.
A: All right. So the new API addition is on this metric reader interface. A metric reader is a thing that reads metrics, right, and the two primary types of readers are push readers and pull readers. There's only one pull reader, which is Prometheus, and it's currently not possible to implement your own other pull readers. On the push reader side there's the periodic metric reader, which reads on an interval and pushes the metrics to an exporter that you configure with it. And the idea behind this new addition of getDefaultAggregation is that readers can influence the default aggregation for each instrument type. The default aggregation is the aggregation that is used if there are no views configured for that instrument. So, say you have a sum, or a counter instrument: the default aggregation is always a sum. That's what it is by default, and if you configure a view you can override that aggregation, but by default it is a sum.
A: The main use case for something like this is histograms. Say you want to change the default aggregation for histogram instruments away from explicit bucket histograms, which are not great for a variety of reasons; one of them is that the default bucket bounds may not be where your data is recorded. So you might want to choose exponential histograms by default instead, and that's why you'd want to change the default aggregation for a particular instrument type. And the reason this is different from views... the question that kept going through my mind when this was originally proposed is: why can't I just use views to do this? Why can't I just configure a view that says, for all histograms...
A: Let's say you want to change all histograms to be exponential histograms instead of explicit, and then you want to select one histogram and change its description. The way that you might do that is... let's see, SDK here, let's just get a little place where I can write some code. You might do something like this: you'd say builder.registerView, and you want a view that changes all histogram instruments to be exponential. So you configure a view like this: you have your selector, you select all histogram instruments, and in your view you say we're going to change the aggregation to be exponential histograms. And then let's say you want to add an additional view that changes the name of one histogram, or changes the description of one. So you say, hey, register another view, and this time I want to select instruments...
A: Well, so here's the way this would work when two views match an instrument: it's not like these add together. You get two distinct metrics, because each view that matches an instrument produces a distinct stream. So you might think that you're saying, hey, all histograms, or the histogram named foo, should be an exponential histogram and should have this new description. But that's not what you would get: you'd get one exponential histogram called foo with its original description, and one explicit bucket histogram with the new description. What happens to...
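That matching rule (each matching view yields its own stream; the default applies only when nothing matches) can be modeled in a few lines. The types here are illustrative, not the SDK's:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy model of the rule described above: every registered view that matches
// an instrument produces a distinct stream, and the default view is used
// only when no view matches. Names are illustrative, not the real SDK.
final class ToyViews {
    record View(Predicate<String> selector, String aggregation) {}

    static List<String> streamsFor(
            String instrument, List<View> views, String defaultAggregation) {
        List<String> streams = new ArrayList<>();
        for (View v : views) {
            if (v.selector().test(instrument)) {
                streams.add(v.aggregation());
            }
        }
        if (streams.isEmpty()) {
            streams.add(defaultAggregation); // default only when nothing matched
        }
        return streams;
    }
}
```

With a match-all exponential view plus a name-specific view, the instrument foo yields two streams, which is exactly the surprise being described.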
A: So the first view to match replaces the default. If no views match, you use the default view; as soon as one or more views match, you get rid of that default view. And that's actually a really good point, and a good way of describing what this does: this changes the default view. So this is the case of what happens, the aggregation that's used for this instrument type, when no views match. What is the default aggregation? It depends on your instrument type. But the default method there isn't passing the instrument type along, so this aggregation here is kind of special, in that, you know...
A: So effectively, what you would do once you have this change is, instead of registering a view that would change all histograms to be exponential, you would just say, when you add a reader, like your periodic metric reader, at the metric exporter level: you would configure your exporter to say, hey, I want exponential histograms by default, for the OTLP metric exporter. And that would just mean that the default view for all histograms would be exponential, and then the subsequent views that you register would work as intended.
A: Does the default aggregation work... oh, I see what you're saying. Each reader has its own default aggregation. Right, right, okay. And I didn't make this entirely clear, because I think in this PR some of the methods are missing.
A: I haven't extended it to the OTLP metric exporters yet to allow this, but that's kind of the basis of this idea, and why it is included in the spec. How this will carry through into useful stuff for users is that there's going to be a... well.
A: There is now. So there's a spec change, a config property, that says: for the OTLP exporter, change the default aggregation for histograms from explicit bucket to exponential. So: metrics SDK, exporters, OTLP, and here is the new property that was added as of, like, today. This will provide a really easy mechanism for users to flip on exponential histograms and all of a sudden have histograms that automatically have density around the range of measurements being recorded, which will be very cool.
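The property isn't named in the recording; if it is the one that landed in the spec around this time, enabling it would look roughly like this. The property name and accepted value here are an assumption and may differ by spec and SDK version:

```shell
# Assumed spec property for the OTLP metric exporter; verify the exact name
# and value against your SDK version before relying on it.
export OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION=EXPONENTIAL_BUCKET_HISTOGRAM
```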
D: Back to the new method on the metric reader: why is this, like, a five-way function, like the others? It is a five-way function.
A: On line 40 still? Sorry... oh, sorry. Okay, so the key thing here is that a metric reader now extends this interface, which is DefaultAggregationSelector.
A: This is a simple functional interface that takes in an instrument type and returns an aggregation. That's the critical piece; this is the five-way function that metric readers must implement. The default for this, this defaultAggregation here, is a special thing: we go and look inside this default aggregation and we see the five-way function down in the weeds. It's saying: for counter, up-down counter, observable counter, and observable up-down counter, use the sum aggregation; for histogram, use explicit bucket.
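That five-way mapping can be written down as a small functional interface. The enum and return values here are illustrative stand-ins, not the exact OpenTelemetry SDK signatures:

```java
// Sketch of the default-aggregation selector described above: a function
// from instrument type to aggregation, covering the five instrument types
// mentioned. Not the real SDK types.
enum ToyInstrumentType {
    COUNTER,
    UP_DOWN_COUNTER,
    OBSERVABLE_COUNTER,
    OBSERVABLE_UP_DOWN_COUNTER,
    HISTOGRAM
}

@FunctionalInterface
interface ToyDefaultAggregationSelector {
    String getDefaultAggregation(ToyInstrumentType type);

    // the "five-way function": sums for the counter-like instruments,
    // explicit bucket histograms for histograms
    static ToyDefaultAggregationSelector getDefault() {
        return type -> switch (type) {
            case COUNTER, UP_DOWN_COUNTER,
                 OBSERVABLE_COUNTER, OBSERVABLE_UP_DOWN_COUNTER -> "sum";
            case HISTOGRAM -> "explicit_bucket_histogram";
        };
    }
}
```

An exporter that doesn't support histograms could then override just the HISTOGRAM case and delegate the rest to getDefault(), which is the down-sampling idea that comes up a bit later.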
A: So that's the same interface that the readers have to implement. Also, users don't actually implement the reader interface; there are only a couple of internal implementations for now. Where this impacts users is that their metric exporters also have the opportunity to influence this, so at the exporter level you can choose what your default aggregation is, by instrument type.
A: Maybe you know that your histograms need to be explicit bucket histograms with a certain default set of buckets that are different than normal. Or maybe you don't support histograms at all and you just want to return, like, a sum instead, so you kind of want to down-sample that data and lose that.
D: This is very interesting to me, because we have a specific set of histogram buckets that our backend likes, prefers. That's perfect, so...
D: Cool. Oh, Splunk... it seemed like maybe Splunk had an all-hands or something, because we didn't have our Splunk friends on the call, so we did not talk about this. No... yeah, I wanted to explore it, even if it's just a... so yeah, we were talking about the attributes extractor, which I agree with, for... yeah. So, kind of: whether we're scoping this, whether there's a need for it outside of the Java agent.
A: What was the problem with using an SPI? Was it that they want to have this in the user's code, but that SPI would have to be loaded into the agent's class loader?
D: Because you can create a Java agent extension today and hook in a span processor, and I'm assuming that you could then do an onStart, and that would work for you also.
A: Yeah. So I think Laura was mentioning something about some problem with that, but the context work...
D: Okay, yeah. So I think it's all doable; whether it's convenient or not is another story, which was kind of why I was hoping... like, yeah, I wanted to chat through it to understand. So I'll follow up with some more questions, just to understand what exactly we're trying to solve here. And then my main thought was: if we did add something, it would be, like, a convenient way for users to do it from their application code.
D: If we wanted... yeah, that was kind of the impression I got from Laurie: basically, we make it really easy to customize server spans, because you do Span.current() in your application code, and that's real nice, but there's nothing...
B: ...instrumentation can't be customized programmatically, like, if they could add attributes extractors. And it seems like they should be able to; that's probably a feature that they would want at some point, but we have zero way of doing it. It probably just needs some thought on how best to expose that.
A: Yeah, okay. So that kind of seems like a similar problem to just making it easier to implement SPIs that customize the configuration of the SDK. You can make it so the instrumenter API is extendable in the user's application; but couldn't you, in a similar way, make it so that in the application code they have hooks to customize the SDK?
A: You know, with the autoconfiguration customizer: if they could provide an instance of that, which the agent would detect and apply during configuration, then they could add their own span processor and effectively do all of the extension stuff within their application, instead of...
B: ...customizing instrumentation is supposed to be a bit easier, because the instrumentation is in the user's app, while the SDK is in the agent class loader, even with the shading. We could do runtime shading or something. So I suspect customizing instrumentation is supposed to be easier, while customizing the agent is supposed to be harder, just by the way that we isolate those two.
D: Either that, or... I mean, because of the whole shading and safety stuff, it makes it really hard. I mean, if we took some of that away and injected, like, the instrumentation API for real in there, or didn't shade the instrumentation API, or maybe if we could detect whether the instrumentation...
D: Yeah, the other thing about that request: the attributes extractors are nice, but would take us a lot of time to stabilize. When the request is the real library request object, that's nice, and the user can access that. But if...
D: ...a processor is a little bit better there. It's more limiting, because you don't get the request object, but the user could potentially, if they wanted it, put it in the request and pull it out.
D: And then we talked a good bit about JFR profiling. I'm interested in kind of feeling out vendors for their interest in sharing sort of a JFR recording management utility: starting and stopping recordings, configuring what the recording configuration is, what events it's going to emit. And then, because there are...
D: ...different Java-version-specific details about how to do that, and we have this project that kind of abstracts that, and I'm imagining other people have similar things. So, potentially upstreaming that, or something like it, to the contrib repo, also as a foundation for when there is an OpenTelemetry profiling...
D: ...format or protocol. At this point it would just manage producing the files; you can get the files and do whatever you want with them for your backend service, but in the future we would put in the OpenTelemetry-specific stuff. And then, also, I was kind of interested in...
D: ...which I think is pretty much implemented here already. This is kind of what I know to be the standard solution to this, but I'm not sure what the Splunk folks are doing.
D: But we talked about moving this over to contrib anyway, and maybe pulling out this piece, which I think is useful independently. I'm not sure if anybody's... this one seems less useful.
D: And... Bruno was on, and I picked Jack's brain for lots of New Relic profiling details, and...
A: Do maintainers... do TC members have the ability to just merge to any repo in the project? Yeah, yeah.
A: What time is it? It is 4:45.
D: My personal policy is: one time, you get a free pass for one.
D: That is one thing I have completely, like, not worried about there.
D: Explaining the Java agent project to our intern, and how it's sort of... it used to be something that all the vendors would invest a lot of time and engineering power into, building a Java agent: New Relic, Dynatrace, AppDynamics, everybody. And now we have this sort of shared resource, and they were very concerned that, you know... they thought that it was very sad that people, you know... it was like...
D: There are always so many problems; it's just always about shifting resources, shifting over to something different, something new.
A: Yeah. Despite technology seemingly improving, there seems to be an ever-increasing demand for more and more software developers.