From YouTube: 2022-06-17 meeting
Description: cncf-opentelemetry meeting-2's Personal Meeting Room

B: Now I have this crappy Chinese webcam that I bought at the beginning of the pandemic, which I've had perched on the top of my monitor. But my monitor is a lot taller than it used to be, so I'm trying to figure out where I can put the camera. It's either way too high up or too low now. So anyway.
A: Well, maybe not. If he really wants concurrency, and he doesn't want the batch span processor to block, then he wants to return a CompletableResultCode of success to allow it to, you know, continue. Yeah, I mean.
C: If you want to still use the batch span processor, right. Yeah, yeah. Hey, that sounds less correct than a concurrent processor, though. Like, for example, a simple span processor is already concurrent, because that doesn't wait for anything. And so you can imagine just having a small batching layer and a simple span processor, and then that's a batching concurrent processor, right.
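The idea above can be sketched with simplified stand-ins (these are not the real OpenTelemetry API types; the class name, `onEnd` signature, and the use of a plain `Consumer` as the "exporter" are all illustrative assumptions): a thin batching layer buffers spans, and when a batch is full it hands the batch off without ever waiting on the export result.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

// Hypothetical sketch of a "batching concurrent processor":
// a small batching layer in front of simple, fire-and-forget export.
public class BatchingConcurrentProcessor {
    private final Consumer<List<String>> exporter; // stand-in for a span exporter
    private final int batchSize;
    private final Queue<String> buffer = new ArrayDeque<>();

    public BatchingConcurrentProcessor(Consumer<List<String>> exporter, int batchSize) {
        this.exporter = exporter;
        this.batchSize = batchSize;
    }

    // Called when a span ends: buffer it, and flush a full batch
    // without blocking on the export result (like a simple processor).
    public synchronized void onEnd(String span) {
        buffer.add(span);
        if (buffer.size() >= batchSize) {
            exporter.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

The point of the sketch is only the split of responsibilities: batching lives in the processor, while the "don't wait" behavior is what makes it concurrent.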
B: Or they could not even implement an exporter, and just do it all in a processor. I mean, there's nothing that forces anyone to use an exporter at all.
A: A bit, because you can still just use the auto-configured SDK hooks to customize the SDK to your heart's content.
B: Yeah, I think what we have today is super flexible and configurable, and maybe a little bit more tricky in the agent, like you said, but I mean, we kind of leave it pretty wide open for people to do whatever they want.
D: Zoom thumbs up, right, yeah, yeah, in the chat. So I missed the beginning: you were talking about the batch span processor. Why were you talking about the batch span processor? Because this PR doesn't have anything to do with the batch span processor. Well.
B: All it does is say the SDK needs to document what they expect out of an exporter, which I think we have done: return a CompletableResultCode. The batch span processor will block, per spec. You can implement your own span processor. You can implement an exporter that just returns success, and it's fire and forget. But I don't think there's any reason why our out-of-the-box exporters need to be converted over to fire and forget just because it's possible for someone to write a fire-and-forget exporter.
D: See, the thing that I'm still not clear on is, it says ExportResult. So it's defining the export result, right, the return value, which can be async. So it can be a promise also. But it says success means the batch has been successfully passed to the exporter.
D: But I thought that was the whole point of this PR. So clearly there's some misunderstanding somewhere. Oh, I might have missed that line.
D: Yeah, so Jack, would you mind undoing your approval and asking for clarification on that? Yeah, I can do that. But do you agree with me that if it says the batch has been successfully passed to the exporter, then what my draft PR does is the correct interpretation of that?
D: Because, see, I mean, if you look at the diff there, what it used to say is the batch has been successfully exported, meaning the data is sent over the wire and delivered to the destination server. That part was explicitly removed, which is why I think my interpretation is what at least the author intended. Yeah. This is very strange.
C: I mean, if we didn't have span processors, maybe. But we already have this nice split between span processors and exporters. So why would we not leave the fire-and-forget decision to the span processor, so that the exporters can be used either way? It's just losing features for seemingly no benefit.
D: And what if part of that retry is storing to disk and exponential backoff? You just leave that promise open? I mean, that's fine, if that's how the exporter wants to behave. Sure.
D: I mean, the nice thing about that being in the span processor is that it's exporter-agnostic. The not-nice thing is that you end up re-serializing over and over for each retry. Whereas if you do the disk persistence in the exporter, you can just save the HTTP body directly to disk. Yeah. That's okay.
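The re-serialization point can be sketched minimally (this is an illustration, not the real exporter code; the file naming, `.otlp` extension, and method names are assumptions): serialize the batch once, write the wire-ready bytes to disk, and on retry read them straight back for the HTTP client without touching the span objects again.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical disk-backed retry sketch for an exporter.
public class DiskBackedRetry {
    // Persist the already-serialized HTTP body; serialization happens exactly once.
    public static Path persist(Path dir, String batchId, byte[] httpBody) throws IOException {
        Path file = dir.resolve(batchId + ".otlp");
        Files.write(file, httpBody);
        return file;
    }

    // On retry, the bytes go straight back to the HTTP client, no re-serialization.
    public static byte[] loadForRetry(Path file) throws IOException {
        return Files.readAllBytes(file);
    }
}
```

Doing this in a span processor instead would mean holding (and re-serializing) `SpanData` for every attempt, which is the cost being weighed here.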
D: There was one other, oh yeah, I mean, I just wanted to get on rocks. I, oh.
D: Approved it already, yay. All right, folks, was there anything else? Oh, of course, the build sort of succeeded and sort of failed: it made it to Maven Central, but did not make it to the GitHub release. Oh.
A: And so then we continue to publish the stable annotations artifact, as is, out of the SDK repo, and we continue publishing that until 2.0.0, and just have two versions of the annotations. We've marked the SDK one deprecated, right? So deprecate one, but continue to publish it. And do we reject this PR and ask him to open it in instrumentation, after we've moved the other one over, as like a pattern for him? Or, yeah.
B: Although I think the PR has been moved over to somewhere else.
B: So my proposal would be for someone who's really good at doing instrumentation, not me, to take the existing WithSpan and this proposal, and put it somewhere in instrumentation, a new artifact in instrumentation, and work on trying to implement the metric side. I mean, obviously the span side is easy. It's.
D: Yeah, which is what I was going to suggest. I mean, I can copy the span annotation over and, you know, build out the dual support there.
D: But then I would probably ask the PR author of the annotation to submit the annotation plus implementation to the instrumentation repo, because I think that was kind of why you had initially brought it up: those two make the most sense bundled together. Yeah.
A: Okay, if we run out of time, we can add those to the new module, if we run out of time before the next release.
A: Well, if we need to, we can bring the artifacts over without adding the publish plugin, yeah.
B: Well, that's funny. When I saw what you wrote in the notes, I thought you were linking to some stale issues that we needed to deal with, but this is even better than that. It's more meta, yeah.
B: Stuff, I think there's gonna be, I don't know... will this go through and sweep through all the existing ones, so we're gonna get like 55?
A: No, so we talked about it. The PRs will be marked as stale and then closed automatically after some relatively short period; issues will stay open for a long period of time by default. But we have a tool at our disposal to close them automatically, which is we can add a label to them, request-author-feedback or something to that effect, which basically says, like.
B: Figure this out. Speaking of which, I spent last weekend trying to work on building the Java 11 HttpClient wrapper instrumentation for the instrumentation project, and it basically took me... I mean, I played video games while I was doing this, but just getting the project up and indexed was a multi-hour affair on my Windows machine.
C: Mine is multi-hour, but yeah. So you mentioned the shared thing; I think Plum looked into that once, he might have some context.
B: Oh yeah, yeah. They should, like, be able to upload indexes, pre-indexed stuff for that.
C: Yeah, I mean, they could ideally have a longer time, maybe, but if that's annoying to set up... Because, for example, Tres has this multi-export thing; that's sort of just an example of the idea. I think he has a spec PR or something for it. So that's one use case: we have draft PRs working on the spec.
A: You know, an autoconfigure environment variable or system property that you can flip to enable them as the default aggregation, which is something that's been talked about in the spec, and so these are the steps we need to take to make that happen. I feel like that'd be a good feature to get in for the next release; it's a good, valuable thing for users, and it could help speed up the stabilization of them in the specification.
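A flip like that might look something like this as a sketch (the property value `"exponential"` and the class and enum names here are purely hypothetical, not the real autoconfigure key or SDK types): read a flag and choose which histogram aggregation is the default.

```java
// Hypothetical sketch of flipping the default aggregation via a flag.
public class DefaultAggregationFlag {
    public enum Aggregation { EXPLICIT_BUCKET_HISTOGRAM, EXPONENTIAL_HISTOGRAM }

    // `value` would come from an env var or system property; name assumed.
    public static Aggregation fromProperty(String value) {
        if ("exponential".equalsIgnoreCase(value)) {
            return Aggregation.EXPONENTIAL_HISTOGRAM;
        }
        return Aggregation.EXPLICIT_BUCKET_HISTOGRAM; // current default, unchanged
    }
}
```

The design point is that nothing changes unless the user opts in, which is what makes it safe to ship before the spec stabilizes.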
B: Yeah, I started looking through that PR today. I got through four of the 25 files, and those were all of the ones that were super simple and didn't have any interesting content in them. I feel like it'd be good to get... I don't know, is Josh Suereth doing much? You haven't seen him at this point? He.
A: He was on the spec SIG, he attended the spec SIG this week. Maybe I'll try pinging him, yeah.
B: We have a customer who wants to get the Prometheus metrics that we generate into New Relic. I remember when I was there, there was talk of, or the beginnings of, some way to do it, but I haven't... it's been a while.
A: Yeah, I mean, we can definitely do it. You know, we have to do some stateful transformation, because we're a delta backend, so we have to convert your cumulative metrics to delta. Some kind of strangeness happens at the beginning and end of your series, but it works for the most part. So, is it something you plug into your Prometheus implementation?
A: Yeah, so I think it is, unless you do a route where you have the Collector scrape the Prometheus endpoint at some point, and then, you know, convert it to the OpenTelemetry data model and export over OTLP. That would be the only other way that I can think of, because you can't exactly configure New Relic to have access to your private data center, yeah, obviously.
B: Cool, yeah, I haven't taken the step to introduce the Collector yet, although I think it would be a good idea. It's also one more thing to manage in every cluster that we run, so we avoid it as long as possible.
A: There's kind of a caveat there, which is that in order to do that with New Relic, you have to use this processor for the Collector, the cumulative-to-delta processor, and so you have to pay that stateful-transformation tax yourself rather than letting New Relic pay it.
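The stateful transformation being described can be sketched like this (a simplified illustration, not the actual Collector processor; the class and method names are assumptions): remember the previous cumulative value per series and emit the difference, which is also why the start of a series behaves strangely, as there is no baseline to subtract yet.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.OptionalDouble;

// Hypothetical cumulative-to-delta sketch; the per-series map is the "stateful" part.
public class CumulativeToDelta {
    private final Map<String, Double> previous = new HashMap<>();

    // Returns empty for the first point of a series (no baseline yet);
    // otherwise returns the delta since the last cumulative value seen.
    public OptionalDouble convert(String seriesKey, double cumulative) {
        Double prev = previous.put(seriesKey, cumulative);
        if (prev == null) {
            return OptionalDouble.empty(); // start-of-series strangeness
        }
        return OptionalDouble.of(cumulative - prev);
    }
}
```

A counter reset (process restart) would produce a negative delta here, which a real processor also has to detect and handle.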
B: I mean, I assume you'll still be storing deltas, right?
A: Yeah, yeah, for all but very specific types of metrics, like non-monotonic sums. Those are more useful in their cumulative form than they are in their delta form.
A: The last issue that I added here, this was kind of a Trask question. I suppose Anuraag might have an opinion too, but yeah.
A: Maybe I'm just being antsy about it, and Trask has been sick and out, so I'll just give it another week before I... okay.
A: Most of the lines are actually additions to the README, because I added some codegen to output the shape of all the metrics that are generated from that.