From YouTube: 2022-09-08 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
For me, it's actually 6 p.m. — I'm in Germany, in Munich.
C
Let's bump — let's start with releases, in case Lauri gets here to chat about the startup profiler.
E
I haven't — I just got back from vacation two days ago, well, early yesterday, so I need to take another look at that. But I think I approved — no, I didn't approve.
A
I think the first time you wrote the comment it didn't have any tests, so it wasn't ready for approval yet.
E
I think we really need to do that. They seem to be floundering on whether it is an allowable change to change the histogram bucket boundaries.
C
With one nuance that Riley pointed out: in the change they're proposing, they're only adding additional buckets, not removing any. So you will only get more granularity; there's no loss of granularity.
E
So there's a PR right now — at the spec level, the API and the SDK for logs are in disagreement about naming, and I have a PR out that synchronizes them and gets everything aligned. It's approved by a couple of people, but I think it's better to just wait until that's merged, so that we have more certainty. Makes sense.
E
Let's see — let's go through these real quick. The conditional resource provider: that should be uncontroversial, that's just adding a new internal API, so I'll take a look at that today. Dropping the Jaeger proto artifact: that we agreed on, and we updated the versioning documentation to explicitly allow it. This is kind of an experiment in a way — it's the first time we're going to stop publishing a previously stable artifact.
E
I do need a thumbs up from John on that. It's not critical to get out this release, so if we have any kind of reservations about it, we can delay.
E
The next one, fixing container detection — that one I've approved. It looks reasonable to me.
E
Right, and hopefully that moves to instrumentation within a couple of releases. So, dropping the Micrometer shim — I'd like to, I guess, defer to the instrumentation folks for that. I don't want to drop the Micrometer shim unless we're confident about having it available in instrumentation.
E
I think it's reasonable. The PRs are there, but I just want to confirm with you all that that's on track to be merged.
C
All right, one topic I was gonna ask about is the release schedule — if we could pin down what it is. Is it the Friday after the first Monday? I was thinking of maybe adding that to the releasing doc in the three Java repos — sort of their expected schedules. Not that we can't slip, but just to have an expectation out there, especially for downstream distros. I think it's helpful.
E
Yeah, that seems like a reasonable thing to do. I like the definition you just said — that tends to be what we do.
B
Easter, right? It's like: except in years ending in zero, then we do something different.
C
And we have that awesome bot that sends the version bump PRs now. Yeah, saves five minutes of work.
C
Cool. So, since we won't meet before the next — before the instrumentation release...
A
That one about the locking — I think it's not necessary. I mean, Trask, if you have time, please do take a look at it, but I don't think that it's crucial for the next two weeks. The same goes for the net attribute getters change, which is very huge.
C
Okay, yeah, that's fair. I'll try to get through this one to unblock you for additional pieces.
A
The Gradle plugin one is probably nice to have, since it's very small and it adds the service name detection. It's pretty good.
C
Cool, and I'm guessing there's probably nothing.
C
All right, Jan — turning it over to the startup profiler.
D
It's a proposal to develop a tool to profile the premain method of the OpenTelemetry agent, for example. It could help in investigating slowness at application startup.
C
So what is the — what's the alternative, or the proposal here?
D
No, it's a — it's a different agent. We don't — we will not have to modify the agent itself.
C
And so, even though we can't start JFR via the command line that early, programmatically during premain — you've tested it? We can start it in premain?
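For context, a minimal sketch of what starting JFR programmatically inside a premain might look like — the class name, output path, and shutdown handling here are assumptions for illustration, not the actual proposal, and it requires JDK 11+ where the jdk.jfr API is available:

```java
import java.lang.instrument.Instrumentation;
import java.nio.file.Path;
import jdk.jfr.Recording;

public final class StartupProfilerAgent {
  public static void premain(String args, Instrumentation inst) throws Exception {
    // Start a JFR recording programmatically; -XX:StartFlightRecording on the
    // command line cannot capture this phase, but the jdk.jfr API works from premain.
    Recording recording = new Recording();
    recording.setDestination(Path.of("agent-startup.jfr")); // hypothetical output file
    recording.start();
    // ...the real agent's initialization would run here, under profiling...
    Runtime.getRuntime()
        .addShutdownHook(new Thread(() -> {
          recording.stop(); // flushes the recording to the destination
          recording.close();
        }));
  }
}
```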
C
Has anybody dealt with Java agent startup slowness in the past? Any tricks that you've used?
A
We've had several complaints from some of our customers that the Java agent is slow to start up, and that it adds a significant delay to the first HTTP request. I don't know if we've had any actual valuable comments or insights on how to improve the startup time — I don't think so — so it would be worth at least knowing what's the slow part of startup, which goes with your idea.
C
That's a good point, because there are kind of two components to startup: there's the premain, the actual initialization of the Java agent, and then there's the impact that it has on the user's application loading, up until that first request.
A
Yeah, I just remember that we actually had a customer that used Reactor, and the first request in the application would time out, because the instrumentation added so much overhead that the first request was just timed out. The solution that we suggested was just, you know, to Class.forName the used classes — like the Reactor HTTP client, HttpClient, the finalizer and the internal stuff — which worked. But it's terribly ugly, and I think it heavily depends on the instrumented application.
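A rough illustration of that workaround — eagerly loading the classes that would otherwise be instrumented lazily on the first request. The class names and the helper are hypothetical, not the actual list given to that customer:

```java
final class InstrumentationPreloader {
  // Hypothetical preload list; the real names depend on the instrumented application.
  private static final String[] PRELOAD = {
    "reactor.netty.http.client.HttpClient",
    "reactor.netty.http.client.HttpClientFinalizer",
  };

  static void preloadClasses(ClassLoader appClassLoader) {
    for (String className : PRELOAD) {
      try {
        // Loading (and initializing) the class up front moves the one-time
        // bytecode-transformation cost out of the first request.
        Class.forName(className, /* initialize= */ true, appClassLoader);
      } catch (ClassNotFoundException e) {
        // Dependency not on the classpath; nothing to preload.
      }
    }
  }
}
```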
C
So you had them, during the initialization, load those other classes.
C
I'm really hoping — I mean, eventually — that a solution to that will be the static instrumentation.
A
It doesn't necessarily have to be a server request. For example, an application could be loading its configuration from some remote source during setup — like, I don't know, setting up a Spring bean that requires an HTTP call to retrieve some necessary data. So the question is whether this would still be considered a part of initialization, or whether this is the first request and we just want to go and stop before it happens.
A
Probably something like five minutes, with an option to override this for particularly slow or huge applications.
F
So on the performance front, wouldn't it be nice for the agent to, let's say, emit metrics about itself — how much time it took to start, that kind of stuff — and then we can leverage the... let's say, since the agent already is something that's used for producing metrics.
F
Couldn't the agent produce metrics that would allow someone to diagnose how much time it takes? Then we would have the absolute time, as opposed to, let's say, the proportional stuff like this with JFR. I mean, I feel that we need both, actually — but still, on the performance front, it would be nice to get some absolute metrics as well.
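A hedged sketch of what such self-diagnostic startup metrics could look like with the OpenTelemetry metrics API — the meter and instrument names here are invented for illustration:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.Meter;

public final class AgentSelfMetrics {
  private AgentSelfMetrics() {}

  /** Records how long agent startup took; the names are hypothetical throughout. */
  public static void recordStartupDuration(long premainStartNanos) {
    long elapsedMs = (System.nanoTime() - premainStartNanos) / 1_000_000;
    Meter meter =
        GlobalOpenTelemetry.getMeter("io.opentelemetry.javaagent.self-diagnostics");
    meter
        .gaugeBuilder("agent.startup.duration") // assumed metric name
        .ofLongs()
        .setUnit("ms")
        .buildWithCallback(measurement -> measurement.record(elapsedMs));
  }
}
```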
C
Yeah, I think that could be very — certainly the premain time helps to know, like, if it's because we spent a lot of time starting up, versus slowing down via bytecode instrumentation.
F
Yes, but sorry — I think this is orthogonal to what Jan's proposing; it's just, let's say, still in the performance aspect of the agents.
C
Where would we emit? Would we use OTel metrics to emit?
A
We could be using OTel metrics, because I think these are usually easier to consume than, you know, just logs sprinkled with numbers. And we could have this optionally available, like an additional flag, besides the usual agent debug mode.
C
Is anybody using the — so, like the trace exporter, the span exporter? The OTLP exporters emit some self-diagnostic metrics, right?
E
They're mildly useful if things go wrong, to understand how many were seen versus sent successfully. But when I've helped customers with issues related to the OTLP exporters, I'm primarily looking at the logs from their application; those have proved more useful.
C
So if we did emit via OTel metrics, I would probably intercept that in our metric exporter and log those to our log, just so it's all in one self-diagnostic source.
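As a sketch of that interception idea — a delegating MetricExporter that logs the agent's own metrics before forwarding everything to the real exporter; the class, the scope-name filter, and the logger are all assumptions:

```java
import io.opentelemetry.sdk.common.CompletableResultCode;
import io.opentelemetry.sdk.metrics.InstrumentType;
import io.opentelemetry.sdk.metrics.data.AggregationTemporality;
import io.opentelemetry.sdk.metrics.data.MetricData;
import io.opentelemetry.sdk.metrics.export.MetricExporter;
import java.util.Collection;
import java.util.logging.Logger;

final class SelfDiagnosticLoggingExporter implements MetricExporter {
  private static final Logger logger = Logger.getLogger("agent-self-diagnostics");
  private final MetricExporter delegate;

  SelfDiagnosticLoggingExporter(MetricExporter delegate) {
    this.delegate = delegate;
  }

  @Override
  public CompletableResultCode export(Collection<MetricData> metrics) {
    for (MetricData metric : metrics) {
      // Assumed scope name for the agent's own self-diagnostic metrics.
      if (metric.getInstrumentationScopeInfo().getName().startsWith("io.opentelemetry.javaagent")) {
        logger.info(metric.toString());
      }
    }
    return delegate.export(metrics); // everything still goes to the real exporter
  }

  @Override
  public AggregationTemporality getAggregationTemporality(InstrumentType instrumentType) {
    return delegate.getAggregationTemporality(instrumentType);
  }

  @Override
  public CompletableResultCode flush() {
    return delegate.flush();
  }

  @Override
  public CompletableResultCode shutdown() {
    return delegate.shutdown();
  }
}
```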
C
Well, I like the idea of going with OTel metrics — I mean, we have good precedent in the span exporter, the OTLP exporter diagnostics. I guess the only thing is, again, I would probably end up logging that to our log, so I don't know — if you're using OTLP, and OTLP ingestion, that option wouldn't be available.
B
Nope, not me. One question actually — excuse me — quickly, for Jack: the remove-micrometer-shim PR says it depends on the other one being merged, from contrib or instrumentation — I don't know which one it is — and that hasn't been done yet. So is that gonna make it into the release?
E
I think it is. Mateusz approved the PR over in instrumentation, and I think it's a rubber-stamp-style PR, so it's just waiting on Trask or — or Lauri. There we go.
C
Oh, one last call: if anybody is looking for issues to help out with, repro-provided issues are always interesting. Like this one that came in recently — it's probably super complicated, Kotlin coroutine something-or-other — but at least with a repro you can, you know, set up your debugger and go through it for hours, and if you like that kind of issue, it's super helpful.