From YouTube: 2021-03-17 meeting
A
Figuring out... so I got JMH running with the profiler. I tried both with JFR and with async-profiler, and got the most legible and understandable results using async-profiler and its tree view. The data was, I think, probably all there in JFR, but a lot harder to interpret. Actually, why don't I just share my screen and I can show you what I see, which is...
A
I mean, it's really hard to get super consistent, reproducible results on my laptop, but this is the output of async-profiler for 20 threads. Let's look at five threads; I think that's maybe a little more indicative. I don't know how much you've played around with async-profiler before, but it basically does sampling. I think the way it works by default is, when it sees CPU hit some threshold...
A
...it will do sampling of the current thread stacks, and then based on that you can infer where the CPU is being used. So after looking at many, many, many of these tree-view traces, I wanted to hone in on what was going on around the blocking queue, the ArrayBlockingQueue that we're using at the moment.
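The hot path under discussion, application threads offering finished spans and a worker thread polling them off, can be sketched against a plain ArrayBlockingQueue. This is a toy illustration, not the actual SDK code:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class OfferPollHotPath {
    public static void main(String[] args) {
        // The span processor's hot path: producers offer() finished spans,
        // the worker thread poll()s them off. Both operations take the
        // queue's single internal lock, which is where the profiled sync
        // time inside the JDK ends up.
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        System.out.println(queue.offer("span-1")); // true
        System.out.println(queue.offer("span-2")); // true
        System.out.println(queue.offer("span-3")); // false: queue full, span dropped
        System.out.println(queue.poll());          // span-1
    }
}
```

Note that `offer` never blocks the application thread; a full queue just means a dropped span, while the lock contention between offer and poll is what shows up in the samples.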
A
And so in this particular benchmark, we captured 1529 total sample stacks, and if you look at the usage on the offer side (so, we're putting stuff in, in the span processor, on end), it's got about 2.16 plus 2, so, you know, about four and a quarter percent of the CPU time being spent offering on the offer side. Which is not horrible but not great. But then on the polling side, we're seeing, in this particular run...
A
I mean, this may be higher; I've seen these numbers all over the place, but it's always been a lot higher than I would have hoped. Let me look at a different one. So there's where I have many, many of these stacks. Here's another five-thread one that might be more indicative.
A
Some of this stuff is, I mean, I think all of this stuff is kind of hard to interpret, or, let's say, not trivial to interpret. And it may be that we're spending a lot of time polling. Well, I wouldn't expect us to spend a lot of time polling if there are things to be pulled off the queue, right? So I'm a little bit confused about why the polling is so high here; this is higher than I was seeing in a lot of my runs.
A
Matters more, for sure. And in this particular case, seeing four and a quarter percent on the offer side is pretty consistent, and that's not horrible, but it's also a pretty good hunk of CPU time being spent. And it's all down in here, in the sync code inside the JDK, or actually, mostly in native sync code.
A
Big impact, yeah. I was quite surprised myself. So the next thing I want to do is try running the same benchmark with the profiler on the proposed change to use that non-blocking queue, which I can only hope will be faster. The only issue, then, is how do you keep track of all of the different sizes?
A
Do you know how it manages size? Does it have linear, or does it have constant time on its size implementation?
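For the JDK queues at least (I haven't checked JCTools' implementation here), the answer differs: ArrayBlockingQueue keeps an explicit count, so `size()` is constant time, while ConcurrentLinkedQueue's `size()` is documented as walking the whole list, i.e. linear. A small illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueSizeCost {
    public static void main(String[] args) {
        // ArrayBlockingQueue maintains a counter under its lock: size() is O(1).
        ArrayBlockingQueue<Integer> abq = new ArrayBlockingQueue<>(1000);
        // ConcurrentLinkedQueue has no counter: size() traverses every node,
        // O(n), and is only an estimate under concurrent modification. That is
        // why the size implementation matters when picking a non-blocking queue.
        ConcurrentLinkedQueue<Integer> clq = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 1000; i++) {
            abq.offer(i);
            clq.offer(i);
        }
        // Same answer, very different cost per call.
        System.out.println(abq.size() + " " + clq.size());
    }
}
```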
A
Yeah, I don't know. I've never looked at that library myself, or at how much code we'd be bringing in. I think I did look and see that it was Apache licensed, so we can shade it in if we need to, or if it's only one class we can probably just copy it in.
B
I've used it in a couple of other projects, including... like, I replaced the zipkin exporter with that, just for the heck of it. Not within zipkin, though; I didn't care enough. This person seems to care a lot, yeah, which is good; I'm into contributing. I probably should have contributed my thing also. But I think JCTools is a good, small but fast queue, and it's worth it. Like, they use Unsafe, so obviously it's complicated code, but we don't care about that as long as it's small, right? So, yeah.
B
So this is using the signal; so this is still his implementation, but with an ArrayBlockingQueue here? Where is... is this the current option? Oh, this is the current. This is the current, yeah. I was still going to suggest that we stick with signaling plus ArrayBlockingQueue, which gives us a pretty good balance of the improved CPU without adding the weird counter stuff, and then go for JCTools from there.
A
The poll doesn't bother me, if we're spending time polling. Oh yeah, 545 self. That means there were 545 stacks captured where poll was at the top of the stack. So it was sitting there just waiting for something to come in, so that doesn't bother me.
A
Sorry, sorry, this 19 is depth; back up, this is just the depth. There were 54 stacks, which is 4.828 percent of the stacks, where this offer was actually somewhere in the stack, but there's zero percent self. The self is basically where it's seen at the top of the stack. So it's basically the thing that is actually at the top of the captured stack, what's actually waiting.
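The total-versus-self distinction can be made concrete with a toy calculation over a few invented sample stacks (the frame names here are placeholders, not from the actual profile):

```java
import java.util.List;

public class SelfVsTotal {
    // Each sample is one captured stack, leaf frame first (index 0 = top of stack).
    static final List<List<String>> SAMPLES = List.of(
            List.of("park", "poll", "run"),   // poll is in the stack; park is the leaf
            List.of("offer", "onEnd", "run"), // offer is the leaf here
            List.of("poll", "run"));          // poll is the leaf here

    // "Total" count: samples where the frame appears anywhere in the stack.
    static long total(String frame) {
        return SAMPLES.stream().filter(s -> s.contains(frame)).count();
    }

    // "Self" count: samples where the frame is at the very top of the stack,
    // i.e. what was actually executing (or waiting) when the sample was taken.
    static long self(String frame) {
        return SAMPLES.stream().filter(s -> s.get(0).equals(frame)).count();
    }

    public static void main(String[] args) {
        System.out.println("poll total=" + total("poll") + " self=" + self("poll"));
    }
}
```

So a frame like offer can show up in nearly 5% of stacks (total) while having zero self time, meaning the cost is in its callees, not in offer itself.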
A
If that makes sense. As I said, it's a little tricky to reason about this, because it's basically just doing sampling: it periodically samples, but then also when the CPU hits some threshold it'll capture a sample as well.
A
My guess... so I think we can actually see the thing that's interesting, exporter-wise, that I was worried about: toSpanData being the thing where we're building wrappers, and it's not very expensive. I was worried, but actually I'm more worried about, like, if we included the OTLP transformation in here.
A
But the actual creation of the span data, because it's just creating a wrapper, it's not very much of the time spent in there at all.
A
Yeah, the delaying... this is the delaying span exporter. So it's basically how long it takes.
A
Do we... well, it will poll, I think. Well, the other thing we could try, that Jason and I have thought would also be a good, simple thing, is: poll, and then, if you see something, drain. That poll-plus-drain will catch more than... right now we just poll one, and then go through the loop and poll again, and poll and poll and poll.
B
I had considered that idea when I was writing my draft thing. It does seem more complicated than having a separate signal for the polling; at least that makes it easy to reason about. Like, you only drain, or you only signal; you don't poll and drain. I wasn't too sure what happens in that case. Yeah, that's fair.
B
Yeah, like polling from the signal queue and then draining from your span queue, that sort of still made sense to me. But when I tried to reason through, like, poll and drain, I thought it could go over the max export batch size if I'm not careful, that sort of thing. So that was the main reason I avoided that idea.
A
And so we might want to signal on, like, half batch size or something, just to make sure that it gets drained, and the drain isn't racing what's being added to it.
A
Well, the ArrayBlockingQueue, though: if we're draining, I don't think draining grabs a lock on the queue. So I think we can drain it.
A
I'm thinking about the time when we're running right at, like, the perfectly tuned section, where our queue size is exactly the same thing, and so right at the same time that we fill it up, that's when our timer hits. And, as I said, also assuming max export...
B
Yeah, so my suggestion... so we can merge the benchmarks, I think, right? Did you find them to be useful benchmarks when you tried them? I think so. I mean...
A
I think we might still want to tweak them over time. The thing I'm not convinced of is the way the benchmarks are set up right now: they run for, like, five iterations of five seconds on each setup, and five seconds is a very short amount of time to be gathering samples.
A
...for me to get a handle on things. But that's the kind of thing where I'm not 100% sure that the precise numbers are tuned really great for doing benchmarking. You want to use JMH...
A
And as long as you have the async-profiler native library in the right place for Java to pick up, then it'll just use it. Yeah, I spent a bunch of time figuring out how to run the benchmarks in IDEA, to see if I could run the benchmark there, because IDEA also now has async-profiler and JMH... sorry, not JMH, JFR built into it. But I couldn't get IDEA and async-profiler, JMH and Gradle...
A
...and that worked, but I wasn't able to get the async-profiler integration in IDEA to work. Did you add the annotation processor for the main method? I went down that road for a while, but in the end I just wrote a main method that did a manual thing, and then I also had to run Gradle to generate the stuff and add that META-INF file to my classpath. You know, paperwork.
B
I think, like, in many projects I've had both the Gradle plugin and the annotation processor. If you have both of them there, then you can also add main methods and they run fine, and it's much easier than going through the stupid JMH plugin, I think. So that's something we could do at some point, if we wanted to. Yeah, I've had good experiences with it. Yeah.
A
I mean, when you're doing this kind of work, it's also not a big deal to hack on the build. Yeah, not so common, yeah. Like, it's not a part of the build, although it would be interesting to see if we could put this in as part of our actual builds. But then we'd also have to figure out how to get the async-profiler library.
A
Yeah, cool. I think that sounds good.
A
Where are you going? Just going out to the Oregon coast, to stare at the rain and the water for a week? What, it's not going to be good weather next week? Oh, it's almost never good weather at the Oregon coast. So it's just a way to get out of the house and out of the city and see a little bit of different scenery.