From YouTube: 2020-08-20 Java Auto-Instrumentation SIG
A
Yeah, my parents gave me a plant for my birthday. Nice.
C
Yeah, I think it was awesome. I went back to clear up my desk, and the plant that was on my desk was just dead.
B
All right, so I offered to do a little tour of our Java agent repo, since we open sourced it, I don't know, two weeks ago? I think approximately two weeks ago. So I'm going to try and share my screen.
B
Great, okay. So I have a really rapidly thrown together and very disorganized little outline of stuff we could talk about, but I wanted to start with a diagram that Jason keyed me off to, that I didn't know existed (Jason Keller), which is this architectural overview here that lives, I think, still just on our private repo. It has not been updated in quite some time, but I think the basic high-level overview still stands here. The basic structure of the agent still stands. That's...
A
This diagram is in the public repo, it's just not linked to from the wiki anymore. Okay, okay, this is the wiki.
B
Yeah, okay. So: agent, newrelic.jar, the New Relic edge, and collector. I didn't realize that the term "New Relic edge" was actually this old, Jason Keller. I know that's one of your favorites; I thought it was a newer term. So, thinking about where I should start: in addition to the New Relic agent coming up, instrumenting all of the user code and library code, and then sending telemetry out...
B
There's also this kind of bespoke protocol that New Relic has crafted, which helps to manage the lifecycle of each agent, and we can look at that a little bit. But basically the agent registers itself with our back end, gets some identifying characteristics from the back end, and then there's a handful, more than a handful, of endpoints that are hit throughout the lifecycle of an agent, both to report telemetry and to ask questions.
B
So where else am I here on my... okay. So let's talk a little bit about our weaver. Our repo is pretty big, right? There are a lot of modules at the root level, so we're not going to touch on all of these. Some of them are very small and serve primarily to support testing and our build system, but there are some key ones we should look at for sure.
B
The weaver is one of those, and the weaver is what has been written to do bytecode weaving. So when we say "weaver" we mean bytecode weaving. We use ASM as our bytecode manipulation library. I'd say ASM is a little less friendly than Byte Buddy, which is what's used, I think, in OpenTelemetry and other agents most likely. ASM is really, really powerful, but it's also, I think, a little lower level. I'm not going to say clumsy or clunky.
B
I just think it's lower level. And so we have a lot of listeners that get registered, or visitors rather, and these visitors are invoked by the ASM system. So we register some class load listeners, and upon class load we hook in with ASM, and then our visitors get invoked every time there's bytecode happening.
B
Every time this class loading happens, our visitors get invoked, and inside of those invocations we decide whether or not we're at a point of interest. If we are, then our weaver can help us get our code wrapped around the points of interest.
B
So we can cruise into an instrumentation package, and we can look at what it takes to make instrumentation. Let's see. This is going to seem pretty familiar if you've looked at the OpenTelemetry agent codebase at all: a large package full of instrumentation modules. These are all of the very common frameworks and libraries that we support and have created instrumentation for. And one thing, just visually, if you're scanning through here, you'll say, man, why are there, you know, five different instances of kafka-clients?
B
We add a suffix onto the end, so it kind of reads like a versioning story if you go through in order. All right, so I think this goes... remind me, Jason, if it's starting with or ending with this number, I can't remember. We can look, though. We can go into this, for example, and inside of the build.gradle we should declare...
B
Inside of this verifyInstrumentation block, which is a custom bit of build.gradle that we'll show in a sec, we claim that we pass only versions starting at 10, or 0.10, right. So this is like an inclusive-with-a-bracket sort of interval syntax here, right, exclusive on the other end, and this goes up forever.
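For reference, the block being described might look roughly like this. The coordinates and range below are illustrative, and `verifyInstrumentation`/`passesOnly` come from the agent's custom Gradle build plugins, not from stock Gradle:

```groovy
// build.gradle of an instrumentation module (illustrative values)
verifyInstrumentation {
    // '[' = inclusive lower bound, ')' = exclusive/open upper bound,
    // so this reads: every kafka-clients version from 0.10.0.0 upward.
    passesOnly 'org.apache.kafka:kafka-clients:[0.10.0.0,)'
}
```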
B
So that's our way to declare what versions we support for a given piece of instrumentation. Then, going back down to kafka-clients: the next one here starts at 2.0. There was a version change that required us to redo some instrumentation, and then starting at 2.0 there is this other version. Okay, so what's inside here? I don't know this specific instrumentation at all. I picked it at random, so let's go and find out.
B
What's in here? So the first thing you'll notice is two packages: one is com.newrelic.instrumentation.kafka, and one is org.apache.kafka.clients.consumer. That package conflicts with, or overlaps, I guess I should say, the actual implementation package, and that is by design. In the way that our weaver works, there are a couple of assumptions.
B
One assumption is that your package, class name, and method signatures all match the thing that you're intending to weave, with an exception here: this name doesn't match, but we've specified the original name as part of this annotation. Okay.
B
So we have a couple of things of interest in here, and again, I don't know this instrumentation at all. The first thing you'll notice on this class is the weave annotation. That is a custom New Relic annotation that flags a class to indicate that it should be woven.
B
Normally we would expect, or the weaver would expect, the class name to match exactly. But if you want to make it a little clearer, you can specify a different name with an originalName. Okay. So we're instrumenting the KafkaConsumer in this case, and there is a method on KafkaConsumer called poll, and it takes a Duration and returns some ConsumerRecords. And what we do in this particular case...
B
It looks like we call original. So this is kind of the weaver API in action here: an annotation to flag the class, and then, inside of the implementation, we have a Weaver.callOriginal, and that is called out in the bytecode as the location where the original implementation should be invoked. Right, so: call the New Relic code until you see Weaver.callOriginal, then call the original code, returning whatever result might be returned, and then continue in the New Relic code.
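Put together, a weave class along these lines might look roughly like the following. The `@Weave` annotation and `Weaver` class below are simplified stand-ins for the real `com.newrelic.api.agent.weaver` types, so the sketch compiles on its own, and the poll signature is illustrative:

```java
// Stand-ins for the real com.newrelic.api.agent.weaver types, so that
// this sketch compiles on its own; real instrumentation imports them.
@interface Weave {
    String originalName() default "";
}

class Weaver {
    // In the real weaver, the call site of callOriginal() marks where the
    // original method body is spliced in; here it simply returns null.
    static <T> T callOriginal() {
        return null;
    }
}

// Sketch of a weave class, loosely modeled on the KafkaConsumer.poll
// instrumentation discussed above (the method shape is illustrative).
@Weave(originalName = "org.apache.kafka.clients.consumer.KafkaConsumer")
class KafkaConsumer_Instrumentation {

    public Object poll(long timeoutMs) {
        // New Relic code runs first; the original poll body runs at the
        // callOriginal() call site; New Relic code then continues.
        Object records = Weaver.callOriginal();
        // ...the returned records can be inspected/reported here...
        return records;
    }
}
```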
B
Okay. So we trap any sort of errors and we notice those, right. So if anything were to happen during this KafkaConsumer poll, we flag it and rethrow. This is a mechanism by which we can trap exceptions and notice exceptions. And then we also, it looks like, iterate over the records that we're about to return. This is, again, happening after the original was invoked: we have the results here, we're looping over those, and we are using our New Relic API to report as external the parameters that were part of those calls.
B
Let me circle back and talk about our domain model a little bit. There's a lot of...
B
I guess one of the core ideas inside of the New Relic agent is this idea of a transaction, and that goes back a little bit to some of the original history of the agent and the New Relic product offering being centered around Rails, as a Rails monitoring solution. Transactions are kind of this key HTTP concept in Rails, and that's true too in the Java world, with, you know, servlets and any other kind of HTTP framework.
B
So a transaction is essentially everything from the time that you get a request: all of the stuff that happens until your request is finished being serviced. So, really similar to a root span. We also have this idea of spans, but the transaction is kind of the core piece of telemetry, I guess, inside of the New Relic agent.
B
I think this is a pretty good way of just, like... so the data sender is responsible for sending data out of the agent into the New Relic back end, and this interface, I think, does a pretty good job of describing all of the things that we would send over: the things that we track, the things that we gather, the things that we send. So, analytics events: those are essentially transactions. Command results are data that are returned in response to a command.
B
There's error data, metric data, modules... so, as classes are being loaded, we track which jars they are being loaded from, and we collect those up and report those as, we call it, modules, but it's basically the list of jars, the list of libraries, that are being used. Profile data, I'm not actually sure what that is; that sounds like environmental data maybe, I don't know what profile data is. And spans. We do track SQL traces, and so those are returned, and then transaction traces.
B
Let's talk about our API a little bit. So, in that instrumentation we saw this API call. In addition to this instrumentation using our API, we do have a pretty rich public-facing API that users can wire up to manually instrument their own code and get data over to us, and it all starts, kind of, with a static NewRelic call.
B
It's a pretty small and pretty flat package containing all of the API code, and this compiles and publishes. You can use the product without having to ever invoke the API, but if you have some manual code that you want to instrument, you would be using this jar. The top-level NewRelic class is kind of the entry point to the API.
B
We have pretty solid and, like, deep docs on all of this stuff on our documentation site. The thing to point out here, the thing to notice, is that all of these implementations... this is not an interface.
B
This is a concrete class whose implementations all return these no-op instances. So effectively, when you wire in the API by itself, if you use the New Relic API by itself, you get a bunch of no-op code, until or unless at runtime you have the New Relic agent installed via the command-line switch, at which point this implementation will get replaced with the actual implementation. Right, so we do a little bytecode weaving of the API itself to hook in the New Relic implementation.
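The pattern being described, an API that defaults to no-ops until the real agent swaps itself in, can be sketched like this. The class and method names below are illustrative stand-ins, not the actual New Relic API:

```java
// Minimal sketch of the no-op API pattern (illustrative names).
interface Agent {
    void noticeError(Throwable t);
}

// Default implementation: every method does nothing, so the API is
// always safe to call even when no agent is attached.
class NoOpAgent implements Agent {
    @Override
    public void noticeError(Throwable t) {
        // intentionally empty
    }
}

final class NewRelicApiSketch {
    // Starts as a no-op. In the real agent, bytecode weaving of the API
    // swaps in the live implementation when -javaagent is present.
    private static final Agent AGENT = new NoOpAgent();

    static Agent getAgent() {
        return AGENT;
    }
}
```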
B
There's
no
other
really
good,
no
examples
in
here,
but
because
a
lot
of
these
are
just
consumers
and
not
producers,
but
yeah.
You
know
things
like
an
empty
string
that
gets
replaced
with
with
a
true
implementation,
and
I
picked
a
strange
example,
but
there's
some
hooks
in
the
agent
too
to
support
browser
product
and
then
the
no
no
op
agent
big
chunk
of
the
api
lives
in
here
as
well
again,
the
true
agent
would
get
wired
in
instead
of
the
no
hop
agent
all
right.
B
So I'm not sure what package the annotation is in, but we have this Trace annotation, right. So if you're a user of our API and you want to manually instrument your code, the main way to do that is by using our Trace annotation and putting @Trace onto a piece of code. So maybe there's...
B
And
so
trace
dispatcher
equals
true.
This
is
a.
I
have
no
idea
what
what
it
looks
like
I'm
in
some
test
code,
but
this
is
the
annotation
that
you
can
put
on
your
methods
to
indicate
that
they
should
be
traced
by
our
tracer
in
order
to
produce
transactions
inside
of
the
new
relic
api.
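In user code that might look roughly like the following. The annotation here is a self-contained stand-in for the real `com.newrelic.api.agent.Trace`, included only to show the shape; `OrderService` and its method are hypothetical:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for the real com.newrelic.api.agent.Trace annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Trace {
    boolean dispatcher() default false;
}

class OrderService {
    // dispatcher = true: start a new transaction at this entry point.
    // The default (false) would only trace the method as a segment of
    // an already-running transaction.
    @Trace(dispatcher = true)
    public String handleRequest() {
        return "handled";
    }
}
```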
B
So this is how it's implemented, and that's great, you know: through bytecode weaving, we will wrap the implementation as we saw earlier. The question, of course, comes up: what do you do with async code? That's the tricky one, right? Like, what happens there? Well, we have this idea of a token API, and I'm not sure right off where that lives, but let's find it. Keller, can you tell me where this lives? Do you know offhand?
B
So getToken is on the transaction. From the agent you can get a transaction, the currently running transaction, and from there you can get a token. These tokens should just be treated as, like, an opaque identifier that allows user code and instrumentation code to track the fact that bits of execution that contribute to a single transaction might be happening on different threads.
B
So from there, I think what you get back is of type Token, and if we did a thread hop here, which I'm not going to try to code up and show... but if there was, like, yada yada async stuff, thread hops, then later you can say token.link() in the other thread's context, and that allows the internals of the agent to link the thread in which the transaction was started with execution happening on another thread.
B
So this is effectively our async API, and this is how we allow a single transaction to be traced across multiple thread hops. And then, at the end, we also require the token to be expired, so that it's no longer held onto in memory, and the transaction can be reported when it is complete.
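The get/link/expire lifecycle just described can be sketched as follows. The `Token` here is a minimal stand-in that only records its own state; the real `com.newrelic.api.agent.Token` is an opaque handle obtained from the current transaction:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Stand-in Token tracking link/expire state (the real one is opaque).
class Token {
    final AtomicBoolean linked = new AtomicBoolean(false);
    final AtomicBoolean expired = new AtomicBoolean(false);

    boolean link() {
        return linked.compareAndSet(false, true);
    }

    boolean expire() {
        return expired.compareAndSet(false, true);
    }
}

class AsyncWorkSketch {
    // Tie work on another thread back to the transaction that started it.
    static Token runAcrossThreads() {
        Token token = new Token(); // really: transaction.getToken()
        Thread worker = new Thread(() -> {
            token.link(); // on the other thread: join this work to the transaction
            // ...async work happens here...
        });
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        token.expire(); // release the token so the transaction can report
        return token;
    }
}
```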
B
API... what else do we want to talk about? Talk about class loaders, maybe. So I actually don't know if the OpenTelemetry agent does this the same way, I think it might, but the New Relic agent is loaded into, I believe, the bootstrap class loader, and we have class loader separation from user code. And in order to do certain operations within the New Relic code base, we have this thing called the agent...
B
And this basically serves to allow us to bridge calls between the two class loaders.
B
Okay, and then we also... I mean, I just have a bullet point here, just talking about class loaders: we do also instrument class loaders in order to detect the loading of certain classes, and then leverage that in some of the instrumentation. So, I think we were trying to target about 20 minutes and I've gone a little bit longer than that. And I also know that I glossed over tons and went very fast. So I want to... I don't know.
D
Okay, then I have a third question: how do we do that automatic inter-thread hopping?
B
Yeah, let's go find one. So we'll find a piece of instrumentation that does this.
A
We've also got, like... you could look at any of the executor... do we have something with the ExecutorService in here?
B
I think for instrumentation that does track thread hops, we weave a new field onto the class that's instrumented, onto instances of the class being instrumented, and then when exec is called, it looks like we check to make sure that we have the token and we call link. So, kind of like I showed in that contrived example, we still use the token API and we do link them, but it looks like we've wired it up to the exec call, and we do that just prior to calling...
A
...the original. Just to, for people who care about the details: that new field doesn't actually add a new field to the class, because we don't want to change the class signature. So it uses some tricks, I think, to wire up a parallel static instance that tracks that new field. So it doesn't actually add to the class; it creates a new class that lives alongside it. I don't remember the exact details.
E
Yeah, I am glad that you remember that. I think it uses a weak reference to link those together too. Cool.
B
So
I
mean
this
is
this:
is
just
the
one,
the
there's
only
one
class
in
the
completable
future
instrumentation,
but
you
can
see
you
know
it's
pretty
involved
like
there's.
There's
a
lot.
Yeah
there's
a
lot
of
ways
in
which
exec
can
cause
these
thread
hops,
and
so
we've
we've
tried
to
cover
all
of
those.
E
So earlier you were looking at the Kafka instrumentation, and I didn't quite see how you had the distributed tracing support in there, because it looked like the incoming headers were set to null.
B
Right, which we did gloss over, so, I mean, maybe we should talk about that a little bit. So it looks like, in our Kafka client spans, this is where we're attempting to do this distributed tracing.
B
It
looks
like
we
have
woven
the
kafka
producer,
so
this
is
something
putting
data
into
kafka
and
upon
send
we
grab
the
transaction
and
we
create
a
distributed
trace
payload.
So
it
looks
like
this
is
a
new
set
of
headers.
I
knew
I
guess
trace
context
would
be
the
the
right
way
to
say
this
right,
so
we're
creating
a
new
trace
context
and
then
adding
those
as
headers
under
the
new
relic
string
to
the
record,
that's
being
put
into
kafka,
okay
and
so
on
the
receiving
side.
A
One for the future: what we would like to do, ideally, is to use W3C Trace Context everywhere, but clearly this is not a W3C trace header. This is the conventional way that New Relic did it. So, Tyler, you were suggesting we look at the receiving side; I'm not sure where that would be.
D
We weave... yeah, so how does the weaver know that, yes, I do have the org.apache kafka-clients producer, KafkaProducer, the instrumentation?
E
After this, I would also be curious to see how the testing of instrumentation works.
B
Yeah,
okay,
cool,
so
the
there's
a
lot
of
code
in
the
weaver,
and
so
you
know
this
would
require
us
to
maybe
dive
down
a
little
bit.
But
it's
gonna.
B
Okay, cool. Let's look at the thing Tyler was talking about. I might have an example that I could show.
B
And so, through some additional Gradle plugins, and those plugins we have also open sourced, they're kind of... I can't see them being terribly useful outside of the New Relic agent, but they will go and grab all of the versions that claim to be supported by a given piece of instrumentation.
B
Cool
yeah,
so
it
went
by
very
quick
quickly
once
it
got
to
that
point.
But
you
can
see
it
there's
like
a
list
of
versions
that
kind
of
popped
up
altogether
there
and
it
verified
that
they
all
still
wove
and
if
we
change,
if
we
went
and
changed
the
signature
on
one
of
those,
if
we
changed
our
instrumentation
signature
and
or
if
the
dependency
upstream
changes
in
a
breaking
way,
that
would
also
fail
verification.
And
then
we
can
know,
I
think,
we're
running
those
nightly.
B
Load with instrumenting class loader... I don't know this code very well at all, but we do have a way of doing it, and in this particular example it looks like we're doing some verifications that, when an error happened, we got some unscoped metrics about... it looks like we're asserting zero. So it looks like, in the case of an error, we didn't get these metrics incrementing. That's what this looks like it's trying to verify.
B
Oh yeah, we could talk about Infinite Tracing. So this was originally built as a separate library, but we ended up incorporating it as part of our open sourcing. Infinite Tracing is a kind of more recent New Relic offering that is kind of analogous to the OpenTelemetry collector. So we have another piece of software that sits outside of the JVM.
B
The
jvm
talks
grpc2
and
sends
spam
data
over,
and
these
are
non-sampled
spans
with
the
intention
of
being
able
to
to
apply
a
more
sophisticated
sampling
algorithm
outside
of
the
jvm.
If
I've
done
an
okay
job
of
describing
what
that
is
probably.
C
I was just going to mention, going back to the question of, like, how do we track what should be instrumented and all that: looking at the InstrumentationContextManager is probably a good place to start for that. It does a lot of the heavy lifting of keeping track of all that.
B
Okay, I'm gonna very gracefully stop sharing now. Thanks, everyone. Thank you, that was a good tour. The whirlwind tour.
D
Okay, let me share my screen now to take a look at our agenda. So, I don't remember why I put that here, for not supporting Java 7, but there is a pull request now opened by Trask which tries to record our decision why we have decided to, or we are deciding to, drop the support for Java 7 in our instrumentation.
D
We
are
still
going
to
support
some
specific
selected
libraries
for
like
manual
instrumentation,
mainly
for
android,
like
http,
and
maybe
some
some
others,
but
the
current
goal,
and
the
idea
is
that
the
majority
like
90
plus
percent
of
instrumentation
repo
and
all
of
our
instrumentation
modules
will
be
java.
8
plus
there
is
some
interesting
comments
from
other
vendors.
D
Jumping back: we have closed two P1 issues. One was added... I don't know what Trask meant by that, but that one has vanished.
D
Okay. Then, among the more interesting things that happened during the last week in the instrumentation repo: we have a lot of pull requests merged from a lot of contributors. That was a very good week, so please, if some of the contributors are here, please continue and please keep up this good work. We're keeping up on semantic attributes; there were quite some changes recently, so we're keeping up. One cool feature was developed in the instrumentation repo.
D
Then, probably you know that we try to... again, we try to actually test our instrumentations not only with the version that we actually write the instrumentation against, like Hibernate 3.5, but we try to actually run our tests against the latest available release.
D
We continue to write docs, so thank you, Munir, for the Spring-related docs; please keep it up. And we have a lot of general code cleanup: we mostly deprecated all decorators, we now have tracers, if somebody knows what's the difference, and we have done a lot of package renames and artifact renames to prepare for GA.
D
So the idea is that we have a nightly build, which builds and populates the Gradle cache with all those test results in the first place, so whenever a developer, either locally or in a pull request, submits a change which touches only one instrumentation, that build should be much, much faster, because it has to retest only one module, even if it's, like, a clean build.
D
Like, in pull requests that almost works, but we're trying to do better, and as an experiment we started using that tool called "ci made", the idea of which is... that's probably a bad example, that's a bad example, give me a good example... which aggregates, in a nice, well, almost nice, view all the tests that were run during that particular pull request. That's probably not a very good example. But if we migrate from GitHub Actions... GitHub Actions doesn't have a nice test report.
A
Nikita, do you know if Trask and Anuraag are planning on having their 6 p.m. Pacific time meeting today?
A
So if you're interested in meeting your colleague, it's a good time to do it. It's not a good time to do it; it's a time to do it.
B
Is it the same Zoom meeting, or is there a different one at 6 p.m.?