From YouTube: 2020-05-14 Java Auto-instrumentation SIG
B
D
C
We're a company called LightStep, which you may have heard of. — I have.
D
D
Carlos, something like that; he lives in Austin. Although during the pandemic he is camping out in his Cabo house, I believe.
E
B
D
It's mostly just placing some thoughts in people's heads rather than anything else. One: right now it looks like the instrumentation only provides the name of the tracer, not a version, and it would be cool if we also included the version of the agent itself in that. That's great for doing support: when those spans and metrics come in, you know what version of the agent they're using without having to ask the customer.
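The idea above — having the agent report its own version on telemetry — can be sketched with a small helper that reads the agent jar's manifest. This is an illustrative assumption, not the project's actual code: the class name, the fallback value, and the anchor-class approach are all hypothetical.

```java
// Hypothetical sketch: an agent discovering its own version at runtime so it
// can be attached to exported spans/metrics (e.g. as the tracer's
// instrumentation version). Not the project's real code.
final class AgentVersion {
    private AgentVersion() {}

    /**
     * Reads Implementation-Version from the manifest of the jar containing
     * {@code anchor}, or returns "unknown" when running from unpackaged
     * classes (IDE, test runner) or a jar without that manifest attribute.
     */
    public static String read(Class<?> anchor) {
        Package p = anchor.getPackage();
        String v = (p == null) ? null : p.getImplementationVersion();
        return v != null ? v : "unknown";
    }
}
```

The resulting string could then be passed along when obtaining a tracer, so support staff can see the agent version on incoming spans without asking the customer.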
A
D
And my other item was just that we should start thinking about using the richer OpenTelemetry API features, like events on spans and all sorts of stuff like that, in the instrumentation. I don't know that there are any semantic conventions built for that, but OTel has a richer API than I think Datadog had, so it would be nice to start thinking about it.
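The "events on spans" feature mentioned here attaches timestamped, attributed events to an in-flight span instead of opening child spans. The sketch below is a plain-Java stand-in that mirrors the general shape of OpenTelemetry's `Span.addEvent(name, attributes)` call; it is NOT the real `io.opentelemetry` API, and all names in it are illustrative.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Stand-in for the richer API surface discussed above: a span that can carry
// timestamped events. Mirrors the shape of OpenTelemetry's
// Span.addEvent(name, attributes) but is only a sketch of the concept.
final class SketchSpan {
    record Event(String name, Map<String, String> attributes, Instant time) {}

    private final String name;
    private final List<Event> events = new ArrayList<>();

    SketchSpan(String name) { this.name = name; }

    // Instrumentation (e.g. an RPC client) could record mid-span milestones
    // such as "message.received" instead of creating extra child spans.
    void addEvent(String eventName, Map<String, String> attributes) {
        events.add(new Event(eventName, attributes, Instant.now()));
    }

    List<Event> events() { return events; }
}
```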
B
B
B
Yeah, and if you could be specific about what features, and where we could use them — like RPC? Yep.
D
A
Can I briefly make that about enablement? Currently — if you have looked into the GitHub issues — we have like one, two, three, four, five, six instrumentations disabled by default. Those are... I thought I was sharing my screen. So those are like servlet, Grizzly, JDBC DataSource, the servlet filter, and a couple of others.
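The disabled-by-default behavior being discussed is driven by configuration flags. A minimal sketch of a per-instrumentation on/off switch read from system properties follows; the `sketch.instrumentation.<name>.enabled` key format is an assumption for illustration, not the real agent's property names.

```java
// Minimal sketch of a per-instrumentation enable/disable switch read from
// system properties. The property key format is hypothetical -- the real
// agents define their own configuration names.
final class InstrumentationFlags {
    private InstrumentationFlags() {}

    /**
     * Returns whether instrumentation {@code name} is enabled, falling back
     * to {@code enabledByDefault} when no property is set. This is how
     * modules like Grizzly or the servlet filter can ship off-by-default
     * yet remain opt-in for users.
     */
    public static boolean isEnabled(String name, boolean enabledByDefault) {
        String value = System.getProperty("sketch.instrumentation." + name + ".enabled");
        if (value == null || value.isEmpty()) {
            return enabledByDefault;
        }
        return Boolean.parseBoolean(value);
    }
}
```

An off-by-default module stays dormant until a user opts in, e.g. with `-Dsketch.instrumentation.grizzly.enabled=true` on the command line.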
A
B
Okay, let's delay this conversation and see if Tyler shows up today, because he would have the most insight into why they are disabled. From previous conversations, he said that they'll sometimes mark instrumentations disabled by default when they first introduce them, for a few releases — let people opt into them — and then they'll switch them to on by default. But I don't think that's really the case here, as some of these have been around for a long time, so I think they may be off for other reasons.
A
F
F
So other things: Spark Java required Jetty, and Jetty was conflicting with our servlet instrumentation — we're trying to work through that — so it probably doesn't necessarily apply to this project. — Things like Jetty... can you rewind? I didn't quite catch the Spark part. — So, Spark Java requires Jetty, and Jetty was disabled by default because it conflicted with the servlet instrumentation. That is Datadog-specific, so things like Grizzly and Jetty can probably be enabled by default here.
F
I think that is one of those that we had disabled by default but never went back to enable, so that one can probably be re-evaluated. It might be on the noisy side — I don't remember. I put it in in the same time frame as the servlet filter, and we got complaints around the servlet filter, so it kind of got lumped in: disable those for now. Okay.
A
F
Okay, so it's basically generating some extra internal spans? — Yeah. We didn't have it at first, and it generated additional spans that people weren't necessarily expecting. So you could probably enable it in this project and it would be fine, but that's the context. — Okay, thank you both. Just to be clear, I added those instrumentations not really because of specific customer demand, but because they seemed useful to have.
F
F
We give importance to the top-level span name — the `servlet.request` kind of thing, which you guys don't even have anymore, I believe. Because customers are relying on `servlet.request` being at the top level, if we add things like Grizzly or Jetty, that changes things, which causes conflicts.
B
Okay, and Grizzly is the same situation as Jetty? Okay, I got it — Jetty.
F
D
F
D
F
A
Well, I just think that my specific interest was exactly about Jackie, and it just happened that you created two tasks about that enabled instrumentation, so I just put one on the agenda — cool. And then the next two things: yeah, I see that we will only have time today for one of them, so probably the publishing one should take precedence. All right.
A
So that essentially contains two separate things. One is my proposal for the release process: when I took a look at that in the OpenTelemetry Java repo, there were so many manual steps — do this, then that, then that — and for me it was very confusing. So I have switched things around a little bit, and my current proposal is that we can take the Nebula plugin from Netflix, which simplifies the process a bit.
A
F
So I think that sounds really similar to what we've instituted recently in the Datadog repo, and I noticed, Trask, that you didn't bring that over, so I will take a look at that as well — maybe that's intentional. I don't remember if we're using the Nebula plugin; I think we might be using something else.
F
A
A
How do we deal with credentials? Exactly the same as in OpenTelemetry Java currently, which means, I assume, there are some environment variables in CI — correct. So that doesn't change. Anyhow, we still use the Bintray plugin, still use the Artifactory plugin, to actually publish things. The question is how you handle versions and the next version, how you handle tags and branches, and whether we need maintenance branches. So that's a separate question.
B
G
A
G
B
G
Here again, you see — actually both of them are personal. Yes, but for Bintray I am in, like, the LightStep organization, so I need to add you guys for a start.
B
D
A
F
The point is — and this is the way I like to do it at Datadog — you use personal, or robot, credentials to upload to Bintray, and that's okay, because within Bintray it's much easier to remove stuff if there's a bad build. And then we could potentially use individual credentials to initiate the synchronization from Bintray to Sonatype.
A
G
A
Currently it has you and Bogdan — it allows only two members. That's great, I see. Okay, so that's one part: how we manage versioning and release. And the second part is: what do we publish and how do we package it? That's the interesting part, and I believe for that we need this issue.
A
I mean, I have tried that locally just to see what the result would be, but this pull request doesn't do that; it just publishes, as usual, our shaded jars — is that correct? So I think I will give the screen back to you and we can take a look at the documentation and the task. I very much like what Steve proposed: that by default —
A
Whenever we publish the full-blown artifact — with these exporters and propagators and whatnot — it should have the OpenTelemetry stuff, so it should work by default: you download one jar and you have the OpenTelemetry exporter, you have the OpenTelemetry propagator, and the same with metrics and logs when they appear.
A
B
A
B
B
C
One of the key principles, as I understand the design here, is that the OpenTelemetry Collector is the multi-protocol in-and-out switching post, and that we should probably have a handful of choices, but we don't need to go whole hog and support everything in the world, because translating is the Collector's job — or one of the Collector's jobs.
C
I'm saying I can understand OTLP and Zipkin, or just OTLP — and certainly logging; logging is a very useful one for debugging, among other things — but I don't think we need to imagine a world where there are, like, seven of these things, because supporting seven protocols is the Collector's job.
A
C
Absolutely — still keep the ability to create your own jar and have that be fully supported, like we have it now, right. Yeah, I'm just looking for ways that we can make the standard out-of-the-box experience easier, as it says here. So, like Trask says, if what we have now and what we anticipate adding in the next few releases doesn't expand all that much, then it's probably fine to just include it all.
A
One point here is: if we have only the OTLP exporter inside, do we somehow motivate people to use the OpenTelemetry Collector? Is that a good thing? Think about it — I don't know; if we bundle them all, it's easier for people to just not use the Collector. So yeah, that's an interesting discussion: what do we want?
B
A
B
A
D
A
A
C
B
B
D
G
Well, I was thinking that if we could actually package it, probably each vendor could just absorb this thing and then be able to provide, you know — I imagine LightStep wanting to do that. Yeah, but it would probably be not so much for the final user, more for vendors. — You mean like...
G
D
F
C
A
B
So yeah, I can give some more context on this. The proposal to publish all the artifacts separately — in addition, obviously, to the bundle — would just be for vendors. What I'm doing currently is I just `maven install` everything locally, because I'm picking and choosing which instrumentation I want to expose and support for our customers, and also doing a lot more heavy customization of the configuration story to integrate better with our environment, plus some backwards-compatibility stuff as well.
B
F
F
Well, because then you're shipping extra classes that you have disabled, that you don't want customers using. Okay, and the other side of that is: if I have a Datadog-specific instrumentation that replaces something else, and I want to maintain that separately from the public shared version, then excluding that dependency and using my own is a lot easier. Oh, just —
F
So, just to be clear, what we promised — I would say promised — what we agreed to with the general OpenTelemetry community is that we will eventually get to the point where we're publishing library versions of our instrumentation. So we're already going to need to publish individual instrumentation artifacts.
D
B
A
A
Probably also — not a very big deal — one more issue: we will then have to apply that publishing logic to many more Gradle projects, and that will probably make every Gradle build slower, even when not publishing, because we will have to apply extra plugins to 50 more submodules.
A
B
C
My two cents here: if we have members that actively want to use this idea of individual artifacts, then we should go that way. We shouldn't do it just because — just because I want to have a single jar, and that's it, doesn't mean... I mean, if nobody actually has plans to use this and it's theoretical, then there's no point.
F
I'm using it already, right. So yeah, let me give another example of where we would use it within Datadog. You know, we're the original — the upstream — and so it's going to be very difficult for us to just do one big switch-over. So I feel like most likely the best course of action for us is to release something with maybe one or two of the instrumentations migrated over.
F
A
C
B
A
B
A
A
B
Well, we'll want to take a harder look and review the naming across all those modules before we switch them to Maven Central, just so we're not changing them afterwards. Is this something — so we have the zero...
G
A
F
Originally, in the Datadog fork, there is a property that the publishing plugin supports — a "publish to local Artifactory" kind of thing. I don't remember the exact string, but if you set that to true, then you can run the Gradle scripts for publishing and it'll publish to a local Artifactory instance, so you can test the process that way. Yeah.
D
Just before we move on to the next item, I wanted to throw out an idea for the far-flung future: it would be really cool if we had a website like the Spring Boot starter website, where you could select what instrumentation you wanted and what exporter you wanted, and have it crank out a bundled custom agent for you.
G
B
D
C
I should be able to do it pretty quickly here — oh sorry, let me share my screen.
C
So I started playing with the idea of using the dynamic instrumentation to create — the idea is that a customer could create statically instrumented client libraries, for example, that they could then import themselves as part of their build script. They wouldn't have the actual dynamic behavior of the agent, they wouldn't have to use a Java agent, and there wouldn't be any bytecode modification at runtime, but you would still get all the benefits of everything we do without a lot of extra work on our side. So what I came up with is this.
C
The actual change to the core agent is very, very small so far. There's a bit we need to talk about in terms of where we want to go with the design, but the basic idea is: taking advantage of the fact that the class-file transformer mechanism exists, and registering one instrumenter before the actual agent's instrumenter and one after, we're able to capture the actual bytecode changes that happened. So again, that doesn't involve changing the core agent in any way, shape or form.
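The capture step described here can be sketched with the JDK's `java.lang.instrument.ClassFileTransformer`: a transformer registered after the agent's own transformer sees the already-modified bytes and can write them out. This is a hedged sketch — the class name and output layout are assumptions, not the prototype's actual code.

```java
import java.lang.instrument.ClassFileTransformer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.ProtectionDomain;

// Sketch of the "capture" idea: a ClassFileTransformer registered AFTER the
// real agent's transformer receives the bytes the agent already modified and
// dumps them to an output directory. Names here are illustrative assumptions.
final class CapturingTransformer implements ClassFileTransformer {
    private final Path outputDir;

    CapturingTransformer(Path outputDir) { this.outputDir = outputDir; }

    @Override
    public byte[] transform(ClassLoader loader, String internalName,
                            Class<?> classBeingRedefined,
                            ProtectionDomain protectionDomain,
                            byte[] classfileBuffer) {
        try {
            // internalName is slash-separated, e.g. "com/example/Foo".
            Path target = outputDir.resolve(internalName + ".class");
            Files.createDirectories(target.getParent());
            Files.write(target, classfileBuffer);
        } catch (Exception e) {
            // Never fail class loading because of the capture step.
        }
        return null; // null = "no further modification", per the JDK contract
    }
}
```

Registered via `Instrumentation.addTransformer` after the agent installs its own transformers, this records the post-instrumentation form of every class as it loads.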
C
The idea is that there would be a part of the build process where you'd run a command — `java -javaagent:... -cp <the full classpath>`, with everything it takes to instrument, for example, the MySQL driver or whatever, and all its dependencies — and then a main program that I've created that basically acts like the app and says: okay, I've got my classpath, I've got my Java agent.
C
Let's see if I can find the main here... right. Just iterate the classpath — I'm ignoring modules for the second; it's a very hacky prototype just to try the idea out — iterate the classpath, load every jar, load every item in every jar, and if any of them get transformed, write a new jar to a specified output directory that contains the instrumented code. I was able to use this — my test example was the MySQL driver and its dependencies — and it works.
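The "write a new jar with the instrumented code" step above can be sketched with the stdlib jar APIs: copy every entry of the original jar, substituting the captured bytes where the agent transformed a class. The class name, method name, and the map-key format (`"com/example/Foo.class"`) are illustrative assumptions.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Enumeration;
import java.util.Map;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

// Sketch: rewrite a jar, replacing entries whose classes the agent
// transformed with the captured (instrumented) bytes. Names are hypothetical.
final class JarRewriter {
    private JarRewriter() {}

    static void rewrite(Path inputJar, Path outputJar,
                        Map<String, byte[]> transformedClasses) throws Exception {
        try (JarFile in = new JarFile(inputJar.toFile());
             JarOutputStream out = new JarOutputStream(Files.newOutputStream(outputJar))) {
            Enumeration<JarEntry> entries = in.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                out.putNextEntry(new JarEntry(entry.getName()));
                byte[] replacement = transformedClasses.get(entry.getName());
                if (replacement != null) {
                    out.write(replacement);          // instrumented bytes
                } else if (!entry.isDirectory()) {
                    try (InputStream is = in.getInputStream(entry)) {
                        is.transferTo(out);          // unchanged entry, copied as-is
                    }
                }
                out.closeEntry();
            }
        }
    }
}
```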
C
So as long as you can configure the classpath correctly, so that it's all there and everything can resolve, you can run it through the actual existing Java agent, and this little main class here captures and writes out the new, modified bytes. But the wrinkle that I ran into is what to do with the helper classes that those injected instructions actually work with. So I've prepared a —
C
There are little changes in blue, and there in red. For example, if there's a FooDecorator for the foo-client instrumentation that references code inside foo-client.jar, it sort of makes sense to keep it in foo-client.jar, so that it's in the same class loader — wherever the jar goes, the helper is there with it, right. But then what do you do with BaseDecorator, particularly on Java 9 and beyond, with the package checks and split packages across modules?
C
If we have BaseDecorator in two separate jars, it's going to cause problems, right. So I've got two ideas that I think are actually viable. One involves some sort of audit to understand which helper classes — and the associated helper code that they call — are dependent on the actual instrumented code, and which are dependency-free and rely just on, for example, the OpenTelemetry APIs, core stuff, and utility code that's in the agent.
C
C
So one approach is kind of like this: some of this stuff goes on the system classpath, shared, and some of it — the stuff that actually has type references to the instrumented code — just gets dropped inside the rewritten jar files. That's one approach. Another approach is to just shade it: take everything you need and shade it, so there's no duplication of names, and then each jar kind of has its own copies.
C
Each one is its own separate little world that can exist independently. Those are my two thoughts, and I just kind of want to talk it through before trying to dive into actually making one of these happen as the next experiment. Which one of these seems better? It sounded like Tyler already had an opinion.
F
C
C
B
Okay, and one area where this was particularly interesting to me was Android — having some kind of auto-instrumentation story for the Android space. But I am also curious, Tyler, from your perspective: have you run into customers who don't want to use the Java agent, and for those, do you think this would address or alleviate that concern?
F
C
G
That's correct — that spurred me to experiment with this. Yeah, I would like to play with these. If I remember correctly, the problem was that they were also afraid of the overhead, you know, that the agent would take all the time, so they would just prefer to use two or three instrumentation pieces, and that's it.
C
The same technique could work, right — you have to write a second one; you have to say: instead of this one, use my instrumented copy. An earlier version of Wily's product actually worked this way — static instrumentation: just make a copy of your app in a separate directory, including the JDK separately if you need to, you know, a separate WebLogic app server. That was an early version of Wily's product, okay.
C
That's why they created a dynamic agent, and it was based on the idea: hey, if the class loader is the only thing we instrument, then from there on we can do everything dynamically, right. Anyway, so yeah, that's the idea. I think there are a lot of interesting directions we can go in, but I was just looking for advice on the next level of the experiment: whether to pursue a shading approach or this sort of splitting-up-the-helpers approach.
C
C
Yeah, I'm mostly focused on the mechanics in the code of how I'm going to get this to happen, because I don't actually entirely know yet. I'm more just interested in, based on what's there already, what's going to fight me or not on this design choice. Again, I don't think this is where I want to go — right, detect if —
C
Well, oh yeah, we could skip it, leave it uninstrumented, or we could... again, I'm just experimenting here. I'm not actually at the final form of the packaging — what it would physically look like for a real environment. Obviously we want to make it clean and wrap it up in Gradle wrappers and that sort of thing. I'm just trying to get to the core of it, and I'm trying to keep it as light a layer around the existing Java agent as possible.