From YouTube: 2022-06-03 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A
B
Yeah, when I was a kid there was this book. My dad's name is John also; there's a book that he had when he was a kid called "A Boy Named John," which was basically all the variants of John from around the world, and Jack was the British version of John, or so the book claimed. I'll believe it.
C
Good, I just got everybody excited that we have a library author showing up to the meeting.
F
Yeah, I guess I can quickly highlight what Apache NiFi is. Basically, it's an easy way to move data.
C
Cool, so does Apache NiFi have any existing tracing integrations?
F
It has something called provenance, which is similar to tracing, but the problem with it is that it's all isolated to that individual system. The concept of the trace id, and I guess the context to propagate, would be included in what they call flow file attributes, which in a general sense you can think of as just HTTP headers. They don't have the concept where, when they egress data, they wrap that information.
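The idea of treating flow file attributes like HTTP headers for context propagation can be sketched in plain Java. This is an illustrative sketch only: `FlowFileTraceContext`, the attribute key, and the helper methods are hypothetical, not real NiFi or OpenTelemetry APIs, though the value format follows the W3C `traceparent` header.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: carrying W3C trace context on flow file
// attributes, the way an HTTP client would carry it in headers.
public class FlowFileTraceContext {
    // Attribute key mirroring the W3C HTTP header name.
    static final String TRACEPARENT_ATTR = "traceparent";

    // On egress, wrap the context into the attribute map.
    public static Map<String, String> inject(Map<String, String> attributes,
                                             String traceId, String spanId) {
        Map<String, String> out = new HashMap<>(attributes);
        // version 00, sampled flag 01
        out.put(TRACEPARENT_ATTR, "00-" + traceId + "-" + spanId + "-01");
        return out;
    }

    // On ingress, pull the trace id back out (null if none was sent).
    public static String extractTraceId(Map<String, String> attributes) {
        String tp = attributes.get(TRACEPARENT_ATTR);
        if (tp == null) return null;
        String[] parts = tp.split("-");
        return parts.length == 4 ? parts[1] : null;
    }
}
```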
C
Okay, so this would be like: if you could propagate that trace id across, you could visualize flows of the data across the systems. Is that sort of the goal?
F
Yep, that's pretty much it, and it can be very chatty, because flows can have n number of steps, and it can also include batching. A simple example: let's say 10 files come in, and all 10 of them have distinct trace ids, but then you might want to zip those files up. So now you effectively have a potentially new trace id, or whatever the spec defines for linkage, and then that new file will have to propagate that information downstream.
C
F
I think at the simplest form, yes, it's very similar to messaging; there are just additional steps. Meaning, a traditional messaging system isn't going to batch many messages and put them into a new message, where that new message might then get broken up into many more messages, and then another downstream system takes a percentage of those messages and creates new batches. So there are many different linkages, I guess, that have to be taken into account, but it's not a...
F
C
Have you looked at the messaging conventions? The only reason I was thinking of messaging was this sort of batching piece and, like you say, combining multiple traces. The way the messaging conventions work is that when you have a batch, the processing of that batch actually starts a new trace, which then creates links to the incoming traces.
C
And then that's your new trace, and that single trace would get propagated downstream, so that you're not accumulating more and more traces and trying to propagate more and more things downstream. You're always only propagating one trace id downstream.
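That batch-and-link pattern could be modeled roughly like this. A minimal sketch under the convention described above; `BatchLinking` and `BatchTrace` are illustrative names, not the real OpenTelemetry API, which represents linkage as span links rather than plain strings.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Illustrative model of the messaging convention: a batch step starts
// a *new* trace and records links back to each incoming trace, so only
// one trace id is ever propagated downstream.
public class BatchLinking {
    public static class BatchTrace {
        public final String traceId;     // the single id propagated downstream
        public final List<String> links; // links back to the incoming traces
        BatchTrace(String traceId, List<String> links) {
            this.traceId = traceId;
            this.links = links;
        }
    }

    // e.g. 10 files with distinct trace ids get zipped into one batch
    public static BatchTrace startBatch(List<String> incomingTraceIds) {
        String newTraceId = UUID.randomUUID().toString().replace("-", "");
        return new BatchTrace(newTraceId, new ArrayList<>(incomingTraceIds));
    }
}
```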
C
Yeah, so definitely check out... oh, I always come to the wrong place.
C
And so it's still in experimental status, and you actually might be interested in following some of the ongoing messaging work.
C
Yeah, there's work going on around stabilizing the messaging conventions, and so there will probably be changes. But if you're interested in that, there is also, I think it's still going on, a weekly meeting on Thursday mornings specifically about messaging instrumentation; this is cross-language.
C
If you have questions about modeling that... but yeah, have you looked at the OpenTelemetry API already?
F
Yeah, I'm somewhat familiar with OpenTelemetry, and I'm really trying to do data gathering to see if it makes more sense to use something like the auto-instrumentation, or whether we would commit to Apache NiFi and basically use the Java instrumentation directly, so that it's more optimized for NiFi itself rather than the auto-instrumentation. Just because we have played with the auto-instrumentation on Apache NiFi starting up, and it certainly bogs things down; I know I can disable components, and I haven't done that yet.
C
That's while it looks at all the bytecode, all the classes being loaded, and instruments those. There shouldn't be additional runtime overhead versus manual instrumentation, because it basically injects the same types of calls that you would do manually.
F
Okay, and since it does bytecode injection, have you had any concerns with security-related issues, or anyone who doesn't want to run the auto-instrumentation because of security constraints?
C
We definitely have some people who don't want to do bytecode instrumentation, or who don't want to use a Java agent, or can't: in some environments, like managed runtimes, you can't use a Java agent.
C
So we definitely support both approaches.
C
We still have a lot more auto-instrumentation at this point, but our library instrumentation list is growing. We kind of call these sort-of auto-instrumentation, in the sense that you make a couple of calls: you just add an interceptor, or you get a proxy or something. That's just a few lines of code, but it does not use the Java agent or any bytecode instrumentation.
A
A
Okay, and is the goal, from an instrumentation standpoint, to be able to... do you want all the components (did you call them receivers, processors, and sinks, or sources, processors, and sinks?) to have options for instrumentation, or to use some common convention for the data that's collected as data flows through those components?
F
So, ideally, some common convention, since effectively the processors extend a base class. Ideally we offer them the ability to enable it per processor, and then also sampling based on certain attributes, so that for certain, let's say, data types we could increase or decrease sampling based on expected volumes.
A
F
Yeah, absolutely, and it definitely has a lot of those tools, the libraries that you guys already support, and even out of the box those things were working. It's just that there are certain caveats: the trace context, you know, the trace id, was not propagated as data moved between each processor. So little things like that, and the agent...
F
I guess the concern I have with the agent is the performance of the startup cost, because the runtime is not necessarily going to be much different between implementing it in Apache NiFi's baseline versus using the auto-instrumentation. For us, we're also looking to make it easy for people to use; granted, it would be easier for us to say "run this agent" versus "upgrade your cluster to whatever version OpenTelemetry gets embedded into."
A
Have you done any sort of analysis? There are all these different components or plug-ins in the NiFi ecosystem; some of those will probably have some coverage by the instrumentation that's available in the agent or from library instrumentation, and some will not. Are you trying to have full coverage of all the plugins? And if so, do you have any sort of idea about what type of coverage you have today?
F
We don't have an idea of the coverage we have today. Our goal, well, it's really two goals. One is to be able to track data as it enters a system.
F
We don't necessarily care about the level of visibility per se, so we don't need to know each step in the flow, but at the same time we don't want to limit people who want that level of visibility, because there are the systems that care about their own, you know, SLAs, and then there's the overall big picture of how the architecture looks. We're trying to require a minimum level of information, I guess, from each system, and that's just each system adhering to emitting certain types of information.
A
Metrics are interesting in another way, because presumably you have some sort of hooks that can be called before and after data flows in and out of each of these components, and you could use the metrics API to collect standard information. For example, you could time how long it takes a piece of data to flow in and out, and you could produce histograms that describe the distribution of how long it takes, that type of thing.
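A minimal sketch of that histogram idea, with hand-rolled buckets rather than the real OpenTelemetry metrics API's histogram instrument; the class name and bucket boundaries here are arbitrary assumptions.

```java
// Illustrative: time data moving through a component and bucket the
// observed durations. A real setup would record into an OpenTelemetry
// histogram instrument instead of counting by hand.
public class ComponentTimer {
    // Upper bounds (ms) for each bucket; the last bucket is unbounded.
    static final long[] BOUNDARIES_MS = {5, 10, 50, 100, 500};
    final long[] bucketCounts = new long[BOUNDARIES_MS.length + 1];

    // Record one observed duration into the first bucket whose upper
    // bound is >= the duration.
    public void record(long durationMs) {
        int i = 0;
        while (i < BOUNDARIES_MS.length && durationMs > BOUNDARIES_MS[i]) i++;
        bucketCounts[i]++;
    }

    public long countInBucket(int index) {
        return bucketCounts[index];
    }
}
```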
A
So that might be an easy way to get some sort of telemetry emitted about all the components without needing specific trace instrumentation for them.
F
Yeah, I think NiFi itself has a Prometheus exporter, but I guess the main driver behind tracing for us is just being able to track data as it flows through n number of systems. Then we could say, "system X, you're not meeting an SLA, and you need to take care of it." That way we have visibility into that, and it can kind of guide discussions around investments, I guess, is one way to put it.
A
Just one piece of feedback there: unless you have tracing set up to sample 100% all the time, you'll be missing data that, I think, is required to determine whether components are meeting some SLA, because you don't know which specific spans are dropped. So you won't be able to tell with certainty that a particular system is meeting their SLA.
F
Yeah, absolutely, and we have insight into which data types have certain volumes, so we have an idea of what we want to sample starting out, because we definitely won't be able to get a hundred percent. And we realize tracing is best-effort and definitely not treated as auditing. But no, we're planning, if we get it implemented, to sample dynamically, because we understand that it's sampling at the collector part, but then the code is, you know...
C
F
Okay, and do you think, because performance, I guess, is important, and obviously it's all just a guessing game at this point, but does it make more sense to focus on implementing tracing into the baseline for Apache NiFi, so that we can optimize it around performance? Or do you think optimizing around performance wouldn't be an issue for the agent, or just the Java instrumentation itself?
C
Well, the nice thing, back to John's point about manual instrumentation versus auto-instrumentation, is that the auto-instrumentation interops with manual instrumentation, and a lot of the time the way we even do auto-instrumentation is:
C
We take the library instrumentation, the manual instrumentation, and we just auto-inject it at runtime. So starting with just manual instrumentation will get you going probably faster and more easily, and it is not throwaway work if you then decide to do auto-instrumentation; you can potentially do both.
B
Yeah, I think the key point is that there needs to be some very, very early point in the lifecycle of the application where the SDK gets set up, and that can either be done by the agent or it can be done manually. But if you make sure that you have that point where the SDK is initialized, whether it be just pulling it from the global that the agent sets or doing it yourself, then the instrumentation is just the instrumentation.
B
It
just
does
whatever
it
does,
and
it'll
work
either
way.
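A rough sketch of that setup pattern, with hypothetical names standing in for the global holder and the SDK (not the real GlobalOpenTelemetry API): the instrumentation only calls `get()` and does not care whether an agent or manual startup code populated the global.

```java
// Illustrative: instrumentation depends only on an interface; the
// instance can come from a global that an agent set very early in the
// application lifecycle, or be initialized manually. All names here
// are stand-ins, not real OpenTelemetry classes.
public class TelemetryBootstrap {
    public interface Telemetry { String traceIdFor(String operation); }

    // The global slot an agent would populate at startup.
    private static Telemetry global;

    public static void setGlobal(Telemetry t) { global = t; }

    // Instrumentation calls this; it works either way.
    public static Telemetry get() {
        if (global == null) {
            // Manual fallback: initialize a trivial default ourselves.
            global = op -> "manual-" + op;
        }
        return global;
    }
}
```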
C
And if you look at some of our agent instrumentation, it actually pulls things in: like, Couchbase has their own OpenTelemetry instrumentation, and so the auto-instrumentation actually just pulls in their artifact, basically, and injects that at runtime; similar for the Azure SDKs.
C
And then gRPC and OkHttp are great examples to look at, because they have both library and Java agent instrumentation, or any of the ones that have both.
F
Okay, I guess a good spot to look at would be your Spring instrumentation, since Apache NiFi is basically sitting on top of Spring, because I'm not aware of the hooks, I guess, that I would have to look at. It wouldn't be me doing the development, but is there... I presume there's an easy example of the hooks that people have to look into.
C
C
C
But are you servlet based? Are you Spring Web based, or are you just using Spring for, like, injection and data libraries? It's servlet based? It is, okay.
C
So that is a follow-up place where we are missing something: we actually do not have servlet library instrumentation today, which is unfortunate, though it would not be terribly hard to write.
C
That's one of the first hurdles if you're not going down the Java agent route. But you could initially go down the Java agent route, and then write any of your own instrumentation manually, because the agent will automatically bridge that manual instrumentation into the Java agent.
F
Yeah, that sounds like it might be the better option, since standing it up did actually do some tracing; it was just missing the connection pieces, and I feel like if Apache NiFi itself took care of that...
B
Yeah, to repeat what you're saying, it sounds like context propagation internally inside NiFi is probably the key to getting most of it working, which, of course, is also the most difficult part to get working. Context propagation is definitely the trickiest bit of instrumentation.
F
Yeah, have you run into any testing with that, like high volumes? I guess, well, is your code thread safe? And have you run into any testing where you make sure that there's no, I guess, worry of data crossing over, if that makes sense, like your context is, I guess, isolated?
D
B
C
Was going to have leaks, yeah. I thought we allow-listed instrumentation.
C
Java... is this not... oh, thank you for the link.
B
C
No, no, the Netty one, there's...
C
Reactor... I would take... I'm glad... the Reactor ones, well, I don't know if we...
C
But yes, it has been a problem. Historically, we used to have way more failures.
C
Yeah, we've been a little short-staffed lately, with Mateusz out and Anuraag out, and Nikita is out.
C
Cool, so, anything else that you wanted to check in about, Brian? Certainly, especially now that we have good context for what you're trying to do, feel free to either join one of the SIG meetings, or, you know, ping us on Slack or in issues anytime.
F
Yeah, thank you for your time, and I do appreciate everything you guys do.
E
D
I just suggested that we don't even have the is-enabled flag for now, because it's not the end of the world if the SDK is not exporting, and at least we don't have to pick the public API. If there's demand, we can reconsider later, but we don't have to do it in this first version, I think.
A
We have this new property that we're adding, which enables or disables the SDK, but we don't want to change the ergonomics of AutoConfiguredOpenTelemetrySdk. The original idea that I had was to make the getOpenTelemetrySdk result of AutoConfiguredOpenTelemetrySdk nullable, so that if the SDK was disabled you would get a null response, and it would be up to you to...
A
But, after talking with Anuraag, we agreed that making that method nullable to accommodate an experimental property is just too much of a hit to the ergonomics, and so I suggested this alternative route where, instead of making that method nullable, there's an additional method.
A
That's
you
know
boolean,
that
the
auto
configured
open,
telemetry
sdk
has
as
well,
and
it
is
enabled
like,
and
so
you
know,
if
you're
interested
in
observing,
like
whether
this
property
was
set,
you
can
check,
you
know,
is
auto
configured
open,
telemetry
sdk
is
enabled.
If
so,
then,
then
you
know
get
the
auto
configured
instance,
if
not
then
ignore
it.
But Anuraag's point is that even that might be unnecessary, and so, instead of returning...
A
Instead of adding that method, if the SDK is disabled, we can just return an instance of the SDK that is configured with nothing attached to it. It's almost like a no-op SDK, but it's not quite a no-op SDK, and I guess my only reservation about it is that we kind of have this new version of a no-op SDK now.
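The "configured with nothing attached" alternative could be sketched like this; `SdkHolder` and `Telemetry` are stand-ins, not the real AutoConfiguredOpenTelemetrySdk API.

```java
// Illustrative: instead of a nullable return or an extra isEnabled()
// method, a disabled SDK is represented by an instance configured with
// nothing attached, so callers always get a usable (if inert) object.
public class SdkHolder {
    public interface Telemetry { boolean exports(); }

    static final Telemetry NOTHING_ATTACHED = () -> false; // almost a no-op
    static final Telemetry CONFIGURED = () -> true;        // exporters wired up

    // Mirrors reading the experimental "sdk disabled" property.
    public static Telemetry get(boolean sdkDisabled) {
        return sdkDisabled ? NOTHING_ATTACHED : CONFIGURED;
    }
}
```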
D
D
Never, if that seemed appropriate. But I mean, if someone complains that it's not no-op enough, then we can go from there; I feel as if we don't have to be too proactive on having this for now, because once it's API, it's API. So it's easier to think it through later, I think, rather than right away.
A
I'm inclined to agree, but let's project a bit: suppose this gets added to the specification. What would you imagine we do in that situation?
A
Keep it the same, or, you know, go with one of the alternative solutions?
D
And we can think about it even now: the user could check the property or environment variable and have almost the is-enabled flag without new API. It's not super precise, but at least it's not impossible either, so that's already allowed; and then, if the flag stabilized, we would just try to make that easier. That's one way to think about it.
A
Yeah, I don't mind that solution. I've been meaning to get back to this and just get rid of that method and push up that commit, but I've been distracted.
A
So you're saying, similar to how in metrics we say, if there are no exporters, just return no-op instruments all the time. Yeah, if there are no processors attached, no active processor, then just... I don't know what you would do. Would you say you don't create spans in that situation?
B
D
B
B
Yeah, I guess that's what I mean. What I'm saying is just put in some internal flags that aren't visible, to basically turn off behavior. That should be possible, yes; I don't think it'll be too bad. There's not much... again, you could always fall back to using the API implementation, right? That's true, it wouldn't be that hard.
A
Okay, so a simple, small change to this PR, and we'll get that merged; and then, pending additional future feedback, we have an additional optimization that we can make.
B
D
B
A
A
D
B
A
Yeah, Anuraag's theory, I think, makes a lot of sense: you include autoconfigure on the classpath, and if anything throughout the classpath calls GlobalOpenTelemetry.get, it initializes autoconfiguration, and, you know, then you have to configure three different properties in order to disable the SDK.
B
A
Could be coming in as, like, a transitive dependency. You know, maybe the company has some sort of utility library that sets up OpenTelemetry in a common way across all the apps, exclusively.
B
Yeah
anyway,
I
just
I,
I
just
wonder
how
much
the
use
case
could
could
inform
how
we
actually
would
want
to
implement
it
inside
the
sdk,
if,
if
it
is
needed,
but
just
doing
this
as
a
first
step
seems
like
at
least
it
will
prompt
discussion
and
respect
around
around
the
option.
A
And just one more bit of context: if we implement this in the way that is suggested in the PR, and someone were to set this property and disable the SDK, then GlobalOpenTelemetry would be set to a true no-op, but the instance that's returned by AutoConfiguredOpenTelemetrySdk would not be a true no-op; it'd be, like, you know...
A
A
GlobalOpenTelemetry.get would be a pure no-op, but the response from AutoConfiguredOpenTelemetrySdk, if you're using that somewhere, wouldn't be a true no-op.
A
No,
so
I
global
open
telemetry
can
be
a
true,
a
no
op
and
that's
what
I
do
in
this
pr.
B
B
If someone's using the auto-configured one and then taking that instance and injecting it, yeah, it wouldn't be the true no-op, and if they have a mix and match of how their app is instrumented, sometimes they'll get a no-op and sometimes they won't, right, yeah, which is weird. But I mean, it's what you get for having a weird mishmash, I guess. I mean, they both would have the same behavior; just the performance would be variable, yeah, right.
D
B
B
That annotation question is a good one, because the annotations, I think, originally did live in the instrumentation repo, right, and we moved them into the API.
B
B
B
C
D
C
C
B
Well, the reasoning for putting it in the core repo is that other people want to implement it without necessarily using bytecode injection, but using, I don't know, dynamic proxies or some other magical aspect-oriented approach. So basically they're there so that they can be implemented either by the agent or by some other mechanism.
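The dynamic-proxy route can be sketched with a JDK proxy honoring a hypothetical `@Traced` annotation (a stand-in, not the real annotation); the agent route would achieve the same effect via bytecode injection instead.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Illustrative: an annotation defined in one artifact can be honored
// without bytecode injection, e.g. by a JDK dynamic proxy.
public class AnnotationProxyDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Traced {}

    public interface Repository { @Traced String load(String id); }

    // Stand-in for emitted spans; a real implementation would start
    // and end spans around the call instead.
    public static final List<String> SPANS = new ArrayList<>();

    // Wrap any interface; record a "span" for annotated methods only.
    @SuppressWarnings("unchecked")
    public static <T> T traced(Class<T> iface, T target) {
        InvocationHandler h = (proxy, method, args) -> {
            if (method.isAnnotationPresent(Traced.class)) {
                SPANS.add(method.getName());
            }
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[]{iface}, h);
    }
}
```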
B
D
A
We're not too far off from that line of reasoning today. I'm trying to think whether there are other exceptions besides this; maybe the JFR events.
D
Another line could be whether you can actually develop an implementation together with it. Having the annotation in this repo doesn't mean you can actually implement, for example, the Java agent logic that's in the other repo, so you're really just creating the annotations in a black box, hoping you get them right.
D
For example, for the OpenTelemetry API, we implement the SDK, so we can exercise it here; it doesn't have that problem. But the annotations, I think, have this problem where all we're doing is defining them here, and since we don't use them here, we don't even know if the definition makes sense, which I think is what Jack's point was, yeah, on this particular PR. And so that is probably a good line to say: yes, this is probably not the right place for it.
A
Yeah, we've moved stable things from one artifact to another, as long as their class and package have remained constant. So we have a precedent for that, for repackaging.
D
A
One option is to wait for 2.0 to switch it to instrumentation, and retain it in this repository for now; and, you know, don't merge this PR that adds these new annotations directly to the annotations extension. Instead, merge them to the incubator extension until an implementation can be developed and we feel more comfortable.
B
I mean, the other approach, which I know we've at least chatted about, is we could do what Lombok does, which is create an experimental package that is known to be unstable, and things don't move into the main annotations package until we've proven them out. Internal... I mean, we could also do internal, yeah. We could certainly do internal.
D
C
I was, yeah, I was trying to think through in my head whether that would have much impact, because you're right, I mean, it's...
D
A
C
I support that, sans the problems.
B
A
E
C
On the instrumentation side, I mean, technically we could not support the existing @WithSpan annotation, because of the wiggle room we have, but I don't think we would want to; I think we would want to continue supporting the old one.
B
A
A
C
C
Yeah, I mean, I think that's inevitable, forever, for most shops, just because it's a lot of work to maintain the Java agent.
B
B
Well, you know, I was just thinking that reviewing that particular PR is, like, the perfect thing for someone who's interested in becoming an approver, or helping out and getting involved in the project. That's the kind of thing that maintainers would love community folks, or people who are interested in the project, to go through, because, you know, we know all that stuff already.
B
It's not written for us; it's written for the users, and for the people who are going to actually be consuming things. But I don't know... you can't push people to review documentation. I mean, they don't read it normally; why would they read it in a PR?
A
But, you know, I've always been doing something else, and so it was time to clean them up.
B
It's at least going to be better than what we have today; it's fine. I certainly think so, I mean, clearly. I mean: is there anything controversial in there? I guess I should do my due diligence and just make sure it's all Javadoc, and that you're not slipping in some back doors in the code somewhere.
C
I've already made that my official minimum bar: we're not...
G
C
A
I'm gonna have to, yeah, pull out my botnet and my bitcoin miner: every time you load this in your browser, it executes JavaScript that mines bitcoin and sends it to my account, exactly.
F
I appreciate the joke, but you guys do security scanning, right? Oh yeah.
E
C
Yeah, we have a nightly; we run CodeQL.