From YouTube: 2019-10-03 Java Auto-Instrumentation SIG
D
All right, I guess I'm at the top of the list, so I'll start. My name is Bruno, I work for Sentry, which is an error tracking tool, and I'm just interested to see where you all are going with regards to Java auto-instrumentation. So I'll probably just listen to what you all have to say on the Java side.
B
I also had a conversation about this: what do we ideally want a library to look like so that it can be auto-instrumented? Do we want to give any guidance to people, like providing callbacks? How deep do we want to go on the bytecode instrumentation side? And for me, I think honestly we're not yet at the point where we should jump into implementations. I would still like to better understand what the architectural implications on the Java side should be.
B
Don't get me wrong on any of these items, but if we want to adjust certain things, if we want them to change something, I'd like to have a more in-depth discussion on how this all relates. We also have companies here that have built auto-instrumentation — also from a Dynatrace perspective, when I had this discussion I said: well, anyways, we are obviously willing to share our knowledge and learnings.
B
Here, again, the plan was to come up with different approaches for how we want to make things work. So I would really appreciate a deep dive on how it works, what you want to have implemented, maybe what we want to have changed. I think it would also have an impact on third-party contributed libraries and how we want them to be structured, and these are things I'd rather discuss before maybe jumping into the code too early.
B
It's fine if you have, like, a first Datadog-based version that we use for experimentation, but I think it should still be the bigger "let's get together, agree on architecture, and go forward from there" type of discussion. And don't get me wrong, I don't want to hold anything back, but it feels like we're jumping in rather quickly here on what we want to do, without having laid out the entire plan yet.
G
I totally agree. I think even before this, the question is: what are our requirements, right? What are we trying to create with the OpenTelemetry Java auto-instrumentation? Because basically the decision has been made to just port or fork Datadog to be the current agent for OpenTelemetry. But what does that mean? Does that mean that we just inherit all of the same requirements?
F
Can we just, really quickly, in like a one-minute talk, go over what this agent, this Datadog thing, is? I'm not too familiar with it. What does it do? You start it, you load it with `-javaagent`, most probably, and then what does it do? What does it instrument automatically? Where are we with that? If there's nothing there yet, then let's talk about requirements, sure, yeah.
C
Sure. So I'm pretty familiar with the code bases of about four different Java APM tools, so I can kind of say that what we all do is fairly similar. Now, I haven't seen any of the commercial code bases, so I don't know — maybe there's some secret sauce in that commercial code that you all can share — but at least the open source ones that I'm familiar with, yeah, they're...
C
Typically it's a Java agent, using some type of instrumentation library on top of ASM — whether it's Byte Buddy or something built specifically for that instrumentation agent — and then there are various instrumentations for each third-party library that you want to capture: Apache HttpClient, JMS, servlet. These are all sort of separate modules that use that instrumentation library, capture incoming requests, capture the spans, capture the timing.
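The mechanism described here — a `-javaagent` entry point that registers a bytecode transformer, with per-library modules deciding which classes to touch — can be sketched with only the JDK's `java.lang.instrument` API. This is an illustrative skeleton, not Datadog's or any vendor's actual code; the class names and the allow-list matcher are hypothetical stand-ins for real instrumentation modules:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Launched via: java -javaagent:my-agent.jar -jar app.jar
// (a real agent jar also needs a Premain-Class entry in its manifest)
final class MinimalAgent {

    // Hypothetical allow-list standing in for per-library instrumentation
    // modules (Apache HttpClient, JMS, servlet, ...), each of which would
    // normally supply its own class matcher.
    static boolean shouldInstrument(String className) {
        return className != null
                && (className.startsWith("org/apache/http/")
                 || className.startsWith("javax/servlet/"));
    }

    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain domain,
                                    byte[] classfileBuffer) {
                if (!shouldInstrument(className)) {
                    return null; // null = leave the class unchanged
                }
                // A real agent would hand classfileBuffer to ASM or Byte Buddy
                // here and return rewritten bytecode that starts/ends spans.
                return null;
            }
        });
    }
}
```

The transformer sees every class as it is loaded, which is what lets these agents attach to third-party libraries without any application code changes.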
C
Typically they have some underlying API for the instrumentation to use, whether it's specific to that APM tool, wrapping what they want to do, or — in the Datadog case — they use OpenTracing directly as their sort of instrumentation API. Although, speaking with Tyler, they actually want to get away from that and have sort of an internal API that they call, which could then be abstracted for different versions of OpenTracing. Even right now they're stuck on one version, because they find it a little too tightly coupled to OpenTracing.
B
Yeah, I think there are a number of questions there. First of all, how is the instrumentation actually done? What everybody does is bytecode instrumentation, don't get me wrong, but is it just wrapping, or is it really direct injection of code into functions? How are the libraries shipped? Where do the rules come from? How are the rule sets and libraries loaded? Because most of the concrete libraries either register for callbacks or wrap existing APIs.
C
So Datadog doesn't use any of those contribs. It's direct instrumentation of the underlying library: direct instrumentation of Apache HttpClient, direct instrumentation of servlets. So —
B
But I think that's a good discussion, because it technically means you will have people who build the manual instrumentation libraries and, in parallel, people who build the auto-instrumentation functionality for the same libraries. Is this really what we want framework providers to do? Say I'm framework provider X: eventually we want framework providers to be observable out of the box, and you will tell me that I have to provide two different versions of this instrumentation?
C
I think the concern I have about that is that it's going to push us out. I mean, even by the time we get all of these framework providers to incorporate OpenTelemetry, we still have a huge customer code base — user code base — in the world that's using older versions of these. In the Java world we see that regularly.
B
Agreed, I think we're on the same page there. It's just that right now it's written down nowhere what we actually want to do, so I think some kind of mission statement of what we want to achieve would be helpful here. As we're having this discussion, these things come up — I know you guys are obviously deeply into the work, but I always like to think from an end-user perspective.
B
If we're going to release this most likely in a March–April timeframe next year — I'm thinking of March–April because that was the code freeze for OpenTelemetry — how do we want people to adopt it? Will we provide everything out of the box? Do we expect even third-party commercial distributions to provide these instrumentation libraries? That would be interesting to know, because then I think we would not necessarily achieve a lot of standardization, because New Relic, as well as Dynatrace —
B
We have our own instrumentation runtimes right now, so how would it help us, honestly, to collaborate on this if we have our own instrumentation runtimes? Why would we go for a joint one? And I'm adding a few more questions here — again, not saying that we don't want to contribute, otherwise we wouldn't be here. I just want to have a clear understanding of where we really want this to go. Yeah.
C
I think that's excellent — well, I like the idea of a mission statement, of what the bigger picture is as far as third-party libraries, because you're right, I haven't seen that statement anywhere either. I've heard it kind of discussed, that we want to approach these others — Apache HttpClient and so on.
B
Yeah, more or less. I mean, for us it's just additional work without additional benefit. I think the biggest benefit would be if we could reuse auto-instrumentation libraries of the different kinds — and maybe that's what you want — but eventually everybody has to go back to their own management and explain why we're investing time here. And don't get me wrong: I do think that it makes sense to collaborate on these topics. I just wish we were more precise about why we're doing it and what we want to achieve.
B
Plugins — that was actually what we had in the last discussion, where everybody told me that they could be reused simply for auto-instrumentation. I said: okay, that's great, but I'm not sure whether this would work out of the box. That's kind of where we ended in the last meeting. Now we're talking about it differently, which is fine; that's just why I want to get a bit —
G
I totally understand the concern that you had with regards to much older versions, right? And these are challenges that we work with: much older versions of a particular third-party library that is being instrumented. Because usually the plugins that are created — by whoever's interested in creating a plugin — are created specifically for the version that is current, or something like that, right? But there exist many situations where very old runtimes are running a much older version. So how do you solve that? Within the SpecialAgent project —
G
We have a number of approaches to try to solve these situations. The one that I like, the one that I try to encourage more, is where we end up forking the plugin and basically changing it in a way that it will actually work with older versions — basically providing the functionality directly in the plugin, to be able to support a much wider range of versions of the third-party library that is being instrumented. And that kind of encapsulates the —
G
— instrumentation functionality, the logic specific to the instrumentation of that library, in one place, and it is decoupled from any system. It is only coupled to — well, in our situation — the OpenTracing API, but in OpenTelemetry's situation it would be the OpenTelemetry API. Because if it's decoupled this way, then it can be used by anybody else, effectively, in a manual way or in another auto-instrumented kind of fashion, right? By Dynatrace, with some kind of adapter or whatever, New Relic, even Datadog.
E
The reason we're here is that, if it's going to be commoditized, we should be heavily involved in making sure that it's commoditized in a way that makes sense for us as a business. So that's really our motivation: to make sure that we are involved in the commoditization, because we're all here kind of competing with each other. But if we're going to share this commodity, let's make it a good one.
I
Yeah, in general I want to see a robust ecosystem of instrumentation in OpenTelemetry, with high-quality semantics, where we're all kind of agreeing on what kind of data gets produced by these systems — and then, ideally, trying to get library and framework maintainers to actually push this instrumentation directly into the software that they're providing. That's like a core goal of the OpenTelemetry project.
B
I mean, there's always a bit around just allowing users to get started without having to write any code, right? For the most part you don't need to install some kind of SDK, always — so there will probably always be the need for some kind of auto-instrumentation installation. In some languages there's a certain amount of dynamic work that just has to get done.
I
You can look at, like, fork-join pools in Java as a place where you just have to do a bit of dynamic work, and there will surely always be at least some amount of plugin-based instrumentation — not every piece of open source software that somebody wants instrumented is going to adopt this first-party approach, yeah.
I
There will always be a need for something like that, but ideally these instrumentation packages are not buried inside of the agent. One use case is people who, for whatever reason, don't want to use an agent and might want to install this stuff by hand. We want to have an open ecosystem where it's not some core group of us having to maintain all of the instrumentation for all the software in the world. So there'll be a need for these auto-installers to be able to work really well with instrumentation.
G
And I want to chime in here. I mean, it's kind of an important architectural shift, and I definitely think that we should make this decision. It's a big fork in the road, I believe, and it's one of the most essential ones, because effectively it answers the question: how coupled are the instrumentation rules — or plugins, or whatever we want to call them, right? And I refer to instrumentation rules and plugins as those pieces of code that are written specifically for the third-party library that they are instrumenting, okay?
G
Can this plugin be manually used by just, you know, direct code instrumentation, right? Because if we don't make this decision now, the potential is that we end up making these couplings too soon, too early, and walk a relatively long distance forward. Then it will be very difficult to, well, re-architect — to remove the couplings, to separate the plugins from the tracer, and so forth.
C
— an API that they use, and then they can, you know, put that on the classpath, or put that in a folder that we can load via the auto-instrumentation to pull it in. I mean, that's how, for example, Glowroot works: there's a plugin API that other people have used to write their own instrumentation, and they just drop that in a folder with Glowroot, and Glowroot —
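The drop-a-jar-in-a-folder loading model described here can be sketched with a plain `URLClassLoader` over a plugin directory. This is a hedged illustration of the general pattern, not Glowroot's actual loader; the directory layout and class names are assumptions:

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

final class PluginDirLoader {

    // Collect the URLs of every .jar sitting in the plugin directory.
    static List<URL> jarUrls(File pluginDir) throws Exception {
        List<URL> urls = new ArrayList<>();
        File[] files = pluginDir.listFiles();
        if (files != null) {
            for (File f : files) {
                if (f.getName().endsWith(".jar")) {
                    urls.add(f.toURI().toURL());
                }
            }
        }
        return urls;
    }

    // Build a classloader that makes every dropped-in plugin jar visible,
    // so the agent can discover and apply third-party instrumentation.
    static ClassLoader forPlugins(File pluginDir, ClassLoader parent) throws Exception {
        return new URLClassLoader(jarUrls(pluginDir).toArray(new URL[0]), parent);
    }
}
```

The appeal of this model for the SIG's discussion is that plugins stay decoupled artifacts: anyone can ship one without rebuilding the agent.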
I
That's precisely it, Trask. For the case where we're not talking about — you literally had to get in there and do something dynamic in order to get the instrumentation to work, like fork/join — if we're just talking about the case where you have, say, OkHttp or some library that doesn't have native instrumentation, so someone writes a plugin using a wrapper or their interceptors or something like that, they're going to be providing a certain kind or amount of instrumentation that would come out of that.
G
— instrumenting OkHttp, and that's it. That repository will be the one, the only one, that is necessary for any support of instrumentation of OkHttp libraries, whatever version it may be. Whatever changes may need to be made to support very early versions versus later versions, they should all belong to a common, well-known, single repository, right? Because then that becomes manageable. Otherwise, the alternative is —
C
— the contrib API. So if we auto-inject that into the user's app, and they then downcast it to a MySQL JDBC connection to access MySQL-specific functionality, they're going to get a ClassCastException. So in general I hesitate about the wrapper approach. In cases like OkHttp or other libraries where they've provided really nice hooks for us to inject listeners and that sort of thing, that's less of an issue, and I feel like that could certainly be shared between the two. So, I mean —
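The downcast hazard described here can be shown with plain JDK types — a minimal sketch using made-up stand-ins for a JDBC `Connection`, a vendor driver class, and a tracing wrapper (all names are hypothetical):

```java
// Stand-in for an interface like java.sql.Connection.
interface Conn {
    void query(String sql);
}

// Stand-in for a vendor driver class (e.g. MySQL's connection implementation)
// that exposes vendor-specific functionality beyond the interface.
class MySqlConn implements Conn {
    public void query(String sql) { /* talk to the database */ }
    public void vendorSpecificTweak() { /* vendor-only feature */ }
}

// A tracing wrapper an auto-instrumentation layer might hand back to the app.
class TracedConn implements Conn {
    private final Conn delegate;
    TracedConn(Conn delegate) { this.delegate = delegate; }
    public void query(String sql) {
        // A real wrapper would start a span here, delegate, then end the span.
        delegate.query(sql);
    }
}

final class WrapperPitfall {
    // True only if user code could still downcast the connection to the
    // vendor class; the wrapper is not a MySqlConn, so `(MySqlConn) conn`
    // on a wrapped connection throws ClassCastException at runtime.
    static boolean downcastStillWorks(Conn c) {
        return c instanceof MySqlConn;
    }
}
```

This is the argument for direct bytecode instrumentation (or library-provided hooks) over blanket wrapping: the object the application sees keeps its original runtime type.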
G
Yeah, it's definitely case-by-case, and I know exactly what you're talking about. We deal with all of these situations, trying to figure out creative ways of getting into the third-party library in the best, most sustainable way — one that wouldn't lead to potential ClassCastExceptions because of the way the user application is actually using the library, and stuff like that. It's —
C
— a different goal for the manual instrumentation, right? The developer is pulling that in at compile time, at test time; they're finding issues, they're checking for version conflicts — they're still testing it. Versus the goal of auto-instrumentation, which is just: boom, it works. So there's a little less flexibility there, I feel. Sure.
G
And yeah, there are these situations, but again, they're on a case-by-case basis. This argument doesn't mean that we should just say all plugins should be inside the agent, or all inside manual instrumentation, right? There needs to be a balance. So, I mean, these are suggestions — I'm kind of trying to drive a discussion here.
C
What would be helpful, maybe, is if you could look at the Datadog instrumentation, take one of them that would be a good example of sharing, and show how we could share the manual instrumentation, kind of using the approach that the SpecialAgent takes.
G
That's the challenge, right? So the Byte Buddy — sorry, Datadog's integrations are directly coupled into Byte Buddy. They are written with Byte Buddy, period. However, if you take, on the counter side, the SpecialAgent approach, you start with the instrumentation plugin that is specific to the third-party library.
G
It just means that we would have to rewrite the plugins, which basically means asking: what are the requirements of this plugin? What does it provide — what are the spans that it produces, the properties, etc.? And then basically rebuild a plugin out of that, one that only conforms to the OpenTelemetry API and that is separate, decoupled from the agent, you see? Does —
B
— that make sense? Can we get this into a nice architecture diagram or document? Or, if it already exists, please feel free to point me to it. I think there's a lot of tribal knowledge circling around here, and as we start to implement this, we should open it up to the OpenTelemetry community.
G
Yeah, so Byte Buddy provides, like, an ideal interface to transform and retransform classes — to transform bytecode — but then there are all of these questions of classloader complexity, and of how you manage multiple plugins all coinciding in a single agent, many of which could be interfering with each other. I mean, we could keep going and going into all of these different situations, but it's definitely a lot more involved than just Byte Buddy. Byte Buddy is an integral piece of the agent, but it is only one piece of a much, much more involved architecture.
G
I will say: it would be great if we could have, you know, "better" not be the enemy of "good enough". While we're hashing out what we want our long-term plan to be — if, in the short term, as far as just getting people to try OpenTelemetry and start being early users, there's just a short path to getting something that just installs...
I
It's the short-term issue of just having something there for end users to get started with. So if it looks like this discussion — of getting the thing that we actually want — might be taking a long time, we might want to consider putting some amount of effort into a short-term solution, just to get them started. That's quite —
B
One way to add to that — the reason why I started this discussion, like when we had the Google Doc, was: when we talk about auto-instrumentation, do we mean auto-instrumentation, or auto-injection of the SDK, provider, exporter, whatever we want to call it? Because these are two different things, right? How does it make sense? Because we will have, for example with Dynatrace, our own, maybe, SDK provider or SDK implementation, and I think Microsoft —
B
— as far as I remember, plans to do the same, and I think other people will do parts of it equally. It's about understanding how I can easily do this, because right now I would have to modify the source code to get this in there, which is kind of doable, but not necessarily what we want to do. In that case I would rely on there already being OpenTelemetry instrumentation in there, and just consume it properly. The other topic, where auto-instrumentation comes in, is when the instrumentation is not there and we want to add it at runtime.
I
Clarifying — sorry that I missed the beginning of this meeting, I was on another call — but you're just saying the difference between installing the stuff for end users so they don't have to do it manually, versus doing dynamic code manipulation of various sorts, right? Like, there are lots of different ways to get additional information out of a running system.
I
I mean, the focus here — the core thing, at least from my point of view — is just: if you look at users trying to get started, they don't want to have to put a whole bunch of work in before they can see data, and they don't want to risk the chances of misconfiguring things. That's a thing we see with, like, a pure manual approach, right?
E
That's kind of a non-issue. Ted, to your earlier point about one of the goals of the overall project being to get libraries to actually have OpenTelemetry built into them: do we have any open source libraries — or their maintainers — that are on board for kind of alpha/beta-level stuff in the Java space, that you know of, that we could actually get? If we want people to be able to use stuff right away, maybe we should —
I
I think that's a bit separate. I imagine people would be a bit nervous about baking an alpha API directly into, like, a production library that they're shipping. But you can certainly look at OpenTracing as a place where people have gone ahead and done that — you can look at TIBCO or NATS or Couchbase; there are a number of pieces of software out there that just ship with OpenTracing baked in. Likewise, I think gRPC ships with some OpenCensus stuff baked into it, at least in some languages.
I
Well, good first call — sounds like we've got our work cut out for us. I think this is good, and I do think it would be great to really get this clear and then get it pushed out, you know, into OpenTelemetry proper, like into the specification somewhere, and actually hammer out these details. If people haven't looked at it, there is the RFC that BHS wrote, that got proposed and approved.
B
— experience; people have other stuff to do, I think. Right now we might decide to have a meeting much sooner, but let's see how much progress we really make on the document, honestly. Usually people have very good intentions, and then everyday work comes into place, and then it turns out that two weeks is not such a bad proposal either. So my proposed plan would be: let's start seeing how far we can get with the doc, and what the progress is.