From YouTube: 2021-04-01 meeting
Description
No description was provided for this meeting.
A: All right, hey, we have an agenda item.
B: Yeah, basically, there's been no agenda in the Java SIG meetings for the past four weeks, and we've been cutting them extremely short. So the question is: should we just merge? Since we have three other meetings, do we need... can we get rid of the standard Java SIG meeting and just leverage the other three? I don't know; that's the question.
A: All right, any other topics people wanted to discuss today?
C: Yeah, that's a very good quick question, Jason. Thank you. I wanted to talk about the same thing: the last release was 20 days ago. So do we want to make releases each month, or two times a month?
A: Is that, like, first week? Or... yeah.
A: So my assumption was that we would follow their release pattern and release 1.1.0 within a week after the Java SDK 1.1.0, but we have options.

A: So then we would not be... we would be out of sync, version-numbering-wise, with the SDK, which...
F: Well, so, like, for example, I've toyed with the idea of having a completely non-semver-style version, but more of a date-style format.

F: Because, you know, effectively you have a lot of different constraints as a Java agent in terms of compatibility. I think that having something that exposes how old it is, and, like, the supported range and stuff like that, might be more interesting and more useful.
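The date-style versioning F floats here could look something like the following sketch. The class and method names are made up for illustration; this is not an existing OpenTelemetry scheme.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

// Hypothetical sketch: a date-style version string makes the age of an agent
// build obvious at a glance, unlike a semver "1.1.0" which says nothing about
// when it was cut.
public class DateVersion {

    private static final DateTimeFormatter FORMAT = DateTimeFormatter.ofPattern("yyyy.MM.dd");

    // Version string for a build cut on the given date, e.g. "2021.04.01".
    public static String versionFor(LocalDate buildDate) {
        return buildDate.format(FORMAT);
    }

    // Rough age check: how many whole days old a given version string is.
    public static long ageInDays(String version, LocalDate today) {
        LocalDate buildDate = LocalDate.parse(version, FORMAT);
        return ChronoUnit.DAYS.between(buildDate, today);
    }
}
```

Exposing the supported range F mentions would be additional metadata on top of this; the date alone only answers "how old is this build".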
A: So you think this would be, like, for distros that are, like, collections of instrumentation?
A: We've sort of seen this issue, and I think this was sort of why... oh, OTel gave us the go-ahead to release the Java agent 1.0, but they don't want us to release any specific instrumentation as stable yet.
A: So, as far as releasing more than once a month: is there a specific... I mean, are you wanting to release more often? Is the current once a month causing any issues?
F: The thing that's hard for me is that, for a particular change in a Java agent, something that might be breaking for one person might be totally unimportant for another person.
D: I like the frequent releases, only because vendors that are building distributions, if there are ever SPI changes, they're waiting on those, you know. So having to go a month before even integrating SPI changes is a bit of a lag.
F: For changes to get picked up by vendors, I would almost argue that it would be better to just have everything published to master... let me try that again: publish a snapshot version when it gets merged to master, and then it's just always updated to the latest, or tied to a specific version, by the vendor.
A: Oh, I was going to mention one reason for us, potentially, to keep more frequent releases. The Java SDK is really stable, right? So that's kind of why it makes sense for them to go to monthlies, whereas, in particular related to this comment, our instrumentation API... our APIs are changing.
A: We still haven't really stabilized. So I think that's a good argument for releasing more often for us, until we have stable APIs and SPIs.
A: Okay, let me discuss with Anurag tonight, because when we had discussed previously he was also thinking we would follow the SDK schedule. But if he doesn't have any objections, then, yeah, we'll just go back to releasing, you know, every other week or so, whenever somebody wants.
A: Okay, oh yeah. We had this funny hack of starting OSHI in the Java agent startup, before we realized... somebody had discovered... I don't remember who; somebody, not me, discovered this cool pattern of using a component installer in the agent, which gets picked up via the Java agent SPI, so we can do some things at agent startup.
A: So this was just closing out one of those; we had done all the work for it, basically, in 1643. 1643 is the famous one... I think it's famous, or infamous: it was the hundred comments, hundred commits, hundred files. I was like, what is this, a basketball quadruple-triple? 5,000 lines of code changed. Yeah, but it was also awesome, because this is what gives us a lot more confidence when running the tests now, and it made this change possible.
A: So, you know, every little issue counts: cleanups to executor tests. Oh yeah, we'd taken the... the Java SDK has a strict context option where it will basically throw an error... or, no, log an error if you don't close a context. John, it's using, like, on GC: if the context is still alive, we flag it.
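A deterministic sketch of the strict-context idea described above. The actual SDK check is GC-based, as A says; the class and method names here are hypothetical simplifications.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified, deterministic sketch of "strict context" checking: every opened
// scope is tracked, and any scope that is never closed counts as a leak.
// (The real SDK detects leaks when an unclosed context is garbage collected.)
public class StrictScopeTracker {

    private final Set<Scope> openScopes = ConcurrentHashMap.newKeySet();

    // The returned scope must be closed, mirroring Context.makeCurrent() usage.
    public Scope open(String description) {
        Scope scope = new Scope(description);
        openScopes.add(scope);
        return scope;
    }

    // Number of scopes that were opened but never closed.
    public int leakedCount() {
        return openScopes.size();
    }

    public class Scope implements AutoCloseable {
        final String description;

        Scope(String description) {
            this.description = description;
        }

        @Override
        public void close() {
            openScopes.remove(this);
        }
    }
}
```

Running a test suite with a checker like this enabled surfaces exactly the kind of leaked contexts the meeting goes on to discuss.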
A: So we tried to enable it on our tests, and at first we thought the fallout was limited to, like, you know, five or six instrumentations, or three or four, something smallish, and so we were commenting it out for those particular instrumentations.
A: But then we saw a little bit too much flakiness in CI due to it, so we've removed it for now, but it's definitely the goal to bring it back and figure out why we are leaking contexts and clean those up. So that's a cool feature; if you're using the SDK, I highly recommend it.
A: So this is sort of related to the runnables, where we instrument a little bit too many runnables. Datadog has done some interesting work here, which was pointed out, that sort of limits the scope of the context propagation across threads, which is nice. And I think this came up around the context leaking, because that's kind of one of the problems.
A: If we propagate into too many runnables, sometimes those runnables aren't really... they're, like, new threads in a thread pool, or something that's not really something we want to track as part of that original transaction.
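The "limit propagation" idea might look roughly like this, using a plain ThreadLocal string as a stand-in for the real context. This is an illustration only, not the Java agent's actual mechanism.

```java
// Sketch of limiting context propagation across threads: a context is captured
// into a Runnable only when one is actually active at submission time. Plain
// pool-housekeeping runnables submitted with no active context get nothing
// propagated, so they are never parented to an unrelated transaction.
public class ScopedPropagation {

    // Stand-in for the real context; in practice this would hold a Context.
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    // Wrap only if there is something to propagate; otherwise return the task as-is.
    static Runnable propagating(Runnable task) {
        String captured = CONTEXT.get();
        if (captured == null) {
            return task; // nothing active: do not tie this runnable to any transaction
        }
        return () -> {
            String previous = CONTEXT.get();
            CONTEXT.set(captured);
            try {
                task.run();
            } finally {
                CONTEXT.set(previous); // restore, so the worker thread is not polluted
            }
        };
    }
}
```

The interesting design point is the early return: deciding not to wrap is how over-propagation into unrelated runnables is avoided.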
F: So I'm curious why. I understand why Datadog does it, and that's mainly because we have that case where we're trying to send complete transactions. Why is it important for OpenTelemetry to contain and limit that propagation?
A: I think, just related to this, it's related to the strict context, and wanting to make sure that we're not leaking context anywhere.
A: Okay, I see: like, we leak a context with a span into a runnable, and that runnable then picks up some work. That work would get parented to that leaked span, and we guard against that primarily through our server instrumentation. We...
A: Yeah, yeah, but it would still be better to not leak context in the first place.
G: Detecting whether our library instrumentation class is already on the application classpath... but Anurag pointed out a scenario where this would completely break the instrumented application. So I just made a revert PR today, and it's going back to the drawing board, I guess.
A: Yeah, thanks for thinking about this one, though. Like, we've got to have some failed attempts before we get it right, and this issue has been... it was long-standing. It's under a thousand.
F: Could named tracers prevent that? So the tracer provider would know exactly who was supposed to get a valid tracer, and provide no-op tracers to the others.
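F's named-tracer suggestion could be sketched like this, with toy types standing in for the real OpenTelemetry API:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a tracer provider that hands real tracers only to
// instrumentation names it trusts, and silent no-op tracers to everyone else.
public class SuppressingTracerProvider {

    interface Tracer {
        boolean isRecording();
    }

    static final Tracer RECORDING = () -> true;
    static final Tracer NO_OP = () -> false;

    private final Set<String> suppressed = new HashSet<>();

    // Mark a named instrumentation as suppressed, e.g. because the agent is
    // already instrumenting the same library.
    public void suppress(String instrumentationName) {
        suppressed.add(instrumentationName);
    }

    // Named-tracer lookup: suppressed names get a no-op tracer back.
    public Tracer get(String instrumentationName) {
        return suppressed.contains(instrumentationName) ? NO_OP : RECORDING;
    }
}
```

As A notes later in the discussion, this only covers one direction: it can silence library instrumentation, but it cannot make the agent back off.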
F: All I'm saying is, like, I don't know how exactly it's supposed to work, but my understanding was that the named tracer stuff was specifically designed, intended, to help with this problem.
A: There was some discussion way back when this document was written, which was, like, almost two years ago now. I kind of remember that. And even so, we've been kind of hedging on... we've kind of changed our minds on which should back off. So originally we were planning to... let's see, I think I'll pull up the doc.
A: Yeah, the Java agent suppressed the library instrumentation, but there was some discussion and desire for the Java agent to back off, which makes that named tracer thing... even if we could return no-op tracers, that doesn't help for the reverse problem.
B: But is there a way we could have the library instrumentation implement an interface, or provide a method, or do something that would let the agent know that it was there, in place, and being used, so that one or the other can be suppressed as appropriate? Like, maybe add something to our API, to our instrumentation API.
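B's idea might be sketched as a shared presence registry in the instrumentation API. All names here are hypothetical, not an existing OpenTelemetry API.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: library instrumentation announces its presence through a
// shared registry, so the agent can check before applying its own bytecode
// instrumentation for the same library.
public class InstrumentationPresence {

    private static final Set<String> PRESENT = ConcurrentHashMap.newKeySet();

    // Called by library instrumentation at initialization time.
    public static void announce(String libraryKey) {
        PRESENT.add(libraryKey);
    }

    // Called by the agent before wiring up its own instrumentation.
    public static boolean shouldAgentBackOff(String libraryKey) {
        return PRESENT.contains(libraryKey);
    }
}
```

One limitation of a coarse flag like this is that it says only that the library instrumentation is present somewhere, not which individual service or endpoint it covers.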
G: Yes, and I don't know if the scenario that Anurag pointed out would still work with that, because the exact example that he added was that you have an application which has multiple gRPC services, and one of them is actually defined by some library that uses our instrumentation library.
G: So if our library instrumentation just says to the agent, in some way, "hey, I'm there, I'm present," and we turn it off, what's the guarantee that we won't apply the Java agent instrumentation to just that one service that was instrumented already, and not all the rest?
B: But it feels like... I mean, maybe we shouldn't be trying to do everything automatically, perfectly, all the time, but make sure that the user has enough information and switches that they can... we can give them the information, the agent can log the information, and they can then address it however they want to address it.
A: So we detect nesting, and we do already suppress, like, CLIENT and SERVER spans, right?
B: If you could detect, for example, that a span was being generated that had exactly the same instrumentation library name and kind as its parent (rather than just CLIENT under CLIENT, but a little bit more like, "hey, this looks like you've duplicated exactly what's happening"), then maybe that second instrumentation could be suppressed.
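B's duplicate-detection heuristic could be sketched as follows, with simplified stand-in types rather than the Java agent's actual suppression logic:

```java
// Hypothetical sketch: suppress a new span when it would have exactly the same
// instrumentation library name and span kind as its parent, which suggests the
// agent and the library instrumentation both instrumented the same call.
public class DuplicateSpanCheck {

    enum SpanKind { CLIENT, SERVER, INTERNAL }

    static class SpanInfo {
        final String instrumentationName;
        final SpanKind kind;

        SpanInfo(String instrumentationName, SpanKind kind) {
            this.instrumentationName = instrumentationName;
            this.kind = kind;
        }
    }

    // Stricter than plain CLIENT-under-CLIENT suppression: require the same
    // instrumentation name as well, so unrelated nesting is left alone.
    static boolean looksLikeDuplicate(SpanInfo parent, SpanInfo child) {
        return parent != null
                && parent.instrumentationName.equals(child.instrumentationName)
                && parent.kind == child.kind;
    }
}
```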
D: Something along those lines, yeah. I don't know full details, but something that would allow us to detect that. It's definitely a slightly different use case than CLIENT under CLIENT.
A: So, let's see... partly we're hoping that instrumentation libraries, or libraries themselves, will...

A: ...use OpenTelemetry directly, and so in that case we would want to defer to the library's instrumentation and have the Java agent back off. Okay.
F: So let's say that we've got three different cases, right? We've got library instrumentation that's provided by us; we've got automatic instrumentation that's provided by us; and we've got, potentially, instrumentation that's built into the various frameworks, right? Those are the three cases, and I would argue that it should perhaps take that priority, or that precedence, right?
A: ...reasons here; we'll ignore this, because so far it hasn't.
A: The user may have configured the library instrumentation... oh yeah, so, like, in this way we would respect programmatic configuration of the library instrumentation.
A: ...too many options yet for library instrumentation. And I guess the one that sort of sold me the most, maybe, was: if the user really wants the Java agent to take precedence, they can always remove the library instrumentation.
G: Unfortunately, it's not currently working for library instrumentations, because we do not bridge the server and client span context. There are separate issues for that.
A: No, no, I... all right. So, yeah, it sounds like we're back to the drawing board, but that's some good brainstorming. And, you know, not that you're responsible for this item just because you made the first attempt.
A: Oh yeah, I approved it. I thought it was awesome, until Anurag came along and pointed out the problems. Let's see... build stuff, build stuff. Oh yes, well, we'll get to gwit down lower.
G: That was a deceptively simple Gradle change that just moved some test task modifications to after Gradle's afterEvaluate. We have a class filter in our instrumentation Gradle file that excludes, actually, all of the libs that have something to do with the Java agent, because our tests are actually instrumented applications.
G: That is, the Java agent. This wasn't working for test sets, because the test tasks generated by the test-sets plugin were created after this code ran. So I just had to add...
A: Yes, I think we went over that last week, but for those that don't remember: our muzzle validation was running, but not actually validating anything, for a good two or three months.
F: Is that NPE inherited from Datadog, or...?
C: It's not that common a code path that leads to that NPE. Reactor Netty certainly uses that path, but our...
A: Yeah, it's great, yeah, because, I mean, it's our... it has...
A: It will justify what we want to do, which is drop the Netty instrumentation and instrument only the higher-level stuff. Yeah.
D: Okay, can I go back and do a side quest? This issue with the NPE around Netty: Tyler, you fixed this two months ago in the Datadog repo.
F: Help us out... sorry. One thing I did change when I was looking at that Netty instrumentation: I noticed that there's a lot of inconsistencies in the Netty clients in how they handle...
F: Yes: when the client is making an HTTPS request via a proxy, it has some very different behaviors that it has to go through.
F: Most of the instrumentation seemed to just handle it properly without problems, but Netty definitely did not, especially with HttpURL... sorry, the Apache... the async HTTP client.
A: Yeah, I really need to set aside some time for bringing over cool stuff from the Datadog repo.
A: And... oh, that's the same thing. Somebody must have merged something while I was talking, Nikita.
A: "Time"... "strict"... oh yeah, this is the one that we ended up rolling back, but hopefully we'll bring back at some point. Clean up, clean up.
A: "Oil in the ocean," I like that. Yeah... test fix, typo, another new instrumentation.
A: Oh, we had a good discussion in the Tuesday... Wednesday meeting about kind of the ever-growing size of the repo, and the current proposal that we're leaning towards is splitting out a second repo, java-instrumentation-contrib, and moving less popular instrumentation... well, moving all old versions of instrumentation over there. So we would only keep, like, Netty 4.1 and Lettuce 5.1 in the main repo, and then also likely move less popular instrumentations over into that repo.
A: So, just a heads up. I think that's a nice, reasonable solution, because we don't want to end up with tons of repos, but it is nice to keep the main repo, which has all the core Java agent functionality, slimmed down a little bit. It makes it easier to deal with some of our API work and stuff.
A: Right, that's excellent. That...
A: Yeah, yeah, that's a really good point. So that could give some bounded growth, because that's going to be the unbounded-growth repo, and we can keep the main repo kind of bounded.
A: Cool. Async... oh, this is... I...
F: I see trade-offs on both sides of that one. I look at the pain that it was to work with instrumentation for OpenTracing: the OpenTracing instrumentation was split across multiple different repos, and that meant that things diverged and got outdated very easily, because, you know, you would make a change and it wouldn't necessarily be tested and propagated across all of the different repos.
F: So if you have two repos that are, like, tightly bound and tightly coupled, and maybe unit-tested together, then I guess that kind of alleviates that problem, but...
F: Yeah, I would caution against fracturing the repo too much.
F: So, in that vein: if you make a change to the core project, or I guess the instrumentation project, how does the test suite get activated to run on the contrib project?
C: That's still an open question, but I am going to write a proposal, both, like, general in scope to OpenTelemetry, about the general distribution of third-party instrumentations, and specifically about how Java is going to handle that. So I will not answer you right now, but there will be some text explaining that.
F: Before actual work happens. That would be my biggest concern about that: if you make a change that breaks one place, having it be visible ASAP is, I think, essential. The worst...
F: You know, you make a change that breaks the contrib repo, and then, you know, a month later, when you finally get around to syncing the project and re-testing contrib, you realize that, oh, that didn't work; there's this problem, and it broke something.
A: So we are three minutes over our five-minute cutoff. I did want to just call out: this was a cool feature by a new, external contributor, the @WithSpan annotation. Now, if that method returns a CompletableFuture, then the span captured by that @WithSpan won't end until the CompletableFuture ends. And he's now extending that: there's an open PR to extend it to RxJava types, future types, and plans to extend it to other libraries' future types. So that's really... I think.
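The behavior A describes could be sketched like this, with a toy Span type; the real agent implements it via bytecode instrumentation around @WithSpan methods.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: when an annotated method returns a CompletableFuture, the span is
// ended only when the future completes, not when the method itself returns.
public class AsyncSpanEnd {

    static class Span {
        final AtomicBoolean ended = new AtomicBoolean();

        void end() {
            ended.set(true);
        }
    }

    // Wraps a method's return value: for futures, defer span.end() to completion.
    static <T> CompletableFuture<T> endOnCompletion(Span span, CompletableFuture<T> result) {
        return result.whenComplete((value, error) -> span.end());
    }
}
```

The whenComplete callback fires on both normal and exceptional completion, so the span is ended in either case.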