From YouTube: 2021-05-07 meeting
C
So it's funny with the API diff stuff. Today I was trying, I was moving directories around and looking at the diffs, and also kind of at the same time working on the changelog, and I'm like, okay, this is a really good test of this: does the diff match the changelog? Is that right? And I'm like, oh no, the diff is not showing the change in the Jaeger gRPC exporter builder, and I'm like, what's going on? It must not be working, what's happening here?
C
We added that, we had the timeout added to that, and I spent a good half an hour messing around with things and tweaking it, and it turned out that the diff was correct. That timeout has been on that exporter for a long time. It was just added to autoconfigure; the option was always on the exporter and I'd totally forgotten about it. But the API diff was 100% accurate, so that was good, but I was convinced, I was totally convinced, it was wrong.
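For reference, a minimal sketch of the option in question, against the opentelemetry-java Jaeger gRPC exporter builder; the endpoint and duration values here are made up for illustration:

```java
import java.time.Duration;
import io.opentelemetry.exporter.jaeger.JaegerGrpcSpanExporter;

public class JaegerExporterTimeoutExample {
  public static void main(String[] args) {
    // The timeout option has long existed on the exporter builder itself;
    // what changed recently was only that autoconfigure started wiring it up.
    JaegerGrpcSpanExporter exporter =
        JaegerGrpcSpanExporter.builder()
            .setEndpoint("http://localhost:14250") // assumed local collector
            .setTimeout(Duration.ofSeconds(10))    // the option discussed above
            .build();
    exporter.shutdown();
  }
}
```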
E
I'm mostly joining. I don't have anything else, other than I was thinking after the morning meeting that I could throw that stuff into contrib. Maybe it would be better served in the contrib repo rather than the main instrumentation repo.
D
Why don't you share, show Anuraag, and let's get his thoughts?
D
Sure, okay, we'll come back to the burning SDK release.
E
Okay, so I think I showed this a while back, and I showed it again this morning, but this repo that I stood up a while ago called iguanodon was confusing people this morning, based on the name. It's just a code name, it's a placeholder, it's also a dinosaur. Anyway, this stands up a Docker Compose environment with Spring PetClinic REST, and then it runs a test pass with no agent.
E
It runs a test pass with an agent, and by test pass I mean it runs this k6 script that does a bunch of stuff, and it does it a few hundred times, concurrently, with a few workers. And I know this is a lot to sort of digest, but it's going through the REST sort of user workflow of creating new vets and new pets and new owners, and ideally scheduling visits and stuff, but it didn't quite finish. Anyway.
E
So it does that a few hundred times across a number of threads, once with the agent and once without the agent. Actually, it does no-agent first and then it comes back and does agent. During both test runs it turns on JFR, and then it grabs the JFR files, and then it extracts various statistics from those JFR files and puts them back into, they're both on the main branch in this results directory, but aggregates them in the form of CSV files that get committed back to the repo.
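The extraction step can be done with the JDK's own JFR parsing API; the meeting doesn't show the repo's actual parsing code, so this is a hedged, illustrative sketch of pulling GC pause and allocation totals out of a recording file:

```java
import java.io.IOException;
import java.nio.file.Path;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

// Sketch: sum GC pause time and approximate allocation from a JFR file.
public class JfrStats {
  public static void main(String[] args) throws IOException {
    long gcPauseNanos = 0;
    long allocatedBytes = 0;
    for (RecordedEvent event : RecordingFile.readAllEvents(Path.of(args[0]))) {
      String name = event.getEventType().getName();
      if (name.equals("jdk.GCPhasePause")) {
        gcPauseNanos += event.getDuration().toNanos();
      } else if (name.equals("jdk.ObjectAllocationInNewTLAB")) {
        // TLAB size is a rough proxy for allocation volume.
        allocatedBytes += event.getLong("tlabSize");
      }
    }
    System.out.printf("gcPauseMs=%d allocatedMB=%d%n",
        gcPauseNanos / 1_000_000, allocatedBytes / (1024 * 1024));
  }
}
```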
E
So this is a really inexpensive and silly, unsophisticated way of doing a database, and so we have stats over time, and then on the g...
D
I haven't seen that, rendering, GitHub rendering of CSV files.
E
That's, oh, because that file doesn't exist on that branch. It exists in a different place. It's just, I wanted to root stuff in web so that it would make more sense when you saw URLs and stuff. But, you know, it's copied over here on the gh-pages branch, which allows us to do some graphing, right? And so here are the main things we're looking at: allocations, garbage collection, heap, throughput, start time, and, right.
D
I remember seeing, like, single requests. Glowroot would capture the allocation, thread count, and, you know, we would have some long-running requests that would allocate, like, multiple gigs. Just, you know, it's easy to allocate memory.
E
So it's a pretty short run, even though it's, like, thousands of requests still. I didn't want to blow it out too much. In any case, here's what else is in here, and I'm open to other metrics and other ideas too, but here's GC. This is, like, total pause time, the total sum of GC time across the life of a run. And so, I think, Trask, you are not the only person to have ever asked what the x-axis is, because it's, because it's absurd-looking. These are timestamps, right?
E
So each of these x-axis, you...
D
You know, the bigger problem actually is that Zoom covers up the bottom part of your screen, with my, with the toolbar. Oh yeah, okay. No, I literally don't see the x-axis down at the bottom part of your screen.
E
So even if the test might have taken longer, or there were more garbage collections on this one, there were also more garbage collections on this run, right. So there are some exceptions here, like this one went down and this one went up, like, okay, there's some, some variance, but the shape of these curves is pretty consistent. I think there's data to be gleaned from that. I mean, the fact that it's...
E
But in this particular scenario, I think just a trend line would be a pretty good indicator, and it's also not surprising that this is, like, consistently higher, right, because it does so much more. And even Spring PetClinic REST is, like, a pretty simple app. I...
E
Yeah, and maybe it was also a fluke, like maybe it crashed, I don't know, like, there was development happening. You know, yeah, fair enough. This one got close. So, this is throughput, looking at the iteration time, which is a single test pass through that k6 file that I showed, and you can compare, you know, averages. There's not a lot of difference between the average and p95 for both of these, and then this is for a single HTTP request, average.
E
And then startup time, and startup time is in seconds, so you'll see, well, actually that one doesn't look like a round number, but it should be in seconds. And this is only accounting for the time until, I think, the app is available, like, until, I think, and what does it use? What is that thing that I hate, Swagger? One of the things I hate so much, that is, that form-based UI for hitting REST requests.
E
Sure, and we know that, and we will always have that. Yeah, yeah, of course. Like, being able to track some form of baseline and look at it over time and say, look, it got way better or way worse, or, you know, somebody observed it at 300 percent whereas we normally track it at 80 percent, like, what's the, what changed, or why is their scenario unique, right? Why is their deployment different than Spring PetClinic?
E
I'm not, so that came up earlier too. I'm, I'm using whatever's the latest, but it would be great to be able to have this kind of faceted on different agents, or dif..., and not to compare between vendors, like, I wanted to stay out of that game entirely, but more, yeah, like comparing snapshots with latest would be fantastic.
E
They are, yeah, yep, yeah. And just, in a cloud environment you're going to expect variance, but even the shape of these curves, right, is more or less the same, and I think it depends largely on what the path that the test runner goes through is, but also, I think, more to the point, what cloud instance you're on and who your neighbors are.
B
Yeah, I guess so, yeah.
E
I think that's exactly what it is, or at least that's a big part of it. And there's, so there's only three containers that, or, there's three images and three containers that I build, and so we do reuse the PetClinic container for both test runs, which I think probably aligns them closer to where they would have been otherwise, yeah.
D
Yep, I would like to see more iterations, especially of the startup, because the startup, you know, that's just one snapshot, whereas, like, if you go to the, go to the throughput chart.
D
Yeah, like, this is probably the most consistent of all the charts, which is good. I mean, and partly, I think, because this is running a lot of iterations each time.
B
When I'm trying to, like, if we have multiple commits with different dots, like, I sort of can't imagine being able to do anything, because is it because of variance, or is it because the code actually changed? If we don't, so, this graph, I don't know how. Of course, if we double our startup time, we'll find it. So it's still, it's always nice to have graphs, I think, but.
D
Another way that we could potentially get less variance is running, now, can you run, running on different VMs? If we make it different jobs, like, we could have split it, like, have it run the job 10 times on 10 different, so it'll pick up 10 different VMs.
D
When I run, when I run benchmarks, I, minimum 20, I use a minimum of 20 VMs, and I've, I've run 50 to 100 also, just because, especially for those startup ones, there's so much variance. You need a lot of feedback, yeah. And, but, yeah, I mean, you're using Azure for that, right? Yeah, yeah, yeah.
E
You know, I know, it's just, like, and maybe there's, maybe there's way better facilities for doing some of that automation that I'm not leveraging yet. That would not surprise me.
B
So, we, I'm, just in case you feel like doing it: we've had the same problem on Lambda and found a solution, so maybe you can just borrow it. Like, if you, if you upload artifacts, there's a GitHub upload-artifact command that uploads a file, but if they all start with the same folder, actually, then when you download the artifacts they're all combined into one, so that somehow gets aggregated, which was surprising.
E
Anyway, I've hogged enough of this meeting, probably. My idea was, so the, the conversation is, yeah, why is this just hanging out in my account? We should, I definitely want to contribute this. I have been known to speak ill of monorepos a lot, and I think that our instrumentation repo is very big and already, like, pretty challenging to work with, and I'm hesitant to add files to something.
E
That's in that condition currently. And so I suggested at the beginning of this meeting that maybe I could put this in contrib, and maybe it would be served better over there. I have another idea.
C
What if we suggest to the TC, or the governance committee, that we have a repo specifically devoted to performance? It could be more than one language, because with what you've written here, it probably wouldn't be too difficult, assuming you had a way to gather the data together, the metrics, to extend this to other languages.
E
Okay, what's a good...
B
I would, I mean, since this would not be built, I guess, I mean, just having a folder inside the repo right now, I don't think it hurts that much. Adding to where instrumentation hurts, because it all gets built. So I...
B
Yeah, the main reason to put it in the instrumentation repo would be to then just run it on every commit, so we get data per commit rather than time-based, which, yeah.
B
Nice to have, I think, yeah. Or, if it's in another repo, we could have our commits trigger a workflow in another repo, but unfortunately GitHub doesn't make that that easy. So, yeah, those are three options, but, like, if the repo is a performance repo, that would make a lot of sense, I think, to have a dedicated one there.
D
Well, I would vote for it starting in instrumentation, and then starting the discussion of creating a standalone repo for it. Okay.
D
Yeah, release time.
C
No, it's a different error. I could not, I, like, spent time reading through 5,000 lines of --info output and could not figure out what was going wrong. So I just started, I told Jason, I just started throwing stuff at the wall. I'm like, well, what if I change the Java version? What if I change the Ubuntu version? And then suddenly, like, changing to Ubuntu 20.04, it worked, and I'm like, okay, I'm a terrible engineer, but yeah.
C
I guess I got it working, so I have no idea, absolutely no idea, what's going on, but everything was broken, and it was all the full-config test, every single time, and it was all, I mean, I don't, I couldn't. It was not the same SSL issue that we were seeing before with Armeria; there were just no, no spans getting delivered to the gRPC endpoint. So I had no idea what's going on. 20.04 seems to fix everything, although it broke everything, like, two months ago, right? So now it fixes.
C
It works with 20.04, so we're now using 20.04; let's see when it breaks. No idea. You know, it was, like, one of those things where I'm like, yeah, I'm getting ready for the release, got everything going, I'm like, and nothing builds anymore. What is going on? And it's always Ubuntu. So, yeah, anyway, that was an exciting two hours of my day, wasted. So, release: are we ready to release, is the question. Is there anything else we want to get in?
C
And I don't think there is. I mean, the only thing would be, potentially, the baggage parsing improvements, but, as Jason has pointed out several times, we're literally trimming off microseconds from baggage parsing, under the condition where we have 100 baggage items.
B
At, we're having the conversation right now whether we support baggage for propagating in AWS. No one wants to, because they're scared of this 8k header, or potentially 8k header, like, that's the limit on the size of baggage. It's pretty weird. And so now I need to know, like, how people use baggage, or else, like, if we can't come up with use cases, then no one's gonna allow us to propagate this huge header within AWS, you know.
C
I know the one use case that I've heard several times is attaching information about whether you're running a canary, whether it's a canary deploy. So a canary will put baggage, put baggage information in there, so that the request is known to be going through one of the canary deployments. That would let you then do analysis, if that baggage is then appended onto some sort of span or metrics.
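A minimal sketch of that canary pattern with the OpenTelemetry Java Baggage API; the entry name deployment.variant and its value are invented for illustration:

```java
import io.opentelemetry.api.baggage.Baggage;
import io.opentelemetry.context.Scope;

public class CanaryBaggageExample {
  public static void main(String[] args) {
    // At the canary's ingress, tag the request context.
    Baggage canary = Baggage.builder()
        .put("deployment.variant", "canary") // hypothetical key/value
        .build();
    try (Scope ignored = canary.makeCurrent()) {
      // Anything downstream on this request (across services too, via the
      // W3C baggage propagator) can read the entry back and copy it onto
      // spans or metrics for canary-vs-baseline analysis.
      String variant = Baggage.current().getEntryValue("deployment.variant");
      System.out.println("variant=" + variant);
    }
  }
}
```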
C
Anyway, that is the main, that is the only use case of propagated baggage that I have heard that seems like a reasonable SRE sort of baggage usage, propagated baggage. I mean, in-process baggage is a totally different thing; in-process baggage is useful for other reasons, but.
C
Well, I mean, baggage is something attached to the context, but it's something with a known spec and a known API that's built in to your OpenTelemetry APIs. So, yes, it is just sticking something in the context, it's just a map in the context, but it's one with a well-defined API that is supported across languages.
C
Oh, for sure, I mean, you don't need baggage, I mean, you could do it other ways. It's just kind of a built-in: if someone is looking for how to do something, they look for a thing called baggage and they can use it. They could, they could stick their own map into the context and it would work just as well.
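For comparison, a sketch of that do-it-yourself version with the io.opentelemetry.context API; the key name and map contents are invented, and, unlike Baggage, nothing propagates this across process boundaries for you:

```java
import io.opentelemetry.context.Context;
import io.opentelemetry.context.ContextKey;
import io.opentelemetry.context.Scope;
import java.util.Map;

public class CustomContextMapExample {
  // A private, application-defined context key (hypothetical name).
  private static final ContextKey<Map<String, String>> MY_MAP =
      ContextKey.named("my-app-metadata");

  public static void main(String[] args) {
    // Attach the map to the current context, exactly as baggage does
    // internally, just without the cross-language spec around it.
    Context ctx = Context.current().with(MY_MAP, Map.of("tenant", "acme"));
    try (Scope ignored = ctx.makeCurrent()) {
      Map<String, String> metadata = Context.current().get(MY_MAP);
      System.out.println(metadata.get("tenant"));
    }
  }
}
```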
D
Does, does the baggage map, if you update it lower down, does it update, or is it immutable? Yep? Okay, okay.
D
And it's just strings, right? String to string? Yeah, just string to string. Okay. What, have you seen that use case in the wild?
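On the immutability point, a small sketch: in the Java API a Baggage instance is immutable, so an update lower down produces a new instance rather than changing the caller's copy:

```java
import io.opentelemetry.api.baggage.Baggage;

public class BaggageImmutabilityExample {
  public static void main(String[] args) {
    Baggage original = Baggage.builder().put("k", "v1").build();
    // toBuilder() copies the entries; the original is left untouched.
    Baggage updated = original.toBuilder().put("k", "v2").build();
    System.out.println(original.getEntryValue("k")); // prints v1
    System.out.println(updated.getEntryValue("k"));  // prints v2
  }
}
```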
B
Like Android, or something like that, probably, that sort of metadata, yeah. If it's all correlated, then we can see things, because a lot of information about a user or device or something might go into baggage, so it's shared across all the signals. I guess that's, like, anything you want to share across signals, not only in traces, you would put in baggage.
C
You could in Open..., or you could in OpenCensus. This is something that W3C has still not agreed to, or agreed to in that spec, but that is kind of what the purpose of the metadata is. It is currently unspecced, but the metadata would allow you to say, here's the, here's the life cycle, like: this is allowed internal, this should propagate, or this has some, I don't know, some number of hops.
C
Anyway, no, I think, it doesn't seem unreasonable to me. I wish it were something that the spec had.
B
No-op case, yeah. It is really just a span builder allocation, that's the only allocation in practice, and, and of course, the context. So we at least need the context, and I think that's significant overhead, and then the question is whether we need the OpenTelemetry API itself, and that's only that span builder allocation.
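For context, a sketch of the path being discussed, using the API's built-in no-op TracerProvider; whether the spanBuilder() call on this hot path allocates is exactly the overhead question above:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.api.trace.TracerProvider;

public class NoopOverheadExample {
  public static void main(String[] args) {
    // With no SDK installed, this tracer is the API's no-op implementation.
    Tracer tracer = TracerProvider.noop().get("example");
    // Even in no-op mode, each traced operation goes through spanBuilder();
    // the discussion is about the per-span builder allocation on this path.
    Span span = tracer.spanBuilder("doWork").startSpan();
    span.end(); // no-op: nothing recorded, nothing exported
  }
}
```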
B
Yeah, anyways, I just marked this as alpha for now, so I figure it's worth playing around with. We can, like, alpha, we can delete it if we don't like it. So I, I'm hoping, like, if Amazon is the only company that uses this, then there's no point for this to be in OpenTelemetry, it could just be maintained by us, but right.
D
You mentioned, without this, people, people have to build their own tracing wrapper, which is what the Azure SDK people did, because.
B
And, yeah, and I think that's, it's hard to validate our real API completely. This no-op thing is really easy to validate, I think, so I think it's easier to get people on board. That might be an interesting thing. Maybe you can ask them, like, would they be interested in OpenTelemetry if it had this mode where it's purely no-op by default, and.
D
Really around API breakage, but it's worth revisiting now that 1.0 is out.
D
John's PR, that, that, the why we won't be breaking API.
B
Yep, yeah, the AWS SDK has its intercept API, like, every single library has their OpenTelemetry, but I think this is one of the reasons. So I think having a pure no-op mode, I like. Ideally, this was the real no-op, it was in the spec from the beginning, and that would have been nice, but in the meanwhile.
C
I mean, I think it's worth, you know, bringing up as, at least as a spec discussion, but I'm also fine having this as a, you know, alpha.
D
To me, it's, it's specifically about modeling client spans, where you have something that's, then, you know, you want to model, you want to capture that outgoing, over-the-wire kind of a thing, and how close do you want, like, the, the onResponse callbacks seem the most clear to me that they shouldn't be part of it, because typically you get onResponse and then you do a database call in your onResponse handler or something, yeah. And so, to me.
D
The interceptors feel similar to that, where you would use, it's just a different kind of callback pattern where you would get the response back in the intercept.
B
I think so, though, because interceptors are registered to the client, they're not the business logic itself. So the client's calling one method to issue this request, and then they'll get the response either in a callback or whatever, but the interceptors are totally in the background. They're not part.
D
Sorry, I'm, I'm saying this badly. There's a before and after in the interceptors: before it calls the downstream, the befores will get called, then the afters.
B
And so it feels like that's then a black box: you call execute and you get a response in your business logic, but whatever happens is a black box. So if we go back to our AWS SDK, Azure SDK, like, it might be doing an auth request and then multiple database calls and then returning the response. So far we've been modeling that all as one span, and it's the same concept here, I think, where the user calls execute and gets a response; what happens in that black box is implemented with interceptors.
C
That's something that we don't really have, like, if we choose Nikita's idea, that onResponse, their onResponse, is out in the instrumentation API, the Instrumenter API, and happens after the client span has ended, then users can't use the instrumentation API to decorate the client span with information from the response.
D
Well, the new Instrumenter API, I think, solves this for you, because there's, there's attribute extractors on both request and response.
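A rough sketch of that shape; this is a simplified, illustrative version of the extractor idea, not the repo's exact interface or signatures:

```java
import io.opentelemetry.api.common.AttributesBuilder;

// Illustrative only: a simplified AttributesExtractor in the spirit of the
// Instrumenter API discussed above; the real interface's signatures differ
// across versions.
interface AttributesExtractor<REQUEST, RESPONSE> {
  // Called when the client span starts, with the outgoing request.
  void onStart(AttributesBuilder attributes, REQUEST request);

  // Called before the client span ends, with the matching response, so the
  // span can still be decorated with response information.
  void onEnd(AttributesBuilder attributes, REQUEST request, RESPONSE response);
}
```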
D
But anyway, we should, we should clarify the onResponse case: the logic that happens inside of an onResponse handler.
B
Yeah, yeah, exactly. So maybe that's also not, yeah, it's a bit, because OkHttp interceptors, they don't separate async mode at all, they're all synchronous interceptors. So you issue the request and you get the response.
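A minimal sketch of that synchronous shape, using OkHttp's real Interceptor interface; the span handling inside is illustrative, not the agent's actual instrumentation:

```java
import java.io.IOException;
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

// An OkHttp interceptor is synchronous: the request goes out and the
// response comes back inside one intercept() call, which makes it a
// natural place to open and close a client span.
public class TracingInterceptor implements Interceptor {
  private final Tracer tracer = GlobalOpenTelemetry.getTracer("example");

  @Override
  public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();
    Span span = tracer.spanBuilder("HTTP " + request.method()).startSpan();
    try {
      return chain.proceed(request); // response is available before span ends
    } finally {
      span.end();
    }
  }
}
```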
C
Yeah, yeah, as long as it's, is, injected, injectable into the OkHttp3 interceptor. Yes, absolutely.
E
There are hooks all over the place, and there's different operations that happen all across that life cycle, and where do you draw the line, like, that, that's everywhere, right? Yeah. And I, I mentioned it earlier, but we had a customer that was asking for, they wanted details at every point along the transaction, including, like, DNS resolution time, connection setup phase, connection established phase. Like, they wanted every detail about this, like, on both ends, like, connection setup for request, plus response handling and teardown. Like, they wanted details all along there, yep, yeah.
D
Let's see, so, I know: HTTP, gRPC. I'd forgotten that the AWS instrumentation has a lot of these, right? It's very.
D
Where we put the, I wanted to, like, write up a list of the, where we apply this pattern of.
D
Is trying to get a response back. This is just, like, I see, as opposed to a callback, which is not mutating, potentially further mutating, the response.
D
Okay, yeah, because I, I said that I would write up the pros and cons.
D
On the issue, after we chatted with you. So, yes, you, you've convinced me, so I just want to get, like, be able to write it up clearly. Any other examples that you can think of, I'll.
A
Cool, so.
C
I have my, I have my second COVID shot tomorrow, so I might be knocked out for the weekend, we'll see, but thankfully Splunk has, we have a day of rest on Monday, so I have an extra day to recover. So that's good.
D
I think so, so the, yeah, so the, the action, I was, for, to review Lauri's, was it, whose was, Lauri's PR. Let's see, yeah, yeah, right, right, yeah. He was on the call this morning, and.
D
If that looks good, then that determines the direction that Nikita will go on the extension.
D
Stuff, but it didn't look, I mean, it's definitely, I wanted to give it, I'm gonna try to run it locally, because I was just in there dealing with our, I've told you about our signed, my signed jar file, was.
D
Oh, it's not merged yet, yes, because I need to add tests. What motivated this PR was actually what I found with the signed, with the startup performance: there's an initial, right, the JVM loads the agent at the beginning, and of course I can't, surprise.
D
I can't do anything about that, signed jar file verification, but then, beyond that, there were a couple places, both in our distro and in OpenTelemetry, where it was reading resources from the agent jar, and that would also trigger the jar file signing verification.
D
Yeah, it would cache it at some point, it would cache the jar file, so it wouldn't do it, like, every single time you access a resource, but the first time you access a resource, it would. So essentially the point of this PR for me was, on the HttpURLConnection instrumentation and the executor instrumentation, which both get applied during premain, both get applied really early, to make sure that those don't trigger any resource lookups.
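For illustration, the kind of call that triggers this is an ordinary resource lookup against the (signed) agent jar; the resource name below is hypothetical:

```java
import java.io.InputStream;

public class ResourceLookupExample {
  public static void main(String[] args) throws Exception {
    // Reading any resource out of a signed jar this way forces the JVM to
    // verify the jar's signature the first time, which, per the discussion
    // above, is slow during premain on Java 8 where the JIT isn't running.
    try (InputStream in = ResourceLookupExample.class
        .getClassLoader()
        .getResourceAsStream("some/agent/resource.properties")) { // hypothetical
      System.out.println(in == null ? "not found" : "read " + in.available() + " bytes");
    }
  }
}
```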
D
And then once we're out of premain, the jar file signing verification is not a problem, because it's fast, because the JIT compiler is up and going. It's only Java 8, and only in premain, that the JIT compiler doesn't run, and so jar file signing verification is painful there, yeah, so yeah. So I, I, I was gonna take this one for a spin, but overall I like it. I mean, it seems more straightforward, loading from the jar file, as opposed to using the URL handler, yeah, the URL.
D
Yeah, yeah, it's mainly, what we were using before was that, that weird thing of the x, yeah, this, our own URL handler, which is cool, like, a cool kind of thing, but it's also confusing.
D
That's okay, you know, our own, I, you know, you, you tackle those, Instrumenter API.
D
Yep, yeah, and then, yeah, so I think we'll target next week for 1.2.0. I have not done what I said I was gonna do, which is push changelog PRs, so maybe I'll try and do that before, to get us, get us more prepped.
D
I don't think there's, oh, the only one that.
D
Any good shows I should put on my queue?
B
So there's a, are you familiar with this anime called Kenshin? It's a samurai story, it's a series of movies. So they made the, they just released the fourth movie, which I'm gonna watch today, so I re-watched the first three ones. And so it's an anime, but they also have some live-action movies based on this story, and they just released a new one. So that was one of the ones I was watching. They have it on Netflix, in Japan at least; I don't know if they have it in the US, though.
D
Yeah, yeah, I don't know, I like the, I like the, like, the hour-long, or, I guess, two-hour-long, like, a little bit more breaking.
D
Cool, yeah, I'm always trying to add new stuff to my queues across, like, like, always: they've got Netflix, Hulu, Amazon Prime, just subscribed last week or the week before, added HBO.
D
So right now I'm watching HBO shows, since there's no, no.
D
Not right now. I watched the first season of Mandalorian. I wanted, I want to subscribe again to watch.
D
Yes, that's one of the most listened-to things on my Spotify, still, years later.