From YouTube: 2021-05-06 meeting
C: Yeah, I finally managed to get a haircut after, I don't know, two or three months, because all barbers, hairdressers and cosmetic salons were closed till last week. Lockdown.
B: Back from vacation; just figuring out exactly what we want to get into the release.
B: So there are a couple of last pending PRs, that we probably want if we want the release to be up to date with spec 1.3, which was released yesterday.
D: Excuse me, all right: Nikita.
B: The only thing that's a little bit sticky is that there was a change, or rather there has been an outstanding issue, in Java to verify that all of our propagators do not change the context when they fail to parse the carrier on extraction. That actually does impact the AWS X-Ray propagator, because that was not its existing behavior. So I just need to talk to you about that.
B: No, this has actually been what the spec has said for a long time. We just haven't gone through and validated all of them. I put in a PR yesterday that made sure all the tests were there, and found that the AWS X-Ray propagator wasn't doing that; also the baggage propagator wasn't doing that properly.
D: Cool, and then for the instrumentation we will probably target releasing sometime next week on top of that.
D: With that, Nikita, do you want to share?
A: I don't want to share, just, please, go ahead. But one question that I think we have to discuss is: as you remember, if you paid attention, there are two ways to actually load and modify classes, both from the agent class loader and from the extension perspective.
A: Essentially, what Laurie's pull request is doing now is for the agent class loader, and the other is what the agent class loader used to do in the URL connection handler: whenever we open a connection, we already modify bytecode. So, two different approaches. One, in my opinion, is significantly less code, the URL handler one, than the other; the class loader one may be more correct and more straightforward, maybe, because the URL handler opens a connection not just when we load classes and resources, but also just to verify the existence of resources.
D: I mean, like loading the resources directly, not using a URL handler? I don't recall, to be honest; I'm sorry.
A: No, no worries.
G: Yeah, I think I can explain it a bit. I'm pretty sure that what people usually do is that they just shade all their dependencies inside the agent, by renaming the packages.
G: So, to avoid that, I guess, is why the prefix is added: so that nobody else can accidentally get access to our classes and resources, and we have a separate class loader to load our stuff. It's probably to isolate all of our stuff from the normal application.
G: Yeah, but still, then the problem is with all the properties files and stuff like that, that somebody...
G: And, okay, in a typical agent you wouldn't have the separate agent class loader. You would just shade everything inside the agent, and then the agent would probably be loaded by the boot class loader. But that wouldn't help us with the multi-release thingy, because, as far as I know, the boot class loader doesn't support multi-release jar files.
G: You end up dumping all the class files to the root of the jar file, and, I don't know, it's...
F: Did you get the answer on the class loader versus URL handler?
G: My understanding of this is that, basically, people don't want to write class loaders, so using the URL handler probably allowed them to just skip writing some of the class loader code and make it somewhat shorter and...
G: As I have looked at a lot of class loaders at my previous job, I'm not afraid of writing class loaders. So for me it's even more natural to have everything inside the class loader. Then it's a bit easier to understand, at least to me, what's going on; the URL connection one is, I think, a bit too magic for me.
F: If you have the expertise, and you say that this is better, I don't really have anything to say to disagree. I mean, having it be an actual class loader is probably the more idiomatic and correct way of doing it.
G: Yes, I think I copied it from the existing code somewhere.
F: Because I seem to recall, when we initially did this, that we referenced that, and I don't remember exactly if we did it the same way or not. I know that ours has evolved a little bit over the years.
F: So, at one point we were actually copying the jars to an external file system and then loading them as jars directly, but that had the downside of needing a writable file system. So then we evolved it to load the classes directly from within the agent jar, but in a weird place, and that was in order to work better on read-only file systems.
A: Did anybody try to put... so, if you put a jar inside the jar instead of the inst folder: we put a jar inside the jar, and we just... we have a URL for the jar inside the jar, and just use the usual URL class loader?
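For context on why that route is awkward: the standard jar: URL handler does not descend into a jar that is itself packed (compressed) inside another jar, so an agent-style loader typically streams the inner jar's entries out itself. A rough sketch of that, with illustrative names, assuming the inner jar is stored as an ordinary entry of the outer jar:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;

final class NestedJarReader {
    // Returns the raw bytes of `entryName` inside an inner jar, which is
    // itself stored as `innerJarName` inside the outer jar's stream.
    static byte[] readNestedEntry(InputStream outerJar, String innerJarName,
                                  String entryName) throws IOException {
        try (JarInputStream outer = new JarInputStream(outerJar)) {
            for (JarEntry e; (e = outer.getNextJarEntry()) != null; ) {
                if (!e.getName().equals(innerJarName)) continue;
                // The outer stream is now positioned at the inner jar's bytes;
                // do not close `inner`, as that would close `outer` as well.
                JarInputStream inner = new JarInputStream(outer);
                for (JarEntry ie; (ie = inner.getNextJarEntry()) != null; ) {
                    if (ie.getName().equals(entryName)) {
                        ByteArrayOutputStream out = new ByteArrayOutputStream();
                        byte[] buf = new byte[8192];
                        for (int n; (n = inner.read(buf)) > 0; ) out.write(buf, 0, n);
                        return out.toByteArray();
                    }
                }
            }
        }
        return null; // not found
    }
}
```

This is essentially the kind of work a dedicated agent class loader has to do that a plain URLClassLoader does not do for nested jars.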
D: I think that's why, in Spring Boot, the jar has to be uncompressed inside.
F: So, I do think that it would be really cool to be able to have multi-release jar capabilities of dynamically loading the correct class across multiple different versions. So, whatever we need to do to get that working, whether it's making the existing class loader detect that automatically, or having the class loader be more supportive of the inherent Java way of doing it, I support that.
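For reference, the JDK's own jar APIs are multi-release aware when asked: opening a JarFile with a Runtime.Version makes entry lookups resolve to the versioned copy under META-INF/versions. A small self-contained sketch (the entry names are made up):

```java
import java.io.File;
import java.nio.file.Files;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;
import java.util.zip.ZipFile;

final class MrJarDemo {
    // Builds a tiny multi-release jar with a base entry and a Java 9+
    // override, then shows which copy a versioned lookup resolves to.
    static String versionedRealName() throws Exception {
        Manifest mf = new Manifest();
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        mf.getMainAttributes().putValue("Multi-Release", "true"); // required marker

        File f = File.createTempFile("mrjar", ".jar");
        f.deleteOnExit();
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(f.toPath()), mf)) {
            out.putNextEntry(new JarEntry("pkg/Foo.class"));                     // base copy
            out.closeEntry();
            out.putNextEntry(new JarEntry("META-INF/versions/9/pkg/Foo.class")); // Java 9+ copy
            out.closeEntry();
        }
        // Open the jar the way the current runtime would see it.
        try (JarFile jar = new JarFile(f, true, ZipFile.OPEN_READ, Runtime.version())) {
            return jar.getJarEntry("pkg/Foo.class").getRealName();
        }
    }
}
```

On a Java 9+ runtime, the lookup for pkg/Foo.class resolves to the versioned entry. A custom agent class loader that wants multi-release behavior would have to perform (or delegate to) this kind of versioned lookup itself.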
D: Cool, so that helps. So, Nikita, your suggestion is to first review Laurie's multi-release jar PR, decide on that. I...
F: I was actually just going to ask a related question, specifically around the multi-release jars.
F: With regards to a service loader file: if you define a multi-release jar that, you know, has multiple different versions, and one service you want to load with, for example, Java 11, and one version you don't; so, for example, our instrumenters.
F: Let's say we were instrumenting HttpClient, the new Java version of that. So, in some cases you want it added, if it's, like, you know, Java 9 plus, or I can't remember exactly what version it was added in, but in Java 8 you don't want that. So, does the service loader have the capability of being dynamic like that, for multi-release jars?
G: I don't know that, but for some reason I think it could.
G: No, actually, I think, for the multi-release jar files, if a resource name starts with META-INF, then that's special.
F: I mean, it could potentially be possible that it has all of the correct classes defined for it in the multi-release, in the service loader definition, but then in the default ones it's just defined with a no-op, maybe, versus the correct ones otherwise.
F: Oh, I see what you're saying. What I'm saying is: would it be possible to get to the point where even the instrumentation class could be loaded, and compiled with Java 11, but ignored via the multi-release jar capability if it's not...
F: Okay, so I guess it's being handled differently than the way we did it at Datadog.
F: Yep, right. So, the way that we handle it at Datadog is: the instrumentation class is defined as Java 7, because we want to be able to load that for, you know, Java 7 classes, because we haven't made the update yet, but anyway, so, for example, Java 8. But the advice class is what can be compiled for other versions, and that will only be referenced if the instrumentation class actually gets applied.
E: This is skipping back in time a little bit, but as far as the jar-in-jar thing goes, I wanted to confirm that I remembered it right: the New Relic agent also provides jars in jars.
E: What's Iguanadon? I showed this a while ago; it's a stupid code name for measuring agent overhead, and I did some updates on that last week, and I was going to share those with the community. Some people have already seen this, but I can share it, if you want me to.
E: It, through GitHub Actions, spins up a Docker Compose environment with Spring PetClinic REST and an OTel collector and a test runner, and the test runner will SSH into the PetClinic container and say: hey, run this, start up Spring PetClinic with no agent. Then it will do a test pass, and this might be too verbose to even look at, but then it will do a test pass and gather those results, and then it will fire up PetClinic again with the agent and do a test pass and then gather up those results, and then it munges the data and tucks it back into the GitHub branches, or GitHub Pages branch.
E: I have to just kick them off sort of manually, but I kicked these off earlier, and so it's just using whatever the latest release of the agent is, and now I have some UI that can help us look at this data. So we can look at allocations, for example, over time, and this last data point is today. And this is over the lifespan of the entire test run, so from the time that PetClinic starts up to the time that it shuts down.
E: How much memory was allocated by the JVM. And so the baseline here, with no agent: it did, I guess, a little over 11 and a half, almost 12, gigs, and then 15 with the agent, right. So that's allocations. We would expect that to be higher, because we're doing more, right, but this sort of allows us to quantify it over time. We're also looking at garbage collections: this is the cumulative time spent doing garbage collections over the life of a test pass. Let's see. So this is like the opposite of micro-benchmarking, right?
E: This is more like macro-benchmarking. This is over a relatively short lifespan, like a minute or two, where we throw hundreds and hundreds of REST requests at different endpoints. What does the garbage collection look like? What could somebody expect their typical application to maybe behave like? And again, every app is different; I think you have to take this with a huge grain of salt. I have a little reminder here.
E: That's, like, we very much need to qualify what it means, right, what overhead is. But here's what else we look at: heap usage, again over the entire life, min and max. There's a weird data point, but you...
E: You know, it takes hundreds of milliseconds, and then... so that's throughput, and then startup time. This is pretty cobbled together, but I'm just capturing the number of seconds until I detect that Spring PetClinic is healthy. So this is maybe the worst metric; I think, out of all of these, this is the one I trust maybe the least, but whatever, okay. So I wanted to share that and just open it up for discussion.
B: You really need to introduce either a performance enhancement or a performance regression, so we can actually see this thing showing something interesting, rather than just the same data over and over again.
E: No, it's totally true, and this is all for the same release. It will be interesting to see, you know, after a release, if things change at all. And I need to set this up to run, I don't know, maybe once a day, and that way we can really see it over time.
E: Yeah, I can show you that. So I'm using this tool called k6; here is a single test pass, right. So this defines what kind of amounts to a user workflow.
E: So the first thing I do is, like, enumerate all of the specialties, right; in this PetClinic, each vet might have some specialties. We go through and add a new veterinarian.
E: The main repo, yeah, that's the thing: what do we... I guess we can talk more about it, maybe tonight, but yeah, if people want me to add that, I am very much into adding it and helping.
D: Oh, I was going to ask: do you think... like, I noticed, you know, it varies a lot each run, and I'm assuming that this is all sort of the same code base; all those runs are from the same code base?
E: I think part of that is due to noisy neighbors; just being in a cloud environment, you're going to get variance there. The fact that I'm reusing the same container for both test passes, I think, helps with that a little bit, and because a single test pass does a bunch of different operations, I think some of that is averaged out already, but there is, of course, additional smoothing or averaging that could be done. Yeah.
D: Cool. I mean, I love the idea of getting something in the main repo, and then we can, you know, kind of play with it over time, trying different stuff. Although, I guess, one thing that I don't know is how we play with it over time without affecting the historical results. Also, yeah, I mean, maybe we create a v1 and a v2 directory.
E: Anyway, hopefully that's somewhat helpful. This question keeps coming up over and over, about, like, well, how much overhead is this going to introduce, like, we're doing some provisioning. And I don't know that this data can ever be used for provisioning, right; you have to really try it and find out, that's the real answer, but hopefully it can give us at least a starting point.
D: Yeah, I know we discussed previously, like, whether we even wanted to publish benchmarks, and, you know, I mean, I haven't seen other APM vendors publish a benchmark like this. Please correct me if anybody's seen one; I don't think any vendor wants to, yeah. So I hesitate a little bit on whether we want to publish them as benchmarks.
D: Super useful for us internally, measuring regressions, and also having something, you know, as we start trying to improve the performance, something to measure that against.
E: Whenever, cool, yeah. There was, I forgot about this, there was a rando that opened a PR, no, opened an issue, and they actually stood it up and ran it locally. This is before I had any of the GitHub Actions automation, and they ran it locally and they were kind of stoked on it, and I don't remember where they were from, but they just showed up and had run it.
D: Cool, all right. Well, let's see, one more thing on the agenda I have added; I don't know if there were any further discussions on this last night. Okay, Nikita had a comment. This was about modeling the client spans, in particular for OkHttp.
D: You know, more of that interaction, but it also means that we capture some things that aren't really related to the actual underlying HTTP call, and so, you know, it's kind of pros and cons. What I suggested, and I think we'd kind of discussed this before in the context of on-response callbacks, is for the client span to more closely represent, to be closer to, the underlying over-the-wire call.
D: So, the underlying library call, and not capture any callback behavior. For on-response callbacks, I think it's pretty clear, at least to me, because in on-response callbacks a lot of times you do something with that response, and you make a database call or you do something else, and we don't want that to be in the client span.
B: I've got a related comment, question, feedback, something, I'm not sure exactly the right way to phrase it, with the OkHttp library instrumentation, which is just an interceptor.
B: I found myself, when I was hacking around, wanting to take the span that is generated by that OkHttp interceptor, but I wanted to enhance it and add some more attributes to it, based on headers, and in order to do that I had to kind of bend over backwards and write a hacky little interceptor with a wrapper for the chain, so that I could hand a synthetic chain off to the built-in, the basic, interceptor and then do some enhancements to it.
B: That gets to work with the span that's in place, and know how to do that in a way that is reliable; like, if I have to remember to put this one before that one or something like that, it's going to be confusing. But if our library interceptor enabled, like, injecting a callback or something, that could be a useful feature.
D: So I think that this use case has a better solution with the new Instrumenter API, where you can inject your own attribute extractors, so...
C: So that's one thing that should be possible. I mean, we only have Armeria tracing right now, but it already exposes a method to add completely custom attributes.
D: Back to this, what is... trying to skim... what was your response, Nikita, your thought?
A: I mean, I look at that from the point of view that in OTel we are kind of trying to formulate the general recommendations for how to write instrumentation, which also implies that we have to have some recommendations about modeling. So that's one of the questions that we should have a recommendation about, and, if I think about how we want to recommend writing those client instrumentations, taking into account those small multi-tier instrumentations, like a database call over HTTP, etc., etc.
D: Yeah, because an interceptor like that is just another way of doing an on-response callback; there's just no difference.
D: So, is it fair to summarize: on-request should be inside client spans?
A: But I still, I still think that on-response should be outside, exactly because you may want to take that response and go to the database, and that should not be part of the client span. But if you go to the database in an on-request callback, I still think that we want to capture that part of the call inside the client span.
B: But in this case, if on-response is outside the client span, then I can't do anything to decorate that client span based on the response.
D: Yep, all right, we hit five minutes till; any last thoughts? Topics?