From YouTube: 2021-04-08 meeting
Description
No description was provided for this meeting.
D
Tyler, you're here, can you explain? Since we have an empty agenda, I'll just jump in, Tyler. Can you explain what the agent caching pool strategy is and why it's needed?
D
I mean, I'm trying to understand how exactly ByteBuddy finds classes, well, type descriptions, for its magic. If I have some instrumentations loaded from our usual agent class loader and some instrumentation from another class loader, I don't quite understand how ByteBuddy decides which class loaders to scan, or to ask for class files. And I have a suspicion that this pooling strategy plays its role in all of this.
E
I can't remember if it's called on every class load or every non-ignored class load, but yeah, the cache can have a very significant impact on startup performance.
E
So I don't remember; I don't think ByteBuddy has a default cache loading mechanism.
E
The type definition.
E
There were two separate caches, where it used the class loader to look up into a separate cache, so it was two levels. But the problem with that is it made it much harder for us to put a maximum size on it. We wanted to constrain the size of this because it can get really big, and eviction strategies are hard, so we made it into a compound key. That way we could constrain the size in a simpler way.
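The compound-key idea E describes can be sketched in plain Java. This is a simplified illustration only, not the agent's or ByteBuddy's actual cache; all class and method names below are invented, and the LRU policy is just one possible way to bound a single-level map.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the compound-key cache idea: instead of a per-classloader
// cache of per-type caches (two levels, hard to bound globally), use a
// single map keyed on (classloader, type name), so one maximum size
// covers everything.
public class CompoundKeyCache {

    // Compound key: the class loader identity plus the type name.
    static final class Key {
        final ClassLoader loader;
        final String typeName;

        Key(ClassLoader loader, String typeName) {
            this.loader = loader;
            this.typeName = typeName;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Key)) {
                return false;
            }
            Key k = (Key) o;
            return loader == k.loader && typeName.equals(k.typeName);
        }

        @Override
        public int hashCode() {
            return System.identityHashCode(loader) * 31 + typeName.hashCode();
        }
    }

    private final int maxSize;
    private final Map<Key, String> cache;

    public CompoundKeyCache(int maxSize) {
        this.maxSize = maxSize;
        // An access-order LinkedHashMap gives a simple LRU eviction policy,
        // which is easy precisely because the cache is one flat map.
        this.cache = new LinkedHashMap<Key, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Key, String> eldest) {
                return size() > CompoundKeyCache.this.maxSize;
            }
        };
    }

    public void put(ClassLoader loader, String typeName, String description) {
        cache.put(new Key(loader, typeName), description);
    }

    public String get(ClassLoader loader, String typeName) {
        return cache.get(new Key(loader, typeName));
    }

    public int size() {
        return cache.size();
    }
}
```

The point of the flat compound key is visible in `removeEldestEntry`: a global maximum applies to the whole cache, which is awkward to enforce when entries are split across nested per-classloader maps.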
C
So, do you want to tell people sort of the big picture of what you're working on there, Nikita?
E
It can load a class file by loading it as a resource, yes. And I believe that does not work in the case of the bootstrap class path. So if you're trying to load it as an extension, on like an extension class path, I don't know if that's what you're referring to, maybe that has a different behavior. I'm not sure it's gonna be our own.
E
So would that class loader be able to load the expected classes as a resource instead of as a class?
E
So that's the way that ByteBuddy works, to load the class file without actually loading the class, yeah.
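Reading a class file as a resource, as described here, needs nothing beyond the JDK: the `.class` bytes can be streamed through a class loader without ever defining the class. A minimal sketch of the technique (this is not ByteBuddy's actual code; in ByteBuddy the `ClassFileLocator` abstraction plays this role):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads the raw bytes of a class file through a class loader's resource
// lookup, without triggering class loading. As noted in the discussion,
// resource lookup has caveats for classes on the bootstrap class path.
public class ClassFileReader {

    public static byte[] readClassBytes(ClassLoader loader, String className) throws IOException {
        // java.lang.String -> java/lang/String.class
        String resource = className.replace('.', '/') + ".class";
        try (InputStream in = loader.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("class file not found as resource: " + resource);
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }
}
```

Any bytes returned start with the class-file magic number `0xCAFEBABE`, and no class initialization happens along the way.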
C
Okay, anybody, anything on people's minds today? Welcome, consolidated... I know it's mostly the same group of people, but welcome to the consolidated meeting time.
C
So yeah, moving forward past 1.0. I think Anuraag has a PR, and I didn't check, haven't checked this morning, the latest... oh, looks like that got merged.
D
You have a meeting today with him as well, yeah?
C
Yep. If he is not going to make it, I will make it, or I will ask you to make it, yeah.
C
Cool. Maybe what I'll do today, since Anuraag's day is over already, is start... I'll try to work on the release notes this afternoon.
C
That way, and then you or...
A
No, that's not the way it works. I mean, sometimes I wish it would work that way, but the way it works is, I just make sure every couple of days I go through and update the changelog.
A
I mean, I wish somebody else would help out, but it works really well with my GitHub workflow, because whenever I come across a PR, that's when it gets merged, and I just put it in right then.
C
I did notice the release notes were really good. I was reading through them, and you can tell somebody spent time on each one to make it make sense.
A
Yeah, this is why being a maintainer is a lot of work.
C
Which I know, when we leave it to the end, tends to not be as nice; we'll mostly copy-paste the PR titles and tweak them a little bit.
C
All right, I will also ask Anuraag's opinion on that, but it seems like it would be a good thing, make our releases easier to send out.
C
Yeah, that's a good point. So I discussed last week with Anuraag, and he was pushing back a little bit on the faster release cycle. Well, not so much the faster release cycle, but getting out of sync with the SDK version number, just because it then is an extra matrix of versions that we and the users have to keep in our heads. And even though the java agent 1.1 will support earlier versions of the SDK, it won't support later versions.
C
So there is some version tie there. We did discuss it in Wednesday's meeting, and what we decided for now is to stay in sync, and then the first time something comes up where, oh, you know, a vendor wants to release the instrumentation sooner, we will revisit and decide if Anuraag wants to release the SDK at the same time to stay in sync, or if we want to go out of sync.
C
At that point we just didn't have a really strong, specific case; I think it will help to have a specific need to discuss and decide on at that time.
G
Okay, yeah, that makes sense. I'm wondering if we shouldn't also have, or maybe just tag onto that, a minimum release cadence: like, if it goes more than a month, should we release? But then that still has the version number problem, like if...
C
Yeah, and one of the thoughts behind wanting to release faster on the instrumentation side is that we're not very stable still; things are changing a lot. But if we can kind of get through this time until we are more stable, then that will alleviate that need, or that desire.
G
If there are no other agenda items, I was going to say that, Watson, you should share a link to that blog post, because I thought it was really good: the one from DoorDash, yeah, the batch span processor work.
A
I mean, I kind of had to make a comment to him apologizing for how persnickety we were being. Just that, you know, this is kind of the most critical piece of tracing infrastructure in the entire SDK, so we needed to be extremely careful with making changes and make sure that everyone fully understood exactly what was going on.
D
I wonder when future blog posts will stop starting with "What is distributed tracing? What is OpenTelemetry?", yeah, and just dive in.
C
In like a decade, once it's no longer... once there's something new.
C
Yeah, one of the things I thought was really interesting from this work, that I hadn't run across before, was the idea of benchmarking at a fixed rate versus benchmarking for throughput. I've seen benchmarking at a fixed rate from JMeter or from an external source, right, you want to measure at, you know, a thousand requests a second. But that's typically at high load; you're kind of stressing the system. Versus in this case it was intentionally putting wait times and gaps in between to measure.
A
Yeah, and certainly it feels like it's kind of stretching what JMH was intended to be used for, but it actually ended up working remarkably well, I think, as a way to measure CPU overhead. You have to hook up a profiler to it at the same time to get anything useful out of it. But one thing that's nice: because we have another BatchSpanProcessor implementation that's in PR at the moment, that benchmark is really nice to run even without a profiler.
A
Just to show that, oh look, it hasn't changed; this is working exactly the same way at the fixed throughput. We have not changed the throughput, we have not changed any of the kind of general behaviors in that fixed-throughput scenario, everything works the same way it did. So even that is relatively valuable. Without a profiler it doesn't actually do any CPU benchmarking, but it does at least give you the confidence that you haven't changed the behavior in a really gross sense.
F
So you should just drive down the road and knock on Aleksey's door and chat with him about it. I don't even know where he lives these days.
A
Yeah, that's an interesting question. As far as I have seen, he doesn't seem very open to adding new features. For example, you can't even specify the number of threads as a parameter; it's only done per benchmark, which is kind of super annoying. You can't parameterize the number of threads, and he said he doesn't want to do that, even though parameterizing the number of threads would be incredibly useful for benchmarking.
G
So it's super annoying, yeah. The fixed-throughput thing is interesting, right? Because if your code is capable of doing, like, 2,000 a second and you cap your benchmark at 1,000 a second, because you don't want to hit that ceiling, right, if you're just trying to micro-benchmark the code itself.
G
If you step back from that and you say, well, if you can do 2,000 a second, why do you care about the performance at all at 1,000? That's kind of the flip side of it: if you can do so much more than that, then why do you care about the performance down here? Well, because...
A
So the real reason is that I think we all hope we aren't actually running in production at the absolute, you know, breaking point of our application, right? You want to run in a relatively comfortable zone, and understanding the characteristics of what happens when you are running in a comfortable zone is actually really important, because then you can see what happens when you get outside the comfortable zone, but you can also start optimizing for that comfortable zone.
A
Yeah, so anyway, it's definitely a novel usage of JMH. It works, but it requires that you basically put some extra sleeps into your benchmarks. It's not done via sleep; it's done via Unsafe, whatever the method is, with the nanos.
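A plain-Java sketch of that pacing trick, assuming the "Unsafe method with the nanos" is `LockSupport.parkNanos` (which parks via `Unsafe.park` under the hood). This is an illustration of the fixed-rate idea only, not the actual JMH benchmark from the PR:

```java
import java.util.concurrent.locks.LockSupport;

// Drives an operation at a fixed rate instead of maximum throughput:
// each iteration is scheduled against an absolute deadline, and the
// spare time between operations is burned with parkNanos rather than
// Thread.sleep, so the gaps are the deliberate "wait times" described
// in the discussion.
public class FixedRateDriver {

    public static int runAtFixedRate(Runnable op, int opsPerSecond, int totalOps) {
        long intervalNanos = 1_000_000_000L / opsPerSecond;
        long next = System.nanoTime();
        int executed = 0;
        for (int i = 0; i < totalOps; i++) {
            op.run();
            executed++;
            next += intervalNanos;
            long wait;
            // park until the next slot; parkNanos may return early, so loop
            while ((wait = next - System.nanoTime()) > 0) {
                LockSupport.parkNanos(wait);
            }
        }
        return executed;
    }
}
```

Scheduling against an absolute deadline (rather than sleeping a fixed amount after each operation) keeps the overall rate steady even when individual operations vary in cost, which is what makes the "has the behavior changed at fixed throughput?" comparison meaningful.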
C
There was a lot of kind of cleanup-y type of work, a lot of PRs, but...
C
A lot of cleanup and test flakiness. Oh yeah, Nikita is continuing to stress the Netty...
D
...instrumentation, yeah. I continue to stress those different async frameworks in every possible way, yep.
H
Yeah, so we had the SupportabilityMetrics class introduced by John, and it was being used in the tracer to calculate how many spans were dropped. So in this PR I both copied the usage to the new Instrumenter API that Anuraag added, and added a new way to capture completely custom counters, like the one that we needed for the SQL statement parsing cache. Previously there was a trace log statement that was just logged into nowhere when we had a cache miss; right now we'll just increment the counter.
A
I think in general this approach to capturing this data is a lot better than logging, because if you log every entry, it just spams the log and it's ridiculous and not really usable. But if instead we just have well-named counters that get output as part of debug output, I think you get much more useful, well, much more human-consumable information in your debug logs.
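A minimal sketch of the counters-over-logging approach: hot paths increment a named counter and the totals are dumped once into the debug log, instead of one log line per event. The class and counter names here are invented for illustration; this is not the actual SupportabilityMetrics API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Well-named counters instead of per-event log lines: a cache miss or a
// dropped span just bumps a LongAdder, and a single human-readable
// summary is emitted as part of debug output.
public class DebugCounters {

    private static final Map<String, LongAdder> COUNTERS = new ConcurrentHashMap<>();

    // Cheap enough to call on hot paths; LongAdder is built for
    // high-contention increments.
    public static void increment(String name) {
        COUNTERS.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    public static long get(String name) {
        LongAdder a = COUNTERS.get(name);
        return a == null ? 0 : a.sum();
    }

    // One summary line per counter, suitable for a periodic debug dump.
    public static String dump() {
        StringBuilder sb = new StringBuilder();
        COUNTERS.forEach((name, count) ->
            sb.append(name).append(" = ").append(count.sum()).append('\n'));
        return sb.toString();
    }
}
```
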
C
Have you seen this in the wild yet?
C
Excluding our runnables: this is just to not propagate the context in our internal runnables, for example in our exporters, things like that, which is, I think, just an optimization. Nikita and Mateusz, do you happen to know if there was anything driving this more than optimization? And logging... oh, that's what you were... yes, logging! I think I awarded you 10 points for that guess.
C
...there or something, so yes, that's probably why. And for those, yes, there are no points for more flaky tests.
C
This is a new GitHub Action version that now supports... I think we added... you have to specify the JDK distro that you want to use. I know we picked Adopt; do you know what the other options are currently?
C
And hey, maybe a Microsoft distro will be available in the future also.
D
Yeah, by the way, there's an interesting thing happening with that setup-java. Gil Tene from Azul proposed that the setup-java maintainers use some Foojay public API for discovering different OpenJDK builds from different vendors, and the setup-java maintainers said nope, we don't want that. We will discover every build for ourselves, and we will just have a huge table where every build can be downloaded from, and whatnot.
C
Yeah, what's the Eclipse one... OpenJ9?
C
I think this was, yeah, some deduplication of testing, just minor stuff: a minor test cleanup, test cleanup, test cleanup. Oh, some of this cleanup was driven by our backlog. I was cherry-picking easy things in order to close issues, and look at that: we're under 200.
D
Yeah, I remember the time we were at almost 100. We have to do something.
C
Nice. Let's go back, and then, if we have time, I do want to get to instrumentation.
A
This is just a listing of the workflow runs; I thought I'd throw it in there as well. But, Austin... so we've been trying to figure out how to keep the website documentation in sync with the actual versions of the code and the quick starts and all that kind of stuff.
A
So the proposal that Austin has put together is that each language, each repository, would maintain its own copy of the website docs in their own repository, so it could be updated as part of releases, etc. And then there's a GitHub Action, which Austin plugged in for Java, which will open a PR in the website repository, basically pushing whatever the latest version is over there. Austin and I kind of worked through trying to figure out how to make it work.
A
Yesterday we did get it working, finally. It took about nine or ten different iterations of this specific action, and right now there's a little bit of weirdness because it has to have a personal access token in order to do the writing, and it's using Austin's personal access token, Austin specifically, who has rights to actually create PRs programmatically over there. But it is working, it is currently functional, so that's cool.
C
And what is the... oh, is the idea to start moving some of the docs that are in the main repo into...?
A
So once this has been pushed properly out there, I am going to change our quick start doc to just point at the website, rather than having two copies of it, because what is going into that manual instrumentation document there is basically the quick start. It is our quick start guide. So I'll just point it over at the website; then we only have one copy of it and I have to maintain it in one place. Cool.
A
So I put that one in. I ran that one last night after merging things, and you can see it just basically pushed over the contents and opened the PR for it.
C
Cool, and what is the advantage of keeping it in our repo versus making those updates directly to the website repo?
A
It just means that you, as a maintainer, or the maintainers of instrumentation, only have to do it in one place. They don't have to remember to go to a whole different repo and put in PRs over there and potentially maintain the docs in two places.
A
It just makes it so that maintainers have one place to work rather than having to work in two places. Because, I don't know if you've noticed, what currently is in the documentation here is incredibly out of date; nobody has been keeping this stuff up to date. If you look at the getting started for Java, I think it still points at 0.17. Or wherever it is... the quick start, well, that one just points over, but there is... I think it's in the manual section.
C
So it's less of a doing-it-in-two-places thing, since we're going to point people into the docs anyway, and more of a singular place where maintainers work and own the docs end-to-end, yeah.
A
It's much easier. We can probably automate version updates in the docs; we have a Gradle task right now that will update our readme, and we can also make it update the website docs as well when we do a release. Cool, yeah, so...
A
So people who want to use it have to include a grpc transport dependency, grpc-netty or grpc-netty-shaded, or, if they're using grpc-okhttp, whatever they might be doing, they have to specify that themselves, always, so we don't provide a default. And I'm not a grpc expert, but I think both Anuraag and Bogdan, who are grpc experts, are very reluctant to automatically have a transitive dependency included, because it will cause dependency conflicts so often with the way grpc works.
A
So Anuraag and Bogdan, and I've come along for the ride because it seems reasonable to me, have made the choice that we don't include a transport transitively by default, to avoid having the continual questions about version conflicts with grpc.
A
But obviously the Sleuth team disagrees and is very vehement about it, and Anuraag and I basically tried to say that our position is that distributions, like the agent, or any particular agent that needs a grpc implementation, or Sleuth, they're the ones that should include what their users will want to use as the default transport, rather than the SDK having that opinion.
A
But if people are willing to have strong opinions in the other direction, we're not gonna die on that hill. At least I'm not; I can't speak for Anuraag.
A
And it's interesting that in the counterexamples that Yonatan provided, like, well, if you use Thrift, or if you use Zipkin, you get a default transport, that transport is also based on stuff that's built into the JDK. It uses a URL connection, HttpURLConnection. It doesn't use any third-party dependency, so it doesn't introduce any sort of dependency conflicts.
A
It may be non-optimal and not very good, but it means it will work out of the box, and it is also something that's built into the JDK. Unfortunately, there's no grpc transport built into the JDK. Or fortunately, I don't know... unfortunately, probably. Anyway, if you have an opinion on this, please comment on the issue.
C
Yeah, sure. So do you want to share? Do you want me to... oh.
H
I think... oh, you can share. So this is actually my PR that uses the API that Anuraag already did in a previous one, but...
H
Okay, so I think Anuraag has had this idea for quite a bit of time, and recently he made this PR with the new approach. If you open the files changed, there's Instrumenter, or InstrumenterBuilder, whatever.
H
It's an implementation of the idea that we can have sort of a single class for all telemetry operations. So right now it's only tracing, and it's supposed to support metrics in the future. And even though the implementation might seem convoluted, it has quite a lot of fields, a lot of parameters and so on, it's pretty simple actually, because it just has two phases: you start the operation and you end the operation, and it's completely generic.
H
In a sense, all instrumenters are supposed to be exactly the same type, supposed to have exactly the same interface, which contrasts with our current Tracer API, in which almost every instrumentation had a separate tracer class with slightly different start or end methods, and they were impossible to define generically.
H
So the way Anuraag has designed this is that the Instrumenter is actually composed of several building blocks. There are several strategies for extracting, for preparing a span name, or finding out the span kind from the request, the input of the instrumentation, or finding out the status, or adding attributes. And the way you're supposed to use it...
H
...is you just create the instrumenter builder, put in all the building blocks that you want in your instrumentation, create a new instrumenter, and use the generic API. And I think the really cool thing about this is that the semantic conventions describe several attributes, or span naming, that should be used, and before, with the Tracer API, the instrumentation was just supposed to...
H
...adhere to those. Right now we have several common components that implement the HTTP span-naming convention or the HTTP attributes that are conventional, and if you instrument an HTTP library, it's just reusing a few common classes, instead of trying to make it work with HTTP tracers.
H
And in this PR, Anuraag also redid the Armeria instrumentation, so I think it's a really fine example of how it works. There was ArmeriaTracing, or ArmeriaTracingBuilder, maybe one of them.
H
So we follow exactly the same semantic conventions for HTTP server and clients, and you just create those building blocks, put them into the instrumenter, and that's it.
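The design Mateusz describes, one generic start/end class assembled from small building blocks, can be sketched as a toy in plain Java. This is modeled only on the description in this discussion, not the actual Instrumenter API in opentelemetry-java-instrumentation; every name below is illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy version of a generic instrumenter: every instrumentation shares the
// same two-phase start/end interface, and per-library differences live in
// small composed building blocks (span name, attributes) instead of
// per-library tracer subclasses.
public class MiniInstrumenter<REQUEST, RESPONSE> {

    // Building block: derive the span name from the request.
    public interface SpanNameExtractor<REQ> {
        String extract(REQ request);
    }

    // Building block: contribute attributes on start and on end. Reusable
    // implementations of this interface are where semantic conventions
    // (e.g. common HTTP attributes) would live.
    public interface AttributesExtractor<REQ, RES> {
        void onStart(Map<String, String> attributes, REQ request);
        void onEnd(Map<String, String> attributes, RES response);
    }

    // Stand-in for a real span, just enough to observe the result.
    public static final class Operation {
        public final String spanName;
        public final Map<String, String> attributes = new HashMap<>();
        public boolean ended;

        Operation(String spanName) {
            this.spanName = spanName;
        }
    }

    private final SpanNameExtractor<REQUEST> spanName;
    private final List<AttributesExtractor<REQUEST, RESPONSE>> extractors;

    public MiniInstrumenter(SpanNameExtractor<REQUEST> spanName,
                            List<AttributesExtractor<REQUEST, RESPONSE>> extractors) {
        this.spanName = spanName;
        this.extractors = new ArrayList<>(extractors);
    }

    // Phase 1: start the operation.
    public Operation start(REQUEST request) {
        Operation op = new Operation(spanName.extract(request));
        for (AttributesExtractor<REQUEST, RESPONSE> e : extractors) {
            e.onStart(op.attributes, request);
        }
        return op;
    }

    // Phase 2: end the operation.
    public void end(Operation op, RESPONSE response) {
        for (AttributesExtractor<REQUEST, RESPONSE> e : extractors) {
            e.onEnd(op.attributes, response);
        }
        op.ended = true;
    }
}
```

The composition-over-inheritance point from the discussion is visible here: the instrumenter itself is final in shape, and an HTTP library instrumentation would differ from a database one only in which extractor instances it plugs in.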
H
I should have mentioned this probably in the beginning, with all the talk about building blocks, but all the inheritance was completely replaced with delegation and, what was the word... composition. Composition, yeah.
H
Yes, so we have two instrumentations merged right now that use the Instrumenter API, and we'll probably migrate all of them in the next months. Probably. And I'm still not sure if this covers all the weird cases that some of our instrumentations have, they're various, but well, we'll see.
E
I think this looks really cool. It's a little bit more in line with what I had originally envisioned, in terms of having typed span builders and stuff like that. So I think this is cool.
C
Yeah, I am looking forward to discovering all those edge cases that we have, but there's only one way to discover those. Well...
C
Cool, thank you, Mateusz. Anything else? We're two minutes past our five-minute cutoff.
C
Yeah, I told them I'd try out that semantic convention generation script today.