From YouTube: 2021-09-02 meeting
C
Anyway, good to see you again, Evan. It's been a few years.
B
All right, so I threw some stuff on the agenda, just sort of updates and some things to chat about, but I wanted to start by asking if anybody has anything they wanted to raise. Oh, I see people are adding topics down here, yeah. I will bump people's topics up to the top.
D
So basically the Jaeger and OTLP exporters will open a network connection as soon as they're constructed. When we're dealing with something like native image, that's bad, because as part of the process of building the native image we essentially start things up, but we can't open sockets, otherwise GraalVM spits out at us and gets nasty.
D
So I've had that working for a while, and I was chatting with Ben this week, and he had some issues with the OTel agent and native image, a similar kind of late-startup problem. It's been on my list to do, so I'm proposing it upstream, kind of wanting to see (a) whether there is interest, and (b)...
D
I would imagine there's then a bit of a performance hit on that first export, because you're starting the exporter as well. That's...
C
Also, that's done on the background thread, right? I mean, it does the same thing with the span processor, right? The span processor is going to have that same issue on the first span, when it needs to do the swap and initialize the exporter, doesn't it? I mean, it ends up being same-same.
D
No. Basically, the way I've got it in Quarkus is that at what we call runtime (whether native image or JVM), when that starts up, it will use the SDK to create the Jaeger exporter instance, for example, and set it as the delegate on the late-bound batch span processor. So it's already there and running as soon as runtime starts, even if no span is triggered.
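The late-bound delegation described here can be sketched roughly as follows. This is a minimal illustration, not the actual Quarkus LateBoundBatchSpanProcessor: the SpanExporter interface below is a simplified placeholder for the real SDK type, and the buffering behavior is an assumption made for the sake of a self-contained example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Simplified placeholder for io.opentelemetry.sdk.trace.export.SpanExporter;
// the real interface exports batches of SpanData.
interface SpanExporter {
    void export(String span);
}

// Build-time-safe processor: constructing it opens no sockets. The real
// exporter is swapped in as the delegate at runtime startup.
final class LateBoundSpanProcessor {
    private final AtomicReference<SpanExporter> delegate = new AtomicReference<>();
    private final List<String> buffered = new ArrayList<>();

    // Called once runtime starts (native image or JVM), when it is finally
    // safe for the exporter's construction to open network connections.
    synchronized void setDelegate(SpanExporter exporter) {
        delegate.set(exporter);
        for (String span : buffered) {
            exporter.export(span);
        }
        buffered.clear();
    }

    synchronized void onEnd(String span) {
        SpanExporter d = delegate.get();
        if (d != null) {
            d.export(span);     // normal path once the delegate is installed
        } else {
            buffered.add(span); // before startup: remember, but touch no sockets
        }
    }
}
```

The point of the pattern is that the object graph wired at build time is inert; only the runtime-startup call to `setDelegate` triggers anything that needs the network.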
B
And Ken, how is that different from doing the same thing with a delegating span exporter and swapping in the real span exporter at the same point in time?
C
You know, the reason why it might be better to do it at the exporter level is because we don't necessarily know which span processor people are going to be hooking up. They could be hooking up the simple one, or, like, there's the executor service span processor, right? There's a whole bunch of different span processors they could want to use, but they would all nest, they would all, like...
C
It's really the exporter that's doing the network connection, so it feels like that might be the better place to do the hacking, kind of, in order to work around this issue. You know, I guess the one question I have, and this is just a higher, meta-level question: for other things that need to make network connections, how is that dealt with in Quarkus or these other ahead-of-time compilation systems?
D
The extensions we have specifically do things in a way that means the connections are only opened when runtime starts, and not as part of the build that produces the native image.
G
For the build side, it's a little off topic, but there are such things as CRaC, you know, coordinated restore at checkpoint and all that kind of thing, which is trying to solve some of these same problems. It's a big can of worms, and from my perspective I think I just want to identify a suitable architectural point where we can do late-bound delegation. I feel like this is an architectural pattern that's going to come up in a few places, so it'll be good.
B
Kenton, within the OpenTelemetry ecosystem, are span exporters the only thing that you've needed this for?
D
Yeah, so I think I made a note on the discussion that there would probably be others, like samplers and other things, but I haven't come across any so far. I know we had someone from the community contribute a bunch of AWS stuff to Quarkus for OpenTelemetry (I'm trying to find it quickly now), and it did everything at runtime to support that.
D
Oh, they even... okay, they also created a delayed Attributes to handle that, so a version of Attributes that was delayed, with a delegate inside it as well.
D
I do that from what we call a recorder in Quarkus. I can find it as well, so I can share it, but basically it's a class that generates bytecode, and it's a way to ensure that it's run at runtime instead of build time.
B
And so, with Quarkus: I know you're doing these extensions in Quarkus. Is that what most people are using when they're doing native images, or is there a broader native image ecosystem that we, within OpenTelemetry, should be trying to support?
D
I think, certainly, Quarkus is ensuring that all these pieces in the core work in native image and JVM, but obviously only if you're using Quarkus. So there's certainly a need for the wider Java ecosystem to help out on the native image side of things, and that's what some of those discussions Ben and I had around this were leading to, in terms of something like this potentially being useful for other runtimes.
G
I mean, the way that I've been looking at it is that there are really two separate approaches to it. You know, Quarkus already has a good model for this and a well-defined lifecycle, so, to the earlier point about the components which are doing the injection, we know what they are from the Quarkus point of view. I also sort of feel like there's a lower level, which is... maybe we'll just merge in the auto-instrumentation native image update, because it kind of fits into the same discussion.
G
We need to do the same things there. We need the ability to separate the transformations which need to occur at build time, and the generation of woven bytecode which has been natively compiled, from the actual network socket connection, basically. So that is a very low-level mechanism; it's not something where there is...
G
We can't necessarily rely upon there being a framework which has the lifecycle to do that injection, so that does need to happen. My guess is that, you know, in the future this will end up being a kind of annotation mechanism, so that we'll be able to annotate certain methods where, if you're running in regular JVM mode or in HotSpot, the annotation is ignored, and it just happens naturally as part of the premain.
G
That's the sort of model I have, but that is, you know, literally just a mental sketch of where I think we'll end up.
G
So OTel is just a really super use case for this, and because it's one I'm interested in, it seemed like a good place to start from: it was at the right point in its development, and I think it's really important for the future. But there is that thread about thinking: what does this look like in the general case?
G
I mean, I don't know that it actually needs a full-blown processor, Tyler. What I'm thinking about is (again, total straw man) that we have an annotation called something like DelayUntilRuntime. You know, for now it would probably live under GraalVM, so org.graalvm.annotations.DelayUntilRuntime, and that is annotated on methods that get called on the code paths from premain.
G
And if you are running in HotSpot, or you're running SubstrateVM in JVM mode, then it's ignored and it just executes completely normally, so the behavior is transparent to HotSpot and to dynamic VM mode. But in native compilation, during the compilation phase, the compiler notices that annotation and doesn't generate a call to that method; instead it makes a note of it, and it writes out additional code, which is effectively a native premain, which calls those methods in the order they were encountered.
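The straw man could look something like the following. Everything here is hypothetical: the annotation name and package come from the discussion, not from any shipped GraalVM API, and on a plain JVM the annotation is deliberately inert.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker from the discussion; RUNTIME retention keeps it in the
// bytecode so an ahead-of-time compiler could notice it.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface DelayUntilRuntime {}

final class ExporterStartup {
    static boolean connectionOpened;

    // On HotSpot this runs wherever it is called; under the proposed native
    // compilation scheme, the compiler would skip the call site and replay
    // the call from a generated "native premain" at runtime startup.
    @DelayUntilRuntime
    static void openConnection() {
        connectionOpened = true; // stand-in for opening the real socket
    }
}
```

Because the annotation carries no behavior of its own, shipping it early (as suggested below) costs almost nothing while reserving the shape of the future mechanism.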
C
Would a solution of this shape be helpful to your use case as well, the non-premain agent use case? I'm just thinking that adding a couple of essentially just tag annotations that will at least look like the future shape... like, that's a very low ask, right? That's very small!
C
That's a very small ask, right? I wouldn't have any problem with doing something like that: sprinkling those annotations throughout anything in the core SDK that has network access. I'm a little leery about trying to build in what feel a little bit like hacks, like the late-bound span processor, which is a very point solution to a very point problem; building that kind of thing into the core SDK...
C
But if we had a more general solution in mind, like an annotation-based solution, we could definitely create some sort of beta annotation, and if it's completely just, you know, something that lives there in the bytecode and doesn't have any functionality, let's see what happens with it.
D
To answer your question, John: I think so, but it would require some POC to make sure it's possible, because I know one possibility in my head right now. I know with Quarkus we can find a lot of information about things that have used annotations, but that usually requires the library to have been indexed with Jandex first, which is essentially a tool JBoss created to provide more detailed information about annotation points of usage.
B
So I was going to ask how it plays with, like, the autoconfigure module, or maybe some of that delegating stuff. I don't know, that might be some place where we could integrate it.
B
That isn't quite in the core. I don't know, John, but I know that this is kind of being considered core also.
B
All right, yeah. So, to answer the initial question about supporting moving that stuff upstream: there's definitely interest in supporting the, you know, emerging native image ecosystem.
D
Yeah, yeah. So I guess it's a question of... this certainly seems to be interesting, the annotation side of things. Is there any interest in, whether it's at the exporter or the processor level, or, as John suggested, just moving things out of the constructor and delaying them? Is there any interest in doing anything like that, or in pursuing both at once? I'm open to whichever. What is the preferred approach?
B
And John, is it okay to rearrange some of the code where, like, one of these methods, like some kind of init method, is needed, so that it can be annotated separately outside of other things? Yeah.
C
Oh sure, yeah. I mean, I don't think there's a problem with that. If we have a good use case for it, that definitely seems reasonable.
G
Okay. Is there any interest in doing some pairing on the code for this, John or Tyler?
I
So the spec says that our OTLP exporters should be able to configure gzip, and gzip compression is configurable for the OTLP HTTP/protobuf exporters, but not for the regular gRPC ones. And so the question is: do we have an appetite to add that configuration?
I
Does the collector support this? I'd have to double-check that. I would assume yes, because I think if you use gRPC out of the box it manages, you know, content-type negotiation under the hood for you, and gzip is built into gRPC as a compression type. But I don't want to make any assumptions about what the collector is doing, so I would double-check that.
I
Well, I don't know, actually. This is something I don't have good intuition on, but a question that's been going around in my head is: how effective is gzip compression for binary encodings like protobuf? My guess is...
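One cheap way to build intuition before a real benchmark is to gzip a representative payload and compare sizes. The sample payload below is a made-up stand-in for a span batch, not real OTLP protobuf; the point it illustrates is that span data carries many repeated attribute keys and values, which is exactly what deflate's dictionary compresses well even inside binary framing. Only a proper benchmark, as suggested below, settles the CPU-cost side.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

final class GzipSizeCheck {
    // Compresses the input fully and reports the compressed byte count.
    static int gzippedSize(byte[] input) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.size();
    }

    // Stand-in for a span batch: repeated keys/values dominate real batches too.
    static byte[] samplePayload() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            sb.append("http.method=GET;http.status_code=200;span.kind=SERVER;");
        }
        return sb.toString().getBytes(StandardCharsets.UTF_8);
    }
}
```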
A
Benchmarks, I think. So yeah, I think the ultimate way to settle this debate is just to benchmark it and, you know, try to get semi-realistic input into that benchmark and see how it compares.
A
I mean, one of the things that you're going to need to consider is the CPU overhead of that compression, but also, on the other side, for the collector: what's the overhead of decompressing it?
I
Yeah, and it's kind of an interesting one, because, you know, we have the ability to configure a managed channel, so you can get around a lot of configuration limitations for gRPC. But the way that you configure compression is actually not on the channel, it's on the stubs, and so there is no way to enable gzip compression at all today as it stands. Or so I thought initially, but a quick little dive into the code shows that the new stubs, the new serialization we have which is not using the generated protos, still use... or at least the methods are there to enable gzip compression. But I guess I need to double-check whether they're actually doing anything or whether they're no-ops or anything like that. So I have some research to do.
A
So one other question to consider here is the receiving side of it. If we enable compression, do the endpoints that receive those requests automatically know how to decode that gzip?
I
Well, I think gRPC servers will by default. So if you use, you know, the out-of-the-box gRPC server stuff for any of the languages, it would; at least, I'm like 90% confident in that. But I think the collector might be a special case, because it's doing some stuff to sidestep the out-of-the-box gRPC Go stuff.
A
Sorry, I didn't quite get that. So I think that the question of how the collector is going to handle it is an even more important question, yeah.
I
If the collector handles it today, then there's no issue. If the collector doesn't handle it, then we should, you know, maybe actually change the spec to not require that language gRPC exporters enable or allow gzip configuration, because that's like an incompatibility between where the collector is today and what the spec requires.
B
All right, let me see... so, patch releases. We put out this one; just making sure everyone knows: 1.5.2 was for the important memory issue we talked about last week.
B
We did put out another patch release this week; there was a regression in parsing configuration settings that represent maps. Not sure if that affects too many people.
C
I mean, I know that we essentially kind of have duplication between the config parsing in the SDK and the config parsing in the agent. It feels like it might be nice to have that shared some way, so that we don't have to maintain it twice, but I understand also the issues with the fact that that parsing is not part of our public SDK APIs at the moment.
C
You know, an opentelemetry-config artifact; there are, like, other projects that have this, right? But I don't know that we're necessarily ready to buy into that, yep, because then other people will use it for nefarious purposes and pester us when it doesn't work exactly the way they want it to.
B
Primarily, I think, because we don't have an overarching BOM that pulls everything together. If we did, then maybe it would be okay. It's definitely worth... we'll get Anuraag's thoughts. Does any library instrumentation use config, though?
B
It's currently exposed in the instrumentation API, so... but yeah, we're not totally sure about our future direction there.
B
I wanted to give an update on the Instrumenter API conversion: six down in the last week, including four pending PRs, and shout out to Jack. Thank you, Jack, for putting in the work here. That will get us up to 57 out of 99.
B
Thumbs up. Context attributes: this came out of a discussion earlier this week.
B
We've been doing some performance benchmarking and profiling of the agent, and one of the things that we see is a lot of context access; not so much pulling the context from the thread local as looping through it, because we have the array-backed context.
B
And Anuraag brought up a really good point, which he said was already in the javadocs, which is that it's recommended to combine multiple items together and put them all in at once in the instrumentation.
B
I'm not sure we're really violating this, because we already group related items, but it does point to a possibility: we do put a lot of different things into the context, and so there may be some consolidation to do.
C
For, you know, context (pun not really intended, but there it is) for other people on the call: most of these kind of map-like data structures in the SDK and the API, like Context and Attributes and Baggage, are internally stored as arrays with pairs of entries in them, to optimize for memory.
C
Saving memory at the expense of, you know, linear search for accessing specific keys: very efficient from a memory and iteration perspective, and as inefficient as possible for picking out individual keys. We've chosen that specifically for the reasons that are in that comment; we have been assuming that, in general, these things are going to have a small number of items, so the penalty for linear search is not going to be too terrible.
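The storage layout being described can be sketched like this. It is a simplified illustration of the trade-off, not the actual OpenTelemetry implementation: an immutable interleaved key/value array, copied on write, with a linear scan for lookups.

```java
import java.util.Arrays;

// Keys and values interleaved in one Object[]: [key0, value0, key1, value1, ...].
// Compact in memory and cheap to iterate; reads are a linear scan, which is
// fine for the handful of entries a context usually holds.
final class ArrayBackedContext {
    static final ArrayBackedContext ROOT = new ArrayBackedContext(new Object[0]);

    private final Object[] entries;

    private ArrayBackedContext(Object[] entries) {
        this.entries = entries;
    }

    ArrayBackedContext with(Object key, Object value) {
        // Overwrite in a copy if the key already exists...
        for (int i = 0; i < entries.length; i += 2) {
            if (entries[i] == key) {
                Object[] copy = entries.clone();
                copy[i + 1] = value;
                return new ArrayBackedContext(copy);
            }
        }
        // ...otherwise grow by one pair. Every extra key lengthens every later
        // scan, which is why the javadoc suggests grouping related items.
        Object[] grown = Arrays.copyOf(entries, entries.length + 2);
        grown[entries.length] = key;
        grown[entries.length + 1] = value;
        return new ArrayBackedContext(grown);
    }

    Object get(Object key) {
        for (int i = 0; i < entries.length; i += 2) {
            if (entries[i] == key) {
                return entries[i + 1];
            }
        }
        return null;
    }
}
```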
B
Yeah, we discussed this earlier this week. This issue is... oh, and I knew that I'd linked to the wrong one. So right now we capture... there's this.
B
It kind of gives you some options of what to capture: you can either capture the whole URL or this triplet, the idea being that, ideally, the library instrumentation would pull out whichever one is most efficient for us to capture. And there's some sorting here, which I think is intentional: under HTTP client there's sort of a preference, like you often get the full URL from the client, versus on the server side, a server like servlet tends to give you scheme, host, and so on.
B
So I think we sort of agreed on that for the client instrumentation, but at the same time we don't really want our instrumentation to diverge, like one HTTP client instrumentation capturing the URL and another HTTP client instrumentation capturing the triplet. We'd like it to be consistent from the instrumenter.
B
I think it varied based on which library was being instrumented; some reconstructed it and some had it readily available, but my sample size is small, probably like four different modules that I've looked at.
B
Yeah, what sort of initially motivated this (and I finally did a little bit of something about it) was that the way we are capturing the URL for server instrumentations is by building actual java.net.URL objects and then calling toString on them, which is just horrifically unperformant.
B
So this will help in the meantime. The one that was bothering me the most in my benchmarks was the Tomcat one, so at least now it constructs the URL from just a StringBuilder, but hopefully this will be a better general path forward.
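The StringBuilder approach amounts to something like this. The helper name and the exact default-port handling are illustrative, not the actual code in the instrumentation repo: the point is simply to concatenate the pieces the server already hands us instead of round-tripping through java.net.URL, whose constructor re-parses the components and whose toString re-assembles them.

```java
final class UrlJoiner {
    // Joins pre-parsed server components into a URL string with one
    // StringBuilder pass; no java.net.URL construction, no re-parsing.
    static String join(String scheme, String host, int port, String path) {
        StringBuilder sb = new StringBuilder(scheme).append("://").append(host);
        // Omit well-known default ports, as URL#toString effectively does.
        boolean defaultPort = (port == 80 && "http".equals(scheme))
                || (port == 443 && "https".equals(scheme));
        if (port > 0 && !defaultPort) {
            sb.append(':').append(port);
        }
        return sb.append(path).toString();
    }
}
```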
G
Just a minor point: I take it we've already decided that we're definitely going to remove the creation of URL objects and the toString-ing of them? Okay, good, because...
A
So, something to consider if you're going to be using the actual URL class from the Java library: we've found on the Datadog side that if you're constructing it and it's not already available, it's relatively expensive, because it's doing a whole lot of parsing all over again. So we actually created kind of a wrapper class around it, so that if the URL is available we can use it, but otherwise it uses just a composite kind of object of what's already available.
B
Yeah, that's sort of what I did in this one: just create a URI/URL builder that does the string concatenation. I'm hoping this will be a little bit better on the server side, since there we don't tend to get the full URLs.
B
Cool, so four minutes left. Just briefly, weekly things that went in: this was a cool feature, available if Unsafe is available, that allows us to not have to write out jar files to the file system in order to inject things into the bootstrap class loader, which is great for supporting read-only file systems. I've seen a couple of people ask for that.
B
I guess that's a thing in the container world. More concurrency tests enabled.
B
New: we have Quartz instrumentation now. I'm kind of curious, because I know we already have Spring scheduling instrumentation, and my guess is that most people are using Quartz via Spring scheduling, but I know there was a request for it. I'm kind of curious if anybody has experience there.
B
As I mentioned before, I've been doing some benchmarking and profiling, so a bunch of little optimizations are going in. ClassValue is cool; it's something we've been finding more and more useful for little caches and little optimizations. Definitely recommend it. It's not something I had used up until like a year ago, so recommend checking it out.
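For those who haven't used it, ClassValue (in java.lang since JDK 7) is a lazily computed per-Class cache whose entries can be reclaimed along with their class, which is what makes it attractive inside an agent compared with a static Map keyed by Class. The cache below is a minimal illustration, not code from the agent:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A tiny per-class cache: computeValue runs on the first get() for a class,
// and the result is cached for subsequent lookups of that class.
final class TypeNameCache {
    static final AtomicInteger computations = new AtomicInteger();

    static final ClassValue<String> SIMPLE_NAMES = new ClassValue<String>() {
        @Override
        protected String computeValue(Class<?> type) {
            computations.incrementAndGet(); // counts actual computations
            return type.getSimpleName();
        }
    };
}
```

Unlike a plain map, there is no eviction or synchronization to manage, and the JVM keeps entries reachable only while the class itself is.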
C
What JVM or JVMs are you doing your benchmarking on? I'm wondering whether you see any differences between, like, 8 and 11, for example.
B
Oh my goodness, yeah. You know, I'm getting way worse performance on 11 compared to 8, and I don't know why. So that's a... excellent.