From YouTube: ASP.NET Community Standup - August 28, 2018 - Benchmarks and Performance with Sebastien Ros

Description

Community Links for this week: https://www.one-tab.com/page/wC9Om6Q1TxqbADqeZm5aXg
A: Okay, so we showed this link off last time. We did the announcement show for ASP.NET Core 2.2 Preview 1, and this is the announcement post. This is kind of meta here: this is our video from last week's show. But the thing I want to point out is that this bullet list of features now links to blog posts, so you can go over and read blog posts that dig into these different features.
A: There's routing; there's the SignalR Java client; there's this one about Open API, the Open API analyzers and conventions; HTTP/2 in Kestrel; and health checks. So there's a ton of stuff there. That's pretty neat, and again, with all of these there are code samples; there's all kinds of stuff to dig into to get a lot of in-depth information.
A: Oh, neat, this one: he just shared this out on Twitter, and I thought this is pretty cool. This is a template that he's building out: ASP.NET Identity with IdentityServer4. One of the interesting things he's been doing is that he just called out for some contributions for localization, and so the community's been jumping in and adding a bunch of localization for this as well. I love this template system.
A: You know, it's just "dotnet new -i" and you can install and use that template. Okay, I've got two links to feature; Maxime pointed these out to me. So one is: there are some docs on using Source Link from a NuGet package. There are tons of great NuGet packages you're going to want to use in your applications, and if they are set up for Source Link, then, number one, you get the source code reference here that you can see.
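For anyone following along, installing and using a template package like that is a two-step dotnet CLI operation; the package and template names below are placeholders, not the actual template from the show:

```shell
# Install a template package from NuGet (placeholder ID shown).
dotnet new --install Some.Identity.Template.Package   # short form: dotnet new -i
# Scaffold a new project from the installed template (placeholder short name).
dotnet new some-identity-template -o MyApp
```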
A: And with Source Link, you can go through and, directly from a NuGet package, do things like debug into the source code. So here's another one, and this is cool: Maxime kind of called this out, and Carlos took up the challenge. He went through and did a walkthrough where he's setting up Source Link. It shows it's just a few minutes of work, really, to go through and set this up, and then how it integrates in, and he's getting the Source Link debugging experience. So this is just a call to action for people that are creating NuGet packages: this is pretty easy to set up and something to take advantage of. Alright, so Hashem dug into an interesting one here. This is localization for generics. If you've got a generic class, it's not quite as simple as you might have thought, where you could just go through and create classes.
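As a rough sketch of that call to action, enabling Source Link for a package hosted on GitHub is typically just a few lines in the .csproj; the properties below are the documented ones, but treat the version number as illustrative:

```xml
<PropertyGroup>
  <!-- Publish the repository URL in the package so debuggers can find sources. -->
  <PublishRepositoryUrl>true</PublishRepositoryUrl>
  <!-- Also embed source files that are not tracked by git (e.g. generated code). -->
  <EmbedUntrackedSources>true</EmbedUntrackedSources>
</PropertyGroup>
<ItemGroup>
  <!-- Maps PDB source paths to GitHub raw URLs at pack time. -->
  <PackageReference Include="Microsoft.SourceLink.GitHub" Version="1.0.0" PrivateAssets="All" />
</ItemGroup>
```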
A
How
do
you
revoke
that?
How
do
you
come?
How
do
you
go
and
revoke
that
permission,
so
he
goes
through
and
shows
how
to
show
how
to
how
you
know
you
can
go
and
revoke
that
permission.
So
it's
that
permission,
there's
there's
the
client-side
cookie
and
synchronization
to
the
backend,
and
so
it
goes
through
and
explains
how
to
do
that.
So
that
is
cool.
A: Right, the feature request, yeah, that's a good point. It's slightly related to what you just brought up: Cosmos DB. So Tomas blogged on this. I've seen this before, where people use RethinkDB; RethinkDB is a database built around this whole idea that it can basically send you notifications on change. So Tomas went through and set this up using Cosmos DB: there's a change feed that he's set up and exposed, and then he's pulling that in from an ASP.NET application.
A: So that's kind of a push-based model, from your database, to receive those changes. Pretty cool. Alright, and my last one here: this is Peter Kellner, and he's got this overview, a nice in-depth article. I would call this a full article; it goes into a lot of depth, with a video too, explaining the difference if you're used to partial views and haven't made the jump over to view components, or written a view component yet. He explains what they provide for you.
A
So
this
is
this.
He
explains
kind
of
the
benefits
of
being
able
to
package.
Something
is
a
few
component
and
then
go
through
and
set
that
up
and
take
advantage
of
that.
So
a
very
nice
meaty
post
and
again,
as
I
pointed
out
here,
there's
there's
a
video
at
the
top
as
well
excellent
stuff
and
now
I'm
going
to
I
almost
stopped
I
almost
hung
up
on
the
call,
which
would
have
been
bad
instead.
I
will
just
stop
sharing
I'm
done.
Thank.
B: What I want to show you is how we measure the things that we expose on the web for performance. So you might know already (I assume; I hope you know) that there is a link, aka.ms/aspnet/benchmarks, that goes to a dashboard of charts, KPIs and everything, displaying all the perf numbers that we collect over time. We collect them many times per day to follow the evolution of these things.
B
So
these
numbers
are
very
interesting,
but
I've
been
working
on
actually
grabbing
while
measuring
all
these.
These
numbers
and-
and
the
thing
we've
put
in
place
to
measure
that
is,
is
I,
think
super
useful
to
be
able
to
to
benchmark
any
application
and
I
wanted
to
show
you
how
you
can
reduce
that,
because
it's
actually
open
source.
A
few
of
you
have
used
that
or
try
to
use
it
in
some
environments.
B: Yeah, it tells us whether we are good or not over the last three days, or the last ten days, to detect regressions on Linux, Windows, cloud, physical. And then we also have some things like reliability testing: we continuously measure some apps that run for days or weeks, and we can track the memory usage and the CPU, and detect, for instance, memory leaks, like the one I think we found around February.
B: Okay, and sometimes there are issues in either the request or the measurement, which actually adds the result in the same second that we get the result; that's a technical issue. So that's super useful, and we can see it on Linux and Windows. We run these on Azure to be as close as possible to what our customers will use.
B
We
have
a
chart
for
similar
which
is
daily
chart.
We
have
the
same
thing
for
stress,
testing,
long
running
and
we
have
a
custom
chart
where
you
can
compare
all
the
scenarios
that
we
are
testing
and
check
the
different
servers,
server
forecasts,
rail
transport,
cloud
of
physical
Linux
or
Windows
HTTP
HTTP
for
some
scenarios,
and
then
it's
like
a
self-service
cell
third
list
of
chance.
So
that's
that's
what
we
expose
publicly
that's
what
what
that's,
what
we
use
also
internally
to
to
measure
everything,
but
it's
not
just
what
we
have
internally.
B: We use the KPIs: we look at the KPIs daily to see the evolution, because it's hard to detect these changes; they can be within, like, two percent. Sometimes that percent can be expected, or it can just be noise from one machine that is slower one day than another, so there are lots of variables. It is kind of hard to detect those changes, but if we shipped a 10% drop, because we look at it every day, we would see it, okay.
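The idea of separating a real regression from day-to-day noise can be sketched with a trivial baseline check; the thresholds below are made up for illustration, not the team's actual rules:

```python
def detect_regression(history, latest, noise_pct=2.0, alert_pct=10.0):
    """Compare the latest RPS sample against the mean of recent history.

    Drops within noise_pct are treated as machine-to-machine noise;
    only drops of at least alert_pct are flagged as regressions.
    """
    baseline = sum(history) / len(history)
    drop_pct = (baseline - latest) / baseline * 100
    if drop_pct >= alert_pct:
        return "regression"
    if drop_pct <= noise_pct:
        return "noise"
    return "watch"

# Five days of roughly stable throughput, then a 10% drop stands out.
days = [1_600_000, 1_590_000, 1_610_000, 1_605_000, 1_595_000]
print(detect_regression(days, 1_598_000))  # prints "noise"
print(detect_regression(days, 1_440_000))  # prints "regression"
```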
B: I think that answers the question, but yeah, over time we'll add more types of tests, to be able to detect things that we don't cover so far, in terms of performance regressions and also in terms of reliability. There might be some weird objects in .NET that we are using that have a memory leak, and we don't know it yet; maybe a customer will find it live and we'll have to fix it, but it would be better for us to know about it beforehand.
B
So
that's
that's
one
of
the
goals
but
yeah,
but
that's
progress
from
last
year
that
we
we
had
something
like
this,
but
now
it's
public
and
we
can
click
more
detail
about
more
metrics
so
that
that's
already
super
useful
for
us.
So
let
me
show
you
how
we
do
that,
so
we
have
a
repo.
Oh,
let
me
switch
back
yeah.
We
have
a
repo
on
github
SP
net.
B
Slash
sorry
benchmarks
this
one,
so
this
repository
contains
everything
to
measure
what
we
can
see
in
these
graphs
and,
more
so
I
think
it
is
useful
to
show
you
how
it
works,
because
I
will
hope
that
you
can
run
while
customers
can
run
it
in
their
own
environment
and
be
able
to
measure
their
apps
and
find
perfect
shoes
and
investigate
these
issues.
With
this
tuning
I
know,
some
of
you
have
already
tried.
B
I
know,
for
instance,
for
instance,
my
hair
helmet
to
configure
to
I
think
we
still
have
is
pure
to
configure
it
to
run
Windows
because
it's
using
docker
and
so
I
know
people
are
interested
in
to
running
that,
and
so
we
try
to
document
how
to
say
that,
and
but
still
the
documentation
does
not
explain
what
it
does.
So
I
will
show
you
how
it
works
and
then
maybe
you
will
be
interested
into
looking
at
how
to
install
them
on
your
system.
So
the
idea
is
that
we
run
a
service
to.
B
To
start
an
application,
any
application
you
want
and
this
service
would
be
on
a
dedicated
machine
which
we
call
the
benchmark
server.
This
is
a
web
app
done
in
a
spirit,
call
that
will
take
jobs
to
run
an
app,
and
we
have
another
service
called
the
benchmark
client,
which
will
send
a
load
and
HTTP
load
to
the
service
that
is
running
to
the
application
that
the
benchmark
server,
starting
so
two
different
web
applications
and
to
measure
performance
correctly.
B
We
run
them
in
different
environments,
different
physical
machines,
okay,
totally
independent
machines,
so
the
application
is
the
only
thing
running
on
the
benchmark
server
and
the
load.
Testing
is
the
only
thing
running
on
the
benchmark:
client,
okay
and
these
two
services
benchmark
server
and
benchmark
land,
take
jobs
and
they
can
cure
jobs.
B
And
then
this
driver,
which
is
another
application
I,
will
show
you.
We
will
be
able
to
display
the
results
on
the
user
screen
or
record
it
to
database
and
that's
how
we
do
that
with
these
shots.
Whenever
we
run
for
a
scenario
and
some
time,
we
store
the
results
in
a
database
and
we
have
enough
metadata
to
split
all
these
results
by
Hardware
tie
by
host
by
system
by
type
of
value.
We
are
measuring
and
then
do
charts
from
that.
So
that's
what
we
are
using,
but
it's
not
just
for
charting.
B
It's
can
also
be
used
to
do
other
measurements.
So
let
me
show
you
how
it
looks
like
when
it's
deployed,
so
we
have
this
environment
set
up
on
physical
machines
in
our
lab,
which
are
weird,
so
we
have
it
on
running
on
a
Windows
machine
on
the
Linux
machine.
The
load
is
always
the
Linux
machine,
because
we
don't
care
about
what
what
is
running.
B
We
just
need
HP
clients
and
we
have
the
same
environment
in
Azure
on
both
Windows
and
Linux,
and
so
this
is
important
for
us,
because
the
physical
machine
will
give
us
more
constant.
Well,
how
do
I
say
it's
more
deterministic
results,
because
we
own
the
machines.
We
know
when
we
can
run
the
patches
or
do
updates
and
what's
install
on
that,
and
so
it's
more
stable
environment
to
repro
numbers
to
continuously
run
numbers,
and
then
the
extra
machines
are
nice
to
be
able
to
be
close
to
what
customers
use
and
to
see.
B
Other
kind
of
issues
are
not
the
same,
also
size
of
the
machines
like
the
physical
machines.
They
have.
How
many
calls
we
have
I
think
12
calls
and
Asia
ones,
four
of
them,
and
we
have
another
environment
which
is
even
bigger,
which
is
exactly
the
same
as
the
tech
Emperor
companions
using.
So
this
way
we
can
exactly
reprove
what
the
camper
is
displaying
as
results.
B
That's
something
else.
We
we
can
do
Anna
so
useful
for
us
when
we
don't
understand
why
do
members
after
camp
arise
or
different
as
us,
we
just
run
under
the
machine,
is
Wichita
exactly
the
same
as
the
camper,
so
we
have
a
server
where
the
clients
and
I
have
two
consoles
open
here.
So
this
is
the
the
machine,
then
that
runs
the
the
the
benchmark
server
and
it's
a
physical
machine
and
if
I,
open,
docker
I
see
there
is
a
container
running,
it's
called
benchmarks.
Actually,
it's
called
benchmarks:
server,
okay,.
B: That is the one, and if I look at what's happening, the application is running and it's currently testing stuff. So this machine is always on, this service is running, and we have some jobs that run continuously to test all the scenarios all the time, then gather the measured data and store it in the database. So on this one, at some point, maybe there is no job running, and I will start one. So this is the server itself, and you can see, let me show you, you see...
B
There
is
some
packages
which
are
downloaded
well
stored.
There
is
an
app
running.
Can
we
see
it?
This
is
the
training
itself,
and
this
is
the
app
in
this
case
that
we
are
running
to
measure
the
scenario
called
plain
text.
So
this
is
a
docker
instance
running
on
one
second,
one
server,
and
we
have
here
this
one.
So
if
I
do
docker
logs
benchmarks,
clients,
this
one
is
sending
a
load,
the
client,
low,
HTP
load
to
the
to
the
application,
and
you
can
see
it's
using
w.
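wrk is a widely used open-source HTTP load generator; an invocation along these lines (the host name and numbers are illustrative, not the lab's actual settings) is what such a client run looks like:

```shell
# 32 threads, 256 open connections, 15 s duration, latency statistics enabled.
wrk -t 32 -c 256 -d 15s --latency http://benchmark-server:5000/plaintext
```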
B
Ok-
and
we
can
see
here,
our
sample
results
from
the
machine
itself,
but
these
results
are
you
see
and
someone
started
or
something
some
process
started
a
new
load
on
the
on
a
machine.
It
can
be
the
windows
in
the
middle
index.
Server
I,
don't
know,
but
something
is
running
so
now.
Let
me
show
you
how
I
can
run
it
from
my
system
here
the
common
line
already
prepared.
B
So
this
is
the
benchmarks
repository
and
if
I
go
here
in
SOC,
you
will
find
the
server
application,
the
client
application
I
spent
at
Co
applications
and
the
driver,
which
is
a
console
app.
Ok,
this
application
is
a
web
app
that
we
measure
performance
up.
So
it
is
a
generic
web
app
because
lots
of
scenarios
like
lots
of
controllers,
Web
API,
is
stofflet
calls
and
the
framework
that
uses
middleware
that
uses
MVC
everything.
B
This
is
a
like
a
big
bucket
of
different
scenarios
to
test
performance
on,
and
this
is
this
app
that
I
usually
ask
the
driver
to
run
remotely
and
then
measure
a
specific
end
point
to
to
test
a
specific
benchmark.
So
here
from
the
clan
line,
I
mean
the
benchmark
driver.
Folder
I
will
do
that
at
run
and
then
some
parameters
to
to
set
up
the
environment
I
will
give
the
URL
of
the
service
running
the
benchmark
server.
B
The
end
point
for
the
benchmark
client,
then
a
file
that
contains
the
description
of
the
job
I
want
to
run
well,
I'm
kind
of
where
is
it
yet?
So
this
is
continuing
here,
so
benchmarks,
dot
pain
takes
a
JSON.
So
it's
a
JSON
document
explaining
why
listing
a
list
of
benchmarks
I
can
run
if
I
don't
want
to
type
them
when
the
command
line,
and
then
the
name
of
the
benchmark
I
want
to
run,
and
let
me
show
you
how
it
looks
like
in
the
in
the
JSON
file.
B
So
if
I
go
to
benchmarks
and
I
open
these
J'son
file,
here
the
one
I'm
pointing
I'm,
pointing
to
it's
a
JSON
document-
and
let
me
see
you
can
read
actually
so.
These
are
the
default
parameters
and
I
say
the
default
parameters
is
to
use
the
work,
client
and
it
will
use
pipelining
and
we'll
send
some
plain
text
headers,
and
this
is
where
the
app
is
actually
hosted,
and
this
is
the
branch
and
this
is
a
project
to
run.
B
This
is
just
doing
the
same
thing
as
plain
text,
but
using
the
MVC
action
controller
and
so
on.
You
see
and
we
can
have
lots
of
different
kinds
of
names
of
scenarios.
So
here,
when
I
do
and
plain
text
I'm
just
saying
to
the
client
hit
the
slash
plain
text
and
point
okay
with
these
presents,
so
I
will
run
them.
I
will
run
that
okay,
I
start.
B: And we should have something here; oh, this one is busy. Okay, sorry, some job is currently running there, so I will do the same thing but targeting the Linux box, and we'll be able to see it there. So something has happened, and here it says: okay, cloning this repository, starting the plaintext scenario. It's installing whatever .NET runtime is necessary on this machine, in this Docker image, then it's looking for the correct application, it's starting the app, and now the application has started; you see "Application started. Press Ctrl+C to shut down."
B
So
if
I
go
in
the
client
machine,
then
a
job
starts
to
send
the
load
and
if
I
go
in
my
own
console,
you
see
client
job
ready.
So
it's
it's
ready
to
start
the
thing.
So
this
driver,
the
benchmarks
driver,
is
what
allows
me
to
communicate
with
the
benchmark
server
and
the
benchmark
client
to
start
application
and
to
start
a
load,
and
after
a
few
seconds
it's
running
15
seconds
warm
up
and
then
a
15
seconds
measurement
I
should
be
able
to
see
the
result
of
the
run.
B: Starting again. It's important in this case to do the warm-up, because we don't want to measure the app while it's cold. We don't care about the startup time here; we want the best throughput that the application can sustain, and to measure everything at the maximum CPU load we can get on the server, so we can detect the bottlenecks. And here is the signal which means the thing is done. So, for the result, I don't need to connect to the servers from my side; I can just use my local application.
B
I
can
see
that
this
job
run
for
1.6
million
requests
per
second
I
can
see
the
CPU
the
working
set
average
latency
the
startup
time.
The
time
of
the
first
request
once
it
was
started
up,
the
latency
when
there
is
no
load
so
like
the
application,
is
warmed
up.
I
send
one
request:
what's
the
latency
of
one
request
by
itself
how
many
requests?
How
long
will
it
take
so
caderousse
bad
response
is
the
SDK
that
it's
used
on
the
server
the
run
time
it
used,
which
aspect
call
version
and
then
so.
B: The JSON file contains the job definitions, and what we have here is that we can define a custom duration, so not fifteen seconds but, say, five seconds, because we don't need to run it that long; or we can say we don't want to run a warm-up, or we want to warm up for a specific number of seconds. You can also configure the client threads and the number of connections for the load.
B
We
like
it's
very
button
because
we
might
want
to
push
the
limits
of
the
system
to
see
how
it
behaves
when
we
have
too
many
connections
or
or
maybe
the
system
can
endure
more
connections
at
the
default
and
we
need
to
learn
them.
This
is
what
te
is
doing.
The
Kemper
is
doing
that
when
they
measure
the
measure
was
like
256
connections
up
to
1,000
connections,
and
so
what
we
have
here
s
so
the
two
major
ones,
in
my
opinion,
are
these
ones:
SPL,
conversions
fashion
and
runtime
fashion.
B
So,
by
default,
when
we
run
job,
the
driver
will
look
for
the
latest
available,
SP
net
version
from
my
gate
and
also
for
the
latest
compatible
runtime
version.
This
is
how
we
detect
the
regressions
early,
because
we
are
always
using
the
latest
version,
which
is
a
Magid
latest
working,
build
and
but
with
that,
what
we
can
do
is
set
a
very
custom
version.
B
Like
two
point,
one
point:
zero
four
n
stands
for
SP
net
or
if
we
find
a
commit,
we
are
not
sure
if
it
was
a
regression,
we
can
just
go
on
the
commit
number
and
find
the
SP
net
coercion
and
then
just
run
it
for
that.
So
we
can
go
back
in
time
to
benchmark
an
application
to
see
how
it
behaved
like
a
month
ago
compared
to
today.
This
is
what
we
use,
for
instance,
in
the
kpi's.
B
This
part
of
the
the
KPI
is
baselines.
This
is
how
we
do
to
measure
how
it
was
running
in
two
point.
One
point
zero
compared
to
now,
so
we
can
detect
regressions
from
two
point:
one
point:
zero:
to
see
how
how
we
behave
or
improvements
so
and-
and
these
numbers
are
run
all
the
time,
because
the
environment
might
change
so
every
day
we
still
run
it
on
two
point:
one:
zero
based
on
the
current
environment,
physical
environment
or
Asia.
B
So
we
can't
we
can
be
data,
not
super
useful,
to
be
able
to
do
that
to
find
issues.
We
can
pass
custom
arguments
to
application.
We
can
use
a
custom
port,
but
this
we
don't
care-
and
this
is
also
one
of
the
best
things
so
I
will
ask.
Can
we
have
someone
from
the
audience
to
do
something
for
us
yeah,
the
first
one
to
like
we
asked
Ben?
Maybe
you
will
be
able
to
do
that
quickly.
Then
I'm
sure
you
have
a
Castro
branch
somewhere.
B: You don't even need to clone it: you just go to edit, you change a line of code, and you commit that; you can do it right from GitHub. This lets me point at any repository, any branch, any commit number, and run the application in the state it was in at that commit, and I will show you with the changes Ben is making to the project file.
B
This
is
when
you
want
to
spit
one
shot,
one
when
you
want
to
run
a
specific
application
from
a
repository
that
has
multiple
of
them.
This
one
I
will
show
you
later
runtime
store,
so
we
have
things
or
options
also
to
to
decide
how
we
want
to
run
this.
One
is
all
it's
no
more
accurate,
but
what
we
can
do
is,
let
me
show
you
JIT
compilation
self-contained,
so
we
can,
with
the
parameter,
ask
for
the
application
to
be
deployed
using
the
tier
compilation,
compilation,
flag
or
as
a
self-contained.
B
So
this
one
that
used
not
to
be
in
the
CS
plus
5.
So
we
had
to
add
this
flag.
Now
you
can
define
jet
compilation
into
your
project,
your
project
files-
let's
see
here,
but
this
you
still
can
do
so.
This
is
super
useful.
So
we
can
measure
performance
differences
between
a
standard
deployment
and
self-contained
which
mean
which
means,
since
it
will
just
embed
all
you
reference,
even
the
runtime,
even
asp.net,
in
your
published
folder,
and
then
we
can
measure
the
differences
in
terms
of
startup
and
throughput,
and
that
makes
a
difference.
B
So
it's
very
important
to
know
the
impact
on
every
app
for
for
this
flag
and
same
thing
for
cheat
compilation
so
I
a
few
weeks
ago,
North
Fork
wrote
a
blog
post,
I
think,
and
you
mentioned
that
during
a
meeting
showing
the
benefits
of
jet
compression
and
how
did
he
get
all
his
numbers?
He
just
used
deep.
B
This
benchmark
driver
and
run
all
the
scenarios
he
had
in
mind
and
with
and
without
the
test
compilation
flag,
and
then
he
got
the
results
from
this
console
app
and
he
made
a
nice
graph
based
on
that
and
actually
of
even
automated
all
these
things.
So
he
can
run
that
as
many
times
as
he
wants
on
all
the
scenarios.
That's
that
was
very
fruitful,
that
you
can
also
run
the
application
with
a
specific
environment
viber,
which
is
what
these
things
actually
do.
B
You
can
you
can
download
specific
files
from
a
several,
so
when
we
run
an
app
and
the
app
generates
some
files,
somehow
like
a
dll
or
like
static
file,
you
can,
after
the
fact
say,
download
this
file
from
the
server
from
what
you
run,
or
you
can
say,
fetch
everything
that
you
run
like
if
we
use
a
specific
the
action
of
the
of
the
Spirit
runtime.
If
we
use
iron
whatever
code
that
is
on
github
to
run
on
server,
we
can
download
everything
that
was
run
directly
locally.
B: This is for a real application; you can test a real application, that's the goal. We have micro-benchmarks, meaning we test very specific endpoints in the Benchmarks application to measure specific things, but you can test your own endpoint with this. That's really the goal, and I would like to do something with that.
B
If
you're
Granger,
that's
all
on
you
Mike,
my
ultimate
goal
would
be
to
provide
a
service
like
this
or
for
the
net
foundation
for
the
net
foundation
to
provide
the
server
and
the
client
on
Asia
and
that
we
could
run
our
members,
like
library,
authors,
their
benchmarks
to
help
them
measure
the
person
that-
and
it
can
be,
it
could
be
like,
like
I'm
thinking
about
image
shop.
You
know
image
shop,
they
have
doubt
they're,
also
providing
a
middleware
okay
and
then
there
are
benchmarking.
B
It
all
the
time,
but
I
think
I
have
an
issue
on
their
middleware
that
will
be
slower,
download
and
so
I
would
love
them
to
be
able
to
provide
an
12,
an
application.
A
simple
application
with
20
points
that
they
could
follow
the
performance
of
their
middleware
and
detect
regressions
and
even
better
if
they
can
test
changes
and
see
the
real
impact
under
load.
B: This is it, this is what I need, and this is the branch. So if I go there, I can say, okay, run the same plaintext thing; well, this is not the one I want to use, so I would say, and I will paste the GitHub repo, okay, at this branch, slash the commit, which can be whatever, at that. Okay, but what I will use is actually the same repo; I will use the one here.
B: So here I just say: this is the JSON file I want to load, and this is the scenario I want to run, which is plaintext platform, and I can run it. And I will say no, I don't care, faster, and it should work. The nice thing is that, well, if it doesn't work, trust me when I say I've just made a typo, and...
B
Plane
takes
platform:
oh
yes,
this!
No!
There
is
no
space.
Well,
it
should
have
worked,
but
the
idea
is
that
we
can
point
to
any
repository
and
the
server
will
just
take
it
run
it
and
benchmark
it.
So
the
super
useful
for
us
when
people
provide
peers
with
improvements,
they
say:
oh
I,
improved
the
performance
by
that
amount
of
that
amount.
Or
can
you
check
it's
a
it's
a
nice
improvement
on
your
system
and
by
just
doing
this
command
line.
We
know
if
it's
an
improvement
or
not
and
with
them.
B
We
did
that
so
many
times
when
he
helped
us
on
tech
camp
or
you
made
all
the
changes
that
made
why
he
optimized
so
many
things
in
the
camp
take
a
poor
orchestral.
That
was
super
useful
for
us
because
it
could
just
anything
crater
commit
and
we
had
like
Skype
discussions
or
Twitter
exchanges.
Why
we
will
do
a
change
and
the
meet
after
I
will
get
the
numbers
and
another
change
and
the
military
get
a
number.
So
he
had
life
back
about
his
changes
on
our
environment
with
a
stable
set
of
baselines.
B: That's also super useful for anyone who wants to run it. So that's the idea: to provide an environment where people can benchmark their applications far more easily than they could otherwise, and a stable environment. And also, what we do for TechEmpower is that we can run Docker images. In the TechEmpower repository, well, it's what they have, but that's fine for us: if I look at the C# one, ASP.NET Core has been in there since February, and what they are doing is using Dockerfiles.
B
Ok,
so
these
local
file,
we
set
up
the
environment
to
run
the
MVC
scenarios,
and
what
we
do
is
that,
with
this
benchmark
tool,
we
can
just
point
to
the
docker
image
from
the
JSON
document
and
then
in
Suffern
of
deploying
the
app
and
building
it.
It
will
just
run
the
docker
image
here
and
start
to
load
from
that.
So
that's
that's.
What
I
did
I
should
yesterday
with
Bradley
Granger,
with
the
author
of
the
my
sequel
driver
for
a
little
net.
So
let's
look
at
that
yesterday
there
was
a
pure
from
PC
Bradley.
B: It took like one hour; it was super awesome. So that's also how it lets us help the community improve their results, not to mention Shay Rojansky, who is the owner of the Npgsql PostgreSQL driver: we also did the same thing with him, for about four weeks, working on the scenarios that we were measuring on TechEmpower.
B: You could say, "but to put all that in place, Docker and everything, that would be a whole other issue", and yeah, at least for now, we can do it ourselves. Myself, I would say, sorry, I set up the environment for them: they point me at the application, I configure it, and they don't have to touch anything; they just get the numbers. That would be very nice.
B: So that's something we didn't expect, and it's a very simple scenario, so if there were an issue, we would know. The question here is to understand what triggers the issue, what the actual issue is, whether there is an issue. But the main problem we had with this issue is that people kept arriving with "oh, I have the same thing", "yes, I've also seen this", or "I'm facing some issue here", and blah blah. So, like, ten people report the same behavior, but not the same...
B
Things
are
all
different
he's
actually,
but
they
all
worried
about
the
memory
usage.
Okay,
so
and
that's
and
that's
I'm,
just
messin
about
I'm
like
I,
don't
want
my
app
to
use
all
my
memory
and
look
at
that.
So
many
people
saying
there
are
issues
that
are
issue
and
we
are
asking
every
one
of
them.
Tell
me
ripple.
Can
you
share
something
with
us?
B
What's
your
environment
and
you
see
28
more
items
here
so
I'm
like
and
at
the
point
that
we
could
not
follow
up
with
all
the
questions
and
all
the
commands,
so
so
moany
suggested
to
stop,
stop
this
thread
and
to
just
split
it
into
different
issues,
because
they
are
all
talking
about
different
things
and
it's
a
loss
and
also
the
the
the
thing
I
found
is
that.
Why
are?
Why?
Is
everyone
asking
the
same
question
but
with
different
scenarios
and
maybe
stop
the
same
issue?
B
And
maybe
there
are
no
issues
but
how
to
prevent
everyone
from
saying
there
is
an
issue
when
there
is
not
so
I
worked
on
every
scenario
here.
So
let
me
show
you
first
so
I
spit
this
one
into
different
issues.
Some
of
them
have
it
closed
because
we
found
the
issues
which,
over
on
the
new
user
code
usually
and
we
could
close
the
the
separate
issues
so
I'm
a
still
open
because
I'm
waiting
for
feedback,
but
most
of
them
are
forcing
Qatif.
Maybe
one
might
be
sorry
so
I
will
have
to
check
this.
B
One
and
I
said
I
will
write
an
article
about
how
we
can
detect
those
changes
or
to
detect
that
there
is
no
issue.
Okay,
not
as
change
is
that
to
detect
those
issues
and
that
there
is
actually
no
issue.
So
I
worked
on
that
and
I
make
I
made
an
app.
It's
called
memory
leak,
which
is
a
very
bad
name,
because
there
is
no
memory
leak.
Just
to
show
some
things
we
can
do
so.
This
app
I
will
run
it.
The
goal
is
to
understand
how
the
garbage
collector
works.
So
what
this
app?
B
Does
the
simple
Web
API
app
with
a
range
of
you
that
will
display,
live
the
the
working
set
of
the
application,
which
is
the
blue
line?
Okay,
what
is
the
memory
used
by
the
process,
including
managed
objects
and
native
memory?
The
advocated
bites
in
manage
memory
like
what
how
much
objects
like
managed
data
is
in
the
memory,
and
we
can
see
then
the
Jen
Jen
one
zero
Corrections,
how
much
CPU
is
used
and
how
much
happiest
we
have?
B
Okay,
so
that
does
Julian
and
the
goal
is
ready
to
understand
how
the
garbage
collection
works
and
memory
and
pitfalls
in
HP
9000
cool
you
could
do
and
how
it
behaves.
So,
let's
on
this
app
I
will
put
some
load
just
to
see
how
it
behaves
as
simple
others
and
stand
on
up,
and
you
will
see
exactly
the
behavior
from
the
issue
that
was
creating
a
meter.
B: Where is that API controller? So I have this "big string" endpoint here; it's just returning a new string of 10 kilobytes, though it's actually 20 kilobytes of memory, because it's two bytes per char. So "get me a string", and I'm doing that as fast as we can: whenever we get the result, we send a new request. I'm starting it, and on the web app...
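The arithmetic is worth pausing on: .NET strings are UTF-16, so a 10-kilobyte-character string occupies about 20 KB of managed memory. The same encoding math can be checked quickly in Python; this is just an illustration of the 2-bytes-per-char point, not the app from the show:

```python
chars = 10 * 1024            # a "10 KB" string, counted in characters
payload = "a" * chars

# UTF-16 uses 2 bytes per char for ASCII-range text, as in .NET strings
# (no BOM with the explicit little-endian codec).
utf16_bytes = len(payload.encode("utf-16-le"))
print(utf16_bytes)  # prints 20480, i.e. 20 KB
```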
B
We
can
see
that
the
memory
is
growing
and
we
can
see
the
allocations
and
every
time
the
GC
collect
something
we
can
see
that
the
manage
allocations
are
going
down
and
the
green
arrow
is
a
general
correction.
The
orange
arrow
is
a
gen
1
and
the
black
arrow
is
a
gentle,
and
here
we
can
see,
we
just
have
a
zero,
which
means
short-lived
objects.
B
So
this
string,
sorry,
this
string
is
just
short
leaf
object.
It
goes
in
general
and
then,
when
there
is
some
threshold
which
has
been
hit,
the
GC
will
just
say:
okay,
we
need
to
do
a
correction,
let's
do
a
g0
correction,
agencia
correction
and
it
will
collect
everything
and
we
can
see.
We
have
a
collection
every
two
seconds.
A
square
in
the
grid
is
one
second,
okay
and
the
memory
is
a
run.
400
megabytes
and
that's
super
stable,
and
this
is
exactly
well,
it's
stable.
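The same generational idea exists in CPython's cycle collector, so the "short-lived objects die in Gen 0" behavior is easy to poke at interactively; this is an analogy, not the .NET GC itself:

```python
import gc

# Pause automatic collection so we control when generation 0 runs.
gc.disable()
try:
    # Create short-lived cyclic garbage: each list references itself, so
    # only the cycle collector (not plain refcounting) can reclaim it.
    for _ in range(1000):
        cycle = []
        cycle.append(cycle)
        del cycle

    # An explicit generation-0 collection reclaims the fresh garbage,
    # just like the frequent Gen 0 collections on the chart above.
    freed = gc.collect(0)
    print(freed >= 1000)  # prints True
finally:
    gc.enable()
```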
B: It could go faster, but that's right: there is no GC leak; the GC is actually running correctly, otherwise we would know. This is just to show that, yes, a standard ASP.NET application with the default settings might take, because I have 32 gigabytes of memory, it will take 400 megabytes of memory. That's the normal behavior of the GC: it adapts to the available memory. So, just to show that. And then, by extension, I wanted to show all the issues that people described in this thread, the GitHub thread.
B
So if I go and just show the worst thing you can do, which is, instead of returning the strings directly, storing them in a static dictionary — statically. So here I have a ConcurrentBag of strings, and it's static, so it will live for the full lifetime of the application, and I'm just adding strings to it. And everyone can guess what will happen.
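A sketch of the leaky pattern being demonstrated (names are illustrative):

```csharp
using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

public class LeakyController : ControllerBase
{
    // Static: lives for the full lifetime of the application, so every
    // string added here stays rooted and can never be collected.
    private static readonly ConcurrentBag<string> _strings = new ConcurrentBag<string>();

    [HttpGet("leak")]
    public int Leak()
    {
        _strings.Add(new string('x', 10 * 1024)); // ~20 KB rooted per request
        return _strings.Count;
    }
}
```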
B
The memory will just grow — a static string, under load, if I hit the correct endpoint. Yes, and you see it here. So we can still see the garbage collector working, but now it's gen 2: it is trying to reclaim as much memory as it can. Oh, now gen 0 — so it can collect some things. It will collect some things, because some strings are allocated and can be released, but the memory will grow indefinitely until we hit an OutOfMemoryException, and I won't go to that point.
B
Okay, I don't want that. And then the memory is stable, because I'm not creating new strings, but the memory is still there, because it can't be released: the strings are referenced statically by our collection. Okay, makes sense. So I will close the application, because now it's using too much memory, and show you another scenario.
B
So I showed you some transient objects with the strings, and this is the startup of the application. What I want to show you — so let me restart; now I will go directly here — I want to show you the workstation garbage collector. The garbage collector by default runs in a mode called the server garbage collector, which is optimized for server loads, like ASP.NET Core, and this is why we use it as the default.
B
B
behavior is a big issue, maybe you should try that. I will personally try that on some of my applications, just to see, because in this case I don't care about CPU usage — it can collect as much as it wants; I just care about the memory usage. So it's also interesting to see visually how it behaves, because these are terms that people have heard, but maybe have never seen how they work.
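Switching between the two GC modes is a project-level setting; a minimal sketch for an SDK-style project (assuming you want to opt out of the server GC that ASP.NET Core defaults to):

```xml
<!-- In the .csproj: use the workstation GC, which trades CPU for a
     smaller memory footprint compared to the server GC default. -->
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
```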
B
B
It's a temp file — a temp folder with nothing inside. What I'm doing is creating this thing and I'm not disposing it, even though it's disposable. Yeah — I would say no, because sometimes you don't want to dispose; we'll see later. But the issue with that is that it's a managed object: I don't call Dispose, but it will still be collected by the garbage collector.
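A sketch of the pattern being shown, assuming the object in question is a PhysicalFileProvider over a temp folder (the route and variable names are illustrative):

```csharp
using System.IO;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.FileProviders;

public class FileProviderController : ControllerBase
{
    // Creates an IDisposable file provider on every request and never
    // disposes it; the file-watching resources it holds linger until
    // the object is eventually finalized, if ever.
    [HttpGet("fileprovider")]
    public bool CreateProvider()
    {
        var path = Path.Combine(Path.GetTempPath(), "empty");
        Directory.CreateDirectory(path);
        var provider = new PhysicalFileProvider(path); // IDisposable, not disposed
        return provider.GetDirectoryContents("/").Exists;
    }
}
```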
B
B
A
B
It's very interesting, because — I think this is, or is not — oh, I don't have the link to it, but there is an issue which was filed, and there is currently a PR that has been opened to fix that indirectly. So that's the typical issue: if you don't dispose what should be disposable —
B
you might have issues. Not all the time, but in this case it is a big issue, because the memory will keep increasing and you will get — I mean, this is a memory leak, okay. So we have — and this is an actual issue on the ASP.NET repository, or maybe another one, I don't remember which repository — because the hosting environment is actually referencing that and doing lots of file watching. And in the case where you are doing unit testing, because it is not disposed by the hosting environment, you'll get a memory leak: unit testing creates lots of hosting environments, which themselves create PhysicalFileProviders. So that's why I wanted to show that — there is an actual issue that you could hit if you do unit testing. Otherwise, in production, you will never see this exact issue. We will fix it. So, moving on with the disposable things: this is an HttpClient. This one is obvious, but I wanted to show it.
B
B
You will trust me, because I don't want to run that, but I could just put load on there, and it will create another HttpClient every time. The issue with that code is that HttpClient should not be disposed — or actually, should be reused. Yes, it should be disposed, but you should reuse the instance of HttpClient as much as you can. So this is a bad pattern. If you do that, you will get port exhaustion: the connection will be created, but HttpClient —
B
we dispose it, but the connection itself won't be released by the OS as fast as you create new HttpClients, and in the end, at some point, after a few seconds, you will get port exhaustion on the client and you won't be able to create new HttpClients. So the idea is that this specific class, HttpClient, needs to be reused. A simple thing is just to make it static and to reuse it on every request.
B
The same instance — it's a thread-safe instance, so we can share and reuse it, and that's how it's supposed to work. And then, obviously, when the application shuts down, we should dispose it, but in this case that's fine: we just have one, and when the application shuts down, everything will be cleaned up correctly. Even better, use the HttpClientFactory — there's also a blog post about it that was written when 2.1 was shipped. So that's not a new thing, but just a reminder: reuse your HttpClient instances, or make them static.
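The fix being described, sketched as a minimal controller (names and the target URL are illustrative):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class ProxyController : ControllerBase
{
    // One shared HttpClient for the process lifetime. HttpClient is
    // thread-safe for concurrent requests, so a single static instance
    // avoids the port exhaustion caused by creating one per request.
    private static readonly HttpClient _client = new HttpClient();

    [HttpGet("proxy")]
    public Task<string> Get() => _client.GetStringAsync("http://example.com/");
}
```

In ASP.NET Core 2.1 and later, `IHttpClientFactory` (registered via `services.AddHttpClient()`) handles this pooling for you, which is the "even better" option mentioned above.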
B
If you don't know how to do it correctly, that's the best thing to do. The next thing I want to show is the large object — not this one — I want to show the large object heap. This is also a very typical issue we see, and that's a magical example, I know. So I will stop this application. What is it doing here? It's stuck at 700... or, let me restart it; I wanted it from the beginning.
B
B
B
A lot better. Okay, so I'm calling it with 84,975. What is it actually doing? It's just creating a new byte array of the size passed here and returning the length, just to actually instantiate it — I don't want to return everything, I just want to instantiate it. So I'm creating a byte array of 84,975 bytes, and what I see here is that it's allocating, and — am I in workstation GC, or... yeah.
B
B
Yes, now it's there; I just need to rebuild. I will start the load again, with the same behavior, but you see, I will go back to my 400 megabytes of memory, and you see gen 0 every second. Okay. So I can create these large objects, that's fine. But now, if I do plus one byte — just one more byte — it's totally different: I don't get gen 0 collections, which are cheap; I get black ones.
B
The gen 2 collections, which are more expensive, because a gen 2 collection will also do a gen 1 collection and a gen 0 collection — so by definition it's slower than a gen 0 collection, okay. And I get them much more often, so it will take much more resources, for just one more byte. So why is that?
B
Because when we allocate objects in memory, they are put in contiguous segments of memory, and they take space; and once objects are collected, the space is released, and this can lead to fragmentation of the heap segments in memory. To prevent fragmentation — because fragmentation is bad, like on hard disks: you don't want fragmentation, because you have to find a slot which has enough space — the GC does what is called compaction. For each generation, it will move —
B
it moves the bytes of each object that remains in the heap; it moves them to compact them. It will defragment the memory. But the issue is that defragmenting is costly — moving memory is costly — and at some point it's less efficient to move large blocks of memory than to just live with the fragmentation.
B
So the idea is that they decided that 85 thousand — not 85 kilobytes, but 85,000 bytes — was the limit, after which they should not even try to move the objects, and instead leave them in a specific zone of memory that is called the large object heap. So any object that takes more than 85,000 bytes in memory will only be collected by a gen 2 collection and will be placed in that specific zone, where it doesn't move.
B
They are not compacted, and the recommendation in this case is to be careful when you create large objects — when you serialize them, when you do big allocations of strings or of JSON documents — because they might go in the large object heap and impact the performance of your system. And you see, one byte makes the difference. So the limit is not exactly 85,000 here, because the bytes themselves take some space in memory, but there is also the metadata for the byte array itself, the structure of the data, which takes maybe 25 bytes.
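The one-byte boundary from the demo can be checked directly; a small sketch (the 84,975 figure is from the demo — on 64-bit, the array's object header accounts for the gap up to the 85,000-byte threshold):

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        var small = new byte[84_975]; // header + payload stays under 85,000 bytes
        var large = new byte[84_976]; // one byte more crosses into the LOH

        // A fresh small-object-heap allocation reports gen 0; LOH objects
        // report gen 2, since the LOH is only collected with gen 2.
        Console.WriteLine(GC.GetGeneration(small));
        Console.WriteLine(GC.GetGeneration(large));
    }
}
```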
B
You see here — so people ask questions about why they see that, and then we ask: do you have big objects, and so on? And you see here we can get statistics from the analysis tools about how big the objects you allocate are. So that's important to understand. I think we are on time, and the last example is about pooling objects.
B
B
So a technique is — here I'm creating a PooledArray, which is wrapping an array, okay, a byte array. And what I do is use the ArrayPool, which is accessed here — it's a static object. I'm using an ArrayPool, and when the wrapper is created, I'm getting the array by calling Rent, and when this object is disposed, I'm returning the array.
B
So it's a wrapper around an array, and what it's doing here: I am using a PooledArray — this class, which is disposable — and here is the magic thing: on the Response object, you have RegisterForDispose, which takes an object. The idea is that when the request is done, when the response has been sent, ASP.NET Core will call Dispose on this object. So we can be informed when the request is done and release the object — or have Dispose called on it.
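A sketch of the PooledArray wrapper and the RegisterForDispose hookup described above (the class and member names approximate the demo):

```csharp
using System;
using System.Buffers;
using Microsoft.AspNetCore.Mvc;

public sealed class PooledArray : IDisposable
{
    private static readonly ArrayPool<byte> _pool = ArrayPool<byte>.Shared;

    public byte[] Array { get; }

    // Rent may hand back an array larger than requested; that's fine
    // as long as the same reference is returned to the pool.
    public PooledArray(int size) => Array = _pool.Rent(size);

    public void Dispose() => _pool.Return(Array);
}

public class PooledController : ControllerBase
{
    [HttpGet("pooled")]
    public int GetPooled()
    {
        var pooled = new PooledArray(100_000);
        // ASP.NET Core calls Dispose once the response has been sent,
        // which returns the array to the pool at exactly the right time.
        HttpContext.Response.RegisterForDispose(pooled);
        return pooled.Array.Length;
    }
}
```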
A
B
If we didn't do that here, there would be no way to know when the array is available again — and it's even worse to pool something that you can't return, because the pool will keep references around forever. So that's a nice trick; I think it's a nice property to know about: how to detect when the HTTP request is done and have Dispose called then. That's it. Great questions.
A
I don't see any towards the end here, so I think that's probably a good place to wrap up. I guess the only thing I would say is, for people that want to keep up with the work that you're doing — you did point out the one place, with an aka.ms link, where people can go and look at benchmarks — is there anything else that you'd recommend for people just keeping up with performance, as well as performance and troubleshooting on their own applications?
A
B
So we are trying to do that. What I explained, with the typical patterns of memory management — we will release an article with that. So for sure, whenever we see a typical issue like this, we can ask people to read it and to check whether one of these issues is in their application first. We also want to do the same thing with typical usage of the APIs we have in .NET — I'm thinking, for instance, about the ConcurrentDictionary. The ConcurrentDictionary, when you pass a lambda to create an item —
B
it might call this lambda multiple times. It's not locked, and people don't know that; they expect it to be thread-safe — well, to be locked completely.
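The subtlety mentioned can be demonstrated directly: `ConcurrentDictionary.GetOrAdd` does not run the value factory under a lock, so racing threads can each invoke it, even though only one result is kept. A small sketch:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class GetOrAddDemo
{
    static void Main()
    {
        var cache = new ConcurrentDictionary<string, int>();
        int factoryCalls = 0;

        Parallel.For(0, 8, _ =>
            cache.GetOrAdd("key", k =>
            {
                Interlocked.Increment(ref factoryCalls);
                Thread.Sleep(10); // widen the race window
                return 42;        // every racer computes it; one value wins
            }));

        Console.WriteLine(cache["key"]);  // always 42
        Console.WriteLine(factoryCalls);  // can be greater than 1 under contention
    }
}
```

If the factory has side effects or is expensive, the usual mitigation is to store a `Lazy<T>` in the dictionary so only the winning entry's value is ever materialized.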
So there are some subtleties in the APIs that people might not know about, and I want to have a reference for that. Then, whenever there is an issue, we can point them to a list of best practices — do you know about this, do you know about that — and maybe that's it, so they can check some of the obvious things by themselves.