From YouTube: ASP.NET Community Standup - June 16, 2020 - Performance Infrastructure (David Fowler & Sébastien Ros)
Description
Join members from the ASP.NET teams for our community standup covering great community contributions for ASP.NET, ASP.NET Core, and more.
Community links for this week: https://www.theurlist.com/aspnet-standup-2020-06-16
Things happen! Everyone can hear what she needs to hear; great, wonderful. OK, here we go with the community links for this week. I'll publish these in the chat and everything; it's aspnet-standup-2020-06-16. So, you know, don't worry, I'll be sharing these all out at the end. Here's what I got for you today. First of all, Jeremy Likness with a huge three-part series: building a line-of-business app using Blazor WebAssembly and Entity Framework Core.
So he's setting up some things with change tracking, and the end result of that is you actually see, as users are edited or changed, he's tracking that. That's a common line-of-business scenario that you'll see, so it's cool to see how he's got that set up. He's got some other things with data annotations, etc. I want to point out some other things in the other parts of the series. So, part two: he shows off some cool stuff here.
One thing, let me see: he's using versioning, and he's also got a repository pattern here to allow for testing. And in the third one (I'm just showing the highlights here; these are definitely a couple-of-cups-of-coffee posts), one neat thing he's showing is filtering with debounce. In this one there's a search interface, and the problem you run into with searching where you allow typing, and you're filtering as you type, is that it's constantly running the search every time you type a character. So he's doing the standard debounce here: he's got a timer running, and he's saying, if what I've typed has changed between when the timer started and ended, then run another filter. So that's kind of standard.
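A minimal sketch of that debounce idea inside a Blazor component, assuming a hypothetical `SearchAsync` method and an arbitrary 300 ms pause (neither is from the post itself):

```csharp
// Debounce sketch: restart a short delay on every keystroke and only
// run the search once typing pauses, instead of on every character.
private string _searchText = "";
private System.Threading.CancellationTokenSource? _debounceCts;

private async Task OnSearchInput(ChangeEventArgs e)
{
    _searchText = e.Value?.ToString() ?? "";

    // Cancel any search still waiting from the previous keystroke.
    _debounceCts?.Cancel();
    _debounceCts = new System.Threading.CancellationTokenSource();
    var token = _debounceCts.Token;

    try
    {
        await Task.Delay(300, token);   // wait for a pause in typing
        await SearchAsync(_searchText); // hypothetical search method
    }
    catch (TaskCanceledException)
    {
        // A newer keystroke arrived; this pending search is abandoned.
    }
}
```

This uses `Task.Delay` with a `CancellationTokenSource` rather than a `System.Timers.Timer`, but it performs the same "did the text change while I waited" comparison the post describes.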
Maybe I would be a little worried here; I can keep in my standard .NET timer world. So anyhow, that's kind of cool, and he's got a bunch of other stuff here: queries across the wire, sharing UI components, a lot of cool stuff. Wonderful stuff. And a reminder: we've got an EF community standup going on as well, and they're showing off a lot of cool stuff in those. Another neat Blazor thing: this is Blazor REPL. So this is an example here.
A
One
is
where
you
can
type
in
your
own
stuff
and
do
the
repple
thing.
They've
also
got
some
built-in
ones.
So
some
things
like
counter
demo:
they
even
have
here's
a
forum
demo
and
so
this.
Actually
this
is
all
the
code
and
then,
if
I
hit
run,
it
actually
processes
and
runs
it
interactively.
So
that's
this
one,
that's
a
little
bit
more
complex,
but
you
know
this
is
a
full
repple
thing
showing
this
all
so
pretty
neat
stuff
one
or
two
more
Blazer
things
here.
This is a neat deep dive by Ed into the render tree, explaining how the render tree is constructed and showing the abstraction between the DOM and the render tree itself. He goes through and explains how it's constructed, with some nice visuals showing how it all fits together, and then, towards the end, he shows how you can construct things and work directly with the render tree through the `ComponentBase` class.
One thing I have to get upset about, though: at one point he iterates through and creates multiple h1 elements, and I don't like that at all. But here he shows building out a render tree, and then, finally, at the very end, he talks about a place where, if you're directly editing things in the DOM, you can get a conflict. So he shows using the key, setting `@key`, to allow for direct tracking of an element. So really good stuff.
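The `@key` directive he describes looks like this in a component; a small sketch assuming a hypothetical `people` collection with an `Id` property:

```razor
@* Without @key, Blazor matches list items to DOM elements by position,
   so inserting or removing an item can pair state with the wrong element.
   @key ties each rendered element to a specific data item instead. *@
@foreach (var person in people)
{
    <PersonCard @key="person.Id" Person="person" />
}
```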
All right, here's one more lengthy Blazor post; tons of good stuff in Blazor lately. Daniel is writing about using Blazor, TensorFlow, and ML.NET to identify images, in this case broccoli. So he goes through and shows how he creates a TensorFlow model and trains it with a bunch of images and labels describing what those images are.
A
So
this
is
actually
just
a
console
application
that
he
runs
separately
creates
a
tensor
flow
model
is
neat
visualizer,
which
is
pretty
cool,
and
then
he
dumps
that
into
an
or
a
blazer
application
with
ml
net
and
then
goes
through,
and
you
know
show
some
some
additional
things.
You've
got
to
do,
for
instance,
setting
it
up
with
system
drawing
on
Mac
or
Linux,
and
then
then
then
there's
some
more
kind
of
in-depth
stuff
as
far
as
actually
kind
of
cleaning
it
up.
A
He
does
some
nice
things
even
as
far
as
like
when
you
upload
the
image.
He
does
some
some
nice
things
where
he
actually
loads
in
the
image
and
shows
a
loading
indication.
So,
for
instance,
here
he
is
loading
the
image
by
decoding
the
binary
60
base64
binary.
Here
and
showing
that
so
pretty
cool
stuff,
it's
neat
I
I,
like
that
ml
dotnet
works
with
existing
things
like
tensorflow,
and
this
is
cool
to
see
it
kind
of
fully.
You
know
integrated
solution
there.
Well, all right: an update from Eilon on Mobile Blazor Bindings. Not a ton of stuff in this one; some things like the update to Xamarin.Forms 4.5, and a simplified, more web-like syntax. So instead of `<Button Text="..." />` you can have a button with the text between the tags. Also some things including CSS improvements. I also like that he always calls out community input, so that's pretty cool too. All right, coming up very soon.
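A sketch of that syntax change in Mobile Blazor Bindings (the exact markup may differ slightly from the release notes):

```razor
@* Older attribute-based form: *@
<Button Text="Click me" />

@* Newer, more web-like form; the label goes between the tags: *@
<Button>Click me</Button>
```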
Alright, moving on from the Blazor section. Here we have a nice one from Andrew Lock, and he's talking about host filtering with Kestrel. He points out the problem that, if you're not specifying allowed host names, you open yourself up to some attacks, and he shows some examples with things like DNS rebinding, cache poisoning, and password-reset hijacking.
In the DNS rebinding example, there's a subdomain set with a really short TTL, like 60 seconds; then it's updated to point to another URL, and then they start doing XHR requests. Scary things. What he shows here is actually a very simple solution, which is setting your allowed hosts. By default it's just `*`, but you can set them directly, and he shows that by setting that explicitly he's allowing site A but not site B.
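The allowed-hosts setting he describes lives in `appsettings.json` and feeds the host filtering middleware; a minimal example (the host names here are placeholders):

```json
{
  "AllowedHosts": "site-a.example.com;localhost"
}
```

`*` (the default) accepts any `Host` header; listing specific, semicolon-separated hosts makes the middleware reject requests for anything else.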
She also points out the `rel="noopener"` attribute, which I wasn't aware of, and that Firefox and Safari are also putting in `noreferrer`. What that does: with `target="_blank"`, it does not allow the opened page to change `window.opener`. So, good stuff. All right.
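In markup, the combination being discussed looks like this:

```html
<!-- Without rel, the page opened in the new tab can reach back through
     window.opener and navigate the original page (tabnabbing). -->
<a href="https://example.com" target="_blank" rel="noopener noreferrer">
  External link
</a>
```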
LettuceEncrypt from Nate McMaster. A few things to be aware of here: one is that it's "LettuceEncrypt" now instead of "LetsEncrypt". I feel like "lettuce encrypt" makes sense, so yeah, that is an update. It also does the automatic renewal request for your certificate at thirty days, which is really nice. So those are some things to be aware of there.
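Wiring the package up is essentially a one-liner in `Startup`; a sketch based on the project's README (check the repo for current options):

```csharp
// In Startup.ConfigureServices: LettuceEncrypt hooks into Kestrel and
// requests/renews certificates from Let's Encrypt automatically.
public void ConfigureServices(IServiceCollection services)
{
    services.AddLettuceEncrypt();
}
```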
All right, some cool announcements. This was just from today: gRPC-Web now available. We previously, I think, both showed this off on the show, and also showed the earlier, longer blog post explaining it. So this is what gRPC-Web offers. That's true. So yeah, anyhow, this was released today, so this is very cool: gRPC-Web now available.
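Enabling it server-side, per the announcement, is a middleware plus a per-service opt-in; a sketch assuming a hypothetical `GreeterService` gRPC service:

```csharp
// In Startup.Configure: UseGrpcWeb sits between UseRouting and
// UseEndpoints, and each gRPC service opts in with EnableGrpcWeb().
app.UseRouting();
app.UseGrpcWeb();
app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<GreeterService>().EnableGrpcWeb();
});
```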
Congratulations, team! Hello everyone; hey Glenn, what's up! Also, Tim with the Web Live Preview. Web Live Preview is an extension for Visual Studio for Windows. Oh my gosh, I'm going to end the community section very early. So with this, you can now go in and edit live.
Right, you can interact directly, but what's also cool about this is that it allows interaction with the browser tools. You can actually go in with browser tools, update things, and it'll update directly back into your code in Visual Studio itself. So very, very exciting stuff; give it a try and send feedback. Okay.
Last thing, a very quick one here: the ASP.NET Core update for .NET 5 preview 5. There's not a huge amount in here, very easy to update, and the main item is reloadable endpoint configuration for Kestrel.
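That feature means Kestrel picks up changes to its `Endpoints` configuration section without a restart; a minimal sketch of such a section in `appsettings.json`:

```json
{
  "Kestrel": {
    "Endpoints": {
      "Http": { "Url": "http://localhost:5000" },
      "Https": { "Url": "https://localhost:5001" }
    }
  }
}
```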
Yeah, so the idea is that we've worked for a long time now on performance, and we've improved the system we are using to benchmark ASP.NET and everything else to a point where it's reusable by everyone who cares about perf. So we want to make it more usable, more open source, with documentation, and just clean stuff. So we made a new repository to get it open-sourced and to let everyone use it if they want to run benchmarks on the web.
The background story is that Sébastien made sure that what is open source today in aspnet/Benchmarks basically, over time, grew organically into this thing that we use to test and power all of our scenarios. I mean, we track it via the Power BI dashboard that Damien shows every now and then on stand-ups or in other places; that's how we track our progress.
So you can run this thing on all of our perf machines to show perf numbers. As the team grew and more people got involved, it became really hard for Sébastien to run all the processes, so he tried to make it easy for everyone to run, and that kind of grew and grew organically, and the command line became this monster of a bunch of things to pass in. So Sébastien did kind of a reboot to make it easier to basically run these performance tests.
I saw it and I said: we have to make this thing a product, because it actually is super useful by itself, and we had a bunch of internal teams and other people in general wanting to test their stuff. We'd always go and tell them, "we have this thing." "Does it have any docs?" Well, if you can figure it out, you know... so that makes sense.
So we made a repo, how long ago, maybe two weeks ago. It took a while for Sébastien and me to name it, and we named it, we did some stuff, and now we have what we think is a super cool product for perf testing, and maybe load testing too; let's say crash testing. Is my video frozen? Still?
Thank you. And so, I just asked my wife to make no sound, no noise, but she said my daughter has a piano lesson in 30 minutes. So I'm very sorry; we'll see how it goes. Yes, so the repository is private; it will be public soon. I keep hitting F5 and it's not... panic! It's because I'm David. So we'll see how it goes. So this is the repository, dotnet/crank. Previously to us, well, it's still there, it was aspnet/Benchmarks.
aspnet/Benchmarks is where we have everything: the infrastructure, sure, but also all the apps we want to measure, and lots of things, too many things flying around. So we made a new repository to separate what we do in the ASP.NET team from what everyone should be able to use, which is just the infra, all the tools that we use. And there's proper documentation, and the CI now has tests and builds stuff. It's no more like junk.
Yes, so the idea of this tool is to be able to deploy jobs, apps, on different servers and let them run: a web app, a load generator, a database, Redis, whatever you want, on whatever machine you want. Then it will gather some metrics that will be returned to your machine, to be stored somewhere in a database or in JSON, so you can see the result of the benchmarks.
In this schema, what we see is that the controller is the CLI that we use to talk to agents. Every agent is installed on a machine, or several on a single machine; it depends on what you want to do. And every agent can run Docker jobs, dotnet projects, or just executable files. Because if you don't want to build a job, you might have something that is already compiled; you can send it directly. Or if you want to run a Docker job, you can also do that.
This is crank, and from there we can start commands that will orchestrate all the agents that will start the jobs. The agent is able to measure what's happening for each application, like the CPU, the memory, the network, swap, and things like this. And then each app can also return custom metrics, and that's a very powerful feature, because then you can decide what information you want your app to provide to the agent, and that will be forwarded to the controller.
When David mentioned that it grew organically: it became a monster because at first we just wanted to run an application and start wrk, OK, to measure RPS, requests per second. That's all we wanted to do. And then we were like, OK, but we need to measure memory, to measure latency, to measure network usage, to measure: is the GC enabled on that? Is the server GC enabled? What version are we actually running, is it 2.1 or 2.2, or whatever? And we kept adding arguments.
The history of this thing is literally, I think, that I started building that app, and it just had some rudimentary configuration in the web app itself, where you would say: do you want to run the database tests? Do you want to run the middleware tests? I used some ridiculous, naive pattern matching to try to turn middleware and stuff on and off. And then Mike, I think, built the driver that would let us send jobs to it remotely, when he was working on our performance stuff as well.
So then we had infrastructure. That was our first piece of always-running infrastructure that we could submit jobs to, via a script or whatever. Then, of course, we wanted to get into automation, so you want things like the build automatically pushing a job to the infrastructure and then giving you results back. But then we still want to be able to do ad hoc runs, so that a dev can make a code change, build the assembly locally without having to push it anywhere, and invoke a job with their private assembly.
Someone from the .NET Core runtime team would be working on a fix to improve the performance of one of our scenarios, and they would need to be able to use this tool as well. And you'd throw them the old one, and they'd be like, "what?" Yeah. So this has really grown into something. And then we wanted to start running competitive stuff, right?
So not only do we run our own .NET servers, but, like TechEmpower, we want to be able to track our performance on our hardware versus other frameworks, or other things that implement it in different ways, so that we can get like-for-like numbers on the same machine. So it needs to be able to run anything. All that has led to where we are right now, which is...
Pretty cool, yeah. And so I think one day we were in a meeting, and Sébastien had an epiphany that he could rewrite the thing to be super generic, and I think he spent like two weeks, not scrapping the old one, but making a brand-new agent, the one we're seeing here. That was a new and improved system, and over the last couple of months it kind of got better and better.
Yeah, so here I'm also showing what the goal of our infrastructure is: to be able to measure performance, and to chart it and filter it and compare scenarios. So, as I mentioned, we measure... where can I find some? Where are they? Oh, there's too much filtering. Yes, so we measure different frameworks, like Node.js, Rust, actix, Go. So, for instance, there's fasthttp, which is Go, and Node.js.
And what do we measure, normally? It's a combination of .NET runtime counters, or event counters or metrics, and platform counters, so that we can gather information directly from the machine that's running it, like the underlying OS, or we can get it from the process itself. Or you can deploy another job whose purpose is to gather metrics from the other job.
That's running; you can do all that type of composition. And then, ultimately, it all just gets put back into a database, and then you can visualize it however you like. We're using Power BI, but the idea would be, if you deploy this yourself, the data goes wherever you want it to go, and then you can visualize it using whatever tool you want, right?
So, just by virtue of all these time series being synchronized: these graphs here are all showing the results for whatever is currently selected. And if you see a jump, let's have a look at the top-left graph, where it says RPS. You can see we have a step up back in May, so we got faster, right? That happened on May the 9th, May the 8th, and you can see all the metadata in the tooltip, which is a feature of Power BI.
You can pick other fields to show up in the tooltips for each of these data series, so we can see what build of ASP.NET Core it was, what build of .NET Core, what session ID, what job it was. So we can look at that, and then you can hover over another graph, like how much memory was being used, or what was the latency, or the CPU, or the contention rate, and you can find the same dot, right?
The same series start at the same point in time with the same runtime version, and you can correlate. So you can see, right here, at around the same time a fix obviously went in that made a change to the thread pool, because before that point in the series the thread-pool items count was much higher, and then it went down, and that seems to have happened around the same time that the RPS went up, right?
And this is also how we track regressions. If we do have a perf regression in some form, memory getting higher, CPU getting slower, RPS getting lower, we can relate it to commits, and diff them, and hopefully figure out what change in ASP.NET Core or .NET Core caused that issue. And Sébastien has a bot that scans through the commits that could have caused the regression, and it files issues when the level of variance for a regression has hit some threshold.
You'd only have that if you were super confident that this is the problem, but at a later time it can give you a diff, yeah. It takes some digging to go figure out, OK, this change maybe caused the regression. For the most part it's pretty clear, but sometimes it's not obvious, and it could be a red herring.
We also, obviously, have environmental changes from time to time, whether it's kernel patches or drivers and things like that. Like any good performance environment, you want to try to coordinate those and make sure you do them in a very deliberate fashion, but sometimes things come through and we don't realize it was an environmental change that caused it. But we've gotten a lot better at that, I'd dare to say. I think we haven't really had... well, there is one.
Well, I can explain; that's fine, that's a good example. You can see it here, because I'm measuring HAProxy and an HttpClient-based proxy. This is a page where we track proxy performance, and if I compare HAProxy to the HttpClient proxy, you see the two of them have an issue, and it's not because of a .NET runtime change that HAProxy went slower. It's actually a different discussion, but this is how we find that it's an environmental change and not a runtime one.
So we're running this on all of our environments now, right? We have physical Windows, we have physical Linux, we have cloud-based Windows and Linux as well, and then we have a couple of other environments that we use ad hoc, with hardware from other teams that we're sometimes able to utilize to get some variance.
We have an AMD machine as well, a Linux AMD server, so we've started doing comparisons on Epyc Rome, the new AMD server architecture, to have a look at how we do on that sort of hardware too. And you can see we've got ARM in there as well. So we've got a whole bunch of different physical machines and virtual machines in the cloud that we're running all of these benchmarks on, all the time, so that we can compare these things.
So I wanted to show some demos. This is the end result, but how do we get to these numbers? How do you run a benchmark? I was showing different dashboards because every team in .NET might have different requirements: the GC team wanted to analyze the differences between GC configurations, and then there's gRPC; James has lots of benchmarks for comparing the Go native and the .NET implementations, on the server and on the client.
So here there are .NET tools: the crank controller and the crank agent. You run `crank` for the controller and `crank-agent` to run the agent. So if I just want to run an agent, because I want to do some tests on my local machine that will run the application, I can do `crank-agent`, and it starts the .NET service; it's the .NET app that will accept any jobs from the controller. Here it's on localhost on port 5010, and now the agent is ready.
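Both pieces ship as .NET global tools; installing and starting an agent looks roughly like this (the version wildcard is illustrative; check dotnet/crank for the current one):

```shell
# Install the controller (the 'crank' command) and the agent.
dotnet tool install -g Microsoft.Crank.Controller --version "0.1.0-*"
dotnet tool install -g Microsoft.Crank.Agent --version "0.1.0-*"

# Start an agent on this machine; it listens for jobs from the
# controller (http://localhost:5010 by default in this demo).
crank-agent
```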
So here, now, what I want to do is run a web app, an ASP.NET web app. If I look at it, I have it there. If I open my hello folder in the samples: the hello folder is just a simple, middleware-based standard app that has an endpoint and returns "Hello World" on the root path, OK? And it's also displaying the .NET version it's running on, which we'll see later.
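The hello sample as described (one endpoint on "/" returning a greeting plus the runtime version) might look like this minimal sketch; this is not the repo's exact code:

```csharp
// A minimal ASP.NET Core 3.1-style app matching the description:
// one endpoint on "/" returning "Hello World" and the runtime version.
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Hosting;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/", async context =>
            {
                await context.Response.WriteAsync(
                    $"Hello World ({Environment.Version})");
            });
        });
    }
}

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(web => web.UseStartup<Startup>())
            .Build()
            .Run();
}
```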
We made a change, remember, two months ago. The idea is that from the crank command line I need to be able to describe the benchmark itself. The benchmark I want to run is made of this application, the web application that I want to deploy somewhere, and a load-generation tool that we'll use to simulate HTTP clients sending HTTP traffic to the web app. For this tool we use wrk or Bombardier; I think here I'm using Bombardier.
So this file describes everything that we need to do to run a benchmark. I won't use this one, actually; excuse me, I will use the local one here, which is simpler. So here, this file defines jobs, scenarios, and profiles. If I focus on the jobs first: it says I have a job named `server`, and the source is in this local folder, OK, and the project I want to run is this .csproj, and the `readyStateText` is "Application started". So what it says is: the application is local, in this folder.
D
You
need
to
build
a
CH
file.
Remember
we
can
run
docker
files,
CS,
proj
and
executables,
and
to
detect
that
the
application
is
ready
to
accept
requests.
You
have
to
check
that
the
text
application
start
is
available,
and
this
is
what
dotnet
apps
web
18
I'd,
actually
output
on
the
console
to
say
that
they
are
ready.
So
that's
how
line
into
you
and
then
here
I'm
importing
another
job,
which
is
the
Bombardier
job,
and
this
job
has
the
same
definition
as
that.
D
It explains how to run Bombardier, which is a console app made in Go that generates HTTP load; it's like wrk. But in this example Bombardier is very interesting for us because it can run on Windows and Linux; wrk only runs on Linux. So that's why I'm using Bombardier for these demos. The scenarios then use existing jobs, explaining what you want to run.
So here it says: the scenario named `hello` will deploy something called `application`, based on the job `server`, the web app I just defined. And then it says I have a `load` service to deploy, based on the job `bombardier`, which is defined in the imported file; and the variables for this job are that the server port is 5000 and the request path is `/`. Okay, so it's like a docker-compose file that explains what the components of my system are and what to deploy.
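Put together, a crank configuration along the lines he's describing looks roughly like this; the paths and the import URL are illustrative placeholders, not the sample's exact contents:

```yaml
# Sketch of a crank benchmark file: jobs, scenarios, profiles.
imports:
  - https://example.invalid/bombardier.yml   # placeholder for the shared Bombardier job

jobs:
  server:
    source:
      localFolder: ./hello            # upload this folder to the agent
      project: hello.csproj
    readyStateText: Application started

scenarios:
  hello:
    application:
      job: server
    load:
      job: bombardier
      variables:
        serverPort: 5000
        path: /

profiles:
  local:
    jobs:
      application:
        endpoints: [ http://localhost:5010 ]
      load:
        endpoints: [ http://localhost:5010 ]
```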
So then this one, a profile, will tell the tool where to deploy every job of my scenario. It defines that for the `application` job the endpoint will be this machine, my local machine, and the `load` one will be this machine too. The endpoint is the URL of the agent that will accept the job from the controller, and in my case, I just started an agent on my machine on port 5010.
If I open the Bombardier job file, just to show you: it's the same kind of job definition. It has variables that help define the command line to run, given their values, and the `serverUri` is the one that is used to target the application, to send HTTP requests to. So if I go back there, the server URI is composed, and the port will be 5000, which is the default one for an ASP.NET app. So now, from here...
What's happening now is that it's sending the first job to my local machine. Because I'm using local folders, it's uploading the local folder to the machine, and if I look on the agent, the agent got the source code, accepted the job, and is currently seeing that it wants to run on netcoreapp3.1. So it's downloading all the requirements for this app: the 3.1.5 runtime, the SDK; the latest runtime, the latest SDK, the latest ASP.NET, and the latest desktop runtime.
So here it's downloading everything and installing everything, so it might take some seconds to run. How it found this version: if you look at the .csproj, it was targeting two TFMs, netcoreapp3.1 and net5.0. So by default it says, OK, there are two of them, I take the first one. This is how it works. And then, which version? Is it the one currently built on the nightly feed of ASP.NET, or maybe an old one? No, no.
For that, there are feeds and metadata files online, on the GitHub repositories and NuGet and private feeds, that contain these versions. So in this case it found it was 3.1.5, and I will show you how it can detect more versions. So it's installing everything, and now it's running; you see the application is running, and it's starting to do something for Bombardier. Bombardier is also configured to use 3.1.5; actually, netcoreapp3.1.
I can see some stats going on; I'm not supposed to see that, it's supposed to run as a service on a machine. And it's stopping the jobs, and if I go back to my client, my controller, I see the results. So I can see what happened: job one, job two, which are the server and the load, the application and the load. Here I see the results from the application and the results from the load. What's common between the two is that the CPU percentage is provided.
I have the memory, the build time, and the start time, which is very important, to make sure how long it took to start the application. And then the load itself returns more important information, like the time to first request, how many requests were sent, how many bad responses, the min latency, the max latency, requests per second, and the max requests per second. That one is typical of Bombardier; wrk doesn't return you the max requests per second.
Bombardier samples during one second, every second, and returns a mean requests per second and a max requests per second. But when we chart it, we always use the average, the mean requests per second, which is how many requests divided by how long. In this case it was 144,000 requests over fifteen seconds. So that's everything we get from this run, and you see it was pretty simple; I just needed to run that.
If I were to change my local application, it would just send it again and re-benchmark it, right? So from there we can change the scenario to make it faster, because when I ran it, it was running a warmup phase and 15 seconds of measurements. I could do that either in the file, by changing some variables, because Bombardier has a warmup and a duration; so I could say a warmup of, like, two seconds, and a duration...
Okay, and it's completely open, because the idea is that Bombardier has arguments for this; if you know Bombardier, it's like wrk and the others, they all have the same parameters: the number of connections and the duration of the run. These are our own: you see warmup, duration, how many requests to send, and if there is a rate defined, pass the rate. So we are templating the command line to run on the server, and this way, from our job...
We can change all these variables; we can overwrite them. So there is a default, and we can change whatever we want from wherever we want, and this "wherever we want" is either our own file here, or even the command line. So if I don't want to do it for everyone, because the goal is to be able to share this file, I can just do it from the command line.
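An override from the controller's command line might look like this; the scenario, profile, and variable names follow the hypothetical config sketched earlier, so adjust them to your own file:

```shell
# Run the 'hello' scenario with the 'local' profile, overriding the
# load job's duration variable just for this run.
crank --config hello.benchmarks.yml \
      --scenario hello \
      --profile local \
      --variable duration=2
```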
For duration, that would just change the duration variable of the load job, but it would be easier for me to define a global variable, to say warmup equals zero. So there are many ways to define all these variables: on the command line, on the job, or as a global variable. That's the flexibility we have.
So we don't do anything here; it's up to your client and server to define how they will communicate. But, for instance, with wrk you can pass a script, and there are examples in the wrk repository of how to intercept every request, do a pre-flight request to get a token, and then assign this bearer token on every other request after that. So the idea would be to have your client support authentication. Another...
For example, you can create a job and then... do we have output variables, I guess, is the question I need to ask. So, to repeat: do we have support for output variables from jobs? No; but I think if you did have that, we'd just do the feature design on the call here: you could have a job that does a generic pre-flight request and then emits a token into an output variable, and then you could flow that into the next job.
For instance, here, and this is why I wanted to show it: if I pass `application.options.displayOutput`, because the application itself was outputting something, OK, I can configure it to stream the output of my application to my local machine. So imagine you don't have access, like I did here, to the agent; you can say, OK, show me what the app is outputting, and I see it live. I see my application starting, and I see what it's outputting.
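That option goes on the controller invocation; a sketch, with `--application.options.displayOutput` being the switch he names, paired with the earlier hypothetical config file:

```shell
# Stream the remote application's console output back to the controller,
# useful when you have no direct access to the agent machine.
crank --config hello.benchmarks.yml \
      --scenario hello \
      --profile local \
      --application.options.displayOutput true
```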
I can see it on my local machine. So yeah, somehow we could be able to extract some values and then use them for the next job; that would not be hard, actually. So here the job has run, and now it's building; this is for the next job. And if I Ctrl+C now, it will stop the client, and every agent will just kill the applications that they ran. But I won't do that; it should be quick. Now it's done.
You can see the output of every endpoint of your system. So that was the first demo. What I wanted to show now is how to run an actual thing on actual machines. So what I have here: I have three different machines, an application machine, a load machine, and a DB machine. This is an environment we use to benchmark our applications before we look at them on the CI.
So it uses these three machines, and each of them has an agent running. So instead of running my own local crank agent, I will use these ones now, and I will use a benchmarks sample, which is the app you were complaining about, the one that does everything, the beautiful app. And I have a sample like this. So I click through here, and if I open this file... yeah.
Now you understand how it works: it's importing the wrk job and the Bombardier job, so I have some choice here. We use wrk because it does pipelining, and that's very important for us because of TechEmpower. And here in the job I have the aspnet Benchmarks app, which is not a local app but something on GitHub — so we can also say: the app I want to benchmark is hosted on GitHub, on this branch, and this is the project, which is very interesting. And here are the variables for this app.
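A sketch of the configuration being described — the import URLs, repository, and property names follow the crank samples from memory and may not be exact:

```yaml
# Import reusable load-generator job definitions, then point the
# server job at a project hosted on GitHub rather than a local app.
imports:
  - https://raw.githubusercontent.com/dotnet/crank/main/src/Microsoft.Crank.Jobs.Wrk/wrk.yml
  - https://raw.githubusercontent.com/dotnet/crank/main/src/Microsoft.Crank.Jobs.Bombardier/bombardier.yml

jobs:
  server:
    source:
      repository: https://github.com/aspnet/benchmarks.git
      branchOrCommit: master
      project: src/Benchmarks/Benchmarks.csproj
```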
D
This app itself, when it starts, needs variables to be defined, and I'll explain that. It was kind of cringe-worthy, because we are still using all these parameters — which scenario, which transport, which server, which protocol, all these things — and this is the command line to start it up. This app can do many things, so it takes many things; we'll break it apart eventually, yes, and we pass the arguments.
D
We can also define environment variables, because we want to pass a connection string, and then we define a load job using Bombardier, with headers, hitting a specific path on a specific port. And now in the profiles I still have my local profile, but I also have one that is using the three machines I showed you: the application will go on the app machine, the load will go on the load machine, and the database on the one that is called database.
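That profile might be sketched like this — the profile name, job names, and agent addresses below are illustrative, not the team's actual values:

```yaml
# One profile fans the three services out to the three machines,
# each running a crank agent on its own endpoint.
profiles:
  aspnet-perf:
    jobs:
      application:
        endpoints: [ http://asp-perf-app:5001 ]
      load:
        endpoints: [ http://asp-perf-load:5001 ]
      db:
        endpoints: [ http://asp-perf-db:5001 ]
```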
D
Okay, three different agents. What's special about these machines is that they have a private network, a private LAN. From my network they are known by these public addresses, but internally they're on their own network, on which the web application machine is known as 10.0.0.102 and the database one as 10.0.0.103. So we are sure the network traffic will be local, without any interference from the public network.
D
I just need to verify I'm on the VPN — I'm on the VPN — otherwise I won't be able to access these machines. And the profile is the aspnet-perf one, so from the command line it's very easy to point at this configuration, select a scenario like fortunes, and select the profile — the one that will send it to these machines.
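The command line being described would look roughly like this (config file and profile names are illustrative):

```shell
# Run the TechEmpower-style fortunes scenario against the remote
# machines selected by the profile.
crank --config benchmarks.yml --scenario fortunes --profile aspnet-perf
```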
D
There is nothing — what is it returning from the DB? Oh, okay, I lost the connection; it's completely blocked. Okay, so the connection on this one was bad, but now the load has started. Also, as David mentioned, this URL points to what is currently running: if I open it, this is the state of the job that is running on this machine, so I can follow it, even retrospectively.
D
This is the latest public version, ASP.NET 5 preview 5 — this is what it used — and this is the result we get in terms of requests per second: 107 thousand requests per second on these machines. It's fortunes, you see; it's using the database, and it's on this machine. Now, let's say I want to run it on .NET Core 3.1. What I will do is say the application's framework will be netcoreapp3.1.
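Which, from the command line, might look like this — flag spelling from memory of the crank docs:

```shell
# Same run, but forcing the application onto .NET Core 3.1
# instead of the default framework in the config.
crank --config benchmarks.yml --scenario fortunes --profile aspnet-perf \
      --application.framework netcoreapp3.1
```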
A
Well, while it's doing this, a question about more distributed scenarios. I know in this case you've already got something kind of distributed — you've got a database — but I was even thinking about cases like Project Tye, where you've got more distributed things. I see a smile. Is that something you've looked at, larger distributed scenarios? Yeah.
C
The ones that we use: we have, for example, load generator, web application, database server; or we might have load generator, web application, and a back-end API that the web application calls; or you might have two load generators, a web application, and, you know, two Redis servers — there's no reason not to. We already run those types of scenarios, where we utilize multiple instances of jobs at various tiers, which is more distributed than what you're seeing here, which is effectively just client-server, right? So yeah.
D
So Ryan Nowak did some studying a few months ago: he was deploying some web APIs and testing microservices, so you would deploy different apps on different machines to have actual microservice benchmarks to run. And something that even David is looking at is being able to deploy N — N being a big number — machines. Yeah, that's the case, yeah.
B
But the idea is to install the crank agent on a bunch of ACI containers as clients, so I could spin up, like, ten ACI agents for client load and then have a big VM as my server application. I could try to max out the number of concurrent connections per VM by picking a VM size, doing a test run, and then keep doing that over and over, upping the size of the VM to get more load into it. Yeah.
C
They typically test for different outcomes, right. And, sorry — we used to have a tool in the IIS suite called the Web Capacity Analysis Tool, or WCAT, and it was very similar to what you're seeing here, but built on XML and Windows and native code, C++ modules, and it really only worked with IIS. This is, in some ways, a very modern version of that, vastly more flexible — and modern because it uses YAML instead of XML — but it can run anything, right; it really can run anything.
C
So, whether you're trying to say, hey, given a really big machine as a server, how many concurrent idle WebSocket connections can I connect — can I connect half a million WebSocket connections, or more? — well, think about the number of load-generating servers you would need to open that many outgoing connections: one big load-generating machine is probably not going to be either enough or available to you, right.
C
We want to get the server to the point just before it fails, effectively, and see how many connections it was at at that point; then we can take a memory profile at that point and learn what it is that's taking up that amount of memory for that many connections, and make improvements. So there are different scenarios, different styles of testing, but this orchestrator is generic enough that you can use it for both, right.
C
And that's what WCAT — the old one — was originally built for, and there's no reason why you couldn't stand this up against your actual application and then have jobs that represent your users, or different load types or user scenarios for your users, and then hit the server with a whole bunch of different, quote-unquote, live user sessions and see how it performs, right.
C
You can actually observe what happens in real time. So the system looks stable now, with a thousand clients performing a split of user-scenario tasks A, B, C — what happens if I add two thousand new clients who come in and try to do task D, because they hit a marketing banner on some website or something, right? That type of interactive load testing is the type of stuff that's often done in the real world for capacity testing of sites.
C
But if you wanted to do more of a user-level test — you know, VS has tools where you record a user web session and then play it back at scale against a server — those types of user testing, or acceptance or stress testing scenarios, this doesn't care about; it doesn't know about those things. It's just a job.
C
You would write a job — you either put that stuff in a container, or an exe, or it's just a dotnet project — and then you configure a job to run that, right. And so anyone taking crank could then integrate any other thing that generates the type of traffic, or does the type of behavior, that they want — including, as was shown before, capturing custom metrics: you can capture custom details from whatever your custom client is, and that would be collected for you by crank, because it has those generic capabilities.
D
Yes — I wanted to find it — here, you see: this is a gRPC file made by James, and he made custom client apps that generate gRPC traffic to benchmark the gRPC server. It's just a console application in dotnet plus a YAML file, and then from anywhere we can simulate that. And I think the gRPC team is using this client to test their gRPC traffic — or they copied some of that code to make their own client to test gRPC traffic, for performance reasons. So —
B
And you can imagine an ecosystem of jobs defined somewhere on GitHub that you could pull in for your own jobs, right? So if somebody made a gRPC client, or a wrk client, or Bombardier, you can reference their thing from GitHub directly in your script, without having to build your own client for everything. So yeah.
C
Or if you were building a WebRTC video application: as long as you can write an app that can create a client, or emulate client traffic, you could orchestrate the load testing of that environment using something like crank. It really is that layer underneath, and you get the flexibility to write the jobs in whatever language you want that actually executes the various things. So, yep.
D
So while you were talking I ran two jobs. I ran the JSON scenario that is defined in the same file, and I stored the results: to store a job's results I used the output parameter, and I saved the first one as json31 and the second one as json50. So it creates a file like this — if I open json31 —
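The two runs might be captured like this — the `--json` flag name is from memory of the crank docs:

```shell
# Run the same scenario on both frameworks, storing each run's full
# results in a local JSON file for later comparison.
crank --config benchmarks.yml --scenario json --profile aspnet-perf \
      --application.framework netcoreapp3.1 --json json31.json
crank --config benchmarks.yml --scenario json --profile aspnet-perf \
      --application.framework net5.0 --json json50.json
```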
D
It shows me here, for each job, everything that it computed. You also see the SDK version that was used, and the metadata, which describes every measurement it took, with a description, and then all the measurements — all the samples we took from every metric: build time, CPU, everything — and it does that for all the nodes of your deployment, specific to each role, from the hosts: first request latency, and so on. Okay, so that's stored, and this is also what will be stored in SQL.
D
If you decide not to use the JSON output but the SQL option — which lets you store the results in SQL Server, which we use for Power BI, for instance, or which you can use for charting — it will store the result as a JSON document in your database. And from that we can also decide to compare results: there is a command in crank which is called compare, and I can say json31 and json50, and it will just show me a table with all the values.
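So `crank compare json31.json json50.json` renders that table from the stored files. The core of such a comparison can be sketched in a few lines of Python — the metric names and values below are made up for illustration, not crank's actual result schema:

```python
# Sketch of what a results comparison does: given two per-run metric
# dictionaries (as crank stores in its JSON output), compute the
# baseline value, the new value, and the percent change per metric.

def compare_results(baseline: dict, candidate: dict) -> dict:
    """Return {metric: (base, new, percent_change)} for shared metrics."""
    report = {}
    for name, base_value in baseline.items():
        if name not in candidate:
            continue  # only compare metrics present in both runs
        new_value = candidate[name]
        change = (new_value - base_value) / base_value * 100.0
        report[name] = (base_value, new_value, round(change, 2))
    return report

if __name__ == "__main__":
    # Illustrative numbers, not real measurements.
    json31 = {"requests_per_second": 88_000, "mean_latency_ms": 2.9}
    json50 = {"requests_per_second": 107_000, "mean_latency_ms": 2.4}
    for metric, (old, new, pct) in compare_results(json31, json50).items():
        print(f"{metric:>22} {old:>10} {new:>10} {pct:>+8.2f}%")
```

Like crank's own output, the table is easiest to read with one metric per row once there are many metrics.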
D
It's like BenchmarkDotNet, but pivoted 90 degrees — because we have lots of metrics and they don't fit across the screen, it has to be vertical. So this is a command to compare already-existing measurements. But what you usually do is take a baseline: you run a benchmark and you output it as — let's call it — a baseline.
D
Let's say you are on the team and you change Kestrel every day. You run the application once to output a baseline file, and then you point at your local Kestrel build — your local files, say under C:\code\Kestrel — and it will upload those files to the agent so the benchmark runs against the bits you are debugging. So this is super useful for engineers, because they can test their local changes.
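That flow might look like this — the option name and path are hypothetical, following the crank samples from memory:

```shell
# Upload locally built assemblies into the job before it runs, so the
# benchmark exercises your private bits instead of the shipped ones.
crank --config benchmarks.yml --scenario plaintext --profile aspnet-perf \
      --application.options.outputFiles "c:\code\kestrel\artifacts\bin\*.dll"
```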
D
So that's the stuff our engineers use to track the improvements in ASP.NET. And by the way, if you look at the requests per second for the JSON scenario — which is the standard middleware version — between 3.1 and 5.0 — and this is 5.0 preview 5, not even preview 6 or 7, which is actually faster — you get 22% more requests per second on this machine. And you can see the build time as well.
D
D
We do that. So what I'm saying here is that I want the agent that is running the application to collect a trace, and these are the collect arguments I want to send to it while the application is creating the trace. These arguments are for PerfView, because I'm targeting a Windows machine, so the collect arguments would be PerfView arguments; if I was targeting a Linux machine, I could pass perf-collect arguments instead. So these are the native traces that we get with collect set to true, and you see it happening.
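As a command-line sketch — flag names approximate, from memory of the crank docs:

```shell
# Ask the agent hosting the application to capture a native trace
# (PerfView on Windows, perfcollect on Linux) during the run.
crank --config benchmarks.yml --scenario json --profile aspnet-perf \
      --application.collect true
```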
D
Then we have another option, which is dotnet-trace: I can set that option on the application, and then it uses the dotnet-trace tool to take the trace, and we'll have the managed trace. What happened here is that the server took the trace using PerfView, and the driver is now downloading the trace — that's why I have an ETL file here. Okay, and nobody needs to know how to start PerfView, how to collect, and how to stop it: here you just say, I want the trace.
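Likewise a sketch, with the flag spelling an assumption:

```shell
# Capture a managed trace with the dotnet-trace tool instead of a
# native PerfView/perfcollect trace.
crank --config benchmarks.yml --scenario json --profile aspnet-perf \
      --application.dotnetTrace true
```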
D
You should see the JSON serializer and the thread pool. So if I select the app — Benchmarks, which is the API we just ran — we can see it, and there is a middleware — this is just a middleware, okay — which is invoked, so we can then go there and see what's happening inside. So you can see: you just got a trace that you can then analyze. That's it for traces, and that's how you pass custom arguments. And David, what do we have on Linux?
B
There's perf; there's DTrace, which is more of a FreeBSD thing, but I think it got ported to Linux; and there's one more — I can't remember the name of it — bpftrace, I think, something else. So —
D
So if you just do that, it will grab all the System.Runtime counters — the default System.Runtime counters — and this is what we saw in the chat, with the GC allocations, GC heap, thread pool stats, lock contentions. It's super useful. The framework and the different libraries can provide whatever numbers they want to expose, for the app or the agent to track; every subsystem can define their own, and this is actually how we expose our own metrics too.
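The counters option being described might be invoked like this — option name from memory and possibly inexact:

```shell
# Record the default System.Runtime event counters while the
# benchmark runs; other providers can be listed the same way.
crank --config benchmarks.yml --scenario json --profile aspnet-perf \
      --application.options.counterProviders System.Runtime
```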
D
So here you see I have much more information: on the application itself I now have all these counters that provided data. It also gives CPU usage — which pretty much matches what was already tracked — memory usage, general GC information: allocations, heap size, exceptions per second. All these things are super interesting, and because each subsystem can provide its own counters, here I passed the Kestrel counter provider too — using the newest version, I think.
D
So you see here we get the ASP.NET Core counters — requests per second, which should match approximately what Bombardier sees — okay, and the number of requests. What's interesting is that ASP.NET counted 1.6 million requests and the benchmark said it just sent 1.1 million requests, so there's quite a gap. That's interesting, because when I ran with a specific number of requests, with wrk it's approximate, but if I use Bombardier it matches the number of requests exactly. So that's interesting.
D
Something also interesting: you see here I'm using 5.0, which is the latest public version. Let's say I want to use the latest nightly build of the ASP.NET Core runtime: I don't change the application's framework; instead I can change the channel, and here I can say edge and get the result using the latest available bits on the feeds, which might have been built just an hour ago.
D
This is how we change the version, and you can do the same thing for each framework — so, super useful. And not only can you define the channel to just get the latest version, or the current version, but you can also set a specific version. This is very important when you benchmark, because let's say you make a change in Kestrel: you don't want the runtime to change every time you run a benchmark. You want to use a stable — well, yeah, stable — runtime, meaning one that doesn't move.
D
So you run the benchmark once and copy-paste the version of the runtime, and then, instead of using edge for the runtime, you use that version, and you can then change only the component of the experiment that you want to measure. The goal is that when you do two runs, you're not running two different versions of the runtime just because a new build became available — otherwise you'd be measuring the wrong thing.
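A sketch of that pinning workflow — flag names approximate, and the version is a placeholder you would copy from the first run's results:

```shell
# First run on the moving 'edge' channel...
crank --config benchmarks.yml --scenario json --profile aspnet-perf \
      --application.channel edge
# ...then pin the exact runtime version it resolved, so later runs
# only vary the thing under test.
crank --config benchmarks.yml --scenario json --profile aspnet-perf \
      --application.runtimeVersion <version-from-previous-run>
```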
D
Thank you. And you see here: nightly is preview 7, a build from the 16th — which is today. So this is a fresh build of the runtime that we're using, and the same thing for ASP.NET. Okay, so it's super useful to track the performance of ASP.NET, and maybe also of your own apps — maybe you want to see what's the impact of the current ASP.NET nightly on my app. I know for us it's pretty cool, because that's how we track the things we want to track.
D
The old version — the old driver — supports that: the server supports it, the agent supports it, but the new driver doesn't know how to download the results right now. So I haven't posted that, but to show you: this chart here is still using the old driver, and the EF Core benchmarks — they are micro-benchmarks, they are BenchmarkDotNet benchmarks — and each result here is a class. So yes, it's supported.
D
We can run them using exactly the same arguments, changing the frameworks and so on — it doesn't make sense in every case, but yes, you can run benchmarks continuously using BenchmarkDotNet, and this is the kind of result you can get. You get the same results; it's just that the way they wanted to show it was as a chart or a grid.
D
You could also have a tree view with all the classes and all the methods and just check the ones you want — that's supported, but not in the new controller: crank itself doesn't support it yet, but the crank agent does, so we just have to port every feature over to the new controller. Something I didn't mention that is super useful for microservices is that you can restrict CPU — CPU sets — and memory for the application on the server.
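Which might be expressed like this — option names approximate, values illustrative:

```shell
# Constrain the server application to two cores and a 256 MB memory
# cap, emulating a small microservice footprint.
crank --config benchmarks.yml --scenario json --profile aspnet-perf \
      --application.cpuSet 0-1 \
      --application.memoryLimitInBytes 268435456
```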
A
We're about at time, and I realize you have a thousand different things to show off here — this is cool — but some kind of wrap-up questions. There's one: I had mentioned the microservice case and Tye as an option, and there's a specific question — can you use this directly with Tye, or is it more like, well, you could configure your environment with Tye and then you'd wire this up? Neither?
A
Yeah, okay, this is super wonderful; this is really cool. I love the side of it where I know that my applications keep getting faster magically, because I picked a winning web framework — so that's cool — but it's also neat to think about the opportunity to benchmark my own applications. So, very good stuff; job well done. Alright, let me see — can you unshare your screen and we can all wave goodbye? Oh —