From YouTube: ASP.NET Community Standup - August 13th 2019 - Performance and Benchmarks with Sébastien Ros
Description
Join members from the ASP.NET teams for our community standup covering great community contributions for ASP.NET, ASP.NET Core, and more.
Community links for this week: https://www.theurlist.com/aspnet-standup-2019-08-13
B: But some of the things that are nice in here: it talks about how you can set them up classic or via YAML, and it also explains some stuff — there's a walkthrough of setting up build agents, both on Windows and on Ubuntu. And as I was going through it, I was thinking: could you potentially set things up via a Docker image?
B: And I love this — that kind of Kanban-style board. So it starts by looking at what's in the HTML5 spec, but then also shows the kind of wrapper, the API around it, in Blazor, which is cool. I haven't looked at that, honestly.
B: Alright, Dody G, who you may remember, has his Practical ASP.NET Core repo — he has hundreds of samples and he keeps them updated. You can see this one was updated nine hours ago. And what's nice here is these are for all the different versions — you know, 2.1, 2.2, 3.0. Oh, now, this one I thought was interesting.
B: This is gRPC server-side streaming. Setting this up — in the middleware here, it's basically running a loop that just writes out "hello", and it writes it out continually, and then in the client it's just handling the gRPC stream response.
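The pattern being described looks roughly like this — a minimal sketch in the style of the grpc-dotnet docs; the service and message names here are illustrative, not taken from the sample being shown:

```csharp
// Server: a gRPC server-streaming method that loops, writing "hello"
// to the response stream until the client disconnects.
public class GreeterService : Greeter.GreeterBase
{
    public override async Task SayHellos(HelloRequest request,
        IServerStreamWriter<HelloReply> responseStream, ServerCallContext context)
    {
        while (!context.CancellationToken.IsCancellationRequested)
        {
            await responseStream.WriteAsync(new HelloReply { Message = "hello" });
            await Task.Delay(TimeSpan.FromSeconds(1), context.CancellationToken);
        }
    }
}

// Client: iterate over the server's response stream as it arrives.
var call = client.SayHellos(new HelloRequest());
await foreach (var reply in call.ResponseStream.ReadAllAsync())
{
    Console.WriteLine(reply.Message);
}
```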
B: Oh yeah, one thing I like about these samples is they're very lightweight. Sometimes you'll see a sample and it's mixing a bunch of different stuff together, and these are great from the point of view of "I want to learn just one thing at a time" — this shows how to do that. So here's the client side: he's got a client and a server talking gRPC to each other. Nice.
A: We saw a demo, actually — we just had an all-hands meeting for the ASP.NET team, which we came from right before this, and James Newton-King dialed in from New Zealand and gave a demo of exactly this. It was server-side and client-side gRPC streaming, while observing the current metrics of what's going on.
B: All right, here he's writing up a post — just a quick look at a difference in Startup.cs — and really what he's pointing out is the app.UseRouting: this is kind of the endpoint routing, and this is configuring app.UseRouting in the Startup. So he's just talking about what's different there. Honestly, a lot of the time — this stuff is changing all the time — I was kind of like, wait, when was this added? Was this possibly in 2.2? No, and—
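For context, the 3.0 endpoint-routing setup being discussed looks roughly like this — a standard 3.0-style Startup, not the code from the post itself:

```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        // UseRouting makes the routing decision; middleware placed after it
        // can inspect the selected endpoint before it runs.
        app.UseRouting();

        // UseEndpoints executes the matched endpoint.
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllers();
            endpoints.MapGet("/", async context =>
                await context.Response.WriteAsync("Hello World!"));
        });
    }
}
```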
A: Yeah, that doc was originally written by a principal engineer here at Microsoft who works on our customer and partner engagement team. So he spends all his time working with real-world customers who are migrating their apps to ASP.NET Core — typically large enterprise customers, but not always — and this was based on what he had seen and what he had experienced in actually doing that type of work with customers. So shout out to Mike there — he actually approached us and said—
B: Very cool. And like I said, this is a high-level overview, and then it digs down deep into some other stuff in here. So, really good. Like you said, it's people that have done this, and it's guidance based on real-world experience. Very good. Well, as always, I'll be sharing out this link — and that's it, I am done, so I'm going to stop the sharing. Nice.
A: Going back to the dotnet blog and the ASP.NET blog, you'll see a number of posts: Preview 8 is now available. As I said in last week's standup, Preview 8 is a go-live release, like Preview 7. It's the new one — a bunch of bug fixes in it, some new APIs, and last-minute API changes coming through, especially for things like Blazor and EF Core, which had its own body of work happening towards the end of the release. And I also said, if you are really, really waiting—
A: —if you want, like, a true RC — "Damian, give me the real release candidate" — Preview 9 is the one I would wait for, because there's still a body of work that will be happening in Preview 9. Someone was asking: does Damian have double the loudness? No, I'm just a lot closer to the microphone than everyone else, and I'm projecting my voice.
A: Just update the preview channel and you'll get the latest stuff. Or you can go and download the SDK separately — which is what's being shown right now, from the SDK download page — and you can do that as well as getting the one inside Visual Studio; it's totally fine. Or, if you're using VS Code or a different editor, you would go and download—
A
You
download
the
SDK
separately,
like
you
have
been
used
to
doing,
but
if
you're
on
the
preview
channel
4
vs,
which
I
we
talked
a
little
bit
about
last
week,
just
to
update
it
today,
there's
a
new
version
today,
I
guess
it's
16,
3
preview,
I
think
I
got
that
right.
I
think
was
preview
2,
which
includes
300
preview,
8.9
core
3,
so
go
and
check
that
out
and
the
blog
post
covers
a
bunch
of
changes
that
are
in
preview,
8
right
back
off
the
tangent
back
into
performance
into.
C: This is an update. Yes — so we have a bunch of services that can run on machines and let us benchmark applications, anything, and we use these services to measure ASP.NET continuously and chart the results. If you want to see all these graphs, the public URL is aka.ms/aspnet/benchmarks, and then you get access to these charts.
C: The physical infrastructure is a replica of what TechEmpower is doing: we use the same machines, exactly the same network — same switch, same servers — and for the cloud it's Azure VMs, the same thing they can provision. This way we can see the changes live while we make changes on ASP.NET and CoreFX and so on.
C: This is the 3.0 branch — the master branch — and this is to see what we are doing continuously, until we start working on the next version after 3.0 and then we track the next new thing. We see here some versions of plaintext, versions of JSON, of database updates using EF, Dapper, raw ADO.NET — the Fortunes and database benchmarks. Usually we don't really look at this one, because it's lots of data, but we can see some trends, and that's interesting — you can see, like: oh, it's going down, what's happening?
C: Well, it took us a month to figure out what was happening, and by chance we figured it out — but that's part of another chapter. We can also see some things jumping — oh, what's happening? That's cool, right? But we'll see why it's not as good as it looks. And we can zoom, so: some drops here, some jumps here, and I will explain what happened and why.
C: So on this page you have the TechEmpower ones, but the big interesting things, I think, are the custom pages, where you can follow every single benchmark individually or compare them, like in this case. Here is one — but let's say we look at plaintext, the most famous one; it's the fastest. We can see here, with the filters, that we can compare Linux and Windows concurrently: we have two of the same physical machines, we run the benchmarks on the two machines, and we graph them.
C: So we can see the evolution of the differences between the two, and with that we can also check on the cloud and on ARM. We can also compare two different benchmarks — if I Ctrl-select another one, I can then focus on a single OS and see the difference between plaintext and plaintext non-pipelined. So you can compare two scenarios if you want, and that is useful. What's also very useful for us is that we can track many—
C: —metrics. Like the time to first response, which is very important for application startup — it's split into two different numbers. The blue one is the startup time of the application, so the time from starting the process to the server being available, and the gray part is the first request.
C: We can track the CPU, the number of bad responses, also the errors. And what is very new for us, since a month ago, are all these metrics — like GC metrics: Gen 0 GC counts, the average number of collections per second. We can track the size of the GC heaps — Gen 0, Gen 1, Gen 2, and the large object heap — the size before GC, and the allocation rate.
C
How
much
allocation
do
we
do
per
second
on
the
ground
on
average,
meaning
that
when
we
do
a
runny
trance
for
15
seconds,
so
this
is
the
average
allocation
rate
over
all
these
15
seconds.
The
time
in
GC,
lock,
contention,
exceptions
the
threat
with
friends,
the
cute
lend
the
queue
length
of
the
fret
rule
and
the
fretful
items
actually.
C: Even on the plaintext numbers, which are the fastest we can get, we didn't see any change in the numbers from enabling that. So what we do now is we just enable it all the time and we track them — super easy. And what's also super interesting is that the counters can be tracked out of process, so you don't need to run the code in the app.
C
You
can
connect
the
main
types
to
the
process
to
different
process
and
get
all
these
metrics
outside
of
your
main
app,
so
that
pillows
for
making
site
card
apps
that
you
could
run
aside
your
application
to
track
release
number
us,
live
and
store
them
or
just
show
them
in
a
graphs.
If
you
want
in
a
graph,
you
want
as
a
side
app.
So
let's
say
you
can
so
that
super
efficient
to
create
your
own,
a
matrix
application
to
track
your
application
and.
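The out-of-process mechanism described is the EventPipe that dotnet-counters itself uses; a sketch of consuming the runtime counters from a sidecar process, using the Microsoft.Diagnostics.NETCore.Client and TraceEvent packages (the target PID is an input; this is illustrative, not the team's tooling):

```csharp
using Microsoft.Diagnostics.NETCore.Client;
using Microsoft.Diagnostics.Tracing;
using System.Diagnostics.Tracing;

// Attach to a running .NET Core process by PID.
var client = new DiagnosticsClient(targetPid);

// Ask the runtime to publish its EventCounters once per second.
var providers = new[]
{
    new EventPipeProvider("System.Runtime", EventLevel.Informational,
        arguments: new Dictionary<string, string> { ["EventCounterIntervalSec"] = "1" })
};

using var session = client.StartEventPipeSession(providers, requestRundown: false);
var source = new EventPipeEventSource(session.EventStream);
source.Dynamic.All += traceEvent =>
{
    // Counter payloads arrive as "EventCounters" events:
    // gen-0-gc-count, gc-heap-size, alloc-rate, threadpool-queue-length, ...
    if (traceEvent.EventName == "EventCounters")
        Console.WriteLine(traceEvent.PayloadValue(0));
};
source.Process();
```

`dotnet-counters monitor -p <pid>` is the ready-made CLI wrapper over this same session.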
C: So that's why I wanted to mention a new thing. We also have these KPIs — a Windows one and a Linux one — and this is what is always displayed in Damian's office. People usually track their own stocks every morning — is my stock better, green, red, orange, whatever — but Damian tracks these ones. Maybe he also tracks stocks, but anyway.
C: Yeah, I remember — I watched the video just to see what we should show. Right now we have, like, hundreds, and that's an issue because — it's super cool, but every little widget here is a Node service on the server side that has to return something, so it's super slow at first. That's why we had to split it into Linux and Windows; we can't have them both on the same page.
C
Thank
you,
but
so
what
we
track
is
the
trend
over
three
days
of
each
major
benchmark,
but
we
measure
that
we
see
here
the
trend
over
ten
days
to
see
if
there
is
a
slow
regression
for
ten
days.
There
is
one
okay
and
then
we
track
also.
We
used
to
track
the
servicing
risk,
but
well
anyway,
she
right
now
we
track
the
latency
something
we
track.
The
changes
in
obvious
from
three
zero,
two
two
two,
so
we
can't
compare
what
will
be
sitting
three
zero
versus.
A: The reason we break it down over three days and ten days is because, as Sébastien said before, you want to know if there's a difference beyond a tolerance, because there's always jitter within the environment. Are we going down slowly over time? The sudden drops we'll see very easily; or is it something that happens quite suddenly? So by—
A
Between
three
days
10
days
and
the
last
ga
release,
we
can
very
quickly
look
at
the
comparison.
The
third
row
over
here
is
the
one
that
most
folks
tend
to
be
interested
in,
which
is
how
fast
are
we
compared
to
the
last
release,
which
was
2.2?
Now
you
have
to
zoom
in
to
see
the
two
to
number.
It
is
there,
because
this
is
designed
to
be
I
view
this
on
a
75
inch
TV
in
my
office.
So
that's
what
Sebastian
has
done
here,
and
so
you
should
see
that
one
there.
C
Forty
percent
improvement,
so
this
is
on
Linux,
so
we
argue
numbers
for
Linux
and
windows
on
the
next
region
from
forty
percent.
Forty
percent
improvement
in
plaintext,
sixteen
percent
aluminum
Jason,
twelve
percent,
forty
two
and
this
pain
text
and
point
routing.
We
don't
have
gems
because
it
wasn't
available
into
two
right,
but
we
have
reverse
fold
with
Europe
and
the
numbers
four
three
zero.
C
If
I'd
check
here
on
the
last
relays,
we
point
five
billion
out
of
0.2
for
middleware
without
any
routing
this
one,
and
you
see
eight
percent
on
Fortune's,
a
dotnet
with
15
percent
of
14
percent
on
year,
HTTP
traffic
plus
twenty
nine
percent
and
JSON
H,
yes,
+,
20
percent.
These
are
all
good
numbers
and
then
latency
goes
down.
For
instance,
in
plaintext,
96
percent
doesn't
mean
anything
is
from
24
min.
Second
one.
Second,
the
issue
is
latency.
Is
that,
depending
on
the
load,
you
saying
you
can
have
very
bad
latency
latency?
C: What's important here is that these numbers are run side by side. So what we do is: we run the numbers three times with the 2.2 framework, and then right after that we run the same scenario on the 3.0 framework. It's not like, because of time constraints, "oh, we're running 2.2 today and the other one the day after" — no, they are very close in time, so we don't pay for any physical issue, or connection issue, or — we never know — an OS patch being applied.
C
So
these
are
very
important
and
they
are
stable.
So
we
run
them
three
times,
so
we
have
a
stable
number
that
we
can
trust.
You
mentioned
that
latency
was
very
good.
I
agree,
but
what's
more
important
here
is
memory?
Look
at
that
if
we
take
a
nap
on
to
point
to
the
plaintext
pad,
so
it's
just
a
middleware
that
returns
head
over
on
these
kind
of
machines,
because
this
is
very
important.
C
Jason
Jason
is
much
educating,
so
the
407
you
see
so
you
see
here
the
1.8
default
the
same
as
Pentax.
It
mean
it
was
a.
It
was
committing
some
memory
more
than
it
needed
now.
It's
committing
less
and
committing
as
you
needed.
So
in
this
case,
justin
is
allocating.
So
we
have
four
four
nine
seven
four
hundred
megabyte
allocating
the
committed
memory
but
much
lesser
than
the
one
degree
actually
out.
C: All the numbers are the same: 60%, 93% — and if we go over all the scenarios, they all take much less memory. That's just unbelievable. And this is a machine with 28 logical cores, so the more cores you have, the bigger the improvement you see. I remember the discussion we had about that — I think it was from Nick Craver, from Stack Overflow; they had something like sixty cores, and—
C
No,
no,
maybe
the
kind,
the
same
amount,
of
course,
but
they
also
had
a
lot
of
memory
and
a
simple
console
hat
was
sitting
a
3d
video,
two
or
three
gigabyte
of
memory,
and
I,
why
is
that?
Because
I
need
to
start
then
AB
Smiley's?
Why
are
they
all
taking
three
gigabytes,
not
doing
the
thing?
They
should
not
need
that,
and
it
was
that
so
now
this
should
be
much
better
meal.
Visual
well.
A: It was — no, it was the algorithm used by the runtime to pre-reserve... sort of, the amount of space it was going to use was based on a tweak made a long time ago, when dual-core processors became more popular — it was something along those lines. It had not really been looked at in a very, very long time, so the change itself is actually very small. And then we also added new knobs, and you can go in and change a bunch of those limits.
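The "new knobs" being referred to landed in 3.0 as GC runtime configuration options. A sketch of what opting into smaller limits looks like in a project's runtimeconfig.json — the setting names are from the 3.0 GC options, while the specific values here are purely illustrative: `System.GC.HeapCount` caps the number of server-GC heaps (instead of one per logical CPU), and `System.GC.HeapHardLimitPercent` caps committed GC memory as a percentage of physical memory:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.HeapCount": 4,
      "System.GC.HeapHardLimitPercent": 30
    }
  }
}
```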
A
And
so
because
what
happens
is
you
know
the
runtime
boots
up
and
says?
Oh
I
need
400
megabytes
of
reserved
space
in
order
to
create
my
GC
hips.
But
then
it
divides
that
by
the
number
of
logical
CPUs
available
in
the
system
and
on
a
server
like
this
somewhere
you've
got
28
logical
CPUs,
because
it's
14
core
with
hyper-threading.
It
would
then
literally
try
and
create.
A
Why
split
a
hundred
over
sixteen
or
twenty
eight
cause
that
doesn't
make
any
sense
right,
and
so
that
was
really
the
big
change,
and
so
because
the
plain
text,
a
test
in
three
hours
so
Sebastian
said,
is
basically
non
allocating
once
the
connections
connect
and
we
only
use
or
is
it
seventy
six
megabytes
of
working
set
for
the
entire
process
for
running
the
millions
and
millions
of
requests
that
go
through
that
you
don't?
Why
reserve
the
one
and
a
half
gigabyte
that
it
used
to
it
doesn't.
C
To
GC
and
when
to
stop
rotating
so
and
also
it,
the
idea
is
that
if
you
have
64
gig
of
memory,
well,
it's
okay
to
reserve
one
game.
You
have
lots
of
lots
of
memory
still
available,
but
now
that
has
and
the
start
of
timer
so
very,
very
much
improved
by
50%
of
all
scenarios,
even
on
the
slowest
one
which
is
based
on
EF.
Because
yet
is
a
big.
You
know
to
JIT
and
still
45
percent
improvement
since
at
a
time-
and
these
are
mostly
or
so
due
to
tier
GT,
which
is
owned
by
so.
C
I
had
a
lot
of
work
on
specific
issues,
because,
with
this
bench,
you
can
see
explicitly
the
differences
between
windows
and
networks,
our
dams.
They
don't
have
both
boxes
available
to
bench
models
and
they
usually
most
of
it
of
them,
uses
Windows
and
it's
harder
to
then
benchmark
on
the
next
weaker
upset
of
VM,
and
you
don't
see,
obviously,
and
with
these
benchmark,
it's
much
more
abuse.
C
File
this
issue:
that's
my
job
fighting
issues,
okay
and
so
I
said.
Why
is
that
thing
slower
on
the
next
and
window?
It's
raining
the
scenario
and
without
we
can
also
take
Tracy's
that
I
share
with
the
devs,
even
all
the
proof
that
there
is
an
issue
and,
and
then
people
cared
and
are
like
okay,
so
Adams
ethnic.
C
You
all
know,
because
you
work
in
a
batch,
wonder
net
and
other
things
decided
to
track
that
and
found
the
reason
and
found
the
fix
and
fixed
it
and
then
use
the
same
thing
to
show
the
improvements
like
before
after
I'm.
Fine,
it's
sorry
at
some
point,
I
will
and
it's
like.
Oh
look
now
we
have
the
same
members
on
Windows
and
Linux.
It's
awesome
like
from
20
thousand
two
hundred
ten
thousand
and
and
then
I
wish
is
change.
C
I
could
approve
and
say:
okay,
it's
fixed
done
its
merged
and
look
at
that
before
forty
thousand
requests
per
second
on
a
specific
scenario
after
two
hundred
forty
requests
per
second
just
trying
to
in
point
this
specific
issue
for
a
specific
and
why
I
I
find
that
this
environment,
or
in
our
case,
it's
because
we
were
looking
at
the
data
updates
taken
for
our
benchmark
and
if
I
go
back
on
the
data.
That's
taken
revenge
wall
which
I
will
select
from
this
tab.
We
will
obviously
see
issue.
C
That's
the
one
I
want-
or
maybe
it's
not
that
old
interesting.
Why
is
that
check?
But
I
have
it
on?
Yes,
this
one?
Okay,
so
look
at
before
this
job.
Okay,
here
super
slow,
dapper
like.
Let
me
see
two
thousand
requests
per
second,
something
like
that
and
here
thirteen
thousand
requests
per
second
for
the
edge
of
the
net.
So
this
is
ad
the
net
to
top
is
ATO
dotnet,
and
this
is
data,
doing
lots
of
updates.
This
is
the
tech,
Emperor,
henchman
and
Trust,
and
it
has
always
been
super
slow.
C
Even
better
Alice
could
not
make
it
faster,
wise,
so
I
found
the
issue
that
was
this:
an
environment,
culture
comparison
and
on
Linux,
and
then
this
is
when
the
fix
code
much
incorrectly,
so
that
was
that
solve
the
issue.
There
was
one
of
three
attempts
to
fix
some
things
with
the
GC,
but
then
this
one,
it's
dairy.
It
was
really
a.
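The class of issue described — a hot path doing culture-sensitive string comparisons, which are far more expensive than ordinal ones because they go through the native globalization (ICU) code, especially on Linux — and the usual shape of the fix (the dictionary here is illustrative, not the actual Npgsql code):

```csharp
// Culture-sensitive comparisons call into native globalization code on
// every lookup — very costly when it happens per request.
var slow = new Dictionary<string, int>(StringComparer.CurrentCulture);

// Fix: use an ordinal comparer when keys are exact strings, such as
// connection-string or prepared-statement cache keys.
var fast = new Dictionary<string, int>(StringComparer.Ordinal);

// The same applies to direct comparisons:
bool equal = string.Equals(left, right, StringComparison.Ordinal);
```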
C
Remember
which
itself
all
parameters
which
itself
did
I,
get
it
to
the
benchmark
driver
to
the
radio
and
cracker
story,
which
is
in
this
case
and
PG
sequel,
and
that's
where
everything
went
bad
because
also
NPCs
well
as
big
connections,
big
collections
of
of
caches-
and
that
was
triggering
a
big
issue.
So
it
has
been
fixed
and
now
this
is.
How
is
the
area
that
is.
C
It's
weird,
but
that
word
does
some
cache
is
maybe
that
I
have
to
check
okay,
but
there
are
big
things
happen
here.
So
this
change
you
see
this
is
on
the
19th
of
June
and
fix
was
merged,
Aranda,
okay,
this
is
when
I
figured
out.
It
was
good
and
not
only
that,
but
if
I
take
another
benchmark
like
the
httpclient
factory,
which
is
just
a
measurement
of
hte
plant
performance
under
the
19th
of
June
boom.
Look
at
that.
So
this
is
Linux.
This
is
like
127
thousand
requests
per.
Second,
that
goes
to
193.
C
This
scenario
is
about
creating
concurrent
connections
to
a
proxy
server,
so
it's
like
a
prototype.
It's
a
proxy
server
that
is
evaluated
to
see
how
many
requests
we
can
forward
to
other
servers.
It's
like
a
micro
service
benchmark
to
bench,
mom
used
to
be
client
itself
and
HP
client
OHT
become
factory
are
the
Samsonite,
but
on
Linux
this
is
the
junk
that
that
went
in
just
be
caliber,
and
these
should
get
on
in
3-0.
B: Really interesting. I've been looking at the actual files changed and stuff, and it comes down to — I mean, of course it would have to — but it comes down to native implementations and, like you said, stuff like the globalization classes and things like that. So by filing these bugs, they fix these benchmarks, but they fix stuff all throughout the stack as well — anyone that's using this, in this case the globalization stuff, it helps everybody.
C: So sometimes we just have the issues, but we need to find someone to be able to fix them — that's the beauty of filing bugs: the issue gets recognized, and maybe it also fixes some other things. HttpClient connection close, for instance — it also improved the connection-close scenario, which is when we actually don't keep the connection open: we close it right away, as many connections as fast as we can — and yeah, that was on Linux too. Also, Adam has been working on a new template for the benchmark infrastructure for CoreFX, because from his work it's like—
C
We
need
to
use
these
services
to
test
concurrent
access.
So
the
issue
was
some
looking
contention
in
the
environment.
We
know
there
was
a
lock
somehow
and
we
could
only
see
the
perfect
pact
on
asp.net
because
that's
where
we
do
benchmark
with
concurrent
loads
and
where
the
looking
will
be
I
will
have
a
been
impact
on
the
on
the
proof
when
use
benchmark
dotnet,
you
find
your
road
birth,
but
it's
harder
to
find
what
the
next
in
terms
of
contention.
C
He's
been
working
on
with
this
specific
crypto
API
that
had
some
officials
and
he's
been
working
on
and
using
this
sample
to
measure
the
impact
on
new
concurrent
role
and
to
see
if
it's
faster
yeah.
That's
that's
what
he's
doing
so
he's
trying
to
improve
the
infrastructure,
also
to
make
it
simpler
for
that
core
effects
that,
in
particular
to
to
make
faster
code
elimination.
C: Same thing — you have to opt in for this specific transport, and you see these are the fastest results we have in plaintext for 3.0, and it can run most of the scenarios on Linux. So Stefan updated the code for 3.0, and now it's ready — ready just soon enough for us to integrate it in the 3.0 branch of the TechEmpower repo, because we merged—
C
Me
this
one,
so
yesterday
we
merged
video
with
zero
three
to
seven.
Well,
now
it
will
be
pretty
right
because
it's
boring,
and
so
the
changes
mean
that
we
are
using
now
the
3
0
based
the
cream
cheese
and
updating
that
were
updating
and
post-race
equal,
but
we're
so.
We
also
removed
ETF,
HSN
and
sponges
on
benchmarks,
because
we
are
using
system
text
JSON
now,
which
is
as
fast
as
these
guys.
We
don't
need
them
anymore.
Thank
you
very
much
and
an
update
to
the
database
updates
to
make
it
also
with
the
other
benchmarks.
C
I
need
to
find
it
occur
images
to
show
you.
This
is
a
change
now,
instead
of
using
the
SDK,
we
are
based
on
the
3-0
SDK,
which
is
colla.
That's
there
and
the
r-tx,
our
HDX
benchmarks
have
been
temporarily
removed
because
we
were
waiting
while
still
waiting
for
the
peer
from
different
to
be
merged,
because
this
is
the
change
to
make
it
compatible
to
3
0
right
now,
too
many
breaking
changes
for
this
network.
So
we
are
team
for
that,
and
then
we
will
reintroduce
the
reddit
transport
on
the
next.
C: If I go back to my notes, there is one thing about the dotnet counters I need to mention — I will show you a specific example Damian mentioned earlier, which is the endpoint routing plaintext value. Remember, there was a dip here — yeah, that should go back up tomorrow; the fix has been merged. And you can see here there is also a jump in terms of memory — I will take Windows, in terms of memory usage.
C
There
is
a
jump
committee
member,
so
there
was
a
change
that
was
more
like
80
and
you
can
see
because
it's
flat
and
then
we
have
lots
of
GCS
and
the
allocations
here
was
almost
nothing
lot
here
and
yeah.
So
that's
something
we
will
fix
that
well,
yeah.
The
thing
I
wanted
to
show
actually
doesn't
show
up
allocation
array.
No,
this
here
at
which
you
see
it's
flat.
C
There
is
no
Roger
Jackie
allocations
and
then
we
have
so
actually
it's
not
because
we
have
some
now
that
it
means
we
didn't
have
some
before
the
donut
counters
heap
size
is
get
only
updated
if
there
is
a
garbage
collection-
and
there
was
not
enough
memory
pressure
before
to
get
a
garbage
collection,
not
at
all,
which
means
the
GC
hips
here
might
not
be
0.
And
here
what
we
see
in
terms
of
for
the
larger
heap
not
have
been
always
the
vendor
the
same,
but
is
just
that
because
there
was
no
garbage
collection
triggered.
A: What happened is, this is a regression. So what we're saying is that a fix was checked in for a certain behavioral issue whereby — if you were throwing exceptions in your view... like, I don't know if you said this, I didn't quite catch it, but going all the way back to what caused the issue: if you were to throw an exception from a Razor view, you wouldn't see the error page, right?
A
Page
middleware
that
caused
this
regression,
because
just
the
state
machine
allocation
itself
was
enough
to
tip
this
one
over
into
this
regression.
So
the
the
fixed
of
that
went
in
yesterday,
I
think
and
will
be
in
preview
9,
and
so
they
changed
how
they
did
it.
So,
rather
than
allocating
a
new
state
machine,
they
end
up.
Adding
this
thing
to
be
tracked
and
there's
an
event
at
the
end
of
the
request,
essentially
where
the
host
will
look
at
things
that
are
registered
there
and
you
know
dispose
of
them
or
whatever
it
is.
A: First of all, these aren't really micro-benchmarks, because they're not running in a harness like BenchmarkDotNet, where we're running one method a million times. We're literally running a full server, driven from another full server, through a full networking layer — this is real load testing, albeit for very specific scenarios — and sometimes a very minor change will regress a specific scenario significantly. Some of the scenarios are very sensitive to memory allocations, because we've done so much work in the framework layer to get this—
A
The
the
reality
is
that
if,
if
you
did
want
to
do
what
I
just
said,
if
you
want
to
have
a
certain
routable
endpoint,
that
was
highly
cached,
so
that,
for
you
know,
given
30
seconds,
you
were
always
returning
the
same
precache
response.
Well,
guess
what?
If
we're
already,
if
the
framework
is
always
allocating
a
buttload
of
objects
and
hundreds
and
hundreds
of
objects
or
whatever
it
is,
every
request
before
you've
even
had
a
chance.
A
There
is
an
absolute
ceiling
as
to
what
you
can
get
with
regards
to
that
performance
when
you
do
go
and
do
the
work
and
your
hot
pile
is
to
make
that
really
really
fast.
And
it's
not
gonna
be
anywhere
near
what
you
want
to
be,
if
we're
already
stealing
all
those
CPU
cycles
and
GC
cycles
for
you.
So
our
goal
is
always
to
get
these
things
as
low
as
they
possibly
can
be,
so
that
you,
the
app
developer,
can
use
those
cycles
instead
of
the
framework.
A
I
can
see
what
they
did
some
of
them.
You
can
see
that
they're
optimized,
like
we
did
some.
We
might
have
turned
your
thing
off
that
you
don't
this
scenario
doesn't
require
and
there'd
be
nothing
wrong
with
you
doing
that
in
your
app.
For
that
end
point
because
you
don't
need
that
processing
yeah
everything
has
a
cost.
A
Every
time
you
write
a
lot
of
code
you're
paying
for
it
somewhere
whether
engineering
startup,
whether
it's
doing
execution,
whether
it's
you
know
whatever
it
might
be,
and
you
can
always
do
things
in
your
code
to
work
around
those
things
if
it's
worthwhile
for
that
particular
path,
you
know,
as
NIC
very
vol
ways
like
to
say
it's
all
about
trade-offs.
We
talked
about
it
last
week,
most
of
our
jobs
as
developers
and
product
owners
and
managers
is
about.
Is
this
trade-off
with
it?
B: But what this is all showing is that a lot of the time you don't think about the performance trade-offs of what you're coding — you're focused on solving developer scenarios. So by having this in your office when you walk in in the morning, in front of everyone, it's something you track. Yeah.
A
As
we
know,
we've
we've
got
lots
of
fantastic
help
from
the
community
to
make
stuff
fast.
Something
about
performance
in
this
world
pushes
people's
buttons
a
lot
of
people
like
to
hear
about
it,
they'd
like
to
get
involved
and
they'd
like
to
see
that
we're
improving
for
no
cost
to
them.
Generally,
we
make
these
improvements
from
release
to
release,
and
all
you
have
to
do
is
upgrade,
and
you
see
that
benefit
now.
A
They
said
I
deployed
it,
I
updated
my
blog
from
200
to
and
my
home
page
millisecond
response
time
improved
by
30%
they
hadn't
done
anything
all
they
did
was
upgrade
the
framework,
and
that's
because
we
had
put
the
effort
and
we
weren't,
focusing
on
their
blog,
we're
focusing
on
plain
text.
We
were
focusing
on
Fortune's,
but
that
transposed
directly
into
people's
real-world
applications
and
it's
just
a
continual
rabbit
warren,
an
onion
layer.
You
know
onion
layer,
peeling
exercise.
A
We
do
this
release
to
release
based
on
the
feedback
that
we
get
and
we
you
know
we
make
those
trade-offs.
You
know.
Sometimes
we
can't
make
the
performance
of
we
want
to,
because
there's
a
compatibility
issue
and
we
have
to
either
work
around
a
compatibility,
restraint
or
look
at
introducing
new
AP
is
to
unlock
performance.
That's
being
what's
the
way
to
say
it
is
locked
behind
an
exit
locked
behind
legacy.
I
guess
some
folks
are
saying
in
the
chat
that
they
lose
performance
to
our
businesses.
A
The
other
thing
Brian
likes
to
talk
about
from
tech
empower
is
their
performance
in
that
in
these
days,
in
the
in
the
time
of
the
cloud
directly
translates
to
money
in
more
than
one
way,
whether
it's
you're
paying
for
the
CPU
time
in
memory
time
that
your
app
is
using
because
you're
on
a
service
infrastructure,
you're
on
some
type
of
VM
or
whatever
that's
billing
per
CPU
cycle.
Or
you
were
trying
to
host
as
many
apps
as
you
can
inside
this
view.
A
You're,
like
Amazon,
and
what
they
you
know,
what
their
famous
research
showed
that
for
every
millisecond
of
latency,
that
you
add
to
certain
rendering
the
drop-off
rate
on
conversion
is
estate,
is
huge
right
and
measurable.
And
so,
if
you
want
to
make
stuff
faster,
there's,
usually
a
direct
business
impact
depending
on
what
that
type
of
app
is
and.
A
Of
business
apps,
which
I
spent
a
lot
of
my
career
building,
if
you
can
make
the
the
line
work,
are
more
productive.
The
person
who
sits
there
all
day
and
is
doing
data
entry
or
using
an
app
that
you
build
in
order
to
do
their
job
is
you
can
make
them
more
productive
and
not
have
to
buy
a
new
PC
for
them,
which
is
a
capital
cost,
or
you
know
whatever
it
might
be.
A
Just
by
deploying
you
update
to
your
app
you're
going
to
generally
multiply
that
by
the
number
of
workers
you
have
so
I
know.
I've
worked
in
companies
where
we
had
rooms
full
of
data
entry
clerks
who
were
taking
paper
forms
and
entering
them
into
the
system,
and
you
make
a
change
to
improve
that
workflow,
and
you
multiply
that
by
the
30
people
sitting
in
that
room.
Doing
that
work,
that's
what
getting
productivity
out
of
computing
has
always
been
about
and
we're
still
doing
it.
This
is
no
different.
It's
all
related!
B: That's a good takeaway. Some of this is just cool to see, and it's interesting for developers. Then another layer to it is: hey, it's good for us to know that our framework is getting faster — that pushes me to upgrade; I have a business reason to get on the newer platform and all that. But then another thing is, like you're saying, thinking about this all the way through application development as well — and that's that link I shared earlier about the performance recommendations.
B
A
Absolutely. And to Dana's point, you can make the argument going the other way too: sometimes it's just much cheaper to throw CPU at something to make it faster. I totally agree. I used to joke that maybe we should have put an SSD in every copy of Visual Studio ten years ago, and that would have made everyone's Visual Studio run faster. That is sometimes the right answer, and in the cloud it can often be the right answer, but it's not free.
A
If you have the money, and that's cheaper for you, and you know that's your bottleneck and that throwing more CPU or memory at it will make it fast enough for your needs, go ahead and do it. But if you're selling a framework like we are, we don't have that advantage. We can't do that. We have to be fast.
A
We should do that. Let's talk about bottlenecks, and then we can use that to close out. So, are there any areas of the benchmarks that we're not happy with, or that we're putting more effort into relative to others? Yeah, there's a couple: the non-pipelined ones, so JSON and above. JSON, by the way, is actually really badly named, because the JSON benchmark isn't testing JSON performance. I mean, it does have a JSON payload, but it's basically a hello world.
A
Well, it's a JSON payload, very straightforward, but the big difference from plaintext is that plaintext is pipelined, meaning the client sends in 16 requests at a time, lets the server process them, and then 16 responses get sent back, whereas JSON is not pipelined. Actually, let me show you something. That's the wrong one... there, the JSON one, yeah. So one of those graphs is the plaintext test without pipelining, and the other one is the JSON test. You can see how close they are together.
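As a rough sketch of what that means (a hypothetical snippet, not the actual benchmark client), a pipelining client writes a whole batch of requests on one connection before reading any responses:

```python
def build_pipelined_batch(path: str, host: str, depth: int = 16) -> bytes:
    """Concatenate `depth` HTTP/1.1 requests to be sent in a single write.

    A pipelining load generator sends all of these back-to-back and then
    reads `depth` responses, amortizing per-round-trip overhead that a
    non-pipelined test pays on every single request.
    """
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()
    return request * depth

batch = build_pipelined_batch("/plaintext", "tfb-server", 16)
print(batch.count(b"GET /plaintext"))  # 16: sixteen requests in one write
```

The host name and pipeline depth here are illustrative; the point is just that one socket write carries many requests.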
A
You don't lose much in that benchmark by doing the JSON test with the JSON serialization, and so a lot of folks see the JSON number and think, oh, JSON is really, really slow compared to other stuff.
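To see why the serialization itself is cheap, here's a minimal sketch (in Python for brevity; the real benchmark implementations are C#) of the tiny payload the JSON test produces per request:

```python
import json

def render_json_body() -> bytes:
    # The TechEmpower JSON test returns this tiny payload on every request;
    # serializing it costs very little compared to per-request networking,
    # which is why the JSON numbers mostly reflect non-pipelined request
    # processing rather than JSON performance.
    return json.dumps({"message": "Hello, World!"}).encode("utf-8")

print(render_json_body())  # b'{"message": "Hello, World!"}'
```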
So the one thing that we do need to work on is our non-pipelined performance: as soon as we go from the pipelined test, which is plaintext, to JSON, there's a huge drop-off for .NET currently relative to other frameworks. It's an area we just need to do some work on. Sebastien noted before the Linux
A
only transport, I think. Did you bring it up? You did: the Red Hat transport, which is not using the .NET sockets layer, and it gets fantastic performance; it's much better on the non-pipelined tests than ours. That's something we're going to be looking at in the next version of .NET: what can we learn from that, and can we apply it to the generic sockets layer in .NET, which Kestrel then sits on top of? That's not how they've done it, though; they've literally written a custom Kestrel transport, which, you know, you're able to do.
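Conceptually (a toy model, not Kestrel's actual transport API), the server core depends only on a factory for binding endpoints, so a custom transport that talks to the OS networking stack directly can be swapped in underneath an unchanged HTTP layer:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Listener:
    endpoint: str
    kind: str  # records which transport produced this listener

# A "transport" here is just a factory the server calls to bind an endpoint.
Transport = Callable[[str], Listener]

def managed_sockets(endpoint: str) -> Listener:
    return Listener(endpoint, "managed-sockets")  # stand-in for the default path

def native_epoll(endpoint: str) -> Listener:
    # Stand-in for a custom transport (own threads, direct syscalls into the
    # Linux networking stack), like the Red Hat transport discussed above.
    return Listener(endpoint, "native-epoll")

def bind(transport: Transport, endpoint: str) -> Listener:
    # The layer above never sees which transport it got.
    return transport(endpoint)

print(bind(native_epoll, ":5000").kind)  # native-epoll
```

All names here (`managed_sockets`, `native_epoll`, `bind`) are invented for illustration; the design point is the pluggable seam, not the specifics.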
A
We have a libuv transport, which is still available but not used by default, and they wrote a Linux transport that uses its own threads, doesn't use the thread pool, and then P/Invokes directly into the Linux networking stack, and it gets much, much better performance. So that's something we really want to look at. The other one we're going to continue looking at is Fortunes. Fortunes is the chunkiest test, the biggest one; that's the one that does HTML rendering
A
after reading from a database. We made really good progress on that in 1.x and 2.x, and we got up into the top ten. But since then a bunch of the other frameworks have started doing work on their database drivers to support pipelining at the database layer, with no change for the application developer: the app does what it normally does, it just issues its database requests, and then the database driver knows enough about the connection management
A
to batch them up and get drastic improvements in throughput. So we're looking to do that at the ADO.NET layer, starting out with the Postgres driver, because that's what TechEmpower usually runs on on Linux, to see if we can get similar gains. We'd very much like to get our Fortunes benchmark back up, hopefully around the top five on TechEmpower. So that's the other area we're really going to look at next.
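The idea can be sketched with a toy model (this is not the real Npgsql or ADO.NET API): the driver queues queries and flushes them in one network round trip, so N queries cost roughly one round trip instead of N:

```python
class BatchingDriver:
    """Toy model of driver-level query batching/pipelining."""

    def __init__(self) -> None:
        self._pending: list[str] = []
        self.round_trips = 0

    def queue(self, sql: str) -> None:
        # Application code just issues queries as it normally would.
        self._pending.append(sql)

    def flush(self) -> int:
        # One write plus one read covers the whole batch.
        if not self._pending:
            return 0
        self.round_trips += 1
        executed = len(self._pending)
        self._pending.clear()
        return executed

driver = BatchingDriver()
for i in range(20):
    driver.queue(f"SELECT * FROM fortune WHERE id = {i}")
print(driver.flush(), driver.round_trips)  # 20 1: twenty queries, one round trip
```

The table and query shapes are made up; the takeaway is that the round-trip count, not the query count, drives throughput on this benchmark.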
What are you showing there? So, we have one we're aware of here.
C
So here we have a big dip at some point, and we didn't know why. Well, now we know why: at that point we tried to install a new network card, but that didn't work, so we put the old network card back in, back to the 10-gigabit network, and once we put it back, we saw a huge dip in performance. We had never been able to explain that until about a week ago, when we got to this.
C
So what I discovered is that the client load machine had a kernel different from the other ones. And why is that? Because we were actually running Ubuntu Desktop on this machine, which was not only installing new kernels automatically but also running a GPU driver and Xorg, and that was taking too much CPU. So we were bottlenecked on CPU by the client machine. That's why our machine could not get as fast as TechEmpower is showing on their benchmarks, and that's also why there used to be a huge dip.
B
A
Which meant it was three things: we changed some network cards, and we put the old ones back in when it didn't work; and, unbeknownst to us, a security patch got delivered to the client machine basically that same day, without us knowing, and the only reason that happened was because we were running a version of the OS that we didn't know about.
C
And then, okay, we got there, and then we installed the actual 40-gigabit card on the Linux machine, and we didn't get faster. For two days it was still not faster, and we were still CPU-bound on the client. Then I looked at the Lua script we were using for pipelining; it happens that it could have been optimized, and I looked at the one from TechEmpower, where it had been changed to optimize it as well. So we optimized it, and at the same time...
B
C
A
We used to be limited by 10 gig, and we thought that was the limit. Then it turned out it was CPU, and that turned out to be because of, you know, the desktop client OS and a bunch of other stuff, plus the CVE patch. So then we got the 40-gig card in the Linux client, we got a 40-gig card in the Linux server, and we fixed the client OS and the GPU stuff and all that, and so now we've gone from about six million requests per second before all this to 8.4.
A
C
B
A
Because TechEmpower's environment is lagging ours slightly. We provided the hardware for TechEmpower's environment, and we have a mirror of it, but we knew that the network cards were a limitation, and we've been working all this time to figure out what we had to do to get the 40-gig cards working. We've only just now got that working, as we just said, so they don't have 40-gig cards yet; they still only have the...
C
A
C
millions: seven million for ulib, and eight point something for actix, and this one, we have 8.3 for ASP.NET. And actually at this level we are still limited by packets per second, so if we had two clients, ulib would likely be at 16 million per second on the same server. So, Brian, maybe ulib is really at 16 million and not 7 million; the 7 million is the limit at 40 gig. So this is really the limit, and it's the same thing for ASP.NET, on Linux and on Windows.
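A back-of-the-envelope model of that point (illustrative numbers only): the benchmark can only observe the smaller of the server's capacity and the total load the clients can generate, so adding a client raises the measurable ceiling:

```python
def measured_rps(server_capacity: int, clients: int, per_client_load: int) -> int:
    # The observed requests/sec is capped by whichever side saturates first.
    return min(server_capacity, clients * per_client_load)

SERVER = 16_000_000  # hypothetical true server capacity
CLIENT = 8_000_000   # hypothetical per-client packets-per-second budget

print(measured_rps(SERVER, 1, CLIENT))  # 8000000: client-bound measurement
print(measured_rps(SERVER, 2, CLIENT))  # 16000000: now the server is the limit
```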
A
There's a bunch of stuff for us to go off and investigate. And welcome, Brian: the Lua script tuning isn't in a PR because we took yours. We tried to do our own tuning, then realized you'd already done it, so we took yours, and now we're running the same one as you, yeah. So this set of numbers here is just Sebastien going off and doing a bunch of investigations with some manual runs to see
A
what we can expect when we start to see these new network cards rolled out across our infrastructure, and also TechEmpower's, with regard to those frameworks that are currently clustered at the top, all doing about seven million requests per second. We always expected that a few of them, once we take the shackles off the network, would start to stretch ahead again, and it turns out that the next limit we run into, even with a 40-gig card, is likely how much load the client can generate.
A
It's about packets per second. The 40-gig cards are different cards from the 10-gig cards; they have different chips on them and different optimizations, so it's not just about getting more bandwidth: they're often more powerful cards and can do more packets per second. So things are going to get faster, and we will see new baselines very, very soon, now that we've reset our environment. Basically, we've got the security patch installed on all the Linux servers.
A
Actually, they're going to have a 50-gig card in them, but let's not go down that rabbit hole; it will run in 10-gig mode, and that will then allow us to free up ports on our switch, and then we're going to be able to pull our switch out to isolate it a little better than it is right now. We're effectively running multiple VLANs through the switch right now, which doesn't seem...
B
A
So we're going to be able to do that once we get the new ARM servers, and I'm very interested to see what performance the new ARM servers get versus the Dell Intel servers we have, which are about 18 to 24 months old, because those new ARM servers are pretty powerful: they're like 30-core or something really large. Anyway, it's a brand new company from Silicon Valley that is building these ARM servers, and we're getting a few of those to be able to test .NET Core on ARM.
A
We did do Raspberry Pi a year or a year and a half ago, and it was like 40,000 requests per second or something on a Raspberry Pi. We should do it again at some point. So that's pretty much everything that's coming, and then there will be other stuff that we'll do for 5.0 which we haven't thought about yet. But that's probably a good time to end it, I think.
B
A
Okay, yeah. Well, thanks, everybody. If you have any other questions, just reach out to Sebastien or myself, or both of us, on Twitter; we're always happy to talk about the latest stuff going on in the perf lab and TechEmpower, and we love that so many people love what we're doing on it. Keep watching the Power BI dashboard; you can see what we're doing from week to week, and if you think you've seen a regression that we haven't, let us know. Log a bug!