From YouTube: ASP.NET Community Standup - March 5th, 2019 - David Fowler on Perf, ASP.NET Core 3.0 and More!
Description
Join members from the ASP.NET teams for our community standup covering great community contributions for ASP.NET, ASP.NET Core, and more.
Community Standup links for this week: https://www.one-tab.com/page/r7pQHzHHRl-R3fiaAq3QoA
Weekly dose of awesomeness here: he's going through A through Z, so we're up to I, which is hosting of ASP.NET Core web apps in IIS. Again, one thing I love with these is that he pulls all these random things together: there's how do you do in-process, how do you set up IIS with a VM, and all that, so it's all pulled together here. Some things here explain some basic, important things: the difference between in-process and out-of-process, and setting up for actually installing.
Turning on the features for IIS — it's easy to get used to just doing IIS Express. I recently had to go through and set up with IIS, and as you know, full IIS asks a bit more of you, so it's kind of good to review what's in here. Another thing that I find, as I'm teaching people and going through this in classes and stuff, is some of the Visual Studio features — so obviously, like the IIS launch profile he's explaining here and the difference; this is another good one.
Do you have any opinion on whether we should change the default hosting option in Visual Studio for ASP.NET Core 3.0 apps to be Kestrel directly, so it launches your app as a console app — effectively the server — and launches the browser as well, and all that type of stuff? Do you have an opinion on that? Over to you, Jon — thank you very much. Yeah.
No, that's very interesting. Personally, I kind of would like the Kestrel approach, just to have it be the same everywhere, but yeah — please sound off in the chat. So I've actually broken today up into a few little sections: one is on hosting, one is on Blazor, and then some general tips-and-tricks stuff towards the end.
Right, so first of all, Mr. Shayne Boyer, writing about the Docker extension for VS Code and .NET Core. This is cool: it's an update for the Docker extension in VS Code, with some nice things like, for instance, generating your Dockerfile, and integrating with Azure Container Registry for pushing. A nice quick overview here, with animated GIFs and stuff, which is exciting, and showing actually pushing out to, you know, App Service on Linux and so on. So very cool — the Visual Studio Code extension.
And this is very cool: he links over to a full project on GitHub, and also some gists for things like the Dockerfile. So yeah, very lightweight — setting up with CircleCI to build and deploy it out. As he points out here, a great use case is just: hey, you want to share something with a friend, for instance, or demonstrate something. I've run into this sort of thing too, like, say, in a dev team.
It's a generally supported thing — as they point out here: Chrome, Opera, Yandex. You'll see this a lot in mobile browsers: if you're on a metered connection and the Save-Data header is present, then you can serve lighter-weight data. So the example here is using middleware and showing, you know, lower-quality graphics if you're on a metered connection. Very cool — and it's a kind of simple example.
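To make that concrete, here's a minimal sketch of Save-Data-aware middleware — my own illustration in the spirit of the post, not its exact code; the class name and the Items flag are assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Detect the Save-Data request header and flag the request so downstream
// components can choose lighter-weight responses (e.g. low-res images).
public class SaveDataMiddleware
{
    private readonly RequestDelegate _next;

    public SaveDataMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Browsers on metered connections send "Save-Data: on".
        bool saveData = string.Equals(
            context.Request.Headers["Save-Data"], "on", StringComparison.OrdinalIgnoreCase);

        // Stash a flag for later middleware/endpoints to inspect.
        context.Items["SaveData"] = saveData;

        await _next(context);
    }
}
```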
System.Drawing has traits that make it not particularly good for use in server applications, like taking system-wide locks and so on — and that is documented, so it's up to you. Now, the version that works on .NET Core on Windows, I believe, still uses all the same GDI goop, so be very cautious there; just be warned. The version of System.Drawing that works on Linux, I think, uses some Mono stuff. There's also SkiaSharp, which I believe is an alternative.
System.Drawing is not released as part of the framework — it's in the Windows Compatibility Pack, and so it has a different support policy. So if it works for your app, wonderful — eyes wide open; just a couple of caveat-emptor things to be aware of when using System.Drawing. And I would take this opportunity to plug the wonderful alternative shown here — that's just plugging in middleware, so cool. So there you go.
Yeah, so this is interesting: Vakhtang G., showing here a virtual grid view. The nice thing with virtualization, of course, is that you're only rendering what is actually in the viewport, so you're optimizing for that. That works well for things where you've got, you know, many thousands of rows, or that kind of thing. I'm sure we've all played with these — yeah, man, I remember going back and writing these.
Cool — so Chris Sainty, writing about building Blazor apps using Azure Pipelines. This is actually building out a build process for a Blazor application: he's using a free Azure Pipeline, and there were a few things he ran into. He's actually using the Windows build agent — he said he ran into trouble using the Ubuntu version, so he switched it over to the Windows image.
So here we've got, you know, just using this image, the build script, and configuring the artifacts. That's something I've run into recently as well: getting your artifact publishing set up is very important. And all the green check boxes — so that's wonderful. Okay, one of my favorite posts over this past week, from Dustin — this was on the front page of Hacker News. This is tips and tricks for ASP.NET Core applications. I love these because it's a roll-up of opinions: these are things he loves to do, and you may disagree — there's actually a great discussion in the comments as well. But some things here: turning on logging and getting the logging set up exactly how you want it, so things like enriching your log entries, and then, going down, customizing things like your output. There are config classes — this is great too, for strongly typed config. Then, going down, there are some things like conditional configuration extension methods: here he creates a When extension method, and then he's able to do this.
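As a sketch of what a conditional extension like that might look like — my reconstruction, not the post's exact code; the names here are assumptions:

```csharp
using System;
using Microsoft.AspNetCore.Builder;

// A conditional configuration helper: apply a pipeline configuration action
// only when the given condition holds, keeping Startup code fluent.
public static class ConditionalBuilderExtensions
{
    public static IApplicationBuilder When(
        this IApplicationBuilder app, bool condition, Action<IApplicationBuilder> configure)
    {
        if (condition)
        {
            configure(app);
        }
        return app;
    }
}

// Usage: app.When(env.IsDevelopment(), a => a.UseDeveloperExceptionPage());
```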
So what's nice to hear is: what tips and tricks do you have? There's a lot down in the comments. One cool one that I thought was neat: we get a call-out here from Muhammad saying, hey, a lot of this stuff is in .NET Boxed. I've been doing these talks lately, and I love this sort of thing too, where it's like: hey, let's store some opinions somewhere and talk about them. One final one I want to point out is from this guy named Scott.
I get other stuff, which would be amazing, but I don't — I want to get it on GitHub and have it structured well, but I go and I can't be bothered, or I've got a meeting now, or whatever it might be. Or I hit Scott's thing, which is: ah, crap, I did it the wrong way and now I have to start again — or I took the good repo name and now I have to delete it.
Coming to those — you can see on the left here we have the black line at the top; that's the platform... yes, there we go, this is the platform test — and you can see it has even ticked up a little bit in 3.0. But the green line — if you can just move the cursor so we can see the full history... there we go — you can see where it went from 2.2 to 3.0. That was about — and I'm trying to use your mouse now — about...
...I think that's when we took the runtime change. What we're trying to do is make those lines as close together as possible: in an ideal world there would be no cost for running on top of Kestrel, and we're making really, really good progress. There is a dip there — a very visible one.
...once all of them are received, then I will send 16 responses, because that way you know you're going to get the most effective packet-size usage. A web server can't really do that unless you have configuration knobs and configurable buffers and all those types of things — and, to be fair, some of them do.
Object allocations: before, we'd allocate an HttpContext per request, and I made a change in 3.0 to reuse those objects per HTTP/1.1 connection and per HTTP/2 stream. For one HTTP/1.1 connection, requests can't overlap — you have one request at a time — so now we just reuse that object over and over, and that was a huge, huge improvement for reducing allocations, and obviously it helps throughput.
It isn't just the one allocation that makes it okay. So, for example, this was a dotMemory profile before the changes: we allocated a DefaultHttpContext, a DefaultHttpResponse, a request, a request-services feature for RequestServices, and a form feature for forms. This request didn't have any form parsing, so it's kind of weird to see a form feature allocated here. It's also showing a QueueUserWorkItem — the user work item callback, underneath DefaultHttpContext — which I'll talk about in a little bit.
So we had, basically, a profile of what the allocations were like in 2.2, before these changes. At the entry point we had contexts, a bunch of strings for headers, the request and the response objects, a form feature, a bunch of callbacks — all of it showing up. So we went to work on this profile to try and get down to just strings — just the headers, basically.
Why allocate a form feature if this is a GET request, right? That kind of thing. This change reused the HttpContext on the same connection. So before, we had 81 megs — I think, for however many requests I did — for the same workload, 81 megabytes of allocations. And this is dotTrace's timeline view, because dotMemory is actually broken on .NET Core 3 for now — oops, yeah, stuff changed — but it's being fixed, I've been told.
So you can see here we had fifteen megs of contexts, and megs of requests and responses, and the goal was to get rid of that completely. If you look at the new profile, it went from eighty-one megs to forty-six megs of allocations — nice. And what we had left were strings for headers, the request-services feature, and the form feature. So that was part one: get rid of the context allocations.
The second part was to lazily allocate more things — so, the form feature. And the funny thing is, a bunch of these required breaking changes — but breaking changes that are cool, because they don't affect the masses of people; it's only when you were getting really deep that you would actually see a breaking change. So this change made the form feature lazy: before, we used to allocate the form feature up front to set options, and now we pass those options all the way down to the form feature.
...instead of allocating a form feature on every single request. So we went from 16 megabytes of allocations — you can see this picture: 16 megs of allocations for the form and the request-services features — down to... wait, this is the new one, sorry. After this change I think we only had two allocations, and one was the request-services feature. ("You're moving really fast!" Oh, sorry, yeah.) So here's the change after we did the form work — though I don't think it was the same number of requests.
This benchmark server is internal, and it's what we use to verify our changes: I'll make a change in a PR, get the bits from that PR, upload them to this server, and it will download those bits, run the perf test, and verify the changes. So I can verify perf changes on our CI and paste the results inside the pull request to show the effect.
Yeah, so the next change to look at: today, when you want to get per-request services, there's a feature on the HttpContext called RequestServices. We used to have a piece of middleware in hosting that would pre-populate that field eagerly — so even if you never asked for request services, it was always there, allocated. The goal here was to remove that completely.
So we figured out a way to make it as lazy as possible, which also reduced allocations even more. Those services are mainly used in real applications, but for the benchmarks it makes a huge difference: I believe we went from 24 megs of allocations in this particular benchmark — and about 5 megs of that was all the request-services feature — down to 19 megabytes. So a small, small change, and now we were...
...at a place where the only thing being allocated every request, in the default case, is strings. If you look at this profile, strings are the vast majority of allocations — no more stragglers or anything else. We do still have these byte-array allocations, and I'll talk about how we got rid of those later on. Alright: so far we got the per-request allocations down; now we're going to look at more esoteric changes. So — WebSocket improvements.
There's a small change in Kestrel for upgraded requests — an upgraded request meaning a request that is passed through to the actual underlying connection — so that we just read from the actual connection pipe instead of copying into a request pipe. That saves a bunch of memory as well. Then there's a new interface in the BCL that, for the most part, you will probably never use. Does anyone use the thread pool API — talking to the audience that's watching online? There's an API called QueueUserWorkItem, and you use this API...
...if you want to, you know, offload some work to the thread pool. The issue with the API — well, there's no issue, but the issue that we had — was that every call to QueueUserWorkItem would actually allocate a work item. People don't think about that, normally, because there's an overload that takes a state object. So normally you call QueueUserWorkItem...
...you pass in a delegate to call back, plus some state, and the state object is there to avoid allocating a closure when you're passing your delegate. In the new model, you actually implement an IThreadPoolWorkItem, you call UnsafeQueueUserWorkItem, and it will run your Execute method as the callback.
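A minimal sketch of the two patterns just described — the state-object overload versus the .NET Core 3.0 IThreadPoolWorkItem interface; the work-item class here is my own example:

```csharp
using System.Threading;

// Implementing IThreadPoolWorkItem (new in .NET Core 3.0) lets you queue a
// reusable object: the thread pool calls Execute() directly as the callback,
// so dispatching allocates nothing per call.
public sealed class RequestDispatchWorkItem : IThreadPoolWorkItem
{
    public void Execute()
    {
        // Do the actual unit of work here.
    }
}

// Classic style: pass state explicitly so the lambda captures nothing
// (no closure), though the pool still allocates an internal work item:
//   ThreadPool.QueueUserWorkItem(s => Handle(s), state);   // Handle is hypothetical
//
// New style: allocate once, reuse on every dispatch:
//   var item = new RequestDispatchWorkItem();
//   ThreadPool.UnsafeQueueUserWorkItem(item, preferLocal: false);
```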
In this case we don't allocate per request anymore to dispatch requests to the thread pool. And there's one big change that we made in the scheduler in Kestrel — Kestrel has a scheduler for I/O — where we basically batch work items together into one giant work item and then run that on the thread pool, bam. That's a change we had planned for a while, and the new interface made it possible to implement.
For the connections that the benchmark is using: after the connection is established, we can run as many requests as we like — infinite requests — on that connection, and it literally allocates no memory now. Some of those things are shared buffers, like you said, and if you add more connections then we might have to grow some of those, and that obviously is an allocation. So it's just important to understand the context when we say zero allocations.
A lot. So the IThreadPoolWorkItem interface came from me looking at our allocation profile and going: why is the QueueUserWorkItem callback — like, the number right underneath the HttpContext — so super high? That's kind of weird... oh, we're allocating every time we read bytes from the network.
So if you were dispatching work to the thread pool, you could today remove the allocation by implementing your own IThreadPoolWorkItem, like I said, and you can reuse those items — that's the benefit of this interface. Oh, and this is one of Ben Adams's crazy changes. I sent Ben a performance profile from one of our runs, and it turned out that for a hundred-byte response we were zeroing out 400 bytes of stack space. So we have this method that we saw in question...
...we have an encoding cache for that which assumes a bunch of known headers, and we copy bytes from those known headers into a buffer really, really fast. We even have a specific order in which we copy, assuming that some headers are more common than others — so we have most-common headers, less-common headers, and least-common headers, in different kinds of batches. And what we did was generate a giant switch statement.
We ended up making it a switch in the very end, because we didn't like going through... yes — instead of allocating a bunch of variables per header, we now have a bunch of variables set once, and we just assign those variables for each header to be written, instead of having giant locals on the stack. So those are fun changes.
With Task.Run you have to allocate a task — that's one downside, right; if you don't care about awaiting the result inline, there's no point in doing Task.Run. Task.Run also behaves differently from QueueUserWorkItem in terms of where the work item runs — okay, so this is a little bit deep. There are two kinds of queues in the thread pool: a global queue and local queues. The local queues are per thread, so when you call, like, Task.Run, or a scheduler continuation runs, those are preferred over the global queue. So...
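To illustrate the knob that exposes this choice in .NET Core 3.0 — my example, not code from the talk:

```csharp
using System.Threading;
using System.Threading.Tasks;

public static class QueueChoiceDemo
{
    public static void Run(IThreadPoolWorkItem item)
    {
        // Allocates a Task; from a pool thread the work prefers the local queue.
        Task.Run(() => { /* work */ });

        // No allocation with a reusable work item, and the queue is explicit:
        ThreadPool.UnsafeQueueUserWorkItem(item, preferLocal: true);  // this thread's local queue
        ThreadPool.UnsafeQueueUserWorkItem(item, preferLocal: false); // global queue
    }
}
```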
Alright, some more fun stuff: some new features in .NET Core. There's generic host everywhere. We started this work in 2.1, when we added this thing called the generic host. We'd basically figured out that if we want more app types to use our cross-cutting concerns — like DI and configuration and logging — we need a generic host that works across multiple app types. We had the web host, and we figured we should make the web host deprecated and have the generic host be the main thing.
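In 3.0 that pattern looks roughly like this — a minimal sketch of the generic host with the web workload plugged in; Startup here is the usual assumed startup class:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        // The generic host owns DI, configuration, and logging for any app type;
        // the web server is wired in as one workload among many.
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
            .Build()
            .Run();
}
```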
...sharing it and pushing it everywhere, with other app types taking part. I think there's even a sample app — that doesn't exist anywhere except on Glenn's computer, yeah — that runs WinForms in the context of the generic host, which means every new Windows Forms form that gets created is DI-activated from the container, yeah. And it was actually not hard to do at all — super easy. It was actually super easy. So there you go.
So, more changes: we have some reactions to C# 8 features. Our DI container now supports IAsyncDisposable. It's actually a pretty interesting thing to support: if you have a service that implements IAsyncDisposable, you want it to be disposed properly, asynchronously. Since our template does the async dispose for you, you don't see it for the most part; but if you control the host yourself and you call the synchronous Dispose, it'll say: I can't dispose an async-disposable thing. Cool.
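For flavor, here's a minimal sketch of an async-disposable service of the kind being discussed — the type name is made up:

```csharp
using System;
using System.Threading.Tasks;

// A service holding an async resource; the 3.0 DI container can dispose it
// via DisposeAsync instead of blocking in a synchronous Dispose().
public sealed class QueueListener : IAsyncDisposable
{
    public async ValueTask DisposeAsync()
    {
        // Flush/close async resources here.
        await Task.CompletedTask;
    }
}
```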
The other change, I think, is IAsyncEnumerable. We had support in SignalR — SignalR supported streaming from client to server and server to client. In 3.0 we just added support for IAsyncEnumerable, so in your hub you can now return IAsyncEnumerable and use the beautiful yield syntax in C# to just return things asynchronously.
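Something along these lines — a sketch of the 3.0 hub-streaming shape; the hub and method names are mine:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class CounterHub : Hub
{
    // Server-to-client streaming by returning IAsyncEnumerable from a hub
    // method, written with C# 8's async-iterator yield syntax.
    public async IAsyncEnumerable<int> Counter(int count, int delayMs)
    {
        for (var i = 0; i < count; i++)
        {
            yield return i;             // each value is pushed to the client
            await Task.Delay(delayMs);  // simulate work between items
        }
    }
}
```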
Oh, it's super clean — way fewer delegates engaged. Can I show JSON? We have new JSON support across the entire stack: there's a reader, a writer, and a DOM — the DOM being like JObject, a loosely typed object model for JSON. Honestly, it just got checked in to preview 4, which is going to be coming out... who knows when — sometime after preview 3.
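For flavor, the low-level reader side looks roughly like this — my minimal Utf8JsonReader example, not a demo from the show:

```csharp
using System;
using System.Text;
using System.Text.Json;

public static class JsonReaderDemo
{
    public static void Read()
    {
        // Utf8JsonReader parses UTF-8 bytes directly -- no transcoding to a
        // char buffer and no intermediate strings until you ask for them.
        ReadOnlySpan<byte> json = Encoding.UTF8.GetBytes("{\"name\":\"David\",\"age\":32}");
        var reader = new Utf8JsonReader(json);

        while (reader.Read())
        {
            if (reader.TokenType == JsonTokenType.PropertyName)
            {
                Console.WriteLine(reader.GetString()); // "name", then "age"
            }
        }
    }
}
```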
Client-server streaming, pipelines everywhere... so, in SignalR — should we demo? Do we have time? Yeah — I mean, it's loose, but we've got to listen to the audience, so that counts, right? Sure, okay. So we had a blog post about, I think, preview 2 of ASP.NET Core, and we showed a demo of streaming, and I'm going to show that demo now — one of our devs wrote this.
Yeah, so before, you could do that yourself, right — you could call a method over and over with different chunks. So for this application — I believe we blogged a bit about it, and a link to the source will be coming up soon; we've got to go through the view and make it all beautiful. So here's my stream — here's my stream, and I can watch the stream over here.
There's a hub called StreamHub — and that's not left over, it's a regular hub. It has a class called StreamManager, which is a singleton, so I can list all the streams — that's the UI you see, where the streams are all listed and where I can watch a stream. It's all from this list of streams: there's a callback to watch a stream that returns you a ChannelReader — and in the future this could be an IAsyncEnumerable — and then there's start-stream.
So we basically start a stream: we call RunStreamAsync on the stream manager, then we just tell everyone, hey, your stream is there, and then we await the stream task. What RunStreamAsync does internally — ignore this crazy logic; this is me trying to prevent race conditions — is read the incoming stream out of the channel. In the future this loop will, I think, be an await foreach — this was foreach-async before it existed, like, in your dreams.
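The reading side of that loop, roughly — a sketch of the channel-consumption pattern being described; the method and parameter names are mine:

```csharp
using System.Threading.Channels;
using System.Threading.Tasks;

public static class StreamRelay
{
    // Drain items from an incoming channel and forward them to watchers.
    public static async Task PumpAsync(
        ChannelReader<string> incoming, ChannelWriter<string> watchers)
    {
        // WaitToReadAsync/TryRead is the pre-await-foreach consumption pattern;
        // with C# 8 this can become:
        //   await foreach (var item in incoming.ReadAllAsync()) { ... }
        while (await incoming.WaitToReadAsync())
        {
            while (incoming.TryRead(out var item))
            {
                await watchers.WriteAsync(item);
            }
        }

        watchers.Complete(); // signal end-of-stream to consumers
    }
}
```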
So this is the channel API right now... hopefully in preview 3 I'll just change it to be an await foreach — and that is the streaming feature in SignalR. Alright, what else do I have... okay: pipelines — pipelines in the server. We've been on this journey — or I've been on this journey...
...and in 3.0 we've finally made the big first step: adding pipelines to the HttpContext — to the HTTP request and response — natively. The whole intent of this change is that Kestrel has a buffer pool today, and we have pools all the way up the stack.
So the issue was that we were reading into buffers ourselves in Kestrel, and then some middleware wants to read the form, or parse JSON, and we had to copy the bytes from Kestrel into a different pool to then parse them, right. So the intent was: could we somehow expose the buffers directly from Kestrel up the stack? So that, literally, if you were parsing the form body, you would get the data as it came from the network — you just parse it right off the wire.
There's no copying — no copying, right. And same for output: today we would allocate memory in a different pool and then copy it into Kestrel's pool. So could we somehow expose Kestrel's underlying memory, write right into it, and just say: okay, go — flush? The pipe APIs were kind of born out of that idea, and now we're exposing them for the first time in 3.0, on the HTTP request and response directly. So, what that looks like — I have this sample here.
There's a body pipe, and this is the PipeReader — unlike Stream, you end up with just one side of the equation. Today, streams are both read and write, and you have to check the booleans. So, for example, if I have a body stream, the body stream for a request is only readable, but I still have a Write method on it, right — so I have to check whether I can write before I write.
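Reading the request body with the new 3.0 pipe surface looks roughly like this — a minimal sketch, not the sample from the show:

```csharp
using System.IO.Pipelines;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public static class BodyReaderDemo
{
    // Consume the request body straight out of the server's buffers -- no copy
    // into an intermediate stream. HttpRequest.BodyReader is new in 3.0.
    public static async Task<long> CountBodyBytesAsync(HttpContext context)
    {
        long total = 0;
        while (true)
        {
            ReadResult result = await context.Request.BodyReader.ReadAsync();
            total += result.Buffer.Length;

            // Tell the pipe we consumed everything we were handed.
            context.Request.BodyReader.AdvanceTo(result.Buffer.End);

            if (result.IsCompleted)
            {
                break;
            }
        }
        return total;
    }
}
```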
Nothing super hard, but I won't get into it. One of the interesting design challenges that we had to figure out in 3.0, with the pipelines, was: you have this type — you have HttpContext, with the request body and the response body — and people end up doing things like swapping in a MemoryStream, right: they want to capture the entire response, they do this, and then things happen, and it all works, right. Yeah.
If you set the body to a stream, we'll give you a pipe wrapper over that stream for the body — which is pretty cool — and vice versa. So everything everyone was doing before still works: if our code uses pipes in your craziest dreams, it'll still work. I didn't articulate that well, but I think that was the bare minimum — we had to make sure it kept working well for everyone.
Alright, so this is dotTrace, and this is the timeline view, because this is probably the only thing I could use to get allocations in a nice UI for now. I have two examples here. It's the form-reading example, with me posting a form that has my name, David, and my age — let's say 32, thirty-two years old. One run is using preview 3 before we pipeline-ified the form reader, and one after we pipeline-ified it.
So, for "before", I open the view and you can see what got allocated. Before, the code did what you would typically do when you were going to read things from a stream: you would use a StreamReader, you pass in the stream, you call ReadLine — or read whatever — and you get data. And we actually did use pooled memory: we allocated a bunch of pooled memory from a buffer pool, for characters and for bytes, and we would, you know, do that conversion.
So I think this was 10,000 requests of a very small form, and we ended up with 96 megs of allocations, and for the most part it was a bunch of char arrays and byte arrays, and then the strings that result from those for the form. Just so you know what I actually sent — I actually sent something like this: name=David&age=32. That was the entire payload. Super small, turns out.
It turns out, if you use a StreamReader with the default settings, it allocates a 4K buffer just in case — 4K chars and 4K bytes — to do transcoding and stuff. So let's say you get data from the wire that is UTF-32 or some other random encoding: we would read the encoding, turn your bytes into characters, parse those characters, and then turn them into strings — so we had a kind of double allocation.
So let me show you the profile after that. I mean, the interesting thing is that it actually does work for a lot of encodings, because what we do is: we only need to get the delimiter in the right encoding and then scan for that byte. And we take advantage of the vectorized scanning in the framework — the framework uses intrinsics to scan massive runs of bytes at a time.
...IndexOf and Substring — all those things are vectorized. So it went from 96 megabytes to 17 megabytes. So, no, this was the biggest one, I know. We still have the strings, and we can't get rid of those, because the form interface is a dictionary of string to string — but all the intermediate data is now gone.
This is like seeing it happen for real. Before, it was all: okay, in theory, if we do all these five things and they all connect like this, this will happen — and now we're, like, super close. I tried doing a JSON example, but the JSON serializer is not ready — not ready yet. It's funny: I had this giant list of Pokémon...
...as the payload, and it was, like, 90K of JSON, and the test was to show how people do it today, where we buffer the entire payload in memory. I had a discussion with Tom Dennis on Twitter about async deserialization for JSON — what parsers tend to do today. If you look at the source code for a bunch of parsers — well, all the ones I've seen — when you call ReadAsync on the parser, passing the stream...
What our parser does in the framework — the low level: the parser is built on a reader, and the reader lets you read, you know, strings and numbers from the byte array, and it returns you the current state of where it got to. So it'll say: I parsed this much, I used this many bytes, I'm at this position in the sequence. And then your goal is to take that state object and pass it back in with a bit more data...
...calling it over and over. So the serializer stores a stack of objects pointing to where it last got to, and it basically calls that thing in a loop, saying: okay, I have this many objects, I'm this far within this object — resume. So with pipelines you're literally parsing the bytes as they come in from the network, and you're making progress without having to buffer the entire thing all at once. So when...
I imagine that retrofitting async into an existing serializer... it probably would be cheaper to do it that way, yes, than to build an entirely new parser that has this polarity inverted, effectively. Okay, so it makes sense that an existing XML or JSON serializer, when it wants to support async, would go: well, I guess I'm going all-async all the way through on my existing parsing logic, rather than: let me build a new serializer. Okay.
I mean, it does — except that... well, App Service is interesting, because it's a PaaS: there are other layers that you don't control, and so you can make your stuff as efficient as you like. But if you're running in-process hosting, it doesn't matter — in-process, the ASP.NET Core part doesn't really matter, because you're not using Kestrel; if you're using out-of-process, you are using Kestrel. So your process is as efficient as it can be.
So either it consumed the full token, yes, or it didn't. So let's say you had a string that was, like, 10K, and you'd received 5K of the string: bytes-consumed would be zero, right — it wouldn't parse half and, like, resume; it would say: I didn't parse anything. And then you give it the data again — the 5K plus whatever came after — but...
There are four and a half megs of allocations, and most of it is strings, plus a bit of user code. If you look at this graph, it has the small object heap and the large object heap, and on the small object heap pretty much all we have are strings. And the strings, at this point — if you look at the costs to see where they're coming from, they're all coming from headers: this is a header key, this is a header value. This is because our headers are represented as string values. Let's say...
...I'll tell you what we are looking at: we looked at changing StringValues to support storing bytes in the backing store. Today the backing store is string or string array, because headers can be multi-valued. For the most part the header keys are reused completely, because they're known keys, like Content-Length — those are all reused. If...
Because the platform test — as we talked about before — doesn't give you a structure that contains headers; it gives you a callback that says: I came across a header while parsing, what do you want to do? And so there's no allocation — you get to decide. In this stuff, we do it all for you. And so, to change this, we'd effectively have to go to a world where the type we give you that represents headers would have to be lazy, without...
...and when you say held on — the reason it's held on is because now you're attached to the pool; whereas once we've turned it into a string for you and put it in the collection, it's marked consumed, it's copied there by the pipeline, and so that part of the pool — that chunk — is freed. There's obviously more memory in a given chunk than the one header I might have been consuming.
But once that's freed up — even though you might still be processing that request, and you may not even have read the header yet; it's in the collection — that part of memory gets put back into the Kestrel pool and used for the next request. If we do this, it can't, because it hasn't been consumed yet: it would need to go somewhere, and it would stay in the pipe. That would be the obvious thing to start with — it would just stay in the part of the buffer that is being rented, effectively, by the pipe right now. Yeah.
My gut feel is that we will get there eventually. There will be a mode, or an API you can call very early on, or there'll be some way that you'll be able to say: I want to opt into this, like, preemptively. And there are caveats: if you use this mode, some things can't work, and if you do something strange, weird things are going to happen. But I think we'll end up having to get there, because there will be certain scenarios where you want to.
There's a new type in .NET Core 3.0 that makes it easy to use pipelines, called SequenceReader. It's also heavily optimized; it's used to parse bytes in memory. It does all the hard things about having more than one buffer — because with pipelines, when you call ReadAsync, if you go over a certain size you normally get buffers split across multiple chunks, like a linked list of buffers, and parsing that is hard, right. So people call ToArray, and they allocate, and that defeats the purpose.
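A tiny sketch of the kind of multi-segment parsing SequenceReader handles for you — my example, not from the show:

```csharp
using System;
using System.Buffers;

public static class LineCounter
{
    // Scan a possibly multi-segment buffer for '\n'-delimited lines without
    // first copying the segments into one contiguous array.
    public static int CountLines(ReadOnlySequence<byte> buffer)
    {
        var reader = new SequenceReader<byte>(buffer);
        int lines = 0;

        // TryReadTo transparently handles delimiters that straddle segments.
        while (reader.TryReadTo(out ReadOnlySequence<byte> _, (byte)'\n'))
        {
            lines++;
        }

        return lines;
    }
}
```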
Is it one buffer or n buffers? Am I split across those things? It handles all of that, and the API is much nicer to use. And we're using it in the form reader now, and in the HTTP parser, and we actually got performance gains from using it in the parser, because the dev — Jeremy, on the .NET team — spent a lot of time looking at assembly and, like, making it really fast. Awesome — it's really good. Can I show the hot-reload stuff? I guess, really, this...
That was pretty cool, I hope... okay, so: dotnet watch. I've seen a bunch of complaints about dotnet watch being slow to use, and it really actually is slow to use — and hard to use in certain cases. And in ASP.NET Core 3.0 we've removed runtime compilation by default from Razor pages.
So if you change HTML, you have to rebuild the entire project and rerun it to get new bits, right — and that's kind of cumbersome, especially if you came from a dynamic language, or you're used to JavaScript or Node or PHP or something else where you're basically compiling on demand: you change a file, you hit F5 in the browser, yep, you have new content. Alright — I happily remember doing DNX all those years ago; that was kind of one of our big things.
...like runtime compilation — you could change a file in the inner loop and have it rebuild. So the question is: can we somehow regain the glory of DNX while using dotnet and MSBuild and all that stuff? So I have this experiment. Today, with dotnet watch, you boot it up and it launches — it's "dotnet watch" plus some command: you would do "dotnet watch run", and that would basically run the code, wait for a file change, and then, if a file changed, rerun that command over and over.
Thinking about this from, like, a user's point of view, the way to get the best experience — where you change something in the application, you hit refresh, and it just works — is to figure out how to be the least destructive you can be when you make changes, right. So if I change a controller, do I need to reboot the entire app? Do we need to kill the process? How far...
...how far do I have to go before those changes can be observed by the runtime, right? Anything that happens per request will get recreated per request, so for those things you could potentially update just that one thing, and the next request picks it up, mm-hmm. If you change the Startup class or the program Main — guess what, you've got to tear down the entire application.
Now, unfortunately, things like the middleware pipeline sound like they're per request, but they're actually evaluated once, up front — the pipeline is built at the beginning. And then there are things like Razor views: today, Razor views are a separate assembly, so that's really easy to swap out, in theory — and we do that already in preview 2 and preview 3. So...
Today, for Razor, we compile a different assembly with a random name, and it's easy: when a request comes in, we can say, oh, has a file changed? If it has, recompile via Roslyn again, get a new assembly, and return it to the caller. That's super easy. For dotnet watch it's a little bit trickier, because, depending on what change was made when you saved...
...you want to not be destructive, so that requires knowing what content is in each file, what kind of thing is being changed, what's going on, right. So this sample is trying to give an experience that's kind of like that: when I make a file change, if I make a request during the app-teardown state, I want the request to be queued until the app has spawned again. Today, with dotnet watch, if you're in between a change and you hit the browser, it'll give you a thing like: there's no socket listening, yeah.
So you want to separate the server from the actual application process, right. So what this thing does — what this crazy hack does; it's a crazy cool hack, I think — is there's a server running in watch. So watch has a server running — it's running Kestrel, setting up all those Kestrel things — and then we boot up a second piece. So let me show the structure in this demo: the Watcher is basically dotnet watch, and the Sample is the actual application. The sample looks pretty normal.
There are no tricks here, right. And the watcher runs, and it says: I'm going to point at that DLL — I'm hard-coding a path to the sample application, but you can imagine it being "dotnet watch this folder" and it would just know that, right. Then the watcher has a server running in itself — it's running Kestrel — and it's going to boot the application in a different context: an AssemblyLoadContext, where unloadable load contexts are a new thing in .NET Core 3.
...it's in the same process as your application, right — so you're loading the application like a plugin into the host process, and the host process owns the server. You're basically getting the application — the middleware pipeline — from the plug-in and running it in the host process. So this code, what it does is, it says: get me the required hosting server service, which is my type. The server is actually — we put a fake server into the ASP.NET Core app in the child process, mm-hmm.
Yes, yeah — in a child context. And there's this "wait for application" signal that basically waits until the application is spun up and ready before requests get let through, and that handles the queuing for us in the back. I actually built this using all of our primitives, so I have a background service that basically has this super-long loop that says: okay, I'm going to watch for .cs file changes, and while the app is still running, I'm going to create a load context pointing at the application, and then I'm going to load that assembly.
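The load/unload cycle at the heart of that looks roughly like this — a minimal sketch of a collectible AssemblyLoadContext, not the actual Watcher code; the class name is mine:

```csharp
using System.Reflection;
using System.Runtime.Loader;

// Collectible load contexts (new in .NET Core 3.0) let the host load the app
// like a plugin, unload it on a file change, and load the rebuilt assembly.
public sealed class AppLoadContext : AssemblyLoadContext
{
    private readonly AssemblyDependencyResolver _resolver;

    public AppLoadContext(string appAssemblyPath)
        : base(isCollectible: true) // enables Unload() later
    {
        _resolver = new AssemblyDependencyResolver(appAssemblyPath);
    }

    protected override Assembly Load(AssemblyName name)
    {
        // Serve the app's own dependencies from its folder; returning null
        // falls back to the default context, which is how shared types stay shared.
        string path = _resolver.ResolveAssemblyToPath(name);
        return path != null ? LoadFromAssemblyPath(path) : null;
    }
}

// var alc = new AppLoadContext(dllPath);
// var asm = alc.LoadFromAssemblyPath(dllPath);
// ...run the app... then, on a file change: alc.Unload();
```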
We call CreateHostBuilder — we call into the actual method and we get the application's host builder; this type actually exists in the child context. One thing that's interesting about load contexts is that you can't do type exchange across them — it'll just tell you: cannot cast type A to type A. So if I load, say, the same assembly in the child context and in the host context, and then I pass an instance from the host to the child context through an API...
...it would say "cannot cast blah to blah" and blow up on you. So there are still a bunch of things to figure out — to make the load contexts look better in the debugger and stuff — because you can't easily distinguish two types from different load contexts; it's really hard to figure out.
...that tells us when the application has started and is ready. So we start the application host — this is an IHostBuilder — and we wait for a file change. And, by the way, I wrote it this way because it gives the code a very linear flow: it says start application, wait for file changes, unload application, do it again. (This is a funny bug — it crashes the process when I call collect... oh yeah, anyway.) And then it runs dotnet build — no restore — and then it goes again.
So when you implement a load context, whenever someone calls Load on that context, you have to decide where it comes from — where that load request gets served from. So you can say: interesting, I got a load callback; do I want it loaded into this context, or loaded from the default context? Okay.
And Michael is commenting that he thought he heard it differently — that when you load an assembly, a new context is returned. David was talking about the internals, before we exposed this: internally, when you loaded an assembly — like load-from-bytes versus load-from-file, whatever —...
...the bookkeeping the runtime does, to track which assemblies were loaded from where, was done by this thing called a load context, and sometimes it would create a new one, right. It wasn't that you actually got the load context back or anything like that. And when he says it's now exposed — we're not giving you one back; you implement it. I guess it's a little bit like implementing... what was it, the AppDomain handlers? There was a type-loaded or assembly-loaded event, no?
...and the same for load-from-bytes, okay — so you couldn't load across the streams, right; those things were isolated by definition, in different universes. Right — so either your plugins would all be loaded from file — that's how I tell people things kind of work — you would load it all from file, or all from somewhere else. Right.
Whereas if you have, say, a bus system, you want the whole thing to be in the default context, because it's shared. Yes — anything you want to share across your child contexts needs to be in a shared context somewhere else, okay, and then you pass the instance from that host into the different contexts.
We'll put a link in the show notes, okay — all right, cool. Okay, so that event was done; I'm going to show the demo, I guess. So: hello world 3. I make a file change, I hit F5 — it's in between reloads — and it loads again. So that shows the queuing behavior; with the old watch it would have just said "no server". That's right.
...it's all just state, and they persist across invocations of the build. All that stuff is new since .NET Core 1 and .NET Core 2, and it's continually being refined to make things like a cold build of the app you just built a moment ago as fast as possible — or building this project that references 47 other projects and having it be as efficient as possible, only rebuilding what has to be rebuilt since last time, that type of stuff. Yeah, yeah. So...
It's like a Holy Grail that we're aware of, right. You could imagine your app doesn't have to restart at all: the process is running, and you just changed, like, a variable's default value, and you just want the app to do the bare minimum. As an example: how about I change the first line of an MVC controller action method? That's per request, and there's nothing you can really get into danger with there.
How do we make that the fastest, most minimal change to the running application possible? Now, that's hard with .NET, and there are really two mechanisms that exist today, and they address very specific scenarios. One is the profiler, where there's a hook that lets you basically give it new IL — as I understand it — and you can...
...re-JIT this method. It's at the method level — a member level or something — so you can say: change this method to do this now, and the runtime will, like, do the needful and coordinate that: dispatch the update, fix up v-tables, and all the rest of it, right. And then there's edit-and-continue, which requires a debugger to be attached — and there's some intrinsic, what do I call it, Debugger-something, to let you do that. Yes.
...watch, and somehow there'd be a way for the assembly to be created in the most efficient way possible, and then you can, like, feed it — yep — into the app, which would probably be something like this, because you've got a load context. So you can just unload that assembly, or say: apply this diff assembly, yeah — and it's all just as quick as possible.
...and the reason they do it right is because they usually have quite complicated build infrastructure, and if you were just to make a JavaScript file change — you know, JavaScript, TypeScript, or something else — you make that change, and then you'd otherwise have to rebuild the entire app and then navigate back to where you were through your complicated SPA hierarchy. And the same reason applies to mobile apps and WinForms — they all have the same issue.
...an IL interpreter these days — but they don't do hot reload, yeah. So — but that's... I know of other .NET flavors where people have built interpreters in order to get that feel. Mono has an interpreter, and that's used in some of the Unity scenarios, right — the other .NET. So yeah, lots of options. But this is a cool thing to see — all right.
Everyone got bored, okay — have we looked at HTTP/3 and QUIC? Yes — I was going to say: all that performance stuff you talked about, a lot of it is a benefit in HTTP/1.1; does all of that apply equally to HTTP/2? Because a lot of the things you talked about were like: we now make this a singleton, they get reused — yeah.
With HTTP/1.1, per connection there will literally only be one request object and one response object at a time, because you only ever have one request being processed at a time anyway — even if it's pipelined. But because HTTP/2 is multiplexed, you could have, in theory, 200 requests on one connection. The idea is that you'll pool the objects and only create them when you need to, and then give them back, and so on. Okay, cool — and then HTTP/3 and QUIC.
We've been working with the Windows team on QUIC for about a year at least. They're part of the IETF working group, I think — one of them, that's right, yeah. We're actually involved — not me personally, but the company, Microsoft, is involved — in the spec and, like, the implementation. We have a library internally that is used by some teams. Okay.
That is the plan, and Dave and I have been talking about turning a bunch of this stuff into new talks, maybe even a workshop. So we were talking about maybe making a diagnostics workshop, to go with the diagnostics stuff we did a talk on, and then we could do a performance workshop — a one-day or two-day workshop or something — and we could record it; we could do it at conferences or whatever.
A couple of people are asking related questions about LTS. 2.1 is the current LTS release; it'll be supported through — I think it's June 2021 — at a minimum. 2.2 will not be LTS; it is on the "current" train. And 3.0, at this stage, will not be an LTS. We're still determining whether we'll do a 3.1 that'll be LTS — which is likely — or whether it'll be some release after that, I do believe. We're going to try and firm up those plans and do a blog post or something soon, yeah.
...before we release 3.0 GA, to say what the LTS plans are. But I can say there will be no 2.2 or 2.x LTS: 2.1 is the LTS release for that train, and 3.0 itself will not be an LTS release — it will be a "current". So if you're on 2.2 today — and this was said in the blog post when 2.2 came out; this is not news — if you moved to 2.2 as the current train, you'll need to move to 3.0 when it comes out, within the grace period.
A
We
just
I
think
three
months
in
order
to
get
the
support
and
servicing
releases
after
that
grace
period
is
up
now
moving
from
2
to
2
3.
Oh
they
are,
then
you
know
a
bunch
of
small
breaking
changes,
but
moving
an
application.
We've
got
lots
of
feedback
from
customers
that
actually
moving
in
application
is
very
straight
forward.
Real
for
300k.
It depends what you use, like always, but most of the quote-unquote breaking changes that we've made — or at least a lot of them — are around the startup stuff with the generic host, and a lot of it is mostly compatible; there isn't that much that's like: oh, we changed this API that was widely in use. We did remove some pieces, and again, that was pre-announced — that was always going to be the plan.
If you're using an API that's already marked obsolete in 2.x, it will be gone completely in 3.0 — so, first step, stop using all the obsolete APIs, because they always get removed in the next major version under the .NET Core plans. And that's true for the stuff under the stack as well, and for EF Core as well — all those things adhere to the same policy. All right — let's go.