From YouTube: ASP.NET Community Standup - May 1, 2018 - SignalR and Project Bedrock with David Fowler
Description
Top links from the show:
https://github.com/aspnet/SignalR/
https://github.com/aspnet/KestrelHttpServer/issues/1980
https://github.com/JanEggers/Playground/tree/SignalR/MQTT
A: Cool. So one nice thing these days is that the chat is actually preserved on these. It used to be that as soon as the stream ended, the chat was gone, but now it stays around, so very good. Hello, people on the chat. Cool. Hey, well, so you've got a lot of cool stuff to show off.
B: Yeah. So what I'm going to show today is a thing called Bedrock. It's something I've been building for the last couple of months now. I'll also show a little bit of new SignalR. We're just finishing the RC milestone, so we aren't making many more changes; for RC we're in this thing called ask mode.
B: Okay, yep, all right. So let's talk about the history of things. In 1.0 of ASP.NET Core we shipped a server called Kestrel, which everyone knows and loves. Kestrel was our web server. It was built on this thing called libuv. Libuv is the cross-platform networking library for Node.js; it was made by the Node.js guys so that they could have a cross-platform networking story for Node.
B: We took it because there was no cross-platform networking story in the very beginning of the ASP.NET Core days, before .NET ran on every platform. So historically we kind of used it as our cross-platform networking layer. I want to show you this repository, because one of our devs always complains that Kestrel got more complicated as we added more stuff to it. So my job is to make things more complicated, I guess. Or simpler.
B: If you look at the Kestrel 1.0 source code, you'll see it has two projects: Kestrel and Kestrel.Https. Very simple: this is the web server, this is the HTTPS support. Contrast that with what we have today in the dev branch: there's Kestrel Core, HTTP abstractions, a libuv transport, a sockets transport, Kestrel itself, transport abstractions, and so on. We want to actually break Kestrel down into layers so that you can use the layers separately, because most projects need a good networking stack, and we kind of ended up with one buried inside Kestrel.
B: 1.0 was very simple. There was a type called SocketInput in Kestrel 1.0, and it was kind of the core library used to do I/O between libuv and the application layer. It was originally written by Louis DeJardin, who I think is on the Bing team now, and it looked like a very generic thing that could be used for any networking protocol. So I was like: we should make this a real thing.
B: So I made this repository called channels under my personal GitHub account, where I kind of incubated this push-based streams idea for .NET, and I got a couple of guys involved from the community. You guys know Marc Gravell from Stack Overflow.
B: Let me try to find the commits on here. Marc started helping out because they maintain StackExchange.Redis; there's James, who works at a bank and keeps doing some excellent things; and of course Ben Adams, because Ben is everywhere. That's how we ended up incubating this thing. We ended up using new patterns, new practices and new tooling, and the whole idea was that we'd turn this thing into a general-purpose library that we'd use for networking.
B: So what ended up happening? We ended up moving this library to corefxlab, and it got renamed to System.IO.Pipelines. If you don't know, corefxlab is where we incubate ideas for the vNext of .NET, so things kind of start off in corefxlab. For example, there are experimental packages in there: we have buffer primitives, readers and writers, multi-value dictionaries, sequences.
B: Basically it's kind of a hodgepodge of ideas, and the goal is that we incubate there, iterate a little bit, push packages, and people try them, knowing that they're completely unsupported; then we eventually migrate things from there into CoreFX. If you look closely, you'll see there's no more System.IO.Pipelines in corefxlab, because we actually put it into CoreFX proper in 2.1. So if you look at the latest 2.1 CoreFX, you'll see it under src.
B: There's System.IO.Pipelines. Oh my gosh, it's lost up there. Not System.IO.Pipes, that's different; Pipelines is right here in CoreFX. So now we have an official package for pipelines in 2.1 on NuGet, and you'll see what those are as we move forward. In parallel, while I was working on channels, I started to prototype new SignalR, and we had this abstraction.
B: Early on it was called Microsoft.AspNetCore.Sockets, which is an awful name, but the idea was that Sockets was this generic connection abstraction where you're hidden away from the actual details of the transport. So whether it was libuv, or sockets, or RIO (that's the Windows-specific networking APIs), or named pipes, you could code against a single abstraction and not have to worry about the underlying transport at all.
B: If you look in the commit history you'll see a bunch of things in there, like sockets being used in SignalR, and we had a bunch of prototypes for doing generic networking things on top of this new sockets layer. Then, at the end of 2.0, we shipped pipelines as source in Kestrel, because we weren't ready to make it public yet. So Kestrel actually has the source of pipelines checked in on the 2.0 branch, which could be a nightmare for servicing, but it works fine.
B: Stepping through a bunch of these various things: I wrote this uber spec for Bedrock in an issue on the Kestrel repository where I kind of laid out the plans for what I thought we should do going forward, and it's all about supporting non-HTTP protocols. So there are kind of two parts. I have some, like, super ghetto diagrams I made last night that made no sense, but bear with me.
B: Applications are the things that live on top. So ASP.NET Core is an application, for example; SignalR is in the same category of application. The dispatcher is the thing that gets your configured pipeline and calls it, so think of it like hosting, for example. The transport is the thing shuffling bytes back and forth, so a transport layer, for example, would be sockets or named pipes or whatever else can read and write bytes. Yeah.
B: I don't think it's for everyone to write; it's for people who understand networking. The goal here is that if you write a framework and you need networking, let's say you wrote a MongoDB driver or a database, you are an application layer. You sit on top of the dispatcher and you write against some arbitrary connection abstraction, and the transport underneath could be libuv, could be sockets. The idea is that the person who wrote the transport layer understands transports and understands how to optimize their I/O loop.
B: So the transports are super fast and efficient; you just code against this abstraction, and then you're decoupled from the underlying transport layer completely. We would ship transports, likely libuv and sockets, that are highly optimized for TCP scenarios, and then imagine we ship RIO, because Ben's going to do RIO, because he's been asking for it. Applications can then take advantage of arbitrary transport layers that come off the shelf.
B: By the way, RIO stands for Registered I/O, and the way it works is that you give RIO a bunch of memory that it pins to physical memory to do I/O, and then whenever you actually do reads and writes you're managing everything yourself, all the buffers. You're basically given control of the networking, talking to the physical memory, which is insanely fast but insanely hard to do sometimes. Yeah. So this issue kind of spells out the layers.
B: The idea is that we want to have support for non-HTTP protocols in ASP.NET Core. ASP.NET Core is basically an umbrella term for server programming, so don't think of it as, like, ASP.NET Web Forms or whatever; the name is a brand, and the brand supports multiple things other than HTTP. It's not tied to HTTP. Has anyone said WCF yet on the chat? No one said it? Good, awesome. That always comes up when I show these slides.
B: The goal isn't to be as high level as WCF. It's more that this is the underlying thing that you could build on top of; if you were going to build a WCF, you could use this as the base. For example, there's a popular protocol called MQTT. I don't know exactly what it stands for, something something telemetry. It's used in IoT scenarios, and it's nice. It's not as popular as HTTP; it's kind of the next level below HTTP when you're talking about IoT.
B: It comes up in all the things that do IoT. It's a binary protocol over TCP, or WebSockets, and it has a spec, and there's a bunch of investment in it for doing this stuff. So you can imagine writing a server app that supports both HTTP and MQTT in the same application with similar patterns.
B: So the goal here is to be able to unify those kinds of things and not have to worry about varying abstractions. I call out the various layers: there's kind of the application and middleware frameworks, which are in the same space. The application code handles those connections; it's the protocol-parsing logic.
B: Then middleware: those are things that wrap the stream of data. For example TLS, which people know as the S in HTTPS; TLS is the encryption and decryption logic around the actual stream. As the TLS middleware, whenever you read from the stream it will decrypt the bytes from the actual underlying transport and give you decrypted bytes, and when you write it will encrypt those bytes and write them to the underlying transport. That's the flow of middleware. And then there are dispatchers.
B: Dispatchers are basically the things that call into the middleware pipeline for connections. The idea was to build parallels to the HTTP layer. So there's a ConnectionContext, and there's an IConnectionBuilder versus the IApplicationBuilder, if you recall. I can show it. Is everyone super confused right now?
B: Not really? So the biggest thing about the current Bedrock abstraction is that it requires connection semantics. UDP is connectionless: you just send and receive from arbitrary addresses. There are other things built on top of UDP that work more like connections, and those may work better than just raw UDP. Take this thing called QUIC: it's a protocol that's coming up, and it stands for Quick UDP Internet Connections. If I had to give you the TL;DR, it would be: it's UDP with connection semantics.
B: If you use YouTube, you've used QUIC; it's actually being used out there under the covers. So I think raw UDP may not work with this abstraction, but if you did UDP with connection semantics, where you had a connection ID as part of your packet, you could make SignalR work that way, if that makes sense. So, going through some of the actual APIs, there's a connection.
B: I think this is up to date, but I can't remember, so if it's not, let me know. Here's the equivalent of the HttpContext in the low-level networking space. You have a connection ID; there's a features collection, which comes from the HTTP world; and there's a transport, which is an IDuplexPipe. That's one of the new primitives we have built into pipelines in 2.1: it basically has an input and an output, so you can read from it and write to it. And then each of these things is broken into features.
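The shape being described can be sketched like this (names taken from the Microsoft.AspNetCore.Connections and System.IO.Pipelines APIs of the 2.1 era; a simplified sketch, not the full surface):

```csharp
// The HttpContext equivalent for raw connections (simplified).
public abstract class ConnectionContext
{
    public abstract string ConnectionId { get; set; }
    public abstract IFeatureCollection Features { get; }
    public abstract IDuplexPipe Transport { get; set; }
}

// The pipelines primitive the transport hangs off of:
// one side to read from, one side to write to.
public interface IDuplexPipe
{
    PipeReader Input { get; }
    PipeWriter Output { get; }
}
```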
B: Just like the HTTP layer. The idea behind features is that middleware can add or remove features based on the underlying transport's capabilities. So, for example, a feature may be something like: I can provide you an IP address. Because if you're TCP I can give you an address, and if you're not TCP I can't give you one. So features are things that describe the capabilities of the underlying transport.
A: So, to summarize on the chat for people watching the video: some MQTT came up, and Frederik brought up that MQTT is Message Queuing Telemetry Transport, which is interesting. There was a question about SignalR docs, and Damian jumped in and said that they're being written and will be published soon.
B: I mean, yeah, we'll get him to call in afterwards. Cool, okay. So I was talking about querying features. As an example, we have some code; I'll just find some code quickly in SignalR itself. I use VS Code, like, full time now; I've not been using VS for a while. I don't know, it makes my computer super slow. Whatever.
B: So, for example, we have this feature, and I didn't name it, for the record. It's an IConnectionInherentKeepAliveFeature. The idea here is that if the underlying transport supports keep-alives inherently, then we don't have to send heartbeats ourselves, because we get that behavior for free. So we say: if you have this feature, and its HasInherentKeepAlive flag is true, then we don't have to worry about sending heartbeats.
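In code, that check looks roughly like this (IConnectionInherentKeepAliveFeature is the real feature interface; StartHeartbeat stands in for whatever kicks off SignalR's ping timer and is illustrative):

```csharp
// Skip our own heartbeat when the transport already keeps the
// connection alive on its own.
var keepAlive = connection.Features.Get<IConnectionInherentKeepAliveFeature>();
if (keepAlive == null || !keepAlive.HasInherentKeepAlive)
{
    // No built-in keep-alive, so SignalR has to send pings itself.
    StartHeartbeat(connection);
}
```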
A: And also, when transports are plugged in, it's not just the ones that you have hard-coded into SignalR; it's more that it's pluggable at the transport level itself, right?
B: Right. Let me fast-forward a little bit. So in SignalR there are transports, and there's libuv and sockets. Today, Kestrel has two transports shipped in the box in 2.1: Sockets, which uses the managed Socket type from CoreFX, and libuv, which is the thing we've had since 1.0 and haven't changed. In 2.1 we're adding a bunch more transports, not to Kestrel itself, but to the Bedrock-style ecosystem, like WebSockets, because SignalR itself exposes the connection abstraction over WebSockets.
B: So we create the impression that you have a duplex transport over long polling or over server-sent events, so the API for the application looks the same. You code against this long-running connection, and underneath it's polling over and over in a loop, so under the covers it's actually sending multiple HTTP requests over the Internet.
B: That's on the roadmap; I'm not sure which roadmap, maybe 3.0, because we have so much stuff to do. HTTP/2 is the next big thing for Kestrel. QUIC I really want to have time to spike. The protocol is changing rapidly; if you actually follow the standards process, the whole thing is changing a lot. At first QUIC used to have its own crypto, and now it's TLS 1.3.
B: So you have to kind of implement TLS 1.3 if you're going to implement QUIC in the first place, and there seem to be only a few implementations out there, like quic-go, which is a Go-based library. But if you wanted to build something, that would actually be amazing. I would start by looking at the socket transport in Kestrel if you want to understand how to actually make this work, and then that can be a starting point for making your own custom transport, I expect.
B: RC1 is coming soon; with Preview 2 you can start now and then upgrade to RC1 when it comes out, and that pretty much has the final shape of the API you'd code against. You'd have to implement this dispatcher layer yourself, the thing that calls the application layer once you have a connection object. So let me show roughly how Kestrel does it. Kestrel has this transport abstractions project. Is it visible? Good.
B: Kestrel's transport abstractions are all "pubternal": public, but treat them as internal. You can use them, but they will change; that's basically a guaranteed break, so don't be sad if it breaks on you. So here's the thing that lives in between Kestrel and the transports, to abstract Kestrel's core from the transports: this thing called TransportConnection. It has a bunch of goop. It derives from ConnectionContext, the thing that we saw before, and it has a bunch of features.
B: The libuv connection context derives from TransportConnection, which itself derives from ConnectionContext, the thing we just saw. The goal here is that all the transport has to do is its own thing: at the transport layer, all you have to do is read from the application pipe, write those bytes to your underlying transport, and write to the application pipe when you get bytes in from the network. Okay. So this is kind of hard to see in the abstract, but it's maybe a bit simpler than it sounds.
B: So the sockets transport has two loops, DoReceive and DoSend, and you can guess what they do. One of them gets memory from the input pipe of some size and calls receive into that buffer; if you get zero bytes, you're done; otherwise you call Advance and Flush. And that's the entire receive loop. Super simple, right? And then for sends it's the same thing: we process sends in a loop, we read from the application and write to the underlying socket. This is the code you write if you write a transport, and how complicated it is all depends on your underlying transport layer, but the end goal is to write to the application everything you read from the network, and read from the application everything that needs to go out, to do the I/O, basically. Okay.
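The receive loop being described follows roughly this pattern (a sketch simplified from what the Kestrel sockets transport does; error handling and socket shutdown are omitted):

```csharp
// Pump bytes from a raw Socket into the connection's application pipe.
async Task DoReceive(Socket socket, PipeWriter output)
{
    while (true)
    {
        // Ask the pipe for memory instead of allocating a buffer ourselves.
        Memory<byte> buffer = output.GetMemory(2048);
        int bytesRead = await socket.ReceiveAsync(buffer, SocketFlags.None);
        if (bytesRead == 0)
        {
            break; // The peer closed the connection.
        }

        output.Advance(bytesRead);                      // Commit the received bytes.
        FlushResult result = await output.FlushAsync(); // Hand them to the application.
        if (result.IsCompleted)
        {
            break; // The application reader is done; stop receiving.
        }
    }
    output.Complete();
}
```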
B: So what we do here today is implement the actual feature collection interface with the features on the same object, so we don't allocate per feature. There's one object that represents the entire set of known features, and we have fields in case somebody wants to override one of those features, but by default they point at the current object, if that makes sense.
B: If I look at TransportConnection and the features on it, they all point to this. That's because, by default, if no one overrode these features, it's the same object that's being exposed. It's a way to not allocate a new object per feature. Then when someone calls Get on a feature, we have kind of a slow path for features that aren't known, and then we have the fast path. When you call get feature, it checks whether the type you passed in is a known type; if it's a known type, we return the value directly. So for the most part, for all the known features, we just return it immediately, and only once you get to unknown features do we fall back to the slow lookup. The idea is to avoid a big dictionary lookup for the common cases.
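The trick is roughly this (an illustrative sketch, not the actual Kestrel source; IConnectionIdFeature stands in for any known feature type, and ExtraFeatureGet for the slow path):

```csharp
// The connection object implements its known features itself, so common
// lookups return a field (defaulting to `this`) with no dictionary access
// and no per-feature allocation.
public T Get<T>() where T : class
{
    if (typeof(T) == typeof(IConnectionIdFeature))
    {
        return (T)_currentIConnectionIdFeature; // A field, defaults to `this`.
    }
    // ... one branch per known feature type ...

    // Unknown feature types fall back to a dictionary: the slow path.
    return ExtraFeatureGet(typeof(T)) as T;
}
```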
B: Here, so you find the SignalR samples. I'm navigating to the SignalR samples, and there's a Program.cs that looks like what everyone's used to in ASP.NET Core. So here's UseKestrel, and on the builder I say Listen, localhost, 5000-something, whatever my port is.
B: In 2.1 we actually expose the connection builder abstraction on the Kestrel listen options, so I can say Listen on any address and port, get the builder, and say UseHub. This is SignalR's framework code running on top of the connection abstractions. SignalR just gets passed this builder; this UseHub thing has no idea what the builder is hanging off of. It could be TCP, it could be named pipes.
B: It could be anything that can support the builder, and SignalR just runs its connection handler on top of that builder. So it's kind of like the ASP.NET Core middleware pattern, where you have an IApplicationBuilder and there's UseStaticFiles, UseSignalR and various other Use methods. It's the same pattern, but lower level.
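Put together, the Program.cs being walked through looks something like this (2.1-preview era APIs; UseHub on the connection builder was part of the experimental Bedrock work, so treat the exact names as illustrative):

```csharp
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            // The normal HTTP endpoint.
            options.Listen(IPAddress.Loopback, 5000);

            // A raw endpoint that dispatches straight to a SignalR hub.
            // SignalR only ever sees the connection builder, so it has no
            // idea whether TCP, named pipes, or something else is underneath.
            options.Listen(IPAddress.Any, 9001, builder =>
            {
                builder.UseHub<Chat>();
            });
        })
        .UseStartup<Startup>()
        .Build();
```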
B: If that makes sense. Okay, and then look at the Startup class. It has ConfigureServices, which has AddConnections, which is the lower-level API, and it has AddSignalR, which is what you'd expect, and I add the MessagePack protocol; SignalR supports both JSON and MessagePack out of the box. I have the Redis line commented out because I'm not running Redis right now. And then in Configure I have UseSignalR with a bunch of routes.
B: For those who haven't seen the new SignalR, this is the new model: you have to map specific hubs to specific URLs now. So I say my hub, the chat hub, maps to /default. It's a very simple chat application: when someone joins, I send "connection ID joined"; when someone leaves, "a person left"; and when the Send method is called, I send the message, saying who sent it, to various groupings: send to others, send to everyone except myself, send to a specific connection.
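The hub itself is only a few lines. A sketch along the lines of the sample (the Hub base-class members are the real 2.1 APIs; the method and message names here are illustrative):

```csharp
public class Chat : Hub
{
    public override Task OnConnectedAsync()
        => Clients.All.SendAsync("Send", $"{Context.ConnectionId} joined");

    public override Task OnDisconnectedAsync(Exception ex)
        => Clients.All.SendAsync("Send", $"{Context.ConnectionId} left");

    // Broadcast to everyone, including the caller.
    public Task Send(string message)
        => Clients.All.SendAsync("Send", $"{Context.ConnectionId}: {message}");

    // Broadcast to everyone except the caller.
    public Task SendToOthers(string message)
        => Clients.Others.SendAsync("Send", $"{Context.ConnectionId}: {message}");

    // Target one specific connection.
    public Task SendToConnection(string connectionId, string message)
        => Clients.Client(connectionId)
                  .SendAsync("Send", $"Private from {Context.ConnectionId}: {message}");
}
```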
B: Another bug! I'm kidding, I don't do that. So now I should be running. I'm running two things: I'm running on a port on localhost, and one of them is bound to this port, and in Program.Main I launch this URL. This is our beautiful example; there's no Bootstrap because we're lazy. I'm going to use hubs, and I'm going to open up my network tab so you can see the underlying connections happening.
B: We weren't going to do transport fallback, and then after the first release we kind of realized why that would be so bad, so we have it back; people should be happy about that. This template just shows various ways to communicate with others. So I can say "hey", and I see myself, and you can see stuff going on down here in the network tab.
B: If I look at the tab here, and I refresh, okay, nice, you see different things here. Notice there are a bunch more requests here, and there are more responses. So here we go: here is the response that came from the long-polling request, there are more requests that went up, and this one, I think, is a pending request that's waiting for more data. Long polling works by having a request open to the server, waiting until the data comes in.
B: If I compare that to the WebSockets one, I can see frames. You see the frames going back and forth, and you can see all the data going over it, and this type 6 is the keep-alive frame being sent. Remember the code we had before that checks whether your connection has inherent keep-alive? WebSockets doesn't, so we do pings, and we're sending this ping every, how long is it, I don't know, I think it's five seconds, but it looks faster. Okay, yeah, so we send that frame every few seconds to keep the connection alive.
B: If you want to track presence, like you want to know who's online, you have to store a list, because groups don't tell you who's in a group; they just let you broadcast to a group. If you do a loop and send to each connection in a group yourself, that will not be as efficient as using groups, and it won't work across servers as well.
B: So I would use a list of connections if you want to know who's online or offline, but I wouldn't use it to actually do the individual sends, because that's not as efficient. You want to use groups to describe groups of connections, in any arbitrary way you want to group connections, and then send to that group, because we can optimize that instead of sending to each connection.
B: Old SignalR tried to work in all cases, for everyone, at all times, in a distributed system, which made it really hard to scale for some kinds of applications. So the new SignalR mantra was: let's start small; let's start with whatever you would have had to do manually; let's see how bad that is; and if it's really bad, then we'll add support for it.
B: As an example, in the first previews, the first alphas, we didn't support having the connection be reused. So if you made a new client and you called start and then stop, or went offline, you'd have to create a brand-new client, because we said: oh my gosh, reconnect was the hardest thing ever to do in the existing SignalR, right. It had tons of bugs, tons of races; all these things were wrong with it. And then after we wrote a single application, we realized: oh my gosh.
B: This is impossible to use, and now we have to do it ourselves again. But I think the big difference was that, as we did that, feature by feature, we decided that we were going to take on the burden of owning this space. In the old thing we kind of just made it work and discovered bugs as time passed; now we understand the space more, we understand the types of issues that occur, so we knew beforehand...
A: ...what you were getting into. So yeah, it's a difficult balance, because on the one hand you're trying to build something magical, and if you do too much magic you end up with complicated, hard-to-maintain code, and it doesn't scale well, and all those things. So this makes sense here: you've taken what you've learned from before, and then you're also building simple and adding features as they're needed.
B: Yeah, and I think the biggest change we made was that new SignalR, if you do scale-out, requires sticky sessions unless you're using WebSockets, and that decision has simplified everything else in the design. Old SignalR supported connections hopping every single message between N servers, and it tried to make that scale, which is literally impossible. Now, if you do long polling or server-sent events, you have to make sure that you have turned on stickiness for those transports to work, but we don't have to worry about...
B: ...storing a client cursor that can survive across reconnects, across a farm. That, I think, was the thing that made SignalR as hard to maintain as it was. So these days we just avoid that entire issue by not supporting it at all. Okay, yeah. So this is cool. Let me show something else. So I have two clients: I have the long-polling client and the WebSocket client. Let me try adding one more client.
B: That client should connect to the same hub, but using TCP. Here we go: net.tcp. If you're a WCF fan, you'll recall this; the scheme is a URI scheme. Actually, it's funny: I tried just doing tcp with the colon-slash-slash pattern and it blew up, because you have to register URI schemes in some API in .NET. If you do new Uri with a tcp://address, it just blows up, but net.tcp is actually hard-coded somewhere.
B: Right. And if you look at the actual code for the client, this is it. This is not in the box, but we plan to actually do this in the next release on the client side of SignalR. Here's the hub sample; let's get to the code. I create a new HubConnectionBuilder, I configure logging with the console logger so you can see the logs, and if the scheme is net.tcp I use WithEndPoint, which is a different API, and if it's not, I use WithUrl.
B: So the client has a similar model to the server, where I can plug in different transports depending on what I'm doing, and in this case I'm using the URI scheme to determine whether to use TCP or HTTP. WithUrl is what's built into SignalR today; it does the WebSocket fallback to server-sent events or long polling. This WithEndPoint is new; for this sample I basically copied Kestrel's socket transport layer and made it work for clients. So, if you look...
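The branching being described looks roughly like this (WithUrl is the real client API; WithEndPoint here is the sample's experimental extension, so treat it as illustrative):

```csharp
// Pick a transport based on the URI scheme, WCF-style.
var uri = new Uri(args.Length > 0 ? args[0] : "http://localhost:5000/default");

var builder = new HubConnectionBuilder()
    .ConfigureLogging(logging => logging.AddConsole()); // See client logs.

builder = uri.Scheme == "net.tcp"
    ? builder.WithEndPoint(uri)  // Raw TCP transport (sample-only API).
    : builder.WithUrl(uri);      // Built-in HTTP transports with fallback.

var connection = builder.Build();
await connection.StartAsync();
```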
B: Otherwise this doesn't work at all, but pretend it works. Then I add this TCP connection factory, which basically just starts a TCP connection; the code is right in here. I call StartAsync, and then this is all SignalR needs to understand a new transport. The idea is that we would have clients and servers supporting various transports through the same interfaces; today the transport layer in Kestrel only supports the server side of things.
B: By the way, this was very buzzworthy, if you were unaware of it. I think it's Jan Eggers who did this really cool thing recently: he took SignalR's protocol parsing layer and got the protocol itself to support MQTT, and then he still used hubs as the dispatch model.
B: So he wanted to use a hub with methods to actually dispatch into, as the API that was exposed to end users, but under the covers it was handling MQTT as the wire protocol. Wow. So here in his Playground repo there's this MQTT project; let me find it. First of all, here's the MQTT parser. This actually takes in the bytes, parses the packets, and understands the protocol.
B: How it fits together, what it means: this is kind of a hybrid, using hubs solely as a model to dispatch to your methods. It uses the SignalR programming model, but it isn't using SignalR itself; he replaced the actual SignalR caller with this thing that understands MQTT. And this is an example of having to write the code that understands the bytes on the wire, the parsing.
B: So you get an MQTT hub connection context and you read from it. You get a buffer; if it's not empty, you deserialize a packet from the buffer, and a packet here is basically an MQTT packet. Then there's a method to dispatch on that thing: if it's a connect packet, call on-connect; if it's a publish, call on-publish; if it's a ping, call on-ping; if it's a subscribe, call on-subscribe; otherwise do nothing. And then, from there, the question is: once you get a packet, what do you want to do with it?
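The read-and-dispatch loop being described has roughly this shape (the pipe APIs are real; the packet types, TryDeserializePacket, and the On* handlers are illustrative names modeled on MQTT's control packets):

```csharp
while (true)
{
    ReadResult result = await connection.Transport.Input.ReadAsync();
    ReadOnlySequence<byte> buffer = result.Buffer;

    if (!buffer.IsEmpty && TryDeserializePacket(ref buffer, out MqttPacket packet))
    {
        // Route each MQTT control packet to its handler.
        switch (packet.Type)
        {
            case MqttPacketType.Connect:   await OnConnect(packet);   break;
            case MqttPacketType.Publish:   await OnPublish(packet);   break;
            case MqttPacketType.PingReq:   await OnPing(packet);      break;
            case MqttPacketType.Subscribe: await OnSubscribe(packet); break;
            // Anything else is ignored in this sketch.
        }
    }

    // Tell the pipe what we consumed and what we examined.
    connection.Transport.Input.AdvanceTo(buffer.Start, buffer.End);
    if (result.IsCompleted) break;
}
```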
B: He instantiates a handler, and it basically has a bunch of methods to handle the MQTT-specific payloads.
A: All right, so I can see this is either a SignalR-like extension, or it's a different framework that looks similar but is different, right? What is the controller base class for arbitrary server protocols?
B: That's kind of the question, right. This is like a hub; the whole idea behind the hub is that it's like a controller. It's this place where you can write code that gets invoked. It's the same thing here, but you aren't using the hub semantics per se; you're just handling this protocol and dispatching to the methods. So it's not on the roadmap, but I think there will be interesting things that happen in the community.
B: I think we'll watch that and then decide whether we're going to take it in or not, based on demand. But I really want SignalR to be a super fast, low-level-ish layer on top of the transport that is extremely reliable, so you can use it to handle connect, disconnect, and sending payloads back and forth, and you don't have to worry about having your own protocol, because we do the dispatching for you. On top of that, it's kind of up to you.
A
B
A
B
So this presentation was given by one of our developers, Pavel. He gave it to the .NET team in one of our internal talks. I don't have to go all the way through it, but I think we have been talking about pipelines, like, just on Twitter every now and then for a while, and some people want to know what the state of it is. Pipelines is shipping in 2.1; it has no official support for full framework.
B
It's right now in preview status, so it's Preview 2 currently. Kestrel uses it, SignalR uses it; it's actually used in a bunch of places in ASP.NET Core 2.1, so you can count on it being, like, battle-hardened, hopefully. So I'll kind of run through what it is quickly, because I don't want to spend too much time going through specific details.
B
B
B
Some streams have a buffer internally, which means you end up copying from the internal buffer to the one you passed in, and that can happen at multiple layers, and you have no idea when it's happening. So when you wrap streams in other streams, you have to kind of understand how the stream is written to understand if you are buffering too much or not. The buffering can be good or it can be bad, but for some scenarios, for example WebSockets, we don't want to allocate anything unless data is coming in.
B
We put it in production probably before it was even beta, so yeah, awesome, awesome. Next: no common pattern for pooling. Whenever you want to do efficient networking I/O, you normally have to pool buffers, and you have to either create a memory pool or use one that's out there. You have to rent and return, and once you start pooling memory you're in the realm of C++ malloc/free: you have to understand when to return memory, and that's not simple.
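The rent-and-return discipline he's describing is what `System.Buffers.ArrayPool<T>` exposes directly; a small sketch of what falls on the caller:

```csharp
using System;
using System.Buffers;

class PooledBufferExample
{
    static void Main()
    {
        // Rent from the shared pool; the array may be larger than requested.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
        try
        {
            // ... fill and use the buffer here ...
            Console.WriteLine(buffer.Length >= 4096);
        }
        finally
        {
            // The "malloc/free" part: forget this and the buffer leaks from
            // the pool; return it twice, or keep using it after returning,
            // and you corrupt shared state.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```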
B
If you own your own component, the moment that data is passed somewhere else and is pooled, the consumer has to know when to return it. Today the GC understands it: it knows who's using a thing, because it can track usages, right. With pooling, that's all gone. Hooray, now I'm managing my memory. I'm managing the memory, exactly.
A
B
That is, I'd say, a really bad place to be in. So part of the goal was: if you want to do efficient networking, you have to pool buffers. So how can we make the model such that, if you do pool buffers, it's kind of hidden from you, or it's natural, right? The rent and return are natural instead of being a very explicit operation.
B
Today, well, in 2.1 this is a lot better, but in any existing world you have to allocate a Task per read, per write, right, and that was kind of a thing that we didn't want to do. So in 2.1, ValueTask actually supports this very optimized code path where the ValueTask can represent multiple async operations without having allocated. So that's actually the abstraction we use now in pipelines, but before that there was no way, besides writing a custom awaitable, to avoid allocating a new Task.
B
Every time you did a read or a write, and that adds up quickly. CancellationTokens have been kind of fixed in 2.1, but they aren't the best API, though, because APIs that implement cancellation have to account for being passed a different token every time, and that kind of makes it less optimal. And then the other thing was that every time you want to implement cancellation, you end up throwing exceptions, which kind of sucks.
B
If you want to cancel a thing, that isn't exceptional, and yet you throw exceptions for known cases, which kind of sucks. So in pipelines we never copy; we actually give you the buffers. The switch here is that before, you'd have to pass in a buffer; now, buffers are fed to you. So you don't end up having to allocate buffers, because that's done under the covers by the API itself. That doesn't mean you have to trust it; the APIs, you can configure.
B
B
Imagine the client sent two bytes and didn't send a \n, so you read bytes off the wire and you have to keep buffering until you find a \n, right. Doing that efficiently, where the client can kind of send you tons of data and you haven't seen the actual newline for a while, you want to actually cap the buffering, so you don't have a giant buffer that you keep in memory, right.
B
B
There are no allocations per read/write; the per-operation allocations are gone. We internally store the buffer as a linked list of chunks, to avoid allocating giant buffers in memory all at once. Cheap cancellation: there are no exceptions, no allocations; it just yields the current read. So if you say cancel, it'll return you a result saying "I got canceled".
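That allocation-free cancellation surfaces on `PipeReader` as `CancelPendingRead` plus the `IsCanceled` flag on the read result; a sketch:

```csharp
using System;
using System.IO.Pipelines;
using System.Threading.Tasks;

class CancelExample
{
    static async Task Main()
    {
        var pipe = new Pipe();

        // Nothing has been written, so this read would normally park.
        // Cancel it from "elsewhere" instead of throwing an exception at it.
        Task canceller = Task.Run(() => pipe.Reader.CancelPendingRead());

        ReadResult result = await pipe.Reader.ReadAsync();
        await canceller;

        // No OperationCanceledException: the read just yields with a flag set.
        Console.WriteLine(result.IsCanceled);
    }
}
```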
B
Instead of throwing exceptions, so it's cheaper. This is supposed to be a comparison, and it's kind of a bad comparison, but imagine you want to read stuff from a file, a 10 MB file. The bottom is supposed to be StreamReader: you open the file and then you call ReadLineAsync, a super simple API. The pipe gets a bit more verbose, but the idea is that you would reimplement StreamReader with the pipe API, if that makes sense. Okay.
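The verbose side of that comparison is roughly the canonical pipelines read loop: scan the buffer for newlines, slice complete lines out in place, and tell the pipe what you consumed and examined. A sketch, with `ProcessLine` standing in for whatever you do with each line:

```csharp
using System;
using System.Buffers;
using System.IO.Pipelines;
using System.Text;
using System.Threading.Tasks;

class ReadLinesExample
{
    static async Task Main()
    {
        var pipe = new Pipe();
        await pipe.Writer.WriteAsync(Encoding.ASCII.GetBytes("hello\nworld\n"));
        pipe.Writer.Complete();
        await ReadLinesAsync(pipe.Reader);
    }

    static async Task ReadLinesAsync(PipeReader reader)
    {
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            // Slice out every complete line currently in the buffer, in place.
            SequencePosition? newline;
            while ((newline = buffer.PositionOf((byte)'\n')) != null)
            {
                ProcessLine(buffer.Slice(0, newline.Value));
                buffer = buffer.Slice(buffer.GetPosition(1, newline.Value));
            }

            // Consumed up to buffer.Start; examined to the end, so the pipe
            // only wakes us up again when more data arrives.
            reader.AdvanceTo(buffer.Start, buffer.End);

            if (result.IsCompleted) break;
        }
        reader.Complete();
    }

    static void ProcessLine(ReadOnlySequence<byte> line) =>
        Console.WriteLine(Encoding.ASCII.GetString(line.ToArray()));
}
```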
B
Right, so I think I'm at the comparison, and this is kind of the code continued, right; there's a bunch of other code to read lines and stuff, so ignore the bug. Yeah, that bug is fixed now, so erase that line completely. The top row: it's the same file size in both cases, so the strings are the same, because we do turn the line into a string eventually, and the only thing you pay for...
B
B
Yeah, that's pretty cool, yeah. Comparison, okay. And then we have some history here, kind of the same thing from my previous documents: it began in Kestrel, then channels, then corefxlab pipelines, public API in 2.1, and Kestrel is largely copy-less. And this diagram kind of shows what we mean by copy-less. Today you get bytes from a socket.
B
It has the HTTP start line, and it has some of the body, and it has its headers, right, some chunks of data. And in a world where you have streams, you can't read a thing without passing in a buffer; it will copy from that internal buffer into your buffer and hand you something, right. What if, instead, you could just parse the bytes in place? So I get data from the socket, I'm gonna read that same buffer and parse HTTP up until the newline at the end of the chunks, right.
A
B
A
B
Each layer gets access to the body. So in ASP.NET Core, the way we handle the HTTP pipeline is: Kestrel reads the start line and headers and then calls your callback, right, to run middleware, and then, when you call Read on the body, it gives you more data, right. It's the same overall request, but the pieces are kind of exposed from different layers. So you read the body, and now you get the same buffer, but you're...
B
B
So I try parsing that; the parser says "that's not a full HTTP request, go away", right. So I say: okay, I'm gonna advance to the start of the buffer, which says I'm gonna put back all this data, because I consumed nothing, and I call Read again, and now I have more stuff: "GET /", all right, great, uh-huh, which actually isn't a valid HTTP line, but I'm gonna leave it for now, because it doesn't matter for this example.
A
B
You can't do it in parallel; it only supports single reader, single writer. So it's more like you would do that sequentially: I would read one part of it and then pass you the rest of it. Okay. I call Read again; I get the body, I get, inside that, the headers, yeah, and then I could pass that buffer on to the header parser, and it would understand just the headers without having to reallocate more data, right, right.
B
You keep reading the same buffer over and over. Push and pull, this isn't that important, skip that for now. There is this idea of backpressure where, as you're pumping data into the person that's reading, we have to support the scenario where you can overwhelm the person that's trying to read. Imagine the socket is sending tons of data over and over and over and the consumer can't keep up. There has to be a threshold, right, where you stop; you say: okay, bro, I can't handle any more data.
B
Let me stop, I'm backing off, right. So that's a pause threshold. When you hit the pause threshold, the person producing data stops until you reach the resume threshold. Say the consumer is reading data off of the network and I'm trying to write it to the DB, but my app is running very slowly and I can't do it fast enough. So: please stop, pause.
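Those pause/resume thresholds are constructor arguments on `PipeOptions`; a sketch of wiring them up and watching the producer pause (the 64/32-byte values are just illustrative):

```csharp
using System;
using System.IO.Pipelines;
using System.Threading.Tasks;

class BackpressureExample
{
    static async Task Main()
    {
        var pipe = new Pipe(new PipeOptions(
            pauseWriterThreshold: 64,    // producer pauses past 64 unread bytes
            resumeWriterThreshold: 32)); // ...and resumes once below 32

        // Write 100 bytes: over the pause threshold, so the flush does not
        // complete until the reader catches up.
        ValueTask<FlushResult> flush = pipe.Writer.WriteAsync(new byte[100]);
        Console.WriteLine(flush.IsCompleted); // the producer is paused here

        // Reader drains everything; the backlog drops to zero, producer resumes.
        ReadResult result = await pipe.Reader.ReadAsync();
        pipe.Reader.AdvanceTo(result.Buffer.End);
        await flush;
        Console.WriteLine("flushed");
    }
}
```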
A
B
B
B
B
I'm gonna write one, two, three, four, five into the actual pipe, right. So far, so good, uh-huh; so that's the start pointer, and then it moves. "And the writer, is that the fifth block in the memory pool?" No. The writer writes more data, six, seven, eight, and that ends up going over the block size, so it ends up being part of this linked list. Now I call Flush; Flush says: I want to make this data visible to the consumer.
B
B
It reads up to five, and it says: I don't have a full frame for six, seven, eight, yeah, so I'm just gonna not read it. "So it advanced to five, right?" No, the start pointer has moved to six, because that is the end of what the reader saw last time, right, right, right. Now the writer is gonna write some more and then call Flush. So now the reader can see six, seven, eight plus what it saw before; now it has a full frame and it can parse it.
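That write-flush-read dance, in code. The eight-byte "frame" is an assumption made so the example is concrete; the `Pipe`, `WriteAsync` (which writes and flushes), `ReadAsync`, and `AdvanceTo` calls are the real System.IO.Pipelines API.

```csharp
using System;
using System.IO.Pipelines;
using System.Threading.Tasks;

class FlushVisibilityExample
{
    static async Task Main()
    {
        var pipe = new Pipe();

        // Writer puts 1..5 into the pipe and flushes: now visible to the reader.
        await pipe.Writer.WriteAsync(new byte[] { 1, 2, 3, 4, 5 });

        ReadResult result = await pipe.Reader.ReadAsync();
        Console.WriteLine(result.Buffer.Length); // 5: not a full frame yet

        // Consume nothing, but mark everything examined, so the next
        // ReadAsync only completes when *more* data arrives.
        pipe.Reader.AdvanceTo(result.Buffer.Start, result.Buffer.End);

        // Writer adds 6..8 and flushes again.
        await pipe.Writer.WriteAsync(new byte[] { 6, 7, 8 });

        result = await pipe.Reader.ReadAsync();
        Console.WriteLine(result.Buffer.Length); // 8: a full frame, parse it
        pipe.Reader.AdvanceTo(result.Buffer.End);
    }
}
```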
B
A
This is something where, like, definitely down at kind of the transport layer and stuff like that, that's useful. Are there other places for pipelines where, like, you know, your average app dev is gonna go "yeah, I want to use this", or is this more kind of a library, transport-level kind of thing?
B
No. So that thing you said about Span and streams is interesting. We actually want to expose the pipelines from the HttpContext, for the bodies of the request and the response. The idea there is that today we have a stream that we expose as the body: if you do HttpContext.Request.Body, right, that ends up being a Stream. Whoa, awesome.
B
If I show... let me find a sample. If I look at...
B
HttpContext.Request.Body: its type, this is a Stream, right. So what ends up happening is that, since ASP.NET Core is so layered (we have Kestrel, then we have middleware, then we have, like, MVC, right), we end up having to copy in our own layers, because we have kind of hidden the abstractions away from the user code. So what we would do is actually expose the body as a new property, right.
B
A new feature is a PipeReader or a PipeWriter, basically, for input and output, and with that, our middleware itself can take advantage of it. So, for example, MVC can now read the bytes directly from Kestrel and model bind that into objects. So it is kind of exposed to end users, but they don't have to use it themselves, because they live at a high level: if you do model binding in MVC, right, you don't have to do it.
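A sketch of what that could look like from middleware, assuming a hypothetical `PipeReader`-typed body property on the request (a plan being discussed at the time, not a shipped API in 2.1; the method takes the reader directly so the assumption stays visible):

```csharp
using System.Buffers;
using System.IO.Pipelines;
using System.Threading.Tasks;

// Hypothetical middleware helper: inspects the request body with zero
// user-code copies, given a PipeReader over Kestrel's own buffers.
public static class BodyReaderSketch
{
    public static async Task<long> CountBodyBytesAsync(PipeReader bodyReader)
    {
        long total = 0;
        while (true)
        {
            ReadResult result = await bodyReader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            total += buffer.Length;           // look at Kestrel's buffers in place
            bodyReader.AdvanceTo(buffer.End); // consume everything we saw

            if (result.IsCompleted) break;
        }
        bodyReader.Complete();
        return total; // body length, without allocating or copying a body buffer
    }
}
```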
B
You never see streams; you're doing objects, but you get the benefits, because if we can get access to those low-level APIs inside of middleware at that level, then we can write you super efficient parsers for JSON, for forms, for various things, without having to reallocate memory; that's there already for us. Okay, yeah, that makes sense, but the average dev won't have to use it for most things, 'cause it's basically...
B
B
A
A
A
B
Jim
ready.