From YouTube: GitHub Quick Reviews
Description
00:00:00 - Approved: HTTP2: Create additional connections when maximum active streams is reached https://github.com/dotnet/runtime/issues/35088#issuecomment-653248184
B: Okay, so this is adding a property to SocketsHttpHandler, and I believe the proposal has been expanded to also add a property to WinHttpHandler. The issue we're solving is that HTTP/2 has a setting called max concurrent streams. This is configured on the server, and it is the upper limit of active streams on a single TCP connection.
B: Usually it's about 100, but it can be a bit higher. The problem with this setting today is that HttpClient will always open a single connection to a host, so for a given host name there'll be a single connection, and as soon as you hit that limit, when you call SendAsync or one of the methods that causes a send, like GetAsync, that method will await and form a first-in, first-out queue, waiting for a currently active call to complete before it continues. You have to do this on the client: if you didn't do this on the client and you attempted to send the request to the server anyway, the server would error. It would send you a message telling you that you're making too many requests, and for that reason the client tracks the number of active streams and forms that queue.
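A minimal sketch of the queuing behavior just described, assuming .NET Core 3.0+ APIs (the URL is a placeholder):

```csharp
// With default settings, HttpClient opens a single HTTP/2 connection per
// host. Once the server's max concurrent streams limit (commonly ~100) is
// reached, remaining calls wait in a FIFO queue inside the handler
// instead of failing.
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

var client = new HttpClient { DefaultRequestVersion = HttpVersion.Version20 };

// Start 150 concurrent requests: roughly the first 100 get streams on the
// shared connection; the rest queue until an active stream completes.
var tasks = Enumerable.Range(0, 150)
                      .Select(_ => client.GetAsync("https://example.com/api"));
await Task.WhenAll(tasks);
```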
B: So there are a couple of problems with this. One: it's quite common to just have a single HttpClient instance within an app. If you're in a client app, say some thick client, having 100 concurrent requests is probably quite a lot. But if you're doing something like one microservice talking to another microservice, and all the requests are going through a single HTTP connection from a single HttpClient, you could easily hit that 100 limit. And it's a little worse for gRPC, because gRPC introduces the concept of long-lived streams. You could create 100 streams that you expect to last for a long time, and then any subsequent calls would just be queued and would never go anywhere, never progress, until one of those long-lived streams was closed. An additional problem is that there's very little feedback about this.
B: It's the client, because the client is the one creating the connection. I think this max concurrent streams setting makes more sense in the context of a browser: you have hundreds of browsers talking to one server, and for each of those browsers, 100 concurrent streams to download images and so on is probably fine. But when you have one server talking to another server, where all the activity of a server application is going through that one TCP connection, then 100 concurrent requests isn't very many, right?
B: I think this setting was probably thought of mainly in the context of browsers. There's no reason a browser should start opening multiple connections to download one webpage. But in the context of a server talking to another server, where all the HTTP activity is just between one client and one server, multiple connections do start making sense. As for examples of where we've seen other clients get around this issue: Go will actually do this by default.
B: When you exceed that limit, it will just create a new connection for you. Other clients tend to require opt-in: WinHTTP on Windows, I believe, has a flag which you can enable to create additional connections, and I think I've seen Java clients with a setting to create additional connections. So what we're proposing as a solution is a setting which will allow HttpClient to create additional connections when the limit is reached, rather than pausing and forming a first-in, first-out queue.
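For reference, the opt-in that eventually shipped in .NET 5 is a boolean on SocketsHttpHandler (and WinHttpHandler); a minimal sketch, assuming that API:

```csharp
// Opt in to additional HTTP/2 connections once the server's max concurrent
// streams limit is reached, rather than queuing further requests.
using System.Net.Http;

var handler = new SocketsHttpHandler
{
    // Default is false, which preserves the single-connection queuing behavior.
    EnableMultipleHttp2Connections = true
};
var client = new HttpClient(handler);
```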
C: This setting is essentially a workaround for servers that maybe aren't configured correctly or aren't configurable. So maybe you have a microservice API server that is limited to a hundred maximum streams, and you either don't know that it's a problem or you can't change it. This is working around that by having HttpClient open up multiple connections.
B: I think if you had tens of thousands of concurrent requests on a single TCP connection, concurrency issues alone would eventually hurt throughput, so I think having the ability to open multiple TCP connections is a better option than trying to send everything over one connection.
D: Especially in Kestrel, we just run every request on its own stream, and that hurts performance with that design. Or sorry, we run each request on its own thread; I don't know if I said stream or thread, but yeah, that does cause concurrency issues when there are too many streams on one HTTP/2 connection.
B: So the property we're talking about is max HTTP/2 connections per server; when that's one, that would be the current behavior. We're not talking about changing the behavior of HTTP/2 with HttpClient by default; the current behavior would still be the default. When you increase this, you could create up to that number of connections.
C: That I'm not sure of. They do have a connection limit for HTTP/1; I'm not sure why they didn't do one for HTTP/2. It's probably because you get so many requests out of one connection that you would really have to be doing something crazy to open up a problematic number of connections. Mm-hmm.
C: So HTTP/3 goes over UDP, which causes a much different load on things like network hardware and firewalls. To me it makes sense to allow the user to configure both of them separately, but I could see that if we were going for maximum ease of use and the simplest API, combining them is an option.
A: From all we're hearing about HTTP/3 so far, it seems very different from what we currently have, including HTTP/2, and it seems hard to think about what it does to our API surface. I mean, HttpClient was a good API, but it's also, you know, not super young anymore, and I wouldn't be shocked if we find ourselves writing a brand new API for that altogether. So I would kind of say: assume that this is HTTP/2 for now, and deal with HTTP/3 if it comes along.
C: We're also not the HTTP/3 experts in the room right now, but we actually have an implementation in HttpClient and in Kestrel today, so the current APIs are fine. It's really just about resource usage. For me, HTTP/3 going over UDP and not having head-of-line blocking makes me think it should just be a separate setting. Yeah.
B: So I've got a question about .NET Standard and .NET 5. Currently the gRPC client targets .NET Standard 2.1 and it uses HttpClientHandler; you can see the gRPC usage currently on the screen. What we would probably do in gRPC is create a handler ourselves and configure it so there is no limit, for people who are using gRPC and doing these long-lived streams.
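A hypothetical sketch of that internal configuration, assuming the Grpc.Net.Client package (GrpcChannel, GrpcChannelOptions.HttpHandler) and the SocketsHttpHandler property that shipped in .NET 5 (the address is a placeholder):

```csharp
// The gRPC client could create and configure the handler itself, so
// long-lived streams beyond the server's limit don't queue forever.
using System.Net.Http;
using Grpc.Net.Client;

var channel = GrpcChannel.ForAddress("https://localhost:5001",
    new GrpcChannelOptions
    {
        HttpHandler = new SocketsHttpHandler
        {
            EnableMultipleHttp2Connections = true
        }
    });
```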
B: New connections will be created as needed. But that brings up the question of how we know that SocketsHttpHandler is going to be available and is the right one to use compared to HttpClientHandler. Will there be some kind of boolean test to check whether SocketsHttpHandler is the right thing to use? I'm just trying to figure out what a future version of the gRPC client that uses this would look like.
A: So that means you would maybe just take an HttpMessageHandler that the consumer can customize, but if they don't give you one, then you would manufacture one. Basically that means you have two options. One is to multi-target: in a binary for .NET 5, you directly new up SocketsHttpHandler, setting this property, and you would still have a .NET Standard reference assembly. That means for anybody who consumes you from .NET Standard, the API surface is the same, and then they deploy for .NET 5 to just get the better behavior. The other option is that you still have a single binary targeting .NET Standard, 2.1 in your case, and you use reflection. That's also possible: you know the names of the types, you know the names of the properties, so you would effectively do the typical light-up that we do, where you probe for the type, probe for the property, and then maybe use a delegate to cache the invocations so you don't have to go through reflection every single time. Those would basically be the two patterns you would be using. Multi-targeting might be easier, because it's not crazy reflection, it's all static, and your consumers don't need to know about this; it just works magically for them. Yeah.
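A sketch of the two patterns just described, assuming the property shipped as EnableMultipleHttp2Connections on SocketsHttpHandler in .NET 5; the factory methods themselves are hypothetical:

```csharp
using System;
using System.Net.Http;

static class HandlerFactory
{
    // Pattern 1: multi-targeting. The .NET 5 build references the new API
    // statically; the .NET Standard 2.1 build falls back to HttpClientHandler.
    public static HttpMessageHandler Create()
    {
#if NET5_0_OR_GREATER
        return new SocketsHttpHandler { EnableMultipleHttp2Connections = true };
#else
        return new HttpClientHandler();
#endif
    }

    // Pattern 2: reflection light-up from a single .NET Standard binary.
    // Probe for the type and property; a real implementation would cache the
    // probe result (e.g. in a delegate) instead of reflecting on every call.
    public static HttpMessageHandler CreateViaReflection()
    {
        var type = Type.GetType("System.Net.Http.SocketsHttpHandler, System.Net.Http");
        var prop = type?.GetProperty("EnableMultipleHttp2Connections");
        if (type != null && prop != null)
        {
            var handler = (HttpMessageHandler)Activator.CreateInstance(type);
            prop.SetValue(handler, true);
            return handler;
        }
        return new HttpClientHandler();
    }
}
```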
A: You see, with Blazor, and I just played with it a little bit over the last week, there's server-side Blazor and client-side Blazor. My mental model is probably not the best one, but the way I understand it is that we don't have a TFM that's specific to Blazor, but there will be a RID that is specific to it, so their own implementation can totally have a binary that is different from what the client would get in Blazor, right?
A: The actual thing that you send to the browser to execute would basically do the right thing and go through the fetch API, but logically they would both be targeting the same TFM, both the client and the server. So basically you would do the differentiation by RID-specific binaries. Okay.
A: Yeah, something like that. The downside is that you don't generally want to do that for other people's projects, because targeting RIDs is a pain, given that we basically have no tooling for it. I think in your case you would get away with just doing the normal multi-targeting by TFM, because you don't really care whether you're in the browser or not, I suppose, right?
E: It's actually a related question, based on the fact that you don't necessarily have control over what handler you're using. Is there ever a scenario where a caller of HttpClient would want to know whether the request has been dispatched immediately or whether it's in a queue waiting to be dispatched? Is this important for your scenarios there?