From YouTube: 2017-01-12 - HTTP2 team meeting
B: I guess the first thing to discuss would be the rewrite of the internals that I did recently. It's much more efficient, taking advantage of the event loop and doing less per tick. Basically, it queues things up, and when the event loop goes, it processes all the data and triggers the callbacks, that kind of thing. So it's much more efficient. We're able to get a much higher throughput. Where things break down, of course, is over on the JavaScript side: the multiple levels of streams that we have between the request and response objects and the stream Duplex. So your idea of doing something that is more specific, more optimized for HTTP/2, and then doing the higher-level things later on, I think makes a lot of sense. Let's just get the performance working for that lower-level stream interface, you know.
A: From my point of view here, the fact is that we need to provide HTTP/2 for two reasons, I think. The first one is, of course, being able to write web applications; but also as a transport for microservices systems, via gRPC or other things that can be built on it. So it's critical that we get a very fast implementation. Now, all the boilerplate that is there for most of the HTTP/1 things and semantics is not really needed.
B: But I think that if we start on optimizing the Http2Stream and Http2Session classes and focus on those as much as we possibly can, keeping in mind the types of hooks that we're going to need for the higher-level interface: doing things like, you know, letting us specify that there's not going to be a body for a response, so that we can skip having to do any kind of data frame.
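A minimal sketch of the kind of hook being described, using invented names (framesForResponse, endStream are illustrations, not the actual API): when the caller declares up front that a response has no body, a single HEADERS frame with the END_STREAM flag can be emitted and the DATA frame skipped entirely.

```javascript
// Hypothetical sketch: deciding which HTTP/2 frames to emit for a
// response. If the caller signals up front that there is no body (an
// "endStream" style hint), we can send one HEADERS frame with the
// END_STREAM flag set and skip the DATA frame entirely.
function framesForResponse(headers, { endStream = false, body = null } = {}) {
  if (endStream || body === null) {
    // No body: HEADERS carries END_STREAM, no DATA frame needed.
    return [{ type: 'HEADERS', flags: ['END_STREAM'], headers }];
  }
  return [
    { type: 'HEADERS', flags: [], headers },
    { type: 'DATA', flags: ['END_STREAM'], payload: body },
  ];
}
```

For example, a 204 response would produce a single HEADERS frame, while a 200 with a body produces HEADERS followed by DATA.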
A: Make sure that, in fact, we can deliver greater performance. As a side note, if you are interested, I did a little prototype of something different, born from different needs, but it might be of interest for you: it's called syncthrough.
B: Yeah, you know, there's a couple of things here. What you're describing, I think, is absolutely the right thing to do. If we continue to view the Http2Stream class as a Duplex, it's writing directly into the native side, which has to have its own buffering. Also, as data is coming out, there are two levels of buffering: one is the data being pushed into the callback for notification, and then on the event loop it pushes it out into the Duplex instance.
B: There is the native code, the bridge layer between JavaScript and nghttp2. It implements buffering because of the way nghttp2 needs to multiplex the data. Basically, when you do a write on the Duplex, the _write and _writev methods do a write request into the native code. That puts the data into an internal buffer, where it sits until nghttp2 is ready to actually send it on the socket. Then, when the data is ready to be sent on the socket, it gets packaged into frames and passed to the socket implementation, which obviously has its own buffering.
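The write path just described could be sketched roughly like this (SessionBridge, writeRequest, and drain are invented names for illustration, not the real binding): writes from the JavaScript side queue in an internal buffer, and only when nghttp2 signals readiness does the data get framed and handed to the socket.

```javascript
// Rough sketch of the described write path, with assumed names.
class SessionBridge {
  constructor(socket) {
    this.socket = socket;   // anything with a write(frame) method
    this.pending = [];      // internal buffer: data waiting on nghttp2
  }
  // Called by the JS-side _write/_writev: just queue the data.
  writeRequest(streamId, chunk) {
    this.pending.push({ streamId, chunk });
  }
  // Called when nghttp2 is ready to send: package queued data into
  // frames and pass them to the socket (which buffers again itself).
  drain() {
    for (const { streamId, chunk } of this.pending) {
      this.socket.write({ type: 'DATA', streamId, payload: chunk });
    }
    this.pending = [];
  }
}
```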
A: Yeah, but wait: we cannot avoid that buffering from happening at all, the buffer that sits in between, you know, the Http2Session and nghttp2. For me, the most important piece of this is to try to avoid as much buffering, multiple levels of buffering, inside Node.js as we can. Each level of buffering hurts the throughput, and there's a lot of code in there, especially in the streams machinery. So, in order to deliver decent performance, we might want to say we just inherit from Stream: Http2Stream inherits from Stream, not from Duplex. Right now I'm not saying we should do that; I'm just saying it might be an option.
B: The only issue there, then, would be being able to do the backpressure. Right now, the internal buffering that occurs doesn't implement any backpressure; it just takes whatever it's given, relying on Duplex to provide the backpressure necessary. If we implemented some kind of backpressure mechanism in that native buffering, on the native side, then we could have Http2Stream extend from Stream and eliminate a great deal of that code.
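A rough sketch of the kind of backpressure signal being proposed (BackpressureBuffer and its methods are assumptions, not real internals): instead of relying on Duplex, the buffer itself reports whether the writer should pause, mirroring the boolean return of stream.write() and the 'drain' event.

```javascript
// Illustrative backpressure mechanism for an internal buffer.
class BackpressureBuffer {
  constructor(highWaterMark = 16) {
    this.highWaterMark = highWaterMark;
    this.buffered = 0;
    this.chunks = [];
  }
  // Returns false once buffered bytes reach the high-water mark,
  // telling the caller to stop writing until flush() happens.
  write(chunk) {
    this.chunks.push(chunk);
    this.buffered += chunk.length;
    return this.buffered < this.highWaterMark;
  }
  // Empties the buffer (e.g. once nghttp2 has consumed the data) and
  // reports whether a paused writer may resume (a 'drain' analogue).
  flush() {
    const hadPressure = this.buffered >= this.highWaterMark;
    this.chunks = [];
    this.buffered = 0;
    return hadPressure;
  }
}
```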
B: There's a native Http2Session class, right? It provides the bridge to nghttp2, and it inherits from StreamBase. Now, what it does is take data from the JavaScript Http2Stream class, or, sorry, no, from the Http2Session class. The Http2Session class on the JavaScript side is not a stream implementation; it's just a class. And Http2Stream is a purely JavaScript class. Now, before, there used to be a native one.
B: Now, there is buffering internally on the native side, because we have to: we can't just write straight out to the socket or anything else; we have to hold on to the data until nghttp2 is ready to write it out. So that's Http2Stream. Then there's a higher-level API with the request and response objects, because the existing HTTP/1 implementation has separate request and response objects. The only reason I'm introducing those here is to keep API parity.
A: So, okay, from an architectural point of view: okay, now I understand what's going on in there. So I think we should probably just get this merged at this point, and maybe start decoupling the request and response, the current application-level parts, from here, and move them into their own thing so that the two don't conflict. Maybe into internal, or whatever, I have no idea; but it's better to separate things. Then we can focus on getting a very fast Duplex thing, and in parallel I can probably start getting down to reducing those numbers on the JavaScript side for Readable and Writable. Okay.
B: Okay, so I'll do that. Right now I moved the implementation out to an http2.js under lib in order to change that. But what I'll do, if we're going to separate these out into separate files, is create a new folder under internal, internal/http2, and have a separate file there for just the stream and another one there for the high-level interface. Then we can import those both in lib/http2.js and export the necessary API there.
B: The write side is pretty straightforward. On the read side, as data is received on the socket, it is passed to nghttp2, which goes off and fires various callbacks: callbacks for, like, when a header block is received, or when a data frame is received. Those callbacks are queued up, and on each tick of the event loop those callbacks are invoked. So if you're reading one connection, you could have, you know, quite a few callbacks being called, depending on how many concurrent streams you have, but they carry very small chunks of data. In the StreamBase, what basically ends up happening is that when a chunk of data is received, that data is pushed to the StreamBase and out to the Http2Stream class, which just forwards it on to the HTTP/2 request. It's very efficient.
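The read path above might be sketched like this (CallbackQueue is a stand-in name, not the actual binding): callbacks fired by nghttp2 while parsing socket data are queued, and a single event-loop tick invokes the whole batch.

```javascript
// Sketch of the described read path: queued parser callbacks are
// drained once per event-loop tick rather than run immediately.
class CallbackQueue {
  constructor() { this.queue = []; }
  // nghttp2 parsing callbacks (header block received, data frame
  // received, ...) land here instead of running right away.
  enqueue(cb) { this.queue.push(cb); }
  // Invoked once per event-loop tick: run every queued callback and
  // report how many were processed.
  runTick() {
    const batch = this.queue;
    this.queue = [];
    for (const cb of batch) cb();
    return batch.length;
  }
}
```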
A: It seems, then, that it's really pretty straightforward. So, on the HTTP/1 side, the incoming message, the request, is a Readable, while the outgoing one is not a Writable; it's just a Stream. Because of that, it doesn't do any buffering, so it can just leverage the buffer that is underneath. It might be an approach that we can follow at that level as well, right?
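The pattern described here could look something like this (UnbufferedWriter is a hypothetical name for illustration): a writer that keeps no buffer of its own and simply propagates the backpressure of whatever sink sits underneath, in the spirit of the HTTP/1 outgoing message.

```javascript
// Sketch of a writer with no internal queue: delegate directly to the
// underlying sink and surface its backpressure signal to the caller.
class UnbufferedWriter {
  constructor(sink) { this.sink = sink; } // sink: { write(chunk) -> bool }
  write(chunk) { return this.sink.write(chunk); }
}
```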
B
Okay,
sure
about
that
there's
a
lot
yeah
go
ahead,
yeah,
so
yeah
the
flow
of
data
spree
is
really
straightforward.
The
only
thing
that
really
complicates
it
is
the
fact
that
we
just
can't
write
directly
out
to
the
socket.
We
have
to
wait
so
there
has
to
be
buffering.
The
trick,
though,
is
seeing
making
sure
that
we
can
clear
the
buffer
as
quickly
as
possible
on
every
event,
loop,
every
tick
of
the
event
loop
and
then
reducing
the
number
of
times
that
we
have
to
buffer.
B: On the client side, I need to have a follow-up conversation with our friends at Google. They had a couple of folks that were going to be looking at that, but things have been so much in flux that I think they're kind of waiting to see when it settles down. So I was going to reach out to them next week and see what they want to do there.
A: We need an actual client so that we can do the testing ourselves, so the thing can be self-hosted, because at this point it is hard for other people to contribute and join the work. The moment we can say, look, one end talks to the other, we can have more people collaborating.
B
One
possible
thing
that
we
could
do
as
far
as
testing
and
benchmarking.
We
could
look
into
whether
we
can
build
the
ng,
HP,
client
and
H
to
load
tools
as
and
run
those
as
part
of
the
test
suite.
So
when
ng
builds
right
now,
it's
only
the
library
rights.
None
of
the
tools
we
could
build.
The
tools
and
use
H
to
load
to
do
benchmarks
in
and
ng
is
a
is
a
client
yeah.
B: We would just need to add the full nghttp2 distribution to the dependencies; right now it's just the lib classes. So it would increase the footprint of the dependency in the tree, and we would have to update the gyp file so that it would actually build those things. But that would be an option too, and I think we'd at least be able to get started on building tests, and then, when a programmatic client is actually developed...
A
Yeah
at
the
test
that
the
time
needs
to
be
too,
if
is
so.
If
this,
the
current
plan
for
us
is
to
ship
this
wing
next
node
the
next
load
major.
We
need
to
have
a
client's.
What
so
it's
a
if
our
friends
at
Google
at
don't
have
time
or
I'll
jump
in
yeah
we
do.
We
will
need
to
make
time
till
happen,
get
this
done,
because
it
is
critical
for
for
a
for
psycho
system
to
11
h,
individual,
an
implementation
that
is
extremely
performant.
Energy
is
so
so.
B: Okay, I'll...