From YouTube: Evolving Web Standards in Node.js - James Snell, IBM
The evolution of Node core standards for URL parsing, HTTP/2, better Unicode support, and other facets.
Hello, I'm James Snell. I'm IBM's technical lead for Node, and I'm also on the Node core team.
There was a slight mix-up, I think. The agenda says we're going to talk about some of the standards work we're doing; I thought I was going to be talking about HTTP/2. They're very closely related, and we can talk about everything, but I'm going to go through some of the efforts I've been making implementing HTTP/2 in Node core. This is not using any kind of libraries; this is building it into Node.

This is a "what's coming in the future" talk. A decision has not been made yet on whether HTTP/2 really is going to go into core; it's a very, very early conversation. It's something that's most likely going to happen, we just have to figure out how and when.
So this is very future-looking. One very important thing: saying "HTTP/2" repeatedly is darn near impossible, so the TTP is silent. I'm just going to call it H2, okay? And HTTP/1 will be H1. All right.
Now, with H1, requests and responses go one at a time over a connection, right?
You can pipeline these things by sending multiple requests over a single connection, but you have to wait for the responses to come back in order, right? So if you send a GET request and you're getting a 10 MB file back, and then immediately after that you send a POST and just want something simple back, that POST response is going to wait until that download is received. The other concern is if you are mixing GETs and POSTs: one is idempotent, the other is non-idempotent, and you can end up with all kinds of different issues.
If you're pipelining those types of requests over a single connection, it ends up leading to a significant performance bottleneck, right? H2 was specifically designed to deal with that bottleneck, and I'll explain that in just a minute.

So, okay, I'll explain it now. I constantly forget what order my slides are in, so I'll skip ahead, come back to it, and then realize, oh, I need to... So there's a little head-of-line blocking issue going on up here too. H2 switches to binary framing. This is the primary difference with H2 compared to H1: rather than sending a bunch of line-delimited text messages, it sends binary packets, where your data is split up into individual frames. What that basically allows you to do is multiplex multiple requests and responses over a single connection. The data can be broken up into discrete packets and sent; the packets cannot be sent out of order, there's no re-sequencing of the frames, but you can send multiple requests and multiple responses at the same time. That is the number one difference with H2.
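The framing model is simple enough to sketch. Per RFC 7540, every H2 frame begins with a nine-byte header: a 24-bit payload length, an eight-bit type, eight one-bit flags, and a 31-bit stream identifier. A minimal sketch in Node, as an illustration of the wire format, not core's implementation:

```javascript
// Sketch only: encode/decode the 9-byte HTTP/2 frame header (RFC 7540 §4.1).
// An illustration of the binary framing concept, not Node core's code.
const FRAME_TYPES = { DATA: 0x0, HEADERS: 0x1, SETTINGS: 0x4 };

function encodeFrameHeader(length, type, flags, streamId) {
  const buf = Buffer.alloc(9);
  buf.writeUIntBE(length, 0, 3);               // 24-bit payload length
  buf.writeUInt8(type, 3);                     // frame type
  buf.writeUInt8(flags, 4);                    // flags
  buf.writeUInt32BE(streamId & 0x7fffffff, 5); // 31-bit stream id, R bit = 0
  return buf;
}

function decodeFrameHeader(buf) {
  return {
    length: buf.readUIntBE(0, 3),
    type: buf.readUInt8(3),
    flags: buf.readUInt8(4),
    streamId: buf.readUInt32BE(5) & 0x7fffffff,
  };
}

// Frames belonging to different streams can be interleaved on one
// connection; the stream id is what lets the peer demultiplex them.
const h = decodeFrameHeader(encodeFrameHeader(16, FRAME_TYPES.HEADERS, 0x4, 3));
```

Because every frame carries its stream ID, frames from different requests can be interleaved freely on a single connection and still be reassembled per stream in order.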
There are a number of other things included here. Stateful header compression: what does this mean? When you send a bunch of headers in your request, typically in H1, if you look at it, most of those headers don't change. You're sending the same block of data every time you send the request.

Cookies are a good example of this, because those things don't change. And because of how these things are encoded, dates, for instance, take up 29 bytes per request if you're sending a date back and forth. That's a huge amount of data that's just lost, taken up by sending information that rarely changes or could be encoded in more efficient ways.
So what HPACK, the header compression mechanism in H2, does is use delta encoding: it sends just the differences between one request and the next. If I open a connection and send an initial request, it's going to have a set of headers, right? The next request I send will only carry the headers that have changed, and it'll assume that the other headers that were sent are still in effect.

You have to tell it, you know, give it some indexes and stuff, so it uses a couple of bytes per header, but it's not this huge bucket of information that doesn't change from one request to the next. That's a significant savings in bytes sent over the wire. It does, however, mean that we're going from H1 being a stateless protocol to H2 being a stateful protocol: the state tables for these headers are maintained. There are two for every connection, one outgoing, one incoming. The outgoing one, you get to choose how you use; with the incoming one, you pretty much have to store whatever the sender is telling you to store, because they're the ones making the decision about what to store. You can tell them how much you're willing to store, but they will be the ones that tell you what you actually have to store, within those constraints. So there is a significant amount of new state management.
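The delta idea is easy to see in a toy model. To be clear, this is not HPACK (real HPACK adds indexed static and dynamic tables and Huffman-coded strings); it just sketches the "send only what changed" behavior described above:

```javascript
// Toy delta encoder for headers: sends only headers that changed since the
// previous request on the connection. NOT real HPACK, which additionally
// uses indexed static/dynamic tables and Huffman-coded strings.
function makeDeltaEncoder() {
  const table = new Map(); // per-connection state, like HPACK's dynamic table
  return function encode(headers) {
    const delta = {};
    for (const [name, value] of Object.entries(headers)) {
      if (table.get(name) !== value) { // header is new or its value changed
        delta[name] = value;
        table.set(name, value);
      }
    }
    return delta; // unchanged headers are assumed still in effect
  };
}

const encode = makeDeltaEncoder();
// first request: everything is new, all three headers go out
const first = encode({ ':method': 'GET', 'user-agent': 'demo', cookie: 'a=1' });
// second request: only the cookie changed, so only it is sent
const second = encode({ ':method': 'GET', 'user-agent': 'demo', cookie: 'a=2' });
```

The `Map` here stands in for the per-direction state table: both peers must keep it in sync, which is exactly the new statefulness being described.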
The other thing is server push. A better name for this would be "assumed requests". What server push is: right now, you request a web page, and typically the web page is not just that one page. There's a lot of JavaScript, there are images, there are a lot of other resources that come along with it.

With server push, the server knows what additional resources the browser, the client, is going to ask for, so it'll just go ahead and automatically start pushing those to the client. It does this by assuming a request: it sends an initial heads-up saying, "hey, I'm assuming that you're going to ask for this", gives you a bunch of request headers, and gives you an opportunity to say "yes, go ahead and send it", or not. Then it starts pushing that response just as if it had actually received a request. It's handled as a distinct stream, just like anything else, but it's tied to the original request.
So the other thing is prioritization and flow control, and the number one question I always have to answer is: doesn't TCP already have prioritization and flow control? Yes, it does. This is a second layer, because we are adding multiplexing.

The designers of the protocol discovered early on that H2 would have to have its own separate prioritization and flow control in order to actually work with the way they're multiplexing everything. So it adds an additional layer of complexity that H1 currently doesn't have to deal with. Right now you just send a message and get the message back; here there's a lot more going on. And then there's protocol extensibility; I won't go into too much of that. We're using a binary framing protocol.
You can add new frame types, right? Every frame has a stream ID and a type, so you can add new types of frames for the stream. The end result of that is that you can actually design new protocols using this binary framing model; you're not actually limited to just the HTTP semantics. There are other things that you can do here.
There's a variety of things being looked at: adding checksumming to the protocol by adding new frames, adding some additional security layers, that kind of thing. So there are a number of interesting things there. The bottom line of all this is that H2 is not an evolution of H1. It's a completely new protocol. It'll still use port 80, it'll still use 443, but it is a brand new protocol, and that's something that causes quite a bit of confusion.
The confusion is not helped by the fact that you can serve H1 and H2 over a single connection. You can have a server stood up that will allow the client to decide how they're going to communicate: if they want to talk H1, then your server can just talk H1 with them. But over the same port, if a client comes in and says, "hey, I talk H2, that's what I want", the server can also communicate with that client over H2, all happening over port 80.

That makes for some very interesting things; interesting meaning very problematic, for intermediaries, because they have to be able to support this as well and be able to do multiple things. So there's a lot of confusion here, and what you'll end up with is servers that support only H2 over one port, other ones that support only H1 over port 80, and some negotiation going on. You'll want to look at the Alt-Svc, alternate services, spec to figure out some of that.
How do we go about actually getting this into Node core? To really understand that, let's take a look at how H1 is currently implemented. But the first thing we have to do is figure out: okay, why are we going to do this? Personally, I think the reasons are pretty obvious. I have my issues with the way H2 is designed, but I recognize that's kind of where things are going, so it's probably a good idea to get it into core.
So let's look at the way H1 works currently. We have libuv, and we have the net module, the TLS module, and the HTTP module; these are the things people are typically using. Inside the native bits of core, we have node_http_parser.cc and the http_parser library. libuv, the net module, and the TLS module are the parts that actually manage the connection. They don't know anything about HTTP; all they do is establish the connection and get that going when we want to create an HTTP request.
But we have this little thing in here, the http_parser. What http_parser does is take that input data and parse it out. It's very, very specific to the H1 message format, so it's a text parser, and it's a very complex text parser. I recommend everyone go take a look at it. If you look at it and can't figure out what's going on, don't worry about it; I've been in that code many, many times and I can still barely follow the parser.
So the HTTP module gets the data from the connection and passes it off to http_parser, which has some callbacks to come back and say: okay, we've received some headers, we've received some data, go do something with it. That gets passed back to the HTTP module, which carries out the user code for actually handling the requests, and then passes the result back up the connection and sends it back out. That's basically how the HTTP module works in Node.
H2 is a very different protocol, so we can't just take the existing code and easily make it also talk H2. What it requires is basically a new stack of code inside core that would sit in parallel with the H1 implementation. There are a couple of good things about this. One, it means we're not touching the H1 implementation, so we're not breaking backwards compatibility with anybody, so that should be awesome.
It's a similar story; the picture is almost identical to H1. We have libuv and the net module, we have this new http2 module, and we're using a library called nghttp2. I'm very happy this thing exists, because it means significantly less work for us to get this working in core. nghttp2 is perhaps the best library implementation of H2 available right now, and the API is fantastic; it's definitely well worth going to take a look at. It's a C API, very logical, very straightforward to use. But in order to get it working in Node, we need a little bit of glue code. So there's going to be a new C++ class in there that connects it into Node's API, how Node exposes data and works with libuv, but the flow is basically going to be the same.
The spec doesn't require it, but all of the main client implementations, Chrome and Firefox, refused to implement H2 without using a TLS-established connection. So while in Node we will support the ability to stand this up without TLS, meaning you'll be able to create a server that doesn't require TLS, for most practical purposes your clients are going to be expecting it and requiring it.
So we're still going to be using that. The data comes in, we pass it off to the http2 module, which hands it over to nghttp2. That library handles all of the state management for the header compression, and it handles all of the parsing of the binary frames; all of that state management is encapsulated within the library, so it makes it very easy for us. It doesn't do any of its own I/O, so we still use libuv to actually manage the I/O, we use the net module and the TLS module to handle those bits, and we just create a little bit more code to interface with nghttp2 and pass data back and forth. nghttp2 uses a callback model, which fits very nicely within core, to actually get that data back and interface with the user code.
Again, the API is pretty straightforward, and we can take a look at some of the code. Right before I left California, the demo was working fine; I get here this morning and it's segfaulting, so I think I updated something. So I can't actually show the demo I was going to show, unless you guys want to see some segfaults, which are always fun, but I'll show you some of the code.
It's very rough right now, very early. The goal is to try to get something experimental done by the Node 8 time frame, which is April 2017, so between now and then there should be a significant amount of development going into shoring this up. HTTP/2 has two layers of semantics; I kind of talked about this a little bit. There's the framing layer, and then there's the HTTP layer. The framing layer deals with the streams and the frames.
That is a distinct layer from the HTTP semantics. You can use it, based on how they've defined it in the spec, to send HTTP requests and responses, but that's actually not required. You can implement the framing layer independently, and there are some very interesting things you can do with that that have nothing to do with HTTP semantics.
The plan is to introduce two APIs for this. One deals with the framing layer, allowing developers to work specifically with the streams and the frames, and if they never want to actually do any of the HTTP stuff, they don't have to. You can use it in a number of interesting ways to expand your HTTP applications, but if there are other things you want to do with it, you'll be able to.
So that's something that's unique. The other thing it will allow you to do: if you don't think Node's HTTP implementation is good, you can write your own on top of the lower-level API. So hopefully it will enable the module ecosystem to do some more interesting things.
It won't be 100% fidelity. There are some things missing from HTTP/2 that were in H1. For example, H2 does not have a status message. On an H1 response you can specify a status code and a status message; the message was dropped from H2, it doesn't exist. All you can do is set a status code. So what about the API in H1 that allows you to set the status message?

Well, that's either going to be a no-op that does absolutely nothing, or it's just not going to exist in the H2 implementation. There are going to be small differences like that that could end up creating some little gotchas and bugs as you're trying to migrate. So we're still documenting exactly what those changes are going to be, trying to get an idea of how those models are going to vary, but there are going to be some variances.
Some of the other variances are going to be caused by the addition of new features like push streams. The current API has no concept of a push stream: how do you initiate it? How do you manage it? So we're going to have to introduce new APIs for dealing with those things, and as we're going through this, we're still very early in the conversation.
So if you have strong opinions on how that API should or should not work, there are going to be URLs here at the end where some of the conversations are happening. I would love for more people to get involved in the conversation.
So please, please bring new ideas, even if the idea is, "why are you doing this, just make it a module". Bring that; we need to hear that too. If there are very good reasons for us not to have this in core, then we need to have that conversation, right? Like I said, the final decision has not been made yet. Okay.
So what about the lower-level API? If you're familiar with how streams work and with the event emitter model, then this will be pretty straightforward. We're going to create a session and use events to basically say, okay, when do I receive a frame, and then be able to respond and deal with that. It's going to be a very familiar API, but it will allow you to deal with the lower-level semantics.
Okay, and you will be able to mix the two, so it's not either/or. If you want to use the higher level but do some more interesting things at a lower level, then you'll be able to do that as well. There are a couple of things that we are not doing, at least initially, just because of the additional complexity they require.
Initially, things like prioritization and flow control may not be directly exposed. We may get those APIs eventually, but we want to make sure the basic protocol implementation and the basic APIs are right before going into too much of this. Like I said, the protocol model is extensible; you can add new frame types. But you probably won't have an API initially for adding those types from the user's point of view, until we figure out everything else. It's just a matter of prioritizing the amount of work that's going to be involved.
Okay, and then protocol negotiation is kind of a sticky issue. We've got to figure out how all that's going to work, because there are variances in how all the various implementations are doing this right now, and in figuring out what all those best practices are. We'll get there, but it's a longer conversation.
If you have strong opinions, please let us know. Saying, "okay, just make it a native module out on npm" is a perfectly valid argument, and if there are more people arguing for that, and there's nothing we can say against that argument, then that's the way it needs to go. There's nothing that says we have to have this in core; we're working through all the details.
Most of the details are API. As for the implementation, I was able to get a basic implementation using nghttp2 within about four days. I'm not saying it was a good implementation; it was just working. All of the details are in how we actually expose the API: how do we do that in a way that's not going to break everybody? How do we do that in a way that's consistent with how the existing HTTP implementation works? That kind of thing. So there are a lot of details there, and of course testing, docs, examples.
All of this is going to take a significant amount of time. That's why we're looking at the Node 8, April time frame before having something that is, at the very least, experimental.

To get involved, look at the node-eps. If you're not familiar, that's "Node Enhancement Proposals"; the "s" is a plural, it's not part of the acronym. What these are, basically, are proposals for improving significant portions of the Node API and internal implementation.
This is where we have all the sticky conversations about how we actually make these improvements without breaking everybody, and there are a number of conversations happening there. Things related to ES6 modules are going on in the eps repository, the H2 stuff is happening there, and there's a variety of different conversations happening. So if you're very interested in the direction Node is going, the eps repository is definitely something to keep an eye on and to jump in and get involved with.
Okay, some very interesting conversations happening there. And then I created a repo; there's very little there yet, but I'm actually going to start putting the initial code in next week.
That's it, basically. I wanted to open it up for questions. I know on the agenda it said I was going to be talking about a lot of the other standards work that we're doing, so if there's interest there, I can talk through some of the additional things we're doing. So, any questions?
I'm sorry? Yes. So the question is: would we keep the H1 and H2 implementations in parallel? Yes, absolutely. The H1 implementation isn't going anywhere; it would still be there. Eventually we would have the option, with protocol negotiation, to actually stand up a server which can speak either one. How exactly that's going to work, I don't know yet. Okay.
Ballpark: at least an experimental version of it by April 2017 is where I'm targeting, ready to go with Node 8. Node 7 is going to be out in October and Node 8 out in April, so whether it will be bundled in Node 8 or whether it's going to be experimental in this other repo, I don't know yet; that conversation hasn't happened.
We haven't made that decision yet, but that should be when to look for something that's actually usable. So, okay, any other questions? Any questions about the protocol? We can go through some more of the details of the actual protocol itself, too. Okay, all right. Well, thank you very much.