Description
libuv is what gives Node.js its event loop and cross-platform asynchronous I/O capabilities. This talk explains what libuv is all about, how it's used by Node.js, and what the libuv project has been working on recently.
So the picture in the top left is the actual libuv logo, and then things just kind of go off the rails from there a little bit: we have a dinosaur riding a unicorn, and for some reason a Chicago Bears football player is on a dinosaur, but yeah.
So what I'm talking about is libuv, the platform abstraction library that Node.js sits on top of. libuv is written in C, not C++; it's actually, I believe, C89, so it runs just about anywhere.
All right, so libuv is used by node, obviously, but it actually has a number of other really big consumers: a language called Julia, Neovim, CMake, and a bunch of others. At the bottom of the screen here you will see a cartoon head. That is Saúl. He is another one of the libuv collaborators; that's his GitHub icon. He said this really nice quote to me one time, and I wanted to attribute it correctly and not take credit for the quote myself, but it basically is:
"We write the #ifdefs so you don't have to." So if you look inside the Node.js C++ code base, there is some branching, some #ifdefs, based on what platform you're executing on, but it's really not that bad. If you then dive down into the libuv source code, it's a whole other story. There are #ifdefs all over the place; there are actually two different source trees, one for Windows and then one for everything else.
So there's a lot of branching logic there, and that kind of gives a nice, consistent API to people who want to build on top of libuv. And, as I said, it is a cross-platform C library, so we support a large number of operating systems, some more than others: we have a three-tiered support system.
It might not necessarily be tested in the CI, and we'll try, to the best of our abilities, to make sure that it never breaks. I'm not really sure why we have the distinction between tier one and tier two, because I do believe that all the things in tier two are currently tested in the CI. And then tier three is going to be community-maintained platforms: that's going to be Android and IBM i, although I think they're currently in the process of trying to add some higher-tier support right now, and then just some other random platforms.
A lot of times people will show up with platforms that I've never even heard of, and they'll start trying to add #ifdefs into the code and see how it works. So as long as it's not too intrusive into the codebase, we're usually okay with that, but we can't really make any promises that it won't break, because we're not testing it anywhere.
So, some of the features that come from libuv. The event loop, obviously, is a really big one in node. TCP sockets, which in node basically translate to the net module. We have DNS resolution, as in the dns module; a lot of those system calls are from a library called c-ares, which I'll talk about in the next couple of slides, but libuv also implements some stuff there.
UDP sockets are going to be basically node's dgram module. File watching and file system operations: just about everything in the fs module is going to go through libuv. Child processes and threads, which obviously translate to the child_process module and worker threads. Then we have other things like synchronization primitives, mutexes and things like that, and we also offer a high-resolution clock. So if Date.now() is just not accurate enough for you, you can use process.hrtime() inside of node to get a little more high-resolution timing.
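Underneath process.hrtime() is libuv's monotonic clock; here's a minimal sketch of using it directly from C (my own example, not code from the talk):

    #include <stdio.h>
    #include <uv.h>

    int main(void) {
      /* uv_hrtime() returns nanoseconds from an arbitrary, monotonic
         starting point; it's for measuring intervals, not wall-clock time. */
      uint64_t start = uv_hrtime();
      /* ... do some work here ... */
      uint64_t elapsed = uv_hrtime() - start;
      printf("elapsed: %llu ns\n", (unsigned long long) elapsed);
      return 0;
    }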
And I want to talk a little bit about the high-level architecture of libuv. I like to think of this as broken up into two rows. The top row, which has the network I/O all the way across to the file I/O, DNS, and user code, is more of the public-facing API, what users of libuv are going to consume. And then the bottom row, with the IOCP and thread pool and things like that, is more of the guts of libuv.
One of the things that's interesting on the bottom row: IOCP is basically how we do I/O polling on Windows, but we also have things like epoll, kqueue, and event ports, so libuv will basically pick the best primitives, I guess, for whatever operating system you're running on. kqueue will be used on the BSDs and the Mac, epoll is used on Linux, and then event ports are Solaris.
TTY handles are for if you're dealing with your terminal, timers, and things like that. We also have a couple of different types of handles that are used for interacting with the event loop: something called an idle handle, which is actually poorly named because it runs on every iteration of the event loop, so there's not really anything idle about it; prepare and check handles, which run before and after libuv does its I/O polling; and then async handles, which can be used to do things like wake up the event loop if it's sleeping.
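As a quick illustration of one of those handle types, here's a minimal idle-handle program I've sketched (not code from the slides):

    #include <stdio.h>
    #include <uv.h>

    /* Despite the name, this runs on every iteration of the event loop. */
    static void on_idle(uv_idle_t* handle) {
      static int count;
      printf("idle tick %d\n", ++count);
      if (count >= 5)
        uv_idle_stop(handle);  /* once stopped, nothing keeps the loop alive */
    }

    int main(void) {
      uv_idle_t idler;
      uv_idle_init(uv_default_loop(), &idler);
      uv_idle_start(&idler, on_idle);
      return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }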
And then handles have a concept of being active. If a handle is active, it'll actually keep the event loop alive. So, for example, in node, whenever you start a server, if you just call server.listen() and nothing else, you'll notice that node doesn't exit. That's because the event loop sees that there's at least one active handle remaining, and it keeps the event loop open. Then there's also an operation called unreffing, and the inverse of that is reffing. When a handle is created, it's in a state of being referenced, and that is what will keep the event loop alive; but if you unref it, then the event loop no longer considers it as something that will keep your application open. These are actually exposed inside of node.
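At the libuv level, that ref/unref behavior looks roughly like this (an illustrative sketch of uv_unref(), not node's actual code; node exposes the same idea as, for example, timer.unref()):

    #include <uv.h>

    static void on_timer(uv_timer_t* handle) {
      /* never fires in this sketch; the loop exits first */
    }

    int main(void) {
      uv_timer_t timer;
      uv_timer_init(uv_default_loop(), &timer);
      /* An active 60-second timer would normally keep the loop alive. */
      uv_timer_start(&timer, on_timer, 60000, 0);

      /* Unref it: the loop stops counting it, so uv_run() returns
         immediately instead of waiting out the timer. */
      uv_unref((uv_handle_t*) &timer);

      return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }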
So, in addition to handles, we also have something called requests. I like to think of handles as more of an object, whereas a request is more of a function or a method. I say function or method because sometimes they are involved with a handle and sometimes they're kind of their own standalone thing. These are typically shorter-lived operations: things that happen when you're doing file I/O, when you're doing DNS lookups, or when the user passes in their own custom work that they want to execute in the thread pool.
That's going to be more of a request type of operation instead of a handle, but like handles, they can also keep the event loop alive. So, for example, if you start node and you do fs.readFile(), it's not going to terminate until that read operation is complete, if that's the only thing that's happening. If the request weren't something that could keep the event loop open, you would call fs.readFile() and then the program would just exit immediately.
So if you're ever curious why it is that node doesn't exit in some cases, it's usually because there's a request or a handle somewhere. And then one of the more, I guess, famous things that we get out of libuv is the thread pool. The whole point there is to move computations off of the main thread. JavaScript as a language, with the exception of workers, which are relatively new, is single-threaded, and in a server application you're probably going to have a lot of things going on.
At the same time, you could have hundreds or thousands of requests being processed simultaneously, and if everything ran on just the one main thread, things would slow down pretty quickly. So we use the thread pool to offload some work onto worker threads. One common misconception is that everything runs in the thread pool; that's wrong.
Only file I/O, DNS lookups (so getaddrinfo and getnameinfo), and custom work that a user might explicitly put into the thread pool: those are the only things that actually run on the thread pool. And by default there are four worker threads in the thread pool, but you can actually control that with an environment variable called UV_THREADPOOL_SIZE. That's propagated up through node too, so if you start node with UV_THREADPOOL_SIZE=124, that's how many threads you'll spawn.
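"Custom work" at the libuv level means uv_queue_work(); here's a rough sketch I've put together (the work itself is just a stand-in):

    #include <stdio.h>
    #include <uv.h>

    /* Runs on one of the thread pool's worker threads. */
    static void do_work(uv_work_t* req) {
      printf("crunching on a worker thread\n");
    }

    /* Runs back on the event loop thread once the work finishes. */
    static void after_work(uv_work_t* req, int status) {
      printf("done, status = %d\n", status);
    }

    int main(void) {
      uv_work_t req;
      uv_queue_work(uv_default_loop(), &req, do_work, after_work);
      /* The outstanding request keeps the loop alive until after_work runs. */
      return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }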
Unless you actually need to do this, though, you should probably be careful, because more threads are not always better. If you don't have enough hardware to keep up with all the threads, they can actually compete with one another, and all the context switching in between them can actually slow your application down.
So this picture is taken directly from the libuv documentation; it basically explains how the event loop works. Every tick through the event loop, we calculate the loop time, so we have some reference for what time it is; this is kind of an expensive operation, so we cache it at the beginning. Then we're going to check: is the loop alive or not? And by "is the loop alive" I mean: are there active handles and requests that are outstanding?
If there are none, then the event loop can exit, and in node that will propagate to the process exiting. But if there is still work to be done, there's a number of steps that libuv goes through. So first it'll look and see if there are any timers that are due: if you've called setTimeout() or any of those, are they ready to be processed? From there it goes on to pending callbacks; these are going to be, in Node.js terms, your callbacks, pretty much.
We pass the functions down to libuv as well, so: are there any callbacks that are ready to run? Next it'll process idle handles. These are the things that I said before have kind of a bad name, because they get processed every time through the event loop. And then we do something called prepare handles. This is basically "we're about to go into polling for I/O"; prepare handles give you kind of a hook into the event loop
if you want to do anything right before polling for I/O. So then we move into the actual I/O polling, and when we come out of that, we have check handles; that's kind of the inverse of the prepare handles, so they give you a good way to hook into the event loop on the other side. Then, finally, we execute any close callbacks that are outstanding, and we loop all the way back up to the top, compute the time again, and basically start from scratch. That's basically one tick of the event loop.
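To make the prepare/check hooks concrete, here's a tiny program I've sketched that brackets the poll phase (again, my own illustration, not from the talk):

    #include <stdio.h>
    #include <uv.h>

    static void on_prepare(uv_prepare_t* h) { printf("about to poll for I/O\n"); }
    static void on_check(uv_check_t* h)     { printf("back from polling\n"); }
    static void on_timer(uv_timer_t* h)     { printf("timer fired\n"); }

    int main(void) {
      uv_loop_t* loop = uv_default_loop();
      uv_prepare_t prep;
      uv_check_t chk;
      uv_timer_t timer;

      uv_prepare_init(loop, &prep);
      uv_prepare_start(&prep, on_prepare);
      uv_check_init(loop, &chk);
      uv_check_start(&chk, on_check);

      /* Give the loop some real work so it has a reason to poll. */
      uv_timer_init(loop, &timer);
      uv_timer_start(&timer, on_timer, 100, 0);

      /* Unref the hooks so only the timer keeps the loop alive. */
      uv_unref((uv_handle_t*) &prep);
      uv_unref((uv_handle_t*) &chk);

      return uv_run(loop, UV_RUN_DEFAULT);
    }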
This is all that I'm really going to cover on the event loop, but if you're interested in more about it, I would recommend Bert Belder's talk from Node Interactive 2016. It is, you know, a few years old now, but the structure of the event loop hasn't changed, so the information there is still relevant.
So next: I've talked a little bit about how libuv works; now I want to talk about how it fits into Node.js. At the very top of this diagram, in yellow, is your application's JavaScript code. That's going to call down into node's standard libraries, so the fs module, dns, child_process, all of those; that's going to be the second layer of yellow there. From there it's going to call down into the purple layer, C++: that's the binding layer.
It's really ugly code that kind of interfaces between, you know, JavaScript and V8 and libuv. And then below the binding layer, I've kind of listed some of the major libraries that are part of node. I put libuv on the left by itself because it's what we're talking about here, but in reality I would say V8 is the biggest dependency and then probably libuv. But you can see some of the other dependencies here too. c-ares is a DNS resolver, and node actually has two different ways to do DNS lookups.
There is the way that goes through c-ares, which is always going to make a network request, and then there is the one that libuv implements, which is going to be dns.lookup() in node. That actually uses the system resolver, so it's going to use the same lookup mechanism as ping and any other applications you might be running on your computer.
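The system-resolver path in libuv is uv_getaddrinfo(), which runs getaddrinfo() on the thread pool; a rough sketch (my own, and it assumes the first result is IPv4 for brevity):

    #include <stdio.h>
    #include <uv.h>

    static void on_resolved(uv_getaddrinfo_t* req, int status, struct addrinfo* res) {
      char addr[64];
      if (status < 0) {
        fprintf(stderr, "lookup failed: %s\n", uv_strerror(status));
        return;
      }
      uv_ip4_name((struct sockaddr_in*) res->ai_addr, addr, sizeof(addr));
      printf("resolved to %s\n", addr);
      uv_freeaddrinfo(res);
    }

    int main(void) {
      uv_getaddrinfo_t req;
      /* The lookup happens on the thread pool; the callback runs on the loop thread. */
      uv_getaddrinfo(uv_default_loop(), &req, on_resolved, "nodejs.org", "80", NULL);
      return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }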
One thing to note about that: until, I don't know, about a year ago, it was actually possible that if you issued a bunch of dns.lookup() requests, they would go down into the thread pool, and so if you were doing, like, five, six, seven, eight DNS lookups, it could actually block other things in your application that were using the thread pool from running. So people would issue a bunch of DNS requests and then start doing file I/O, and they wouldn't understand why their file I/O wasn't working.
It's because the threads were tied up with DNS lookups. So that's where it would be a good use case to use c-ares, because c-ares doesn't go through the thread pool. But either way, that's no longer the case: we've now, inside of libuv, started to distinguish between different types of thread pool operations to make sure that they don't step on each other's feet too much. And then, finally, at the bottom is just the operating system; libuv calls out to that.
So on this slide, Saúl is back to talk about the onion architecture, where basically the more layers you peel away, the more you cry. The reason for this is that inside of node you might have used something like net.Socket, which has a nice JavaScript API: it's built with streams and is just fairly easy to use. But if you look closer at the source code, it references something called a TCPWrap, which is in purple, so it's one of those nasty binding-layer objects.
So that is written in C++; it's a lot less nice to work with. And then that actually wraps something called a uv_tcp_t, which is a libuv type written in C for interacting with sockets. And then inside of libuv, we actually wrap that in an OS-specific handle, so, you know, a Windows socket or a UNIX socket, things of that nature.
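For reference, the uv_tcp_t layer in the middle of that onion looks like this when used directly (an illustrative fragment, not node's actual TCPWrap code):

    #include <uv.h>

    static void on_connection(uv_stream_t* server, int status) {
      /* A real server would uv_accept() here and start reading. */
    }

    int main(void) {
      uv_tcp_t server;
      struct sockaddr_in addr;

      uv_tcp_init(uv_default_loop(), &server);
      uv_ip4_addr("0.0.0.0", 8000, &addr);
      uv_tcp_bind(&server, (const struct sockaddr*) &addr, 0);
      uv_listen((uv_stream_t*) &server, 128, on_connection);

      /* The listening handle stays active, so this never returns on its
         own, which is exactly why server.listen() keeps node alive. */
      return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }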
libuv has close to 400 tests, but they're written in C, and C is kind of a pain to deal with; I mean, I think we're mostly JavaScript developers in this room, so it's tedious. And, like I said, we had issues where, for example, libuv 1.19 came out and everything was fine; we went to upgrade node, and the CI came back red, so we didn't actually land that. But unfortunately there are people in the community who, as soon as a new libuv release comes out, build with that version.
You know, they compile node with that version of libuv themselves. So even though we hadn't strictly broken node, we did break some users who were, you know, a little more brave, and so we very quickly reverted and got a new version released, and life was good again. But it was an issue that happened more than once, and it was kind of frustrating to deal with.
So we actually created a new CI job where, before we create a libuv release, we take whatever is the latest in node and whatever is the latest in libuv, compile them together, and then run node's test suite and see what happens. Node has, you know, over 2,800 tests there, and it's easy to write tests in node, because they're JavaScript. And ever since we kind of took this approach, we haven't had any issues like that.
So now I want to actually trace through a thread pool operation, all the way from userland JavaScript code down to the thread pool. In this case we're just going to do a copy-file operation, so we're going to use fs.copyFile(). There are three different variations of that: the synchronous version, the promises-based version, and then a callback-based version. The first one shown here is the synchronous version, the second is promises, and then the one at the bottom is the old-school callback-based approach.
So, from the code that we just saw, the first thing that would happen is we would call into node's fs module. In this case I'm showing the code for the promises-based version, because it just fits on a slide a little better; but, you know, if you look inside node, there's similar code for the synchronous and callback versions. All we're doing here is passing in the source and the destination,
so, you know, where we're going to copy the file to, and then certain flags that the operation takes. An example flag would be: if the file exists already, do we want to overwrite it or not, things like that. We're going to validate both the source and destination paths inside of node, and for the flags we're going to make sure that the flag is an integer; that's what the "| 0" is, which is kind of a JavaScript trick.
Then there is a symbol that, under the hood, will tell the binding layer that we're doing a promises operation as opposed to synchronous or callbacks. And so, if you look through the node code base, these are the three different ways that we would call binding.copyFile(). You'll notice that the first three parameters in every case are the source, destination, and flags, because that's going to be the data that we're operating on; but then the remaining arguments differ between the implementations.
As I already mentioned, kUsePromises is a symbol that tells it to use the promises implementation. The synchronous version passes undefined, followed by a context; the context is what will be populated with any results and errors that might come back from the binding layer. And then the callback-based version just passes something called req, which is a C++ object that's used in the callback code. So from here we're going to be leaving JavaScript and entering the really ugly C++ code that I talked about a little bit.
Can everybody in the back read that? Not that it's nice code. So this is actually the C++ code that gets executed any time you run copyFile. In the first collection of statements, you'll see a bunch of CHECK_GE, CHECK_NOT_NULL, things like that. These are basically a last-ditch effort to validate user input. If any of these checks fail, node will hard crash, and we're okay with that for a couple of reasons. First off, you're really not supposed to be using the binding layer directly.
So if, in your code, you're calling this directly, then, you know, you're kind of on your own. The other reason is that we've already validated all these same things in JavaScript, so we're just trying to make sure that people aren't going to be passing garbage data to us from there.
Then there's the function inside of libuv that this operation is going to execute. So from here we're actually going to be leaving node completely and going into libuv, and this is that same function that I just mentioned; this is what the signature looks like. The first parameter is going to be the event loop; for synchronous operations that can actually be NULL, but it'll be there in all the calls. The second parameter, the req, is basically a file system operation request. So, you know, earlier I talked about handles and requests.
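That signature, and a minimal asynchronous use of it, look roughly like this (my own sketch):

    #include <stdio.h>
    #include <uv.h>

    static void on_copy(uv_fs_t* req) {
      if (req->result < 0)
        fprintf(stderr, "copy failed: %s\n", uv_strerror((int) req->result));
      uv_fs_req_cleanup(req);  /* every uv_fs_t must be cleaned up */
    }

    int main(void) {
      uv_fs_t req;
      /* int uv_fs_copyfile(uv_loop_t* loop, uv_fs_t* req, const char* path,
                             const char* new_path, int flags, uv_fs_cb cb);
         passing NULL for the callback makes the call synchronous instead. */
      uv_fs_copyfile(uv_default_loop(), &req, "src.txt", "dst.txt", 0, on_copy);
      return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }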
This is one of those requests. Node's not really responsible for attaching all of the information to the request; there are some macros later in this code, which I'll go over, that kind of populate that some more. But then the source and destination are passed as path and new_path, the same flags that we gave in JavaScript are passed as the flags, and then the uv_fs_cb is going to be either a callback function or NULL if it's synchronous. From there we're going to call INIT; INIT is a macro.
It's going to populate that request that I talked about; it's going to basically tell the request that this is a copy-file operation, so everybody who calls into libuv doesn't have to specify which operation they're running; it'll just automatically know by the function that you call. Then we do some flag validation, because we want to make sure that people aren't passing in garbage values in the call. So if anyone passes in a flag that we don't recognize, we'll return UV_EINVAL. Next we do fs__capture_path.
The point of this function is to basically take the path that was passed in and make a copy of it, because if we're doing an asynchronous operation, there's a chance that that memory could be freed by the time the operation completes, and if that happens, you're probably going to run into a hard crash. And then, finally, we add the flags to the request, and then we call POST. POST is another macro, one that's going to send your work off to the thread pool. It's worth pointing out that this is the actual full implementation on Windows.
There are similar ones on Linux, but because the Windows code is a little smaller and cleaner, I'm going with that. So this is what the POST macro looks like. Basically, it checks whether the callback is NULL or not, so it knows if it's synchronous or asynchronous, and then, if it's asynchronous, it's going to register the request with the event loop; once it does that, it'll keep your process alive.
So this is the Windows-internal copy-file implementation. We're again going to do some flag validation; this is going to be operating-system-specific flag validation, because certain things are supported on UNIX and Mac that aren't supported on Windows. And then you'll see the CopyFileW call; that's actually a Windows API call that will handle the copy for you. And then the rest of the code at the bottom is because of a little bug on Windows, where it'll return EBUSY if you're trying to copy the same source and destination.
So we try to do the copy operation; if it fails with EBUSY, then we stat both of the files, and if they're the same, then we know the operation actually succeeded and it's not a genuine error. But from there, we're basically going to go back up the stack: we'll exit the event loop, go back up to the binding layer, and back up into JavaScript.
Basically, there have been a few oopses on our side, but in order to do some cleanup that we'd like to do and add some new features that are breaking changes, we would have to bump up to 2.0. The problem with that is, at this point, that would be a rather large delta, but also we have a small team. There are, you know, hundreds of people collaborating on node; I'd say there's less than 10 collaborating on libuv. So it's a pretty big support
A
Job
for
the
people
who
are
working
on
Libby
v2
support
a
1x
and
a
Oh,
even
though
some
projects
out
there
are
already
actually
using
the
the
fake.
Oh
so
we
have
V
1
X,
which
is
what
node
and
most
people
are
using,
and
then
the
master
branch
in
github
is
what
would
be
the
oh
I
think
Giulio
Lange
at
least
is
already
using
that.
But, you know, we've been going back and forth on this for over a year now, and I think we're starting to come around to the idea of just staying on v1 forever. There's something to be said for stability and things like that, and it would avoid extra work on node's side of having to create APIs in N-API so that libuv changes wouldn't break node add-ons, and things like that.
So I think now we're leaning more towards 1.x forever, but we are still adding a bunch of new features; this is just some of them that have landed in the past year. One of the ones that, I guess, depending on your use case, could be a big feature is that the maximum thread pool size has been increased. So, you know, by default it's still going to be four threads, but if you really need it, you can bump it up to 1,024 threads. Prior to that change,
the maximum that you could do was 128; people have made the case that, you know, as computers continue to get better, we can run more threads. We've added something called uv_random(), so this is going to be libuv's answer to generating cross-platform random numbers. UDP connected sockets are a new thing: with UDP sockets, you generally think of just kind of broadcasting your messages out, and maybe they'll be received,
maybe they won't; but it's actually possible to connect a UDP socket so that any time you do a send, it'll always send to the same destination. We've added uv_os_uname(), so if you're interested in getting more information about the platform you're running on, that's going to be useful there; and we've actually taken that, and node's os module is now built on top of it.
uv_get_constrained_memory() is interesting. One of the problems that people have is that, you know, V8 especially would set its memory limit to a certain amount of memory, like whatever is available in the computer. But some operating systems like Linux have things called cgroups (right now we only look at cgroups, but in the future this can be expanded to other things) where you can actually impose artificial memory constraints on your application, and so if something like V8 tried to use all of the system memory, it's not going to be able to do that anyway. uv_get_constrained_memory() lets you actually query the operating system to see how much memory you're allowed to use, so that's a useful one.
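A quick sketch of that call (it returns 0 when no constraint is detected):

    #include <stdio.h>
    #include <uv.h>

    int main(void) {
      uint64_t total = uv_get_total_memory();             /* physical memory */
      uint64_t constrained = uv_get_constrained_memory(); /* e.g. the cgroup limit */
      printf("total: %llu bytes\n", (unsigned long long) total);
      if (constrained == 0)
        printf("no memory constraint detected\n");
      else
        printf("constrained to: %llu bytes\n", (unsigned long long) constrained);
      return 0;
    }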
Threads can now actually set what they want their stack size to be; that's just a little tweak so that, if you know how much memory you're going to need, you can configure that. A really big one was streaming readdir; this request goes back like five years. Node has fs.readdir(), but under the hood that actually calls scandir, and that buffers all of the results at once, so you can see, if you're trying to read a very large directory, how you could run into memory issues. So people have wanted a streaming readdir.
We had a pull request that changed hands like three or four times, and it was also targeting the master branch. So within the past year we actually, you know, got that under control, got it targeting the v1.x branch, and we were able to get it landed; as of a couple months ago, it's now shipping in node. And then there's uv_gettimeofday(); this is basically the C equivalent of Date.now(). And uv_fs_mkstemp():
it's a call to make a temp file for your application. And then I wanted to finish up with just one thing that's, I guess, tangentially related to libuv, but it's my talk, my rules, so I want to talk about it. It's called uvwasi. WASI is the WebAssembly System Interface; it's relatively new, it came out within the past year, and it basically gives WebAssembly applications a way to access the underlying operating system, because by default WebAssembly code is sandboxed.
So, you know, as a libuv maintainer, when I first heard about WASI, I was like, oh, that sounds like libuv for WebAssembly; let me try to build that. So I did. I built it on top of libuv for maximum portability, and then, as of node 13.3, which, I don't know, came out in the past month or so, it's shipping in node, so there's documentation and you can require the WASI module and play around with that.