Description
gRPC is a language-independent framework for making remote procedure calls, used by large companies such as Netflix, Docker, Google, and more. gRPC leverages technologies such as HTTP/2 and protocol buffers to create efficient network-based applications.
This talk provides an introduction to basic gRPC concepts and shows how the framework can be used in both browser and Node.js applications. It compares and contrasts the various modules available to JavaScript developers. Finally, it discusses certain architectural tradeoffs that come with gRPC-based systems.
The thing that kind of makes gRPC unique, as far as web frameworks go, is that it leverages HTTP/2. If you just wanted to use HTTP/1, that's not going to be an option. It also uses something called protocol buffers, which I'll talk about in a little bit; that's a way to serialize your data. Think of something like JSON, except it's a more compact binary representation.
Some of the features that come with gRPC: there are four RPC types. They are unary, client streaming, server streaming, and then bidirectional (or bidi) streaming. The client-side streaming API basically means the client is going to send a stream of messages to the server, and the server will respond with one response. Server-side streaming is the reverse: the client will send one request and the server will respond with a stream of responses. Bidirectional streaming is both at once, with each side sending a stream of messages.
Another feature is metadata. This is kind of a fancy way of talking about the HTTP/2 headers: gRPC strips out some of the HTTP/2 headers and uses what it calls metadata as a way to send information about the RPC back and forth. One of the things you can use that for is authentication, which is built in, so you can roll your own authentication.
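As a sketch of what working with metadata looks like in @grpc/grpc-js (this example is mine, not from the slides; the `authorization` key and the `client` object are assumptions):

```javascript
// Illustrative sketch, not from the talk. Assumes `client` is an
// EchoService client like the one created later in the talk, and that
// the server looks for an 'authorization' entry (an arbitrary choice).
const grpc = require('@grpc/grpc-js');

const metadata = new grpc.Metadata();
metadata.set('authorization', 'Bearer some-token');

// Metadata is passed as an extra argument on the call.
client.echoUnary({ value: 'hello' }, metadata, (err, response) => {
  if (err) throw err;
  console.log(response.value);
});
```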
It also has built-in support for Google authentication, for obvious reasons. Deadlines and cancellations: this just means that either side of the RPC, the client or the server, is able to time out or cancel the request at any point. Compression is supported out of the box, so to make your requests a little bit more efficient, you can compress things before you send them over the wire. One of the really nice things that's built in, which I haven't seen in other HTTP frameworks, is load balancing.
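A minimal sketch of a deadline and cancellation in @grpc/grpc-js, assuming an existing `client` like the one created later in the talk; the one-second deadline is arbitrary:

```javascript
// Illustrative sketch, not from the slides. Assumes `client` is the
// EchoService client shown later in this talk.

// Give the server one second to respond; if it doesn't make it, the
// callback receives an error with the DEADLINE_EXCEEDED status code.
const deadline = new Date(Date.now() + 1000);
const call = client.echoUnary(
  { value: 'hello' },
  { deadline },
  (err, response) => {
    if (err) {
      console.error('RPC failed:', err.code, err.details);
      return;
    }
    console.log(response.value);
  }
);

// The client can also cancel the in-flight RPC at any point:
// call.cancel();
```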
It has a built-in load balancer in the client, where you can give it a list of servers and it will actually balance between those when it's sending its requests. It can also do a lot of things for you, like automatically generating boilerplate: it can generate documentation, and it can generate a client for you so you don't have to use gRPC directly. So there's a lot of really nice built-in functionality.
So then, coming back to protocol buffers: you'll basically write a file describing your service and the messages that it's going to send back and forth, and you'll save it in something like a .proto file. This is just an example. The very first line is `syntax = "proto3"`. Protocol buffers are versioned, so this allows you to define which version of protocol buffers you're using. Next, the green line with the two slashes is just a comment, but it's saying: here we're going to define a message type.
If we're going to create an echo server, where basically whatever the client sends, the server will just send the same message back, we're going to define a message called the echo message, and it'll have one field, named value, of type string. All of this is very important: whenever you're going to actually serialize your data, the serializer needs to know what you're serializing and what the fields are called, so that the other side knows how to unpack it.
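The message definition described above would look something like this (a reconstruction of the slide; the field name and type are from the talk):

```protobuf
syntax = "proto3";

// Define a message type.
message EchoMessage {
  string value = 1;
}
```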
Then next we're going to define a service. This is going to be an echo service with the four different RPC types, the four that I talked about before: unary, client stream, server stream, and bidirectional streaming. If you look, it basically says `rpc EchoUnary`; that's going to be the name of your function call, and then in parentheses is your input.
So it's going to take an echo message, and then it's going to return an echo message. You can see for the streaming ones that there's a `stream` keyword that just gets prepended. At least as far as the protocol buffers are concerned, it's that simple to switch between unary and streaming RPCs.
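Putting that together, the service definition probably looked something like this sketch; `EchoUnary` is named in the talk, while the other three RPC names are assumptions in the same style:

```protobuf
service EchoService {
  // Unary: one request, one response.
  rpc EchoUnary (EchoMessage) returns (EchoMessage);
  // Client streaming: a stream of requests, one response.
  rpc EchoClientStream (stream EchoMessage) returns (EchoMessage);
  // Server streaming: one request, a stream of responses.
  rpc EchoServerStream (EchoMessage) returns (stream EchoMessage);
  // Bidirectional: both sides stream.
  rpc EchoBidiStream (stream EchoMessage) returns (stream EchoMessage);
}
```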
If you want to use gRPC in JavaScript: gRPC is supported across a large number of languages. I think Go is probably the biggest one, but there's also a C core that is shared across a number of languages, and you can use gRPC in Node, Go, Java, I think Python; the list goes on. As far as JavaScript is concerned, there are two primary environments that are targeted: the browser and Node.js. The browser has some fundamental limitations.
When it comes to gRPC, the biggest one is being able to specify that you want to use HTTP/2. I don't think there's really a built-in API for that, and if you're using an old browser, it might not even be supported at all. You also have to have more precise control over the HTTP/2 frames that you're sending out onto the network, which the browsers just don't give you any insight into. Node, then, is more of your typical back-end development environment.
It's more along the lines of Go and all the other languages that gRPC targets. For the browser, well, not recently, I guess last year, something called gRPC-Web was made ready for production. The way gRPC-Web works is that you have to introduce an Envoy server into your stack somewhere, and the Envoy server serves as a proxy. So you have a browser that can communicate over gRPC-Web, which is just a slightly different protocol over HTTP/1 using XHR requests.
Your browser will talk to Envoy, and then Envoy will proxy your messages back to a normal gRPC server. But like I was saying earlier, at least our team had use cases outside of just the browser, so gRPC-Web wasn't really an option for us, and I don't think it was actually considered production-ready at the time.
So we looked at the grpc module. This builds on top of the C core, so it's a native compiled add-on. It actually predates N-API, so it's built using NAN, which is no longer considered the best way to write native add-ons, and native add-ons come with a number of other issues that I'll talk about in a second. I have the npm statistics for this module at the bottom, as of a couple of days ago.
I was actually a little surprised to see that grpc had more downloads than, I think, every other Node framework except for Express. I talked to one of the gRPC maintainers about that. It turns out it's bundled with all the Google client libraries and things like that, so that gives it a little bit of an advantage in downloads. One of the things they do to ease the pain of using a compiled add-on is provide prebuilds.
That is, a precompiled version of the module for different operating systems, different versions of the module, and things like that. Last I heard, they were shipping over a hundred of these, so a significant amount of work goes into that. But there's a big problem with compiled add-ons, and that is that they generally don't work very well.
So, @grpc/grpc-js, in pure JavaScript: we wanted to avoid the issues we were having jumping between versions of Node, so we wanted to avoid a compiled add-on. There are actually some ecosystems in Node, primarily the hapi ecosystem, where compiled add-ons are just not allowed because of all the issues they create. And then there are other issues with compiled add-ons, like having to keep crossing the JavaScript and C++ boundary a lot, which can cost you, depending on how chatty your add-on is.
If you look at the download count, it recently passed the number of downloads for grpc; I think Google has been migrating from grpc to @grpc/grpc-js. It's currently a beta release, but Google's using it, and I haven't really found issues with it. In my opinion, it's more reliable than the grpc module, but it doesn't have all of the same features yet; they're still in the process of adding them.
It's built on top of Node's http2 module, which was itself in beta until Node 10.10, in September of 2018. Whenever you go to actually require or import @grpc/grpc-js, it'll do a version check, so you have to be using a Node version that satisfies its semver string: greater than 8.13, or greater than 10.10. I recommend not using it with Node 8.
I've seen issues with it on Node 8; I would only recommend using it with Node 10 and above. Another nice thing about this is that it has no runtime dependencies other than semver, which is used to check that version string. So you don't have to worry too much about a lot of people slipping viruses or things like that into your code. As an example of a unary client: on the very first line, I'm just requiring @grpc/grpc-js, and then creating a client with `new EchoService()`.
If you remember back to the proto file that I showed earlier, that was the name of the service we were creating; it translates into the same thing in your JavaScript code. You pass in basically a host and port that you want to connect to, and then something called credentials. In this case we're not going to be doing secure communication with the server, so we're using `credentials.createInsecure()`, but there are also secure versions of these things. And then it just works.
RPC stands for remote procedure call, so it looks just like a function call. We do `client.echoUnary()`; if you recall back to the proto file, that was the name of the RPC that we created. You pass in your value, so 'hello unary', and then it does still use callbacks. They haven't moved over to async/await yet; I think they're looking for a proposal on the best way to do that.
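A sketch of the unary client just described, assuming the proto file from earlier is saved as example.proto and the server is listening on an arbitrary local port:

```javascript
// Sketch of the unary client described above. The file name and port are
// assumptions; EchoService comes from loading example.proto with
// @grpc/proto-loader, which the talk covers in more detail later.
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync('example.proto');
const { EchoService } = grpc.loadPackageDefinition(packageDefinition);

// Host and port to connect to, plus credentials. createInsecure()
// skips TLS; there are secure variants as well.
const client = new EchoService('localhost:4444',
  grpc.credentials.createInsecure());

// An RPC looks just like a function call. Still callback-based;
// there is no async/await API yet.
client.echoUnary({ value: 'hello unary' }, (err, response) => {
  if (err) throw err;
  console.log(response.value); // the server echoes the value back
});
```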
Next I wanted to show an example of client streaming. The beginning is going to look the same: we're requiring @grpc/grpc-js and creating a client. This time we're calling `client.echoClientStream()`, and it's going to return a Node.js stream to us. If you're familiar with using the built-in Node streams APIs, it's the exact same API, so it's pretty straightforward to get up and running if you're familiar with that. And then, if you look at the very bottom, we have `stream.end()`.
In this case I'm only sending one message to the server, but I could call `stream.write()` as many times as I want. Then, because it's a client stream, the response from the server is just going to be one response, so we get the callback the same as before.
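The client-streaming call described above can be sketched like this, assuming the same `client` as in the unary example:

```javascript
// Sketch of the client-streaming call, assuming `client` was created the
// same way as in the unary example.
const stream = client.echoClientStream((err, response) => {
  // Because this is client streaming, the server sends back exactly one
  // response once we've finished writing.
  if (err) throw err;
  console.log(response.value);
});

// Node.js writable stream API: write as many messages as you like...
stream.write({ value: 'hello client stream' });
// ...and then signal that you're done.
stream.end();
```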
But if we look at a server-side streaming client, it's going to start off the same.
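A server-side streaming client, under the same assumptions as above, would look roughly like this sketch; the call returns a readable stream instead of taking a callback:

```javascript
// Sketch: server-side streaming, assuming the same `client` as above.
// The call returns a Node.js readable stream of responses.
const stream = client.echoServerStream({ value: 'hello server stream' });

stream.on('data', (response) => {
  console.log(response.value); // one event per message from the server
});

stream.on('end', () => {
  console.log('server finished streaming');
});

stream.on('error', (err) => {
  console.error(err);
});
```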
The next thing I want to talk about is something called proto-loader. It's another module that you use to actually load proto files into your application. The original grpc module, the compiled add-on with the C core, actually supports loading these things by default, in the module itself. When they moved over to @grpc/grpc-js, they wanted to separate out that functionality.
I think the reason for doing that was to create a nice interface for proto files that could be versioned independently, without messing with the rest of the module. Under the hood it uses a module called protobufjs. To use it, you would obviously just `npm install @grpc/proto-loader`; the second line here shows how you would require it into your application. There are asynchronous and synchronous loading capabilities; for the purposes of keeping things simple on the slide, I went with the synchronous version, `loadSync()`, assuming that our file is called example.proto. Then there are some options where you can configure how you want to load it. `keepCase: false` means that whatever the case is inside of the proto file itself, you don't necessarily have to respect it.
It'll do nice things for you, like converting from snake_case to camelCase, because that's typically what JavaScript developers use. And then for things like longs and enums, which don't necessarily have a corresponding type in JavaScript, you can say how you want them to be parsed into JavaScript. In this case, longs and enums will both be parsed as strings.
That's important because, if you have a really big number that won't fit into a typical JavaScript number, you might want to encode it as a string and then do something with it from there. This will give you your package definition, and then all you would do is call `loadPackageDefinition()` inside of @grpc/grpc-js, and it'll give you back a package that you can then start using to make your RPCs.
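The loading steps just described can be sketched as follows; the option values are the ones mentioned in the talk, and the file name example.proto is the talk's running example:

```javascript
// Sketch of loading a .proto file with @grpc/proto-loader, following the
// options described above.
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync('example.proto', {
  keepCase: false,  // convert snake_case field names to camelCase
  longs: String,    // 64-bit integers won't fit in a JS number; use strings
  enums: String     // represent enum values as strings too
});

// Hand the definition to @grpc/grpc-js to get a usable package.
const pkg = grpc.loadPackageDefinition(packageDefinition);
```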
@grpc/grpc-js was great when we started using it, but we also needed to have a mock server, because remember, the original use case was trying to talk to Go services that weren't really working for us, and at the time @grpc/grpc-js didn't have a server component; it was client only. I guess Google prioritized the client over the server for their own needs. So I wrote one. It is not an officially gRPC-supported module, but it seems to work just fine.
It's a server written in pure JavaScript; no TypeScript here. It's also API-compatible with the grpc module, so you can drop it in for the grpc module, or now the @grpc/grpc-js module, which now has a server component. The only production dependency is @grpc/grpc-js, which is used for some shared data structures.
Things like constants, such as status codes (gRPC uses its own status codes instead of typical HTTP status codes), and the metadata type that's used for transferring headers around. When I was creating this, I was actually able to find a few bugs, and opportunities to improve performance, in the upstream module.
As an example of what a server would look like: you require grpc-server-js, pull out the Server class, instantiate your server, and call `server.addService()` with the same thing that you got from your proto file earlier. Then you can actually define the implementations for how you want to handle all the different RPCs.
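A sketch of the server setup just described. The module name grpc-server-js, the port, the binding call, and the handler shape are assumptions based on the talk's description (the binding API is modeled on the grpc module, which this server claims compatibility with); only the unary handler is filled in:

```javascript
// Sketch of the pure JavaScript gRPC server described above. Assumes the
// module is required as 'grpc-server-js' and that EchoService was loaded
// from the proto file as shown earlier.
const { Server, ServerCredentials } = require('grpc-server-js');

const server = new Server();

// Wire the service definition from the proto file to implementations for
// each RPC. Only the unary echo handler is sketched here.
server.addService(EchoService.service, {
  echoUnary (call, callback) {
    // Echo the request message straight back to the client.
    callback(null, { value: call.request.value });
  }
  // echoClientStream, echoServerStream, echoBidiStream would go here.
});

// Bind and start; the exact binding call is an assumption following the
// grpc module's API shape.
server.bindAsync('localhost:4444', ServerCredentials.createInsecure(),
  (err) => {
    if (err) throw err;
    server.start();
  });
```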
When I went to start testing this thing, I ported a lot of the tests over from the gRPC repo, because there's no need to reinvent the wheel. But they don't really focus a lot on code coverage, and I was coming from a background working with hapi, where the hapi maintainer beat it into our heads that everything had to have 100% code coverage. I didn't get all the way there; I got to 95% code coverage, and that also uncovered some more bugs inside of the gRPC client implementation.
They were just minor bugs, like compression not working at all and credentials not working; just little things. And then, like I said earlier, I was able to actually go back and make some improvements to the upstream module. They were very focused on the client, and didn't have a server implementation, but they were doing things like using the delete operator all over the place. If you do a lot of JavaScript performance work, you know that V8 doesn't really like the delete operator.
I mean, lodash is a popular module, but if you don't need a dependency, it's best not to have it. That led to roughly 15 to 20 percent improvements in the performance of the server that I had been working on. Then I presented this work at gRPC Conf last year, talked to one of the maintainers of the project, and we agreed that I could upstream the server to them. So I did a lot of wrestling with converting from JavaScript to TypeScript.
It made me want to cry a lot, but it finally got in as of June of this year; the exact same code is now running as a TypeScript version inside of @grpc/grpc-js. We did some work around benchmarking, just to see what performance would be like. This was across @grpc/grpc-js with the server that I created, the compiled grpc add-on, and then also Go and Rust. Unsurprisingly, Go and Rust were faster.
The performance difference between the pure JavaScript implementation and the compiled add-on was actually right about where I thought it would be, so in general, even though it was the slowest implementation, I was actually happy with how it turned out. Along the way we did run into a number of pain points. One thing that we didn't personally encounter, but that I have read a lot of reports about, is gRPC incompatibilities with other tools in the ecosystem.
For example, you might have a load balancer that doesn't load balance gRPC traffic. You can't just use an L4 load balancer; you need an L7 load balancer that understands gRPC. They do exist, it's just something that you need to be aware of. Benchmarking is similar: because gRPC is kind of its own special snowflake, you can't just use a normal HTTP load generator to throw the same traffic at, say, a hapi or Express server and then also gRPC, so getting an apples-to-apples comparison can be a little bit rough. Also, the Node.js gRPC community is not very large, from what I can tell. There are, like I said before, a lot of downloads, but those downloads are primarily coming from Google itself.
The other thing, and this was probably my biggest complaint, is that if you enjoy working on open source, if it's something you want to do in your spare time, this isn't really the project that I would recommend contributing to. Even though it is a CNCF project, it's run more like a Google project.
You might not want to use Envoy at all, or you might just want to do some local testing where you don't need to set up an Envoy container on your local machine. So I've started working on a Node in-process proxy that can speak gRPC-Web and then proxy that out. Some other future work that I would like to see happen: more feature parity between @grpc/grpc-js and the grpc module. They have something called interceptors, which is their version of middleware.
Then there's always going to be continued performance and stability work; you can always make things better, and you can always fix more bugs. Integration of Node.js workers, I think, would be something interesting to play around with. I don't know what kind of performance gains it might lead to, but it's worth investigating. And then just general tooling and Node ecosystem integration: it'd be nice if there was a benchmarking tool that could talk HTTP and gRPC at the same time.