From YouTube: Deep Dive: gRPC Node - Michael Lumish, Google
Description
Deep Dive: gRPC Node - Michael Lumish, Google
A look at the Node gRPC implementations, their interoperability, and future development plans.
To learn more: https://sched.co/GreA
My name is Michael. I'm the primary maintainer of gRPC for Node.js, and that's what I'm going to be talking about today. This presentation takes about 20 to 25 minutes, and then I'll take questions at the end.
So first I'll talk about gRPC in general, for those who are unaware. In some ways it's similar to REST.
In Node in particular, we provide tools that support two different workflows. One is to precompile the protobuf code into that generated code using protoc, which is a Google library. The other is to parse the proto files and generate that code at runtime, using a different library that we provide that's built on top of the third-party Protobuf.js library. Now I'm going to jump into a quick demo of what that looks like in practice.
This looks like quite a bit of code, but it's all fairly standardized, and you can easily factor it out. Then we simply instantiate the client class with the target address and insecure credentials, because we're not using TLS in this case, and then we call the method using the types specified before. If we look back at the proto file, you can see that the name is a string, so we pass an object with a name that is a string, and then we get a callback that shows the response.
So, going back to the presentation, I want to talk a little more about the architecture of this library. There are three main components. First, there is the C core, which is written in C, and increasingly in C++, and implements the full gRPC protocol and actually the HTTP/2 protocol as well; it directly uses TCP.
This core is shared among several of our other libraries, including Ruby, Python, and others, and we've done some work to have it interface with the libuv event loop, which is what's used within Node itself. On top of that, there is the native add-on code. This is a Node-specific concept: it uses C++ APIs provided by the Node runtime to define JavaScript APIs.
In this case, the primary purpose of our native add-on code is to translate types and control flow between what's defined within the core and what's useful in the JavaScript API, and it provides a callback-based interface. Finally, there is the JavaScript surface layer, which is JavaScript code that consumes the API of the native add-on and exposes the library's final public API. Now I want to talk a little more specifically about the precompiled binaries.
You saw one downloaded in the line I highlighted. The core and the native add-on code are statically linked together into a single binary file, and we distribute these binaries. We use a library called node-pre-gyp, which is built for this purpose. It's responsible both for downloading the appropriate binary for the current system and for loading it at run time, and it falls back to compilation, which we hopefully have to deal with less and less frequently.
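For reference, node-pre-gyp is driven by a "binary" section in package.json that tells it where to fetch prebuilt binaries and where to place them; the values below are illustrative, not the actual gRPC package's configuration:

```json
{
  "scripts": {
    "install": "node-pre-gyp install --fallback-to-build"
  },
  "binary": {
    "module_name": "grpc_node",
    "module_path": "build/{node_abi}-{platform}-{arch}",
    "host": "https://storage.googleapis.com/example-bucket"
  }
}
```

The `--fallback-to-build` flag is what triggers the compile-from-source path when no matching binary can be downloaded.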
We distribute these binaries using Google Cloud Storage, in a single bucket, and there are a lot of different factors that require us to compile different binaries: Node versus Electron, different versions of each, different operating systems, different architectures. We have a set of custom scripts and some internal CI jobs that generate these for every version of gRPC that we publish.
So that brings us to issues that we've encountered over time, primarily as a result of the existence of this native add-on and this distribution system. Installation becomes more failure-prone: since we're distributing these binaries from our own server, there's an extra point where downloads can fail. A user who can access the npm servers just fine may start experiencing failures if they cannot access Google Cloud Storage, as some of our users in China have experienced at some points.
There are also sometimes file system access failures: node-pre-gyp touches the file system in its own way, and sometimes, especially in certain Docker environments, things aren't properly set up to handle that. And every time there's a new version of Node, we have to proactively publish new binaries.
We have to do that in order to support the new version, and whenever there's a delay, people start experiencing failures. Similarly, if they're installing an old version of our library that doesn't support their Node version, they experience similar failures. As I mentioned, there is a fallback to compilation, but a lot of environments, like Docker, and even regular development environments for Node, don't have the tools available to actually compile this code, so the fallback doesn't work either and installation just completely fails.
Besides that, people also have trouble loading the library even once they have successfully installed it. This particularly happens if you install on one system and then try to deploy your application to Docker or to some cloud environment, because the version detected at installation time doesn't match the version needed at runtime. Debugging was another big issue.
And finally, as I mentioned, there is a variety of factors that impact the binary, and this has led to a combinatorial explosion. We don't support every single combination of these things; some of them just don't make sense. But we still have well over 300 binaries that we publish with every single version of gRPC, totaling almost half a gigabyte.
So, as a result of this, the solution that we came up with was to re-implement gRPC purely in JavaScript, on top of Node's built-in APIs. The library itself is written in TypeScript, which has become a lot bigger in the time since we originally started this project, and we use the built-in HTTP/2 module, which was introduced fairly recently. That module was fundamentally important to this project: HTTP/2 is a complicated protocol, and not having to reimplement it ourselves made this approach practical.
We wanted to make sure that, to the greatest extent possible, this new implementation is a drop-in replacement for the old one. We couldn't do that perfectly: some of the APIs from the original library are deprecated, or otherwise could not be implemented in the new implementation, so those are omitted.
That was kind of the biggest place where we could gain from providing this new implementation. We've also decided, so far, to omit some advanced features that we are unaware of any demand for, simply as a prioritization decision. So certain advanced client-side load balancing features, whole-stream compression, and some others, we will simply omit for now. Now I have another short demo to show what I'm talking about. If we look back at this client code again, we can show the API compatibility.
I can simply replace the required gRPC module with the new library, and then, just to show that we're actually affecting this file, I change the response text too. Then I go back and run the client code again. You can see it has the new response, so it is using the new library. No other code had to be changed to make it work, and it's completely compatible at the protocol level.
We can balance requests between those backends. Also, TypeScript, as I mentioned, has become a lot bigger in the time since we started this project, and we don't generate TypeScript signatures for the generated code; that's our users' biggest gap when they're trying to use TypeScript with the library. And this doesn't have to be exhaustive.
As we learn about what features developers who use gRPC really need, we can use that to determine what we have to focus on when re-implementing things. So if you use this library, or you're interested in it, I encourage you to check it out, and in particular to tell us about what features you need for your own use cases, so that we can better prioritize our own development. And that's actually all I have.