Sponsored Keynote: Cheryl Hung, Director of Ecosystem, Cloud Native Computing Foundation & Leonardo Pistone, Senior Site Reliability Engineer, Spotify
A
Hi, and welcome to gRPC Conf. I'm really excited to be here to talk to you today. I'm going to be talking a little bit about how gRPC has grown as a community and as a project, and I've also got a special guest here who's going to talk about Spotify's experience with gRPC. My name is Cheryl Hung; I'm the VP of Ecosystem at the CNCF, which is the home of gRPC, among many other open source projects.
A
Cool, so let's look back a little bit. gRPC was the sixth project to join the CNCF, and since March of 2017 it has added more than eleven hundred contributors, who've collectively made nearly 250,000 contributions to gRPC. A contribution here is defined as a pull request, an issue, a commit, or a review.
A
Every year the CNCF runs a survey where we ask people which projects they're using, and whether they're using them in evaluation or in production. Since 2017, gRPC has gone from 39% in production to 65% today. The 2020 survey is not out yet, so this is an early sneak preview. And with that, I'm going to hand over to Leonardo to talk about his experience.
B
Thank you. At Spotify we have more than 1,400 microservices in our backend, built by 280 teams, and we handle more than 8 million requests per second from our clients alone. This means a lot of internal communication between backend services. It also means a lot of communication between the teams that build them, who have to organize themselves and stay in sync. So we've been thinking about this for a very long time.
B
So
if
we
talk,
if
we
look
at
our
past,
it's
already
in
2012
that
we
were
thinking
about
this
and
grpc
didn't
exist
back
then
I
mean
it
was
not
announced
to
the
public
and
we
needed.
We
thought
that
we
needed
a
very
high
performance
and
easy
to
use
y
protocol.
So
what
we
did
back,
then,
is
that
we
built
our
own
and
what
we
ended
up.
Building
was,
interestingly
enough
was
kind
of
similar
to
how
jrpc
is
in
some
ways.
B
For
example,
it
used
protocol
buffers
just
like
grpc
did.
Http
2
didn't
exist
even
back
in
the
time,
so
this
worked
very
well
for
us
and
it
allows
us
to
allowed
us
to
to
grow
and
scale
up
to
the
point
where
we
are
today
so
now
we
get
to
today,
and
the
situation
where
we
have
is
where
we
are
is
that
our
protocol
still
works
fine,
it's
still
high
performance
and
good
enough
for
the
scale
that
we
have
today.
So
it's
scaled
well,
but
there
are
some
problems
that
we
cannot.
B
We
cannot
address
with
our
protocol.
Basically,
this
relates
to
the
ecosystem.
So,
just
to
give
an
example,
we
cannot
leverage
anything
that
already
exists
in
the
community.
We
cannot
open
source
our
own,
our
own
tools,
also
connected
to
that.
We
we
only
built
web
libraries
to
support
our
own
protocol
in
the
couple
programming
languages
that
we
care
the
most,
but
this
is
a
big
imitation,
because
anything
else
we
want
to
do.
We
just
cannot
use
it.
B
So
we
decided
that
this
was
getting
a
problem
and
wanted
to
fix
it
and
to
fix
it,
we
decided
to
adopt
grpc
in
production
as
our
main
as
our
main
wire
protocol.
So
this
has
been
working
great
to
us.
It's
it's
kind
of
familiar,
as
I
mentioned,
because
of
some
similarities
to
what
we
already
have,
but
there
is
much
more
so,
for
example,
as
I
mentioned
it's
great,
that
there
is
already
support
for
different
programming
languages,
for
example,
and
different,
also
programming
paradigms
in
some
cases.
B
Another
specific
aspect
related
to
that
is
that
we,
we
used
to
very
often
build
specific
client
libraries
for
every
one
of
our
microservices,
but
this
was
very
cumbersome
and
also
there
was
a
lot
of
coupling,
because
any
any
one
of
those
libraries
was
very
deeply
coupled
with
not
only
programming
language
but
also
concurrency
models
and
dependencies,
and
things
like
that.
This
is
a
lot
of
unnecessary
complexity
and
coupling
and
grpc
is
a
very
good
solution.
B
Instead,
where
you
just
distribute
the
schema
in
a
protofi
and
code
generation
takes
care
takes
care
of
that.
So
that's
great.
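As a rough sketch of what that workflow looks like, the owning team publishes only a schema file like the hypothetical one below (the service and message names are made up for illustration, not Spotify's actual APIs), and each consuming team generates a client stub for its own language from it:

```proto
syntax = "proto3";

package playlist.v1;

// Hypothetical service definition: the owning team distributes only
// this .proto file. Consumers run protoc with the gRPC plugin for
// their language to generate client stubs and server skeletons,
// so no hand-written per-service client library is needed.
service PlaylistService {
  // Unary RPC: look up a single playlist by id.
  rpc GetPlaylist(GetPlaylistRequest) returns (Playlist);
}

message GetPlaylistRequest {
  string playlist_id = 1;
}

message Playlist {
  string playlist_id = 1;
  string name = 2;
  repeated string track_ids = 3;
}
```

A Python consumer, for instance, could generate its stub with `python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. playlist.proto`, while a Java team would use the grpc-java protoc plugin against the same file, which is how the coupling to any one language or concurrency model disappears.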
Looking to the future, there are a lot of interesting developments happening in gRPC that are very interesting for us, and I can give a couple of examples. Regarding tracing, for instance, gRPC already has support for solutions like OpenTracing and OpenCensus, and this is great, of course.
B
These
are
things
that
we
could
build
internally,
but
it's
not
the
best
use
of
our
time.
Let's
say
another
field
that
is
is
very
interesting.
Developments
related
to
grpc
is
is
load
balancing,
so
grpc
is
already
supported
by
different,
for
example,
open
source
service
meshes
and
also
managed
solutions
now
start
supporting
jpc,
and
this
is
super
interesting
for
us.