From YouTube: Taking Full Advantage of gRPC
Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from April 17-21, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Hello and welcome to my session on taking full advantage of gRPC. I'm Jimmy Zelinskie. Let's get into who I am and why it's all important. I am the co-founder of a company called AuthZed. AuthZed is the creator of SpiceDB. SpiceDB is an open source permissions database inspired by Google's Zanzibar paper. Effectively, what that means is we are a database where you store relationships between the different objects from your applications.
For example, "Amelia is a doctor assigned to this clinic," if you're a healthcare application, and then folks can query those relationships to determine access. So things like "Can Amelia treat this patient?", "Who are all the patients Amelia can treat?", and "Who are all the doctors assigned to this clinic?" are the types of queries that you can make to our database.
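To make that concrete, here is a hypothetical SpiceDB schema sketch for the clinic example. The definition and relation names are invented for illustration; the arrow syntax and the CheckPermission, LookupResources, and LookupSubjects APIs are real SpiceDB features.

```
definition user {}

definition clinic {
  // "Amelia is a doctor assigned to this clinic" is stored as a
  // relationship like: clinic:uptown#doctor@user:amelia
  relation doctor: user
}

definition patient {
  relation clinic: clinic
  // A user can treat a patient if they are a doctor at the patient's clinic.
  permission treat = clinic->doctor
}
```

"Can Amelia treat this patient?" then maps to a CheckPermission call on the patient's treat permission, and the two list-style questions map to LookupSubjects and LookupResources respectively.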
We're a little bit of a non-traditional database in the sense that we actually don't support SQL; gRPC is our primary query interface, and there are a couple of other databases out there that follow this. So that means we're trying to squeeze out as much performance as possible, both server side and client side, and having an RPC layer like gRPC lets us have that control on both of those sides pretty easily.
To make things slightly more concrete, we can talk about what these SLAs look like for something like SpiceDB: single-digit milliseconds at the 99th percentile is what we're targeting. Making that even more concrete: if you are trying to make a request and you have to establish a new connection or do a TLS handshake, which is required for all secure connections, that amount of time alone is enough to blow your SLA. So that means all connections have to be pooled, and we have to have a decent amount of sophisticated logic on both the serving side and the client side to reach these SLAs.
But before that, I worked at a little company called CoreOS. CoreOS had a mission similar to AuthZed's, which was securing the internet. But unlike AuthZed, where we secure the internet by building authorization tooling, CoreOS's goal was to do it through automated updates. They wanted server software to get updates similar to how cell phone software gets over-the-air updates. Before we could do that,
we actually had to build a whole bunch of things to get to the level of automation where we could finally automate those updates at the very end, and we did that by building lots of systems that were inspired by the internal systems at Google. You might have seen this trend: we basically built Google-inspired systems at CoreOS, and I'm continuing to do that at my current company, AuthZed. Fast forward a year into me being at CoreOS: in 2015, Google open sources gRPC. At the time it was shockingly similar to Stubby, which my co-worker and now co-founder Joey Schorr worked on while he was at Google; that's basically how the ecosystem talked about it, and Google's logic behind open sourcing it.
Now there are plenty of cloud native projects all using this as their RPC layer, and we have a healthy ecosystem around gRPC. I promise that this is the end of my history lesson, and I'll go forward with the meat and potatoes of this talk, presented in the intro in BuzzFeed-article fashion.
Maybe one day I'll give this talk at BuzzFeed and it'll come full circle, but I have the top eight tips to get more value out of gRPC. Unlike the typical BuzzFeed article, where they dare you to scroll further and further to get to the best content, I actually ordered this from the most impactful, most valuable thing about gRPC down to the least valuable (but still extremely valuable), with a warning at the very end that has a takeaway. So if you plan to fall asleep throughout this talk, all you have to do is stay awake for the next few, and you'll have gotten the most value out of it.
So the number one thing that I think gRPC has that might be different from other things is real-world usage. You might be thinking REST and JSON and all this stuff have plenty of real-world uses too, but the thing that I want to stress is that there are really good projects out there, both modern and mature, using gRPC that are open source, so you can go read their code. Two examples that I would point people to are Vitess and my own SpiceDB. These projects are different because gRPC crosses different language ecosystems, so you can see best practices and extrapolate those workflows regardless of what your domain or your project is. That's unlike REST APIs where, sure, if you're building a web app and you're doing it in Ruby, it makes sense for you to look at what folks are using in the Rails ecosystem.
But if you're trying to write, I don't know, database software, it may not be useful to see how REST APIs are implemented in web apps, for example. That's just my straw-man argument about real-world usage in other systems. The super cool thing about gRPC is that you get to see the idioms and patterns that are used in these mature projects.
You can straight up copy them. But not only that: because the core of the gRPC ecosystem is open source, you can actually go into the pull requests and commit messages for the software and read the justifications behind the decisions that they've made. Why are they doing particular things? Why have they chosen this? Maybe you'll see that, actually, this is a workaround for some other behavior, or that they're addressing legacy clients, for example. Those can be nice warnings for you:
oh, if you don't have that legacy, maybe you don't need to do this particular thing. You get to see these mature projects and see what their workflows are. What tools do they use for, say, deprecating RPCs or doing API versioning? These aren't things that you'll find in the gRPC documentation; there's no one way to do them. But if you look at all these different projects that are mature and following the best practices, you can arrive at what you think
that solution should look like for your use case, in a well-informed way that you might otherwise not be able to manage. All right, now that we've gotten that one out of the way, that was big number one. Big number two is Buf. Buf is an extremely fast protobuf compiler.
It's an alternative to protoc, which, if you're following any of the tutorials or official documentation for gRPC, is the compiler you're using. Now, the value of Buf isn't so much in the speed of the compiler but in the workflow that it provides. Buf is the spiritual successor of a tool that was internally developed at Uber to manage all of their APIs, and the big value that I think Buf gives
you is an improvement over writing bash scripts to drive your gRPC workflows and deal with these protobuf definitions. Most powerfully, it has static analysis and linting for your definitions. I think this is so important that I even wrote a blog post about it that is featured on Buf's website.
If you look at the last line of text there, that subtitle, I call it "the first day of the rest of your life," because the second you create an API, you're stuck with it. Once people start calling it, you're going to have to maintain it. Creating the code is just the first step; code typically outlives you if you're working on a project that is going to be serving customers, and you might not always have protobuf experts available to help you with design decisions.
And honestly, it's hard to keep up with all the changes. But the nice thing about Buf is that once someone learns what those best practices are, if they can codify them, they will build them into Buf as a lint rule, and then everyone who's using Buf benefits. If it's built into your CI or just your local tooling, you'll be aware the second you write the code that you're either breaking something, violating something, or not following the best practice.
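As a sketch of what that workflow looks like in practice (assuming Buf's v1 configuration format), a minimal buf.yaml enables both the codified lint rules and the wire-level breaking-change checks:

```yaml
version: v1
lint:
  use:
    - DEFAULT   # the codified protobuf best practices
breaking:
  use:
    - WIRE_JSON # flag changes that break wire or JSON compatibility
```

With that in place, `buf lint` flags rule violations as you write, and `buf breaking --against '.git#branch=main'` compares your working definitions against the last committed ones.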
It also detects breaking API changes: it can tell you whether the difference between what you've changed and what you had changes the protobuf wire format representation enough that you're going to break clients. That's incredibly powerful if you're trying to figure out how to move forward or maintain backwards compatibility with new iterations of the same API. The big value here, and the reason why I think Buf is number two, is this: if you're maintaining REST APIs, for example, let's use the Guiding Light, the North Star of the industry, which is Stripe.
Stripe has clients that haven't been touched in 15 years that are still calling the same APIs perfectly compatibly. But to do that, they had to hire a whole team to manage their API, write a bunch of custom tools, and typically run integration tests against their APIs, so they're actually testing the API
after all the code is there, the complete end-to-end experience. A lot of that same logic is what gets tested statically when you're using something like gRPC, or any kind of RPC language that has an IDL form we can run static analysis on: we can catch a good amount of these problems the second you write the actual definition of the API. We don't have to write a client; we don't have to generate a client or anything like that.
We don't have to test it end to end in a real system to tell whether there's a problem, and you don't need to hire all the engineers to build and maintain all of that for you. If you just have a static analysis tool that runs in your editor or your CI and does this for you, that is a huge win on the way to production. If you are not using Buf but you're using gRPC, I highly recommend you look into it. So, talking about tooling,
the next one is a library. googleapis is basically a collection of shared types from Google's protobuf APIs. They had a whole bunch of services that were externally facing to the internet, and they decided to refactor and pull out all the common types across those APIs. It turns out that types common across Google's APIs are probably going to be common across your APIs as well. You'll see general patterns here for error handling, managing times and durations, key-value
pairs, and data structures like that. The super nice thing about this is that, depending on what language you're running in, there might already be a library that exists for these types. So instead of you having to define your own new type for timestamps, for example, Google already has one for timestamps, and their timestamp library will convert between that format and the standard library's time type that is native to your language.
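A sketch of what reusing those shared types looks like: the Appointment message below is invented for illustration, but the imports are the real well-known and googleapis types.

```proto
syntax = "proto3";

package clinic.v1;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";
import "google/rpc/status.proto";

message Appointment {
  // Reuse the well-known timestamp type instead of inventing your own.
  google.protobuf.Timestamp scheduled_at = 1;
  google.protobuf.Duration expected_length = 2;
  // googleapis' rich error model, shared across many services.
  google.rpc.Status last_error = 3;
}
```

In Go, for example, the generated timestamp field converts to and from the standard library with `timestamppb.New(t)` and `ts.AsTime()`.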
So you get a lot of really easy conversions to and from your native language types; you can use all the libraries you've written and keep all your code native to the language and not coupled to protobuf, if you adopt some of the googleapis types. The one warning here is that it's kind of tricky to know whether a project has overlooked googleapis or deemed it too much complexity and not worth adopting.
The reason why, traditionally, a lot of folks haven't adopted googleapis is that, prior to Buf, there weren't really good workflows for importing libraries into your own protobuf definitions and code generation. Now that Buf exists, it's really easy to add a dependency. But prior to that, you would have typically vendored it at a particular version, which means copying and pasting the code and maintaining it yourself from that point on.
That's error-prone and clunky, and not a lot of people understand how to do the magical incantation of protoc compiler flags, so a lot of people have actually avoided using third-party dependencies when it comes to protobuf and gRPC. That should no longer be the case. So if you see useful types in here, I say go for it.
Next, in the same vein of trying to avoid writing as much code as possible: don't write it if someone else has. There's a custom plugin (I'll get into custom plugins later, spoilers) called protoc-gen-validate, which basically writes a validation method so you don't have to. In your protobuf definitions,
you can annotate fields and say, for example, you have a bytes field in a message, and you can annotate it and say this field should never be more than 128 kilobytes, or this string field should only contain strings that fit this regular expression. Once you've annotated that, you generate code that gives you this validation method, and calling Validate throws an error if any of the constraints that you associated with those types in the protobuf definition are not met.
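A sketch of those annotations (assuming protoc-gen-validate; the message and field names are made up):

```proto
syntax = "proto3";

package clinic.v1;

import "validate/validate.proto";

message UploadRequest {
  // Reject payloads larger than 128 KiB.
  bytes payload = 1 [(validate.rules).bytes.max_len = 131072];
  // Only accept lowercase alphanumeric identifiers.
  string user_id = 2 [(validate.rules).string.pattern = "^[a-z0-9]+$"];
}
```

Generating with the validate plugin then produces a Validate() method on UploadRequest that returns an error when either constraint is violated.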
This supports a variety of languages: it supports Go, C++, Java, Python. I'm not sure if the next part exists in all those languages, but in Go there's a really nice middleware that you can slot into a server, and it basically returns early with an error if any of the requests coming in are not valid, that is, if the validation method throws an error.
That means you don't even have to manually call the Validate method in your handlers to know that every single request coming in meets the constraints that you've annotated in your protobuf definitions. Incredibly powerful stuff: you basically don't have to write the code, and you have way less room for human error in what can be pretty sensitive stuff. You don't want to accept corner cases or very corrupted forms of RPC requests, right?
There's another project up here called grpc-gateway, originally written by Johan Brandhorst, and it works very similarly to protoc-gen-validate in that you annotate your protos. But this time you annotate them with an HTTP path and an HTTP method, and it generates for you a reverse proxy that sits in front of your gRPC application, converts JSON HTTP requests into gRPC requests, and then talks to your service.
Then your service writes a response back to the reverse proxy, which takes that response, converts it into JSON over HTTP, and returns it to the client. That means you can support legacy clients, and you can support environments that cannot use gRPC, maybe because they have some kind of memory restrictions because they're an embedded system, or anything like that. You can support all these environments and not write code to do it; you can just generate that code.
What's super cool about this is that not only can you generate the code to do that, you can also generate documentation for the HTTP API it generates, and you can use that same exact generation tool to generate clients. This is all using OpenAPI; if you're unfamiliar, you can Google that, or Google "Swagger," which is the thing that inspired OpenAPI. At the end of the day, what it lets you do is have API documentation and even generate clients for HTTP.
So what that means is you can write a gRPC service definition and have it generate the gRPC service itself and documentation for gRPC, plus the HTTP service and documentation for HTTP: services and clients for both. Incredibly, incredibly powerful stuff, supporting multiple protocols. It may even be a better way of just writing and maintaining REST APIs at the end of the day,
even if you choose to never use the gRPC APIs, or maybe your customers or users don't use them as much. There's a really, really cool thing for Go programmers here, which is that, because grpc-gateway is actually written in Go,
you can do this additional trick where, instead of running the reverse proxy as a separate process, you can run it in the same process, so that it calls directly into your app in memory. Even cooler, you can make them share the same port, if you're willing to sacrifice some performance, by using a trick where you read the first couple of bytes of a connection, determine whether the request is gRPC or HTTP, and then route it accordingly internally in your application.
All right, I mentioned middleware a little bit, and I think that one of the super useful and most interesting things about gRPC is that it actually supports client middleware.
When people think of middleware, they almost always think of server-side middleware: adding new behavior, like authentication or authorization, into their handlers and changing the handlers in a server. But what's super interesting about gRPC is that it has middleware on both sides, and that is less common but extremely powerful, so powerful that I'd argue it alleviates the need for an API gateway.
A lot of the time, let's forget about all the REST stuff I was just talking about and get back into why we're using gRPC and take full advantage of it. With a single line of code we can add authentication, compression, and modern observability, including logging, metrics, and tracing. We can do timeouts, rate limiting, recoveries, and exponential backoff, and all this stuff is a single-line import into your client.
You might be wondering, well, why would I want that in my client? Google actually believes internally in this philosophy of dumb servers, smart clients, and the value that has is it lets you iterate on your design a lot on the client side. You're going to do more work, and it may be a little bit more complicated, but it avoids putting behavior into the server that you're then going to have as tech debt forever.
So if you're not 100% confident that some behavior needs to be server side, first you should experiment with it client side and make a really, really smart client. A great example of this is actually kubectl. For a super long time in the Kubernetes ecosystem, the Kubernetes API that served us was pretty basic, and when you did kubectl apply,
kubectl did all of the logic to figure out what needed to be applied to the actual etcd inside of Kubernetes. But nowadays, a lot of that logic that was being done in apply was deemed core logic that should actually be in the server, and now we finally have server-side apply in Kubernetes. Great.
So that's an example: make the client really smart until you know something is core behavior, and then you can move it into the server. Smart clients: highly recommended if you're developing a service and you don't know exactly what should be in the server yet. So, custom plugins. I've mentioned a couple of plugins so far and how we can generate all these different things. What a plugin does is act as the hook that generates code in a protobuf compiler.
For example, when you generate your protobufs in a particular language, say you use Go, that is the Go plugin, and then there's a Go gRPC plugin, which generates your service definitions in Go. When I was talking about protoc-gen-validate, that generates your validation methods.
That is an additional plugin. When I talked about OpenAPI and the different HTTP content you can generate, those are additional plugins that you can run off of your protobuf definitions. But what's really cool is that we can write our own plugins; we're not beholden to just the plugins that already exist for gRPC and protobuf.
So if you see a problem, like these other projects that I just mentioned did, you can fix that problem. And, what's really interesting, you can even address problems that you find in the foundational plugins, for example the Go plugin or the gRPC plugin. For example, the folks over at PlanetScale, while developing Vitess, built this project called vtprotobuf.
What they noticed was that when you're writing Go code for gRPC, or for protobuf generally, you're actually using runtime type reflection in Go when you're encoding and decoding bytes to the protobuf wire format. That's really slow, and they were trying to write a high-performance server. So they realized: hey, we actually have all this information ahead of time. We know, because we have the definitions and we're generating the code to do all this stuff,
what the size of this thing is going to be when it's encoded, and we know all these types statically already. Why aren't we using that information when we encode and decode? So what they did is write their own custom plugin that generates code that does all of that. When you use their encode and decode, MarshalVT and UnmarshalVT, you're not doing any reflection, and it's way more performant than the built-in encoding and decoding that you get with gRPC.
So even when you hit the boundaries of what you can actually do with the core technology, it gives you a door to sidestep it and do whatever you need to solve your problem. Custom plugins are incredibly powerful. For a long time, though, there wasn't really a lot of documentation, or even a specification, around the input and output that a plugin takes, so writing your own was harder than it should have been.
I know that a bunch of companies, some of the largest gRPC shops, actually have pretty healthy internal plugins that they share among themselves, but really what we want is to build this ecosystem and have everyone feel empowered: when they have an itch, they can scratch it. So with that, my final feature is the mystery box, which is actually more of a warning that I'm going to leave you off with.
While I did mention a lot of super cool things, all done in the community, it's actually still really hard,
when you're in the gRPC ecosystem, to figure out what the best practice is. You can look at some really popular or really useful projects where they describe the value they're going to give you, and you can tell yourself, hey, that's perfect, that's exactly what I wanted. But it's really hard for you to know how they're doing it, whether they're still maintained, or whether they're using all the best practices.
For example, in the Go ecosystem there's an amazing library called gogo/protobuf. It was incredibly useful for many years. Unfortunately, it's end-of-life: it's unmaintained, it's using an old version of protobuf, and you shouldn't use it for new projects. But the functionality it provided for many years was second to none. It was an incredibly useful library for squeezing out more performance in protobuf and making various trade-offs depending on what domain you're interested in. Nowadays, at least, they have a warning on it.
And etcd uses gogo/protobuf. What's a shame is that there are very mature, critical projects out there that are not necessarily modern. When etcd adopted gRPC, for I think it was either API v2 or API v3, they adopted all the cutting-edge stuff. It looked right; it was modern; it was great, then. But then they never touched it, which is a shame, because it means that if you are using modern protobuf tooling and you go take the etcd service definitions and generate from them,
there are going to be incompatibilities there, which means they're now losing a lot of the benefit of the gRPC ecosystem: they're not able to leverage all these tools and modern new things, or have folks just take their definitions, generate, and run. What you actually end up doing in practice is having an etcd-specific client. It's no longer a gRPC client; it's an etcd client, because etcd speaks a particular flavor of gRPC that's old and bespoke.
It's really unfortunate, and if you go out there naively thinking, "oh, this is a critical project, they must be doing it right, I'm going to learn from them and copy what they're doing," you might end up adopting the wrong things, unless you do your diligence to make sure that what you're copying is right. So that's my word of warning.
If you have any other questions, you can find me on the social medias: Twitter, Mastodon, GitHub. My company AuthZed actually has a Discord where we discuss lots of open source technology, considering SpiceDB itself is open source. So if you have questions about how we're using gRPC, or you're interested in tricks that we have, or find anything on the issue tracker related to that, feel free to join and ask questions there. I'm also on the Kubernetes Slack.