From YouTube: gRPC September Meetup / Replacing a homegrown gRPC-like framework with the real thing, by E. Zamudio
Description
In this talk, Enrique Zamudio walks through a homegrown gRPC-like implementation and what it would take to replace it with gRPC, considering the cost of migrating and how much can be reused. He shows some diagrams and a simple example of a microservice.
A
It was written, is written, in PHP, and it's a very simple thing: a typical monolith, just one big thing, everything tied up together, talking to a database, and that's it. The founders of the company knew that this wasn't, you know... this was an MVP. This was a way to get this thing out the door and start attracting customers, but it wasn't really going to scale.
A
You cannot really migrate the logic; you have to rewrite it, because it's a completely different thing, obviously. But the thing here is: if you take something from PHP and rewrite it... let's say this thing was written with the paradigm of model-view-controller.
A
There isn't much you can do about it. This is going to be determined by whatever database you use; here is going to be the protocol of the database, and you try to put these two things, you know, close by and so on. But because the previous architecture was just this little arrow, where PHP is talking directly to the database, and now you have this thing in the middle, there was some concern about performance and speed and response times and so on.
A
Of course, the typical solution, very common, is to start writing these things and just use REST or REST-ish interfaces everywhere, with HTTP and JSON, but that seems highly inefficient in this environment, because this is not a user-facing interface; this is something internal, right? So maybe to display one page for one user there may be several calls from here to these services, so you need this thing to be very efficient, and so we defined some requirements.
A
Then, you know, what's the size of this data? Size matters in this case, because the bandwidth that you have is finite, is limited, and compact messages will help you cram as many as you can into the limited bandwidth that you have. So compact messages mean more messages that you can send per second, and on the other side, on the receiving end, you need fast parsing of the messages so that you can decode the data quickly.
A
So these are the three things that you need. Obviously, some binary format will help a lot with this, which JSON is not. And then on the reliability side, not so much about performance, but you need to find the balance: we need something with well-defined contracts, something that you can specify.
A
"This is the data that I want to send." And, especially on the Java side, you have a compiler; you want to validate as much as you can at compile time, and so we need something where you can define a contract. And by contract I mean even just the format of the data: specify what you're sending, what you're expecting, and so on. Obviously static typing helps a lot. So maybe that sounds familiar to you; maybe there's a solution that fulfills all these requirements.
A
No PHP support. I'm not talking about today; remember, I just said very early, like January 2017, and back then there was no PHP support at all. So gRPC worked for Java (it was version 1.1, I believe), but there was no gRPC for PHP, so there was no way that we could use this.
A
However, the underlying technology upon which gRPC is built is protobuf, and protobuf you could use on PHP: no gRPC, but raw, plain, vanilla protobuf you could use. So we said, well, protobuf still fulfills all of these requirements, except that, you know, gRPC is nice because it takes care of the communication itself.
A
If there's one thing I don't like about protobuf, it's the name. I think it's very misleading. I mean, after you use it, it makes perfect sense: it's protocol buffers, emphasis on buffers, not on protocol, right? It's not a protocol; it's buffers for any protocol that you want to use. It's protocol agnostic; it's just a messaging format, a way to encode data.
A
But if you don't know what protobuf is, the "protocol" part of the name makes you think that maybe it's a communication protocol, which it's not. And so gRPC takes care of the RPC part, and with just protobuf you need to figure that out for yourself. So we said, okay, what are we going to do? We know what we're going to send through this little arrow, but how are we going to send it?
A
I honestly don't know if protobuf was maybe kind of inspired by the design of this thing, because the format is such that it's very efficient to parse as well, but it's more limited than protobuf, I'd say; it's not as versatile, because it's oriented towards one thing, it's not general purpose. But anyway, the usual way that you send...
A
...these kinds of messages, over an asynchronous socket, is something like this. With the ISO thing you usually use 16-bit integers; we went with 32-bit integers so that we were not limited to 64 kilobytes in a message. And so what we do is: you have a protobuf message, you encode it, and you measure it.
A
Let's say that it's 600 bytes; then you encode the number 600 as a 32-bit unsigned integer and you send that first, so the receiver knows to read four bytes, interpret that as an unsigned integer, and that tells you the length of the message that's coming. After that, you read that many bytes. The reading part is just that: read four bytes, interpret them as a number, then read the rest of the message, and so on.
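The framing he describes can be sketched roughly like this in Java. The class and method names are hypothetical, not the actual code from the talk, and the payload stands in for an already-encoded protobuf message, treated here as opaque bytes:

```java
import java.nio.ByteBuffer;

public class Framing {

    // Frame one message: a 32-bit big-endian length prefix, then the payload.
    public static byte[] frame(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length); // e.g. 600, encoded as 4 bytes
        buf.put(payload);           // the encoded protobuf message (opaque here)
        return buf.array();
    }

    // Reverse it: read 4 bytes as the length, then exactly that many payload bytes.
    public static byte[] unframe(byte[] wire) {
        ByteBuffer buf = ByteBuffer.wrap(wire);
        int length = buf.getInt();
        byte[] payload = new byte[length];
        buf.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        byte[] encoded = new byte[600];   // stand-in for a 600-byte protobuf message
        byte[] wire = frame(encoded);     // 604 bytes on the wire
        System.out.println(wire.length + " -> " + unframe(wire).length); // 604 -> 600
    }
}
```

On a real socket the receiver would run `unframe` in a loop, blocking until four bytes arrive and then until the full payload arrives, which is exactly the loop described next.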
A
It's a loop. And then, you know, I started doing tests with this and it worked nicely, but the first obstacle that I had was (I didn't know this at the time) that protobuf does not...
A
Protobuf does not encode the message type inside the message itself. We know that it doesn't; that's why it's so compact, right? It doesn't encode the field names or anything, but it doesn't even encode the message type. So you need to know what they're sending you on the receiving end to be able to parse it, so that you can use the proper parser for that message type.
A
What if I need to receive like three or four types of messages, right? Three or four different request types?
A
It wouldn't be practical to open one TCP port per message type; I mean, it's going to look like Swiss cheese or something. So I thought about this, and we came to the conclusion that we can always send one request type, where we wrap the protobuf in another protobuf message.
A
So any message that you want to send, you encode it, and then you wrap it in a very simple request, which is just called a wrap request. And then you put the message here (it's an array of bytes) and you include the message type, let's say a login request. So there's a wrap request; you're always reading wrap requests, and inside there it says "login request" and the encoded message. How do you interpret that?
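As a sketch, the wrapper described above could look something like this in protobuf's IDL; the field names and numbers are illustrative, not the actual definitions from the talk:

```protobuf
// Hypothetical wrapper: the receiver always parses this one type first.
message WrapRequest {
  string message_type = 1; // e.g. "LoginRequest"; tells the receiver which parser to use
  bytes payload = 2;       // the inner protobuf message, already encoded
}
```

The inner message is encoded first and placed in `payload`, and the wrapper itself is what goes through the length-prefixed socket.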
A
So we had a Netty server, which can asynchronously receive the connections and the requests, and with Netty this component receives a wrap request. Netty has this concept of handlers, message handlers; more than that, it has this concept of a pipeline, in which there are several components that can be parsing the message and processing a message as part of a pipeline.
A
So we even had components here: there was a component that could read a 32-bit integer to determine the length of the rest of the message, because that's a very standard thing, so we didn't have to write that. And there was a protobuf parser, also as part of Netty, so we just added a protobuf parser for wrap requests, and then this parses wrap requests.
A
So these request processors: each processor can handle only one request type. So there's got to be one request processor for a login request, another one for a logout request, another for a create-account request, or whatever. You can have as many as you want here. Each one of these has three things: the message type, so it knows the message type that it must process.
A
It needs to go and look up inside; it has a map with request processors, and if it finds a login request processor, or rather a processor for this kind of request, it gets the parser from it, and then you can parse the request, the array of bytes that was inside the wrap request. With that, you instantiate a login request, you give it back to the processor, and then it processes it.
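That lookup-and-dispatch step could be sketched minimally like this; all the names are hypothetical (the real code lives in Netty handlers and parses actual protobuf types, which is elided here):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the dispatch described above: one processor per message type,
// looked up by the type name carried in the wrap request.
public class Dispatcher {

    public interface RequestProcessor {
        String messageType();               // the request type this processor handles
        String process(byte[] encodedBody); // parse the inner bytes, then run the business logic
    }

    private final Map<String, RequestProcessor> processors = new HashMap<>();

    public void register(RequestProcessor p) {
        processors.put(p.messageType(), p);
    }

    // Given the type name and body unwrapped from a wrap request,
    // find the matching processor and hand the bytes over.
    public String dispatch(String messageType, byte[] body) {
        RequestProcessor p = processors.get(messageType);
        if (p == null) {
            throw new IllegalArgumentException("no processor for " + messageType);
        }
        return p.process(body);
    }
}
```

Registering one processor per type ("LoginRequest", "LogoutRequest", and so on) gives exactly the one-port, many-message-types behavior the wrap request enables.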
A
In case these components process things synchronously, this one will not be stuck waiting for the response. And basically that was it; that's pretty much the server side, right? On the client side, our client, if you remember, is going to be PHP.
A
So we wrote this thing called the protobuf trait, and it created the low-level TCP sockets to the services that you need to connect to. It knew how to wrap protobuf requests, so it had a method called, you know, sendProtobuf, and sendProtobuf would take any protobuf message.
A
Serialize it, get its message type, put that inside a wrap request, and then write that to a socket. It also knew that in PHP there's no asynchronous anything; you just send that and then you wait; you can set a timeout and you wait for the response. And so it also knew how to get a response and parse it. We didn't, by the way, wrap the responses.
A
If you remember, here I'm saying that we wrap the requests, but we didn't wrap the responses. Why not? Because, you know, PHP is the only client. So if it's a synchronous thing, if you're sending a login request, you know that you should get a login response. There's not like...
A
...there are many; you design things so that a particular request type should only have one response type. That's a limitation that we imposed on ourselves, and that way we don't have to wrap the responses. We also handled timeouts, and we added some metrics so that we could know, because there was this concern about network time, how long this was taking: the whole round trip to the Java service.
A
PHP was now supported, so I thought, this is great, now we can use gRPC, maybe. But then it said, yeah, we support PHP, but it's only PHP 7, and we were using PHP 5. And we tried to upgrade to PHP 7, because it was like, can we upgrade so that we can use gRPC? And we tried to upgrade, but there was a...
A
...there was some weird bug that we found when we tried to upgrade to PHP 7, with parsing hex strings. Because this is a cryptocurrency exchange, you know, Bitcoin addresses are encoded in a weird thing called Base58, similar to Base64; but Ethereum addresses, and with Bitcoin the transaction IDs, there are a lot of hashes and stuff that are encoded as hex strings, and some of those strings made PHP 7 crash.
A
Now, in 2018 the team starts to grow and the software starts to grow, and suddenly PHP is not going to be the only client, because we need to develop new products and things, and more services come into play, and suddenly we need some of these services to talk to the services that we already have. And then comes a big decision: what do we do?
A
I mean, it doesn't matter if PHP does not support it, right? We're talking Java to Java; we could use gRPC. But then, you know, this service, which I'm calling "server" here because of the client-server scheme, already speaks one thing, which is our protobuf over TCP sockets. We never gave this thing a name, by the way.
A
But it's already talking using this, so why go and add gRPC to these ones? We're still going to use the other thing because PHP needs it, so I think it's more efficient if we just reuse that, and then we just need to write a client for this side. How do we write the client? And then we thought, you know, in PHP we had the protobuf trait, and that was good enough for that, but in Java we can do better, right?
A
And I think, maybe I'm getting a little philosophical here, but I think it depends on your perspective. I don't know if my brain works funny or what, but I see these things...
A
I think most people look at these things from the outside in: this is a service, and I talk to it by using, you know, a REST method with a URL, and I send it some JSON and it gives something back to me, and that's what it does. But when you're writing these things, you look at it from the inside out; at least I like to look at them this way. So you write a business component, let's say a login service, and you test it and it works.
A
But how do you expose it? How do you let other services talk to this component? You do it through these things, right? I mean, that's how PHP is talking to the business component. But if you look at it from the inside out, what you're really doing is exposing the behavior of the login service through these handlers, through the protobuf requests that it receives. So you're exposing the behavior of this component, you're making it accessible. This is the interface, and then there's the word "interface", right?
A
Let's say that this component, a login service, has to be, if you're doing things cleanly in Java, the implementation of an interface. So there's probably some interface somewhere called LoginService, and it defines a method called login, with a username and a password, and it has a login result. And then you're exposing this stuff, right? I mean, this login result is probably a class, like a DTO, with either success or error, or what kind of error, or maybe even the user and stuff like that.
A
The handlers that are going to talk to this component should convert that login result into some login response and send that back over the, you know, the protocol. But inside here, you don't even know about protobuf, right? You can give this to someone and say, please implement this, and they don't even know what you're using; I mean, you could rip this out and replace it with REST, and this should still work the same way. So on the client...
A
...you just do the same thing that PHP was basically doing, which is: create a request, a protobuf request, fill it with data, send it over the wire (this thing has to know the host and port of the destination service), wait for a response, and then convert that response, probably a login response or something, to the login result object that you need to return externally, and you return that, and that's it.
A
So a proto-shim is a proxy that implements the same interface that a component in another service has, and then, you know, the implementation is just talking to this service. And it's really neat, because we were working in a monorepo environment, and so this gave us another advantage. When you work against interfaces, this is the beautiful thing about it.
A
So what we usually do is: if you have the login service implementation that this guy uses, you can use it here for testing. So in the test environment you replace the proto-shim with the actual component, and you can test away, and you don't need to mock.
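That swap is possible precisely because the shim and the real component share an interface. A minimal sketch, with hypothetical names and the socket transport elided:

```java
// Sketch of the proto-shim idea: the shim and the real component implement
// the same interface, so a test can swap in the real implementation (or any
// fake) with no mocking.
public class ProtoShimExample {

    public interface LoginService {
        String login(String username, String password);
    }

    // The real business component, living inside the server process.
    public static class LoginServiceImpl implements LoginService {
        public String login(String username, String password) {
            return password.isEmpty() ? "ERROR" : "SUCCESS";
        }
    }

    // The proto-shim: same interface, but its body would build a protobuf
    // request, wrap it, write it to the socket, and decode the response.
    public static class LoginServiceShim implements LoginService {
        public String login(String username, String password) {
            throw new UnsupportedOperationException("transport elided in this sketch");
        }
    }

    // Callers depend only on the interface, so either implementation works.
    public static String authenticate(LoginService service, String user, String pass) {
        return service.login(user, pass);
    }

    public static void main(String[] args) {
        System.out.println(authenticate(new LoginServiceImpl(), "alice", "secret")); // SUCCESS
    }
}
```

In production the caller gets the shim; in tests it gets `LoginServiceImpl` directly, which is the monorepo advantage described above.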
A
So that's how we've been working so far. This is what we have today.
A
I honestly have this question for you. Maybe it's going to feel like a weird kind of talk, because, you know, you can ask me questions at any point, but I want to ask you, if you're like gRPC experts: should we bring gRPC into the mix? Should we continue using this thing?
A
The volume has grown like crazy, the number of services... I don't even know how many services we have running; maybe it's like 50 or something, and they're all talking to each other with this thing. And we've had all kinds of issues, bandwidth or response time from the network, but this particular component has never been an issue.
A
It has performed really well, and we know this because we added all this stuff to it. The first thing we did: we used Micrometer, a really cool library, so we are timing everything. When we get a request, we count it, and we time how long it takes to send the response, which is not the performance of this library itself...
A
...but rather of the component that needed to do things. But we do that, and we know about it. There's a health check that we did, and we added some other bells and whistles, like: if a service is going down (because we're running in Kubernetes), it's being taken off the load balancer.
A
But if you were still connected to it and you sent a request, it will tell you that it's in the process of shutting down, and can you please drop that socket, because it's ignoring requests now, and you need to resend it. And when you resend it, it's likely to be sent to another pod that's up and running. So we have that. We also added recently, thanks to domix, who's here today...
A
...in the audience: we added the OpenTracing API, so we can do open tracing with these, and we use that also for metrics. And we added this thing to the wrapped request, so we can trace all the requests, from PHP to a service, to another service, and to another service, and we can see how complex a request is, because it can trigger one service talking to another, but then that one talking to another three, and so on, all triggered by the same event.
A
Also, we added Resilience4j to the clients. So this client, this protobuf code, is not as simple as this; this is almost pseudocode here, this send request.
A
We use Resilience4j, so we have circuit breakers between these two services, and that allows us, if this service gets overloaded for whatever reason (maybe Elon Musk tweeted something stupid in the morning and people are flooding to buy or sell Bitcoin or whatever), to make sure it won't overload these other ones.
A
That did happen in the past, and that's why we added these things: this service would get overloaded, and it would overload this one, which in turn would overload three other ones, which in turn would overload the databases, and things crashed, and it wasn't pretty, until we added circuit breakers here.
A
So we have Micrometer, Resilience4j, the OpenTracing API, and also recently, because things are getting complex enough, we're introducing event-driven architecture, using message brokers to try to decouple systems. Because this thing is client-server, right? It's direct host-to-host communication where you're sending a message. And I mean, it's not that one is better than the other; both things work, and you need some of each for different use cases, but we're pushing towards this.
A
So it's going to be one more moving part in all these systems, and...
A
Well, this is not like a requirement; I just wanted to mention this now. Does it make sense to include one more moving part, which would be gRPC, considering that we're adding this one? And I honestly don't know if gRPC has all of these bells and whistles that we have added over time to our library, because we would need to keep them: now that we have open tracing, we're not going to give it up; now we have resiliency, we don't want to lose that. This has been working.
A
We have been adding these things over time, and it's working really nicely today. So, I don't know, what do you think? So now I'm open to questions and answers too. Thank you.
B
Thank you so much, Enrique. So, if anyone has a question... We have also released a poll, so it would be very helpful if you could just fill it in. Let's just give people a minute to think about their questions, but if not, anyway, we're going to upload this video on YouTube, so you will be able to...
B
Let me see... Srini. Srini, please.
C
Hi, yes, thanks for the great presentation. I think it gives a very good historical background on why gRPC was not chosen here. I think those initial issues led to developing your own in-house framework, and then continuing with that. But you asked a question about why not gRPC, and I think there are a few things that I can see here. Obviously, not everybody can develop such a framework; not everyone has that kind of resources and time, right?
C
Secondly, I think the recent developments in gRPC essentially bring in a lot of the so-called service mesh functionality, for example circuit breakers, which is what you mentioned. So those things are all now...
C
...coming into gRPC through the service mesh functionality integration. I'm not sure if you're keeping an eye on those developments, but the idea is that almost all the popular service mesh solutions out there, whether they're open source or provided by cloud vendors, are all pretty much standardizing on this thing called Istio, the service mesh.
C
So, for example, if you want to send circuit-breaking configuration to some clients, you can do it in a centralized way through a control plane, which can also set up a variety of configurations for different clients in different ways. And because this xDS protocol is kind of common between many service mesh implementations, gRPC decided to implement the same thing, and now we have released a bunch of features: you can discover services, you can split traffic, you can do header and path matching and route your traffic to different services.
C
You can do circuit breaking, you can do timeouts, retries, and all those things, basically. Okay. So I think PHP is one common problem that I have seen, where a lot of companies that are old enough standardized on a PHP monolith a long time ago, and that becomes a problem to break down and move. Unfortunately, gRPC still doesn't have, I think, PHP server-side support.
C
So I still think there is a lot of value being added to gRPC as we speak, so it's definitely worth taking a look and seeing whether you want to build your service mesh from scratch, or leverage one of the open-source offerings, or sign up with a control plane managed by a cloud vendor, basically.
A
Okay, that's interesting, because, you know, when we found out we couldn't use it, back then I don't think it had this tracing and all this service mesh stuff; it didn't have it yet. But like I said, it was version 1.1.
A
gRPC has also grown and evolved, and probably has all these nice things. Actually, one of our DevOps team members was asking me about this today.

A
He mentioned the service mesh thing and something else that gRPC seems to have. So it looks like our DevOps team would be eager for us to switch.
C
There are some advantages for them, right. If your services are growing to a size where you need, for example, global load balancing, like deploying your services in multiple regions, oftentimes just the load balancing provided by Kubernetes is not sufficient.
C
How do you ensure that a request from a client goes to the closest service instance, rather than just being spread blindly across all the instances of your service in round-robin fashion, or whatever balancing mechanism you choose? Locality-aware load balancing is important because you want to keep your latency as low as possible, and these are some of the things that open-source solutions can't solve, because this is a control plane thing.
C
This is the brains behind your service mesh: you need to have a global awareness of where your client is coming from, which region the request is being generated in, and where your server instance is sitting, so that you can take the shortest route. And when the closest server's capacity is filling up, how do you start failing over, or overflowing, to the next closest zone in the same region? And if the zone fails, how do you go to the next one, and so on?
C
Right, and those are the kinds of things you start to face when your service is becoming really big and you need regional redundancy, for example. And then, how do you do health checks at high scale? For example, if you have thousands of service instances, Kubernetes has some scaling issues where health checks become problematic at that scale.
C
That means you need a centralized way of checking the health of your endpoints, and you want to quickly update the client, saying that these are not available anymore, instead of the client making a request and figuring out that a server is down, or slow, or not going to respond quickly, and then retrying on another backend. So all that logic you have to build into your own framework, but with service mesh...
A
This is really interesting. Thank you for that info. I hadn't thought about that, and wow, yeah, I think I'm sold.
B
Thank you so much, Srini. Thank you so much, Enrique. Okay, so, as I told you, this recording will be on YouTube, and thank you for answering the poll. We will let you know about the next meetup. Remember the data lake management meetup this Thursday, 10 a.m. Pacific time, and thank you so much for being here; we're looking forward to seeing you again.