From YouTube: gRPC Community Meetup: 1.26.21
Description
On January 26, 2021 the gRPC community held a meetup. Srini Polavarapu, Engineering Manager, Google, presented “xDS in gRPC for Service Mesh”. This demo introduces the xDS functionality in gRPC and discusses the exciting development of service mesh features in gRPC.
A
Okay, so welcome everybody to the gRPC community meetup, and thank you for being here. My name is Maria Cruz, I'm a program manager in Google Open Source Programs, and as we continue to welcome people into this meeting, I am going to give the floor to Srini, who is going to present a demo. It's all yours.
B
All
right,
great
yeah,
it's
exciting,
to
see
a
great
attendance.
Thank
you
all
for
joining
this
community
meet.
My
name
is
srini
polovarpu
and
I'm
with
the
grpc
team
at
google.
B
The east-west traffic, which is the traffic between the microservices, is typically several orders of magnitude more than the north-south traffic, which is the traffic coming from users to your front end. That's why it becomes even more important to optimize and make your east-west traffic as performant as possible, to save cost on compute and network. You may also want to limit the amount of time one service waits on another service to return a call, so that the overall user experience is not compromised.
B
So all the problems that I mentioned here are solved easily with gRPC and protocol buffers, and that's why gRPC is really a great tool for building modern applications.
B
Right, so continuing from the previous slide, there are other challenges too when you want to build microservices-based applications: how do the services discover each other, and how are requests load balanced across the many replicas of a service? Typically, a service is made up of many backends, or sometimes we call them endpoints, which are actually listening on the service port and providing the service. And then you also might want to secure the traffic between the services.
B
You may want to encrypt the traffic, and you may also want some easy ways to figure out if something goes wrong: you want metrics, and you want to be able to trace your calls from one end to the other. So even if you're using gRPC, you will face all these problems, and that's why you would need a service mesh solution.
B
For example, Istio is a very well-known, popular open source service mesh solution, and gRPC itself did not have any of this functionality until now. In the remaining slides, we'll talk about what this is about and how we are bringing this functionality into gRPC. Before that, let's understand how a gRPC application is deployed in a service mesh.
B
So in a typical service mesh, a sidecar proxy is deployed with each instance of the service. This sidecar proxy is basically running in the same pod, if you're using containerized services, or in the same VM, along with your application. The sidecar proxies are the ones that get the service mesh policies from a control plane, and these policies can tell the sidecar proxy how to route a particular request, how to load balance, how to encrypt a particular request, and so on.
B
So, like any other application, a gRPC application would simply do a DNS lookup for the service that it is trying to connect to and open one connection to the virtual IP of the service, the same as any other application. This connection is then intercepted by the local sidecar proxy, which then applies the mesh policies and routes to an endpoint in the destination service.
B
The request is also intercepted by another proxy on the server side, in order to apply the server-side policies, for example mutual TLS. If you want to encrypt and enforce a mutual TLS policy, then you need a proxy on the server side as well. So the application is essentially unaware that it is operating in a service mesh, and the service mesh functionality is seamlessly provided by the control plane and the sidecar proxies acting together.
B
Our internal benchmarking shows that you can get three to four times more requests per CPU-second if you don't have these proxies in the middle; this is the networking cost that I'm talking about. There is also a benchmark by the Istio project that shows that the latency of each call can go up from 0.6 milliseconds to 2.6 milliseconds at the 90th percentile.
B
There is a complex set of iptables rules created by the service mesh, which allows these proxies to intercept the traffic, and as a result it gets even harder to debug if something were to go wrong. Proxies are also additional binaries in your service mesh, which means there is an overhead of lifecycle management, like any other binary.
B
You have to worry about deploying, upgrading, and health-checking these binaries, making sure that they are running, and dealing with any security issues and all that stuff. All of this also makes it a little bit harder to manage these proxies on your own in VMs, when compared to pods.
B
Yeah, and then there is another aspect: if you want security, then typically the proxies are the ones that encrypt and decrypt your traffic, but your application is actually sending plain-text requests. That may be acceptable in most cases, but there are some cases where it is not; you want a true end-to-end encrypted solution. That's why we thought of adding the service mesh functionality directly to gRPC.
B
This kind of proxyless model is already being used at companies that are building global-scale services.
B
All right, so to make this happen, we decided to add support for the xDS APIs in gRPC. So what are the xDS APIs? These are the same APIs that the Envoy proxy uses to talk to the mesh control plane. Envoy is the de facto proxy used in many service mesh solutions, and the xDS APIs were developed as part of Envoy's development. The xDS APIs are open and extensible and have very strong community support, and we also want to support the coexistence of proxied and proxyless workloads in the same service mesh.
B
It made total sense to use the most popular APIs out there. That's why we picked the xDS APIs a couple of years ago, and we started developing and adding support for xDS in gRPC.
B
There is a lot more to the xDS APIs than what is shown in this diagram, but I will keep it simple here, just to explain the concepts. The xDS APIs are a collection of APIs to discover various resources in a mesh, and I'll talk about what these resources are. Hence the name xDS, where x stands for the type of resource: a listener, a route, a cluster, or an endpoint.
B
So you would have LDS, EDS, RDS, and all sorts of DSes, basically. In a service mesh, obviously, there are services, and these are typically known as virtual services, because a service is usually backed by a virtual IP and a port to which other services connect. But this virtual IP is in turn backed by a set of backends, or endpoints, which actually deliver the service on a different IP and port.
B
But now the proxy needs to know everything about such virtual IPs, because you don't know which application the proxy is fronting; it can receive a request for any of these virtual IPs, for hundreds of services running in the mesh. That's why the proxy does what's called a listener discovery request, in order to get information about all the virtual IPs in the system. In the case of gRPC, though, the application already knows which service it wants to connect to.
B
So you can do lots of fancy stuff with your routing, and a great use case is, for example, a canary deployment: where you want to do A/B testing, or roll out a new version slowly, you can configure the weights accordingly. Once you have the cluster to route the request to, the next step is cluster discovery, and with CDS you get the cluster configuration, which can include things like: what is the load balancing policy for this cluster?
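The weighted canary routing described above can be sketched in plain Python. This is an illustrative sketch, not the actual gRPC/xDS implementation: the `pick_cluster` helper and the cluster names are made up, but the weighted selection is the same idea an xDS weighted-clusters route action uses.

```python
import random

def pick_cluster(weighted_clusters, rng=random.random):
    """Choose a cluster in proportion to its configured weight."""
    total = sum(weight for _, weight in weighted_clusters)
    point = rng() * total
    cumulative = 0
    for name, weight in weighted_clusters:
        cumulative += weight
        if point < cumulative:
            return name
    return weighted_clusters[-1][0]  # guard against float rounding

# Canary rollout: send roughly 5% of requests to v2, the rest to v1.
weights = [("greeter-v1", 95), ("greeter-v2", 5)]
counts = {"greeter-v1": 0, "greeter-v2": 0}
for _ in range(10_000):
    counts[pick_cluster(weights)] += 1
```

Shifting traffic during the rollout is then just the control plane pushing new weights; the client code never changes.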
B
Is it round robin, is it least used, or what is it? There can also be security policies associated with the cluster, like it only accepts mTLS requests, for example. A cluster is made up of endpoints, which are the actual service instances, and these endpoints could be spread across different zones and regions.
B
If you are talking about a cloud platform, these are known as localities; you can think of a locality as a deployment in a zone. That information is obtained via the endpoint discovery service.
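Putting the four resource types together, the discovery chain a client walks through can be sketched as plain data lookups. Everything here is illustrative: the names, dictionaries, and `resolve` function are made up for the sketch and are not the real protobuf messages or gRPC internals.

```python
# Toy control-plane state: Listener -> RouteConfig -> Cluster -> Endpoints.
listeners = {"greeter-service": "greeter-routes"}                      # LDS
routes    = {"greeter-routes": "greeter-cluster"}                      # RDS
clusters  = {"greeter-cluster": {"lb_policy": "round_robin",
                                 "eds_name": "greeter-endpoints"}}     # CDS
endpoints = {"greeter-endpoints": ["10.0.0.4:8080", "10.0.1.7:8080"]}  # EDS

def resolve(target):
    """Follow the LDS -> RDS -> CDS -> EDS chain for one target."""
    route_name = listeners[target]        # listener discovery
    cluster_name = routes[route_name]     # route discovery
    cluster = clusters[cluster_name]      # cluster discovery
    return cluster["lb_policy"], endpoints[cluster["eds_name"]]

policy, addrs = resolve("greeter-service")
```

In the real protocol each step is a streamed discovery request to the control plane, and the responses are pushed again whenever the mesh changes.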
B
Right, okay, so how do you start using xDS in your gRPC applications and go proxyless? It is quite easy. The way we designed this is that all you have to do is, instead of using the default DNS scheme when you create a channel to connect to a service, use the new scheme called xds.
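In code, the only client-side change is the target scheme. A minimal sketch in Python, assuming `grpcio` is installed; the service name `greeter-service` is made up, and a real deployment also needs the xDS bootstrap configuration described later in the talk:

```python
import grpc

# Default behavior: resolve the target through DNS.
dns_channel = grpc.insecure_channel("dns:///greeter-service:50051")

# Proxyless service mesh: same API, but the xds scheme tells gRPC to
# resolve the target through the xDS control plane instead of DNS.
xds_channel = grpc.insecure_channel("xds:///greeter-service")

# Stubs are created on either channel exactly as before.
dns_channel.close()
xds_channel.close()
```

Channel creation is lazy in gRPC, so the xDS resolution and policy fetch happen when the channel actually connects, not at construction time.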
B
There will be a few other small things that you will have to do as new features like security roll out; for example, you will have to provide xDS credentials to your channel, or you will have to start an xDS server on the server side instead of a regular server. That's why it's important to plan ahead when you're writing your gRPC applications: it is good to consider making these things dynamically configurable, so you don't have to rebuild your applications when you decide to migrate to a service mesh.
B
So obviously we talked about the advantages of a proxyless service mesh, but what are the limitations? There are some downsides too, and the obvious one is that mature proxies like Envoy are very feature-rich, and they have a good ecosystem of third-party filters and tools, especially in the area of observability.
B
But gRPC is also catching up quickly; we are rolling out more and more features. Also note that gRPC has interceptors to provide extensibility, and there is a good OpenCensus integration with gRPC, through which you can get metrics and tracing. Just as a side note, OpenCensus is being replaced by OpenTelemetry, which will also have gRPC support.
B
The other downside is that to get xDS functionality, you will have to upgrade to a version of gRPC that has the functionality that you need. That is usually not an issue, because developers building microservices-based applications typically have very good CI/CD infrastructure. You can also continue to use proxies, like I explained earlier, if you think that a legacy application cannot be upgraded and cannot move to proxyless. Currently we have limited language support, but it is good enough to start off with, covering most major languages.
B
So the question is: is the idea that xDS servers will eventually be implemented by other service meshes or standalone servers, since otherwise our applications become tied to Envoy? Is that correct? The idea here is that the xDS APIs are becoming the de facto APIs for implementing service meshes. If you look at a lot of popular service mesh solutions, like Istio and those of other cloud vendors, you will see that almost everyone has adopted the xDS APIs to talk between their control plane and Envoy, which is the sidecar proxy.
B
So, the bootstrap configuration actually contains some platform-specific information, so you would expect the platform provider to give you the tools to generate the bootstrap configuration; I think that's what the question is. Normally you can write your own bootstrap generator, if you know what is required for a given platform, and right now it's quite straightforward: you just need to provide the address of the xDS server, the credentials, and some other small pieces of information.
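For reference, a gRPC xDS bootstrap file is a small JSON document along these lines; the server address, project number, and node ID shown here are placeholders, and the exact fields depend on the control plane you are using:

```json
{
  "xds_servers": [
    {
      "server_uri": "trafficdirector.googleapis.com:443",
      "channel_creds": [{ "type": "google_default" }],
      "server_features": ["xds_v3"]
    }
  ],
  "node": {
    "id": "projects/<project-number>/networks/default/nodes/<node-id>",
    "locality": { "zone": "us-central1-a" }
  }
}
```

gRPC locates this file through the `GRPC_XDS_BOOTSTRAP` environment variable, which is why the talk mentions injecting it into the environment.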
B
Yeah, okay, I talked about the downsides. So what is the current status right now? We made great progress in 2020: in June of 2020, we introduced the xDS infrastructure in gRPC for the first time. Prior to that, the team had been working for about a year to bring the xDS infrastructure into the gRPC stack in multiple languages, and after that we released a number of traffic management features.
B
As you can see in this list, there are more features in the pipeline which are scheduled to be released quite soon, and in this quarter and coming quarters you can expect some GA versions.
B
So, let's talk about which control planes support this today. If you are looking for control planes that support proxyless gRPC applications, take a look at Google Cloud's Traffic Director, which has production-grade support. Traffic Director provides global load balancing, such that requests from a client are routed to the closest available backends of a service.
B
If you have made your services global, another advantage of Traffic Director is that it uses centralized health checks to monitor the health of all the backends of all the services in the mesh.
B
This avoids an n-squared problem that you can run into, where each client tries to health-check each of the backends that it connects to; that kind of health checking can put a lot of load on your backends. The good thing about Traffic Director is that it also supports the gRPC health checking protocol, so you can directly use gRPC health checking in your applications. Traffic Director can also discover the endpoints in a given service as they come and go.
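The scale difference behind that n-squared remark is easy to see with a toy calculation; the client and backend counts here are made up purely for illustration:

```python
clients = 300
backends = 200

# Every client health-checking every backend it connects to:
per_client_checks = clients * backends   # 60,000 probe streams hitting backends

# A centralized checker (e.g. the control plane) probing each backend once:
centralized_checks = backends            # 200 probe streams
```

With centralized checking, each backend answers a constant number of probes regardless of how many clients the mesh has.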
B
It's often the case that you are upgrading your deployment to a newer version, so it slowly phases out old pods and brings in new pods; the list of endpoints keeps changing, and Traffic Director keeps track of that and provides the latest and greatest to the clients. Your services can also be scaled up and down, based on the load and the various configurations that you have set, and it works on GKE as well as GCE.
B
We also see another thing called go-control-plane, which is a lightweight xDS server that some are using as a starting point for building their own xDS servers. Right now there is an open issue in that project, but not much work has been done to support proxyless gRPC. All right, I'll take a few questions here.
B
So: Envoy users usually complain about the configuration overhead; does xDS gRPC make user configuration easier or harder? Configuration is basically done based on the control plane that you're using. If you are using Traffic Director, then obviously you will go figure out how Traffic Director is configured; if you are using Istio, then you have to go figure out how Istio is configured.
B
The control plane is responsible for taking your configuration, converting it into the xDS APIs, and then pushing it down to the data plane components, like Envoy and proxyless gRPC applications. I would say the xDS APIs themselves are complex to understand in the beginning, but most users don't have to worry about that; all they care about is how to configure their services at the platform level.
B
So, a gRPC client can only talk to a gRPC service. If you want to talk to HTTP services, you will obviously need a gateway in the middle, like the gRPC-HTTP gateway that converts your REST calls to gRPC and vice versa. So you can use an HTTP service in the mesh if you have a gateway like that; all the requests will have to pass through the gateway, and that HTTP service can also be part of the service mesh.
B
Another question: for local development, what choices are available for an xDS server? I think you should take a look at go-control-plane, but, as I said, there is no support available yet for proxyless gRPC. If you're inclined, you can take up that issue and contribute. Okay, Mark says that's exactly what he was looking for; that's great.
B
All right, the same attendee is asking the question again, saying it is more of: do you still need a proxy sidecar? Yes, I mean, any such service in the mesh would need a sidecar proxy to provide the mesh functionality, because the application remains unchanged.
C
All right, just to add a comment to that last response: ultimately, your proxy sidecar might be providing other functionality, such as protocol translation. In this case, if you're looking at translating from HTTP to gRPC, that's an independent piece of functionality.
C
Does that mean that, given that it's part of that xDS service mesh, it has the routing information to talk to that service? Am I able to completely eliminate the sidecar on the gRPC server, or do I still need the sidecar to talk out to another REST-based HTTP service in the service mesh? That service itself would need its own sidecar, but that is not what I'm questioning; I'm asking more about this gRPC server, if it needs to go talk to some other REST server. It's not doing any protocol translation.
B
Yeah, thanks for the clarification; the answer is no, it would not require a sidecar. You are actually connecting to another service; let's say you can do a DNS lookup for that service and directly make a call to it.
D
The important thing, though, is that in that case you're using some other HTTP library to do those requests, and that would not use the service mesh for that call. If you're wanting your gRPC traffic to participate in the service mesh, it would; but if you're just using something off to the side, that's sort of separate, then it would not participate, unless you start doing what you would do in other cases, like having a sidecar or something like that to force it to.
B
Yeah, another important thing worth mentioning again: even within gRPC, let's say you are calling out to two different gRPC services. One can still be called through xDS, and the other can be called through DNS, because this is all per channel, based on the scheme. So, let's say, you could have a legacy application sitting outside of the service mesh.
B
All right, I have one more question here: is Traffic Director the only xDS server for gRPC? Yes; as far as I know, Traffic Director is the only production-ready product out there that supports gRPC proxyless workloads.
D
Thinking about it, yeah, I think it's asking: is Traffic Director only for gRPC services? You can use Traffic Director with Envoy directly as well, if that's what the question is.
B
Oh okay, yeah, so Traffic Director is an xDS-compatible control plane, so you can deploy your service mesh using Envoy also, and it also adds gRPC proxyless support.
B
All right, so this is my last slide here. If you'd like to understand the architecture or contribute, the gRFCs are the best place to get started; they have a lot of information about the architecture and how it has been implemented in gRPC.
B
There are a lot of other gRFCs in flight for upcoming features, so take a look at those too, and keep an eye on the xDS features link here; we keep updating it as and when we add more features. There are a few other useful links here, and with that I conclude my presentation. Any more questions?
A
Okay, cool, thank you. Before I do the breakout rooms, where we are going to connect in smaller groups, I wanted to give everybody this information. If you heard about this meetup on Twitter and would like to receive an email, this is the Google Group that we have set up for gRPC, so feel free to join that, and then you will find out about future meetups as well. And if you would like to present a demo showing how you use gRPC, we would love to see that as well; this is the email where you can reach us about it. So don't be shy.
C
Sure. We have Google Dataflow as the gRPC clients, and the gRPC services are running on Kubernetes (GKE). Can I take advantage of Traffic Director to connect the Dataflow clients to the gRPC services?
B
So the Dataflow clients are also talking to the Dataflow service, while talking to other services in the service mesh?
C
Yeah, the Dataflow clients are receiving the data from some GCS bucket, and then they are calling the gRPC services to ingest data into our backend database; it's a kind of ETL job.
C
So the challenge we had in the past was putting up a kind of single gateway, having an ingress to front all the things: a single point of failure. We also had issues where, when you increased the pods, the default L4 gateway was not able to detect the new servers.
B
Right, yeah, so this is a great use case. Sometimes middle proxies or gateways can become a bottleneck, because if you have a large number of services, or a large QPS flowing through them, scaling can become an issue. If you use a proxyless service mesh, or a service mesh in general, then the clients can directly talk to the services without needing to go through a middle proxy. Obviously, in that case, you need a service mesh control plane, and Traffic Director works perfectly well for GKE.
D
I'll just note that in some of those environments, I think Dataflow is one, it calls you, you don't call it, sort of idea: it spawns your binary and things like that. You may have to set up the environment outside of Dataflow, so that everything is available whenever your process ends up running. But if you had already set up Envoy and tried that out, you would have gone through that process already.
D
But while almost everything is in the library itself, there's some bootstrap information that gRPC will need, and that is generally injected into the environment. In some of those cases, where your component is built into a framework and executed off to the side as a sort of separate process, you might need to figure out a little bit, in those environments, how to plumb it.
C
Thank you. I'm also curious about the Java client: can it also use this xDS scheme? Will the Java client support the xDS endpoints as well, or is this only Go right now?
B
So right now we have Java, C++, Go, Python, PHP, Ruby, and C#. But moving forward, we are not planning to add support in C#, because there is a separate pure C# stack, the .NET stack, that is being developed; in the future, if needed, we will add xDS support in that.
B
So, is Rust supported? Okay, so I don't know the plan for Rust; I'm not sure if any Rust contributor is here on this call, maybe they can chime in, but as far as I know, it's not planned.
B
If that's the case, then note that most of the xDS functionality so far has been implemented in the C core, so the wrapped languages, like Python for example, get this functionality for free. That won't entirely be the case for upcoming features.
B
But those will be very small things, so if Rust is wrapping the C core, then it shouldn't be very difficult to get xDS functionality in Rust.
C
One more question, sure, yeah: this is regarding the transport thing we were just discussing. I was exploring the application-level transport security (ALTS) that Google is supporting, basically workload identity or something like that, to provide some kind of mutual identity for the microservices, plus transport security.
C
So, because you mentioned ALTS specifically: Google has published a white paper on ALTS, and it is being used in certain scenarios for internal communication, or for connectivity from Google's public cloud to Google's private cloud.
C
But in the service mesh world, the emphasis going forward is to use TLS instead of ALTS, and there is definitely some amount of mTLS support that's been available in gRPC for a while. I'll let folks on the bridge comment further on what's available.
B
Yeah, so for service mesh, when you want to do mTLS between your own services, we are working on providing the mTLS support via the xDS APIs. You can see some gRFCs in flight that are trying to add this functionality, and the solution will basically automate getting an identity-based certificate from a certificate provider, or a CA, basically. We will support one type of interface that will be easy to implement.
B
It is based on file-based certificate rotation, so you don't have to worry about refreshing your certificates; it will basically rotate certificates periodically and then provide the mTLS, including the identity.
B
So, I mean, if you look at what Envoy provides today, it's all provided as part of the overall solution; if you look at Istio, for example, you can see that this is part of the solution.
C
One more question, about the gRPC ecosystem: there is the gogo protobuf thing, and in the beginning I had a little confusion about which one I should use. I'm curious what the community is using: is everyone using Google's protobufs?
C
So both should work, whether you use the golang proto implementation or the gogo proto implementation. I will say it seems like a lot of our users do use gogo proto, but we on the team don't use it at all, and I'm only personally a little bit familiar with it; I've used it, but I'm not that familiar with it. So my recommendation would be to use the golang protobufs.
C
So, is anybody using protobuf tooling? There is a project called buf, or something like that, for managing protobufs and linting protobufs. Since it's an open discussion, I thought I'd ask if anybody has opinions on those kinds of tools.
C
Yeah, in general, I was using prototool in the past; I think that was from Uber, and that has been deprecated, and they have also switched to the buf tool.
C
Since we are on that topic, they are working on something really interesting called the buf protobuf schema registry, so we could use protobufs as dependencies, something like how we have a Docker registry and can use those registry images in other Docker images. The same way, we could construct our protobufs, and that would kind of solve a lot of our problems with the import paths of protobufs. But I have no idea how far they have gotten, or whether there is any other project working on a similar problem.
C
I think it's a rough ask, but most of the storage systems, when they store the data in files, store it in Avro files, and I was wondering if there is an easy way to translate. The only library I've found is LinkedIn's Avro parser for the Go language, but it's not like we have ready-made code for it, and it's more of a generic record-processing kind of thing.
D
I think some projects will pass generic payloads; you can imagine something like databases, where it's a little bit more generic data that just sort of gets plumbed around, and you might have proto or JSON or Avro, or something like that. I think they just sort of keep it as bytes in the proto.
D
Yeah, I'm not aware of anything that would, let's say, take an Avro schema and spit out a proto schema, and then also do some codegen to convert between the two, or something like that. I've not seen something like that.
A
Okay, it sounds like we are approaching the end of this meetup. Thank you all for joining and participating so actively. Again, if you have examples of how you use gRPC, we would love to see them; I'm sharing that information once again. There's a Google Group and an email, so please do email us about this, so we can feature you in the next community meetup. I'm going to go ahead and stop the recording. Thank you, everybody, for coming, and we'll see you again in the next one.