Description
Challenges and Learnings: Building gRPC Python AsyncIO Stack - Lidi Zheng, Google
One of Python 3's most significant improvements is AsyncIO -- the official asynchronous library. The split Python asynchronous world now shares a common destination. Although integration with gRPC had been requested as far back as 2015, it never really happened until now. In this session, we will discuss the challenges of integrating an asynchronous paradigm with gRPC Core, the designs that are tailored to the AsyncIO world, and, most importantly, the collaboration from the open-source community.
Third is about some highlights of the AsyncIO API, and finally, some implementation challenges I've personally encountered. As many of you may know, gRPC has a lot of stacks. We support 14 different languages, and some of them even have more than one implementation. Some implementations share a majority of the code base, but some of them use very different programming paradigms.
So why do we want to build a new stack if there are already so many? It is because of Python's fragmented asynchronous story. As you may know, Python has many different asynchronous libraries, for example gevent, Twisted, and Tornado, and they can be split into two kinds. The first kind does monkey patching: they patch all of Python's standard libraries. The second kind provides an alternative event loop.
So you can register your own coroutines into it. But the problem is: if you are using more than one asynchronous library, it is very likely to cause a deadlock, and each of them has a different programming paradigm, so it's very hard to switch from one to the other. Luckily, the Python maintainers also realized this was a problem, and they created AsyncIO as the official asynchronous library.
Well, all of this is great, and one of our gRPC team members posted a comment saying we are going to build this thing, and that comment became one of the most upvoted comments in the entire gRPC repo.
However, despite the high demand and all the benefits we talked about, it never happened due to a lack of engineering resources, so the AsyncIO API never got prioritized, until help appeared. An engineer named Pau Freixes from Skyscanner sent us an email saying: I have built a proof of concept of a gRPC AsyncIO native driver, and he asked for collaboration from the gRPC team. After our first meeting, they promised to contribute four part-time SWEs, which gave the project all the engineering resources it needed.
And then we could finally get started. As time went by, more and more companies joined the effort, including engineers from Dropbox and Uber, and many other community members. They contributed not only code, but also to the design itself, commenting on the direction of the project.
So here I want to say thank you to all the people who have contributed to this project. I know this may sound like an ending, but I'm just getting started.
All the I/O operations are now labeled with async and are type annotated, so it's friendlier to new projects and to larger projects. On the right side is a short example of how to use the AsyncIO API. As you can see, we can create a channel in a single statement and use await to wait for an RPC to finish. On the lower part, you can define asynchronous method handlers through async def, and finally, you can start the server in an asynchronous way.
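A minimal sketch of what that usage looks like with the grpc.aio API, assuming the stock helloworld protos from the gRPC examples have been generated (helloworld_pb2, helloworld_pb2_grpc); the exact code on the slide may differ:

```python
import asyncio
import grpc

# Assumes the stock helloworld protos from the gRPC examples have been compiled.
from helloworld_pb2 import HelloRequest, HelloReply
from helloworld_pb2_grpc import (
    GreeterStub, GreeterServicer, add_GreeterServicer_to_server)


class Greeter(GreeterServicer):
    # Method handlers are plain coroutines, defined with `async def`.
    async def SayHello(self, request, context):
        return HelloReply(message=f"Hello, {request.name}!")


async def main():
    # Server side: grpc.aio.server() instead of a ThreadPoolExecutor-backed server.
    server = grpc.aio.server()
    add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port("[::]:50051")
    await server.start()

    # Client side: await drives the whole RPC.
    async with grpc.aio.insecure_channel("localhost:50051") as channel:
        reply = await GreeterStub(channel).SayHello(HelloRequest(name="asyncio"))
        print(reply.message)

    await server.stop(None)


asyncio.run(main())
```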
This not only introduces gRPC Python into the AsyncIO world, it also solves other problems, for example the thread exhaustion issue. In the current API, if you are trying to initialize a gRPC server, you have to provide a ThreadPoolExecutor, which requires you to specify a maximum number of workers.
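For contrast, a rough sketch of that setup in the current synchronous API; once all max_workers threads are busy with long-lived RPCs, new RPCs have to wait, which is where the thread exhaustion comes from:

```python
from concurrent import futures
import grpc

# Current (synchronous) API: the server is backed by a thread pool, so the
# number of RPCs handled concurrently is capped by max_workers.
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
```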
Luckily, this problem will no longer exist with the AsyncIO API. Moreover, we're trying to unify the call entrance on the client side. Currently, there are three ways to invoke an RPC: you can invoke it directly, invoke it with the with_call method, or invoke it with the future method.
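Roughly, those three entry points look like this (a sketch assuming the stock helloworld Greeter stub from the gRPC examples):

```python
import grpc

# Assumes the stock helloworld protos from the gRPC examples have been compiled.
from helloworld_pb2 import HelloRequest
from helloworld_pb2_grpc import GreeterStub

channel = grpc.insecure_channel("localhost:50051")
stub = GreeterStub(channel)
request = HelloRequest(name="you")

reply = stub.SayHello(request)                  # 1. invoke directly (blocking)
reply, call = stub.SayHello.with_call(request)  # 2. with_call: response plus call metadata/status
future = stub.SayHello.future(request)          # 3. future: returns immediately, result() blocks
reply = future.result()
```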
Secondly, it's about the streaming calls. In the current API, as you can see, there is a lot of boilerplate if you are trying to send a message depending on the response you received from the server: the sending logic sits before the invocation of the RPC, and the receiving logic is down below, so the logic flow is split in two places.
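For illustration, a sketch of how the AsyncIO reader/writer style keeps that logic in one place. The bidirectional-streaming Chat method, ChatMessage type, and chat_pb2 modules here are hypothetical placeholders, not from the talk:

```python
import grpc

# Hypothetical generated modules for a bidi-streaming Chat service.
from chat_pb2 import ChatMessage
from chat_pb2_grpc import ChatServiceStub


async def chat(stub: ChatServiceStub) -> None:
    call = stub.Chat()                    # start the bidi RPC without a request iterator
    await call.write(ChatMessage(text="hello"))
    while True:
        response = await call.read()      # reads and writes can be freely interleaved
        if response is grpc.aio.EOF:
            break
        if response.text == "bye":
            await call.done_writing()     # half-close the client side
        else:
            # The next request depends on the previous response, all in one spot.
            await call.write(ChatMessage(text=f"got: {response.text}"))
```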
And finally, this part may get a little bit technical. While building this AsyncIO API, we encountered some challenges. The first one is about non-blocking I/O itself.
There is an I/O manager inside gRPC Core that abstracts all the lower, system-level I/O operations into several categories and allows people to provide their own implementation. So our solution is to have the I/O manager call back into the Python space. Sorry, my screen just locked.
The root cause involves Python's GIL. On the right side is a diagram showing how this problem occurs. In thread A, a Python application is trying to invoke a gRPC Core method; gRPC Core acquires a certain mutex and calls back into the Python space. However, the Python space requires the GIL, the global interpreter lock, and it cannot acquire it, because thread B already holds the GIL. But thread B doesn't want to yield the GIL, because it is trying to call another gRPC Core API which requires that same mutex again, so we enter a deadlock.
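To make the lock ordering concrete, here is a minimal Python analogue of that deadlock, with two plain locks standing in for the Core mutex and the GIL. This is an illustration only, not the actual gRPC internals; the timeouts are there so the example terminates instead of hanging:

```python
import threading
import time

gil = threading.Lock()         # stand-in for Python's global interpreter lock
core_mutex = threading.Lock()  # stand-in for a gRPC Core mutex


def thread_a():
    with core_mutex:                       # thread A: inside gRPC Core, holding a Core mutex
        time.sleep(0.1)                    # ...meanwhile thread B grabs the GIL
        if gil.acquire(timeout=2):         # needs the GIL to call back into Python space
            gil.release()
        else:
            print("thread A: stuck waiting for the GIL")


def thread_b():
    with gil:                              # thread B: running Python code, holding the GIL
        time.sleep(0.1)                    # ...meanwhile thread A grabs the Core mutex
        if core_mutex.acquire(timeout=2):  # calls another Core API needing the same mutex
            core_mutex.release()
        else:
            print("thread B: stuck waiting for the Core mutex")


a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start(); b.start()
a.join(); b.join()
```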
The solution is easy: we add a poller thread as a middleman, which invokes the gRPC Core API to fetch the events from the gRPC Core space and then sends the events to the AsyncIO event loop, saying: hey, there is an I/O event, for example a new message arrived or a DNS resolution succeeded.
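As a rough illustration of that middleman pattern (again, not the actual gRPC internals), a plain Python sketch in which a blocking queue stands in for gRPC Core's completion queue and the poller forwards each event to the asyncio loop:

```python
import asyncio
import queue
import threading

# Stand-in for gRPC Core's event source; the poller blocks on it.
core_events: "queue.Queue[str]" = queue.Queue()


def poller(loop: asyncio.AbstractEventLoop) -> None:
    while True:
        event = core_events.get()          # blocking call into "Core"
        if event == "shutdown":
            break
        # Hand the event to the asyncio thread without touching the loop directly.
        loop.call_soon_threadsafe(handle_event, event)


def handle_event(event: str) -> None:
    print(f"asyncio loop got event: {event}")


async def main() -> None:
    loop = asyncio.get_running_loop()
    threading.Thread(target=poller, args=(loop,), daemon=True).start()
    core_events.put("new message arrived")
    core_events.put("DNS resolution succeeded")
    await asyncio.sleep(0.1)               # let the forwarded callbacks run
    core_events.put("shutdown")


asyncio.run(main())
```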
However, this introduces a performance regression. As many of you may know, because of the existence of the GIL, multi-threading basically means more lock contention. In the lower part there is a latency distribution showing this effect.
Let's take another look at the previous solution. The poller thread was GIL-protected, hence you have to jump between the AsyncIO thread and the poller thread. The solution to this is straightforward: we can make the poller thread run only in Cython, because in Cython it can release the GIL, since it doesn't need access to any Python objects.
And finally, as a result, we can see the benchmark between the current API, the AsyncIO API, and the C++ API. The red one is the current API, the blue one is the AsyncIO API, and the gray one is the C++ API. As you can see, the AsyncIO API reaches around 50% of the per-core performance of C++ and is two to 28 times better than the sync stack.