From YouTube: gRPC Community Meetup: 2.23.21
Description
On February 23, 2021 the gRPC community held their monthly meetup. Ahmet Alp Balkan, Software Engineer, Google, presented "Serverless gRPC on Google Cloud" where he ran gRPC applications as serverless on Google’s infrastructure. (Hint: It's not Kubernetes). With Cloud Run, we can run any container image using any RPC type or language quite easily. But don’t get scared: You don't have to learn containers to use this. In this demo, we'll show how you can use technologies like ko and Buildpacks.
A: All right, so we're now recording. Well, thanks everybody for joining us for the February edition of the gRPC community meeting. We've got a great presentation for you today, and I also wanted to call out, for those of you who I haven't met yet, I'm April. We have a big day coming this week, on Friday: it is a birthday, gRPC's birthday. gRPC will be six on Friday. So, a feisty little toddler; or, I don't know, open source project years are probably like dog years. But yeah, keep an eye on our Twitter account Friday, because they might have some fun stuff planned. So that's just a little heads-up for some fun celebration. And thanks for being part of the community, and being supporters and users and contributors to gRPC over the past six years. We appreciate all of it. Thank you.
A: True, that's true: we have our beloved Pancakes. Actually, he made his debut two years ago for the birthday; that was our belated birthday present to gRPC. But yeah, Pancakes will be helping us celebrate Friday as well.
A: Well, I want to hand it over to Ahmet for the presentation. We've got a lot of the core maintainers here on the call, so hopefully we can do some Q&A afterwards and you can bring all your tough questions for them, and I'm sure Ahmet will be able to answer questions around his presentation as well. So without any further ado, Ahmet, the floor is yours.
B: Okay, thanks April. I will share my screen now. All right.
B: Okay, all right. I assume you can see my screen now; quick thumbs up, please. Are you able to see my screen? All right, perfect. So my name is Ahmet Alp Balkan. I'm not a frequent attendee of this community, but I'm a gRPC user myself, and I always try to advocate inside Google for other people to use gRPC. I'm personally a developer advocate at Google working specifically on Cloud Run, but in the past I actually worked on Kubernetes a fair amount.
B: One of the contributions that I actually have to the gRPC project is the grpc-health-probe project. Maybe you've heard of it, or maybe used it. I think it has reached 3 million downloads this year, which means 3 million times, at least, we helped people health-check their gRPC services. That's my little contribution to the gRPC project. So today I want to talk to you about Cloud Run.
B: If you've never heard of it, that's totally fine, and I'll do my best not to turn this into a sales pitch; my intention is definitely not that. We'll be talking about some open source technologies that maybe you've never heard of before, so I'll just jump right in. I want to do an early demo so that you don't get bored. This talk is going to be sort of a technical overview of Cloud Run and how it's related to gRPC.
B: Cloud Run, in a nutshell, is a serverless containers platform on Google's managed infrastructure. If you hate containers, please do not disconnect; I'll find a way to keep you hooked onto the presentation. Basically, today, you all know the route guide example, right? Unless you're one of the core maintainers, you probably learned gRPC through the route guide example. So we'll take the route guide example in different languages, and I'll show you how to turn the route guide examples into container images and run them effortlessly, with TLS and everything, on Google's infrastructure. So I'm actually going to exit the slides right here and go to the route guide example that I have here. In this directory I have the classic route guide example, with the client and the server. The first thing I'm going to do is go into the server directory.
B: What I did in this directory was I basically created a Go module, which is called routeguide, with the usual dependencies that we have here. And I made a very small change in the server: I think I added a sleep here, so that we can see something more easily, and then I changed this localhost so that it listens on all interfaces. That was basically it. So right now, I'm going to show you: again, in this directory, there is no Dockerfile.
B: If you know containers, you usually need something like a Dockerfile to build container images. Well, I'll first talk about a project that we have developed so that you can turn your Go programs into container images very easily. That tool is called ko; you can find it at github.com/google/ko. ko is a tool for building Go programs into container images in a very easy way. So what I will do here, again, is say I don't know anything about Docker. All right: I have a program here with a main function. All I need to do is say where I would like to push my container image. You can put anything here, like Docker Hub or any other container registry; it doesn't really matter. At this point, all I need to do is... well, actually, I will check the server really quickly, because I forgot which port number we listened on. I think it was ten thousand, so I'll go with that.
B: So what I'll do here is deploy this application to Cloud Run, and to do that, all I need to do is gcloud run deploy. I'll call this "route guide"; I'll say, please build me an image and put its name right here; I'll say this application runs on port 10000; and I'll say, let's allow everyone to connect, like, I do not require any authentication. So what's happening right now is I'm building a container image, and I pushed the container image. Again, take note that I'm not using Docker here at all. I do have Docker running on my machine right now, but I can actually turn that off; it doesn't matter in this case. But before I could even finish my sentence, this gRPC application that you all know is actually available on the internet right now, and it has a TLS endpoint. So let's try to talk to that.
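Roughly, the deploy step just shown can be sketched as two commands; the service name, registry path, and directory layout here are illustrative, not the exact ones on screen.

```shell
# Tell ko where to push images (illustrative registry path).
export KO_DOCKER_REPO=gcr.io/my-project/routeguide

# Build the Go server into a container image with ko (no Dockerfile, no
# local Docker daemon needed) and deploy the pushed image to Cloud Run.
# `ko publish` prints the pushed image reference, which we pass to gcloud.
gcloud run deploy routeguide \
  --image="$(ko publish ./server)" \
  --port=10000 \
  --allow-unauthenticated
```

The `--allow-unauthenticated` flag corresponds to the "allow everyone to connect" choice in the demo.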
B: I think the way to do that is... I have a client here. Let's go to the client directory. Actually, let me go out here; that's much easier. go run... trying the number, yeah. So basically, the sample application comes with a client, and this client basically says: hey, this is my hostname, you know, the domain name plus 443, indicating TLS, and I'm providing the TLS argument here. And then, as you can see, this is the sample application running on Cloud Run.
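The client invocation looks roughly like this; the hostname is made up, and the flag names are taken from the stock grpc-go route_guide client, so treat them as assumptions about that example.

```shell
# Run the route_guide client against the Cloud Run endpoint: the Cloud Run
# hostname plus :443, with TLS enabled (hostname is illustrative).
go run ./client \
  -server_addr=routeguide-abc123-uc.a.run.app:443 \
  -tls
```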
B: So this is basically Cloud Run in a nutshell. Now I would like to switch gears, and actually I would like to use a Python server. So for that, I'm going to go to the Python route guide example. The only change, if I recall correctly, was that I added two files into this directory. One of them is requirements.txt.
B: As you all know, this is how I declare the gRPC libraries that I use in my server. I actually don't know why there is no requirements.txt here by default, but it's there now. The second thing that I will require is, maybe you know this, maybe not: if you've used Heroku before, there's this notion called a Procfile, which basically says how you run this application.
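As a sketch, the two added files might look like the following; the package list and the server filename are assumptions based on the stock Python route_guide example, not a transcription of what was on screen.

```shell
# Dependency declaration picked up by the Python buildpack.
cat > requirements.txt <<'EOF'
grpcio
grpcio-tools
EOF

# Procfile tells the buildpack how to start the server process.
cat > Procfile <<'EOF'
web: python route_guide_server.py
EOF
```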
B: So here I will use something rather interesting. I'm not going to write a Dockerfile again, but I'll introduce you to another concept: I will do gcloud beta run deploy, say this is my source directory, this is where my source code is, and then again I'll say unauthenticated, and then I think the port number was something like this. Actually, I'm not going to trust myself; I'm going to check. All right: 50051. Yeah, I think that was right. What was the error there?
B: I didn't catch that... yep, I had a typo in one of the arguments. So again, what I'm doing right now is deploying the route guide sample that was in Python. You know, remember, a moment ago I deployed the Go one using this tool called ko. So I'm not using ko here, but I'm not writing Dockerfiles either. What comes into the picture here, again, if you've used Heroku before you're familiar with this concept, is called buildpacks, and Buildpacks is also, as you may know, a part of the CNCF.
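The source-based deploy in this step can be sketched as a single command; the service name is illustrative, and the port matches the Python example's 50051 mentioned above.

```shell
# Deploy straight from source: gcloud zips the directory, ships it to
# Cloud Build, and Buildpacks produce the container image remotely.
gcloud beta run deploy routeguide-python \
  --source=. \
  --port=50051 \
  --allow-unauthenticated
```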
B: It's called Cloud Native Buildpacks; it has its own website, etc. What I'm using here is Google's own buildpacks. We have an extension of the Buildpacks project with our own list of supported runtimes. If you write your application in one of these languages, and we're adding more languages as we go, you can turn it into a container image and deploy it to a platform like Cloud Run without any effort. So, as I said, I didn't even change any code.
B: I just added two files, I think three lines total, and right now, what's going on behind the scenes is that I'm building this container image. Actually, we can go and see what's going on, so I'll go to the link that is building this container. So what happened here was that I packaged my source code into a zip file by running this command, and I sent it to a remote build farm running on Google Cloud.
B: So, as you can see here, a lot of stuff is happening behind the scenes; it's kind of boring, but basically, here we're running this thing called Buildpacks, and Buildpacks realizes, hey, you know what, you have some dependencies. Well, first of all, Buildpacks realizes that this is a Python application, and it goes ahead and installs a bunch of Python dependencies, because I have requirements.txt in there, right? So, as you can see, this build is successful.
B: But if I go here now, I'm deploying the application. If I recall correctly, in a moment or so this application will be deployed. But again, the logic is the same: I'll be deploying another gRPC application, written in a different language, to Cloud Run. So I think this is a good point to stop, and again, as you can see, the deployment makes progress, and in a few seconds we will have another deployment.
B: Actually, let's wait for that and see how that goes. If it keeps going for another 10 seconds, I think I will stop and go back to the presentation. All right: assigning traffic to the latest targets. Actually, this is a good time to show you what's happening behind the scenes, so I'll do that. This is the Cloud Run console; actually, let me go there manually. All right, yeah. So, as you can see, the application is actually deployed.
B: So if I click on this route guide service that I have on Cloud Run, as you can see, I actually have a URL that has TLS. You can also bring your own custom domains, and you will also get certificates for those. Another interesting notion that we have here is something called a revision. So if I wanted to actually split traffic between the Python-based version and the Go-based version, I can do that.
B: I can say, I send 50% of the traffic here and another 50% here; and I'll just clear that. So I can basically do stuff like this here, and that lets me do canary deployments. So again, just to show you, I will go back and run the same client with the same hostname. I think that will also work... yeah, as you can see. We're getting a different implementation here, because the Python example has a rather different data set, but again, the same client is able to invoke the Python server.
B: So this is all gRPC. You know, what I did here was basically, almost effortlessly, package a gRPC server written in different languages using a bunch of open source tools. I don't have to know containers. This is something that I encounter quite a bit: people don't want to learn containers, and that's totally fine. You can still make use of serverless platforms without using containers, and that's the idea of Cloud Run.
B: So we talked about what Cloud Run is. Cloud Run, in a nutshell, can run pretty much any Linux executable: as long as you can put it in a container, we're able to run it. It currently does not have any Windows or ARM support. Cloud Run is designed for stateless applications and event processing. The way it works is, we actually allocate you a CPU only while your container is processing requests, and you only pay during this time. If there are no requests, you're not paying on Cloud Run, and that actually makes it economically more efficient. So in this case we give you some amount of CPU; actually, if you say "I want two CPUs," we'll give you two CPUs while there's a request for your container. But this basically means that we don't support background threads.
B: If you have background computation running in the background, that's just not for Cloud Run, essentially. I don't want to get stuck on container lifecycle too much, but we basically keep adding new containers to your service if there are more requests, and I'll explain how we make that decision. But at the end of the day, this is again a container: it has a startup, and we send you a termination signal when we want to clean up the container. And this is probably the part that is relevant to this meetup.
B: We support gRPC, and very recently, about a month ago, we added full support for gRPC: we now support bi-directional streams, and that was pretty much the only part missing. I'll talk about that. You can't really develop your own non-HTTP protocols, but WebSockets also work similarly. So, one thing about TLS: you know, usually people find it hard to create their own PKI and set up their certificates.
B: We do that for you. If you noticed, I took an application that doesn't have any server certificate, but I was able to call it with a server certificate that was provisioned by Google. I was able to verify the authenticity of the server, because Google creates a certificate and configures it for my application. And similarly, we actually force TLS: we don't let you make unencrypted calls at all. As for load balancing, there is nothing interesting there: we already have automatic load balancing built into Cloud Run.
B: If you want stuff like a CDN, we actually offer an experience to go create a custom load balancer. So this is the part where it gets kind of tricky: Cloud Run lets you limit how many requests you are able to handle in an application.
B: For example, if you say that my application cannot handle more than 10 requests at the same time, there's a box that says "concurrency" and you can just type 10 there, and at the end of the day, we're not going to send you an 11th request: that 11th request coming in at the same time will go to a new container instance that we're booting up in the meanwhile. So this actually heavily informs our autoscaling decision. And similarly, our request timeout is 60 minutes.
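The concurrency and timeout knobs described here can be set from the CLI; a hedged sketch, assuming the service name from the demo (the timeout flag takes seconds):

```shell
# Cap in-flight requests per instance at 10; an 11th concurrent request
# is routed to a newly booted instance instead.
gcloud run services update routeguide --concurrency=10

# Raise the request timeout toward the 60-minute maximum for long streams.
gcloud run services update routeguide --timeout=3600
```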
B: That is one limitation that I think gRPC users need to know about, because we know that a lot of you love streaming connections, and if your connection is going over 60 minutes, it will be timed out. So you need to actually reconnect or re-initiate that connection. But there is no request/response size limit: you can send as many bytes as you want. And we talked about traffic splitting; I'm not going to spend too much time there. I think pricing is important.
B: You only pay during requests, so if there are no requests, you're not paying during that time. And similarly, because of concurrency, which is, you know, the ability to handle multiple requests at the same time: if you use Lambda or Cloud Functions, you cannot really do that; one instance handles one request at a time. That simplifies a lot of the application model, but it costs you more, right?
B: So as long as you're able to handle more than one request, you're actually saving quite a lot of money here, because the overlapping requests are not separately charged. And we have a pretty good free tier, so if you want to try Cloud Run after this talk, feel free to do so. Autoscaling is rather easy: we look at your HTTP requests, and if there are too many requests coming to your container, we just add more instances.
B: You can actually limit how many instances you will have. You can say, "I don't want to spend thousands of dollars; just give me a maximum of 10 instances, I know what I'm doing," and we'll be able to do that. Cold starts are another question that comes up when people use Cloud Run. Cold starts do exist on Cloud Run.
B: Let's say you have a service and you're not making any requests to it for, let's say, 20 minutes or an hour. That container will be cleaned up, and eventually, when a request comes in, we actually have to wake up a container and send the traffic to it, but that takes a while. Usually it's pretty fast; we apply a lot of optimizations to minimize the cold start, but at the end of the day you will have cold starts. To prevent cold starts, we have a feature called minimum instances.
B: That is basically a way of keeping instances warm. For example, if you say, "Hey, I know what I'm doing; my API gateway always needs five instances, so please keep five instances around," then even when you're not getting any requests, you're paying about ten percent of the normal cost that you'd be paying. So it actually sometimes becomes cheaper than a VM.
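Both the warm-instance floor and the cost ceiling mentioned above map to service flags; a sketch, again assuming the demo's service name:

```shell
# Keep 5 instances warm to avoid cold starts (billed at a reduced rate
# while idle), and never scale beyond 10 instances.
gcloud run services update routeguide \
  --min-instances=5 \
  --max-instances=10
```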
B: It runs directly on Google's infrastructure, next to Google Maps, next to Gmail, and stuff like that. One of the ways we make this possible is we use something called gVisor, which is an open source system call emulation layer in the user space, and we basically run your program inside this sandbox so that your application doesn't see other workloads, and vice versa. So this might mean that some of the low-level Linux APIs might not work.
B: For example, you cannot run Docker inside Cloud Run, but this is something that we're planning to potentially change in the future, because of performance reasons, etc. So, I'm approaching the end here; if you have questions, please feel free to write them in the chat. File system and volumes: we do not support volumes currently. If you're trying to mount external storage, that currently does not work. And if you want to write files to the local disk, that counts towards your memory, so be careful there.
B: We do not show you individual container instances. If you're using Kubernetes, you're probably used to seeing your individual containers there; we do not do that. We don't tell you how many containers we're running at a time, because you're not actually paying for those; you're only paying for the requests that you're creating. And if there's a faulty container, we actually try to replace it, and if your container is crashing all the time, we try to replace that as well. And we support service-to-service communication.
B: Just like how I deployed my app without any authentication to the public internet, you can actually require authentication. We do that with Cloud IAM, and as long as you can get a Google-signed token and give it to the other service using the gRPC metadata authorization header, that works just fine. So we have that too.
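One hedged way to sketch such an authenticated call from a shell, using grpcurl for illustration (the hostname and method name are made up; the token command is gcloud's identity-token helper):

```shell
# Fetch a Google-signed identity token for the current account.
TOKEN=$(gcloud auth print-identity-token)

# Pass it as the authorization metadata on the gRPC call.
grpcurl -d '{}' \
  -H "authorization: Bearer ${TOKEN}" \
  routeguide-abc123-uc.a.run.app:443 \
  routeguide.RouteGuide/GetFeature
```

In application code, the same token would go into the `authorization` key of the outgoing gRPC metadata.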
B: And lastly, we support imperative deployments, like I just did, but we also support Kubernetes-style YAMLs. If you use Kubernetes, this will look super familiar, and that's because Cloud Run is mostly API-compliant with the Knative project, which basically uses the Kubernetes pod spec. So if you notice this pod spec right here, there's a reason for that. And, just like kubectl apply, you can use the gcloud run services replace command to deploy your YAML files.
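A minimal sketch of such a Knative-style manifest and the declarative deploy, assuming the image path and service name (not taken from the slides):

```shell
# Knative Serving service manifest; note the embedded Kubernetes pod spec.
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: routeguide
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/routeguide
        ports:
        - containerPort: 10000
EOF

# Declarative deploy, analogous to `kubectl apply`.
gcloud run services replace service.yaml
```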
B: So that's all I have to say about Cloud Run and gRPC. I hope you liked it. If you'd like to check out the documentation, check out cloud.run, and I have an FAQ repository that you might find useful, basically in a question-and-answer format. So if you have a question about Cloud Run, I probably answered it there as well, but I'm happy to take questions here as well. Thank you for listening.
A: Thanks for the great presentation, Ahmet. We have one question in the chat. Shady asked: "I have a question about gRPC and Cloud Run. What happens if I have a chat app with a bi-directional stream? How would that be handled in Cloud Run?"
B: Yeah, so this is a fairly interesting point. Actually, I recently wrote some best practices in our documentation about how to implement a chat room application on Cloud Run; I'll quickly show that as well. It's a fairly good point, because WebSocket connections, or gRPC bi-directional streams, are long-running requests.
B: However, Cloud Run has pretty fine-grained billing; I think we round you to the nearest hundred milliseconds or something like that, so you're pretty much paying for exactly what you're using. And if you're implementing a chat room application specifically, where, if I send a message to the chat room, all the other container instances in the fleet also need to get that message, there are some patterns for that: such as, you might put a Redis database or something like that in the background, where all the instances can connect and pull messages and push messages.
B: So you can have that sort of pub/sub relationship, because normally, with multiple instances on Cloud Run, as I said before, you don't get to see the instances; you don't know how many instances are running at any time. So you need a way to synchronize data between them. That's basically the gist of it, yeah. Hopefully that answers the question. Are there any other questions?
E: Yeah, I probably just missed this. You described how, in the Go example, the ko utility kind of built the container image. In the Python example, was the container image built in...
B: Cloud Build, exactly. So what I did in the Python example was use this thing called gcloud beta run deploy with a source flag, and I said, this is my source directory. What gcloud does here is package up your source directory into a file, and send it to Google Cloud Build; and we were looking at the Cloud Build logs here. I'll actually quickly show that again. What's going on is, I'm not running this build locally on my machine.
B: But if you look at this particular build step, I'm actually trying to see... yeah, okay. So I'm using this program called pack, which is the command-line tool for Buildpacks, and I'm saying, hey, just build this zip file that I just uploaded. That build happens remotely on the build farm of Google Cloud Build, essentially, and as a result I get a container image built and pushed to Google Container Registry, and then I basically deploy that image.
B: Exactly. So the Google Cloud buildpacks specifically support these runtimes that I've highlighted here. If you have an application written in one of these, we do support that. So basically, the angle that I'm going for here is: using ko, you can build container images out of Go programs without using Docker, and similarly the buildpacks help you do that as well. Similarly, if you're using Java specifically, there's a project called Jib, if you've never heard of it before; this is also along the same lines as ko.
D: I think, last time I looked, if I remember correctly, I set up Jib for the hostname example. I think that's what's used to package it in the java-grpc examples directory.
E: Okay, so you mentioned cold starts, and you said only when a request comes in does it activate a container, right? So, I don't know if you mentioned this, but is there a typical latency introduced by that?
B: Yeah, absolutely; cold starts pretty much always introduce latency, right? So to minimize the latency, one thing we do is, we don't download the image to the disk every time, like Kubernetes does: we have a way of storing the images that you deploy in our own storage, and we have ways of pulling an image from the network much faster. By doing so, we also don't have to pull the entire image; we can pull just the parts of the image that you're actually using during the cold start. And even with these things combined, we find that, most of the time, the time we spend bringing up an application to the ready state, which is basically listening on the port number, is spent in the user code.
B: So, no matter how much we optimize the infrastructure, we end up finding that, if you're using, let's say, Java or C#, basically more heavyweight languages, most of the time ends up being spent starting your code, and your code reaching the state where it listens on the port number. So cold start latency is primarily affected by that.
B: As long as you can keep your application starting up pretty quickly, like, let's say, a Go application, it tends to work better. I've seen cold start times as low as, I think, two or three seconds. Actually, while I was calling something here, I hit a cold start, and it was barely noticeable; I don't think any of you noticed it.
B: Okay, yeah, perfect. "Is Cloud Run comparable to OpenShift?" Yeah, OpenShift is a much larger suite, as Tony said. I think OpenShift also has its own serverless extension; I think Cloud Run would be more comparable to that. That's right, yeah. All right, perfect, I yield the floor. Thank you so much for giving me this opportunity.
E: Oh, sorry, one quick question, yeah. So in your example, you showed that it basically gives you a hostname that is auto-generated, right? Is there a way to actually change that to something more usable? Let's say my users always use one name, and it just applies to that.
B: Yeah, absolutely. So basically, when you go to Cloud Run, there's an option up here that says "manage custom domains." I can create a mapping: I'll basically say that, for the route guide example that I just deployed, I would like to call this r.ahmet.dev; that's it. As long as I go update my DNS records to point to the CNAME, I'm good to go. Google will provision a TLS certificate for me, and I'll be able to use my domain name.
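The same mapping done in the console here can be sketched from the CLI; the service name and domain are the ones from the demo, and this assumes the domain has already been verified:

```shell
# Map a custom domain to the service; Cloud Run returns DNS records
# (a CNAME) to add, and provisions a TLS certificate automatically.
gcloud beta run domain-mappings create \
  --service=routeguide \
  --domain=r.ahmet.dev
```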
E: Great. And is that the hostname that allows you to split traffic between two separate deployments?
B: Yeah. Any domain that you have for Cloud Run, either the built-in domain, which I have here, or the custom domains that you provide, you'll be able to go to revisions and basically click "manage traffic." Actually, something that you can do here is split traffic, like, let's say, 30/70, something like that, and something else I can do is give these individual revisions names.
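The traffic split between the two revisions can also be expressed on the command line; the revision names below are illustrative stand-ins for the Python- and Go-based revisions from the demo:

```shell
# Route 30% of traffic to one named revision and 70% to another,
# enabling the canary-style rollout described above.
gcloud run services update-traffic routeguide \
  --to-revisions=routeguide-python=30,routeguide-go=70
```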
A: Yeah, thank you very much for the presentation, Ahmet. At this point we've still got about 30 minutes left in the meeting, so I want to turn it over to our language maintainers and see if you have any updates you want to share. And if not, anyone on the call: if you have any questions about gRPC, now is the time, because you have the right people captive to ask your questions, and you're welcome to unmute or chat, whichever you prefer.
E: I have one more question for Ahmet. That's right, yeah; I lost my question. So, I don't know if I remember correctly, but can I deploy two containers together as a pod?
E: Thank you. So that means, in this kind of environment, where you cannot have sidecar functionality, gRPC can be very useful for the proxyless kind of stuff. You know, last time we presented the proxyless functionality in gRPC, where the sidecar functionality is kind of pulled into the gRPC library itself, so that can be useful there.
A: I mean, if nobody has any questions, we can certainly give time back, but I find that very, very unusual. We almost always have really great questions that folks ask.
B: Yeah, I'll tweet the presentation deck; I'll download it and re-upload it somewhere. But the examples that I used were basically just the route guide examples. I changed very few things, like the port number, and added the requirements.txt file; that was basically it, yeah. I didn't create any new examples, but feel free to ping me on Twitter and I'll hook you up with the instructions, if necessary.
D: I have a question that's probably mainly focused at the Java implementation, but I'm wondering: has there been any conversation or thoughts about building a Reactive Streams implementation? I am aware of the Salesforce library, and have actually used it quite extensively, but I was more curious about the core project. So, I mean, you've got the sort of reactive stream stubs, which would be mostly what we're talking about.
D: We wouldn't be talking about plumbing it down into the lower levels. We've been pretty happy with the community dealing with those things, because y'all sort of know more of what y'all would like things to look like, and y'all are more practiced with it. We've tried to provide support to make it easy to build stubs and things like that: ClientCalls and ServerCalls allow you to pick little bits and pieces of the generated code without having to throw everything away, and some of the generated code you can reuse. So that's been working pretty well for us.

We would say, you know, go on with more of that. If there are some problems, things you wanted to look a little different or something like that, that would be for you all, who know a little bit better than us. But no, we're sort of happy with the status quo, and we would be in strong support of those other stub types.
D: I do think Java is a little bit special here, in that Java is so fragmented in the different types and styles, and it's not something that we can really support everything for; we're not in a good position to.
D: Just because we're not knowledgeable about everything. I think in a lot of the other languages, if you've got a blocking API, then you just do a blocking API; there's not going to be much argument. And then some of the things that do async, like Node.js, have pretty common sorts of patterns to go by. So it's a little bit more of a Java-specific problem, but it's more just that we're quite happy with you all making your own stubs and stuff, if that's causing a problem.
D: We're fine here, but Ryan's been holding down the fort there for a while now.
D: And also, to be clear, the stubs are a very, very thin layer on gRPC, so we're not really all that concerned with them causing too many troubles. It's really just a small little adapter layer. All the rest of gRPC would stay the same; you'd still be able to use the interceptors and stuff, and so we're not too worried about fragmenting the ecosystem or things like that too much. Yep, thanks.
A
Just real quick, I want to call out: we've got a couple of polls in the chat/question interface, if you don't mind taking a look and quickly leaving your feedback about what you thought of today's topic and the meeting overall. And again, we're open for questions. Steve, you're unmuted; I wonder if that means you have a burning question for everyone.
E
I do, but I'm coming from a C++ legacy project (DCOM, WCF) and now putting gRPC in it, and the async examples and tutorial have one RPC in one service. I was just wondering: is there any guidance there for how many threads, how many workers, you know, tags? It's just kind of thin, so I was basically going into the source to figure out best practices. But I don't know if anybody else has been through that.
E
Oh, just any best practices on worker threads and thread handling for a larger-than-the-example C++ service, I mean.
F
E
I looked at the async, but again, the example just had one service with one RPC in it, and we have multiple calls across several services.
F
Right, but the idea behind the CQ-based API — which, I will freely admit, is not a very good API; we're trying to make our reactor-based API available, which is much, much easier to use — is that it really gives the application control over how many threads you use. And the number of different calls that you have going on is really sort of orthogonal to that. Well, not completely orthogonal, in the sense that there's still load, right? But as long as no individual RPC handler is doing any blocking operations, then it's really just event scheduling on a set of threads, and it's just a question of how busy you are, right?
E
Yeah, it's a matter of returning it back, okay. It's just that the example is really simple; I didn't know if there was another, more fleshed-out example, especially using the newer async stuff with C++17.
C
I mean, the problem here is technically outside the scope of an example, because it's really up to the application writer to decide how they are going to handle the load, right? gRPC itself is merely going to give you events; it's only going to give you notifications about something that just happened. And then if you want, for example, to use a thread pool in order to take action upon this call, then it's up to you.
C
You were talking about C++, but if you look at other languages like Node.js — and I'm not even talking about gRPC-JS, just Node.js in general — Node.js's whole design is to be completely single-threaded and have all of its events handled in a single thread, given the property that you are never going to do any sort of blocking code in the event handlers.
C
So at this point, the example is just trying to show you how you receive events and how you can dispatch them. But then if you want to dispatch them to a thread pool, for example, then sure, it's up to you; it's your prerogative. It depends on what you are going to do: if you are not going to have any sort of computationally heavy code executed on RPCs, then you don't even necessarily need a thread pool.
C
And that's why, right — since it's really up to how the application works, then, as Mark was saying, this is completely orthogonal to the principle of what gRPC is, right.
F
Yeah, and so basically what I would say is: you definitely don't want to do any blocking work in your event handler threads, unless you know you're just going to spawn another thread every time that happens, and that doesn't seem like a great approach. I think what I would advise is a fairly straightforward, fairly simple architecture: start with a fixed number of event handler threads.
F
Those are the threads that are polling the CQs, and make sure that your callbacks — whatever you do when an event comes in on one of those CQs — are either non-blocking or handed off to another thread, so that the polling thread can immediately go back to polling the CQs.
F
So you could, if you wanted — and I don't recommend this — there are some applications that really want to tune things, and they create different thread pools that are polling different CQs, and then they somehow load-balance which calls are on which thread pools, to get prioritization of events or whatever, right.
D
Okay, and that API is sort of built around the idea of a loop model, where you're doing only non-blocking work; if you need to do anything blocking or really expensive, you farm it off to another thread. There are plenty of event loops to choose from in C++ — which is one of the problems — but it's pretty similar if you've been doing those sorts of models.
F
Yeah, I mean, the new API will basically mean you don't have to worry about having a bunch of threads that you create to poll CQs. Instead, you just say: okay, here's my request handler, with all these methods that get called when particular events occur, and we create the thread pool and do all that management for you. So it's a much simpler programming model.
F
So what I would advise you to do at this point, if you're writing an application, is to write it as if you're going to have something like that, because hopefully it won't be too long before you do. That way, you can just throw out the code that you have that's actually creating the threads and polling the CQs, and the rest of it will basically just work.
E
Right, all right. That's why I was hesitant to bring it up — because of all you and your fancy languages. No, I'm kidding; I've done work in .NET, but trying to graft .NET onto our legacy C++ just became more problems than it was worth, so I was going strictly native here. All right, thanks a lot; appreciate it.
C
And I mean, the possibility is there for a reason as well, right? You may have very specific needs, or very complicated approaches to how you schedule your work. So yeah, it's not recommended — it's not recommended because it's really for extremely specific applications and use cases.
F
Well, more to the point: it's actually not clear, when we move to a new polling model, how much those APIs will continue to work. It's not clear that they will actually continue to give the level of control that they do currently, because the existing API is going to be working on top of the new polling model, which doesn't really provide the same level of control.
F
Oh, there's a good question. Let's see, let's go.
F
Yeah, right, we're just sitting around twiddling our thumbs. So AJ is working — I don't know if he's here; he's probably here somewhere — on a design for a new API that we're calling EventEngine. I'm doing a bunch of hand-waving here, because we don't have anything written down that we can publish externally yet, but it'll basically be an API that you can use to implement the event loop, and you plug that into gRPC, and then the reactor API just works on top of that. So you can either use our provided event engine — which will be something that we ship that basically works the way it does today, using the same underlying mechanisms we use today of directly dealing with, on Linux, POSIX sockets and all that — or people in environments where they've already got some external event loop that they just want to integrate with will be able to have an implementation of this API that plugs into their existing event loop. Once we have that machinery in place, that's the key thing that will enable use of the reactor API in OSS. And timeline-wise, I would guess — I mean, my off...
A
Yeah, and feel free to come back to the next meeting or send a message to the list if you run into further issues; there are always folks that are interested and happy to help. And I know some of the .NET gRPC folks don't often make it to this meeting because of time-zone fun, but they're on the mailing list as well, and they could potentially be helpful.
A
Anybody have any other questions, or anything else you want to talk about — anything with gRPC?
D
A
So what you're saying is that people who come to the community meeting get an advance preview of the new gRPC Java release? That's what I'm hearing.
D
A
All right, well, if nobody has anything else, I'll again mention, if you didn't see it in the chat: on the side there's a little icon that's a triangle, a square, and a circle, and that's got a couple of quick poll questions. We'd love to get your feedback on today's meeting.
A
Thank you for being here and joining us. We will get the recording up on the YouTube channel and share the link out on the Meetup page, and, Ahmet, for your samples and all that, we'll share all the links as well. Thank you again for the presentation.