From YouTube: gRPC April Meetup/ Demo: The Ins and Outs of gRPC and Akka Serverless by Jeremy Pollock
Description
Lightbend has built out its new Platform as a Service (PaaS) to be gRPC-centric, enabling development teams to build real-time data applications and microservices in whatever language they choose, using either gRPC APIs or idiomatic SDKs. But we also leverage gRPC internally, in the product, to facilitate the inner workings of the platform itself. In this demo and presentation, Jeremy Pollock, VP of Product & Developer Platforms, steps through how the product uses gRPC and how it makes it easier for developers out in the wild to build gRPC services for their own use.
The ins and outs of gRPC and — actually, not Akka Serverless; we're changing the name, and I'll talk a little bit about that throughout my presentation — and how it all works out there in the wild, with respect to the product we're offering around gRPC service development, or microservice development, within the context of our Platform as a Service product. My name is Jeremy Pollock, VP of Product and Developer Platforms at Lightbend, and before I get into it:
We leverage gRPC quite a bit within the context of the open source project called Akka, which has been out there for the better part of a decade, and it's certainly a key part of our new product. We had been calling that product, essentially as a code name, Akka Serverless while we tested various things out in the market; we're now converting that to a new name, Kalix. I'll set up what that product actually does,
and the problems it solves, then dig into the details of the gRPC usage and implementation, and then, if time permits, give a quick demo of the product. Now, I'm going to describe a little bit about the company, the use cases, and the products — I feel I need to do that in order to provide sufficient context. I'm super excited about what we're doing here at Lightbend, especially with this new product called Kalix.
So forgive my enthusiasm, if you will. I'm a product manager, although I'm generally quite technical in nature, so it's hard not to sell my product. But my intent here is not to sell you anything; it's more to explain how we leverage gRPC within the context of our technologies, and to set some context around what Lightbend is and who we are as a company. Some of you may have heard of Lightbend; some of you may not have.
Some of our users are the customers you see before you right now. Others, since Akka is open source, are out there in the wild, have never subscribed to services from Lightbend, and perhaps don't even know Lightbend is the backing company behind Akka. But look at some of the customers in front of us right now: imagine a Disney+, where you're tuning in for the night, looking to decompress and watch some TV — or have your kids watch some TV — and it understands who you are.
Or say you're going on a cruise with Norwegian Cruise Line — another great Lightbend and Akka user — which leverages Akka to create, in essence, digital twin representations, using the actor model within Akka to track things. These workloads tend to be real-time in nature: data has to be processed very quickly, and logic has to be executed almost instantaneously, in order to deliver these high-end, awesome experiences, whether for a user or for systems and applications.
Ultimately, though, at the end of the day, Akka is really intended for — and best used by — very experienced development teams coding in Java or Scala, and to a certain extent that has limited our ability to push out into other markets and help other companies deliver the sorts of services and applications we just touched on in that customer slide. That's where Kalix has emerged: to provide an abstraction layer, a simpler developer experience, that allows really any sort of developer to start building these sorts of applications — without having to worry about operations, without all the unnecessary complexity, and in their language of choice,
whether that's Java, Scala, Python, TypeScript, JavaScript — the list goes on and on. Where I'm going to spend most of my time today is Kalix. It's a new product: we went into beta in June of last year, hit general availability in November of last year, and we're now going through this rebranding, based on a lot of different discussions with users — understanding where they saw the value and how they were best using the product. Ultimately, even though it's built on top of Akka and leverages it extensively underneath the covers, it is a different product.
So if you can imagine a Starbucks, a Disney+, or an eero through AWS or Amazon — these are products and companies that need to deliver incredible experiences to their end users, to process data at high speed, and to be quite responsive. That's where this notion of reactive becomes quite critical. Jonas Bonér, who is the founder of Akka and the founder of Lightbend, was also one of the primary authors behind the Reactive Manifesto. If you haven't checked it out, definitely go read the Reactive Manifesto.
I've already talked about this, and this is just a better picture, if you will: everything from the database, to which transport protocol I need to worry about between my different sets of microservices, to which frameworks I want to use, to the security layer, Kubernetes, and the operating system — all of that is abstracted away from the developer, leaving just the business logic they have to worry about when developing back-end APIs and services in the context of a microservices implementation.
Very quickly — I'm not going to go through all the gory details here; I'll page through these slides relatively quickly so we can get to the meat of the presentation — but it is important to note that Kalix is built on top of Akka. So the Akka team has been building out Kalix as well, and we've had great learnings from that development that have actually fed back into the Akka product and open source project, which has been awesome to see.
Kalix gives you knobs and dials — features and components that development teams can use to integrate their own processes into our runtime environment — and then, of course, helps manage that environment through logging and monitoring. There are some things teams don't have to worry about at all, like auto-scaling; that's handled automatically by our team and our infrastructure. Here are some of the key capabilities, and we'll touch on these as we move through the presentation.
A
You
still
interact
with
state
I.e,
create
data,
update
data,
delete
data,
but
you're
not
having
to
worry
about
actual
database
connectivity
and
database
setup
and
database
schema
management.
That's
handled
all
automatically
for
you
through
our
programming
model
communication
patterns,
whether
it's
eventing
or
direct
request
response
or
streaming.
A
That's
all
part
of
our
declarative
programming
model
that
I'll
show
in
slides
and
in
a
demonstration
as
well
and
we've
been,
you
know,
targeting
some
powerful
microservices
patterns.
If
you
will
out
there
specifically
event
sourcing
and
cqrs
and
those
are
both
natively
handled
in
in
klux,
you
also
have
other
state
models
and
eventing
support
as
well
through
integrations
like
with
kafka
and
google
pub
sub.
But
it's
a
you
know
an
event.
Kalix supports event-oriented architectures as a key part of the system. And on top of that — I mentioned this already, so I won't say anything more — it's more than just Java and Scala. We here at Lightbend like to say that quite a bit, because today, if you knew about Lightbend, you might refer to us as "the Scala company." We love Scala — it's great, we've been great supporters, and we have teams on staff managing and maintaining Scala, too. So without a doubt, we love Scala and we love Java, but we have also seen the need out there in the world to move beyond those languages in building these sorts of powerful applications.
So how do we use gRPC within Kalix, within Lightbend, and within Akka? I'll touch on all of it, I think. Why I was so excited to be talking to you all today is that we really have jumped in feet first — headfirst, anyway: we dove deep into the gRPC pool. We saw just how incredible the technology is, how it can support our own customers' implementations, and how it can improve their applications from a performance and scalability perspective.
A
We
released
the
one
of
the
key
libraries
in
the
akka
family.
If
you
will,
in
the
open
source
project,
is
akka
grpc
that
was
released
in
in
first
in
may
of
2018.
If
you
go
to
the
the
github
repo
and
look
at
the
project
and
the
history,
I
think
there's
that
the
hello
world
was
written
by
one
of
our
engineers
back
in
in
may
of
2018.
A
In
2019,
we
made
the
decision
to
really
use
grpc
as
a
front
and
center
component
or
technology
choice
both
for
internal
service
development,
so
as
we're
building
our
own
underlying
services
that
support
the
user's
capabilities
and
functionality.
A
Grpc
is,
is
you
know,
native
within
the
context
of
that
back-end
environment,
but
also
in
the
front
end
as
well?
It
is
the
primary
interface
if
you
will
for
for
users
and
developers
as
they
develop
applications
to
run
within
the
caleb's
environment.
On
the
aquafront,
we
also
shifted
to
grpc
as
our
go-to
implementation
choice
for
service
communication,
so
you're
building
out
microservices
using
aca
and
you're
looking
to
communicate
back
and
forth
between
different
services
or
even
the
ingress
to
a
particular
service.
Now, right around that time, a benchmark of gRPC clients and libraries was produced, and right around when it was published, our Akka team — the ones building the Akka gRPC library — were also noticing some performance issues within the context of Kalix development. I think that's one of the great benefits of Kalix being built on top of Akka: we learn immediately what's working and what's not. So right around the time the benchmark result came out,
we were also starting to investigate. Our Akka team is really small, but our engineers are smart, really talented, and motivated to fix things. When they dug into the benchmark and looked at the performance of the library, they definitely found some issues and opportunities for improving performance and scale. Without a doubt — there's a blog post out there, written by one of our engineers, where he steps through the details of what we saw, how we responded, and how we delivered much better performance.
If I remember the blog post correctly — and summarizing it in my PowerPoint bullets here — it ultimately boiled down to this: our initial implementation was very focused on streaming, and assumed streaming. We were opening up a stream to a service, sending data, receiving data. We were not necessarily focused on the more conventional usage model — at least for people who are more familiar with REST APIs. So as we looked at the code base, we recognized that up to that point we had really been optimizing for streaming behaviors. We sat down and rethought what we wanted to do with streaming versus non-streaming requests and responses, and that led to a tremendous improvement in the performance numbers.
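The streaming/unary distinction shows up directly in a protobuf service definition — a `stream` keyword on either side of an RPC. As a generic illustration (this is not the benchmark's actual proto; the names here are made up):

```protobuf
syntax = "proto3";

package example;

message Request  { string id = 1; }
message Response { string payload = 1; }

service Example {
  // Unary: one request, one response -- the conventional shape that
  // developers coming from REST APIs tend to reach for first.
  rpc Get(Request) returns (Response);

  // Server streaming: one request, a stream of responses -- closer to
  // the pattern the initial implementation was optimized for.
  rpc Watch(Request) returns (stream Response);
}
```

An implementation tuned for the streaming case can pay avoidable per-call overhead on the unary case, which is what the rebalancing described above addressed.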
A
I'm
happy.
This
slide
says
we're
number
one.
None
of
us
here
at
lake
bender,
aqua
sitting
here
going.
Oh
we're
number
one
and
we're
number
three
I'm
happy
to
be
in
the
top
ten
top
five.
You
know
I
everyone's
benchmarks
can
can
vary
so
we're
not
sitting
here
saying
these
are
absolute
numbers.
We
look
at
the
numbers
as
more
directional
than
anything
but
most
important.
Within our Kalix environment — and I'm not going to go through all the boxes on this slide, but this is a close approximation of what the back end looks like if you peeked under the covers of the Kalix code base and operational environment, and it's actually out of date in some regards — the key point is that any communication happening between these components, between these services, happens through gRPC. Everything we're doing within our core infrastructure, the core set of services powering Kalix — this Platform as a Service that runs out there in the cloud — is all happening through gRPC. Heavy use, without a doubt, across the board, and we see this as one of the key ways we achieve the performance and scale we've seen in our own performance tests of the Kalix set of services.
This is our console, the web UI. It talks gRPC-Web to APIs that are defined, of course, in protobuf, which we're compiling. The screen grab on the right-hand side shows part of the repo in my VS Code, but if you went and poked at the inspector in a browser, you would see it as well. Ultimately, we love it. I've been in the API business for a while now — the better part of 13 years, I think — and have seen the value of APIs.
A
I
was
mainly
in
the
rest
world
before
light
bend,
but
this
we
just
went
recently
through
a
rebranding
renaming
and
a
in
the
front
in
the
case
of
the
console
or
web
application,
a
re-architecture
of
the
front-end
application
and
just
as
you,
you
would
expect
from
a
contract.
First
api
development
process.
A
We
were
able
to
quickly
create
a
new
version
of
the
front
end
talk
into
the
grpc,
back-end
apis
and
obviously
leverage
existing
tests
and
just
move
very
very
quickly,
since
we
had
broken
those
two
apart
and
leveraging
the
protobuf
to
drive
the
front-end
client
running
client
application.
We
also
see
and
use
protobuf
for
the
cli.
The
cli
actually
is
our
primary
user
experience
for
interacting
with
the
service
in
production.
A
So
when
you
create
a
project,
deploy
your
service
you're
using
the
calix
cli
and
just
like
our
console,
we're
we're
using
a
you,
know:
grpc
and
protobuf
for
contract
first
api
development.
Without
a
doubt,
this
has
been
a
a
game
changer
for
us
and
enabling
us
to
move
very
very
quickly
in
introducing
new
changes,
refactoring
things,
as
as
we
need
to
adding
new
capabilities
to
rapidly
innovate
within
the
context
of
these
user-facing
applications.
A
We
also
use
grpc
web
ui
as
part
of
the
cli,
so
this
is
actually
one
of
the
first
points
in
time
where
a
user
of
calyx,
so
a
developer,
who
signed
up
with
an
account,
perhaps
one
of
you,
who's,
checking
it
out
either
right
now
or
after
this
meeting.
A
If
you
built
a
service
and
we'll
do
that
as
part
of
the
demonstration
today,
if
you
build
a
service,
you
can
actually
proxy
that
service.
You
know,
deploy
it
to
production
and
and
access
it
from
local
environment
through
the
grpc
web
ui,
without
having
to
open
up
an
insecure
connection
to
our
production
environment.
So
it's
a
nice
way
for
secure
local
dev
development
without
having
to
again
expose
your
grpc
endpoints
over
the
the
internet
in
I
guess,
in
a
naked
sort
of
way,
if
you
will.
A
If
you
will
again
we're
leveraging
the
core
akka
trpc
library
that
is
found
in
the
open
source
project,
we're
leveraging
grpc
web
and
the
compiled
protobuf
output
files
in
our
front
app
front
end
application
we're
using
again
the
same
protobuf
and
additional
protobuf
and
grpc
endpoints
in
our
cli
and,
of
course,
we're
using
the
grpc
web
ui
as
part
of
the
the
developer
experience
when
testing
out
those
services
and
apis
that
you're
building
with
rgp
trpc
focused
service.
We have some annotations that we've added in, to support tying the protobuf to underlying capabilities in the product, but without a doubt, API-first development is key to our product, and I'm going to dig into that a little bit. I'm assuming — maybe I shouldn't assume — that at least some of you are heavy into protobuf and understand this set of protobuf messages, but I'll go through my example and development anyway. Ultimately, we're sitting down, as I said, thinking about: how do I want to expose my functionality? What API methods am I going to support? What are the inputs and the outputs? And what is the schema — the state model, the state that I want to manage within the context of Kalix? Now, this one shows gRPC alone; there are annotations for HTTP transcoding, and there are also annotations for doing things like eventing.
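As a sketch of what that contract-first step looks like for a counter service of the kind used later in the demo — note the package, field names, and numbers here are illustrative, the Kalix-specific annotations are omitted, and only the standard `google.api.http` transcoding option is shown:

```protobuf
syntax = "proto3";

package com.example;

import "google/api/annotations.proto";

// Command messages: the inputs to the API.
message IncreaseValue {
  string counter_id = 1;
  int32 value = 2;
}

message GetCounter {
  string counter_id = 1;
}

message CurrentCounter {
  int32 value = 1;
}

service CounterService {
  // Plain gRPC method.
  rpc Increase(IncreaseValue) returns (CurrentCounter);

  // The same idea with standard HTTP transcoding, so the method is
  // also reachable as GET /counter/{counter_id}.
  rpc GetCurrentCounter(GetCounter) returns (CurrentCounter) {
    option (google.api.http) = {
      get: "/counter/{counter_id}"
    };
  }
}
```

The service definition is the contract; the console, the CLI, and the code generation all hang off files like this.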
All of that comes through this declarative programming model. Once we define the API, we have to actually choose the state model — assuming I want to create a service that manages data. There are really three state models supported right now: one is event sourcing; another is a key-value store, which I'll demonstrate today; and we also have support for CRDTs — conflict-free replicated data types. And it's really simple to change, to pick and choose — it's not a large lift-and-shift effort if you want to go from key-value to event sourcing, or from event sourcing to key-value or CRDTs, for example. So it's a really nice way to manage your data. Key-value is pretty straightforward — I think all of us can probably quickly grok a key-and-value type of data storage mechanism.
A
It's
still
as
simple
pretty
much
as
that
within
the
context
of
calex,
and
I
think
that's
where
anyone
who's
been
involved
in
event.
Sourcing
has
gotten
really
excited
about
calyx
because
they
just
see
this
as
a
very
simplified
programming
model
that
taps
into
the
power
event
sourcing.
But
without
having
to
have
you
know
all
the
expertise
in
event
sourcing
in
order
to
get
to
get
going
and
then
last
but
not
least,
you
actually
do
something
more
than
just
write
protobuf
in
calyx
we
once
we
have
our
api,
our
state
model
we
can
use.
A
We
can
do
this
from
scratch.
I'll
use
the
code
generation
process
that
comes
as
part
of
the
calyx
product
to
generate
stubs
for
our
functions
or
the
api
requests
that
we
handle.
But
ultimately
I
go
in
and
you
know
for
each
one
of
my
rpcs,
the
the
api
requests
I'm
gonna
handle
within
the
context
of
my
service.
I
can
sit
down
and
write
my
code
now.
A
What's
really
important
to
note
is
that,
within
the
context
of
my
function,
my
logic
done
writing
within
within
calyx
the
the
notion
of
state
is
treated
a
little
bit
differently
than
what
developers
typically
expect.
So
typically,
if
you're
building
an
api,
you
know
you're
you're,
setting
up
a
route
and
you're
going
to
say:
okay,
given
an
incoming
route
with
this
type
of
request,
I
look
for
an
id,
perhaps
or
I
look
for
some
other
parameter.
A
If
their
api
request
is
asking
or
supposed
to
store
data,
then
I'm
connecting
to
a
database
setting
it
all
up
making
sure
all
of
my
not
necessarily
always
in
the
same
code
path.
But
I
think
you
get
my
point
that
in
a
traditional
model,
you're
really
thinking
quite
clearly
about
you-
know
a
route
coming
in
and
then
determining.
If
I
have
to
create
something-
and
then
you
know
saying
is
the
right
place
or
part
of
my
database.
Within the context of Kalix, state is delivered to the function. So if I'm updating my shopping cart, I don't need to worry about taking the incoming ID and fetching the cart from a database somewhere; that's handled automatically by the product. So let me just quickly kick off my build here.
A
What
really
happens
under
the
hood
is
that,
as
an
incoming
request
comes
into
the
calyx
execution,
environment
or
execution
cluster,
we
have
in
essence
what
we
call
a
state
proxy
that
rides
alongside
the
sidecar
proxy.
That
rides,
along
with
the
the
user
code,
that
you've
written
your
business
logic
and
that
state
proxy
is
doing
all
of
the
distributed
state
management
and
other
app
providing
other
abstraction
layers.
A
On
top
of
the
things
like
message,
brokers,
service
meshes
and
like
that
state
proxy
ultimately
takes
that
request
determines
which
entity
which
data
object,
you're
looking
to
interact
with,
and
then
it
passes
that
data
to
that
user
code.
So
it
becomes
very
efficient
and
performant
when
interacting
with
data,
whether
it's
creating
updating
or
deleting
data
within
the
context
of
this
distributed
environment.
A
It
takes
goes
beyond
you
know
the
traditional
model
of
a
stateless
function
if
you
will
or
function
as
a
service
where
you
know
typical,
this
would
be
the
model
where
you
know
event
comes
into
a
lambda
or
an
azure
function.
You
do
something
and
then
you
spit
out
another
event
and
of
course,
if
you
have
to
interact
with
the
database
at
that
point,
you're
doing
that
within
the
context
of
this
stateless
environment,
we
provide
a
stateful,
runtime
environment
and,
as
the
event
comes
in
that
proxy,
that
I
talked
about
at
state
proxy.
A
That's
doing
all
the
heavy
lifting
around
state
management
and
other
operations
gets
the
state
at
the
point
in
time,
puts
it
into
the
user
function
and
then
all
the
user
has
to
worry.
The
developer
has
to
worry
about
in
their
code
is
emitting
that
data
emitting
that
state
and
we
automatically
handle
the
saving
of
it
back
into
the
backend
database.
A
And
I
believe
you
can
see
it
okay,
so
I'm
in
bs
code
I'm
going
to
do
a
I'm
going
to
use
java,
I'm
not
a
java
guy.
I
think
I
use
java
way
back
maybe
15
years
ago,
I'm
using
it
now
more
at
lightbend
for
sure,
I'm
more
of
a
python
developer.
Javascript
developer!
I
have
been
dabbling
in
scala
just
because
I'm
here
and
I've
got
scholar
experts
to
help
me
along
the
way.
But
today
I'll
just
do
java.
This is nothing too elaborate within the context of a microservices environment: hey, I want to increase the value of a counter, decrease the value, and so on and so forth. You'll notice this is part of a file. Of course we have our messages, which in this case represent our incoming commands, or requests: an Increase call to increase the value, a decrease value, a reset value, and a get current counter, right here. This part right here is all about codegen, and it will basically generate — in this case — Java stubs, if you will, that allow us to start coding exactly where we need to be coding, with respect to our business logic. The domain is our state, and this is defined in our domain protobuf. The state model in this case is simple: just a value — a number, if you will — that we'll be tracking as part of a counter.
A
Is
this
counter
class,
which
is
ultimately
the
thing
that
will
allow
us
to
encapsulate
the
business
logic
for
receiving
the
incoming
events
or
commands
like
increase
value
or
decrease
value,
and
then,
of
course,
omit
the
state
to
update
the
database
as
well?
Now,
I'm
not
going
to
be
showing
off
my
typing
skills,
I'm
going
to
be
showing
off
my
copy
and
paste
skills.
A
So
I'm
just
going
to
go
to
a
file
that
I
have
over
here,
since
we
as
you
can
see
lots
of
not
implemented
yet.
But
let
me
replace
that
with
a
more
fleshed
out
example-
and
you
know,
key
parts
here
is,
for
example,
that
increase
command,
so
increase
is
going
to
get
mapped
to
this
increase
request
or
api
call
or
method
and
you'll
notice
is
the
inputs.
A
Are
the
current
state
and
the
command,
so
the
command
again
is
what
we
defined
in
that
protobuf
as
being
the
input
to
the
api
and
the
state
is
that
data
model
that
we've
been
tracking
or
defined
in
that
domain.
Protobuf
and
again,
this
is
that
current
state,
when
it's
passed
into
this
method
as
a
developer,
I'm
always
assured
of
the
fact
that
it's
the
most
current
data,
that's
being
managed
within
the
context
of
of
calyx,
and
so
there's
no
fetch
of
the
data.
A
It's
delivered
to
me
in
the
function
and
in
this
case
basically
just
saying:
hey:
let's
set
a
new
state
to
be
equal
to
the
current
state,
plus
the
value
passed
into
the
command
and
then,
as
an
effect,
we'll
spit
out
in
essence,
an
event
that
will
update
that
state
for
getting
a
current
counter.
It's
you
know
just
basically
building
up
the
right
data
and
passing
that
back
out
into
into
our
java
code.
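Framework aside, the handler pattern being described here is just a pure function from (current state, command) to new state. A minimal, self-contained Java sketch of that shape — note that these records stand in for the protobuf-generated types and are not the actual Kalix SDK classes:

```java
// Illustrates the "state is delivered to the function" pattern:
// the handler never fetches or saves anything; the runtime's state
// proxy would supply currentState and persist the returned state.
public class Counter {

    // Stand-in for the domain protobuf state message.
    public record CounterState(int value) {}

    // Stand-in for the API protobuf command message.
    public record IncreaseValue(String counterId, int amount) {}

    // New state = current state plus the value carried by the command.
    public static CounterState increase(CounterState currentState, IncreaseValue command) {
        return new CounterState(currentState.value() + command.amount());
    }

    public static void main(String[] args) {
        CounterState state = new CounterState(0);
        state = increase(state, new IncreaseValue("test-one", 10));
        state = increase(state, new IncreaseValue("test-one", 30));
        System.out.println(state.value()); // prints 40
    }
}
```

Because the handler is pure, it's trivially unit-testable; the persistence effect lives entirely in the runtime.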
I've got a Maven command — in this case, since we're in Java — that will package up the code and deploy it. We're leveraging containers within our runtime environment, so it's not a case of copying your Java code and pasting it into a text UI in our application; we're using containers to execute the logic — the applications that you develop within the context of the Kalix product. We don't leak Kubernetes to you, so you as a developer don't have to worry about Kubernetes; we're doing all of that underneath the covers. But we are using containers — in this case, Docker — to package things up and then deploy them out into the production environment. So we now see that our service has been deployed.
In this case, again, kalix is the CLI. I will take advantage of this local proxy to attach to that service running in production. So again, if you're building an application that should not be accessed by the public, if you will, you can certainly choose not to expose it and just connect to it through this local proxy — and that's what I'll do right now.
A
That
will
give
us
an
opportunity
to
to
test
out
the
service
in
the
grpc
web
ui,
which
hopefully
has
come
up
in
our
google
me,
and
so,
if
I
do
my
decrease
and
test
one
that
should
return
an
error
because
of
course
we
haven't
implemented
that
per
my
copy
and
paste
code.
But
if
I
wanted
to
increase
the
value
of
task
one
and
let's
increment
it
by
10,
then
we
have
success
and
let's
do
get
current
counter.
A
Because
you
can
see
I've
been
doing
some
demo
prep
preparation,
but
we
increased
it
by
10.
Let's
do
it
for
another,
this
one:
let's
do
it
increase
by
30.
A
Now
we
have
70.,
so
you
know
lots
of
other
things
to
look
at.
I
suggest,
if
you're
interested
in
learning
more
about
calex
and
what
it
can
do
for
grpc
power
development,
then
two
resources
to
go
to
go.
Look
at
right
now
would
be
the
docs.calyx.io
site
so
definitely
check
that
out.
You
can
also
go
to
console.calex.io
to
register
for
a
free
account,
and
that
can
be
done
today
if
you
want
we're
still
transitioning
in
the
in
the
renaming.
A
So
the
actual
calyx.io
site
is
not
active,
that'll
be
available
in
a
couple
weeks,
but
the
actual
documentation
and
the
console
is
fully
available
and
ready
to
be
tried
out
by
people
that
are
looking
to
learn
more
about
building
stateful
applications
within
a
cloud-based
environment.
Without
having
to
worry
about
all
the
complexity
that
you
would
normally
be
worrying
about
when
developing
these
sorts
of
high-end
microservices
applications,
and
with
that
I
will
stop
my
screen,
I'm
just
going
to
leave
up
my
those
two
links
just
so
that
we
have
them.