From YouTube: State of Serverless & Service Mesh - Giuseppe Bonocore (Red Hat), William Markito Oliveira (Red Hat)
Description
OpenShift Commons Gathering, Milan, Italy, 2019
Title: State of Serverless & Service Mesh
Speakers: William Markito Oliveira (Red Hat) | Giuseppe Bonocore (Red Hat)
A: Great. One of the reasons I'm looking forward to the next round of coffee is that this just happened around 40 minutes ago; I'm pretty sure those were meant to be shared, not individual-sized pizzas. But we do what we can, right? It was really good. But let's dive in and start talking about service mesh. I'm sure a lot of you are very curious about this topic; I've already heard from some customers that they are looking forward to running a service mesh on 4.0 and 3.11 and whatnot.
A: We'll dive into that, but for those of you who are, I would say, learning about this topic now, I'll do a quick introduction to what service mesh is actually about. We heard, for example, in the previous talk about all those different microservices they were implementing, and you pretty much end up with an architecture that looks like this, where all the services somehow start to talk to each other.

You are building this kind of complex distributed system on top of another distributed system, which is Kubernetes, and there are some complexities to that. When we started this journey of talking about microservices, you pretty much thought about those services as "I'm just going to write my business logic." But then very quickly you say: well, maybe I want some way to configure all those different services in a concise way, so I'm going to add configuration as another trait to that particular service. Then you think: well...
A: Maybe I need something to do service discovery, because now I have so many services; I need to add that other capability to the microservice too. And then you start adding all these capabilities, all these features, and your microservice becomes not that micro anymore, right? It starts to grow in complexity, and you kind of end up with this microservice where, if you're doing it in Java, you have five or ten other JAR files you have to add, and all the different frameworks you have to pull in. That was very 2014, I would say, and it was also very programming-language-specific. As things evolved, we started to look into service meshes and how we could delegate some of these concerns, the ones we think are really infrastructure-related, to the infrastructure, which is where they actually belong.
A: That's pretty much how we as an industry came up with this idea of service meshes. But how does that magic actually work? There's no magic: it's inside your pod. So again, a quick 101: you have a pod, and inside that pod you can have one or more containers. We have this idea of a special kind of container called a sidecar, and this sidecar is something that can be injected automatically at runtime, so added to that pod.

We then started to move some of those concerns I was discussing before, on the microservice slide, into that sidecar, in a way that is managed by the infrastructure. And again, if something happens, or you need to add another feature to that particular sidecar, it can be added there without actually changing the source code of your service.
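For reference, this is roughly what opting a workload into automatic sidecar injection looks like; a minimal sketch, where the "bookinfo" namespace and "reviews" deployment are hypothetical examples. Upstream Istio typically enables injection per namespace, while OpenShift Service Mesh uses an explicit pod annotation:

    # Upstream Istio style: label the namespace.
    oc label namespace bookinfo istio-injection=enabled
    # OpenShift Service Mesh style: annotate the pod template.
    oc patch deployment reviews -n bookinfo --type merge -p \
      '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'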
A: When you look at the whole architecture of all these different pieces, you get the control plane, which is where a lot of the infrastructure for a service mesh like Istio lives. So you have Jaeger, Pilot, Mixer and, of course, the authentication itself, and you have the sidecar proxies that are injected into every single pod, which receive the policies and configuration values from that control plane. So now you can say: hey, maybe I want to add mutual TLS to all my microservices. Great.

You go there, you change something in the control plane, and the control plane then propagates that configuration to all your microservices. Maybe you want to add a specific routing policy, or some kind of retry configuration, that you want to apply to all your microservices as well. Again, you add that to the control plane, and the control plane propagates it to all your microservices. You don't actually have to do that yourself.
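As a sketch of what those two control-plane changes can look like (the exact API group and version differ across Istio releases; this uses the newer security.istio.io API, and the "reviews" service name follows Istio's Bookinfo sample):

    # Turn on mutual TLS mesh-wide, without touching application code.
    oc apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system
    spec:
      mtls:
        mode: STRICT
    EOF

    # A retry policy, likewise pushed from the control plane.
    oc apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
        retries:
          attempts: 3
          perTryTimeout: 2s
    EOF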
A: You don't have to change your microservice for that particular capability. This, I would say, takes us to what we are shipping as a product: OpenShift Service Mesh. Istio is one of the components; with OpenShift Service Mesh, Istio is of course at the core, but we are packaging other technologies and distributing all of them as a single operator that you can install on the platform. Those technologies are Jaeger and Kiali. Kiali is a visualization tool, and I have more slides on that later. You have Grafana and Prometheus for monitoring and for visualizing those metrics as well, and then Jaeger, of course, for tracing. This whole package is what you get when you install OpenShift Service Mesh; it's not only Istio.

In this picture, for example, this is what Kiali looks like. This is a representation of, in this case, three microservices.
A: You have one here called product page, which is a web app; you have a reviews service; and then you have a ratings service. This data is captured from the actual traffic flowing between those applications. Kiali generates this using live network data, and even the lines, the colors and the status codes that you see are live; for example, they will show you if the communication between the product page and the reviews microservice is unhealthy.

One nice thing you can also see here is the latency of the communication between those services. For example, maybe you have three different versions of your reviews service; you just shipped a new version, let's say v3, and you start to notice some latency now that you've rolled that version out. From this same view, you can add the latency weights to the lines of the graph, and then you can see that, for that particular call, the latency is higher than for the previous version.
A: So now you have a regression and you have to decide how to address it. Again, because you are using the service mesh, you can pretty much reshape the traffic and send everything to v2 and v1 while you're fixing v3, and then later release a new version with the actual fix that improves the latency of that service. This is a very canonical example of what you can do with a service mesh, and again, of how Kiali helps with that.

Another thing you get is a convenient way to see details about a specific service. In this case, you get information about the IP addresses, like the internal IPs, and also what the inbound and outbound metrics for that particular service look like. You get, of course, the status of all the different endpoints that are hitting this particular app. This addresses a very common problem as well.
A: When you have multiple microservices, you are usually not aware of who is actually consuming your service. You're like: oh, if I just roll out a new version here, I'm not impacting that other app over there, right? And you actually are. From here, you can see all the different endpoints that are either used by, or consuming, this particular service. Another very interesting feature is configuring the traffic using weights.

Say I want to send 80% of the traffic to one version, and while I'm doing that, I'm still keeping a very old version around, because for some reason I may have to keep it for an old legacy application that needs to consume it, or something like that. But I'm also starting to roll out a new version, and that new version is already receiving 15% of the traffic. If you want to change that, again, it's a configuration change: you can do it in Kiali, or you can apply some YAML if you want to. Either way, it's something where you don't have to touch your source code or anything like that in order to roll out the change.
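The YAML route looks roughly like this; a minimal sketch using the Bookinfo-style "reviews" service, where the v1/v2/v3 subsets are assumed to be defined in a matching DestinationRule:

    oc apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v2
          weight: 80        # bulk of the traffic
        - destination:
            host: reviews
            subset: v3
          weight: 15        # new version being rolled out
        - destination:
            host: reviews
            subset: v1
          weight: 5         # old version kept for a legacy consumer
    EOF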
A: You also have some other capabilities, like, of course, enabling TLS and adding routing rules, also from Kiali; this is a new feature coming out that is also available now in OpenShift Service Mesh.

When you add all these different components to an architecture diagram, you get something like this, at least at a high level: of course you have the infrastructure, and you have OpenShift itself, but on top of it you have the service mesh handling the traffic for all the different services that you have, in any kind of application. This is also very, very important, because quite often the solutions that you embed in your microservice are specific to a given programming language.
A: And that's pretty much the summary of the service mesh side. One thing I forgot to mention, which is critical and super important: service mesh is GA in 4.2. We announced the GA last week. This is something that many, many customers had been asking us about for quite a while, so I'm pretty happy that we're making it available now. Please give it a try; it's available in OperatorHub. If you have an OpenShift 4 cluster running, you can pretty much go there.
B: Thank you, very interesting. As we are going to see, the service mesh is actually one of the ingredients, one of the components under the hood of the whole serverless strategy of the OpenShift product. So, first things first: for many of you, when we think about serverless and AWS Lambda, this is what comes to mind. Is it just glorified cgi-bin? Well, it's actually not, but if you think about it, the thinking behind it is more or less the same: you spin up a process, you take some event, and then you produce some output. Of course we did better than cgi-bin, because we worked on security, we worked on scalability and visibility and that kind of stuff, but the concept is pretty much the same; it's not something new as a concept.

And indeed, this is the conceptual model behind serverless; many of you already know it. There is an event flowing in, and then there is a function, although we will see that it is not just a function,
B: processing this event, processing the payload, and then you produce the result. The advantage of this model is pretty obvious: you spin up your computational power only when you need it, so you save resources and you optimize your workload. That's pretty interesting for many kinds of use cases.

I'm not going to go through the full list, but of course, if you have some kind of variable workload, like dispatching files, or basically any kind of application that does nothing for 80% of the time and then has a work peak, this is something that fits very well into the serverless scenario. And as I was saying, there is actually a bit of confusion between serverless and Function as a Service.
B: Serverless as a concept is broader than the Function-as-a-Service thing, FaaS. Saying that Function as a Service is serverless is more like saying that a square is a rectangle: yes, there is some relationship, but it's not the whole story. And this is also true when talking about microservices and containers, because we will see that Kubernetes provides you with a broad ecosystem of building blocks that you can run in a serverless mode.

So, in order to build a modern application, a serverless application, you need a lot of different things, topics and concepts. Starting from the very bottom: you will need an infrastructure, you will need to provision the computational power, and you will need to schedule your workload. Then, going up to the next step, you will need some kind of traffic routing and network resiliency, and of course, as many of you can imagine, the business logic, which is what we are actually going to do with all of this. Then you need some support in terms of a DevOps toolchain.
B: As many of you, many of us, call it: continuous integration, continuous delivery, GitOps. You will need some kind of event orchestration in order to complete your building blocks, and then, on top of that, you will have your own development pattern, the application itself. Zooming out to the CNCF landscape, you will see the full landscape, so not just the serverless stuff, but all the Cloud Native Computing Foundation components.

And if you try to map these concepts onto implementations in the full CNCF landscape, you will see that there is more or less a one-to-one relationship. This is what Red Hat sees as the implementation of all those concepts. For the provisioning, for the infrastructure running behind the scenes, Kubernetes is of course the underlying foundation for everything. Then, in terms of traffic routing, security, network resiliency, circuit breakers, and all that kind of stuff: Istio, as we were saying.
B: The service mesh is a very important point. Then, for supporting the whole developer toolchain, in particular pipelines, we are going to see some more details about this project. You will see the cool logo with the cat; the project is called Tekton, and it used to be part of Knative, but you will see it in a bit.

On top of that, there are other building blocks for the typical features of a serverless application, so the scale-down to zero and the spin-up of new containers, and Knative provides you all the building blocks for that. And, of course, on top of that, you will choose your own pattern, language and container to run. In our view, an important role will be played by Quarkus.
B: Many of you will recognize the Quarkus logo and the camel. There is no time today to talk about Camel and Quarkus, but there are very, very interesting projects happening in the community, like Camel K, that will fit very, very well into a serverless architecture. This is the whole picture. I would like to highlight that, from an infrastructure point of view, you have of course Kubernetes and the OpenShift Service Mesh, and an interesting logo to take into account is KEDA.

KEDA is a project for running Azure Functions on-premises. So you have your usual function; you write it; you can run it on the cloud, and you can run it on-prem if you want to, because it's mediated by OpenShift. On top of that, as you see, you may choose your own language: you will see the Azure Functions logo there, but it may be Java, it may be Go. We are working on Cloud Functions, our own open specification for functions. And, of course, there is another important point.
B: We were looking at it in the first slide: the eventing part. A very important topic of the Knative, of the serverless architecture, is having events that trigger actions in the rest of the platform. An important point here is OperatorHub, because you will see that by using operators you can plug your own things into the event infrastructure of Kubernetes.

Those are the principles of the architecture: it has to be distributed, API-centric, born to be multi-cloud (meaning public clouds and private clouds), scalable by design, secure, event-driven, disposable and polyglot. So quite far from that humble cgi-bin; we are doing better. But I will hand it over to William, who is probably the most important person in the Knative community at Red Hat.
A: That was very nice, by the way, the way you delivered those slides. Thank you for doing that; that was pretty good. Let's look a little bit at the Knative project. If you've never heard about Knative, I'll talk more about it in a bit, but I think the first thing we'd like to highlight is all the members of that community and how they stack up in terms of contributions; there's a link there, but you can see it by the number of contributions.

Diving down into Knative and explaining what it is: as was said already, Knative started with, let's say, three modules. One of the modules was Build; Build evolved into Pipelines, and then Pipelines evolved into its own project, which is Tekton, and now it lives under its own foundation, alongside Knative. That left us with Serving and Eventing. Serving is the module responsible for the auto-scaling part.
A: It's also where, for example, we integrate, we plug in, a service mesh, Istio, and that's how you can get some of the same benefits of the service mesh in your serverless applications as well. The other module here is, of course, Eventing. Because, again, you have those applications and you can serve them and whatnot, but the most important thing here as well is how you are going to receive events, and how those events are going to be sent to those applications.
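To make the eventing side concrete, here is a minimal sketch of a Knative Eventing Trigger that filters events from a broker and delivers them to a service; all names and the event type are hypothetical, and current releases use the eventing.knative.dev/v1 API (early ones used v1alpha1):

    oc apply -f - <<EOF
    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: payment-trigger
    spec:
      broker: default
      filter:
        attributes:
          type: com.example.order.created   # only deliver this event type
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: payment                     # the Knative service to call
    EOF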
A: Looking at the user experience, there is also a CLI coming from upstream, called kn, and using the CLI, the way you deploy a serverless application is very straightforward: kn service create, you pass an image, and there you go. As params for that particular command, you can specify, let's say, the number of instances that you want running, like maybe you want to limit it to 10 or to 100, and you can change the compute settings of that application as well.
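A hedged sketch of that flow; flag names have shifted between kn releases (for example, --max-scale became --scale-max), and the service and image names here are made up:

    kn service create qrcode-generator \
      --image quay.io/example/qrcode-generator:latest \
      --limit memory=100Mi \
      --scale-max 10
    kn service list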
A: This is something very different compared to, let's say, the more traditional FaaS model that you see with other providers, because usually the FaaS model is a one-to-one relationship: you have one request, you have one instance of that thing running; you have two requests, you have two instances of that thing running. Here you have a bit more flexibility, which works well, for example, for a completely stateless application where you're just serving something, let's say a web app.

You can also, of course, set limits for resource consumption, CPU or memory, as well. And doing all those things from this one command is, for someone who might be getting started with Kubernetes, a very intuitive, easy-to-use experience. To achieve something similar in, I would say, vanilla Kubernetes, you'd probably have to be changing three or four different YAML files and learning a bit more about all these Kubernetes constructs, which have their own learning curve.
A: This, I'd say, streamlines that experience a little and puts it together in a way that makes more sense from the perspective of a developer who is just starting with this project. If you put side by side a comparison of a Kubernetes deployment and a Knative deployment (the YAMLs for these were generated from the CLI, for example), you'd get something like this. On the left you have Kubernetes, where you have the deployment description, then your route, then your service, and you're specifying certain things there, and you end up with about fifty-three lines of YAML. On the other hand, you have the Knative description of the same service with almost half the lines and more functionality, because with that one you are also consuming the bits from Istio, for example.
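For reference, the compact Knative side looks roughly like this; the "guestbook" name and image are placeholders, and early Knative releases used serving.knative.dev/v1alpha1 rather than v1. Knative fills in the deployment, revision, route and autoscaling pieces itself:

    oc apply -f - <<EOF
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: guestbook
    spec:
      template:
        spec:
          containers:
          - image: quay.io/example/guestbook:latest
            resources:
              requests:
                memory: 100Mi
    EOF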
A: The other interesting thing is how you can take your applications that are deployed in Kubernetes today and migrate them to that model without changing anything in your code. This application here, specifically, is a container that was built, we joke about it when we do this demo, like an application from the 2000s: just a front end, a guestbook app built in PHP. And now we're migrating that app to a serverless app without changing a single line of code, or even rebuilding the container, just by changing the way I'm deploying it.

Looking a little bit at the roadmap: we are about here. We actually just announced our tech preview this week; it's available in OpenShift 4.1 and 4.2. We had shipped many Developer Previews, for very select customers that were already working with us and interested in this technology; they had access to the Developer Preview, and now we are going for the tech preview.
A: We intend to ship another tech preview still this year, and then we have plans to take Knative, at least the Serving bits, to a GA state, either by the end of the year or next year. Again, we are working with those communities upstream, and as you saw, there are a lot of companies collaborating on that project. As you might imagine, sometimes we get into disagreements about what an API is going to look like and what its signature is, and we spend months and months debating what we're going to call some object, so that can cause delays. But we're pretty confident, at least for Serving, that we are in a good state: we all agree it is solid and stable enough to consider for a GA product.
A: So we have prepared a demo, and we have a video of it here, because we're a little concerned about the connectivity; I'll play it in a bit. Just to set the context for the demo: I'm not sure how popular this is here, but I've seen increasing use of QR codes for many things. It probably started at airports and whatnot, but it's used for shopping as well: you go to a kiosk, you scan your product, you press a button, you get a QR code, you pull up your phone, you scan the QR code, you press pay, done, paid. You're done. We joke that it's a serverless payment system. This is increasing in popularity, and I'd say it's a very interesting use case for serverless because, again, what's the scale for this system? Imagine that you have multiple stores, and you have the backend for this system running: you have no idea.
A: So what we did is break this into three different microservices, Knative services in this case, running as serverless workloads. We are deploying the QR code generator; one service representing the kiosk, a mobile app that is going to read the QR code; and then the payment service that, in this case, is going to reach out to a third-party system called Stripe. It's a payment service, very popular in the US, that is going to actually take your information and effectively process the payment.

Those are the different apps; let me see if I can play the demo here. It's running on an OpenShift 4.2 cluster. Let me pause right there. The first thing here is that kn service create that I mentioned before. I'm already setting some memory limits; for example, I'm requesting only 100 megabytes of memory for this pod, because it's small enough, it's very limited. If you think about the FaaS model, this almost behaves like a function.
A: Even though it's not a function, it's a full-fledged microservice, a full-fledged app. And as the service is created and deployed behind the scenes, you see the developer console in OpenShift 4.2 creating, spinning up, that service. You click on that link, you go to the route, to the URL for that service, and you see the QR code that was generated. I'm going to generate a new one live, because, again, this is a video: you hit enter and you get a different QR code each time.

Let's see. Great, and now I'm creating the payment service. That's the one responsible for actually talking to Stripe, again, the third-party company offering the payment system. A very similar thing; nothing special about that one. And I'm creating the store application which, in this case, has a connection to the payment service. I'm just passing that as an environment variable, because that allows me to change that application, or the endpoint, whenever I need to, without changing my source code.
A: And payments are being processed. So now it's actually making a connection to the payment system, and the payment system is reaching out to the QR code. And here you have an order number and the amount that was processed; you pretty much get the idea. Then I'm just going to repeat that same flow with a different QR code, just to illustrate that, again, this is an actual live application running. I'm going to skip forward here, so we stay on time. There you go: it processed a different value.

Now what I will do, just to illustrate another feature of the developer console: this is a way for you to import a project from Git. It can be any project; in this case I'm picking even a Heroku example. There's nothing special about the application; in this case it's a Node.js app. And then, as part of the developer console, I can specify that the application is going to be a serverless application.
A: Just by using that checkbox, behind the scenes the system is doing all the heavy lifting to say: oh, you don't want to do a Kubernetes deployment, you actually want to deploy a Knative service. And in this case, because I'm starting from Git, I'm actually going to build the application as well, using the web interface. Those same params from the CLI are available, and you can tweak how that application is going to scale using the UI as well.

Meanwhile, you can see that the previous application I deployed, because it's been idle for a while, has already scaled down to zero, and the build is still running. So let's let the build run. Now I'm just going to fire a performance-testing tool, just to show you how that service is going to scale, just by hitting that URL: I'm sending ten concurrent requests, with ten threads, to that service.
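One way to reproduce this load test yourself, as a sketch; the tool here is hey, whose -c flag sets the number of concurrent workers and -z the test duration, and the URL is a placeholder:

    hey -z 30s -c 10 https://myapp-myproject.apps.example.com
    # Watch the Knative autoscaler spin pods up and, once idle, back to zero:
    oc get pods -w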
A: Let me fast-forward a bit. The benchmark is done, the test ran, and you see that it scales back down to zero again. And last but not least, that build completed; you see the status of the build here. The Node.js app is being created, and I'll just hit the URL, and there you go: you have your Node.js app deployed as well. Nothing that fancy about it, but this was just one example. Let me pause here very quickly; that was using the dev console.
A: So, to summarize: when we talk about OpenShift, of course the first thing we think of is Kubernetes, and that's one very important core component of what we have. But there are many other services, many other add-ons that we are adding on top of the platform that, I would say, deliver the full, complete picture. In this particular talk, of course, I explained OpenShift Serverless, but you have other things, such as OpenShift Pipelines, which is based on Tekton; OpenShift Service Mesh, which is based on Istio, with SMI as well; and the OpenShift console. You have all these different services that are part of what we call the core platform, specifically targeting Kubernetes, plus the things on top of it that you can add as operators. In my previous talk I also mentioned a little bit about OCM, the OpenShift Cluster Manager, and again you can see the whole picture: using this, you can deploy on any cloud provider.
A: Let's also talk briefly about Azure Functions and KEDA. This is a very interesting project that we did in partnership with Microsoft, and it's a way for you to run Azure Functions on top of Kubernetes or OpenShift. The source code is available, and there are a bunch of tutorials that let you get started very quickly. The idea is to use KEDA as a complement to the stuff that we're doing with Knative.

For example, maybe you want to consume an event source that is available in Azure, like Azure Queues or something like that. That's one event source, for example, that is not available today in Knative; you can do that using KEDA. But also, you may not want to just deploy a microservice; you may want to deploy a function. You can create a function using Azure Functions.
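As a hedged sketch of how KEDA drives that scaling, here is a ScaledObject that scales a workload on Azure Queue depth; all names and the queue connection details are hypothetical, and the API group was keda.k8s.io/v1alpha1 in early releases before moving to keda.sh/v1alpha1:

    oc apply -f - <<EOF
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: order-processor
    spec:
      scaleTargetRef:
        name: order-processor     # deployment to scale up and down
      triggers:
      - type: azure-queue
        metadata:
          queueName: orders
          queueLength: "5"        # target messages per replica
    EOF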
A: It allows you to scale up and down, as you saw in the demo, and of course it can run pretty much any containerized workload. It's not only for functions because, again, as Giuseppe said, serverless is more than that: it's a trait that you can apply to a variety of workloads that you might have. And I would not be happy to go to all the customers that I advocated microservices to, for many, many years, and say: you know what, now you have to rewrite everything as a function in order to leverage serverless. That's nonsense. This is something that, I would say, kind of proves that, and works much better.

I guess that's it. If you want to learn more, go to the product page; the documentation is already there for our tech preview.