From YouTube: OpenShift Coffee Break: Taking our Website Serverless
Description
Get your espresso ready for the EMEA OpenShift Coffee Break as we welcome our guest Alex Groom, Specialist Solution Architect at Red Hat, to walk us through a serverless journey on Kubernetes and OpenShift, on how to convert a microservices based e-shop into a serverless and event-driven architecture.
Twitch: https://red.ht/twitch
A
Welcome, welcome back to the OpenShift Coffee Break this morning. This December 15th we're close to Christmas, and that's why we have this beautiful background. Hi everyone, my name is Natale Vinto, I'm a product marketing manager for OpenShift. I'm here with my awesome colleagues, and I would start with my great friend and presenter Taro. How are you today, Taro?
A
Hi Taro, welcome back. And today I also have two awesome co-hosts, my friends Fabio and Andrea. Welcome, Fabio and Andrea, how are you?
D
I'm okay. I'm also not that far away from London, so it's certainly not snowing here, but it's quite a reasonable day for the UK in winter. So yes, I'm doing fine and looking forward to presenting today.
A
Awesome, welcome back Alex; we had Alex on some of the previous shows. I don't see Andrea... there we go.
A
Oh, Andrea is in twice, fantastic. Sorry Andrea, I know you are important, so we wanted you doubled in this setting. Alex was there last time with an awesome demo about a cloud-native app; today we're going to do another awesome demo, a cloud-native app in the serverless way. Does anyone want to start by telling some story about serverless?
B
Yeah, if I compare it to normal deployments, it's just deploying a lot of revisions of your workload, more often, and not having proper metrics, since you have short-lived objects. So yeah, it has its benefits, but there are also some cons that you need to take into account.
C
If you agree, guys: it's a new way for developers to just focus on the business function they need to solve, or the code they have to prepare that fits a specific business function, instead of focusing on the performance of the servers. That could be, as Taro said yesterday, useful for warming.
D
I would agree with that. The naming is a little bit confusing, but hey, I'm a developer. Serverless is an abstraction for developers; it's a different design pattern for them to use to build their applications that allows them some distance from that infrastructure.
D
It allows them something that will scale up very naturally according to load, and equally scale down to nothing, to actually help free up resources for other applications, or potentially free up the whole server entirely, so that you have much lower-cost systems, or can run more on that one system, because you can share out the resources between more applications. So for the developer it's abstraction; for the operational team it's a potential way of reducing the number of servers they need to run.
A
I wanted to say, for the people attending the show: if you have any question, please send it in the chat. We will go today with another awesome live demo, for real, right Alex? It's live.
D
Absolutely, nothing like flying by the seat of our pants, as it were. There's little point in doing these things unless they're live. So yes, we've got some slides, apologies for that, it feels a bit corporate; but amongst all this I've got about five demos to take you through some serverless examples, which will hopefully keep you happy.
A
Awesome. And I wanted to say that this is our first time on the new streaming platform; we're testing stuff here. Thanks to our colleague for helping us with the assets. This graphic is from Alex, it's a very nice Christmas graphic, and if you look on the top right you will see Taro with a red hat on top. So it's something we wanted to do.
A
Yeah, we're playing with the new streaming platform, it's very cool. I hope you enjoy our graphics today. So folks, I think we have some slides before we go into the demo, right?
D
Absolutely, there's a whole pack of stuff that will take, I don't know, just over half an hour to run through, including some demos. But absolutely, please do put some questions in the chat; we'll tackle them as we go, or possibly at the end we'll do a recap and see what we've seen.
A
Perfect, so let's go into these slides that we have today. Alex prepared some slides, and let me remove this caption here so it's visible as well.
D
So welcome. As Natale already said, I'm Alex Groom, a senior solution architect at Red Hat, and today the title of the presentation is actually "An Eventful Journey of Resources and Responsiveness", because that's what we're going to look at: how resources and responsiveness kind of compete against each other.
D
In this particular topic the subtitle was "Taking our Website Serverless", but Natale said the whole thing was too long for his deck, so we just shortened it. And that's exactly what we're going to talk about: how to take something concrete, something we understand, like a website, and make it serverless. What are the pros and cons of that process?
D
How would we perhaps go through that activity? That's the action today, but bearing in mind that resources are actually a key element of this as well. And with that in mind: we tend to believe the internet is clean; there's a lovely picture here of clean nature, etc. And yes, we've significantly reduced the use of paper by consuming stuff online, which is good, but the internet hasn't become carbon neutral.
D
Our digital lives do have environmental impacts, even if we can't see them. When you browse the internet you visit websites, and each website is using power, lots of servers in a data center somewhere, typically, so you're consuming energy. Apparently two percent of our global carbon emissions come from the electricity used by the internet. So it's not a minuscule thing; it's actually something that has an impact with what we do. So yes, we could reduce web traffic; I can't see that happening.
D
We could host all our data centers right next to renewable energy sources; that'd be good. But realistically, perhaps what we need to do is look at the resources we're using for web traffic and for building websites, to see if we can improve our carbon footprint. And this is obviously a very topical subject, with global warming and climate change.
D
So, going back to the topic of the title, which was taking our website serverless: we need a website. Now, I could have gone for a really complicated, intricate website, but that would take a long time to explain and take you through. So here is a simple website that we're going to use today, through a selection of demos, to give you an idea of what we're talking about. Very simple.
D
Natale will have to tell you all about this, because it's featured in the new book that he's just written: a small number of services with a web front end. We've got an orchestrating microservice in the middle here, and two data-driven services; one's called the catalog service and the other is called the inventory service. And what they're delivering is a web front end: a storefront, a retail storefront for our cool Red Hat swag.
D
So of these various components, the catalog service, for example, is generating the information for the pictures, the descriptions and the prices; note they're in dollars. The inventory service handles the stock levels; that's helping to generate these blue pieces of information, which is, you know, how much have I got of each item.
D
Those two bits of data get orchestrated in the middle, in the gateway service, and then rendered by the web UI, and that's how the website is built. So this is the web application that we're going to take serverless. This is the topic, this is the item of the day. But why would we want to take this serverless? Well, first of all, let's have a quick look at what this thing actually looks like for real, in OpenShift terms, and this is the OpenShift topology view.
D
I'm kind of hoping this isn't too new for you guys, but basically we've got a selection of pods. That Node.js one is the web UI. We have a .NET gateway microservice that's doing the orchestration, and the two data services: one's called catalog, and that's written in Spring Boot, and I've got a Quarkus inventory service.
D
A couple of databases over here are providing the data for the various services, and hey, look, no surprise: if I click on the UI, after a little bit of churn, up comes my website. Now, for the purpose of today, just to keep things simple, we switched off all the website caching, because it makes the demo much more predictable. So that's the website that we're trying to deal with today.
D
So it's worth thinking about what's going on here, because if we look at the metrics, the monitoring of this stuff, there's continually not a great deal of CPU use, but the one that worries me most is actually the memory usage, because we're continuously using 700, heading towards 800, megabytes of memory here. And as you can see, the two big chunks, which are the purple piece and the blue piece down here...
D
...those are the catalog and the inventory service respectively. They're the big users of resources, and that's on all the time; that memory is being used. It cannot be shared with any other application while these applications have grabbed that memory. The CPU can be shared between other applications on demand, and in fact the CPU usage is pretty small, but memory is kind of fixed; that's one resource that's hard to share. So that's the introduction to the cool store piece. So why take cool store and make it serverless?
D
Well, I got this request from my boss, and he said: hey, it's only a US website, and I need those resources overnight. So can you shut down that cool thing when the US is asleep? That way the resources that website is using can be used for some other application; maybe there's some overnight testing, maybe there's another development team in India or somewhere that needs those resources.
D
So, quite rightly, he says: why consume that stuff while it's doing nothing? Let someone else use those resources. Now, that sounded like quite a reasonable request. However, he did follow it up with a little caveat which caused me one or two issues. He then said: should one of our employees on the other side of the world need to buy something cool, then they'd better be able to buy it.
D
So I thought about this, and I thought, well, okay, what are my options? First of all I thought about batch jobs, because, you know, I'm an old-fashioned guy; I remember the good old days when everything was a batch job. That could scale stuff down, I could shut stuff down, but I couldn't bring it back up on demand very easily, so that didn't seem like a particularly good option.
D
Now, the topic of the whole discussion is resources versus responsiveness. Yes, I want to save resources; you saw some of those metrics I was showing you a moment ago. But the whole thing about websites is user experience. We spend a lot of effort making stuff as responsive as possible: putting caching in, putting in lots of fast data paths. So basically we will spend resources to make something responsive, and now we're kind of turning that on its head and saying: hey, look...
D
...we want to claim back those resources. What is the impact that's going to have on responsiveness? That's an ongoing theme we're going to look at as we go through this set of demos. But first of all, let's have a quick look at Knative Serving. I don't know how much of this you've seen before, but let's have a quick review.
D
Knative Serving is all about event-driven serverless containers and functions. What happens is there's an event that kicks everything off, whether that's a web request or an event from some other source like Kafka, or even from an eventing platform that we'll look at a little later. So an event comes in, your application starts on demand, and if that event source goes away, then after some period of time your application will disappear. And that's actually great for the type of problem I've got here today.
D
So, just to recap what Serving is: it consists of an application and then basically three other components that are worth thinking about. First of all, there's the configuration; that's a way to separate your application code from how this thing actually touches the infrastructure, and what infrastructure it uses. We've got a route; that's the way you access this application from afar, from external systems. And then, finally, you've got those revisions, which are basically snapshots of previous instances, previous builds of the application, and that can come in very useful if you need to roll back your application at any point; you know, if you have a problem with it, or you find that version three is actually worse than version two, so roll it back. So those are the kinds of components that make up a Knative Service, but essentially events come in and your application runs.
D
So again, to recap, what does a Knative Service give us? Scale to zero, and that's something I'm going to take a lot of advantage of in this particular demo; it saves resources when the thing is not in use. It scales according to requests; that's good, you know, Black Friday comes along and you want your website to go really big. But equally there are rollback revisions, and particularly traffic management built into serverless: you've got this canary-type facility where you can attribute whatever percentage of traffic you like to your new revision, so maybe 10% the first time you try it, and then you increase that traffic load as you go on. Now, that's not something I'm going to talk about much today, but it's just a feature of serverless that traffic management for canary deployment is actually built in. And it's fairly easy to deploy; you can literally just use your existing application, and we'll have a look at that in a moment.
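As a rough sketch of how that conversion and canary traffic split can look from the command line, assuming the Knative `kn` CLI against a cluster with OpenShift Serverless installed (the service name and image here are illustrative, not the ones used in the demo):

```shell
# Create a Knative Service from an existing container image;
# this gives you scale-to-zero and revisions for free.
kn service create coolstore-ui \
  --image quay.io/example/coolstore-ui:v1

# Roll out a new revision as a canary: keep 90% of traffic on
# the first revision and send 10% to the latest one.
kn service update coolstore-ui \
  --image quay.io/example/coolstore-ui:v2 \
  --traffic coolstore-ui-00001=90,@latest=10
```

Each `kn service update` creates a new revision automatically, which is what makes the rollback and traffic-splitting described above possible.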
D
So if I go back to my set of demos and have a look at the topology here: this is my cool store app, with the first serverless service, and as you can see I've got this big square box on the screen, and that indicates that this item has been taken serverless. The other thing to note is all these others...
D
...these are pods, by the way; they've got a nice dark blue ring, and that means, if I click on them (you may not be able to see this particularly well), they've actually all got a pod associated with them. But if I go to the serverless one, it actually says it's auto-scaled to zero; there's no pod running. So when you see this white circle in that square box, that means: hey, this is serverless, and there is no pod running at all. So this is actually dormant.
D
Well, we could, but best not, because the one problem with serverless is that once you trigger it, you then have to wait for it all to go quiescent again, to settle back down. So that's right.
D
The problem I've now got: what I was trying to do is show that the website loads very quickly, based on the fact that the pod has now actually started up.
D
So while we wait for that to go quiescent, we'll have a look around at one or two other things. How did I create this? Well, there are two basic ways to create these serverless applications. The first one: obviously you can go into the application build that is built into OpenShift here, and as you scroll down you've got various options; in this case it's the build from Git.
D
The other thing you can do, from the topology screen, and this is a fairly recent feature, is go and grab your application and say: okay, I want to make this serverless. There's an option here that says: okay, I'm going to use the existing container that you've built and just convert it to a Knative Service. You may need to change the name to something you want to use, but you have these two built-in features in the web UI to allow you to make your application serverless.
D
So, as you can see, the lifecycle of this particular example has now gone black. Basically what that means is the pod is now terminating, so that pod is now actually no use at all for running its application. So I can actually repeat the demo, and I want to give you an idea of how long it's going to take, what the responsiveness of this is like, once you make everything serverless. So let's have another go at kicking this off.
D
So if you have a look at the resource load, you notice that, as the thing starts up, you get a little bit of a bump of CPU usage, and down here on the memory resources (let's make this a little bit smaller, it's a little bit easier to see), I'm not making a whole deal of progress here, because all we've managed to make serverless is this tiny piece, you know, a few, I don't know, 30, 40, 50 megabytes. The main bulk of it is still exactly the same.
D
The problem here is that the web server is only a very small component of the total resource consumption, so when that goes serverless you're only getting a tiny bump and a tiny claim back. So yes, we made it serverless, but we haven't really done a lot to tick that box in terms of resource consumption, so I'm not sure I'm there yet. So maybe we have to continue this discussion, and the next option is: well, can I make everything serverless? Clearly, as you've seen, it's relatively easy to turn those components into serverless components.
D
I mean, you're absolutely right: to run serverless we're running a sidecar in the pod, so there is a resource consumption. And I think what I would suggest is, if your items are really small, and we'll come on to this a bit later, FaaS, function as a service, is probably a better solution, rather than actually taking a whole container and making it serverless. So yeah, there are solutions if you've got small pieces, just functions you want to run, rather than, as in this case...
D
...services. Well, this is a demo, so these are fairly micro services, to be honest. But as you can see, in total the memory usage for this application as a whole is around 700 to 800 megabytes, and if we can make it all serverless, then we can bring that down to almost zero megabytes when the thing isn't actually running. So there is still a benefit here, but you're right, there is a cost to running the serverless framework.
D
The first thing you notice, and you can see it, is that this demo is really quite slow. The pods are starting: it goes blue, then dark blue, and then it starts painting the web framework over here on the right, because the web UI is painting the backgrounds, loading up the JavaScript, et cetera. But the data is still yet to come, and until the data comes it can't paint all the graphics and all the information. So what's going on over here is that Node.js, the web server, talks to the gateway service.
D
That's made a request to the catalog service, so its pod is starting, and now the application is starting up, because it's gone this light blue. But notice over here on the inventory service: it hasn't even started yet, because the request comes down through here to the catalog service, and when that responds, the gateway service then talks to the inventory service. So we've got a delay here while we're waiting for stuff to happen. So far...
D
Let's go and have a look and see if there's anything we can actually claim back from this. So if I go and have a look at the monitoring: you see CPU, nothing, and then bang, up it comes; memory usage, nothing, then up it comes. And this will obviously quiet down as those pods wind down, idle down as it were; the resources will drop. So now I've got great resource consumption, but no responsiveness.
D
Now, what you should be able to see, hopefully, is the split screen; with this one I should be able to run it again.
A
Thank you, Sergio, for this note. And Alex, I'll take the opportunity also to ask, for everyone actually: we've seen that if everything is serverless, it takes, say, one minute to start. Can we warm up something? Can we prepare, I don't know, containers? Can we make it a warm start instead of a cold start?
A
It's a little bit noisy, if you can, please.
A
Okay, cool. So Alex, I'm bringing back the screen sharing now. We should see it: we see the cool store on OpenShift on the left, and on the right we see the application. So thanks, Sergio, for spotting this.
D
Thanks, okay. So that was the first instance, where I've only got one thing serverless, and again that's reasonably responsive, so not so interesting. But let's go back to the one we were just looking at, where everything was serverless. As you can see, things are just beginning to terminate, and then we can run that one again and give you an idea of how long it's taking. And if we look at these, hopefully you see these are still running, and it takes a little while for everything to go quiet; there they go, terminating, terminating. So now we're on to the one that takes about a minute.
D
So you can see this again: if I hit that, and then that, and then that, now you should see both screens. As this loads, up comes the web panel, and as the data requests go through the system, you should slowly see... well, actually, now you have to wait another minute for the website to actually load. So this is...
D
...this is going to be a painful discussion while we wait for everything to happen. But this is the downside with serverless: there is a responsiveness issue, because you have to wait for all those pods to load. And if we wait, and we wait, and we chat gently for another 30 seconds... finally, here comes the catalog service. It's still loading the application; that's why it's light blue. Here's the catalog service; the inventory service now starts; the app has now started to load. As the colors change and get darker, as it goes dark blue, it's pretty instantaneous that the website will paint. In fact, there we go. So that's what I was hoping to show, and apologies: I used the wrong screen sharing mode, so you didn't get to see those the first time around, but now you can, so hopefully we are in business.
B
Everything is nice because you have, like, one node: you pull everything once, then you run it. But what I asked about is how to improve the compute, get it more responsive, when you have everything sleeping at night and then you have the first customers, or Black Friday. You have to understand what's underneath: you have a node, you have a container, you have a container image.
B
You need to figure out how much time it takes to download the container from the registry, how much it takes to run, how much it takes to upgrade and boot. And if you have a really dynamic environment, so you have multiple revisions running, those are different container images; you have scaling nodes, so you have new nodes coming, and those are empty. So there are a lot of things you need to figure out.
D
Absolutely, and you're right, I'm glossing over the scaling side of things; in some respects that's almost a different discussion. I'm massively concentrating on saving resources at the moment, because, let's face it, the easiest way to make this responsive is to pay up and run these things continuously.
D
But again, if this web application were much bigger, the resources that you're using all the time obviously increase in scale as well, and absolutely everything is a trade-off. You have to try and work out: do I need responsiveness? Because remember, the responsiveness issue is only that first use; once the thing is running, it is very responsive. It is no less responsive than it would be in the original example, with just standard pods running. So it's only that initial coming up from idle that is the real responsiveness issue. However, if you happen to be that customer who comes to this website and has to wait a minute for it to load, you're probably going to go somewhere else fairly soon, so that will be a lost customer; but the next person in will get a fairly reasonable service. So you may be over-focusing on this as an issue, compared to all the other things you might want.
C
Yeah, Alex and the other guys: could we say that an event-driven architecture is probably the paradigm that helps and fits better in the context of a serverless application, compared to fully synchronous communication between the microservices?
D
I think, well, it certainly dawned on me that trying to take a website serverless may not be the best use of this technology. But I had a web server, I had something familiar, and I wanted to see how far I could take this particular application with the platform that we have. And that, to a certain extent, is the subject of the next two or three demos that we're going to go through: how we can use more of the platform to try and solve some of the problems that we've got here.
C
Because I was thinking, just for the sake of clarity: if we needed to add a payment service to this website, that could probably be a good candidate for a full serverless implementation, because you don't always need payments, or a shipment service.
D
That's absolutely true. You can look at your website and say, well, the storefront needs to be there all the time, but some of the secondary services that are not used continuously may be better candidates, and people will be able to put up with having more of a delay there. That is absolutely true, and I think, to Taro's point...
B
One good example, actually, to Fabio's point, is that over the last two or three years we have seen enterprises moving from monoliths to microservices and containers, so they are eating the elephant piece by piece. And what I've actually done in my current work: you have a container running that has several different REST endpoints, and you start taking those long-running, resource-consuming REST endpoints and implementing them as serverless.
B
One good example that we have is that in the application there is a single request that takes a minute and a half, when a user analyzes a game. You move that out of the normal enterprise application and implement it as a serverless service, so that we don't need the resources in the main application any more, being used for a minute and a half by that single game analyzer, because that game analysis happens a couple of times a day; it's not constant. So this is kind of: you start with your own containers, you roll out microservices, then you might actually start splitting some out.
D
So, getting back into the flow here: the problem that we've just seen with the serverless app is that everything is cascading. One thing leads to another, leads to another, leads to another. There is no parallelism here; we are waiting on the total delay of starting up all the pods and starting up all the applications. That's why it takes about a minute, and that's not really very good for this scenario. What I want is something like this: these braces, these athletes on the starting line.
D
We want to be able to start things off in parallel, and that's the kind of key feature that we're missing. So what can we do about starting in parallel? This got me thinking: there's another half to the Knative serverless framework, and it's called the eventing platform. So I wonder if we can use some of that to help things out.
D
What we have with eventing: there's usually an event source; something is generating events. They get fed into a broker or channel. I like to call it a bus, but I think that's sort of antiquated technology these days. And that broker can then use triggers, which are just a binding between the broker and a particular application, to forward events on to those various applications.
D
And these triggers can do a bit of filtering, so they don't have to take every event that's on the channel, and that allows this event source over here to communicate with these services in a fairly selective way. And this is what eventing is: it's a framework for distributing events to applications that are running in the OpenShift environment.
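As a sketch of what such a filtered binding can look like with the Knative `kn` CLI (the broker, trigger, event type, and service names here are illustrative, not taken from the demo):

```shell
# Create a broker to receive events.
kn broker create default

# Bind the broker to a Knative Service, forwarding only events
# whose CloudEvents "type" attribute matches the filter.
kn trigger create catalog-wakeup \
  --broker default \
  --filter type=web-wakeup \
  --sink ksvc:catalog
```

The filter is what lets several services share one broker while each only reacts to the events it cares about.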
D
So, just to recap, what does eventing give us? It gives us a framework that's good at receiving an event and waking a service. It doesn't have to be a data event like a REST API call; any event will wake a service. It's fairly simple to inject events into the event bus, and I'll show you that in a moment. So this allows all the services to hear that starting gun, bang, when things start up, so they can all run in parallel.
D
So, very briefly, what does an event look like? Well, I could have shown you this in a variety of different languages; I ended up with curl, because it seems like a fairly universal thing. This is what an event looks like that you send to the event broker. So here is my web wake-up event, as you can see down here.
D
This is then something I can write a trigger for, or reference in a trigger, to say: when you see this, send the event on to the end service. So this is how we communicate events across the broker, and then grab and trigger applications, using something like this: a very simple payload. These are just headers, as you can see; these are just HTTP headers.
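The curl call from the slide isn't reproduced in the transcript, but a minimal CloudEvent POST to a Knative broker generally looks like the following sketch (the namespace, broker name, event type, and payload are illustrative):

```shell
# Send a "web-wakeup" CloudEvent to the broker's in-cluster
# ingress URL; the Ce-* headers carry the CloudEvents attributes.
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/coolstore/default" \
  -X POST \
  -H "Ce-Id: wakeup-001" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: web-wakeup" \
  -H "Ce-Source: web-ui" \
  -H "Content-Type: application/json" \
  -d '{"message": "wake up"}'
```

Any trigger whose filter matches `type=web-wakeup` will then forward this event to its sink.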
D
So the only other thing I need to worry about at this point is decoupling, because if you're going to scale out this application (I've only got four services, but who knows how far it will scale out), we need to worry a little bit about decoupling. It's simple to send events to the broker fairly anonymously, which is quite handy. It's easy to attach triggers; again, that's a simple binding between broker and application, so there's no kind of prior knowledge.
D
So in that curl example I actually had to say where the broker was, but typically, if an application is sending events into the broker, you get the platform to tell that application, via an environment variable, where to send those events. So again it's decoupled, and being decoupled allows you to scale, and allows you to change things quite easily, without being tightly dependent on the platform.
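In Knative this injection is typically done with a SinkBinding, which sets a `K_SINK` environment variable in the bound workload; a sending application can then stay broker-agnostic, roughly like this (the event attributes are illustrative):

```shell
# K_SINK is injected by a Knative SinkBinding and points at the
# broker (or any other sink); the sender hard-codes no URL.
: "${K_SINK:?K_SINK not set - is a SinkBinding bound to this workload?}"

curl -s -X POST "${K_SINK}" \
  -H "Ce-Id: wakeup-002" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: web-wakeup" \
  -H "Ce-Source: web-ui" \
  -d '{"message": "wake up"}'
```

Swapping the broker for a channel or a different sink then requires no change to the application, only to the binding.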
D
Okay, so here we go, attempt three. Everything is still serverless; what I've chosen to do is put in the event broker, and this is now being used to kick off the catalog and inventory services when it receives a wake-up event. What I'm using is the web UI: when the web UI gets its first request in, the first thing it does, before it talks to the gateway, is send that web wake-up to the event broker, which then gets distributed to the catalog and inventory services.
D
So I move on to my next variant here; it should all be getting fairly familiar: my web server at the top, the gateway service, and the catalog and inventory services as before. These are two triggers, and this is my event broker here; this is the console representation of these things. So when the event broker gets a web wake-up call, these triggers will fire and these two will start.
D
Missed again... that's better. As you can see, things still take a fair amount of time. The difference is that the catalog service and the inventory service have already started up, very early; they started probably before the gateway service. In fact, the inventory service is now ready to go, ready to do work, because it's gone dark blue. We're still waiting for the catalog service, because it's slow to start up, but...
D
Well, yeah, that's certainly an option, and we'll get on to some of those options next. So yeah, overall this is better: the catalog service has finished, the web service now paints. But what we're seeing is that we're creating some parallelism, yet this thing over here is still taking too long to start up, and that's why we're still seeing a delay, maybe about 45 seconds now rather than a minute.
D
So obviously when I did these slides I think SpaceX had just launched, but the point is that website launch time is critical. We need fast start times from idle to be responsive, so the slowest component, the catalog service, is preventing the whole site launch; that's what we're seeing, and it's taking anywhere between 10 and 30 seconds, which is bad news. Even though the wake-up event is working, you can't magic away that delay; we're just running delays in parallel, and one of those delays is still too long.
D
So the fix, just to quote Taro (he's already there), is to make faster components, or rather faster-starting components. That's the next thing to try. So here we are, attempt four. Now things have changed a little: the web UI and gateway are pretty much the same, but the catalog service and the inventory service have been rewritten into fast-starting components.
D
Inventory was already a Quarkus app running in a Java VM; I've now used a Quarkus native build instead to give it a much faster start time, so it starts sub-second, as opposed to the 10 seconds it was previously taking. So this is now a Quarkus native app over here. In the catalog service I've thrown out Spring Boot and rewritten the whole thing in Golang, because that now starts really fast as well. So that's the fast-start aspect.
D
The other thing I noticed about the whole setup is that there's a delay waiting for the web UI to start up before it can then wake up the other components. So I'm using a different approach here: I've still got the event broker, and the event broker is now waking up all of these components, but I've introduced this thing called the Kubernetes event source. This is the API server source: API events from the Kubernetes API. What this generates, among other things, is an event to say "this pod has been created".
D
So things are getting a little bit more complicated here; I'll move this out of the way. What we've got: the same pieces as before, still with triggers to all three components. I've introduced here the API source; that's the thing that's generating the Kubernetes events, things like pod create, pod delete, that type of stuff.
D
However, things aren't that simple, because this source is really chatty; it tells you far more than you ever want to know, and I need to extract just the bit that says "this pod has started". So I've added this event filter. This is a permanently running pod that just filters events off the event bus and then feeds them back in when something needs to wake up. In fact, if you look at what this thing is actually doing, you can look at the logs; this is what the events look like.
D
This is what the wake-up event looks like, so you've got a web event there; it's a bit small, maybe, for you guys. Down here, the most important bit for me is "successfully created", because the pod has been created. But you see these little dots here?
D
These are all the events that are not "pod create" for the web event, so they're all there, all the ones I've ignored. You can see the ones that are being filtered out, and that filtering is required, because if you sent all those events to the other parts of the system, it would never die; it would never go quiescent, because there'd be far too much activity.
D
As you can see, it went sky blue pretty quickly, so it's all starting very fast. In fact, I don't know about you, but I would say that's about five seconds from hitting that URL to the website painting. So I'm pretty happy that that's now a pretty responsive site, and as before you can keep hitting refresh and it all still works. So adding in all this framework has actually succeeded.
D
It's coming. I think it's still tech preview, but it's coming, and it's building itself out and becoming more and more stable.
D
What I've chosen to do here is, instead of using that clunky process, which uses maybe 60 or 70 megabytes of stuff and has to keep running, I'm now using a FaaS filter to filter those Kubernetes events and then send the wake-up event into the broker to wake up all the rest of the stuff. So this is an optimization, or an improvement. Personally, architecturally, I think this is a much prettier solution than having that clunky process just sitting there all the time.
D
So what does that FaaS function look like? Well, I did grab you a bit of code. I hope you can all talk Golang, because that's what this happens to be written in, but it's not too hard. Essentially I have one function, and that's all I need to define. The important thing here is not so much the context parameter...
D
The fact that the input parameter is a CloudEvent, but more importantly that the output parameter is also a CloudEvent, basically means that I can take in events using the FaaS filter and place them immediately back onto the broker, without having to send them via HTTP onto the broker channel, which obviously has a cost in time delay. So this is using a Go function because I can write Go. I mean, it could be JavaScript; it could be written in Java; it could be any of a whole bunch of languages that we now support as function-as-a-service. But the reason it's in Go is that the original code I had was in Go, and I merged that into functions. The point is, it doesn't really matter exactly what language you choose: you're basically writing a simple function, in this case to do a filter, and all you're doing is getting an event and returning an event. By doing a couple of simple searches and a bit of JSON unmarshaling as required, I can go through that event, pick out the fact that it's the one I'm looking for (a web event, relating to the web pod starting up), and then down here I can return the response that says: send a web wake-up back onto the broker channel, which will then be triggered to fire those other components.
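The filter function described here can be approximated with the standard library alone. The real demo uses the CloudEvents Go SDK (github.com/cloudevents/sdk-go) and a richer function signature; in this sketch the `Event` struct is a trimmed-down stand-in. The type `dev.knative.apiserver.resource.add` is what an ApiServerSource emits for created resources, while the `web-wakeup` type and the subject matching are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Event is a trimmed-down CloudEvent; the real SDK provides a richer
// type with accessors for attributes and payload.
type Event struct {
	Type    string          `json:"type"`
	Subject string          `json:"subject"`
	Data    json.RawMessage `json:"data"`
}

// handle mirrors the shape of the filter function: it receives an event
// and may return a new event, which goes straight back onto the broker.
// Returning nil drops the event, which is how the filtering happens.
func handle(in Event) *Event {
	// Only Kubernetes "resource add" events for the web pod are interesting.
	if in.Type != "dev.knative.apiserver.resource.add" ||
		!strings.Contains(in.Subject, "web") {
		fmt.Print(".") // the dots seen in the demo's logs: filtered-out events
		return nil
	}
	fmt.Println("\nweb pod created, emitting wake-up")
	return &Event{Type: "web-wakeup", Subject: "web"}
}

func main() {
	events := []Event{
		{Type: "dev.knative.apiserver.resource.update", Subject: "pod/gateway-1"},
		{Type: "dev.knative.apiserver.resource.add", Subject: "pod/web-ui-0001"},
	}
	for _, e := range events {
		if out := handle(e); out != nil {
			fmt.Println("-> broker:", out.Type)
		}
	}
}
```

Because the return value is itself an event, no extra HTTP hop back to the broker is needed, which is the time saving Alex mentions.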
D
They would need to have... I mean, the thing is, I'm processing CloudEvents; that's what the broker channel is processing. So you would need some kind of library, as you say, to actually access those CloudEvents. They are going to be a data stream, but what the CloudEvents API does is allow me to pick into them and grab things like the event subject and the event payload, and unpack and unmarshal them a little.
D
So yes, you would need some kind of extra support. Even in Go I need support for this; that's a package I have to bring in. It's not part of the standard function-as-a-service runtime. Right, so that's what my function looks like. Let's have a look at it in action before we run out of time.
D
Here is function-as-a-service. Things look much the same: I've still got my API source, I've still got my broker channel. I've got rid of that bind-server thing; I don't need it anymore, because it's talking directly, and here's my FaaS function sitting there. Generally speaking, this should behave pretty much the same as what we've just seen. As you can see, they've gone sky blue pretty quickly, which means all the pods have started.
D
One last look at this; again, very much like the previous application. What's going on, if you look at the logs, is that it's filtering, filtering, filtering; that's what all the dots are. Eventually it will find a message that's of interest, unpack it, and when it's one that's really interesting it will print it out and send the web wake-up event down for the triggers to respond to.
D
So let's do a quick summary, a quick takeaway. My website now idles down to zero when it's not in use (yay, success), yet it now responds rapidly when requests arrive from idle. It's nicely decoupled; event notifications are now used to speed up that parallel starting. I could migrate the application to serverless with no code changes, mostly, but I did have to add some fast-start components where they were really needed.
D
I can leverage the system event stream for faster starting, and I can use FaaS to filter broker events very efficiently. So that's where I think I've succeeded. I now have a happy boss: he gets all his resources back when the web app is not in use, which saves him money, but he also has no complaints from his travel users. So he's a happy guy.
D
But he may be onto something here, because last time I looked over his screen he had this Wired article up, and I think it resonates back to where we started. This was the Wired article: "Your website is killing the planet. Modern websites are beautiful to look at, but are they bad for the climate?" So what I would say is: maybe we can use serverless to help us go green, and that brings us back to where we came in. Okay, so that's my slide deck and demos finished.
A
Wow, Alex, to be honest that was a very impressive demo, because there's lots of stuff, and it built to a kind of climax where at the end we also got function-as-a-service. We have a bot that right now is relaying from Twitch; for the moment we don't have any questions from there. If you have any question, please send it in the chat. The question I had, you basically already answered, about function-as-a-service.
A
Well, that speaks for you. I had my coffee shot, and you have to have your coffee shot too. I want to say hi to Tanya, who is following us with our coffee show; Taro also knows Tanya. She's a Red Hatter, now back home at Red Hat, so we're very happy about it, and I want to say hi to everyone following us. Today we have a question: how do I prevent web crawlers from spinning up my pods?
D
It's an interesting question, because you're absolutely right: you've almost got a denial-of-service-attack type effect here, where a spider goes around and crawls all these websites and starts them all up for you. Traditionally the way you used to handle that was to have a note in your website, like robots.txt, to indicate you didn't want spiders; but on the other hand, if you want search engines to crawl your website and index it, you do have to allow that to happen.
D
So I think that's a really interesting question which I don't have any immediate solution for. This is the advantage of me building a demo as opposed to building something for real, because you're right: if something is pinging your website on a continuous basis, it's going to bring all this stuff up every half hour or every 15 minutes, and kind of negate everything you're trying to achieve. It may well be that you have to think about this slightly differently.
D
You
have
to
somehow
phase
your
website,
so
the
front
page
is
handled
in
a
different
way,
maybe
from
the
bulk
of
your
website,
and
we
kind
of
talked
about
that
a
little
bit
earlier,
where
you
say
you
break
down
your
web
application
into
stuff
that
you
can
afford
to
have
running
all
the
time,
a
very
lightweight
sort
of
aspect,
and
then
the
heavyweight
stuff
actually
comes
into
play.
As
you
go
a
little
bit
deeper
into
the
website
that
that
sounds
like
an
approach
that
I
can
think
of
at
this
particular
moment
in
time.
B
Usually you'd have a CDN, Akamai or Cloudflare or whatever, in front taking care of those scrapers, and have an index file for searching somewhere else.
D
If you can take advantage, as you say, of CDNs to do that for you, rather than literally having to break your website into pieces, then that's a far better solution for breaking things up in that way. So, yeah.
A
Oh, Alex, we have a few questions before we close up. Someone is asking: what is the advantage of using FaaS in the cloud or on Kubernetes?
D
The advantage in this particular example of using the FaaS was that it's smaller and lighter weight than a full-blown, full-size container. In fact, there's one thing I didn't actually show you, which is the FaaS component here; let's see if we can find it. The advantage of using FaaS from a developer perspective is all about...
D
As
in
the
abstraction,
basically,
what
you're
saying
is
this
is
the
most
important
part.
This
is
a
set
of
functions.
All
I
need
is
compute
just
go.
Do
it?
I
don't
care
where
you
do
it
just
go
and
execute
this
piece
of
code
for
me.
So
from
a
developer
perspective,
it's
all
about
abstracting
that
that
away
and
being
able
to
break
your
application.
If
you
can
break
your
application
down
into
things
like
that,
then
that's
a
very
useful
construct,
there's
a
very
useful
abstraction
to
have
so.
Some
of
this
is
architectural.
D
I
said
for
me
it
was
worth
30
megabytes.
Maybe
that
isn't
a
huge,
a
huge
win,
but
it
seemed
a
more
architecturally,
cleaner
way
of
doing
things
and
obviously
allowed
me
to
introduce
faz
into
this
discussion
which
otherwise,
I
would
have
struggled
to
do
so.
You
know,
let's,
let's
face
it.
This
is
a
demo
and
that's
why
I'm
trying
to
introduce
as
many
features
as
possible
from
the
platform.
A
Thanks, Alex, we're heading towards the end. There was a Sergio saying he missed the part where Alex linked the event broker with the route in the web container; as I said, he can go back to the recording and see how you linked the event broker with the route. That was his question. I think the route is actually created automatically; so basically he missed how the event broker was linked in the topology view, but I think it comes when you use the topology view and add the link that way, or probably he just missed that part.
D
I
kind
of
think
I
know
what
you're
saying,
but
in
this
example,
the
events
obviously
being
generated
by
the
api
source.
It
goes
into
the
event
broker,
which
is
this
piece
now.
D
The FaaS in this particular example is listening on every single API event that's happening, filtering them down, and then putting back onto the event broker the web wake-up call when it sees that this pod has started. And these triggers, one, two and three... I'm not quite sure where we can actually go and look at them; maybe we can, I'm not sure how much detail... oh yeah, so the triggers that I've added are looking for the type "web wake up" as the event.
D
That's
actually
then
going
to
go
and
trigger
the
other
parts
of
the
application,
so
the
broker
is
working
in
a
dual
mode.
Here,
it's
getting
a
whole
load
of
api
server
events
going
through
it
that
are
being
triggered
to
the
fast
filter.
We
look
at
this
one,
for
example.
This
is
saying:
okay,
give
me
anything.
That's
come
off
the
api
server
limited
to
actually
adding
resources,
so
I
have
done
some
filtering
because
that,
again
anything
you
do
with
triggers
the
less
you
trigger
the
more
efficient
things
get.
D
So
I
have
done
a
little
bit
of
selectivity
here
that
the
api
server
ad
events
are
going
to
the
fast
filter
and
it's
turning
those
into
web
wake
ups
as
appropriate,
so
they
go
back
onto
the
broker.
The
broker
sends
them
across
the
bus
and
the
triggers
will
pick
up
the
web
wake
up
and
send
that
to
these
various
pods
as
required,
or
to
wake
them
up.
A
Yeah, I think, yes, he said in the chat: "to send an event to wake up the whole stack when the web is accessed", so yeah, I think you answered that. So we're at the end of this episode. There's one comment there from louder tv, asking why Fabio is missing that there. Well...
A
So yeah, also Alex and Fabio are in the Red Hat office, so I think it's okay. So, folks, thank you very much for attending today. Alex, it was a really impressive demo. I don't know if you know this resource: what I wanted to share now is an environment where you can try all your serverless demos like this, called the Developer Sandbox for OpenShift. I put the link in the chat.
D
I haven't tried that. The reason I haven't used the sandbox for this demo is that, to make things move quickly, I've got five or six demos set up, and you will not get that much stuff running in the sandbox; there is a limit on the number of things you can run in it. But I would have thought that with the sandbox you could do something like start with an ordinary app and then go through the phases by actually converting it to serverless, and repeat a similar type of experience; you can do that in the sandbox. What you can't do with the sandbox, for example, is build your native version of Quarkus; I think that would be a struggle.
D
You can pretty much get this sort of thing running in the sandbox, and that's when you start hitting the odd resource problem. So if you start with this and then convert step by step, you could go through all the phases that I've shown you, I think, but I haven't tried that. Okay.
D
You're
absolutely
right:
there
is
now
a
k
native
element
into
the
sandbox,
and
that
should
allow
you
to
the
very
least
you
know.
If
nothing
else,
you
should
be
able
to
do
something
like
this
with
just
you
know
one
serverless
box
on
the
front
to
give
you
a
little
bit
of
a
feel
of
what
it
means
to
have
something
come
up
from
idle
from
xero,
and
also
you
can
hit
this
with
a
degree
of
load
and
see
if
you
can
actually
get
it
to
scale
out,
because
you
should
get
some
multiple
pods
running.
D
It
should
auto
scale
out
again.
The
sandbox
is
a
limited
environment.
You
will
be
constrained
for
the
number
of
pods
you
can
run
and
the
amount
of
resources
you
can
consume
so
actually
serverless
is,
if
is
actually
a
really
good
way
of
making
the
best
out
of
your
sandbox
in
some
respects,
because
stuff
will
idle
down.
A
Right, thanks, Alex. If you are interested, you can do a similar demo with this cool store app in Java. You can download this book, Modernizing Enterprise Java; it has the example, the Coolstore app, and there's a serverless chapter where you can try serverless functions with Quarkus Funqy, which is a Quarkus framework allowing you to implement FaaS. You can try it out on the Red Hat Developers site: the same cool store with Quarkus Funqy.
A
So in this case it's only Java. Alex, and if you have any Twitter handle or any link you want to share for this demo or anything else, please drop it in the chat and I will forward it to everyone. Other than that, I would like to remind you of today's appointments, and I hope this shortcut works; let me check. Okay, yeah, go to this link, the Red Hat streaming calendar. Today we have on our schedule, this afternoon, Chat Time, Level Up, and Ask an Admin; you can see what shows we have today. At this point I would like to thank everyone that joined today. First, Alex: thank you.
A
This demo was really impressive, and if you are interested, the recording is on the same link, so you can use the same link on the OpenShift YouTube channel and in our OpenShift Coffee Break playlist to see it again, with this nice Christmas theme. Then I would like to thank Taro for being an awesome co-presenter as usual today, and I also would like to thank our new awesome hosts, Fabio and Andrea, for having joined us. Thank you, folks; happy holidays.
A
If
you
have
your
holiday
winter
break
and
see
you
soon
see
you
next
time
in
january,
and
the
appointment
would
be
january,
probably
the
january
5.
We
will
announce
it
and
you
will
see
in
the
in
the
streaming
calendar
but
see
you
in
january
with
the
next
appointment
for
the
next
calendar
year.
At
this
point
folks,
thank
you.
Everyone
and
stay
connected
that
openshift
tv
this
week
and
have
a.