Description
London OpenShift Commons Gathering 2019 State of Serverless
Kubernetes, Istio, Knative, Kiali and beyond
Matthias Wessendorf - Principal Software Engineer, Red Hat
Lucas Ponce - Software Engineer, Red Hat
A
So how many of you have heard of serverless? Good, another new hotness. So we're bringing up a couple of the core Red Hat folks that are working on these technologies to talk about serverless and its relationship to Kubernetes, Knative, Istio, and a little Kiali in there too. At the end of this next segment you ought to know what all of those are and, if not, you can ask them questions in the AMA section.
B
We are here to talk about the state of serverless. One of the big topics there recently is Knative, and we have a Knative introduction for you. Knative, under the covers, builds on top of a lot of existing primitives from the Kubernetes and Istio communities, and you will learn how Knative actually leverages Istio in order to make application development simpler. So serverless is not just limited to functions: with Knative it's fully application driven. The first part of the talk is actually done by Lucas.
C
Thank you. How many of you are using microservices in your organization? OK. And how many of you have implemented a service mesh in your systems today? OK, cool. So, why a service mesh? Let me just do a quick introduction to what we're doing. What is happening now in this industry is that we are moving from monolithic applications to microservices that are running in containers.
C
Now, wait a minute: what is happening is that, with these new microservice installations, we are creating a lot of new components that need to communicate with each other over the network. Just to give you an example: in one of the meetings with a customer in my country that is planning to deploy microservices, they said, "OK, we are planning to move from 300 applications in our systems to more than 7,000 new microservices." That is a lot of new communication between components. And OK, I'm a developer, so what do I need to do?
C
What is going to happen now? How am I going to make these services communicate with each other? When I was working with my application server, the application server was responsible for injecting the services or components into my application; I didn't have to care. And now you are telling me that my microservices need to talk with other microservices. Do I need to add more try-catch blocks around every call to protect my component? What happens with the microservice design when a component is not working, or failing? How do I deal with that?
C
Okay, the concept is this: if I have a service A which is calling service B, then C, then D, but service D is failing, I don't want to have the whole call chain blocking and delaying. I want to fail as fast as possible and notify service A that the call failed, so it doesn't create problems in my internal services. And here is where the service mesh enters the picture.
C
Istio is an implementation of a service mesh. Basically, it works with the concept of adding a proxy sidecar to all my pods, so every time that I make a deployment, a new sidecar (a new container with a proxy) is going to be co-located in my pod. This proxy, whose implementation is based on Envoy, is going to capture all the incoming and outgoing traffic
C
to my pod. It is going to apply the traffic enforcement properties, and it is going to communicate with the Istio control-plane components, which are responsible, for example, for the service discovery, for the traffic rules, for reporting the telemetry and for the security. This is how we are going to implement the service mesh; let me show it in another picture.
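As a minimal sketch of what this looks like on the Kubernetes side: a plain Deployment gets an Envoy sidecar co-located in its pods when injection is enabled. The annotation below is Istio's standard injection annotation; the names and image are illustrative, not from the demo.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
      annotations:
        # Ask Istio to co-locate an Envoy proxy container in every pod
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: service-a
        image: example/service-a:v1   # illustrative image
```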
C
How does this all work? In my pods we are going to have the proxies, and the proxies are going to communicate with the core components of Istio. For example, Pilot is going to collect all the configuration that I am writing, and Pilot will be responsible for distributing all that configuration to my pods, to build the traffic routing between all the proxies.
C
OK, Mixer is going to collect all the policy enforcement and the telemetry from the components. Citadel is going to be responsible for the security and the encryption of all the communications. Galley is going to check all the configurations. And a component like Kiali is going to be responsible for the observability of all of this.
C
Yes, okay, let's go a little bit deeper into the concepts of Istio networking. Istio networking works with two main objects. The first is the gateway: a gateway is how to expose my services to the external traffic. The next concept, the virtual service, is how I can apply smart traffic routing in my system: how I can say, okay, I want this service to communicate with version one.
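A hedged sketch of those two objects, using the `networking.istio.io/v1alpha3` API that Istio shipped around this time; the host names and service names are illustrative:

```yaml
# Gateway: expose the mesh to external traffic on the shared ingress gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "myapp.example.com"
---
# VirtualService: route traffic for that host to version v1 of the service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "myapp.example.com"
  gateways:
  - myapp-gateway
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
```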
C
Destination rules are going to implement the fault-tolerance policies: what happens with load balancing, with retries, with the delays that I can add to my pods, or how to implement a circuit breaker between my services. And the service entry is going to register in Istio all the external services that we are going to call from outside the mesh. With these components we can build, for example,
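A sketch of a destination rule that defines subsets for two versions and adds a simple circuit breaker via outlier detection. This assumes the `v1alpha3` API; the thresholds and names are illustrative, not values from the talk.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 10   # limit queued requests per connection pool
    outlierDetection:
      consecutiveErrors: 5            # eject an endpoint after 5 consecutive errors
      interval: 30s                   # how often endpoints are scanned
      baseEjectionTime: 60s           # how long an ejected endpoint stays out
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```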
C
traffic routing patterns like, for example, the well-known canary deployment. A canary deployment is the ability to deploy, in a transparent way, a new version of my service, and with Istio I can do smart routing in order to limit who is going to talk with the new version.
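The canary pattern described above can be sketched as a virtual service that sends a specific user group to the new version and splits the remaining traffic by weight. The header name, user value and weights are illustrative assumptions:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-canary
spec:
  hosts:
  - myapp
  http:
  - match:
    - headers:
        end-user:
          exact: beta-tester     # illustrative: a selected user group tries the canary
    route:
    - destination:
        host: myapp
        subset: v2
  - route:                        # everyone else: 90/10 weighted split
    - destination:
        host: myapp
        subset: v1
      weight: 90
    - destination:
        host: myapp
        subset: v2
      weight: 10
```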
C
For example, I can limit the new version to a certain group of users, and I can validate whether the new version is correct, is good, and then I can either roll it out completely or roll it back without ever impacting the production system. Also with Istio we can apply another typical pattern, A/B testing, or a mirrored (shadow) deployment: I can just put up a new version and compare how it performs against the other one. OK, and what about observability?
C
We are saying that with microservices we are adding new complexity; we have more and more services, so it's clear that we need some kind of observability, so that we can see what we are doing. This is what the Kiali project is doing. Kiali is now part of Istio, like a new add-on that you can start in your service mesh, and basically the goal of this
C
project is, on one side, to tell you what your microservices are doing: how they are communicating, what the failures between them are. That's the focus on all the telemetry aspects. And it also gives you the ability to explain how your service mesh is configured: where I can spot potential problems or misconfigurations, the kind of scenarios that I need to fix. Okay, that is it.
B
All right, yeah, the second part of the presentation is covering Knative. A few hands were up when the term serverless came up; anybody already using Knative, heard of it, planning to adopt it or evaluating it? Okay, I guess I have something for you then. So yeah, what is Knative? Knative is one of the new serverless projects out there. It was launched in summer by Google, and Red Hat is one of the participants working with Google and the upstream community on driving Knative forward.
B
It's based on established industry standards: Kubernetes is the infrastructure layer, and it also leverages tools like Istio for routing traffic to different services, and you will see later in action how that actually works. So it comes with the mindset of bringing the experience from Kubernetes more fully to application developers as well, so that you can basically run your serverless applications on top of Knative. Knative itself has three main components. There is a Build component, and the good news here:
B
it's not yet another build tool. It's actually more a standardized, or more common, API, which has hooks and configuration mechanisms into different build templates. So when you use the Knative Build facility, you can actually trigger your Knative build, and under the covers there are adapters that, for instance, make sure that your container is built using OpenShift Source-to-Image. The community has a bunch of build templates up there in the repository, for instance Buildah, and others as well.
B
Another interesting component here is the Serving part, and the Serving part is a really simple way to deploy your applications to a serverless platform. The term serverless here really also contributes to the concept that your application can scale to zero; we will see that later in action as well. Serving has an autoscaler and an activator, so when there is no traffic for some time, your HTTP server application is torn down, and when there is new traffic coming in, the activator brings it back up.
B
The third component is Eventing. I have a demo later which basically walks you through an end-to-end deployment where data is triggered from a data source and routed to different services, so with Knative Eventing you can build your own kind of data pipelines or data workflows. As you see here, in Knative Eventing there is an API type for an event source. This can be something that you either implement yourself, or you use the existing event sources from the community.
B
It also has the feature of a ContainerSource, so you can run your own images. This is where you, for instance, would call into your third-party system and then read data and forward it to an HTTP service. So, for instance, you could write your own container source that's reading topics from Apache Kafka, and then the ContainerSource will run your image for you, and the image basically reads every message from Kafka, transforms it into CloudEvents, and sends them to a construct in Knative
B
that's called a channel, which the two green lines here represent. And then you subscribe a service, which is a Knative Serving service, so at the end of the day an HTTP server application. Once you subscribe your service to the particular channel, there is a Kubernetes controller running inside of Knative Eventing that is reading all of the messages from your channel and is sending them to the particular HTTP application that we have there. So now you can use this mechanism to build some fancy pipelines.
B
For instance, my first service here is reading some data, then it is doing some transformation, and it is returning some filtered result, and returning in this case means you basically send an HTTP response to the invoker. Then the Knative subscription controller that manages the subscription of this service to that particular channel knows how to deal with that when a service does return an HTTP response object, for instance a JSON document that is a CloudEvent.
B
You can then define in this subscription that the return value of the particular service is going to a different channel. And then you can have multiple services subscribing to this channel, and you can basically continue a pipeline where you route events through a more complex processing engine, and at the end of the day you update with the last service.
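This chaining is expressed in the Subscription object itself: one subscription's reply can point at another channel, which the next service subscribes to. A hedged sketch, assuming the `v1alpha1` Eventing API of the time and illustrative names:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: transform-sub        # illustrative name
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: input-channel      # illustrative: the channel this service consumes
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: transformer      # illustrative: the service doing the transformation
  reply:
    channel:
      apiVersion: eventing.knative.dev/v1alpha1
      kind: Channel
      name: next-channel     # illustrative: where the HTTP response is routed next
```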
B
This last one is not returning a result; it can update the database. Or you can also go directly to the database if, for instance, this route is more for short-term analytics and I'm just fine with storing my results in the database. So with the Knative primitives that are there today, you can already build some complex data pipelines, where you basically run Knative Serving services (these are the purple boxes) connected to a database or data source through Knative channels.
B
Silence is acceptance. All right, I have installed Knative through an operator that we have, and it has a lot of stuff installed here. So it comes with the Red Hat Service Mesh, which comes with Istio, which also comes with Kiali for visualization. It has a few Knative namespaces: knative-build, knative-eventing and knative-serving. And I am now in my default project; as a preparation, I'm going to watch my pods.
B
So this is a definition of a Knative Serving service, or application. The main API here is serving.knative.dev, and then there's a type: it's a Service. I give it a name, and this one is named dumpy, because it's just a Golang-written message dumper: every request that goes into the HTTP server is dumped on the console. I also have some more information here inside of the specification.
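A minimal sketch of such a manifest: the `dumpy` name is from the demo, but the registry path is illustrative and the exact API shape varies between Knative releases (`v1alpha1` with `runLatest` was current at the time).

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: dumpy
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            # Illustrative: a previously built image pushed to a registry
            image: registry.example.com/demos/dumpy:latest
```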
B
The interesting part here is that I have to provide a container image. In my setup I had stashed a previously built container on a registry, and I'm using that. I could have also included here some information about the build system: with Knative Build, Knative could have gone out with the Knative Build adapter, or plugin, for OpenShift Source-to-Image.
B
It would basically read the source code repository that we have for this very trivial HTTP server application from GitHub, build it, turn it into a container and then automatically provision it, so I wouldn't have to do anything with the markup here. Now, what I'm doing here is just an invocation, and you already see the Knative autoscaler coming into place.
B
So I have a curl request, and whenever my pod has been provisioned and is completely initialized, then it is in the running state here, and then you get back an HTTP response. So the Knative activation mechanism was basically ensuring that the pod gets reactivated. I can generate a little more load; the endpoint is so fancy, it has this health endpoint which is printing some output. Now I can show you this in the visualization.
B
So what we see here is that the pod of my dumpy service was activated using the Knative Serving activator, which is sending the request there. And then from my curl, because I was using a special header with the actual resolved name of my proper service, the Istio system routes the traffic to the particular endpoint. And this is the Kiali view of what's going on behind the scenes, where Istio is taking care of routing the traffic to my application.
B
That's running there as well, in conjunction with the Knative pieces for the scaling part, in this case the auto-activation. All right, I will pause this now. If I would just stand here for five more minutes, what we would see is this guy terminating, because there's no traffic anymore, so it's scaling automatically to a zero-sized deployment.
B
So that's one of the models of serverless, pay-as-you-go, and that's also baked in. And, as you saw there, the few lines for describing your application are pretty trivial compared to doing it by yourself with Kubernetes and its Ingress routes and stuff like that. So that's a very nice and more developer-friendly API, and it was one of the motivations: giving developers powerful tools here. Okay, we don't really want just some dumpy server that is manually invoked.
B
I was talking about the eventing workflow earlier, where we had some event source. An event source is connected to a channel, and the channel at the end of the day is again an HTTP endpoint. So, in order to get a full pipeline, I need to provision, or apply, a channel here. The YAML is also very trivial.
B
Let me show you the demo. So the channel has a name, it's called test-channel, and it has a required reference to the actual type of the channel, the cluster channel provisioner. In this case the channel is an in-memory channel; in the upstream community there are also different channels, for instance for Apache Kafka. That means, basically, for the channel, which is again an HTTP server,
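The channel manifest described here can be sketched roughly like this, assuming the `v1alpha1` Eventing API with its ClusterChannelProvisioner reference (the `test-channel` name is from the demo):

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: test-channel
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    # The default in-memory implementation; a Kafka-backed
    # provisioner from the upstream community could be named instead.
    name: in-memory-channel
```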
B
everything that it's reading in is stored in an Apache Kafka topic. And whenever you later subscribe your application, your service, in our case this dumpy guy, to the particular test-channel, then the Knative Eventing facility has a certain dispatcher running there, making sure it reads, using the native Kafka protocol, all of the messages from the particular Kafka topic, and it is then again doing the transformation to CloudEvents and sending them over HTTP to my dumpy service. So I have the channel running now.
B
The event source is also not too complicated. I said earlier that there are different types of pre-existing controllers: for instance, there's a GitHub source, a Knative implementation, and some more convenient ones, and there is a ContainerSource, which basically allows you, in the specification, to provide some image. In our case we have a WebSocket event source which reads from a public URL on the Internet. That's something at the University of Newcastle: they have some meter and sensor data.
B
They have some devices deployed across the entire university, and these nice guys are kind enough to provide all of the data as a publicly available WebSocket stream. So what my event source here is basically doing: if you would take a look at the source code, you would see there's a Golang application running that opens a WebSocket connection, and every message that it's getting, it translates into a CloudEvent and sends over HTTP to a channel. And in this case we reference the sink to a particular channel,
B
the test-channel that I provisioned before. Now, because this endpoint is not cluster-internal, and to provide Istio the routing rules, I need to tell Istio that I'm going outside here; I'm telling it the port is 443 and I tell it the protocol is HTTPS. Sure, I said WebSocket before, but a WebSocket is established from HTTP. So it's a secure WebSocket connection that's established from an HTTP upgrade (101 is the response code), and then you're basically on WebSocket. So that's what you need to do there.
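The ContainerSource described here can be sketched roughly as below, again assuming the `v1alpha1` sources API; the image name and stream URL are illustrative stand-ins, while `test-channel` is the channel from the demo:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: ContainerSource
metadata:
  name: wss-event-source       # illustrative name
spec:
  # Illustrative: a Golang image that opens the WebSocket connection,
  # turns each message into a CloudEvent, and POSTs it to the sink
  image: docker.io/example/wss-event-source:latest
  args:
    - --source=wss://stream.example.ac.uk/sensors   # illustrative stream URL
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: test-channel
```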
B
What you will see is that there is another pod coming up here, which represents the container that's referenced in there, which basically does the connection to the WebSocket URL and the transformation, and sends the events to a sink. And the sink in this case is an HTTP channel which is not backed by any powerful messaging system; it's one of the default implementations, the in-memory channel. Last but not least, what's missing is a subscription, in order to get some data into my dumpy service here.
B
So let's see the logs of that particular container. There are three containers running in this pod, and one of them is the user container, which is actually my code; the other ones are injected by the framework. And what we see: there's no fancy CloudEvent here, it's just vanilla output, just plain HTTP. And what we actually see here: I was talking for more than five minutes, so this guy is now idle and terminating. It was not receiving any traffic, and now it is gone.
B
Okay, cool, that's exactly what I wanted. So let's take a look at this subscription. While I'm talking, the event source is reading the WebSocket and feeding the channel; however, I have no connection yet between the channel and my application. That's what I do with a subscription. The subscription basically has two interesting points. One is the channel, so I say I'm interested in subscribing my application, called dumpy, to the test-channel here. Let's do that.
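The subscription being applied here wires the demo's `dumpy` service to `test-channel`; a sketch, again assuming the `v1alpha1` APIs of the time:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: dumpy-sub              # illustrative name
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: test-channel         # the channel fed by the WebSocket event source
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: dumpy              # the Knative Serving service from the demo
```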
B
Now the subscription is created, and then some Knative controller goes in and establishes the relationship between the particular service and the particular channel. I was talking a lot, and while I was talking, the source was writing to the channel, but there was no one receiving the data, so a lot of messages have been buffered up there. That means the autoscaler is now kicking in, because it doesn't want your application to go dark, so it's bringing a lot of these pods up.
B
Okay, here we can now see that I actually have different output. What you see here is some metadata for the CloudEvents as headers: I have the spec version 0.1, I have some event ID, a timestamp for it, I have a WebSocket event as its type, and I have the source which is triggering it. And now, what you see here: Knative did realize that there is not that much traffic, so I don't really need five or six different deployments.
B
I'm just fine with one particular instance; I'm able to manage that. If, for whatever reason, this WebSocket stream again goes crazy and is hammering and pumping too much data into my channel through the event source, then the autoscaler comes in again and you will get many more pods. Okay, how does this look visually? What we see here is that I have a source, which is this guy, so that's my event source, right.
B
Okay. Now, if I delete the subscription again and I would talk here for five more minutes, we would see that this would also disappear, because when the subscription is removed, the messages from the event source are still delivered to the channel, but there is no subscription anymore. The data stays in the channel until you basically re-subscribe to the channel, and then it is again routed to the service. So yeah, and this demo, for simplicity reasons, is using the in-memory channel, the default one that's deployed there.