Description
In this rapidly changing world we need a way to experiment with AI/ML concepts and systems quickly, efficiently and powerfully. The container revolution has brought a new facet into the equation: the ability to rapidly build, test and iterate on AI/ML applications extremely easily while using compute resources efficiently.
In this talk, Ian Lawson, Solutions Architect at Red Hat, presents an overview and demo of the ways in which containers can easily be used to build AI-centric applications, allowing data scientists to spend more of their time experimenting and less time setting up complex systems.
A
Yeah, we're back after the summer break. So just a quick reminder: this is the OpenShift TV Coffee Break, a show we do here in EMEA, talking about the whole OpenShift, Kubernetes, cloud-native architecture space here on OpenShift TV. Today's special guest is Ian Lawson from Red Hat UK. Ian, would you like to introduce today's topic?
C
Hi guys, it's nice to be back. My name's Ian Lawson, I'm a domain solution architect at Red Hat. I focus primarily on OpenShift, but in a previous life I was involved with artificial intelligence programming: that's what I did in college, and what I did my PhD in. Today we're going to talk about some really nice, cool things you can do with the Knative serverless technology around the concepts of AI and machine learning, and why I think it's probably the best toolset you can currently use to do these kinds of things.
C
I've said Knative is the coolest technology in OpenShift by far, and I'm a bit biased because I love it. The basic thesis, the concept of Knative, is... and I get told off at this point, because Knative refers to something called serverless, and I hate the term serverless. I think it's a ridiculous term. It's someone else's server.
C
When I'm talking to customers I always refer to it as the unicorn's bottom, because, you know, it doesn't actually exist. But what Knative serverless actually does is allow you, very simply, to scale your applications down to zero. And that sounds really weird: you know, why would I want to scale my application down to zero?
C
Well, let's take an example. Say you've got an application made of five microservices. One of those microservices is called once every 200 milliseconds; the other four are called once every 24 hours. Now, if you're standing up those microservices in standard Kubernetes or standard OpenShift, they have to be resident, they have to be running, and they have to be able to receive traffic. If the traffic comes into the cluster and the pod isn't there, you get a failure.
C
What Knative serverless allows you to do is set these applications to reduce themselves to zero replicas when they're not being used. What that means, in plain English, is that they're not consuming resources. They're there, they're registered, they're waiting for traffic, or, in other cases, waiting for events to trigger them to respawn back up to one, but it makes for highly efficient systems.
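As a rough illustration of the scale-to-zero behaviour he describes, here is a minimal sketch of a Knative Service manifest; the service name, image and scale bounds are hypothetical placeholders, not taken from the talk:

    # Minimal Knative Service sketch; name and image are placeholders
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: example-service
    spec:
      template:
        metadata:
          annotations:
            # Allow the autoscaler to remove every replica when idle
            autoscaling.knative.dev/minScale: "0"
            # Cap how far the autoscaler may scale out under load
            autoscaling.knative.dev/maxScale: "10"
        spec:
          containers:
            - image: quay.io/example/microservice:latest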
C
And what we're going to talk about today is a little idea I've been working on for a couple of months in my spare time, which is the concept of what I call atomic decomposition. What that entails is breaking systems down to their smallest atomic components and then actually instantiating those individual components as Knative serverless services. And what's really cool about this: you can imagine having an application, but that application is made up of a hundred services that only appear when they're being called. So you have a much smaller footprint for your applications.
B
Yeah, yes, it does. And may we say that the Knative technology decouples, in a very effective way, the creation and development phase of an application from the way that applications use their infrastructure. So app dev people don't really need to worry about how, and how effectively, the underlying infrastructure is used, which might seem like just a 'green' statement, but in the end it is not.
C
Yeah, that's a fantastic point, and one of the things that I love about it, especially from an OpenShift perspective, is that to make something Knative serverless you simply tick a box on the user interface. There's no additional code, there are no changes, none of those kinds of things. In fact, we've got this fantastic functionality, and I'll show you an example of this later, called Knative Functions, and I'm a big fan of it. I'm glad everyone...
C
...has jumped on board with this now. Knative Functions are basically an extension of our source-to-image build approach. So what you can do is rock up to your laptop, produce some source in a Git repo, then point Knative Functions at it, and it'll build the image, set up the image, and wire the image into the actual cluster.
C
So all the things you'd have to do to make it a Knative service are handled under the covers by the Knative Functions command, and all you do is rock up, choose which builder image you want to use, and provide your source. And that's a really nice way for developers to be able to get on board with the Knative stuff, because, as you say, it doesn't actually require any knowledge from a development perspective.
C
So, in my example here, what I'm doing is using Red Hat Data Grid, Infinispan, which is an in-memory data cache. What happens is: the application pops up, it gives its ID to the actual data grid, it pulls its data state back, it works on the data state, and when it's finished it pushes the data state back into the data grid. Then it goes away. That's the beauty of it, and I tend to get over-excited about it, because I wanted to write this kind of system 20 years ago.
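A minimal sketch of that pull-work-push pattern against Data Grid, assuming the Infinispan Hot Rod Java client; the host, cache name and key are hypothetical:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class NeuronState {
        public static void main(String[] args) {
            // Connect to the Data Grid service (host/port are placeholders)
            ConfigurationBuilder cfg = new ConfigurationBuilder();
            cfg.addServer().host("datagrid.example.svc").port(11222);
            try (RemoteCacheManager manager = new RemoteCacheManager(cfg.build())) {
                RemoteCache<String, String> cache = manager.getCache("neurons");
                String id = "neuron-42";          // the ID the app hands to the grid
                String state = cache.get(id);     // pull the data state back
                state = process(state);           // work on the data state
                cache.put(id, state);             // push it back before going away
            }
        }
        private static String process(String state) { return state; } // stand-in for real work
    }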
B
We already have a question from one of our attendees. So, first of all, thanks for the question, and he or she anticipated my own question, which is: can you please give us a real-life customer use case for Knative? How are customers using it in their environments?
A
Great question, yeah. A live use case, yeah.
C
Well, here's a good one. Say you've got something like a department that deals with license plates. In the UK we've got the DVLA, who deal with license plates, and every year they release a new tranche of license plates. So during the year their systems are lightly loaded, and then, when it gets to a certain point, midnight on a certain day, it goes bananas: a thousand, ten thousand, a hundred thousand times more processing required.
C
If you were doing it in Knative, what you could do is stand up the services that actually process the license plates, process the postcodes and all those kinds of things, and have them offline until they're needed. So that's one kind of use case. And a more appropriate use case, from a kind of green-agenda perspective, is that you just need smaller clusters. You know, if you want to stand up a system that's 10 microservices or 10 web services in a static Kubernetes or OpenShift system, they have to be resident the entire time.
C
So they're sat there on the cluster, ticking away, consuming resources, you know, pinging their health checks and all those kinds of things, and they're not actually doing anything of consequence. All they're doing is burning electricity. In a Knative approach, you've got a situation where you can make these things go away. It's all about efficiency, to be honest. Is that a reasonable use case?
B
Definitely, yes, it is from my point of view. Or generic image-recognition services, where you just need that service when you have to recognize some images; think CCTV cameras, stuff like that. Instead of instantiating 100 instances of the specific services, you let the underlying Knative intelligence instantiate just the right number of instances whenever they are needed.
A
That's on demand, taking advantage of the scale-to-zero approach, right, yeah. And I also have two questions. One is conceptual, let's say: those atomically decomposed units, in your opinion, are they functions or microservices? When you're talking about atomic decomposition, what is the atomic object? It's a container, right, but is it a function or is it a microservice?
C
It's way too early in the morning for that kind of question! From the perspective of the technology, it could be anything. So you can have things like, for example, a function using Quarkus, or you can have a Spring Boot app; you could have a massive, full-fat Spring Boot app doing this kind of thing.
C
When I talk about atomic decomposition, it's kind of where I see the next generation of development going, and I would want to go right down to the function level, the smallest possible atomic component. Because if you can break those into the smallest components you can, it makes your applications very agile and very easy to change. You know, you can change a very small component of it with no impact upon the rest of the actual system whatsoever.
C
As long as you're adhering to a standard interface. And that's one of the nice things about the Knative service, and I'll get onto this when I talk about the eventing side: we have this concept of what we call CloudEvents. And CloudEvents are basically very, very simple, very beautiful little components for eventing. Basically, all they are is a type and a payload, so you don't have the overhead of having to learn a format and all those kinds of things.
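For reference, this is the shape of the JSON envelope the CloudEvents specification defines; the type here matches the one used later in the demo, while the source, id and data values are placeholders:

    {
      "specversion": "1.0",
      "type": "tech-talk-event",
      "source": "cloud-emitter",
      "id": "a1b2-c3d4",
      "datacontenttype": "application/json",
      "data": { "message": "hello, neurons" }
    }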
C
We have a situation with Knative Eventing which allows you to trigger the creation, or the re-creation, of these applications based on the arrival of one of those events. So, for me, that answers your question very quickly: if I was doing it and I had a full green field, I would get everything down to the smallest possible atomic component. And that's the point of this idea that I've got. Actually, I'll give you a quick overview of it. I'm obsessed with neural nets; I've always loved the concept of a neural net.
C
Now, what I'm thinking is: well, why don't we model that? We could use Knative serverless. We can build these neurons, we can use CloudEvents to actually provide the inputs, and then write the functionality within the neurons themselves to aggregate the information you're getting from the cloud event, and then, and this is where it gets beautiful, emit further cloud events from your neurons based upon the input you've got. And you can build these massive, complicated systems from very, very small atomic components.
C
It's a brand new way of thinking. It's a completely different way of thinking in terms of coding, because you're talking about aggregation of outputs like that. But I love the idea. It's one of those ideas that sticks in your head and you think: this could be good, but how the hell do I do it? And it's only since I've been looking at Knative serverless and CloudEvents that I've just gone: this is perfect for it. Did I actually answer the question there? Yeah.
A
That was a great answer, and it also introduces the title of this episode: next-generation AI/ML. And we're talking about serverless, so I think we've given the context of why serverless and AI/ML, because we're going into this model, as you say, like a neuronal architecture based on neural nets.
A
You know, this kind of architecture. But I was wondering whether you were thinking about using existing patterns for those architectures, like, you know, common AI/ML patterns, or is it something from scratch?
C
This is actually something from scratch, and the reason it's from scratch is that I've done a lot of work with customers; I did a lot of work in the past with AI and machine learning projects. And the thing with AI and ML is that you end up doing the same thing over and over again at scale. That's it, yeah: with MapReduce and stuff like that, you get these massive data lakes, Hadoop data lakes, and you're re-running experiments over and over and over again, and the experiments themselves are often quite simplistic.
C
Actually, I think we've all got a little bit tied up in recent years with becoming complicated. You know, all of our frameworks are complex, all of our tools are complex, and when we write code we spend 90 per cent of our time, you know, tinkering with the config on the frameworks we use and 10 per cent of our time writing the code. And that, to me, is the wrong balance. You know, we shouldn't have to be experts on Maven POM files. I hate Maven POM files; I've spent too much time on them.
C
I find Maven tries to be too helpful and isn't. But my point is really that we spend too much time getting tied up with the complexity of the frameworks. We should be focusing on the pure code, on writing the functionality itself. If we go down this approach of actually producing atomic components, then using Knative serverless as the framework here, number one, we don't need to write any code from a Knative service perspective. We can just write our tiny little atomic components that do what they need to do in the best way they can.
C
The only pattern I've been using, and the pattern I've been using is because of the stuff I talked about with PVs: you can't currently have persistent volumes attached to the pods. So if you want to do things that persist past the duration of the call of the function, you have to have external persistence. So what I've been doing is, I've come up with this idea where the actual neurons themselves are completely stateless, so you provide everything to the neuron. And what I'd like to do going forward...
C
...and this is where it gets a little bit cooler: I'd like to provide the code for the neuron as injection as well. So you write a neuron; the neuron is just basically a code executor and a persistent volume, or, let's say, a data grid communicator, and you provide the algorithms into the neuron along with the ID for the neuron itself. The neuron goes to the actual data grid, pulls its information, and applies the algorithms.
C
Now, I haven't worked out how we'd ever do that, because I don't want to go down the route of writing my own programming language. But there are some very cool things coming. Again, going back onto the OpenShift side, there are some very cool things coming with Builds v2, which people are probably aware of: we're upgrading our build processes within OpenShift. We have this fantastic thing called S2I, and I'll...
C
...tell you a naughty little story about that, because it used to be called STI back in the day. So I used to go into customers and talk about STI and source-to-image and all these kinds of things, and most of the customers would laugh at me, and I couldn't work out why they were laughing. Of course, STI stands for sexually transmitted infection, but there I was, going in and talking about OpenShift, saying: well, you want to bring in this sexually transmitted infection and use this sexually transmitted...
C
I got quite cross about it, because I got sick and tired of being laughed at by military people when I was doing presentations. But what we've got coming is something called Builds v2, which is based on something called Shipwright. And what Shipwright does is it actually uses things called buildpacks, which have all the framework images you need to build the appropriate source code, and you just provide the source code. Now, the difference is under the covers.
C
These buildpacks get executed within your namespace, as opposed to being executed at the root level, which we don't like within OpenShift itself. But it means that you can provide your own buildpacks and stuff like that. So I've been wondering, going forward, and I've been talking to the Knative people about this, whether we can have a situation where you run a Knative service that actually executes a buildpack to build your source material there and then, and then executes it, and it rebuilds it on a code change.
C
So you don't build it every single time. It'll be sort of, you know: you point at a Git repository, you point at a builder image, the builder image is used, it produces the image, and the serverless app executes. It goes away, comes back, pop, there you go: it fires up a pre-existing image.
A
It makes total sense, yeah. And I just wanted to ask: you described this automated mechanism for building the functions, so Shipwright, Builds v2, and then there's Knative Functions. Do you think this is coming closer and closer to AWS Lambda or those public cloud, you know, function-as-a-service offerings, or is there still something missing?
C
What I'd like to see us get to is similar to the stuff we've got with Tekton Hub and OperatorHub: that we have a situation where, if we go down this route of producing plenty of serverless, let's say neurons, or neuronic programming, or something like that, we have a hub where people can actually push their images, or their source code, for building these individual, really small components.
C
I need a face recognizer, I need a textual analyzer, and you can just pick the actual components you like, pick the source material and push it into a buildpack as part of your deployment mechanism for the Knative serverless, and then suddenly you're almost at the Lambda point. But instead of... because in Lambda, what they do is they have existing systems up and running with these functions ready to call. You just subscribe, and you call Lambda; it goes into Lambda, does it, and back comes the response.
C
Their approach is not efficient. It's not green, because, even though it's 'serverless', they still have to keep those functions running 24/7 for you to be able to connect to them. Knative serverless is actually better, because those things are available to you, but they're offlined when you're not using them, so they're not consuming system resources.
C
And I'll explain why there's so much going on here. One of the nice features of OpenShift is that when you install an operator, instead of installing the operator across the cluster, because you can do that, which means everyone on the cluster then has access to it if they've got the appropriate RBAC, what you can do is run operators locally within the namespace, so only you have access to those extended objects.
C
So, when you're looking at this application here: this application is actually quite simple, but I'm running, for example, the Camel K operator, because I'm using some Camel K, which is a fantastic integration technology. I'll show you the source code for that; it's tiny and it's really, really sweet. I'm also running what's called the Infinispan operator, and that gives me Data Grid, so I'm running, basically, a neural net as an instance of Data Grid.
C
So it's a named instance of Data Grid, and I'm using it, and I'll explain how I do it in a minute, to get around that problem I had with the persistent volumes. But I want to show you something really sweet first of all, and that's this thing here. So what I've got here is what's called a broker. Now, a broker sits within a namespace and it acts as a hub for the cloud events.
C
What's beautiful about it is this: number one, these are namespace-bound, so I've got one called default and I can refer to it by the namespace. I can refer to it as, for example, serverlessdemo.default when I actually refer to the broker. What you do is you set up what are called triggers, so a broker has a number of triggers, and those triggers are looking for events of a certain type. Now, in this case, I've got a trigger here, if we click on it.
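A rough sketch of what such a broker and trigger look like as Knative Eventing YAML; the resource names mirror the demo, but the exact manifests weren't shown on screen, so treat the details as assumptions:

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      name: default
      namespace: serverlessdemo
    ---
    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: tech-talk-trigger        # hypothetical trigger name
      namespace: serverlessdemo
    spec:
      broker: default
      filter:
        attributes:
          type: tech-talk-event      # only forward events of this type
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-reader         # the Knative Service that receives them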
C
This one is actually looking for a type of tech-talk-event, with a subscriber of event-reader: there's a Knative service called event-reader that's waiting for an event called tech-talk-event to arrive. I've got another trigger here, which is waiting for a type of quarkus-event, and that's linked into a Knative service called quarkus-function. Now, these services are just the endpoints, and the eventing does the actual action of spinning up the application if it doesn't exist, or ingressing the traffic if it already exists. And what I've got is a Quarkus function.
C
So, Quarkus is our brand new spin on Java that finally makes Java cool. No, don't say that: Java was always cool. It's just that I still think Java's new; when I'm talking to people who use Node and Go and all those kinds of things, they make me feel 50 years old, which I am. But Quarkus actually makes Java efficient, it makes Java extremely fast, so it's very well suited to being used with Knative serverless.
C
So I've got a Quarkus function that's waiting for a quarkus-event, and what it does is quite sweet: when it receives that event, it logs the information it gets, but it also emits an event of type tech-talk-event. So when it receives that event, it emits an event of its own back into the broker. The broker then looks at the triggers and will kick off this Camel K component, which then processes that event.
C
It's a very pithy example, this, and it's kind of the basics, but what it does is show you that you can actually link together these individual small components via an eventing model that controls the Knative serverless, and that's very, very sweet. Now, if I move it around slightly, I've got something very cool.
B
That's a great example of how Knative fits perfectly with event-driven application architectures. So, thanks for introducing it.
C
It's a pithy one, but it kind of sums up how cool it is. And what I've got here is, basically, a cloud emitter app I wrote. So what this is, is... I'll go to the URL. I have to apologize: I haven't written a style sheet in 25 years, so this looks a little bit old-school, because I don't like CSS. What this actually gives me...
C
...is the ability to actually emit an event, because emitting an event is basically an injection of an event into the broker, and under the covers it's actually a POST. It's just posting an event. So what I'm going to do here: that's the target broker, so I'm looking at the default broker in the serverlessdemo namespace, and I'm going to send a quarkus-event.
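Hedging on the exact endpoint, the POST he says is happening under the covers looks roughly like this in binary CloudEvents mode; the broker-ingress address is the usual Knative Eventing in-cluster URL pattern, and the header values are placeholders:

    curl -X POST \
      http://broker-ingress.knative-eventing.svc.cluster.local/serverlessdemo/default \
      -H "Ce-Specversion: 1.0" \
      -H "Ce-Type: quarkus-event" \
      -H "Ce-Source: cloud-emitter" \
      -H "Ce-Id: 1234" \
      -H "Content-Type: application/json" \
      -d '{"message": "hello"}'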
C
When I emit the cloud event, what it will do is actually throw it into the broker, and everything's abstracted, so it's just thrown into the broker itself. So when I emit it, I'm going to pop straight back to the page, and you'll see... if we go back to the page first: currently we've got a situation where the Quarkus app is scaled down to zero.
C
So there are no instances running, and if I look at the metrics you'll see there's no resource consumption going on. And this one over here, the actual Camel K one, is running at zero as well. Very quickly, before I kick it off, I want to show you this, because this is very sweet. So, is that big enough? Can people read that?
C
Although I do like dark mode. I do like dark mode, but there you go. That's actually a 4.11 feature; you can turn it off very easily, it's just that I haven't found out how to do it yet. So, I've actually got a piece of Java here that I'm using for my Camel K, and I'll show you how simple it is. I'll highlight it so you can see it, and it's a very pithy one, because all it's doing...
C
Is
it's
logging,
the
actual
body
it's
getting
from
it,
but
it's
using
a
from
statement,
k
native
colon
event,
tech
talk
event
and
just
by
using
that
that
will
actually
create
a
trigger.
When
I
install
this
as
a
camel
k,
camo
k
is
basically
a
k
native
version
of
camel
and
you
can
see
how
simple
it
is
to
actually
link
this
this
this
piece
of
processing
into
the
triggers
themselves-
and
I
was
very
impressed
by
how
simple
it
was
just
one
line
of
code
to
actually
link
it
in.
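Reconstructing the route he highlights: a minimal sketch, assuming the Camel K Knative component's event endpoint; the class name is hypothetical, and the log expression is a guess at what was on screen:

    // A minimal Camel K integration: consume CloudEvents of type tech-talk-event
    // from the namespace broker and log the body.
    import org.apache.camel.builder.RouteBuilder;

    public class TechTalkRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // The 'knative:event/<type>' endpoint makes Camel K create the trigger for us
            from("knative:event/tech-talk-event")
                .log("Received: ${body}");
        }
    }

Running it with the kamel CLI (kamel run TechTalkRoute.java) is what wires the trigger into the broker.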
C
But if we pop back to the thing over here: what I'm going to do is throw an event and jump back very quickly, because things happen quickly. So I'll emit it, I'll pop back, and you'll see the Quarkus function actually spins up immediately. So what's happening now is it's spinning up from zero because it's received traffic; the Knative serverless functionality is now spinning up, and it's executing. And you can see, while I'm talking, because I'm not talking quickly enough, that the Camel K one has already spun up. So, what this has done...
C
...if we pop back to the topology and then pop into the Knative Camel K one, you'll see that it received a message that was created from the Quarkus function. Now, if we pop back, and I'll talk while it's happening: how it works is that when these things are spun up, a timeout starts, and if no traffic ingresses into this point or that point during the timeout, and they're on a function-by-function basis, these functions will auto-scale themselves down to zero.
C
Now, that's the beauty of it. You know, they sit and they wait. They have a timeout period during which, if additional traffic comes in, you don't have the overhead of re-spinning it back up and doing all those kinds of things, but after that they go away, and the resources go back into the pool; they're not being consumed. That one's now going; you see this one's now scaling down, this one's going down, so they're actually terminating down to zero. Again, what's lovely about this is you can also scale the other way.
C
So when you set these Knative serverless services up, you can actually specify the sort of network traffic that will cause them to upscale as well. And we did this as part of the Summit demo, because I wrote the back end of the Summit demo to do with the Battleships game, if anyone saw that, and it was all based around Knative Functions, and we were pushing like a million events in to see what would happen. Of course, the clusters were just stopping.
C
There is a link that I passed over to the guys. Basically, I go through the whole study of what this is about, my first neuron and the atomic AI principles, and that's talking basically about what I'm doing above and beyond the standard things with this approach.
C
So that was a very, very quick demo. Any questions so far, guys? I'm aware I'm just slamming information left, right and center.
A
It looks very, very cool. I like the way that everything is auto-scaled through an event, and, to be honest, I like the OpenShift topology view, because you can have a visualization of every component; in this case those are your decomposed atomic units, if you will. I have a question: what is this 'Dev Epiphany'? Is it your blog?
C
It is. I produced a blog, basically full of stuff. Why? Because I don't understand things! So when I don't understand things, I simplify them down to the point at which I can understand them, and then what I do is write a blog on it. So there are blogs on there around Knative serverless, Argo CD, OpenShift Virtualization, the new KCP stuff. Every time I come across something that interests me but that I don't really understand, I will dumb it down so I understand it, right?
C
I've been working on using Quarkus to write operators in Java, so that's something else I've been playing with.
C
I want to show you something else very quickly as well that lends itself to making the Knative service much, much easier, and that is: quite recently I've been talking to the business unit, the guys that write this, about this.
C
I wanted to be able to deploy Knative serverless apps using Argo CD, because at the moment I'm either using Knative Functions, or I'm hand-crafting it, or I'm using the user interface. I'll show you very, very quickly, before I show the Argo CD stuff: if I wanted to add another Knative service, and I'm going to build one from scratch just for the hell of it, we go to Node, go to the builder image, and what the builder image does is allow you to give it a Git repo.
C
However, because I've installed Knative serverless, which installs as an operator and comes as part of OpenShift, I can now do this. So if I create, and I'm going to go for Node 16 on RHEL UBI, I'm going to give it a standard... I've got an application called ocp-node, which is a terrible application that goes off to NASA and gets the image of the day, which is quite pretty. But if I scan down, you'll see I've got the component to be able to make it a serverless deployment.
C
Now, just by ticking a box, I can make it serverless, and what this will do is the source-to-image build and all those kinds of things, but instead of instantiating it using a Deployment or a DeploymentConfig, it will create a Knative Service, which puts all that stuff in place to allow you to, you know, create it on demand.
C
I'll just put it in a different app group. I'll put it in the .NET app group, just to really annoy people who've got OCD, and I'll create it. And what you'll notice when I start to create it... I'll just resize this box over here, bring it over here.
C
So, what we've got here, and I'll bring it back in so you can see it, is this: it says 'no revisions', and this is very important. What it's physically doing at the moment is building the image: it's going to get the framework image, build the source, produce the composite image, and put it into the integrated registry. When it's finished building it, it will deploy it, but it will create what's called a revision.
C
A revision is immutable, so once I've actually created it, I can't change anything that makes up that deployment. I can't change the image, I can't change the environment variables, I can't change the Knative Service settings, and that's by design. And what happens is, let's say I want to change an environment variable, or I want to change the number of replicas, or make it scale up, things like that, you know, change the Knative service itself: when...
C
...you do that, it will create another revision, but it will retain the original configuration in the previous revision. And then what you can do, which is really nice, and you'll see there's a little thing there saying 100, is you can define how much of the traffic that spins up that application is sent to which revision. So if you've got, for example, a new version of your tiny little atomic service and you want to test it, you can actually create a revision in your Knative service and push 10 per cent of the traffic in there.
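A sketch of that traffic split on the Knative Service spec; the revision names are hypothetical placeholders:

    # Route 90% of traffic to the known-good revision and 10% to the canary
    spec:
      traffic:
        - revisionName: ocp-node-00001   # previous revision
          percent: 90
        - revisionName: ocp-node-00002   # new revision under test
          percent: 10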
C
That revision concept allows you to roll back and roll forward across the revisions you're actually providing to the outside world. So if you produce a little application, a little microservice, and you find that it's broken with the next release, or it doesn't do what it's normally meant to do, rather than having to wind it off, recreate it and do all those kinds of things, you can just push the traffic back to the previous revision, and that's a nice little feature.
C
The only downside of it, and I've talked to the business unit about this, is that it retains all the revisions, so you have to go and remove the revisions manually. Now, the reason I found that out was that I did about 300 builds on one of the Knative services for the forum, and I had 300 revisions. So when I did an 'oc get' of the service in my project, I had 300 Knative service revisions.
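That manual clean-up can be done with the kn CLI he mentions shortly; a rough sketch, with a placeholder revision name:

    # List the revisions that have piled up behind a service
    kn revision list
    # Delete an old revision by name (the name here is a placeholder)
    kn revision delete ocp-node-00007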
C
So, at the moment, I don't believe there's the ability to limit the number of revisions you can have. But, to be honest, you know, most people are going to create one revision or two revisions, and things like that. There are additional features in terms of canary deployments that are really, really sweet about this. But again, this is now down to zero, because it's actually loaded; it's finished.
C
So a little tip when you're doing this: set the minimum number of replicas in the initial spin-up to the number of worker nodes, to make sure you get the image on every single worker. While I've been talking, that spun up. If you go and look at it, and I did say it's a pretty appalling page, but if you go to 'json test', it goes off and gets the latest image of the day from NASA.
C
I'd like us to start working in the space industry; it's one of my sort of favorite things to play with. And if we pop back, you'll see that after a certain time it'll scale down, and you can set that initial default scale-up and scale-down time as well. But, interestingly, and this is another one of those niggly little things, the ability to set the initial timeout isn't part of the initial creation process.
C
This is all available through a fantastic command line called kn, which is the Knative CLI. It allows you to interact directly with the Knative service components if you've previously logged on, so it uses the OpenShift or Kubernetes configuration to log on, and that gives you the ability to change the revisions, to change the services, to do all those things at the Knative service level. You never really need to go into that...
C
...to be honest. You can use the interface, you can just tick that tick box, or you can actually do the Knative Functions thing, which does it all for you. Does that make sense?
A
It does. There's basically a component that adds to the web console, which is extendable, so you can have a web terminal, and this web terminal will contain kn as well. So I think it's very easy to try out your demo and Knative just by, you know, downloading the kn CLI or using the Web Terminal operator. Just a note for people who want to try it out.
C
What I've been playing with is the ability to actually set up Knative serverless through Argo CD. And I agree with Sebastian: I absolutely love the web terminal, and I'm ashamed to admit that I haven't installed it on this cluster yet. So that's the next thing I need to do when I finish this, because I absolutely love that web terminal.
C
I actually used to produce a container that did it for demos, and it was a right nightmare to write, so having it integrated fully is great. And the beautiful thing about the web terminal is that the web terminal image contains all the latest instances of the command-line functions, so you don't have to worry about being version-confused. Anyway...
C
...what I've set up, as a test, is the ability to actually deploy a serverless app via Argo CD. And all I did with that, and this is where it gets really sweet: when you define a Knative serverless app, you just define the Knative Service itself, and everything else is done for you. So the Knative Service, for example, tells it which image to use, how to scale it up, all those kinds of things you need to define the behavior of the Knative serverless.
C
So, what I've got, and I'll show you it: if I go to my GitHub repo, if I go to the GitOps demo, I've got a subdirectory in here which I use for demos, because I'm always recreating clusters, so I need to be doing this on a daily basis. And in here you'll see I've got a kustomization file, and a kustomization file is like a shopping list: it tells Argo CD what components have to exist at the target point to implement this application.
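The 'shopping list' he describes is a standard kustomization.yaml; a minimal sketch, with the file names assumed from the three resources he lists next:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - broker.yaml     # the Knative Eventing broker
      - service.yaml    # the Knative Service itself
      - trigger.yaml    # the trigger wiring events into the service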
C
In this case it's a broker, the Knative Service itself, and the trigger that causes the traffic to actually get into the Knative Service. If we go back to serverless: the broker definitions are beautifully simple, and a very quick point on this. Those brokers, the things that actually send the cloud events, are ephemeral. What that means is that when an event arrives at the broker, it will scan its list of triggers, it will push the event down the trigger to cause the Knative service to start, and then it discards the actual event.
C
So they are completely ephemeral. But what you can do is back them with what's called a sink. One of a couple of really fantastic examples I've seen is a broker being backed by a Kafka topic. So you've got a Kafka topic which is actually driving the cloud events into the broker, and the beautiful thing about that with Kafka is, of course, that you can replay it. You can go back, you can wind that Kafka topic back, and then replay those events through.
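For reference, a Kafka-backed broker in Knative Eventing is typically declared by setting the broker class and pointing at a ConfigMap; this hedged sketch follows the Knative Kafka broker documentation as I recall it, so verify the details before use:

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      name: default
      annotations:
        # Back this broker with Kafka instead of the default channel implementation
        eventing.knative.dev/broker.class: Kafka
    spec:
      config:
        apiVersion: v1
        kind: ConfigMap
        name: kafka-broker-config    # holds bootstrap servers and replication settings
        namespace: knative-eventing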
C
If we go and look at the actual service, the service is slightly more complicated, because what I've got here is basically the Knative Functions approach. So what I'm doing is, and we'll scan down here: there's an image I pre-built, it's a tech-talk function on quay.io; I've got a couple of environment variables I'm pushing in, and some labels which actually set it up as a Knative service, and all I do is push that in. This container concurrency here indicates the scale-down value, so that's going down to zero.
C
So, basically, what the trigger does is link that service to that event type. Now, very quickly on the cloud events, because I've written a blog on this and it's really interesting: the cloud event has, basically, a type and a source. The source is basically a label for where that thing is actually being driven from, and the type is the actual physical type that's used to create the trigger. It also has a unique ID, and what that does in the system itself is...
C
...so, just be aware of that when you're playing with this: you have to have a source, which actually identifies where the event is coming from, and an ID, which is unique. But I put these three pieces of YAML, plus the kustomization, into a GitHub repo. I've then created an application within Argo CD, and it will automatically deploy all the components I need to stand up that serverless application.
C
So let's go into Argo CD. I'm going to put it in my new project, but, just to show there's nothing up my sleeves: if I go into here, go into Developer, go to Topology: 'no resources found'. There's nothing running in there whatsoever. Go to Argo CD, set the Kustomize defaults; it's just going to pull it, and it's going to recursively use the kustomization file to push those things onto the cluster and create them.
C
It's started and spun up my Knative service, which instantly runs a single copy, because it always creates a copy initially to make sure everything's working fine before it offlines it. So that's now running in there, sat there; after the timeout expires, this will spin down to zero. But you can see how simple it was.
B
We've gone off on a tangent, yeah. This approach really lowers the barrier for developers when they approach the Kubernetes world, which, as we know, can sometimes be challenging. And just one question, because I would like to hear more from you about these great things, but time is running out and the end is approaching: you use the terms 'function' and 'Knative function' a lot, and for the ones that are not so used to managing or dealing with the Knative world...
C
What we've done is roll the creation of the image, the deployment of the image, and the creation of the Knative service components into a single command line. And what's nice about it... I'll show you an example of this very, very quickly.
C
So, if I can find the func.yaml... what it is, is this: when you run the kn func command, it uses this file to describe exactly how it wants the Knative service to be deployed as a function.
C
And if we look at it, you'll see that it has a number of different things that are really, really cool and make it really very simple. It basically defines the runtime; it defines the image I'm going to push to, because this actually compiles and builds the image itself; and it defines the actual type of Knative service. So, am I going to be driven by what's called serving, where traffic ingresses in through a URL, or am I going to use the event model, where it's driven by a cloud event?
C
It doesn't have a description of the source, because you execute it in the directory where the source is: you're in the actual directory itself with the func.yaml and a pom.xml, if you're doing the Quarkus one. When you execute the kn func command ('kn func' it's called), it will actually read this configuration file, compile the code within the directory, build the image, deploy the image, and set up the Knative service.
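A hedged sketch of what such a func.yaml might contain; the field names follow the Knative Functions project as I recall them, and every value here is a placeholder, so treat this as illustrative rather than authoritative:

    # Illustrative func.yaml; values are placeholders
    specVersion: 0.35.0
    name: tech-talk-function
    runtime: quarkus            # the language runtime to build with
    registry: quay.io/example   # where 'kn func build' pushes the image
    image: quay.io/example/tech-talk-function:latest
    invoke: cloudevent          # event-driven rather than plain HTTP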
A
Yeah, that's pretty cool. And do you know how... because I know this comes from the Boson project, where basically we merged the Boson project into Knative: do you know how we could do what you said, like making the build of the function automated? Because at the moment it's a Docker build done locally. Do you know if, on OpenShift, we can get the function built with Builds v2, and do you know how that's going to work?
C
Yeah. When we actually produce Builds v2 and we actually integrate Shipwright into OpenShift, then that's what will happen, because what happens at the moment when you run kn func is that it builds it locally. It's quite difficult for me because, and I hate to admit this, I have a Mac. Even though I work for Red Hat, I have a Mac, so I have to run it in a Fedora virtual machine or a RHEL virtual machine to run Podman to actually build it.
C
What we're going to have with Builds v2 and Shipwright is the ability to build locally on the cluster. And actually, this is where it gets even nicer:
C
we're going to be able to build it into a Tekton pipeline, because the Tekton pipeline has a cluster task for buildah, which executes a build within the cluster itself, based on the buildpacks. So there's going to be a lot of stuff coming, and you could do it today by actually setting it up and running it on the cluster, using a buildah task within the actual Tekton pipeline.
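As a rough illustration, here is a pipeline task step referencing the buildah ClusterTask that ships with OpenShift Pipelines; the task and workspace names are the common defaults, and the image is a placeholder, so verify against your cluster:

    # One task within a Tekton Pipeline: build and push an image with buildah
    - name: build-image
      taskRef:
        kind: ClusterTask
        name: buildah
      params:
        - name: IMAGE
          value: quay.io/example/tech-talk-function:latest
      workspaces:
        - name: source
          workspace: shared-workspace   # the workspace holding the cloned source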
C
But when we release Builds v2, it will be integrated fully into OpenShift, so you'll be able to do your builds on the cluster itself. So all you will have to do is actually write your code locally, or even write your code in something like Dev Spaces or CodeReady Workspaces, and then submit it, and bang, you can do it on the cluster.
C
Yeah. As a final thing, because we're running out of time, I've put this link up. There's a GitHub repo called K-Neural on my Git account; it's a work in progress, but this is where I'm actually putting all the stuff to do with this atomic decomposition idea. Because we got a little bit into the philosophy of it, and I think it's massive in terms of the way that we can write applications going forward.
C
The nice thing about this repo is that it also contains a lot of YAML for building the actual Infinispan data grids. That was one of the first problems I had, actually using the Data Grid operator within OpenShift, so there are a lot of examples of how to spin up the operator and how to set up the data grid to execute appropriately within OpenShift. So there are some nice features in there as well.
B
Okay, cool. We had a really global session today, with people joining from Australia and New Zealand; I read Auckland. Great place.
A
Okay, yeah. Thank you very much. You know, I think it was a really great session, and for people who would like to watch it again, the recording is on the same YouTube link, or you can find it on Twitch or on YouTube; it's going to be permanent in our OpenShift TV Coffee Break playlist. And yeah, I like your t-shirt. I think that's the smaller one, but I have the OpenShift 4 one here.
A
First,
in
first
out
approach
well
fabio:
let's
wrap
up
everything.
First,
thanks
ian
for
this
great
session,
we
understood
more
what
it
means:
creating
an
intelligent
or
aiml,
driven,
app,
dev
next
generation,
so
serverless
or
scale
to
zero
or
and
on
demand
or
event
driven.
I
think
that
they
are
all
valid
pattern
and
we
also
discovered
that
these
your
project,
which
is
the
k
numeral,
which
is
really
interesting,
that
we
shown
in
the
chat
we
have.
We
have
some
appointment
for
the
next
session
today.
A
You have the OpenShift preview shows in the schedule in our afternoon time, and Fabio will come back next week, Wednesday, same time. Any final reminders?
B
No, just thank you again, Ian, for your availability and for a great session. It was great stuff. I learned something, and when I learn something, I mean, that's a great thing. It always is.
A
Always. And Ian, you have to come back, because I'm sure you have more and more cool live demos; we really enjoyed this one.
A
Yeah, for people who want to connect with Ian: you know the name, so you can find him on LinkedIn, you can interact and talk with him, and you'll also find him at Red Hat conferences. What's your next conference?
C
Yeah, I'll find the URL for you, but it's on meetup.org, and I tend to present at most of those.
A
Thank you very much, and see you next Wednesday. Have a good day, bye. Thanks, everyone.