
From YouTube: Knative October Meetup / Slice your business into functions and events, by Maciej Swiderski
Description
In this session, Maciej Swiderski presented an approach that lets you rely on serverless techniques such as functions and CloudEvents while still working with a higher-level representation. He introduced the Workflow as a Function Flow concept, which builds on top of state-of-the-art technologies such as CloudEvents and Knative Eventing to deliver a highly scalable, business-oriented solution that looks like a single service but runs as a set of functions.
So we can skip the first slide, since you already introduced me. Let's get started with Workflow as a Function Flow from the conceptual point of view, and maybe first of all what actually led to thinking about this kind of thing.
So I've been in the workflow area for around 15 years, a little bit more. It always started with a centralized, big setup of the orchestration or workflow solution, of process management, and so on and so forth, and now it's moving towards more and more distributed setups, where you can even have individual activities of your workflow execute in different containers. The idea with Workflow as a Function Flow was to start kind of in the middle of both.
So it's not fully centralized and not fully distributed, but something in between, to ease the adoption of workflows and to use the workflow for building your business logic. The idea is that you start developing your business logic and your workflows as completely as possible.
With that, we can actually slice the business logic, meaning the workflow definitions, into functions that are self-contained and independent, but at the same time can be invoked at any time. And because they are independent of each other, they can be scaled with pretty much no limit. Taking a quick look at a simple example, a user registration workflow: in this particular case it's a flowchart kind of workflow, based on BPMN, which stands for Business Process Model and Notation. But the overall concept of workflow as a function flow is not limited to that, and can be used, for instance, with Serverless Workflow specification-based, declarative workflows as well, because in the end it's about finding the individual pieces that can be extracted, made into functions, and made available for execution from different places based on incoming events. So how is it actually sliced into functions?
The workflow definition is based on an idea of executing activities, and an executing activity means an activity that performs some work where, at the time it's being invoked, it's unknown how long it will take, how many resources it will take, and what the potential outcome is. Will it produce only one outcome? Will there be multiple outputs of the execution? And so on and so forth.
That's why an executing activity is considered a starting point for a function, and, as you can see here, we have three functions that are derived from the workflow definition. To start with, an executing activity is, for instance, the verify user data step, because we know that we get the data from a certain source, but we don't know how long it will take to validate it.
So that's why it's considered a function and can be invoked at any time, from any place, based on an incoming event. At the same time, a switch, in this particular case the gateway, a decision point or control logic construct, doesn't really take much time to invoke, so does it make sense to actually create another function for it? We know that it's really tightly connected with the activity before it, because it will act based on the data it receives from the executing activity. And that's why you can see that there are individual parts, like the generate username and password step, which again is not predictable at the time it's executed, because it might take milliseconds, it might take seconds or even minutes.
So that's why that is considered another function, and so on. But at the same time you have full control over deciding what actually constitutes a function. So, for instance, this notify invalid data step is not really marked as a function itself; it is brought together with the previous one, and that's based on the configuration here. You can mark certain activities as a continuation of the previous one and by that combine them into one function.
Lastly, what I would like to bring to your attention as well: the ending of the workflow is a producing activity too, meaning that it will generate output based on the information it collects. So the end of one workflow could be the start of another one, or it could integrate with other systems by simply emitting events at the end of the workflow execution. So you can have a completely asynchronous execution, and you will be notified with the output on completion of the entire workflow instance.
Functions are named based on the activities and the workflow definitions themselves, so they are composed from three elements: one is the package of the workflow; then the identifier of the workflow, which might be a standalone identifier or an identifier with a version number; and lastly the activity name. All of those can either be taken as is, automatically, from the workflow definition, or they can be explicitly set by users as well. So you again have full control over what the name of the function is, what the identifier of the function is, and so on.
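For illustration, composing such a name by hand might look like this; the package, workflow id and activity values below are assumptions, not taken from the talk:

    # hypothetical values; the real ones come from your workflow definition
    PACKAGE="org.acme.users"          # package of the workflow
    WORKFLOW_ID="userregistration"    # workflow identifier, optionally versioned, e.g. userregistration.1_0
    ACTIVITY="verifyuserdata"         # activity name
    echo "${PACKAGE}.${WORKFLOW_ID}.${ACTIVITY}"
    # prints: org.acme.users.userregistration.verifyuserdata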
Looking into more detail around functions, there are actually dedicated entry points to the execution. So even though we model everything as one, and it's deployed as one image as well, all the sliced elements of the workflow that became functions have individual, dedicated entry points, and each entry point, meaning each function, has an input.
All the inputs are based on CloudEvents and, at the same time, functions produce data, so they can produce either one or more outputs. One or more means that at the time of execution, the logic of the workflow, based on the process data, can switch or fork into multiple paths, or we might be using a loop construct that will simply invoke multiple services and multiple functions.
[...] the identifiers of the functions, and lastly the manifest file that sets up everything for deployment to the Knative or Kubernetes cluster, which is created at build time and provided to you as a starting point. The manifest file especially is a starting point, because it assumes certain defaults, so it might not be ready for every deployment, and especially not for a production deployment, but at least it gives you the most important part, the triggers. So you have that information.
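To make the trigger part concrete, one generated trigger might look roughly like this sketch; the broker, service and type names here are assumptions rather than values shown in the talk:

    kubectl apply -f - <<'EOF'
    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: userregistration-verifyuserdata
    spec:
      broker: default
      filter:
        attributes:
          # composed function name: <package>.<workflow id>.<activity>
          type: org.acme.users.userregistration.verifyuserdata
      subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: user-registration
    EOF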
The inputs and outputs are based on the definition of the data of your workflow. Inside the workflow you can define certain data objects that your workflow is going to deal with, and you have the option to classify the data as well, so you can classify data as input, as output, as internal, as transient, and so on and so forth.
All the functions use the type attribute as the link to the function, with one exception that I will mention in a second, and the source attribute is the function that is actually being invoked. So if we call a function A, function A will have the source attribute on the output event set to function A, but it will produce the event with the type information for the next function to be invoked. So that's why we have the function flow driven by the workflow definition, but invoked through the broker.
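A sketch of that handoff between two functions, with illustrative attribute values:

    # "verifyuserdata" completes and emits an event to the broker carrying:
    #   ce-source: org.acme.users.userregistration.verifyuserdata             <- the function that produced it
    #   ce-type:   org.acme.users.userregistration.generateusernamepassword   <- the next function to invoke
    # the Trigger filtering on that ce-type delivers the event to the next function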
And here I mentioned that there is one exception in terms of the type attribute, and this depends on the type of environment we're deploying to. If it's a vanilla Knative environment, where we have access to both SinkBinding and the broker itself, then you can cleanly rely on that information to produce the events. But, for instance, in the Cloud Run environment, there is no direct access to the broker and there is no SinkBinding as such, so we move toward Pub/Sub.
Pub/Sub instead allows us to take advantage of the infrastructure, but at the same time it requires additional information to link to the correct function, because the type attribute of the CloudEvents delivered by Pub/Sub on Cloud Run will always be the same event type, which is "message published" or something like that as the fully qualified name. So with this we need to use the source attribute as well, and in the Cloud Run environment
the source attribute is actually set to the topic that the message came from, and these two pieces of information together allow us to link to the particular function within the function flow based on the workflow.
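For reference, an event delivered through Pub/Sub on Cloud Run carries attributes along these lines (project and topic names are placeholders):

    # every Pub/Sub-delivered event has the same type:
    #   ce-type:   google.cloud.pubsub.topic.v1.messagePublished
    # while the source identifies the topic the message came from:
    #   ce-source: //pubsub.googleapis.com/projects/<project>/topics/<topic>
    # type + source together select the function to run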
So, enough talking, let's look at how it actually runs and what we can see out of that. Let me just grab this quickly here. It's the same workflow definition, with the exact same definition of the tasks. What I would like to mention here is the actual business logic behind it. It is about user registration, a really simple user registration process that requires validating the data based on the received information, which is the email address,
first name and last name. We generate the username and password, and next we check if the user already exists in the user repository; in this particular case we use the Swagger Petstore as our user repository, and that's why you can see the error handling here, because the Swagger Petstore returns a 404 if the user does not exist. So we need to catch that, as it is considered an error code, and then, if we have the 404, we can proceed with the registration of the user.
Those are REST calls, and those are individual functions as well, as you will see here in the build process. We generate this user registration file, and among what is automatically generated we have the SinkBinding, we have the service, and then for each individual function that was derived from the workflow we get the trigger definition, with each type pointing to our workflow definition.
Here we have the package name and the identifier of the process, so this is all composed from the information in the workflow definition itself. As I mentioned, I'm a visual guy, so that's why I prefer the BPMN flowchart-based workflows, but that's not the only option. We can have pretty much the exact same functionality based on the Serverless Workflow specification.
In this case it's a JSON format, but there is a YAML-based one as well. It has the exact same capabilities and the exact same execution flow, and, as you can see, we use the same approach for the OpenAPI calls. So we have the swagger.json, which is the OpenAPI specification, and that is the operation id that we want to invoke, to get the user or to create the user.
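A minimal Serverless Workflow definition along those lines might look like the following sketch; the ids, state names and the Petstore operation id are illustrative assumptions:

    cat > userregistration.sw.json <<'EOF'
    {
      "id": "userregistration",
      "version": "1.0",
      "name": "User registration",
      "start": "VerifyUserData",
      "functions": [
        {
          "name": "getUserByName",
          "operation": "https://petstore.swagger.io/v2/swagger.json#getUserByName"
        }
      ],
      "states": [
        {
          "name": "VerifyUserData",
          "type": "operation",
          "actions": [
            { "functionRef": { "refName": "getUserByName" } }
          ],
          "end": true
        }
      ]
    }
    EOF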
Depending on which one you prefer, the declarative or the visual, flowchart-based one, you have both options to apply the concept of workflow as a function flow. All right, so we have that covered; let's get this going. So let me switch here.
So, as I mentioned, we have the YAML file generated; as you can see, I'm actually taking it from there, so it's not a customized YAML file for the deployment, and I'm deploying it to a Kubernetes cluster with Knative, both Serving and Eventing, installed at version 0.26.
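The deployment step itself is roughly the following; the manifest path is illustrative, use whatever your build generated:

    # apply the generated manifest to the cluster
    kubectl apply -f target/kubernetes/knative.yml
    # inspect what was created: the Knative Service, the SinkBinding and one Trigger per function
    kubectl get ksvc,sinkbindings,triggers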
If we deploy that, we get information about all the resources created; just look at the logs as well. It starts relatively fast: it's Java-based, but it is compiled to a native image with GraalVM and powered by Quarkus, as you can see in the logs. So it is quite fast to start, quite fast to shut down, and it allows you to scale easily between different environments. Let me just go back here.
We have the CloudEvents player, where we can easily see the events flowing through the invocation. So let me just get going here; I'll just log into the cluster. Let me see. I'm sending an event to the broker in binary format, the HTTP binding, with the information that I'm a user with an email address, first name Mike, last name Strong.
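Sending such an event by hand could look like this in CloudEvents binary mode; the broker URL, event type and payload values are assumptions, not the exact ones used in the demo:

    curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/default/default" \
      -H "Content-Type: application/json" \
      -H "Ce-Specversion: 1.0" \
      -H "Ce-Id: $(uuidgen)" \
      -H "Ce-Source: /cloudevents/player" \
      -H "Ce-Type: org.acme.users.userregistration" \
      -d '{"email": "mike.strong@example.com", "firstName": "Mike", "lastName": "Strong"}'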
If we just publish that, and if everything goes well... yeah, we have that information already collected. As you can see, it comes back as already registered, so it means that it goes through this path here: the user was found in the Swagger Petstore. And that is quite a random thing with the Swagger Petstore: sometimes you get "already registered" very frequently, and sometimes the user is constantly being successfully registered. But it goes here and ends the process.
And then this goes through, we have the whole registration for the user, and now it's successfully registered. But I would like to bring this to your attention as well: here we have the identifier that is associated with the workflow, and that allows us to keep track of the execution of the workflow with the same identifier.
If you have persistence enabled and you have long-running workflow definitions, even though they are still functions, you can easily get that going, because it will look up, based on the incoming identifier, what is associated with that particular instance. And the nice thing with the identifier is that you can use it as a so-called business key, so one of your pieces of business data can become the identifier of the instance.
So it will not be a generated id like this, but something that is known to your domain, and with that you avoid requiring the identifier to be something workflow-specific. It then abstracts and hides the workflows completely from the consumers of those events.
Lastly, let me just clean those events, and if I go back here, let's look at one example. I want to show you this registration. So here, what I mentioned is that each function can be invoked at any time, and, as you can see here, instead of starting the workflow at the beginning, I want to start directly at the register user function.
That is to bypass all those activities and just go directly here, and either there or there, and this is exactly what you can do, because it's as easy as just sending the register user event. So we directly get the user registered notification, and you can start at pretty much any point in the workflow, depending on...
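Starting mid-workflow is then just a matter of addressing a different function type; again a sketch with illustrative names and payload:

    # same broker as before, but the ce-type now targets the register user function directly
    curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/default/default" \
      -H "Content-Type: application/json" \
      -H "Ce-Specversion: 1.0" \
      -H "Ce-Id: $(uuidgen)" \
      -H "Ce-Source: /cloudevents/player" \
      -H "Ce-Type: org.acme.users.userregistration.registeruser" \
      -d '{"email": "mike.strong@example.com", "firstName": "Mike", "lastName": "Strong", "username": "mstrong", "password": "s3cret"}'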
So if we look at Cloud Run: one thing that you need to add to the project is a dependency, to switch to a different way of receiving and producing events that is based on Pub/Sub instead of SinkBinding, and then you configure your Google Cloud project to make sure that it connects to the right place and gets all the wiring with security and service accounts and so on. When you build that application, it has the exact same capabilities.
It will create the functions, slicing the workflow into functions, but instead of creating the manifest YAML file, it will create two files: one is deploy and the other is undeploy. These provide you with all the gcloud commands to create topics, triggers and Eventarc, and then, lastly, to deploy the application or the service itself. But again, it's a similar concept as with the manifest file: it will be a starting point; it's nothing that is ready for production.
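The generated deploy script would contain commands roughly like these; the topic, service and region names are placeholders, not the talk's actual values:

    # one Pub/Sub topic per function
    gcloud pubsub topics create userregistration-registeruser
    # the workflow service itself, deployed to Cloud Run
    gcloud run deploy user-registration \
      --image "gcr.io/$PROJECT_ID/user-registration" --region us-central1
    # an Eventarc trigger routing messages from the topic to the service
    gcloud eventarc triggers create userregistration-registeruser-trigger \
      --location us-central1 \
      --destination-run-service user-registration \
      --event-filters "type=google.cloud.pubsub.topic.v1.messagePublished" \
      --transport-topic userregistration-registeruser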
The type of the CloudEvent that is received is the "message published" type, and then the topic name is where the message came from, and with this combination you can actually find the right function to invoke. It will pretty much perform the same way: instead of going through the SinkBinding and publishing events directly to the broker, it publishes messages on Pub/Sub based on the information it grabbed. All right, let me see... yeah, so that's pretty much it, slightly ahead of time.