From YouTube: Event-driven Applications with Kogito Serverless Workflows and Knative - Ricardo Zanini (Red Hat)
Description
Event-driven Applications with Kogito Serverless Workflows and Knative
Guest Speaker - Ricardo Zanini (Red Hat)
hosted by Diane Mueller (Red Hat)
2021-04-19
OpenShift Commons Briefing
#AMA #Upstream
see full calendar here: https://commons.openshift.org/events.html
link to slides:
https://github.com/openshift-cs/commons.openshift.org/blob/master/briefings/slides/Event-driven%20Applications%20with%20Kogito%20Serverless%20Workflows%20and%20Knative%20%5B2021%5D.pdf
A: All right, everybody, welcome back to another OpenShift Commons briefing. Today we have Ricardo Zanini, who's from Red Hat and a senior software engineer, and we're going to have a topic that I don't really know a lot about, because I haven't heard a lot about it, which is why we've invited him here: to talk about event-driven applications using Kogito serverless workflows with Knative. Now, Knative we've had a lot of talks about, but I think I'm pronouncing it, mashing up the name, wrong, but we'll...
A: Let Ricardo tell us a lot more about this, and then we'll have live Q&A after the presentation and the demo that he's going to give. So take it away, Ricardo: introduce yourself, tell us what you do at Red Hat, and tell us all about this.
B: Sure, yes, thanks for having me, guys. Well, my name is Ricardo. Like Diane said, today we're going to talk about event-driven applications and how we can create those, you know, business applications using the Serverless Workflow specification and the Knative platform. Well, I'm a software engineer.

B: I work on the Kogito project, actually, and I'm also helping the CNCF Serverless Workflow specification project as well. So we work on both ends: on the specification side, you know, specifying the workflow DSL, and also on the implementation, in the Kogito engine.
B: Right. On the agenda today, I'm going to introduce you guys to the Serverless Workflow specification and what we are doing there, you know, just a brief introduction to the project. I will try to not talk too much about it, because there are a lot of facilities there; you should take a look at the specification and figure out for yourself all the other details, and also about the implementation itself that we are working on right now.
B: That is Kogito, the project that we call it, which is a business application project, you know, for designing and creating business applications. And after that I will introduce a small use case, something for everyone to feel comfortable with: it is like an online store doing order processing. So it is a super simple use case.

B: Everyone is super familiar with it, I hope, and we are going to run a super short demo as well, exemplifying how the workflow reacts to events and what it is doing on an overall Knative platform.
B: So, about the CNCF Serverless Workflow specification. We say CNCF because the Serverless Workflow project is a sandbox project in the CNCF. So you can go there, to the sandbox projects, and you'll see the Serverless Workflow specification project in there. We are accepting contributions, and it is open source, Apache 2 licensed. You can go there and, you know, figure things out, ask questions and...

B: ...maybe introduce some new features that you'd like, or, if you're up for it, implement the specification as well. We can help: we have SDKs, we have a lot of tools around the specification that can help you, you know, understand, implement, and maybe use the specification in your company as well. So what is the Serverless Workflow specification? What we are targeting is to create a declarative workflow language, based on and targeted specifically at the serverless computing technology domain.

B: So that means reading events, producing events, consuming events, correlating them, calling functions outside the scope.
B
Have
all
those
things
run
smoothly,
we
can
out
escalate
the
the
functions
you
know
up
and
down
and
do
all
those
features
that
you
already
know
about
service
computing,
but
within
you
know
having
this
in
mind
in
the
in
the
in
the
workflow
language
as
well.
This
quote,
I
I
I
brought
from
the
website,
so
you
can
go
there
and
visit
the
website
and
understand
what
it
is
with
more.
You
know
details
about
that.
Why
why
we
start
with
this
no
crazy
project?
B
Why
why
it
is
important
for
us
well,
first,
that
we
believe
that
workflows
can
capture
and
organize
business
requirements.
So
what
it
is
you
can?
You
know
instead
of
grab
your
maybe
java,
go
or
python
code
and
bring
to
your
business
analyst
and
explain
today:
hey
here's.
What
we
are
doing
this
service
here
is
what
is
going
on
there.
B
You
know
look
at
this,
these
conditionals
here
we
are
doing
this
and
that,
instead
of
that,
you
can
just
bring
the
workflow
to
your
business
to
your
business
person
and
explain
to
them
what
is
happening
there
and
they
can
understand.
They
can
even
help
you,
you
know
design
their
workflow
or
they
can
even
create
it
work
for
themselves
and
handle
it
to
you
in
order
to
have
that
run
into
a
runtime.
Maybe
so
that's
one
of
the
the
the
idea
behind
it.
B
The
specification
also
targets
the
be
a
vendor
neutral
platform,
independent
workflow
language.
So
what
this
means
means
that
we
do
not
tie
the
specification
with
a
specific
vendor.
It
is
not
code.
It's
not
anything
like
that.
It
is
just
the
specification
anyone
out
there
can
take
the
specification
implements
in
your
runtime
and
have
a
workflow
runtime
running.
You
know,
based
on
the
specification
it
is
platform
dependent
because
we're
not
tied
with
any
specific
technology.
B
We
do
not
say
hey
the
the
workflow
must
be
implemented
in
java,
for
instance,
this
is
not
the
case,
so
anyone
can
just
bring
the
specification
and
share
and
use
that
as
they
please
in
the
in
their
in
their
implementations.
B
So
imagine
that
having
this
common
way
of
describing
our
workflow,
we
can,
you
know,
potentialize
and
create
creating
some
common
libraries
to
be
shared
among
all
the
the
the
runtimes
implement,
the
implementations
itself.
We
can
create
tooling
around
it
influx
infrastructure
around
it
around
all
the
workflow
and
having
all
of
them.
B
You
know
to
have
a
common
way
to
describe
something
and
and
create
something
together.
That's
the
the
the
the
nice
of
the
being
on
open
source
specification
so
like,
like
I
said,
being
vendor
neutral,
we
increase
portability,
productivity
and
learning
curve.
You
don't
have
to
learn
all
of
those
workflow
language
that
is,
that
is
out
there,
because
no
nowadays,
if
you
go
in
the
iit
news,
you
see
that
every
day
every
week
we
have
a
new
workflow
engine
out
there.
B
So,
instead
of
that,
we
are
proposing
this
like
on
a
specification
that
can
be
run
in
in
lots
of
runtimes
and
those
runtimes
can
be.
You
know,
offered
by
to
be
run
on
kubernetes
on
openshift.
You
know,
google
cloud
can
have
a
runtime
for
the
specification
as
well
iws
azure,
whatever
cloud
provided
out
there,
they
can
offer
their
own
workflow
service
b.
Based
on
the
specification.
B
That's
a
our
wingy
situation
that
we
believe
you
know
that
everyone
implements
the
the
the
same
dsl
for
specifying
workflows,
so
the
users
would
gain
a
lot
of
that.
B: So you can consume an event in order to start your workflow, or I want to produce an event at the end of my workflow, or I want to produce an event if I go through this branch of my workflow. And we use the CloudEvents specification to define these events in the workflow definition file. In the workflow definition itself, you can also call external RESTful services, for instance, using the workflow specification, and we use OpenAPI to declare how you can...

B: ...you know, create those calls from the workflow itself. So when you, for instance, receive an order and you want to call an external service to validate your order, to do something else, and it is a RESTful endpoint, you can, you know, express that call using the OpenAPI specification. So you bring the OpenAPI interface into the workflow, and you specify which function in there...

B: ...that is defined in the OpenAPI specification, that you wish to call using the workflow DSL. Also, it is, of course, based on workflow patterns, so there is execution order handling, error handling, data management, data transformation. So there are lots of patterns that we reviewed as useful to have in the DSL.
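As a rough sketch of what he describes (the file path, operation id and event names here are made up, and the field names follow the 2021-era draft of the Serverless Workflow spec), declaring a CloudEvent and an OpenAPI-backed function looks something like:

```yaml
# Illustrative only: "specs/orders.yaml" and "validateOrder" are hypothetical
functions:
  - name: validateOrder
    # OpenAPI document + operationId selects the RESTful call to make
    operation: specs/orders.yaml#validateOrder
events:
  - name: OrderEvent
    # CloudEvents "type" and "source" attributes identify the event
    type: orders.new
    source: /online-store
    kind: consumed
```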
B: Well, the workflow itself is based on a state machine diagram. I say that it's "based on" because, you know, we have start and end conditions in our workflow. So you start, and you go from state to state, you know, in a fluent manner, so each state is responsible for doing something.

B: It is responsible, for instance, for producing an event, or for calling an external function, or for deciding to split between two branches, or to go to one branch or to go to another branch.
B
I
can
call
not
another
subflow,
for
instance,
and
there's
a
lot
of
type
of
states
that
we
that
we
we
have
in
the
specification
and
you,
as
I
said
you
can
go
to
the
specifications
see
in
the
details,
everyone
that
we
support
that
that
we
have
in
there
and
if
you,
if
you
feel
like
that,
we
don't
have
something
that
it
might
be
interesting.
B
You
can
propose,
you,
know
a
pr
or
send
or
reach
out
out
to
us
and
say
hey
this
is
this
should
be
nice
to
have
a
specification,
and
we
can,
you
know,
talk
together
and
figure
that
out
for
the
data
processing
itself,
it
is
like
you
know,
you
enter
with
data
and
you
go
out
with
data
so
and
in
between
you
have
this
pipeline
of
states.
B
You
know
going
through
one
after
each
other
and
then
you
can
have
like
a
data
processing
with
the
data
that
you
that
you're
expecting
to
receive
on
your
workflow.
B
The
specification
has
like
those
three
main:
this
is
the
backbone
of
a
workflow.
Let's
say
this
way:
we
have
functions,
events
and,
of
course,
the
control
logic
of
the
work.
Well,
the
the
first
thing
is
the
functions
we
define.
The
functions
is,
for
instance,
a
call
to
a
restful
service
or
a
car
to
a
grfc
api
as
well.
B
So
you
can
define
the
functions,
the
the
the
spec
file
of
the
open
api,
for
instance,
the
id
of
the
function
that
you'd
like
to
call
in
this
section
of
the
workflow.
In
the
event
section,
you
declare
the
how
you're
going
to
consume
and
how
you're
going
to
produce
the
those
events
within
your
workflow.
So,
for
instance,
you
you
you're
listening
to
our
ordered
event
and
you're,
producing
maybe
shipping
events
or
you're
producing
approved
order,
approved
events
or
in
in
a
scenario
like
a
bid
service.
B
You
were
listening
to
bid
events
and
you're,
producing
maybe
notifications
for
users
that
are
listening
to
that
feed,
for
instance.
So
you
can,
you
know,
go
crazy
and
do
whatever
declaration
of
events
and
as
you
please,
as
your
business
needs
in
this
section
of
the
workflow
and
it
states
it
is
not
the
least,
but
not
the
least
important
part
of
the
the
workflow
control
launch.
B
That
is
where
you
describe
the
workflow
and
what
this
workflow
is
doing,
like
the
the
data
processing
and
how
things
are
are,
are
going
to
to
go
they're
like
calling
these
dysfunctions
or
producing
these
events
that
are
declaring
there
or
consuming
events
that
that
you
declare
or
branching
the
the
the
workflow.
B
You
know
the
the
the
run
of
the
workflow
into
more
subflows,
of
the
whatever
control
logic
that
you
come
up
with,
or
your
business
need
in
this
part
the
section
of
the
workflow,
as
as
always
always
like
that
that
I
I
linked
here
to
a
specific
session
in
the
in
the
in
the
in
the
specification.
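Putting the three sections he names together, a minimal workflow skeleton could look roughly like this (all ids, names and event types are invented for illustration, and field names follow the 2021-era draft of the spec):

```yaml
id: orderworkflow
name: Order Workflow
version: "1.0"
start: ReceiveOrder          # which state the workflow begins in
events:                      # how events are consumed/produced (CloudEvents)
  - name: OrderEvent
    type: orders.new
    kind: consumed
functions:                   # callable operations (REST via OpenAPI, gRPC, ...)
  - name: validateOrder
    operation: specs/orders.yaml#validateOrder
states:                      # the control logic
  - name: ReceiveOrder
    type: event
    onEvents:
      - eventRefs: [OrderEvent]
        actions:
          - functionRef: validateOrder
    end: true
```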
B: So you can go there and see for yourself all of the details of each section and understand in more detail what we are trying to express here. I'll try to be brief, because the presentation can get long if I stay on one or another detail.
B
Well,
every
time
that
we
like
present
the
specification,
talk
about
specification
events,
people
ask
about
implementation
so
well,
it
is
nice
to
have
a
specification,
and
it
is
also
have
now
it's
nice
to
have
an
implementation
right,
because
that's
the
point
of
having
a
specification,
you
don't
you
don't
just
create
a
specification
to
be
in
there.
You
create
specification
implemented.
B
So
we
we
took
this
specification
now
talking
about
the
coached
projects.
We
thought
we
took
the
the
specification
say:
hey,
there's
a
petition
here
when
we
can
actually
implement
this
a
specification
and
offer
to
our
users
a
way
of
then
to
declare
their
workflow
using
json
remo
files
in
a
declarative
way,
and
also
if
there
are
other
vendors
out
there,
implementing
this
specification,
they
can
migrate
to
kojita
easily.
B
You
know
using
the
same
the
same
thing
that
they
learn
with
this
with
this
dsl
in
other
implementations,
with
kojitsu
as
well.
So
this
is
a
work
in
progress,
so
we
are
in
in
tech
preview.
Actually,
right
now,
you
know
community
community
wise.
Of
course,
there
is
some
limitations
of
the
of
the
implementation
that
we
that
we
also
we
are
working
towards
to
implement,
but
the
basic
things
are
there
and
I'm
going
to
show
you
guys
what
about
a
little
bit
about
the
code
itself.
B
It
is.
I
I
like
to
call
this
like
a
cloud
native
solution.
You
know
it
is
a
project
with
lots
of
components.
So
I'm
calling
like
a
cloud
native
solution
for
you
to
build
business
applications,
so
business
applications
that
you
can
solve
business
problems
and
then
what
I
mean
by
that
it
is
like
we
have
an
engine
with
that
is
capable
to
process
rules
in
a
specific
domain
or
even
the
assault.
B
So
imagine
that
you're,
your
insurance
company
and
you
want
to
define
how
you're
going
to
process
that
you
know
super
details,
lots
of
computations
and
calculations
that
it
is
more
or
less
cumbersome
to
have
that.
You
know
in
your
in
your
in
your
code
base,
so
you're
externalizing
rules
and
you
have
all
all
of
them
defined
on
those
on
those
rules,
and
you
can
give
to
the
engineer
in
order
to
run
it.
You
have
also.
We
have
also
an
engine
capable
to
run
complex
decisions
based
on
the
dmn
standard.
B
We
have,
of
course,
serverless
orchestration
feature
that
is
super
brand
new,
something
that
we
are
working
now,
that
is,
the
implementation
of
the
cnscf
service
workforce.
In
a
specification
we
also
have
an
engine
capable
to
run
bpmn
based
files,
and
we
offer
some
tools
focused
on
cloud
for
so
far
you
can
bring
everything
all
your
business
applications
that
you
created
with
conjured
to
the
cloud,
so
we
have
a
specific
tooling
for
that.
B
We
have
like
editors,
since
it
is
a
business
application
driven
project,
and
it
is
of
course
we
we
have
editors
for
you
to
design
and
create
your
bpmn
process
for
you
to
create
and
design
your
dmn
decisions
as
well
your
your
rules
and
that
you
can
download
on
a
vs
code
marketplace.
We
have
an
online
editor
as
well.
If
you,
if
you
go
bpmn.new
or
dmn.new,
you
see
our
online
editor
there
and
you
can
start.
You
know,
scratching
your
process
or
your
dma
right
away.
B
You
can
also
download
from
our
website
this
standalone
editors.
We
are
an
engine
like
I
said,
based
on
drew's
gbpm
and
optic
planner.
It's
you,
of
course,
if
you're
a
long-term
java
developer.
Of
course,
you
should
heard
about
those
during
your
career
like
through
pm,
they're
super
famous
yeah
popular.
Let's
see
that's
correct,
the
correct
word:
there's
super
popular
frameworks
out
there
and
children
rules
and
process
babies
on
vpn.
B
We
are
also
working
on
supporting
services
capable
of
you
know,
giving
support
for
your
business
application.
So,
for
instance,
if
you
run
a
process-
and
you
like
to
see
what
is
happening
inside
that
process,
we
have
some
supporting
services
that
is
capable
to
run
on
cloud.
B
That
can
give
you
a
glimpse
of
the
process
itself
and
what's
going
on
out
there
or
even
a
user
task,
for
instance,
in
in
a
given
process,
you
can
go
in
this
supporting
service
and
see
the
tasks
and
then
you
know
approve
a
task
or
go
forward
with
a
process
and
stuff
and
etc.
So
there
is
a
lot
of
features
out
there
in
those
supporting
services.
As
well
and
like
I
said
some
cloud
tools,
we
have
a
a
specific
kubernetes
operator
for
deploying
codesystem.
B: ...applications on OpenShift or on Kubernetes, and I'm going to use it in the demonstration. So we bring and pack this all together, and that's what we call the Kogito project. So this is a, you know, handful of components to help you design and create your business applications and deploy them on the cloud.
B
Let's
go
to
the
to
the
use
case
straightforward.
It
is
imagine
that
you
have
this
online
store
right
and
after
new
new
orders,
you
create
an
an
event
in
your
in
your
architecture,
every
new
order.
We
create
a
new
event
and
then
this
event
is
complete,
is
being
consumed
by
this
order.
Approval
service,
so
the
the
services
will
be
some
sort
of
rules,
and
you
know
some
sort
of
business
requirements.
They
say
if
it's
approved
or
not.
B
So
that's
the
the
the
state
of
the
after
architecture
right
now
and
john
doe,
the
ceo
come
to
you
and
say
hey
before
approving
the
the
order
I'd
like
to
to
go
through
a
shipping
and
fraud
verification
first,
so
whatever
comes
a
new
water
before
approving
it,
we've
been
using
whatever
features
that
you
have
now
do
a
shipping
and
a
fraud
verification
for
me,
please
and
I'd
like
to
to
run
that
you
know
in
parallel,
because
we
can
then
concentrate
the
the
approval
in
the
in
the
in
the
end
of
the
of
this
no
order
processing
so
and
you
came
up
with
okay.
B
So
what
what
about
like?
Creating
a
pre-processing
workflow?
So
when
it
comes
a
new
order
event
we
are,
we
can,
we
are
able
to.
You
know,
split
this
process
inside.
Like
two
events,
you
can
create
two
events.
After
of
this,
this
workflow
within
this
workflow,
we
create
a
shipping
handling
event
and
a
fraud
evaluation
event.
So
each
component
in
the
architecture
can
listen
to
these
events
and
react
upon
it
and
do
whatever
they
want
with
that.
B: Once we finish with the shipping validation, we emit a new event saying, like, "shipping verified" or something like that, and after the fraud evaluation service, you know, is finished, it can say, "hey, this looks like a fraud", or "this does not look like a fraud"; it can be anything. And in the end we have our order approval service again, and it can correlate all those events, you know, by, for instance, the order id, and say: "hey, I received the shipping approval...

B: ...I received the fraud evaluation approval, and I received this new order event, so I can correlate the three of them", and then it can proceed doing whatever it wants. So this is a small excerpt of a small order processing use case in our example. So I have, like, five services running around. None of them knows about each other; they just listen to events. So it can be, you know, an event-driven architecture, with event-driven applications.
B
They
they're
just
reacting
upon
events
and
emitting
new
events,
and
you
can
add
new
components
in
your
architecture.
So
you
can
you
you
have
this
flexibility
of
creating
things
around
your
architecture,
removing
an
event
adding
another.
So,
for
instance,
tomorrow
you
won't
ship
internationally
anymore,
and
you
just
you
know,
remove
the
international
shipping
validation
service.
B
There's
no
use
anymore
right.
You
can't
you,
you
won't
have
to
you,
know,
code,
something
new
or
anything
like
that.
Just
remove
an
event
from
the
from
your
you
just
remove
a
service
from
your
architecture,
and
you
stop
doing
that
thing
or
no.
Well,
I
wish
to
add
some
new
capabilities.
So
I
add
a
new
service
in
order
to
listen
this
event
and
do
whatever
they
want.
B
So
this
is
more
or
less
what
an
event-driven
architecture
can
the
benefits
of
an
even
driven
architecture
can
give
to.
You
know:
async
processing,
this
kind
of
flexibility.
You
know,
composition
and
orchestration
of
events.
That's
why
we
we
are
talking
today.
You
know
like
about
the
workflows,
for
instance,
so
let's
try
to
zoom
in
into
this
first
service,
the
water
pre-processing
workflow.
B
Let's
see
how
it
is
doing
its
thing,
so
I
draw
this
workflow
like
in
order
to
pre-process
my
order,
so
I
receive
disney
water
event
and
I
will
process
the
order.
I
split
into
into
two
parallel
states
saying
I
will
handle
fraud
in
one
state
and
I
will
handle
shipping
in
another
state,
so
I
can
figure
if
I,
which
kind
of
events
I
need
to
you
know
produce
for
my
architecture,
so
this
is
happening
inside
that
preprocessing
workflow
service.
B
So
this
is
more
or
less
what
looks
like
my
workflow,
let's
see
how
this
workflow
works
and
it
is
described
how
I'm
describing
this
workflow
in
a
yearbook
format
using
the
serverless
workflow
specification.
B
So
I
I
broke
the
workflow
into
three
workflows.
I
have
the
main
workflow,
the
water
workflow,
and
I
have
one
state
here
that
we
call
the
parlour
state.
You
know
calling
those
two
subfloors,
the
fraud
handling
workflow
the
shipping
handling.
So
let's
take
a
look
at
it.
The
first
part
of
the
workflow
it
is
the
I
call
the
headers,
you
know
the
when
we
we
describe.
B
We
we
give
some
information
about
the
workflow,
so
the
name
of
the
workflow,
the
identification
of
the
workflow
version
for
you
to
control
the
version
of
the
workflow,
some
description
for
it,
and
I
I
I
will
say
which
state
I
will
start
my
workflow
in
this
case
the
receive
order.
B
The
second
part
I
have
the
events
definition
when
I
define
how
I
I
will
consume
that
event.
So
let's
say
that
in
in
this
particular
use
case,
I
have
the
order
event,
type
cloud
event
type.
So
someone
out
there,
you
know
any
service
out
there
is
creating,
is
producing
this
ordered
event
cloud
event
for
me
and
I'm
and
I'm
capable
of
listening
to
it.
B
So
the
first
state
that
receives
the
receive
order,
it
is
capable
of
you,
know,
listen
listening
to
this
event,
the
other
event
here
that
we
tie
with
the
name
like
order
event
name.
I
I
have
the
the
the
event
by
his
name.
It
is
the
order
event
here
and
when
I
receive
this
event,
I
say
transition
to
process
order
and
this
state
process
order,
which
I
call
5
parallel.
I
will
create
the
branch
fraud
handling
shipping
handling.
Let's
say
my
engine
will
call.
B
You
know
at
the
same
time,
in
parallel
those
two
sub
subflows
and
once
they
finished,
I
end
my
workflow
and
to
create
those
workflows,
I'm
using
the
serverless
workflow
plugin
and
you
can
download
it
in
the
in
in
the
vs
code
marketplace.
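The main order workflow he walks through could be sketched roughly like this (the ids, branch names and event type are assumptions on my part; the subflow-per-branch shape follows the 2021-era draft of the spec):

```yaml
id: orderworkflow
name: Order Workflow
version: "1.0"
description: Pre-processes new orders before approval
start: ReceiveOrder
events:
  - name: OrderEvent
    type: orders.new          # illustrative CloudEvents type
    kind: consumed
states:
  - name: ReceiveOrder
    type: event               # waits for the consumed OrderEvent
    onEvents:
      - eventRefs: [OrderEvent]
    transition: ProcessOrder
  - name: ProcessOrder
    type: parallel            # runs both subflows at the same time
    branches:
      - name: HandleFraud
        workflowId: fraudhandling
      - name: HandleShipping
        workflowId: shippinghandling
    end: true
```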
B: It is maintained by the Serverless Workflow community, the specification community, and you can go there, see the source code, do whatever you want, download it. And the plugin itself has this nice feature where you can generate a diagram from any workflow file; it uses, you know, a simplified version of the state diagram from UML. So we have a start; we receive the order, which is an event state.

B: We process the order in a parallel state, and we create two branches. So let's take a look at the first one, the fraud handling workflow. Same thing: I have the name, which state we start in, the version, and the events that I'm going to produce. So I'm going to produce an event of type orders.fraudevaluation. And let's take a look at the states.
B
So
we
have
a
data
conditioning
here
which
is
when
we
we
start
this
workflow.
We
analyze
the
data
inside
the
the
the
input
of
the
of
my
workflow
and
we
perform
a
json
path
expression
into
this
into
this
data,
and
if
the
total
of
my
order
is
is
greater
than
one
thousand,
I
would
say
that
we
need
fraud,
verification
if
it
is
under
1000.
I
I
won't
need
front
verification.
Of
course.
This
is
super
lame
an
example
in
real
use
case.
You
won't
do
things
like
that.
B
You
can
like
create
an
estate
before
that,
like
you
can
call
a
service
to
evaluate
some
aspects
of
you
know,
taxes
or
you
know,
behavior
of
the
the
customer.
Whatever
thing
you
want,
you
know,
and
then
you
can
say
to
the
the
workflow
hey
this,
this
order
looks
like
they
need
to
perform,
is
to
be
validated
the
defraud,
or
something
like
that.
B
So
for
this
case
for
to
to
simplify
the
conditions,
we
have
this
event
type
switch,
and
I
have
these
conditions
in
here
very
lame,
but
it
is.
It
is
super
nice
and
I
inject
into
the
into
my
data
state,
and
you
can
a
new
a
new
attribute
to
my
my
workflow.
The
fraud
evaluation
equals
true,
and
then
I
produce
the
event
throughout
the
evaluation,
and
this
event
contains
in
the
data
attributes
everything
that
I
created
here
in
their
workflow.
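A rough sketch of the fraud handling subflow as he describes it (the condition expression and state names are my guesses; field names follow the 2021-era draft of the spec):

```yaml
id: fraudhandling
name: Fraud Handling
version: "1.0"
start: FraudHandling
events:
  - name: FraudEvaluationEvent
    type: orders.fraudevaluation
    kind: produced
states:
  - name: FraudHandling
    type: switch                # data-based choice on the order total
    dataConditions:
      - condition: "$.[?(@.total > 1000)]"   # JSONPath, illustrative
        transition: FraudVerificationNeeded
    default:
      end: true                 # under 1000: nothing to do
  - name: FraudVerificationNeeded
    type: inject                # add a flag to the workflow data
    data:
      fraudEvaluation: true
    end:
      produceEvents:            # emit the event with the enriched data
        - eventRef: FraudEvaluationEvent
```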
B
So
it
goes
with
the
fraud
evaluation
equals
true
and
the
order
information
as
well.
So
anyone
that
is
interested
in
in
the
in
their
orders
that
fraud,
evaluation
and
event
can
do
something
with
it
all
right.
So
let's
do
the
same
thing
that
we
did
and
let's
generate
diagram
using
the
the
the
word,
the
plugin
and
as
you
see,
we
see
that
we
have
like
a
fraud,
handling
switch
state
and,
depending
on
the
condition
condition.
B
I'm
going
to
you
know
say
that
we
we're
going
to
produce
a
fraud
evaluation,
if
not
otherwise,
we
just
end
our
workflow.
Well,
very
simple,
and
hopefully
you
you
could
understand
what
is
happening
there.
B
The
other
workflow
is
the
shipping
handling
workflow
that
is
being
executed
at
the
same
time
as
the
other
one,
more
or
less
the
same
thing.
We
have
now
two
types
of
events.
We
have
international
shipping
order
event
and
we
have
domestic
shipping
other
events.
So,
depending
on
the
context
of
my
order,
I
will
emit
reproduce
an
event
like
an
international
event:
a
shipping
order
event
or
a
domestic
shipping
event
and
more
or
less
the
same
thing
that
we
did
there.
B
We
have
also
sweet
case,
a
sweet
state
case
here
and
the
first
condition
it
is
like
it
if,
if
it
is
within
u.s,
is
it
domestic
shipping?
If
it
is
not,
it
is
an
international
shipping.
You
know
again
super
lane
an
example
in
a
real
use
case
you
can
like
call
a
shipping,
maybe
you
can
call
it.
B
Maybe
a
google
service,
for
you
know
for
localization
to
find
the
address
the
correct
address
and
then
to
figure
some
other
things
out
or
whatever,
and
enhance
your
your
order,
data
with
some
output
for
this
service,
for
instance,
but
in
this
case
after
if
we
realize
that
this
is
a
u.s
order,
so
we
we
transition
to
domestic
shipping
if
it
is
international
or
outside
u.s
we
handle
for
international
shipping
so
for
the
automatic
shipping.
B
I'm
going
to
add
a
new
attribute
to
my
to
my
data,
that
is,
shipping
domestic.
Otherwise,
I'm
adding
a
shipping
international
to
my
data
and
I
will
produce
an
international
ship
in
order
and
I
will
produce
a
domestic
shipping
order
again.
Super
simple,
but
you
can,
you
know,
see
the
the
powerful
work
of
something
so
imagine
that
you
that
you're
talking
with
a
business
person
and
they
can
bring
the
ds
out
to
them
and
explain
hey.
This
is
what
is
happening
inside
the
these
microservices.
B
This
is
what
what
which
kind
of
events
that
we
are
creating
there
you're
consuming
and
what
we
are
doing
with
it.
So,
for
instance,
in
a
real
use
case,
you
can
say:
hey,
I'm
calling
this
google
service
and
we
can
receive
this
kind
of
information
and
we
can
process
the
information
as
as
we
please
using
the
workflow.
B
Diagram
here,
so
we
have
this:
the
shipping
handling
state,
it
is
a
switch
state
and
we
go
with
it
can
be
a
domestic
shipping
or
can
be
on
international
shipping
as
well
as
you
see.
In
the
end,
we
are
going
to
produce
an
event
being
international
or
domestic
shipping
whatsoever
and
we
are
going
to
win.
B
Well,
that's
what
looked
like
the
event
when
you
when
you,
when
you
see
it
in
this.
In
this
perspective,
and
of
course
this
is
the
when
you
are
designing
and
we
are
creating
your
workflow
when
it
turns
out
to
you,
know,
to
deploy
and
to
into
a
kubernetes
or
on
our
low
bishops
cluster,
for
instance,
or
in
the
cloud
you
can
use
the
capabilities
of
the
co2
we
have
cli.
B
So
in
this
case
you
can
deploy
the
workflow
using
this
command
line
like
conjugate,
deploy.
That
is
the
the
actual
the
verb
of
what
we
are
doing
order
surplus
workflow.
B
That
is
the
name
of
the
service
that
I
gave
and
you
can
add
all
those
emo
files
you
know
into
the
cli
and
the
cli
will
push
in
this
case,
one
openshift
cluster
and
in
there
we
are
going
to
to
generate
code
based
on
those
files
and
we
will
create
a
an
image
according
to
crystal
process
runtime
image
into
our
internal
web
registry
and
the
code
operator
will
be
capable
to
create
the
kubernetes
resources
and
the
canadian
to
deploy
and
and
have
this
and
to
manage
this
workflow.
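From his description, the deploy command looks something like the following (the exact arguments and file names are assumptions on my part; check the Kogito CLI help for your version):

```
# Illustrative: push the workflow definitions to the cluster; the operator
# then generates code, builds the image, and wires up the Knative resources.
kogito deploy order-serverless-workflow \
  orderworkflow.sw.yaml fraudhandling.sw.yaml shippinghandling.sw.yaml
```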
B
You
know
running
within
the
platform
itself
and,
of
course,
for
this
use
case
specifically
for
this
working
with
events
with
k
native.
We
need
the
the
serverless
platform.
This
also
works
on
kubernetes,
but
in
kubernetes
you,
you
can
build
within
a
cluster.
You
have
to
build
the
image
yourself
and
then
push
the
image
to
the
cluster
and
the
clutch
to
operator.
You
take
your
image
and
you
do
you
perform
the
same
steps?
B
You
create
the
kubernetes
resources,
you
create
the
native
resources
for
you
and
you
know
for
for
your
service
to
be
connected
with
the
keynative
broker.
In
this
case,
diane,
I
see
that
we
have
some.
You
know
chat
going
on
there
you'd
like
me
to
stop
here
and
or
we
can
answer
the
questions
after
the.
A: Let's answer the questions at the end, and most of the chat is me trying to get people to ask questions. So this is a good point: if people have questions, type them into chat, and, all right, we'll make it happen at the end. Thanks.
B: Once we have the service deployed on OpenShift... this is, sorry about the other technical details, but this is how it looks from the Kogito operator perspective, how we deploy the service in there. So our service will be like an event source, or this event source can be any other thing.

B: In this case, we have the serverless workflow in here, and we are listening to the new order events coming from the Knative broker, and we consume them using Knative Eventing triggers, and the trigger, you know, fires an event for us to listen to. So the order event comes here, into the broker, and we listen to it, and we start our workflow. Our workflow will produce events to the broker using SinkBindings, so we produce events to the Knative broker, and any other service out there...
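A rough sketch of the Knative Eventing wiring he describes (all names, the broker, and the event type are illustrative): a Trigger routes new-order events from the broker to the workflow service, and a SinkBinding points the service's outgoing events back at the broker.

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-workflow-trigger
spec:
  broker: default
  filter:
    attributes:
      type: orders.new            # only new-order CloudEvents
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-serverless-workflow
---
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: order-workflow-binding
spec:
  subject:                        # the workflow service producing events
    apiVersion: serving.knative.dev/v1
    kind: Service
    name: order-serverless-workflow
  sink:                          # where its produced events are delivered
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```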
B
Any
other
you
know
can
be
a
canadian
service
can
be
any
deployable
pod
or
anything
else.
You
know
in
the
in
the
in
the
spectrum
of
the
kind
of
servant
platform
to
be
listening
to
this
event
and
to
react
upon
it,
and
that
this
is
what
we
are
going
to
see
in
this
short
demonstration.
Let
me
see,
I
think
this
is
not
in.
Let
me
try
to
zoom
that.
B
So
I'm
I'm
producing
an
event
to
the
platform
and
can
in
the
country,
is
u.s
if
the
total
value
is
1001
and
is
an
iphone
12
order.
So
I'm
going
to
push
this
event
to
the
platform
and
you
see
what
what
what
will
happen
in
there
so
in
in
this
terminal.
Here
I
have
three
listeners
of
our
k
native
of
ours,
our
workflow
event
producing
events.
So
in
the
top
most
we
have
the
fraud
evaluation
service.
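The pushed order might look like the following CloudEvent over HTTP (the broker URL and the payload field names are my reconstruction of what he shows, not taken from the demo verbatim; the `ce-` headers follow the CloudEvents HTTP binding):

```
curl -X POST http://<broker-url> \
  -H "ce-specversion: 1.0" \
  -H "ce-type: orders.new" \
  -H "ce-source: /online-store" \
  -H "ce-id: order-1" \
  -H "content-type: application/json" \
  -d '{"id": "order-1", "country": "US", "total": 1001, "item": "iPhone 12"}'
```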
B
So
we
are
listening
to
the
events
from
the
front
evaluation
service.
We
have
the
I'm
sorry,
the
shipping
international
here
in
the
left
and
the
domestic
shipping
here
in
the
right.
So
us
one
event
comes
to
the
either
through
the
fraud
evaluation
or
to
the
international
or
domestic
shipping
services.
You
will
see
the
the
produced
event
in
here.
B
It
is
like
a
fire
and
forget
type
of
event.
It's
taking
some
time,
because
my
internet
connection
is
not
that
good.
So
I
push
the
event
and
we
can
see
that
we
didn't
have
any
pods
and
listening
to
that,
because
those
are
creative
services
and
we
receive
a
fraud
evaluation
because
the
the
value
is
above
a
thousand
dollars
and
we
received
a
domestic
shipping
because
the
country
that
we
ship
the
our
our
iphone
it
is
within
u.s.
It
is
our
water
from
the
u.s.
B: You already saw those, like... you could, because I'm fetching the pods of the international shipping service, and you see there are no resources in there, and, yeah, I don't know what is happening, I don't know if I need to refresh this, but whatever; and you see that there is this event coming in here and this event coming in here.
B
There are some extensions as well, so we have some information about the process itself, and here you can see that the fraud evaluation service received fraud evaluation equals true, and the domestic shipping service received the domestic shipping event. The domestic shipping is in here, but the fraud evaluation is not in here, because we run one after the other.
B
Well, let's see. Yeah, we can see the pods here, and because we were talking, Knative will destroy the pods because they are not being used. That's why we are seeing them terminating here.
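The pod teardown he points out is Knative Serving's scale-to-zero behavior, which is the default for a Knative Service. A minimal manifest, here expressed as JSON (kubectl also accepts JSON) with a hypothetical image and an explicit annotation, might look like:

```json
{
  "apiVersion": "serving.knative.dev/v1",
  "kind": "Service",
  "metadata": { "name": "shipping-international" },
  "spec": {
    "template": {
      "metadata": {
        "annotations": { "autoscaling.knative.dev/minScale": "0" }
      },
      "spec": {
        "containers": [
          { "image": "quay.io/example/shipping-international" }
        ]
      }
    }
  }
}
```

With `minScale` at 0, the autoscaler removes all pods after an idle window and spins one back up when the next event or request arrives.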
B
So let's say that we have, I don't know, maybe a new order from Italy that's above a thousand dollars as well. When I send it, it goes quickly this time because the pod is already there. I'm creating the international shipping pod in order to listen to this event and react upon it. And the same thing: you won't see fraud nor domestic here, because we entered the branch that we just received, you know, from the international shipping. So now try to imagine this in a real use case where you have
B
to economize your resources. You have to save resources in order to have a more profitable company. So in this case, let's say that you receive, I don't know, maybe ninety percent of your orders as domestic; then you don't have to keep the international shipping handling services up all the time. You can have the workflow control all the logic and break it into categories.
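The routing he describes, fraud check above $1,000 and then domestic versus international shipping by country, maps naturally onto a Serverless Workflow switch state. The state names and condition expressions below are a hypothetical sketch, not the demo's actual workflow:

```json
{
  "name": "ShippingHandling",
  "type": "switch",
  "dataConditions": [
    {
      "condition": "${ .country == \"US\" }",
      "transition": "HandleDomesticShipping"
    },
    {
      "condition": "${ .country != \"US\" }",
      "transition": "HandleInternationalShipping"
    }
  ],
  "defaultCondition": { "end": true }
}
```

Each branch can end by producing an event, which is what lets the downstream Knative services stay scaled to zero until their category of order actually arrives.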
B
You can break your domain into more specific things and, you know, be flexible enough to distribute your services around your architecture, and you won't waste resources on something that you won't need for some time. So this is a nice example, a nice use case, for you to maybe reevaluate some of those things that we are doing, because normally people create, like, huge microservices.
B
They are doing a lot of things, and maybe sometimes it is nicer just to break them into specific parts and make them run in this fashion: I can have lots of domestic orders or lots of international orders, and have this part of my system not running services or producing events whatsoever, so I can reduce my cost, and sometimes that will help a lot. Well, let's get back; there are some more things that I'd like to discuss.
B
There are lots of details and studies about it. I know that was, you know, super quick and brief; I introduced lots of concepts, like CloudEvents, sorry, the Serverless Workflow specification, the Kogito project, how it is deployed, Knative. There are lots of things out there and also lots of resources. You can go to our websites and see what we are doing and the use cases we are able
B
to help you solve your business problems with. So reach out to me as well: I have a Twitter account, you can send me a DM, or send an email, or reach out on GitHub or wherever, I'm up there. And I would also like to ask you guys to take a look at the blog as well.
B
We are trying to update the blog twice a week with new content, to give people an understanding of what we are doing in Kogito and in business applications in general, and how you can solve your business problems. And if you're curious about the demonstration, the example itself, you can go into the repository I'm showing here and see all the source code. You'll see the workflow; I created some scripts in there for you to deploy it to your Kubernetes instance.
B
We have the image already there, you know, the Dockerfile, sorry, already there, so you can download the Dockerfile, use the image, run the image locally, and play around with this demonstration however you want. And I'd like to invite you to see the presentation from last year that Tihomir and I did; Tihomir is much better than me at explaining
B
the Serverless Workflow specification, and there we go through more details about the specification. If you're curious about it, just go there, and of course the serverlessworkflow.io website: you have all the links that you need to know there, our Slack channel, our mailing list, our weekly meeting, and everything else you see in the resources. Okay, well, thank you, that's it! I don't know if we have any questions.
A
Or not. There are a couple of them in the chat, and I'm going to do them in reverse order, because I think you answered the earlier questions already. Eric is asking: why was Knative chosen to increase the workload capabilities, and not an autoscaler?
B
Just for demonstration purposes, actually. We can use an autoscaler; that's not a restriction whatsoever.
B
I just found it simple enough to show users what we can do, and in the example you will see all the Knative resources for you to replicate the exact same thing in your environment as well. So no, no particular reason whatsoever. But sorry, for the Knative Eventing part: it is because the Kogito engine actually has these connectors, and the Kogito Operator as well, so we have this support for the Knative Eventing platform.
A
All right, and there's one other question, from Ilian: is it possible to create custom code?
B
Oh yeah, of course, because this is a Java project; it's a straightforward Maven Java project. In the example itself I have a README file where I explain how you can create this Serverless Workflow Kogito project from scratch using Java. So you have to know Java, but yeah, you can add custom code, and you can call this custom code from the workflow. There is specific support in the Kogito implementation.
B
We have specific functions that you can declare here: you can give a name, and then in the metadata space we have a, what do you call it, internal service call. So you can call any Java bean from the workflow; yeah, you can do it. But from the specification point of view we have the jq capabilities.
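The metadata-based Java bean call he mentions might be declared along these lines. The function name, interface, and operation keys below are an illustrative guess at the Kogito convention, not a verbatim example from the project:

```json
{
  "functions": [
    {
      "name": "printMessage",
      "metadata": {
        "interface": "org.acme.MessagePrinter",
        "operation": "print"
      }
    }
  ]
}
```

The workflow then invokes `printMessage` like any other function, and the engine dispatches to the named Java bean's method at runtime.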
B
So you can create a function of type expression and externalize things that way. We can define here a function whose type is expression, and if your custom code is only related to data handling on the input, you can just create the expression right here in the workflow itself, and that will work.
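An expression-type function for pure data handling, as he describes, can be declared inline in the workflow. The name and the jq expression below are illustrative:

```json
{
  "functions": [
    {
      "name": "isAboveThreshold",
      "type": "expression",
      "operation": ".total > 1000"
    }
  ]
}
```

Because the operation is just a jq expression over the workflow data, no external service or Java code is needed for this kind of transformation or check.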
A
Yes, and he's assuming you can call out to other services during execution. Ilian would like to be able to call out to a fraud prevention API that they have.
B
Yeah, absolutely, let me show you the specification itself. You can call any functions. We have three kinds in the specification: we support RESTful service invocation, RPC service invocation, and also expressions, like I said before. For the RESTful kind you can define a function, let's say sendOrderConfirmation, and the operation of this function.
B
Yeah, so imagine that you have, like, this OpenAPI spec here, and you would like to call the listPaths operation. You just say: hey, this is my file, and this is my operationId. The Kogito engine will read this specification and generate client code to call this REST service. So yeah, you can call a third-party API and do whatever you want within your workflow.
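A REST function that points at an OpenAPI document plus an operationId, as demonstrated, has roughly this shape in the Serverless Workflow DSL; the file path and operation name here are hypothetical:

```json
{
  "functions": [
    {
      "name": "sendOrderConfirmation",
      "operation": "specs/orders.json#sendOrderConfirmation"
    }
  ]
}
```

The part before the `#` locates the OpenAPI file and the part after it names the operationId, which is what lets the engine generate a typed REST client at build time.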
A
Okay, and someone's now asking: where are correlation keys and state stored, if used?
B
The correlation id, well, we are working on a stateful implementation, so we don't have it yet on the Kogito side, but in the specification we have the part where we define it. Let me see if I can find it real quickly.
B
Oh, we're right on top of the hour, but let me... yeah, here in the events you can declare the correlation. So in this use case that I brought, at the end, in the final order process, if that process were a workflow service as well, because I don't know what it is.
B
It's just an example that I gave, but if it is a workflow as well, you can create the correlation here in your workflow and say in the event: hey, the attribute name is order id, for instance. And then when you receive everything else, you can do whatever you want in code. We don't have stateful support yet for the Serverless Workflow, but we are working on it.
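In the specification, correlation is declared on the event definition. Following his order-id example, matching events on a CloudEvents extension attribute looks roughly like this (event name, type, source, and attribute name are illustrative):

```json
{
  "events": [
    {
      "name": "OrderCompletedEvent",
      "type": "order.completed",
      "source": "shipping-service",
      "correlation": [
        { "contextAttributeName": "orderid" }
      ]
    }
  ]
}
```

Any consumed event carrying the same `orderid` context attribute value is then routed to the same workflow instance.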
B
But in this case we are going to handle that in memory, so we receive the events, at the same time we correlate, and we do all the other stuff. But we are looking forward to it.
A
To implementing it, all right. Well, there's one earlier question; I think you might have answered it, but someone's asking for an understanding of how this relates to, or differs from, Camel K.
B
Well, Camel is, I'd say, for integration most of the time, you know, implementations of integrations, and we do not target integrations. So you cannot, like, FTP a file or send an email or things like that; it is not what we are discussing here. And of course you can implement a pipeline in Camel K.
B
That is not the idea, but from our perspective it is more or less a declarative workflow feature that we are implementing here. It is not about dealing with the integration side of things, you know. I understand that there are some areas where they can interconnect.
B
Can we implement an integration in here? For instance, you can call the Twitter API using the REST function, of course, or in Camel K you can create a route where you go through every step, as we created here. But I understand that each tool has its particular strengths, and you should, you know, leverage that.
B
So from the workflow side, sorry, I'd say that for this kind of declarative workflow, the business applications that we have, this is a suitable fit for this kind of use case. That's my point there.
A
Okay, cool. All right, so we are almost at the top of the hour, and the very first question was: where did the name come from?
B
Well, that's something that people ask sometimes, and yeah, oh, I'm sorry, you know, I can't explain exactly why the founders came up with this name, but everything that we do in Kogito is related to knowledge. The knowledge is the key, knowledge is everything, that's our motto, because, you know, all these rules, decisions, processes, everything is related to
B
business applications and knowledge bases. And it is, I believe, based on, well, of course the founders can explain it much better than I can, but on the quote "Cogito, ergo sum" from René Descartes, as you can read. And the K is because of Kubernetes, you know, it is a cloud-based platform. So that's why it's called Kogito. I don't know if I answered your question, hopefully.
A
In the Kubernetes cloud-native space, so you've done well. So we've got, like, four minutes left: are there some new features, or what's in the roadmap, what's coming up next that we should be watching out for?
B
Well, like I said, Kogito is a work-in-progress implementation of the Serverless Workflow specification, and the workflow specification is going super well in the community. We are creating great things there, and big companies are interested in the specification; this is a very nice thing from the community side. We are about to release 1.0 of the specification, so this is a nice thing.
B
Hopefully we'll see more people trying to implement it, and on the Kogito side we are keen to implement everything that is there in the specification, to be 100 percent compatible with it. We are about to bring jq to the engine, so you can, you know, create jq expressions in your workflow, and so you can transform or do whatever you want with the data within the workflow. And, of course, state.
B
That is a very important thing that we are very much looking to implement in the engine as well, along with, you know, error handling, compensation, and all of those workflow patterns that we don't have yet from the Serverless Workflow perspective. But yeah, there's a lot of work to do there, on both the specification and the implementation side.
A
Yeah, so if you can share that resources page again, because I think that's a good way to end, and if people want to get involved, then, you know, they can follow the links.
B
All the information that you need is there, mainly how to engage with the community. We have a Slack channel under the CNCF Slack, so please go there and say hello, and if you have any other questions just reach out to us there. We are keen to help and looking for contributors as well.
A
Well, thank you for making this a really interesting and engaging talk, Ricardo. We'll definitely have you back to celebrate, maybe.
A
The release, and sure, just keep us in mind. There were plenty of great questions today, so thank you, everybody, for tuning in, and Chris Short for producing in the background there. I'm really pleased to have had you here, Ricardo; definitely come back and keep us posted and up to date. And everybody, join us on the community Slack channel; if you have questions that he didn't answer today, we'll try and get them answered there. All right, well, thank you, everybody, and again, Ricardo, thank you.
A
I love the hand waving while you're talking, the most expressive virtual event so far.