From YouTube: Elsa Workflows Community Meeting 21 (2022-08-23)
Description
Meeting 21
Topics
- Retention module optimizations
- Elsa 3 Versioning
Demos
- Auth0 Elsa 2
- Background jobs in Elsa 3
How to kill jobs?
A
B
I could demonstrate this version history thing, but it's still changing while I'm talking right now, so I can do that maybe next week.
A
C
No demo for me, but maybe some discussion about the retention module. I posted an issue on this, and I have some questions about how we manage and call the database with Entity Framework, because I see we use the specification pattern a lot, which allows the developer to write good code. But when we want to delete some elements, for example, there is a separate call that selects data to get the IDs.
C
And what I see is minimal CPU usage, yet I get many timeouts. I understand that I shouldn't use this kind of small database in production, but when I have only 10 or 20 workflows, I don't think I should get exceptions or execution timeouts from this SQL. Maybe we can add something to the queries to make them perform better against the database. Okay, that sounds good. I don't know if you have some time to look at the issue.
A
Sure, let's go over this topic in a second. Let's see, I do have a demo about background jobs in Elsa 3 and an Auth0 demonstration in Elsa 2. It's something I worked on today as a proof of concept for a client, so I figured it's interesting to demonstrate. This is about protecting the Elsa API using whatever identity server you want; in this example it will be Auth0. But then the question is: how can the dashboard, which is a SPA, a single-page application, make calls to the API? Because the API is now protected, the client, that is, the dashboard, needs to send bearer tokens. So I would like to show one approach to that. So that's a demo to look at, but yeah, let's start with the retention module. Sounds like there's some room for improvement there. You mentioned an issue that you opened?
C
3253.
A
All right: "Retention module job could cause timeout when deleting large data or when using a small DB." Yeah, pagination.
So the idea here is that it takes one page at a time, because a database could have many, many workflow instances, so loading them all into memory at once would be insane; it would probably kill your application. So this is pagination, using a specification here, applying the filter, adding the paging, so it returns workflow instances. And you're saying: could we maybe improve this by not returning the whole workflow instance?
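The paging idea discussed here, plus the suggested improvement of selecting only IDs rather than whole instances, can be sketched roughly as follows. The real code is C# against EF Core; this is a plain TypeScript sketch with hypothetical names (`WorkflowInstance`, `deleteInPages`), not Elsa's actual API:

```typescript
// Hypothetical in-memory stand-in for the workflow instance store.
interface WorkflowInstance { id: string; data: string } // `data` can be huge

// Delete matching instances one page at a time, selecting only IDs so the
// large `data` column never has to be loaded into memory.
function deleteInPages(
  store: WorkflowInstance[],
  matches: (wi: WorkflowInstance) => boolean,
  pageSize: number
): number {
  let deleted = 0;
  for (;;) {
    // Project just the IDs of the next page instead of whole instances.
    const ids = store.filter(matches).slice(0, pageSize).map(wi => wi.id);
    if (ids.length === 0) break;
    // Delete by ID; in the real store this would ideally be one bulk statement.
    for (const id of ids) {
      const i = store.findIndex(wi => wi.id === id);
      store.splice(i, 1);
    }
    deleted += ids.length;
  }
  return deleted;
}
```

The point of the sketch is only the shape: page, project IDs, delete by ID, repeat until no page matches.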
C
Yes, because the Data column is huge, or could be huge. Yeah, absolutely. And the other thing is the delete-many-by-IDs. I think in this method (I didn't post this method in the issue) we get all the dependencies, so the bookmarks etc., to be sure that we delete everything. In it we do a DbContext.Set, and I don't know, when we do this, whether Entity Framework does a GET to the database. Sorry.
C
Let me just open the solution. Okay, if you take a look at the Entity Framework workflow instance store, there is a DeleteManyByIdsAsync, and in this one we have a DoWork callback where we do the following: DbContext.Set for the workflow execution log records, the bookmarks and the workflow instances. And when we execute this line, I don't know whether Entity Framework generates a SQL call to...
A
C
...is happening. But in terms of the performance issue: when I get my timeout, I sometimes get it in this method, and I was wondering about the DbContext.Set().AsQueryable().Where(...). Does this command make Entity Framework internally issue a call to the database to get the workflow execution log records where the workflow instance ID is equal to the given value, then load the elements into the local context so it is able to delete them by ID, or does it delete the entities...
C
A
Etc., etc. Yeah, even one workflow instance could have thousands, if not more, workflow execution log records, and this would try to select them all, although the batch delete works around that. It should happen server side, but even server side it could be a problem. And here it's even worse: it's going to project them, or get them from the server into local memory, and then delete them one at a time, just for Postgres, MySQL and Oracle. But even if it happens server side, with SQLite maybe it could be an issue.
A
C
I know we can check whether there is a SQL query on the client side when we debug. I have a question about this: do we have to continue using the specification pattern in this kind of method, or do we maybe want to use a specific SQL command?
A
Well, the reason for the specification pattern is to abstract away the underlying data storage. We support MongoDB, Entity Framework as an abstraction over relational databases, and the YesSQL provider, so we have different providers, and the specification pattern allows us to abstract away the concrete implementation. So does dropping directly into SQL here make sense in the current architecture?
A
But what would make sense is to have a new specification. So we implement a custom specification called, let's say, DeleteWorkflowInstances, and then the implementations of this particular specification can do the optimized querying. So the specification handler for EF Core could have optimized SQL, and the same goes for MongoDB with the MongoDB handler for this particular specification.
C
A
...the database itself, directly? Yeah, that's an option. Of course, you would have to be able to configure this dependency at the EF Core level, and also, when using MongoDB, do we have a similar mechanism there? What about YesSQL, etc.? So we need to think about these things as well. But okay, if there's a solution to that, sure, as long as it doesn't require users to execute some specific SQL script to make this happen for relational databases and another script for MongoDB.
B
C
A
Right. So the place to look, taking the EF Core providers as the example: every provider implements one of these interfaces. So here, this one implements IWorkflowDefinitionStore. We would be interested in the workflow instance one, this one, and this store implements the following methods of the IStore of some entity: save, add, delete and so on. And they all receive, or some of them receive, a specification; in this case, DeleteMany does.
A
So one way to go about this would be to update the Delete, I think, or the DeleteMany, or both. Where was the delete-by-many-IDs? Remember, this one gets a specification. Exactly. So here it uses a find-many specification, but what you could do is add a bit of code where you say: if the specification is MyRetentionSpecification, for example (let's imagine that's a new specification), then you do whatever you need to, and the retention specification can carry any data...
A
...you need to perform the right query, like the workflow instance, for example, or the set of workflow instances you want to delete. And then here you can do anything you like: you can inject all of the other stores, you can use the context directly, the DbContext; you can get this one directly and have access to all of the DbSets and, of course, execute SQL directly. So you can do all of the optimizations here.
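The branching just described can be sketched roughly like this. All names here (`RetentionSpecification`, `EfWorkflowInstanceStore`) are hypothetical stand-ins for the C# types being discussed, and the optimized bulk delete is only logged rather than executed:

```typescript
// A specification carries the data needed to perform the right query.
interface Specification { kind: string }

// Hypothetical custom specification for the retention module, carrying the
// set of workflow instance IDs to delete.
class RetentionSpecification implements Specification {
  kind = "retention";
  constructor(public workflowInstanceIds: string[]) {}
}

class EfWorkflowInstanceStore {
  log: string[] = [];

  deleteMany(spec: Specification): void {
    if (spec instanceof RetentionSpecification) {
      // Optimized path: the store knows exactly which IDs to remove, so it
      // can issue one direct bulk-delete statement per dependent table.
      this.log.push(`bulk delete ${spec.workflowInstanceIds.length} instances`);
      return;
    }
    // Default path: generic specification handling (load entities, delete them).
    this.log.push("default delete path");
  }
}
```

Each provider (EF Core, MongoDB, YesSQL) would implement its own optimized branch for the same specification, which is what keeps the storage abstraction intact.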
A
If it's not MyRetentionSpecification, then just execute the default logic. Now, I have to admit this architecture is a little bit rigid; it's not very flexible. I wish I hadn't used the specification pattern in this way and instead had an abstract store that wasn't this generic. If each entity-specific store just had specific methods, then every implementation could have perfectly optimized queries to execute. One housekeeping note, guys: if you're not speaking, please mute your mics, just to prevent background noise from getting into the call. So yeah.
C
Yes, we can discuss it over time and then work on it. Yeah, sounds great.
B
I have one question about the version history in V3. You mentioned the workflow execution log records. I just remembered that when I was working on this, I noticed that we are not storing the execution log records based on the versions of the workflows, only the workflow definition ID. So is that by design? Because when we delete a version, I cannot delete its log records, since there is no versioning on that model.
A
B
Yeah, that's what I'm saying: on the workflow execution log record there is a workflow definition ID, but not the version. So I was wondering if that's by design or just a mistake.
A
B
A
B
A
...dependent on its workflow instance. But I can imagine, if you delete a specific workflow definition version, maybe the query could be optimized by being able to delete all workflow execution log records based on the workflow definition ID and its version, so that you don't have to first go over each workflow instance.
B
A
Yeah, good question, thanks. All right guys, anything else? All right, then I will continue with the demos. I'll start with Auth0 for Elsa 2. Let me undo these changes first. So let me give you some context: as you host Elsa APIs online, you probably want to protect the API endpoints. Otherwise anyone could invoke those APIs and create or delete workflow definitions, or execute workflows, for example, and that's not good for obvious reasons. So, just to give you an example, here are some of those endpoints.
A
This one allows you to save a workflow definition; publish and unpublish are supported as well. These API endpoints exist for client applications like the Elsa dashboard. But of course, as I mentioned, you don't want to expose this to the Internet, so you should protect it somehow, and there are various ways you can do that. The most typical way to do it, with ASP.NET Core applications hosting your Elsa workflow server, is to use the ASP.NET Core authentication middleware.
A
But then, if you protect your API endpoints, you need a way for your clients, in this case the Elsa dashboard, to authenticate themselves, or at least provide whatever credentials are necessary to make the API calls to the back end. So what I did here is set up a proof of concept for a client, and I would like to demo what that looks like, as an example that you could use as well. I'm using Auth0, but it would work with any OAuth implementation, so it would also work with Azure AD or any identity provider.
A
So first, let's take a look at the back end. Or actually, let's take a look at Auth0, the setup there. Actually, that's not a good idea; let's start with the back end, that might be easier to explain. So I set up a sample here. This is an ASP.NET Core application using the .NET 6 framework, and I'm just referencing these packages: the JWT bearer one and the Elsa server API project. It's a very simple application; all it does, basically, is host Elsa.
A
So when this application runs, we have a workflow server running, but it has a couple of interesting things. First and foremost, I'm adding the application services, of course, like the health checks (optional, but a good idea to have) and CORS, which stands for Cross-Origin Resource Sharing, I think.
A
This is important if your workflow server is hosted on a different origin than your dashboard, so I'm adding this by default. During development I'm hosting on different origins in any case, so it makes sense, but you need to be aware of the security ramifications of allowing any headers and any origin. So you may not want to do this in production; in production you may want to explicitly specify which origins are supported. Here I'm adding the Elsa API endpoints, and internally...
A
...it does a couple of things, like adding controllers, which is from ASP.NET, configuring Newtonsoft, configuring some routing. So this is very opinionated, and it's set up as what Elsa requires in the current version. In the new version it will also be opinionated, because the client will expect JSON in a certain format, for example, or...
C
B
A
B
A
We are configuring the JWT bearer by reading some configuration from an Auth0 section, but this is not specific to Auth0; it's only called that in this example because I'm using Auth0. If we look at the app settings, that's just the section name; you can name it anything you like, and these are all standard fields that you can configure on the options object...
A
...which you get here as an argument, the JwtBearerOptions. So basically, on the server side there's nothing Auth0-specific, because Auth0 implements the OAuth and OpenID Connect standards, which is very nice. Here I'm adding some more services. Currently this example doesn't use authorization, but you could if you wanted to. And finally I'm adding the necessary Elsa services. So this is all dependency injection setup.
A
Here we build the application and then configure the request pipeline, so we're using the CORS middleware, routing, the authentication middleware and the authorization middleware, which, again, is optional in this example. And then here, in UseEndpoints, we are mapping the health checks at the root, so if we start the application, it will perform a health check and return the result of the health check. And then here we are mapping controllers, and this is the key to protecting all controllers in your application, including the ones exposed by Elsa.
A
So it will protect everything, which may not be ideal in all scenarios; let's say your application has custom controllers that are supposed to be accessible by anonymous requests. I think AllowAnonymous will work, which is an attribute you can apply to your controller, but I'm not sure; you can try it. But in any case, this will protect the Elsa API endpoints.
A
So let's see if that's true. Let's say we want to list all of the workflows, and let's not provide any tokens, so no authentication, and send. So here we get a response that the request is unauthorized, which makes sense, because we protected the endpoints. So to make it work, we need to provide an access token. Let's take a quick look at what that looks like. It's protected using the JWT bearer token authentication mechanism, so we need to provide a bearer token, which means we need to provide an access token.
A
So here I set up a simple example, using the wrong client ID; let me fix that. This takes us to the Auth0 dashboard. In the dashboard I set up a thing called an application, which is basically the configuration of a client that's allowed to make calls to the Auth0 identity server, and it typically comes with a client ID and a client secret. In this case, to be able to request an access token, we need to provide the client ID, the secret and the audience.
A
So we are going to use this client ID, this one here, and we'll take the client secret. The audience is correct, and the grant type is client credentials. So if we submit this request, we get an access token, which I'm capturing using this test script, a very convenient Postman feature: it's a way to execute a bit of JavaScript in response to the request. We get this as a response body, we parse it into a JSON object, and then we get the access token field it contains.
B
A
And we store it in an environment variable called access_token, which we can then automatically use in other requests, such as here: I'm going to pass in that fresh access token. So now, when I hit send, we get a successful response. Of course there are no workflows in the system, but the response is valid.
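The Postman test script described here boils down to a couple of lines. This sketch wraps them in a function with a minimal `pm` interface so it is self-contained; inside Postman, the body of `storeAccessToken` would live directly in the request's Tests tab, with `pm` provided by Postman. The `access_token` field name matches the standard OAuth token response:

```typescript
// Minimal shape of the parts of Postman's `pm` scripting API the script uses.
interface Pm {
  response: { json(): any };
  environment: { set(key: string, value: string): void };
}

// The "Tests" script body: parse the token response and store the access
// token in an environment variable for other requests to reference.
function storeAccessToken(pm: Pm): void {
  const json = pm.response.json();
  pm.environment.set("access_token", json.access_token);
}
```

Other requests can then reference the stored value as `{{access_token}}` in their Authorization header.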
So now the question is: what do we do with the dashboard? So let's switch to the dashboard. Let's pretend this isn't here for a second and just start it as you would normally do. All right, so we see a blank screen.
A
If we look at the console, we see something similar: we see a 401 response when requesting some endpoint. So this makes sense; our back end is protected. So now let's see how we can allow the user to authenticate themselves so that they can actually use the portal, or the dashboard. For that I wrote...
A
...a small plugin called the Auth0 plugin, which works like this. Here we are looking at a simple HTML file that hosts the designer, so that's this style you may be familiar with.
A
Basically, before we were looking at the Elsa API application, and now we're going to look at the dashboard application; that's this one, and as you can see, it has its own client ID. We're going to copy this one. Actually, this is the correct one. The domain is also already correct, and the audience as well: domain, client ID and audience. And of course these settings are important for your applications, like what the allowed callback URLs are.
A
So since the dashboard is being hosted on localhost on this port, it needs to be the same, and the same for this one. All right, and as you can see, I just changed the settings here in this HTML file, and because of hot reloading the page got refreshed, and now we do see the dashboard. Which also means, if we look at the network tab, we see that it invoked a token endpoint; that's an endpoint hosted on my tenant on Auth0. So it requested a token, and the response included... okay, this is the payload.
A
The response was the access token here, and after that, during initialization, we can now make authenticated API calls, like getting the features, or, if we go to workflow definitions, as you can see, it's now able to make successful calls. And looking at the request headers, that makes sense, because it's including the bearer token that we got from Auth0. A bearer token looks something like this: it includes an issuer, subject, audiences and some other claims, and this is extensible; Auth0 is very extensible.
A
You can configure many things, including scopes, and this can include application-specific scopes, which you can map to permissions that are required on your back end. So it's very, very flexible. So let's take a look at the plugin, the Auth0 plugin. It comes with the Elsa core, but you could implement it yourself as well, using Stencil or plain JavaScript. It's the same as any other designer plugin: when you use TypeScript, you implement ElsaPlugin and you implement a constructor, in this case.
B
A
This allows you to do things like, in this case, creating an Auth0 API client and testing whether the user is already authenticated, because then we don't need to do anything. But if we are not authenticated, we need to get an access token, and the way that works is by basically logging in to Auth0. So we redirect to the Auth0 login page, where the user can enter their credentials, and once the user has done that, Auth0 will redirect back to this page using the origin, and that means we'll get back to this... somebody's making...
C
A
Yeah. So when we get redirected, we go through this code again, of course, because the page is reloaded, and this time we will have a code query string parameter. If that's the case, we let the Auth0 client API handle that code: it will interpret it, probably store some cookie or whatnot; that's all handled by the Auth0 API client, which is very convenient. And here I'm just updating the URL, basically because otherwise you would see some ugly "code=" with some value and something else in the address bar.
A
So it basically removes that from the history, which I assume is also better for security. So that's one part: getting the access token, or actually getting the user authenticated. And this handleRedirectCallback will store the access token somewhere and make it available to the application, to the Elsa dashboard, through getTokenSilently, which will internally cache the access token and presumably also refresh it; I'm not sure.
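The sign-in flow just described can be sketched as follows. `Auth0Client` mirrors only the small slice of the auth0-spa-js surface mentioned in the demo (`isAuthenticated`, `loginWithRedirect`, `handleRedirectCallback`, `getTokenSilently`), and `ensureSignedIn` is a hypothetical helper, not the plugin's actual code:

```typescript
// The subset of the Auth0 SPA client the plugin relies on.
interface Auth0Client {
  isAuthenticated(): Promise<boolean>;
  loginWithRedirect(): Promise<void>;
  handleRedirectCallback(): Promise<void>;
  getTokenSilently(): Promise<string>;
}

// Decide what to do on page load; returns a label describing the action taken.
async function ensureSignedIn(client: Auth0Client, queryString: string): Promise<string> {
  if (await client.isAuthenticated()) return "already-authenticated";
  if (queryString.includes("code=")) {
    // We were redirected back from the login page: let the client process the
    // authorization code (it stores the session internally); the plugin then
    // cleans the ?code=... noise out of the address bar.
    await client.handleRedirectCallback();
    return "handled-redirect";
  }
  // Not authenticated and no code yet: send the user to the Auth0 login page.
  await client.loginWithRedirect();
  return "redirected-to-login";
}
```

After this, the dashboard calls `getTokenSilently()` whenever it needs a token for an API request.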
A
About the implementation details, I have yet to look into that, but it will give you the access token. And by the way, this is the configure middleware, which is invoked from the HTTP-client-created event, and this is invoked by Elsa as soon as it's about to instantiate an axios client, which is like an HTTP client. This event gets fired, giving all plugins the opportunity to set up the HTTP client. For example, axios supports middleware, which is very convenient for us, because that means we can attach custom headers to its outgoing requests.
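The header-attaching middleware could look something like this sketch. It is modeled without the axios package so it stands alone; `RequestConfig` approximates the axios request-config shape, and `getToken` stands in for a `getTokenSilently`-style call:

```typescript
// Approximation of the axios request-config shape used by an interceptor.
interface RequestConfig { url: string; headers: Record<string, string> }

// Builds a request interceptor that attaches a fresh bearer token to every
// outgoing request before it is sent.
function createAuthInterceptor(getToken: () => Promise<string>) {
  return async (config: RequestConfig): Promise<RequestConfig> => {
    // getToken returns a cached token when possible (getTokenSilently-style).
    config.headers["Authorization"] = `Bearer ${await getToken()}`;
    return config;
  };
}
```

With the real library, a plugin would register this via `axios.interceptors.request.use(...)` on the client instance Elsa hands out through the event.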
A
That's a good question. Actually, that's because I was already signed in to my tenant on Auth0, but let me show you by clearing the data and closing all windows. Now that all my caches are emptied and I don't have any cookies anywhere anymore, we will see the login page. So we'll go to localhost, this one, and now we do see the login page. You see the domain here: this is my tenant on Auth0, and I need to log in with a user that's known to Auth0.
A
C
A
C
Yeah, and I think the next step after this could be to add some roles and policies on the different APIs, to be able to define some policies, for example an administrator, or a viewer who can see the workflow instances, and to be able to get the role from the token and only authorize some people to access certain parts of the dashboard.
A
Yeah, absolutely, that would be terrific. It would probably require some changes to the API. For example, if we look at the Elsa server API project, where we have the endpoints, and we look at, let's say, workflow definitions delete: right now it's public, but we protected this one in Program here, using this. So this is protecting everything, but it doesn't take policies or roles and that kind of thing into account. Although you can provide policy names, they would apply to all controllers.
A
But if you want more granular control over what certain users can do based on their permissions, we need something like Authorize, and then here we should provide a policy name and/or a set of roles. But this would be very opinionated, right? It would still be helpful, though: we could just define a set of policies, and then it's up to the application implementer to configure the requirements for each policy. So...
C
Okay, we'll have to see whether we can, or have to, create some generic roles and add some attributes on the controllers, and allow the implementer to map to them. If we decide, for example, to create some basic roles and add Authorize attributes, we have to let the developer, the implementer, map his own roles onto our generic ones.
B
I think one of the options we can do right now is to write a custom authorization handler, like one big class, to handle all kinds of requests, and then you can just put in the conditions: if this is the endpoint, allow or deny. So it could be helpful if we can break it down, for example, into operations, whether it's a create, edit, delete or update.
B
A
...them, to see if they may be interested in demonstrating it.
B
A
...the last meeting, I think, or the meeting before that. It's basically a way for you, as a developer, to implement some work that you want to execute in the background, so not necessarily in response to an HTTP request or based on a timer; you may want to be able to schedule it manually, or in response to a timer, or in response to a request, but it should be non-blocking. That's basically the idea. There are solutions out there like Hangfire and Quartz.NET, and that's great, typically.
A
The way you would work with those is: you have some code that schedules a job, and the job gets handled by Hangfire or Quartz.NET. In Elsa 3 we have that as well, using Hangfire or Quartz.NET, but with this feature that I'm about to show, these jobs now become available as activities. So you can orchestrate which jobs you want to execute, block the workflow until the job is finished, and then continue the workflow. So it's a nice, easy way of orchestrating work.
A
So yeah, that's what I want to show. First, let's take a look at an example. I have an example job called IndexBlockchain, and it's going to do something very simple: write something to the console, wait for five seconds and then write something else, just to show that this is a long-running process in the background and how you can use it from a workflow.
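The demoed job might look roughly like this. The actual implementation is a C# class inheriting Elsa 3's job base class; this TypeScript sketch only mirrors the shape, and the output sink and delay are injected so the example stays testable:

```typescript
// Stand-in for Elsa 3's job base class: a unit of background work.
abstract class Job {
  abstract execute(): Promise<void>;
}

// The demoed job: write a line, do five seconds of (simulated) work,
// write another line.
class IndexBlockchain extends Job {
  constructor(
    private out: string[],            // stand-in for the console
    private wait: () => Promise<void> // stand-in for the five-second sleep
  ) { super(); }

  async execute(): Promise<void> {
    this.out.push("Indexing blockchain...");
    await this.wait();
    this.out.push("Done indexing blockchain.");
  }
}
```

As described later in the meeting, an activity type provider then discovers such job classes and wraps each one into an activity, so the workflow can suspend until the background work completes.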
C
A
So let's see this one. Here we look at the designer; here are the activities, and here we see the job that I created, under the Jobs category. I will add it here, and add two more activities for demo purposes: this one will say "start", this one will index the blockchain, and this one will tell us when it's done. The reason I'm showing the start and end activities is to demonstrate that this workflow will be suspended.
B
A
I'll copy the definition ID so we can run it from the API. All right, so before I run it, let's take a look at the console here. Let's clean this up and send. So here we see "start" and the first line of the job, which is now running. I was a little bit too late; it finished already. Let's start again. Of course we see that it finished and it ended, but I want to show that it was really suspended. I'll refresh this, and here we see this workflow. Do you see the state?
A
It has the status Running, and its sub-status is Executing. Yeah, so this is it: it is suspended, it's out of memory. It's a little bit hard to prove without actually going through a debugger, but I'll leave that as an exercise to you. But yeah, it finished the work. And the cool thing about this is that you don't have to create a custom activity.
A
You just implement a job by inheriting from the Job base class, and the way this works is that there's an activity type provider that looks up all job implementations and basically wraps each job into an activity that you can then choose from the picker here. The first time I mentioned this, I think it was you, Muhammad, who asked the question: what's the difference between a job activity and a regular activity? Yes.
B
A
And really the only difference is that a job executes in the background, while an activity runs synchronously, right? So when an activity executes, workflow execution doesn't continue until it's done, and this happens synchronously in memory; the workflow remains active, etc. But it did make me think some more: why not take it a step further and allow activities themselves to declare, or be configured, to run asynchronously, so in parallel? It would, for example, allow you to branch out into multiple branches of execution...
A
...and have multiple activities execute without each branch having to wait for the other one to finish. So that's something I'm researching as we speak. This way we could have, let's say, a custom activity that makes an API call to some Azure Function, but it's potentially very slow; it could take minutes.
A
B
No. So basically, with a regular activity, if it is running for a long time, when I look at the workflow status I will see that it is running, and it will keep running until the activity finishes. But what I understand is that in the case of a job, the workflow becomes suspended and it waits for the background work to finish; then it resumes execution of the workflow.
A
That's right, although you make a good point there, because even though the workflow will be put out of memory and persisted, the status in Elsa 3 will remain Running. But it will not be actively running; it's just a status saying that the workflow isn't finished yet. It may be waiting for user input, and while it's waiting for that, the workflow doesn't exist in memory, only once...
A
B
A
In Elsa 3 it will be Running, and there's a bunch of sub-statuses, like Suspended. So the main status could be Running, but the sub-status would be Suspended if it's waiting for a job to finish, for example. Can we pass some data to the job? Not right now, but that will be added as well; that's very important. And not just passing in data: you should also be able to get a result or some output back. Can I terminate a job? Not currently, but that's a good question.
A
You should be able to terminate it somehow; at the very least there should be an API to do so, and then, of course, maybe from the workflow. Maybe you want to split execution and start a job on the one hand and have a timeout on the other hand, and if the timeout fires, maybe you want to kill the job, for example. Or maybe jobs should have a max timeout, or a timeout value that you can configure. But that's a good question; let me write it down. So yeah.
B
A
...run in the background, which would actually solve a lot of issues that I know of, yeah.
A
And once we have that, then maybe there's no need for a job itself to be exposed as an activity. At the same time, it could be an interesting option, because in some scenarios you just want to execute a job outside the context of a workflow, as is the case when you implement Hangfire or Quartz.NET jobs. Yeah.
B
A
B
B
A
And, of course, if you have any availability, please take a look if you can, and we'll try to help. Awesome, thanks so much guys, thanks for attending today, and we'll see you next week.