From YouTube: Argo Workflows 101 Workshop, 22 Sep 2020
Description
06:50 Fundamentals
23:30 Anatomy of a Workflow
40:00 Artifacts
1:00:00 Exit Handlers
1:05:00 Workflow Templates
1:19:00 Cluster Workflow Templates
1:20:30 Cron Workflows
https://docs.google.com/presentation/d/1ftwxelCtW-onnuxoMMAL-ZAqLOICh9YuAGzgo-dghZQ/edit?usp=sharing
A
Okay, hello, everybody. Typically a lot of people join two to three minutes late, so we'll wait a couple of minutes in the meantime.
A
Okay, so we will be recording this, and it will be available on YouTube later if you want to share it with your teammates. It would be fantastic if you could sign in to the sign-in sheet, like we do in the community meetings; it's really good to know who will be in this session, and it's really good to understand what you're using Argo for as well, so feel free to add any commentary about that. There are a couple of prerequisites to this workshop.
A
You need to have an Argo installation to run against; there are instructions on slides two and three about how to do this. For development purposes we use k3d, a mixture of Docker and k3s. You could probably also use kind or Docker Desktop's Kubernetes, if you want to.
A
The reason we use k3d is really just because that's what we use on our CI system, and it supports RBAC correctly, which Docker Desktop does not. You'll also need kubectl installed to do the workshop. A couple of people have commented recently that k3d changed its parameters in, I think, version three. We are still using version one, and version three has slightly different command-line parameters you need to watch out for; here are more complete instructions to get that running.
This will install the quick-start manifests, along with MinIO to simulate an artifact repository like S3, and Postgres, so we can play around a bit with the workflow archive. Because k3d uses containerd as its runtime rather than Docker, you'll need to set your container runtime executor to PNS, and this is the patch here.
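The patch shown on screen isn't captured in the transcript; as a sketch, switching the quick-start install to the PNS executor means setting `containerRuntimeExecutor` in the controller ConfigMap, roughly like this (the namespace and manifest layout are assumptions based on the quick-start described above):

```yaml
# Sketch of the PNS-executor patch; the key name follows the Argo
# workflow-controller ConfigMap convention of the time.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  containerRuntimeExecutor: pns
```

It could be applied with something like `kubectl -n argo patch configmap/workflow-controller-configmap --type merge -p '{"data":{"containerRuntimeExecutor":"pns"}}'`.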
A
This may actually apply for many other systems, and may also just work generally for anybody. It will typically take about two minutes to start up, because pulling and starting those Postgres and MinIO pods takes a little bit of additional time.
A
Okay, a little bit about who we are. My name's Alex. I am one of the three engineers who will be presenting today; we'll also have presentations from Bala and Simon, who are also engineers on the Argo Workflows team. Most of us have been working on Argo Workflows for, I think, about eight months. Bala, how long have you been on Argo Workflows? 18 months?
A
We also work on other Argo projects, such as Argo CD and Argo Events. We won't be talking about those today; there has been an Argo Events 101 workshop, and maybe we'll talk about doing that as well.
A
Understanding the terminology is very helpful to understanding things in general. What to do if you have any questions: simply ask them in the Zoom chat. I've got the window up on my desktop and I will monitor it, and if somebody is busy presenting, then somebody who is not presenting will read those out.
A
Okay, cool. So let's talk a little bit about some of the fundamentals of workflows. I'm just going to cover the most basic parts of running workflows and a little bit of terminology, and then we have our first very basic hands-on exercise. I'd like everybody to give the hands-on exercises a go if you can; when you've completed them, go to the chat and give a thumbs up.
A
I
think
why
can't
I
hide,
I
don't
see.
I
don't
think
I
have
the
option
because
I'm
presenting
to
give
any
thumbs
up
so
give
us
a
thumbs
up
if
you
have
able
to
use
complete
exercise
that
allows
us
to
kind
of
control
the
pace
of
all
the
different
workshops
and
the
way
that
we're
going
to
do
the
exercises
are
going
to
be
in
a
kind
of
a
call
and
repeat
so
we'll
show
you
how
to
do
that
and
then
you'll
just
have
an
opportunity
to
to
complete
the
exercise
yourself.
A
Okay, the first exercise is to submit a workflow that prints "hi argo workshop", using the user interface. The way I'm going to do that is to go into the user interface (which I've not got listening, because I've turned off the port forwarding; make sure you have your port forwarding running), load the user interface, go into "Submit New Workflow", and edit the workflow in that tab so it prints the phrase "hi argo workshop" to the console.
A
I've just got the most basic layout here. I'm going to use docker/whalesay:latest, and then the command is going to be bash (actually I'm not sure if this will work, so we'll find out), and the args are going to be a little script. So this is a very basic workflow. I like to call my entry point "main"; you can call it whatever you want. I'm going to use the docker/whalesay image, which is a very common image based on Ubuntu and contains some useful things like bash and echo. I'm going to save that and submit it, and then it'll take me to the graph view, and I can see what's happening there.
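The workflow typed into the UI isn't captured in the transcript; a minimal sketch matching the description (entry point named main, the docker/whalesay image, bash echoing the phrase) might look like this. The metadata name and the exact script line are assumptions:

```yaml
# Sketch of the demo workflow described above.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: docker/whalesay:latest
        command: [bash, -c]
        args: ["echo 'hi argo workshop'"]
```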
A
Open up a terminal, and then I need to create myself a workflow.
A
Now, what I've done here is I've used `name` for this, but actually it's more typical to use `generateName`.
A
`argo submit`... what's up?
A
And when you've completed this exercise, just give a hands-up or a thumbs-up in the chat.
A
Okay, I think we've got enough thumbs up so far. So here are some additional instructions on slide six (which likes to change slides every time I touch my mouse). You can do this from the command line using `argo submit`. There are also a couple of other useful commands here: `argo list` lists all the workflows in your namespace.
A
Okay, that gives me a list of them as well. `argo get` gets you the workflow, and actually we can be kind of clever with this command; I'll show you a little bonus technique. With `argo get hello` you can actually get additional information: you can go for a wide output (`-o wide`) or a detailed YAML output (`-o yaml`). You can also provide `latest` as an alias for the most recent workflow.
A
I hope this works. Sometimes we do these things and discover that we have some bugs; that typically happens here as well. I've got a challenge here: I'm typing in `-n argo`. If you've got a tool called kubectx installed, you can change your default namespace (kubens changes the namespace by default), and I actually have that on my terminal here, so I can see what I'm doing.
A
So Augustine has said it's not working for kind. Okay, so the error you've got there is around Docker sockets; you might want to change your installation to use the PNS executor. That's on slide three, if you want to reconfigure that. No problem. And another attendee asks (sorry if I'm jumping in): how about the Go client for submitting workflows, is it widely used?
A
This is meant to be a 101 workshop, and that is definitely a 201 or a 301, an intermediate-to-advanced topic. We do have a couple of different ways to interact with Argo Workflows: you can use the CLI, you can use the user interface, and you can obviously use kubectl.
A
One "faster", two "faster"s... okay, so we'll go a little bit faster in that case. I'd say the number of people who voted for slower, plus faster, plus the same did not add up to 100 percent. Okay, onwards.
A
So I want to touch on what is, in Kubernetes terms, more of an intermediate-to-advanced topic, but for most users of Argo Workflows is a very important one. Workflows tend to execute with a service account chosen by the person submitting the workflow. Think of it this way: the Argo controller is installed using its own service account and gets its permissions from that service account, and the workflows themselves are submitted using a different service account.
A
If
you
you're
doing
third
party
integrations,
it
gets
even
more
complicated
because
you'll
probably
have
a
service
capac
for
every
every
single
user.
Okay,
now
by
default,
workflows,
are
submitted
using
the
default
service
accounts.
I
actually
think
you
can
probably
see
that
listed
here
in
the
console
or
service
account
this
users,
so
you'll
get
the
permissions
from
that
service
account.
Now,
with
these
quick
start
ones,
the
service
account
is
given
a
role
called
the
workflow
role.
A
So there are things like the ability to get, watch, and patch a pod, the ability to look at the logs, and, in this case, the ability to create and get workflows, which is actually not necessary for any of this workshop; that's useful if you want a workflow to be able to create other workflows, which is an interesting pattern. But often you don't want to submit using the default service account. You probably want to create your own service account, and that's relatively straightforward.
A
You can create a service account using `kubectl create serviceaccount me`, and then you want to bind that service account. You typically always have three things when you're dealing with service accounts. You have the role, which is the permissions you're allowed, the rules that you have.
A
You
have
the
service
account,
which
is
you
know.
Well,
it's
an
account
for
in
a
service,
but
like
a
user,
and
then
you
have
a
role
binding,
which
associates
a
role
to
a
service
account.
Basically,
you
know
creates
that
linkage
and
the
way
you
can
do
that
is
doing
coop
ctl
correct
role,
binding.
A
This
is
the
name
of
this,
so
the
countdown
I'll
go
namespace
and
then
it's
called
me.
So
the
rock
the
role
bindings
cover
me
and
the
role
I'm
binding
is
called
workflow
role,
but
you
might
have
multiple
different
roles,
but
you
probably
won't
do
to
be
honest,
you're,
probably
jeff,
one
called
workflow.
A
Okay, and now I can submit my workflow using that. The way I would do that is `argo submit --serviceaccount me`, and then I would just watch it, and you'll see this has been submitted with that service account. We definitely recommend you have a separate service account for running workflows, as opposed to running the system itself. Okay, so the hands-on exercise there for you to do is simply to create a service account and submit a workflow using that service account.
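As a sketch, the kubectl commands above expressed as declarative manifests might look like this; the role name workflow-role is the one mentioned for the quick-start install, and the namespace is an assumption:

```yaml
# ServiceAccount plus RoleBinding, equivalent to the
# `kubectl create serviceaccount` / `kubectl create rolebinding` commands.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: me
  namespace: argo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: me
  namespace: argo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-role
subjects:
  - kind: ServiceAccount
    name: me
    namespace: argo
```

A workflow can then be submitted with `argo submit --serviceaccount me my-workflow.yaml`.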
A
Okay, people got through that quickly, so I can see the pace could be even faster still. Okay, so that was just the very basics. A quick review: we talked about submitting a workflow from the user interface and submitting a workflow from the command line. You can also submit a workflow using kubectl (as an advanced exercise, I'll let you go and discover how to do that yourself), and also via an API and SDK.
A
So that's just the basics, but what's super important is the anatomy of the workflow: understanding all the different terms and terminology. For this bit I'm going to hand the session over to Simon. Simon, are you ready?
D
Yes, I am. Thank you so much, Alex. Can you guys hear me?
D
Alright, sounds good. So hi everyone, my name is Simon. I'm also an engineer on the Argo project; I've been working on Argo Workflows for almost a year now, and previously to that I also worked a bit with Argo CD, so some of you might recognize me from GitHub and from Slack. So today I'm going to talk a little bit about the anatomy of a workflow.
D
This is not going to be a very specific run through all the fields of the workflow; that's something that not only do we not have time for, but it's something that should be done through the documentation that we've provided. Instead, I'm going to focus on abstract ideas of what a workflow is, and a useful example that you can use. I feel like if you think of a workflow in a specific way, it will help you with designing better workflows.
D
So with that, let's get started. If you guys have any questions, please feel free to send them in the chat, and I'll ask Alex or Bala to read them out for me, since I don't have the window right now. You guys can see my screen, right? The slideshow screen?
D
Alright, so the workflow has many different sub-units and concepts; some of them are on the screen, and we'll talk about them independently. The most important part of a workflow is a template. Templates are essentially how we define the work to be done, and how we call templates to actually do the work. Templates are very similar to functions or methods in your standard programming language, and they're always defined under the "templates" field of a workflow.
D
So
there
is,
there
is
a
thing
called
workflow
templates
work,
capital,
w
workflow
capital,
t
templates.
That
is
something
different
and
bala
will
be
talking
about
this
later
on
in
the
workshop.
But
when
I
just
keep
in
mind
that
when
I
say
templates
during
this
section,
it
refers
to
lowercase
templates
that
you
define
under
the
templates
field
of
workload.
D
There are many kinds of templates that you can use, as we will cover, but there are two main ones that you should keep in mind: there are templates that define work that needs to be done, and then there are templates that call other templates to actually do the work and perform some sort of execution control. If this doesn't make any sense to you right now, don't worry; we will run through a very specific example.
D
When I first started on the team and was first getting familiar with workflows, an analogy that I came up with that was very useful for me was to compare a workflow to a Java class. I'm sure most of you are familiar with Java or some similar class-based object-oriented programming language, so I will essentially be using Java as a case study to introduce you to the structure of a workflow.
D
So here on the left we have a very simple Java class. This class has three functions: two of them are used to define work, which is what I mentioned before, and one of them is used to actually call on that work. Let's step through it, just to make sure we're on the same page. We have these two methods that define work to be done.
D
One of them is pretty simple: it adds four to an input. The other one just prints out a string. These methods have names that we choose, and they also specify their inputs and outputs. So in this case the function add4 will take in an integer, which we don't actually know the value of yet, and then it will return an integer; and sayHello won't take anything and won't return anything.
D
It's important to distinguish that at this point these are abstract arguments; they are not actually live yet. Then we also have a block in which we actually perform work. In this case, we pass the integer two to our add4 function, save the result, perform some execution control on the result, and if it succeeds we call another function. We use the names that we've defined, we pass in live arguments, and we perform execution control.
D
So here on the right is the equivalent Argo workflow. I feel like if this were the very first workflow you saw, and you didn't have the equivalent Java code on the left, it would be very overwhelming (at least it was to me), and you wouldn't really know what's going on, how it will execute, or how you pass in parameters. So I feel like this helps.
D
We call these functions using the names that we chose, and in this case we pass in live arguments, no longer abstract, and then we also do some execution control.
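The slide's workflow isn't reproduced in the transcript; as a sketch of the analogy, the Java class's two work-defining methods and one calling block might map to templates like this (all names, images, and values are assumptions for illustration):

```yaml
# Sketch: "definition" templates declare abstract inputs, the
# "execution" (steps) template calls them with live arguments.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: java-analogy-
spec:
  entrypoint: main            # like the block that calls the other methods
  templates:
    - name: main              # execution template: calls other templates
      steps:
        - - name: call-add4
            template: add4
            arguments:
              parameters:     # live argument, like add4(2)
                - name: x
                  value: "2"
        - - name: call-say-hello
            template: say-hello
    - name: add4              # definition template: declares an abstract input
      inputs:
        parameters:
          - name: x
      container:
        image: docker/whalesay:latest
        command: [bash, -c]
        args: ["echo $(( {{inputs.parameters.x}} + 4 ))"]
    - name: say-hello         # definition template: no inputs, no outputs
      container:
        image: docker/whalesay:latest
        command: [bash, -c]
        args: ["echo hello"]
```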
D
So I will let you guys just stare at this for a minute or so, to get familiar with it, before I continue. If you have any questions, please feel free to ask them in the chat, and if Alex could read them out for me, I'd appreciate it.
A
Simon, we have a couple of bits of feedback. One is they like the code-to-workflow analogy (awesome), and the other one is: how does Argo know the output is of type number?
D
That's a great question. Argo is actually typeless, so if you are a JavaScript or Python programmer you'll take this news better than people who come from Go or Java, like I do. When it comes to parameters, parameters are always strings, so whether you want to treat them as integers or as strings is entirely up to you and how the containers that run manage them. We also have another type of argument called artifacts, and artifacts are essentially files that we handle for you. So essentially, parameters are always strings, and then you can manipulate those strings however you want.
D
Alright, if there are no other questions, I will continue. So, as we discussed earlier, we have two main types of templates. On the left are definition templates; these define work to be done. Now, I've used the phrase "define work to be done", but what it really means is essentially that it just executes a container. As all of you who have used Kubernetes before know, you can define whatever work you want to be done in a container.
D
An
article
will
execute
this
container
for
you,
given
the
parameters
that
you
have
defined.
So
we
also
have.
We
also
have
a
script
template,
which
is
essentially
a
convenience
wrapper
on
the
container,
so
that
you
can
define
your
own
scripts
in
line
with
argo,
as
opposed
to
having
to
define
them
within
your
image.
D
We have a resource template; the resource template is useful for actually manipulating Kubernetes resources. So, for example, you may have a resource template that will get a ConfigMap and read it, or that will create or delete a Service. And suspend is a template that will actually pause execution of a workflow and resume based on a condition that you have defined, or resume based on a timeout, or just wait until a manual approval.
D
So these are useful for manual approvals or delays, stuff like that. We also have execution templates: these templates' only job is to call other templates. They may call other execution templates, but most likely they'll be calling definition templates. Steps is a template that essentially defines a sequence of steps to be run.
D
So if we go back to our example, here you see that we're using a steps template. What the double dash means is that Argo will perform this step, and once this step is done, Argo will perform the next step. So it's basically just normal sequential execution.
D
This is a list of lists. If you were to have a list of many items (so imagine that this dash here is not here), then everything within that higher-order list will be executed in parallel. So if you wanted to have these two steps run in parallel, you would just get rid of this little dash here, and then these two will run in parallel.
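The slide's YAML isn't in the transcript, but the list-of-lists convention being described looks roughly like this (template and step names are made up for illustration):

```yaml
# Sequential: each "- -" opens a new inner list, so step-b waits for step-a.
- name: sequential
  steps:
    - - name: step-a
        template: whalesay
    - - name: step-b
        template: whalesay

# Parallel: both steps sit in the same inner list, so they run together.
- name: parallel
  steps:
    - - name: step-a
        template: whalesay
      - name: step-b
        template: whalesay
```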
D
We also have a DAG template, which allows you way more fine-tuned control; if you wanted to define a DAG for your work, you would probably be using a DAG, and we find that most of our customers end up using DAGs for non-trivial cases. We've also covered inputs, outputs, and parameters. When I first started working with Argo, I saw that there were inputs, there were outputs, and there were arguments, and I was wondering what the difference was between, for example, inputs and arguments. Inputs in Argo are always definitions, as are outputs; these are the placeholders that you will have, and arguments are the actual live arguments that you pass in.
D
So if we go back to our example here, you see that in our abstract code block we have inputs; these are just the name of a parameter to be passed in. And in our execution code block we have arguments, and arguments actually need a value to be passed in. Because of this, inputs will only be used in container or definition templates, and arguments will be used when calling those templates.
D
Awesome. So now I have a small exercise for you guys. I've taken the workflow that we've been talking about and put it on this link right here; ignore this first link, that's an internal one that we use when we give our internal workshop.
D
So if you open this (I'll also paste it in the Zoom chat), you'll get the workflow that we've been working with, and your challenge for the next, let's say, five minutes will be to edit that workflow so that it corresponds to this edited Java class that we have here. All of the changes that I've made are highlighted with this gray highlight.
D
Meanwhile, I can be answering some of the questions that have popped up.
D
So a question: how does Argo pass parameters between the pods, mounted as special files? Argo actually passes the parameters using annotations. When you use parameters, Argo will use annotations on the pods to communicate between the pods and the controller, which will pass them into the next pods. Because of that, annotations in Kubernetes actually have a limit; I think the limit is around...
D
We
are
currently
exploring
different
ways
to
manage
parameters
so
that
you
don't
have
to
use
artifacts
if
you
don't
want
to,
but
currently
there
is,
that
limit
is
the
core
difference
between
script
and
container
template
that
argo
converts
the
script
to
file
internally
and
then
calls
command
of
that
file.
Yes,
that
is
exactly
the
only
difference
it's
like,
like
I
mentioned
before.
It's
a
convenience
around
a
container
that
will
essentially
save
your
script
into
your
container
and
then
it
will
just
run
it.
D
"Does Argo consider Apache Airflow a competitor?" I feel like this question might be better suited for Alex or our PM Mukulika, but I'll give my quick answer: in my opinion, we consider ourselves a specific use case of Airflow. So if you are working with Kubernetes, then yes, we might be competing with Airflow, but if you're not using Kubernetes then obviously our tool won't help much. But again, Alex or Mukulika might have more to say about this.
A
So, to answer your question, Austin: the answer is no, we don't really consider it a competitor. It would be a competitor if we were solving a very similar use case, but we're a very general engine. We do find that people migrate off Airflow onto Argo Workflows for a number of different reasons, such as simplicity of setup and so forth.
D
Thank you, Alex. And then one last question: are any owner references set by Argo when we create a volume through resource templates? Don't quote me on this, but I actually don't think that Argo creates volumes for you. I think you create your volumes and then you point Argo to them.
D
I see that someone finished the exercise, so in the interest of time I think I'm going to hand it off to Bala so that we can continue the workshop. But feel free to continue the exercise in your own time, and to reach out to me on Slack (@simon) with any feedback or any questions.
E
Yeah, thanks Simon. Hi, I'm Bala. I'm also one of the engineers on the Argo team, working with Alex, Simon, and Derek. You can find me in the Slack; sarabala1979 is my handle. Let me share my screen.
E
Is this big enough, or do I need to turn on presentation mode? Because I like to switch between screens. This is good? Okay, thanks. Can I start, or shall I wait for some of you to finish the previous exercise?
E
Okay, I will start. So Simon was explaining the different types of templates, and input parameters and output parameters; those are for transferring simple data between one step and another step. But in some of the advanced use cases, some steps are generating a big file.
E
So in Argo you can define your artifact repository in three ways. One is at the controller level, in the workflow-controller ConfigMap. When you configure it at the controller level, then for all the workflows that run on that particular controller, this artifact repository will be substituted if the workflow doesn't have an inline artifact repository or an artifact repository ref.
E
On the right side you can see a sample artifact repository configuration. This will differ for each repository: GCS will be a little bit different, OSS will be a little bit different; I'm giving S3 as the example. You need to give the bucket name, the endpoint (like s3.amazonaws.com), and whether it is HTTPS or HTTP.
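The sample configuration on the slide isn't in the transcript; a controller-level S3 repository sketch, with placeholder bucket and secret names, might look like:

```yaml
# Controller-level S3 artifact repository (sketch; bucket, endpoint and
# secret names are placeholders, not from the talk).
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    s3:
      bucket: my-bucket
      endpoint: s3.amazonaws.com
      insecure: false           # true for plain HTTP, e.g. a local MinIO
      accessKeySecret:
        name: my-s3-credentials
        key: accessKey
      secretKeySecret:
        name: my-s3-credentials
        key: secretKey
```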
E
Let me go to the next one. Here you see how you can define an input artifact. As I said, this is also similar to a parameter: at the template level, instead of an input parameter, you can declare an input artifact, with the artifact name and the path where you want that file downloaded for your main container. You can give the path, and you can give the artifact repository configuration, like S3 with the endpoint and the bucket you want to use.
E
This is another way: you can configure the artifact repository at the workflow level. You can have a ConfigMap which has multiple repositories that you can refer to in your workflow, with an artifact repository ref and a key, and then Argo will automatically use that repository from the ConfigMap.
E
Okay, let me show the hands-on, then you guys can also try it with me. I have one example, which is here; I'll put this link in the chat as well. I don't know whether our prereq included installing MinIO.
E
Yeah, thanks. Let's go to the next one: passing artifacts. So far we saw uploading an artifact (the output artifact workflow) and downloading an artifact (the input artifact workflow). Now, this is another use case.
E
In the second step I am just passing an argument, which is a live value, as Simon said. For this step I am passing, as an argument, an artifact whose name is "message", and from the previous step you can refer to it via the step name and its output artifact. The artifact name here is "hello-art", which is the one I gave, and it will automatically be passed to the second step as an input artifact.
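As a sketch of the step being described (the names hello-art and message follow the talk; the step and template names are assumptions):

```yaml
# Passing an artifact from one step to the next.
- name: main
  steps:
    - - name: generate
        template: whalesay          # writes a file, exposed as output artifact hello-art
    - - name: consume
        template: print-message
        arguments:
          artifacts:
            - name: message         # the consuming template's input artifact
              from: "{{steps.generate.outputs.artifacts.hello-art}}"
```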
E
Okay, I can see a few thumbs-ups. The next concept is the exit handler. An exit handler is very similar to a finalizer in Java. You can define an exit handler in two ways. One is at the workflow level: whenever the workflow is done, you want to execute a particular exit handler, to clean up or notify or anything. The other level is that you can define an exit handler at the step level and DAG level: whenever the particular step is finished, you want to do some cleanup or anything.
E
If you look, this workflow has two templates: the entry-point template and an exit template, "say-exit". The entry point executes normally in the workflow, but I configured an onExit handler pointing at the exit template, so when the main work is done, that exit handler automatically executes and finishes off the workflow. I'll paste this link here; you can just submit it and give it a go.
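The linked example isn't in the transcript; a workflow-level exit handler sketch, with assumed template names and images, might look like:

```yaml
# onExit runs the named template after the entrypoint finishes,
# whether it succeeded or failed.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: exit-handler-
spec:
  entrypoint: main
  onExit: say-exit
  templates:
    - name: main
      container:
        image: docker/whalesay:latest
        command: [cowsay, "hello"]
    - name: say-exit            # notify or clean up here
      container:
        image: docker/whalesay:latest
        command: [cowsay, "workflow done"]
```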
E
Done. So mainly you can use the exit template to notify anybody when your workflow is done, or your step is done, or your DAG is done, or to clean up, like cleaning up all the files. You can see here: this is the main step, and this is the exit handler.
E
Template
can
out
output
artifact,
so
in
the
meanwhile
you
are
trying.
I
will
read
the
sum
of
the
questions:
can
out
output
artifact
be
saved
on
them
with
the
k8
executor.
E
Yes. Okay, going ahead: here you can define a sequence of steps or a DAG, and you can set continueOn, so that if a step fails, it will continue on with the rest of the steps.
E
So this helps the user: all the templates, DAGs, and steps which they are frequently using, they can define as a WorkflowTemplate and store in the cluster. The only thing they need to pass is the arguments, to execute it with different parameters or different artifacts.
E
There are two ways you can refer to a WorkflowTemplate in your workflow. One is that you can refer to just particular templates from the WorkflowTemplate in your workflow; the second is that you can convert your whole WorkflowTemplate into a workflow and submit it.
E
This is very similar to a Workflow, but the kind is slightly different: WorkflowTemplate. And the name is a fixed name; in a Workflow you can see that we always use generateName, so whenever you are submitting, it will generate a new name. You can create your entry point and you can define all your different templates here.
E
As I said, you can refer to it in two ways: you can refer to the entire template as a workflow, and then it will take the entry point in the WorkflowTemplate and execute everything; or you can refer to just a particular template defined in the WorkflowTemplate, using a templateRef in your workflow.
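A sketch of the second style, referring to one template from a WorkflowTemplate via templateRef (the names are placeholders):

```yaml
# A Workflow that calls a template stored in a shared WorkflowTemplate.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: uses-template-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: call-shared
            templateRef:
              name: my-workflow-template   # the WorkflowTemplate object
              template: whalesay           # a template defined inside it
```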
E
If you want a workflow definition sitting on the cluster, so it can be accessed from all namespaces, that use case is solved by the ClusterWorkflowTemplate, which has the same functionality as a WorkflowTemplate; just the scope is changed. The kind is ClusterWorkflowTemplate, and you need to be a cluster admin to create one.
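A sketch of the same templateRef against a cluster-scoped template; the assumed difference from the namespaced case is the clusterScope flag:

```yaml
# Calling a template from a ClusterWorkflowTemplate (names are placeholders).
- name: main
  steps:
    - - name: call-cluster-shared
        templateRef:
          name: my-cluster-workflow-template
          template: whalesay
          clusterScope: true     # look up a ClusterWorkflowTemplate, not a namespaced one
```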
E
Okay, I think I'm done. I will hand over to Simon for the cron workflows. Awesome. Thank you, guys.
D
I'll take no answer as a yes, so let me share my screen. Awesome. So this is just going to be a quick note on cron workflows.
D
So in Argo you are able to schedule the workflows that you've defined to run on a cron schedule, and it's actually very easy to do, so I'll just walk through how to do so right now.
D
So, yep, cron workflows are just normal workflows that run on a schedule. Converting one is pretty easy. Here we have a sample workflow; obviously this one is pretty small, but this works the same for any arbitrary workflow. The first step in converting a workflow to a cron workflow is just to change the kind: name it a CronWorkflow.
D
Then you need to take the spec of your workflow and just put it under a workflowSpec field, like so. The CronWorkflow's workflowSpec field is exactly the same as the Workflow's spec field, the same types and everything, so it's pretty much guaranteed to work. Then you just need to specify which cron workflow options you want, such as your crontab schedule, the time zone that you want it to run in, and the concurrency policy that you'd like it to run with.
D
These settings are modeled exactly after a CronJob from Kubernetes, the standard CronJob, so things such as successfulJobsHistoryLimit and failedJobsHistoryLimit work the same way, and concurrencyPolicy works the same way too. The only field that is not actually part of the CronJob, somewhat infamously, is the timezone field, which we do actually support ourselves.
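Putting those steps together, a converted CronWorkflow looks roughly like this (the schedule, time zone, and container are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow                 # changed from kind: Workflow
metadata:
  name: hello-cron
spec:
  schedule: "0 * * * *"            # standard crontab syntax
  timezone: "America/New_York"     # not part of a Kubernetes CronJob; Argo-specific
  concurrencyPolicy: Replace       # same semantics as a CronJob
  successfulJobsHistoryLimit: 4
  failedJobsHistoryLimit: 2
  workflowSpec:                    # the original Workflow spec, moved under this field
    entrypoint: whalesay
    templates:
      - name: whalesay
        container:
          image: docker/whalesay
          command: [cowsay, "hello"]
```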
D
Here you can go to our docs, to our cron workflows doc, and everything is pretty well listed out there for you. And like Bala mentioned, you can create and manage your cron workflows in a way very similar to how you manage your workflow templates. There is the argo cron command: argo cron create, argo cron list, argo cron delete, and so on. Another thing I'd like to hint at:
D
Maybe you want to define your cron workflows and have them scheduled with Git, with GitOps, using a tool like Argo CD. I know some of our users do that: they define their cron workflows in a Git repo and have Argo CD pull the objects from the repo and then apply them to the cluster, so that they're always up to date. Just some interesting synergy between Argo and Argo CD that you might want to consider, and that I know I like. And yeah.
D
I think this is pretty much the end of my short excursion into cron workflows. Does anyone have any questions?
D
Yes: do we support DST and public holidays? So we actually do not support public holidays. If by supporting public holidays you mean that we keep a database of all the public holidays around the world and how they change over time, we certainly do not do that. Do we support DST? Well, our time zone support is exactly the same as the Go client's time zone support.
A
It's probably worthwhile noting that some holidays vary massively from country to country, and actually even from business to business. For example, in the United States there are a number of public holidays that are observed or unobserved, and depending on the business, people may or may not be in the office for that. So I think it is on our radar, but it's definitely a business decision rather than a decision we can make on your behalf.
D
All right, so with that: can the user provide holidays? We do not have any interface for you to tell Argo which dates you want it to consider holidays. We are aware that Argo Events has similar functionality, and we are considering maybe adding something like that, but as of right now there isn't anything on the Argo Workflows side.
D
Does a cron workflow retry if the job cannot be scheduled? That's a good question. What cron workflows will do is just create a Workflow object for you, so actually Argo Workflows itself, regardless of cron workflows, will retry. Let me be more clear: the only part of a cron workflow, or by extension a workflow, that actually needs to be scheduled is pod creation.
D
So the Workflow object can be created, and we can try to schedule the pods, but if the pods cannot be scheduled, maybe for a lack of resources or some other reason, there are knobs that you can turn in Argo to make sure that it keeps retrying. So the short answer is yes, but you need to have it set up for that.
D
And another note regarding holidays: as with anything in Argo Workflows, you're always free to create your own containers and images.
D
Those containers could, for example, keep a database of the dates that you want to consider as holidays, and you can always have that be the very first step. For example, if the current date happens to be a holiday, just fail that step by returning exit 1, and then the entire workflow will not run. So there are always ways that you can sort of engineer a feature in, by running it through your own containers.
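A hypothetical sketch of that holiday-gate idea: the first step exits 1 on a hard-coded list of dates, so the rest of the workflow never runs. The dates, names, and image are all illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: skip-holidays
spec:
  schedule: "0 9 * * *"
  workflowSpec:
    entrypoint: main
    templates:
      - name: main
        steps:
          - - name: holiday-gate        # fails (exit 1) on a holiday, stopping the workflow
              template: check-holiday
          - - name: real-work           # only runs if the gate step succeeded
              template: work
      - name: check-holiday
        script:
          image: alpine:3.12
          command: [sh]
          source: |
            today=$(date +%m-%d)
            for h in 01-01 07-04 12-25; do   # your own "database" of holidays
              [ "$today" = "$h" ] && exit 1
            done
            exit 0
      - name: work
        container:
          image: alpine:3.12
          command: [echo, "doing the real work"]
```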
A
I think you can run non-root, but it requires work to do it. I haven't done it myself. Somebody earlier shared a page that tells you all the pros and cons of the different executors, and I'll try to see if I can find it and re-share it. Let me see if I can find it and I'll pop it into the chat.
A
Thank you very much, Thomas. We have a few more questions in here about creating resources using the resource template. I actually didn't know the resource template was so popular.
A
So I'm just going to... sorry about that, you're making a bit of a creaky noise. So there's information on the different executors on that page. The most popular executors are obviously Docker and PNS, and we use PNS ourselves, but the Docker one requires additional permissions and privileges to execute.
A
But, correct me if I'm wrong: when a resource template runs, it's basically very similar to a normal workflow pod, except it only contains a single container, and that container's job is to create the resource.
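A sketch of a resource template, here creating a ConfigMap purely as an illustration (the names are made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resource-demo-
spec:
  entrypoint: create-cm
  templates:
    - name: create-cm
      resource:
        action: create          # other actions include apply, delete, patch
        manifest: |             # the embedded manifest the container submits for you
          apiVersion: v1
          kind: ConfigMap
          metadata:
            generateName: demo-cm-
          data:
            key: value
```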
A
Okay, so next question: are there any plans to support a non-root user for the Docker executor? So we do want to make them more secure.
A
However, with the Docker executor, the way that we get artifacts from the main container into the sidecar container, so we can transport those to S3 on your behalf, is by mounting the Docker socket. So the wait container, that sidecar container, needs to be able to do that. That, I don't think, is necessary if you mount a volume and have your artifacts on the volume, and I think it can probably run with fewer permissions there.
A
But I haven't really dug into that very much, and maybe that'll be an epic for our next set of planning sessions.
F
Still on the resource template: whatever manifest we mention, that should be getting stored somewhere in your executor container, right? In some temp file, and then you use this kubectl create command. But if you are using the k8s API executor, then basically it doesn't allow you to execute the Argo workflow, because there is no volume.
E
I'll privately update this: I will take a look at the code and I will update you.
A
Can the user add retries to steps, so that if a step fails it retries a given number of times before giving up? I think the answer to that is yes, but Bala, can you confirm? I think you were looking at this recently.
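For context, retries per template are configured with a retryStrategy; a minimal sketch (the limit and container are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-demo-
spec:
  entrypoint: flaky
  templates:
    - name: flaky
      retryStrategy:
        limit: 3               # retry up to 3 times after the first attempt, then give up
      container:
        image: alpine:3.12
        command: [sh, -c, "exit 1"]   # always fails, just to show the retries
```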
A
Okay, okay. I think we now need to draw this to a close. So thank you very much, everybody, for coming along here. If you have any more questions, then do come and ask us in the Slack channel that we have, or you can obviously raise any issues or enhancements in GitHub as well. Okay, and we finished a bit ahead of schedule, so thank you very much everybody, and have a lovely day.