From YouTube: Argo Workflows and Events Community Meeting 16 Sep 2020
Description
03:00 Argo Events 1.0
19:40 Argo Workflows Roadmap
29:00 Building Kubernetes using Kubernetes
41:40 Couler
A: Thanks, everybody, for coming today. This is the, what, ninth community meeting of 2020. We've got a pretty interesting schedule today, and I'm quite excited about some of the stuff we'll be talking about. Those of you who've been to the meeting before know to just add yourself to the document so we know who comes to the meetings; if not, please add yourself and just say where you're from. We're going to be talking about Argo Events and Argo Workflows in this meeting.
A: We're going to be talking a little about Argo Events going GA, going 1.0, and Derek and Vaibhav will be talking about that first. We'll also be looking at the Argo Workflows roadmap; I'll show you what's coming up in the Argo Workflows universe. And then we've got two presentations,
A: or rather demonstrations: firstly from Thomas, who'll be showing us how to build Kubernetes using Kubernetes, and then Terry will be showing us Couler, which is an interface for constructing and managing workflows. Then we have the opportunity for any other business and any questions you want to ask at the end of the workshop. Now, if you do want to ask questions during the workshop, you've got a couple of options: you can ask out loud.
A: Finally, we will be able to answer any questions on Slack afterwards, if they're particularly long or involved. We are recording this, so we'll be sharing the recording on YouTube later on, if you want to share it or come back and revisit any particular aspects of it. Okay, so our first item is Argo Events, which will be presented by Vaibhav and Derek. Vaibhav is one of the original, if not the original, code contributors for Argo Events, and Derek is a key contributor as well.
B: I guess, yes. All right, hello everybody. My name is Derek; I'm a software engineer at Intuit and I work on the Argo Events project. Today we're going to introduce the new things in the new Argo Events.
B: Some of you might still remember that back in May I gave a presentation about the proposals to enhance Argo Events. Today I'm excited to announce that all of those proposals have been implemented, and we have successfully pushed Argo Events to version 1.0. Let's see what's new in the new Argo Events.
B: We have two parts today. In the first part I'm going to introduce the architecture changes in Argo Events; the main purpose of the changes is to make the system more reliable, easier to use, and more secure. Then I will hand over to Vaibhav to introduce the feature enhancements in the new version.
B: One of the biggest things we introduced in Argo Events is a new concept named EventBus. The EventBus is a namespaced CRD. It represents a pub/sub service sitting between the event producer (the component that detects events, which is the event source today) and the event consumer in the system, which is the sensor. Right now it's backed by NATS Streaming. The simple example I give here creates an EventBus.
B: You give it the name "default" and then the keyword "native", and that means we're going to create a native NATS Streaming service for you in your namespace. That NATS Streaming service will be used as the message transmission center in your system: all the events and messages detected by the event source will be sent to this event bus, and the sensor will listen to the bus, read the messages and events, and trigger downstream actions.
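To make that concrete, here is a minimal sketch of the EventBus manifest Derek describes, with field names following the Argo Events 1.0 examples:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default      # sensors look for the bus named "default" by default
spec:
  nats:
    native: {}       # ask the controller to provision a NATS Streaming service in this namespace
```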
B: So that's one of the biggest changes in the new version: the EventBus. We are also open to supporting other technologies, like Kafka, in the future; right now it's backed by NATS Streaming. Another big change in the new Argo Events is that we simplified the specs to make the whole service easier to use.
B
If
you
want
to
migrate
to
the
new
version,
for
example,
we
or
1.0
what
you
need
to
do
is
to
move
the.
If
you
have
anything
under
this
and
get
away
spec
under
an
inspector
template
and
just
move
it
to
event
source
and
same
thing
for
the
service.
That's
I
think,
that's
a
straightforward
for
the
migration.
It's
not
a
big
deal.
You
know
quite
easy
to
do
that,
and
so
we
can
do
a
comparison
about
the
new
stack
and
the
old
spec
on
the
left
side
is.
Is
it
that's
an
example?
B
We
used
to
do
to
create
a
calendar
type
event
source.
What
you
need
to
do
is,
like
you,
create
event,
source
name
with
type
calendar.
You
give
the
spec
there
and
you
also
need
to
give
a
getaway
object
on
the
bottom,
which
I
marked
as
red
and
that's
the
old
spec
right
now
in
the
new
york
events,
you
only
need
to
give
the
event
source
back
like
on
the
right.
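As a sketch of the right-hand side of that comparison: in 1.0 the calendar example reduces to a single EventSource object, with no separate gateway object at the bottom (the interval value here is illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: calendar
spec:
  calendar:
    example:
      interval: 10s   # emit an event every 10 seconds
```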
B
We
also
simplified
something
on
the
sensors
back
so
left
side
as
osb
right
right
side
is
new
and
then
there's
there's.
One
thing
that
could
be
changed
and
in
the
sense
aspect
is,
is
a
dependency
because
the
getaway
spec
is
going
away
and
then
there's
no
gateway
name
anymore.
So
for
this,
even
for
for
dependency,
you
have
to
change
the
gateway
name
to
event
source
name.
I
think
that's
them.
B
That's
the
only
required
change
for
sensor
spec
to
migrate
from
the
old
version
to
new
version,
and
also
originally,
I
need
to
give
a
subscription
like
like
is
shown
on
the
left
and
then
in
new
version.
There's
no
need
to
do
that
because
we're
using
the
even
bus,
the
sensor
will
body
faucet,
look
for
the
default
name,
even
bus,
and
there's
no
need
for
you
to
do
that
kind
of
thing.
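The matching sensor-side change is small; a sketch of the dependency block, with the retired field left as a comment:

```yaml
dependencies:
  - name: example-dep
    # gatewayName: calendar      # old, pre-1.0 field
    eventSourceName: calendar    # 1.0: point at the EventSource instead
    eventName: example
```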
B: The other important change in the new architecture is that we made the whole system more secure. We re-implemented all the controllers and services in the system. Originally, you needed to specify a privileged service account to create an event source, gateway, or sensor, regardless of which event source type you used or what action you wanted to trigger in the sensor. Right now we have more than 20 types of event source.
B
There's
only
one
human
sources
needed
to
specify
privileged
service
account,
which
is
even
source
to
watch
the
kubernetes
resource
change
and-
and
that's
for
sure
you
need
to
have
a
private
service
account
to
do
that.
Otherwise,
you
you're
not
able
to
you,
know,
watch
those
kind
of
changes
and
for
sensors.
B
You
only
that's
similar
here.
If
you,
if
you
want
to
triggers
some
action
like
to
create
a
kubernetes
type
of
in
source,
a
kubernetes
human
source,
create
a
source.
Sorry
and
or
to
trigger
some
argument
workflows
and
you
you
need
to
give
a
privileged
self
scan
and
you
only
need
to
give
the
you
know,
give
the
rbx
settings
which
relate
to
the,
for
example.
You
want
to
create
an
arc
workflow.
B
We
have
three
crds,
we
have
three
controllers
human
source,
even
bus
and
sensor,
and
the
eventbus
controller
is
used
to
manage
the
event
bus
object
and
then
the
event
event
source
department
or
write.
The
detected
events
to
the
even
bus
and
the
sensor
read
the
message
there
and
you
know
it's
quite
it's
quite
straightforward.
B
C: All right, I hope everybody can see this. I'm just going to go over some of the features and enhancements that we introduced in 1.0.
C
So
one
of
the
I
think
the
main
highlights
of
1.0
is
that
we
simplified
the
use
of
circuits.
C
So
you
may
remember
from
the
like
the
previous
versions
of
fargo
events
we
introduced
like
circus
and
switch
to
better
manage
the
dependencies
like
like
before
1.0.
We
used
to
have
this
concept
of
dependency
groups
where
you
basically
group
dependencies,
and
then
you
apply
circuits
and
in
the
triggers
you
have
switch
based
on
the
circuit
and
switch
combination.
C
You
get
to
decide
which
particular
triggers
are
executed
when
certain
dependencies
are
met,
and
that
was
I
mean
it.
You
had
to
like
configure
both
circuits
and
switch
to
execute
these
triggers
and
it
was
a
bit
complicated
and
it
was
sort
of
like
also
confusing
to
wrap
your
head
around.
It's
like.
What's
going
on,
you
have
to
like
define
groups
circuits
and
switch
to
basically
manage
your
triggers,
so
we
sort
of
like
simplified
that
and
then
we
introduced
this
new
field
called
conditions
in
the
under
the
trigger
templates.
C
So
what
this
condition
basically
does
it
simplifies
the
circuit
and
switch
into
one
field,
so
you
don't
have
to
define
circuit
at
top
level
and
then
apply
switch
at
a
trigger
level.
You
can
just
define
condition
that
takes
a
boolean
logic
itself,
like
a
boolean
circuit,
so
over
here,
for
example,
I
can
just
say
conditions
is
depth
zero.
Two.
That
means
that
whenever
I
get
a
event
for
dependency
named
dev,
zero,
two
ergo
event
should
basically
trigger
this
particular
template.
C
Just
make
http
call
that's
that
that's
this
trigger,
so
you
can.
I
mean
you
can
basically
make
this
more
complex.
You
can
say
that
another
there
can
be
another
templates
and
the
conditions
can
condition
can
be
like
dev01
and
under
dev-02.
So
basically
this
particular
our
trigger
is
waiting
for
both
dev
zero
one
and
dev
zero.
Two,
this
both
of
these
dependencies
to
happen
and
then
then
only
it's
gonna
get
executed.
So
you
can
see
like
we
are.
We
basically
simplified
the
circuit
switch
into
just
one
simple
conditions.
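Here is a sketch of the new conditions field on a trigger template, assuming two dependencies named dep-01 and dep-02 and a hypothetical HTTP endpoint:

```yaml
triggers:
  - template:
      conditions: "dep-01 && dep-02"   # replaces the old circuit + switch pair
      name: http-trigger
      http:
        url: http://example.com/hook   # hypothetical endpoint
        method: POST
```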
C
So
that's
that's
one
of
the
highlights
of
1.0
going
to
the
next
slide,
and
then
we
introduce
a
bunch
of
other
features
as
well,
especially
introduced
pulsar
event
source
as
so.
Basically,
this
is
a
new
event
source.
That's
added
to
the
existing
list
of
event
sources.
Also,
we
sort
of
like
because
we
decommission
gateway.
We
also
got
rid
of
these
client
server
architecture
within
gateway.
C
So
now
the
event
source
itself,
that's
managed
by
winston
con
event,
source
controller,
so
the
event
source
part
when
it's
when
it
gets
fun,
it's
just
one
binary.
So
your
all
the
event
source
are
just
part
of
that
particular
binary.
So
it's
also
very
easy
for
a
new
developer
to
write
an
event
source
and
just
plug
it
into
existing
code
base.
So
that's
just
another,
I
would
say
an
extra
enhancement
to
the
existing
sort
of
like
features
of
event
sources,
and
then
we
enhanced
filters,
especially
the
data
filter.
C
We
added
operations,
like
you
can
now
basically
perform
operation
like
is
equal
to
less
time
greater
than
not
equal
to
on
the
data
filters,
so
it
just
provides
you.
It
just
gives
you
more
power
to
filter
your
event,
payload
based
on
the
fields
like
you
can
you
can
apply
these
operations
in
the
fields
in
the
even
payload
and
then
so.
That's
that's
what
the
enhanced
filters
are
all
about.
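A sketch of such a data filter using the comparator operation; the path and value are invented for illustration:

```yaml
dependencies:
  - name: dep-01
    eventSourceName: webhook
    eventName: example
    filters:
      data:
        - path: body.value      # field inside the event payload
          type: number
          comparator: ">"       # also =, !=, <, <=, >=
          value:
            - "50.0"
```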
C: What I mean by static metadata is this: let's say you configure an event source and provide some static key-value pairs in it. Whenever an actual event occurs, those key-value pairs are injected into the event payload. So you get the actual event, but with it you get this extra information that was configured at event source creation time. There are certain cases where you just want that extra information
attached wherever the event happens, and I think this feature helps in those situations.
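A sketch of static metadata on a webhook event source. I am assuming the metadata map sits on the individual event source configuration, as in the Argo Events examples; the keys and values here are invented:

```yaml
spec:
  webhook:
    example:
      port: "12000"
      endpoint: /example
      method: POST
      metadata:            # static key/values injected into each event payload
        region: us-west-2
        team: platform
```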
C: And then there have been times when people in the community asked for authentication for webhook-type event sources. We usually advocate that users should take care of authentication themselves, maybe with an API gateway or a specific enterprise solution, whatever it may be. But there are certain cases when you're just researching something and you need simple authentication for your webhook event sources.
C
So
this
is.
This
is
just
a
simple
sort
of
like
auth
feature
where
what
we
do
is
we
store
a
secret
that
basically
contains
your
auth
key
and
whenever
you
make
a
request
to
or
whenever
any
other
external
party
or
whatever
event
source
sends
a
particular
payload
or
to
the
webwork
event
source.
We
just
make
that
simple
check,
whether
the
payload
or
the
request
has
those
particular
headers,
and
if
so
the
request
goes
through.
If
not,
you
get
error.
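A sketch of that auth feature on a webhook event source. The authSecret field name is my reading of the 1.0 webhook spec, so treat it as an assumption; the secret name and key are placeholders:

```yaml
spec:
  webhook:
    example:
      port: "12000"
      endpoint: /example
      method: POST
      authSecret:          # requests must present the token stored in this secret
        name: webhook-token
        key: token
```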
C: It's just a simple feature, but I think it's very useful for testing out event sources and making sure they work. We also introduced health check endpoints for webhook event sources, so that you can now query those endpoints,
C
Health
checkpoints
basically
to
see
if
the
event
source
pods
are
up
or
not
also
before
1.0
when
we
had
gateway
the
this
was
impossible
where
you
can
where,
where
you
get
to
club
like
multiple
different,
like
types
of
gateway
into
one
gateway.
So
before
1.0,
when
we
had
gateway
a
gateway,
I
mean
the
strategy
was
when
you
deploy
gateway,
it
could
be
of
just
one
type.
C
For
example,
if
you
deploy
a
weber
gateway
it,
it
can't
understand,
s3
event
source,
so
webhook
gateway
can
only
understand
webhook
given
source,
so
that
was
the
case
before
1.0,
but
with
1.0.
What
we
did
was
sort
of
like
switched
from
that
particular
sort
of
like
strategy,
and
then
we
introduced
this
new
event
source
deployment
strategy,
where
not
only
you
can
basically
run
event
source
of
that
particular
type,
be
it
to
webhook
s3
sns
sqs.
C: Those are the event source deployment strategies, and I think they have a lot of value: if you have a use case where you just want to run a bunch of event sources, and you don't want to deploy tens of pods, you can use this deployment strategy to club all those event sources into one pod and deploy just that in the cluster.
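A sketch of that clubbing: one EventSource object, and therefore one pod, carrying configurations of more than one type. Whether arbitrary types can be mixed in a single object may depend on the exact release, so treat this as illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: combined
spec:
  webhook:                 # an HTTP endpoint...
    payload:
      port: "12000"
      endpoint: /payload
      method: POST
  calendar:                # ...and a schedule, served by the same pod
    tick:
      interval: 30s
```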
C: And then, as Derek mentioned, we made the entire platform and framework secure by removing unnecessary service account usage in both sensors and event sources. I'm just highlighting some of the features introduced in 1.0; there are other features as well, and there have been a lot of bug fixes too, but these are some of the more important highlights. I think that covers it, and we can open up for Q&A. Alex, back to you.
A: We have a couple of comments in the chat. From S. Lucas: thank you for being super responsive in the Argo Events Slack, followed by a hand-clapping emoji. And from Sean: health check endpoints, yay, thank you for that. So, some appreciation out there.
A: Let me get the right window here. So, thank you very much, Derek and Vaibhav, for giving us an overview of Argo Events. It's really good to see such great progress, especially the features around security and usability.
A
We
know
we
know
that
our
users
will
really
appreciate
that
we've
just
been
in
the
process
of
kind
of
formalizing,
a
more
longer-term
roadmap
for
our
go
workflows,
and
I'm
going
to
give
you
guys
a
bit
of
a
preview
of
the
kind
of
things
we're
planning
on
doing
in
the
future,
with
maybe
a
bit
of
an
explanation
about
why
that
is.
A
You
can
find
the
roadmap
on
our
documentation
website
and
you
can
just
search
for
that
in
there
as
well,
and
the
first
item
was
kind
of
improvement
in
the
the
sdks.
If
you've
been
involved
in
the
lg
sdk
channel
you'll
see
that
we've
actually
started
releasing
a
newly
updated
version
of
the
java
sdk
for
workflows,
there's
also
a
java
sdk
for
argo
events.
That's
just
newly
released
and
also
the
python
sdk
is
being
updated
as
we
speak.
A
You
probably
don't
know
this,
but
there
are
about
five
different
ways
to
trigger
a
workflow
in
the
system,
and
we
want
to
continue
to
provide
more
support
for
that.
So
recently,
recently,
we've
added
the
feature
to
trigger
one
workflow
from
another,
as
well
as
the
ability
trigger
workflow
from
webhook,
such
as
github
or
gitlab.
A
Scroll
down
to
controller
enhancements,
so
we
have
a
group
of
features
coming
under
a
controller
enhancements.
Memorization
is
a
new
feature
whose
goal
is
to
make
it
easier
to
run.
A: It basically allows you to save the results of the steps of a particular workflow and then reuse those results, so workflows that share similar steps don't repeat the work. We've also just recently added support for semaphores, and we're in the process of adding support for mutexes, which allow you to lock one or more workflows accessing a shared resource. So if you have a particular database, you can stop workflows from accessing it concurrently.
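To make those two roadmap items concrete, here is a hedged sketch of how memoization and semaphore-based synchronization surfaced in Argo Workflows around v2.10 and v2.11; the ConfigMap names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: memoized-
spec:
  entrypoint: say
  arguments:
    parameters:
      - name: message
        value: hello
  synchronization:              # limit concurrent workflows touching a shared resource
    semaphore:
      configMapKeyRef:
        name: my-config         # ConfigMap entry holding the allowed concurrency
        key: workflow
  templates:
    - name: say
      inputs:
        parameters:
          - name: message
      memoize:                  # cache the step result, keyed by its input
        key: "{{inputs.parameters.message}}"
        cache:
          configMap:
            name: say-cache
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["{{inputs.parameters.message}}"]
```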
A: First, we're hoping to do some enhancements around artifact management, which is obviously a key feature in Argo Workflows: improved performance, improved support for artifact repository references, and things like automatically creating buckets for artifacts. This is actually quite a large piece of work because of the number of different places you can store artifacts.
A
We
are
going
to
introduce
some
metrics
and
reporting
features,
and
this
is
not
really
something
we
we
have
today
at
all
and
so
the
ability
to
predict
how
long
a
workflow
will
take
or
knows
with
the
workflow
that
will
be
coming
in
the
future,
as
well
as
the
ability
to
look
at
historical
execution
of
workflows
to
see
how
long
they
took
in
the
past
and
how
much
resources
that
they
were
consuming.
A
We're
gonna
finish
out
the
our
back
features.
That's
the
number
one!
Actually,
it's
not
now.
This
is
actually
incorrect.
It's
actually
the
number
two
community
voted
feature
number
one.
Now
being
the
training
session.
Allow
you
to
support
users
with
different
access.
Permissions
for
the
user,
interface,
scalability,
reliability
and
performance
is
a
key
feature.
A
We
did
a
lot
of
work
on
this
earlier
this
year
to
improve
the
support
for
kind
of
running
large
numbers
of
workflows
and
kind
of
very
large
workflows,
and
so
we've
been
doing
some
things
like
failure,
mode
testing
and
stress
testing
and
we've
come
up
with
a
number
of
features
that
we
want
to
implement
around
improving
our
ability
to
allow
large
workflow
graphs,
which
will
be
coming
in
version
212,
and
we're
also
going
to
be
introducing
some
further
performance
improvements
in
so
sorry,
I
correct
myself
that
graph
light
is
coming
in
211,
not
212..
A: There is also how to deal with very large numbers of concurrent workflows; these will all be coming up in the future. We're also looking to improve event integration.
A
This
hasn't
been
fully
defined
yet,
and
there
are
quite
a
few
potential
things
we
might
be
doing
on
on
the
cards
around.
You
know,
user
interface,
improvements
and,
more
specifically,
making
just
really
easier
to
understand.
Why
did
my
workflow
start?
You
know
who
kicked
off
my
workflow
and
why
did
why?
Did
that
happen?
A
So
the
kind
of
things
that
people
are
using
for
workflows
today,
looking
to
kind
of
make
sure
we
have
the
specific
features
that
are
needed
for
that.
So
obviously
people
are
using
this
argo
workflows,
things
like
etl
data,
pipeline
processing,
batch
processing
and
machine
learning,
ai
ops
as
well,
and
also
ci
and
cd,
and
we're
actually
looking
for
more
people
to
come
up
with
the
designs
for
that
more
people
to
come
up
with
contributions
around
what
kind
of
features
they
need
to
be
able
to
do
that.
D: I can see a question from Leon in the chat. He's asking: are there plans to create an Argo Events UI? He says it would be nice to see which event sources have triggered without having to do kubectl logs.
A
So
that's
that's.
Definitely
something
we've
been
discussing
for
some
time
now.
One
of
the
challenges
around
that
is
that
it's
actually
quite
a
large
piece
of
work
to
do
that,
and
so
we're
still
we're
still
potential
future
improvement.
Argo
workflows
was
actually
to
make
the
user
interface
a
separate
application.
D: Uh-huh. Another question, from David: are there any plans to support pre-warmed pods in the flow, to eliminate slow starts?
A: Yeah, with all these performance improvements you invariably just move the bottleneck to a different place. We're definitely looking at improvements that allow us to run more concurrent workflows and larger workflows; the goal is to be able to run thousands of workflows concurrently, each workflow with thousands of nodes. That's the ultimate goal. But yes, you move the bottleneck around, and sometimes the bottleneck simply moves to, you know, the Kubernetes API.
D: Is there any procedure for showing interest in specific use case enhancements? I can actually answer this one: the best way to show your interest in a specific feature or bug is to upvote the issue on GitHub. We commonly sort by most upvoted to see what the community is interested in.
A: Improvements are typically targeted at a specific milestone, so I would look at the milestone to see whether a change actually lands in it. Not all improvements do: some actually make earlier milestones, but improvements also commonly miss milestones, so you just need to check the milestones in GitHub for that.
A
Okay,
any
more
questions.
It
seems
to
be
all
okay.
Thank
you,
simon,
for
helping
out.
Okay,
let's
go
back
to
the
agenda,
so
next
up
is
thomas
from
he's
from
sap
conquer
and
he's
going
to
be
talking
about
building
kubernetes
using
kubernetes.
Are
you
ready.
E: Can you see my screen? Can you hear me? Perfect. Okay, let's get started; I'm going to hide these floating controls. Can you see the presentation now, or a black screen? Yes, we can. Perfect, okay. Hi, my name is Thomas Valachik, and I'll be talking about building Kubernetes using Kubernetes today, or in other words, using Argo to build Kubernetes.
E
So
I
work
in
a
container
ecosystem
team
at
sap
concord
and
our
team
is
responsible
for
building
and
maintaining
hundreds
of
kubernetes
clusters
for
our
internal
developers,
so
sap
concur,
uses
kubernetes
offering
from
aws,
specifically
the
aws,
eks
and
so
to
build.
Bks
cluster
is
as
simple
as
eks
control
create
cluster.
E
This
is
nice
and
convenient
way
how
to
quickly
bring
the
cluster
up
and
poke
around.
However,
to
build
production-ready
kubernetes
cluster.
You
need
more
than
this,
so
our
team
uses
four
build
stages
to
create
production-ready
kubernetes
cluster.
We
start
with
pre-flight
tests
which
so
we
target
against
the
aws
infrastructure
before
we
start
actual
eks
build.
E: As I said, the list can be long. After the add-ons are installed, and before we hand the clusters over to our clients, our developers, we want to make sure each cluster is validated: we run a few end-to-end tests against the cluster, and we run functional tests against the add-ons to make sure they function as we expect.
E: So, we found Argo, specifically Argo Events and Argo Workflows, and that's what we use. On the next slide I'll show you a build of a production-ready Kubernetes cluster using Argo Workflows. I'm going to skip Argo Events here, but Argo Events plays as important a role in the build process as Argo Workflows, because everything is API-driven. So let's go ahead and jump in.
E: Okay, so here we have one build from this morning, and what you see here are the four build stages. This is a very zoomed-out view, but I'll zoom in a little bit. We start with the pre-flight tests; those run in parallel, so it's very fast.
E: That workflow is much simpler, and we have a few other workflows that I'll mention at the end of the presentation.
E: So, back to the slides, and I'll talk about some of the Argo features we use here. I'll start with Argo Events and the sensor parameters.
E: This is something I really, really like: you can submit parameters from the sensor and replace parts of the workflow template spec. So you effectively create a template of workflow templates, which is a lovely feature that I like very much. As I mentioned earlier, we use workflow templates: all four build stages that you saw live in their own workflow template, and we have a bunch more.
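A hedged sketch of the pattern Thomas describes: a sensor trigger that creates a Workflow from a WorkflowTemplate and overwrites part of its spec with values taken from the event. All resource names and the dataKey path here are hypothetical:

```yaml
triggers:
  - template:
      name: build-cluster            # hypothetical trigger name
      k8s:
        group: argoproj.io
        version: v1alpha1
        resource: workflows
        operation: create
        source:
          resource:
            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              generateName: cluster-build-
            spec:
              workflowTemplateRef:
                name: cluster-build  # the WorkflowTemplate being parameterized
              arguments:
                parameters:
                  - name: cluster-name
                    value: placeholder
        parameters:                  # values from the event overwrite the spec above
          - src:
              dependencyName: build-request
              dataKey: body.clusterName
            dest: spec.arguments.parameters.0.value
```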
E
This
allows
us
to
combine
them
together
or
even
run
them
completely
independently
from
a
cluster
build.
That
means
I
can
run
a
pre-flight
test
completely
independently
from
a
cluster
build.
I
can
install
add-ons
against
targeted
eks
cluster
without
building
a
cluster
from
scratch,
so
this
is
very,
very
powerful
and
everything
is
api.
Driven
thanks
to
arco,
argo
events
from
other
features
that
we
use.
It
is
the
retries
parallelism
dependencies
conditionals
and
many
more
before
my
last
slide.
D: There's a question from, I think, Daniel, if I remember correctly; sorry if the D stands for something else. How do you manage the life cycle of the cluster between creation and deletion, say managing the configuration or the versions of the various add-ons that you have? How do you upgrade the clusters and node groups, etc.?
E
Yes,
this
is
something
that
is
work
in
progress.
A
very
good
question:
yes,
work
in
progress.
This
will
be,
in
my
opinion,
pretty
difficult
workflow
to
create,
but
with
the
conditionals
and
with
all
the
argo
workflows,
features
and
with
argo
events,
I'm
sure
this
is
doable
too.
E: Oh yes, if I remember correctly, it is still 0.15, maybe 0.16.
F: May I ask something? I tried to type, but it's not fast enough. We used Argo in a similar use case, but for orchestrating network functions and network services, and one thing that we came to realize was: it's a workflow, right, so it starts and ends. But natively in Kubernetes you usually use an operator, so you never actually deploy anything.
F: Now the component that you set up in step five is down, but you're in a different place in your workflow, right? So how would you react to that? Would you restart from the topological order, from where you are right now? Would you restart the whole workflow? Something else?
E
Yeah,
we
didn't
have
this
situation,
but
in
case
of
failure,
I'm
planning
on
basically
deleting
the
entire
cluster
or
in
any
stage
where
it
fails
and
creates
new
cluster.
A: Okay, Thomas, thank you very much for that presentation. Will you be able to share the slides with us, so we can put them into the documents?
A
Okay,
so
next
up
we've
got
terry.
Terry
will
be
talking
about
cooler.
You
may
recognize
terry
because
he's
been
an
active,
open
source
contributor
in
not
just
arca,
but
also
in
projects
like
tensorflow
and
kubeflow,
and
he's
going
to
give
us
a
presentation
on
cooler.
G: Thanks, Alex, for the introduction. Can you guys hear me okay? Hi. Let me share my screen.
G: Many solutions exist nowadays for constructing and managing workflows, for example Apache Airflow. Here's an example of creating a DAG using Apache Airflow, here's an example using Kubeflow Pipelines to create a coin-flip workflow, and here's another example from the Argo Python DSL, an Argo community-maintained project.
G
So,
however,
their
programming
experience
varies
and
they
have
different
level
of
abstractions
that
are
often
obscure
and
complex,
for
example,
in
particular
for
data
scientists
or
analyzed,
they
may
not
be
familiar
with
decorators
or
even
worst
case
object,
oriented
programming,
so
things
like
decorators
here
and
classes
and
sub
classes,
even
like
with
syntax,
are
pretty
hard
for
them
to
use.
So
here's
here
comes
cooley.
Cooley
is
a
unified
coolly
aims
to
provide
a
unified
interface
for
constructing
and
managing
workflows
on
different
workflow
engines.
G: We will show some examples later, and there are utility functions for configuring and submitting a workflow, getting its status, and so on. Keep in mind that these APIs are still subject to change, because we are still discussing with the community what the best approach is for supporting multiple workflow engines.
G
You
can
use
python
to
construct
work,
your
workflow,
you
can
define
your
workflow
programmatically
and
that
will
be
translated
to
argo,
our
workforce
yarmulke
specification,
and
we
are
also
working
with
argo
community
to
reuse,
the
aggro,
python
client
for
schema
validation
and
so
on,
and
it's
simple
because
it
provides
the
unified
interface
and
you
can
use
imperative
and
functional
programming
style
to
define
the
workflows
and
it's
extensible
to
support
various
workflow
engines.
This
is
still
a
work
in
progress,
as
we
are
actively
working
with
other
communities
and
organizations.
G: Here's an example using Couler for the coin-flip workflow. You define a small Python function, and then a function that runs it to flip the coin. Based on heads or tails, the conditionals specified here call a particular function and start individual containers, depending on the result of the previous step. You are probably all very familiar with this already.
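For reference, the coin-flip example as it appeared in the Couler README around this time; per the earlier caveat, the API was still subject to change:

```python
import couler.argo as couler
from couler.argo_submitter import ArgoSubmitter


def random_code():
    # script step: flip the coin and print the result
    import random
    result = "heads" if random.randint(0, 1) == 0 else "tails"
    print(result)


def flip_coin():
    return couler.run_script(image="python:alpine3.6", source=random_code)


def heads():
    return couler.run_container(
        image="python:alpine3.6", command=["sh", "-c", 'echo "it was heads"']
    )


def tails():
    return couler.run_container(
        image="python:alpine3.6", command=["sh", "-c", 'echo "it was tails"']
    )


# run the conditional branches based on the flip result
result = flip_coin()
couler.when(couler.equal(result, "heads"), lambda: heads())
couler.when(couler.equal(result, "tails"), lambda: tails())

# translate to Argo Workflows YAML and submit
submitter = ArgoSubmitter()
couler.run(submitter=submitter)
```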
G: The second example constructs a DAG using the APIs. This is one of the options; the other option is to use set_dependencies to set the dependencies up more explicitly. The first one is a list of jobs, and you can also define a diamond-shaped DAG using the same API, as in the sketch below.
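And a sketch of the two DAG-construction options Terry mentions, again following the README of the time: couler.dag taking a list of edges for the diamond shape, and couler.set_dependencies for the explicit form:

```python
import couler.argo as couler


def job(name):
    # a trivial container step used as a DAG node
    couler.run_container(
        image="docker/whalesay:latest",
        command=["cowsay"],
        args=[name],
        step_name=name,
    )


# Option 1: a diamond-shaped DAG expressed as a list of edges.
def diamond():
    couler.dag(
        [
            [lambda: job(name="A")],
            [lambda: job(name="A"), lambda: job(name="B")],  # A -> B
            [lambda: job(name="A"), lambda: job(name="C")],  # A -> C
            [lambda: job(name="B"), lambda: job(name="D")],  # B -> D
            [lambda: job(name="C"), lambda: job(name="D")],  # C -> D
        ]
    )


# Option 2: the same dependencies declared explicitly.
def diamond_explicit():
    couler.set_dependencies(lambda: job(name="A"), dependencies=None)
    couler.set_dependencies(lambda: job(name="B"), dependencies=["A"])
    couler.set_dependencies(lambda: job(name="C"), dependencies=["A"])
    couler.set_dependencies(lambda: job(name="D"), dependencies=["B", "C"])
```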
G
So
the
reusable
steps
that
I
mentioned
earlier-
some
just
want
to
give
some
examples,
for
example,
could
be
flow
operators
for
distributed
machine
learning
jobs
as
well
as
integration
with
third-party
data
sources
and
storage
options,
especially
the
data
sources
and
storage
options
that
we
use
very
heavily
internally
and
we
are
working
actively
with
other
communities
to
make
this
collection
bigger
as
well
just
to
want
to
share
some
the
project
status.
So
we
developed
coolly
and
used.
G
It
used
it
very
heavily
at
ant
group,
with
the
initial
support
for
aggro
workflows
and
it's
open
sourced
at
this
link,
and
next
steps
will
be.
We
are
working
closely
with
the
sdk
maintenance
ago,
sdk
maintainers
for
better
integration
with
the
existing
argo
python
client,
also
collaborate
with
other
open
source
communities
and
organizations
on
additional
backends
and
reusable
steps
and
so
on,
and
there
are
different
ways
to
reach
out.
G
For
example,
we
have
a
dedicated
slack
workspace
if
you
want
to
discuss
more
specifically
about
cooley
and
the
link
can
be
found
in
the
repose
with
me,
and
also
we
can
discuss
on
argo
slack
workspace
as
well,
and
we
do
have
a
twitter
account
which
I
created
yesterday,
because
I
realized
that
the
github
organization
can
include
a
link
to
twitter.
So
I
thought
that
would
be
a
good
opportunity
to
share
important
updates
and
announcements.
F: May I ask a quick question, please? That's extremely cool; I really like it. But I wonder about the motivation, right? You have the Python DSL for Kubeflow Pipelines, but this seems to be at a higher granularity than the pipeline.
G: Yeah, definitely. At least in my current company, as well as at a couple of other companies we are collaborating with, we realized that Kubeflow Pipelines is really complex to use and has very heavy dependencies, and our data science teams do not want to spend a lot of time learning a new DSL with complex Python syntax. So that's our first motivation. The second motivation is about supporting different use cases.
G
We
would
love
to
benchmark
different
workflow
engines
so
by
by
providing
a
unified
interface,
it's
easier
to
migrate,
say
from
apache
airflow
to
other
workflows.
That's
one
of
our
motivations
as
well.
Does
that
answer
your
question.
H
I
had
one
question:
this
is
makalika,
hey
john,
since
you're
migrating
from
apache
airflow
to
qfloor
pipelines
or
argo
workflow.
Will
you
be
able
to
contribute?
Also,
like
airflow,
has
some
specific
tasks
for
connecting
to
different
data
sources
etc,
and
we
are
really
looking
for
some
of
these
use
case
specific
contribution.
G
Yeah,
definitely
we
can
continue
the
discussion
offline,
it
will
depend
on
our
team's
resources
and
so
on
priorities
and
so
on.
A: Okay, thank you very much for that presentation. That's the last presentation for today, so all we really have left, which you may or may not be pleased about, is any other business.
A: I just want to chat about two things very briefly, and then I'll open it to the floor. We're looking to try and move more questions from Slack onto Stack Overflow, because unfortunately we don't necessarily have the bandwidth to answer all the questions that come in from Slack, going into the future.
A couple of people have asked how to contribute. A week ago, I think, Alex ran a session as part of the Thursday meeting, whose name I forget, where he talked about how to contribute to Argo CD. We had actually also done a workshop for Argo Workflows back in April this year; I've just added the links in the comments for people who are interested in finding out more about that.
A: Okay, it sounds like that's a no. Okay, well, thank you all very much for coming today. If you do have any more questions, or you want to talk about anything else, do come and find us in Slack and we can answer them there. Have a nice day.