Description
00:00 Reusable Cloud Workflows for Environmental Building Simulation - Antoine Dao (Pollination)
20:00 Distributed Load Testing Using Argo Workflows - Sumit Nagal (Intuit)
41:00 Argo Workflows v3.0 Demo - Alex Collins (Argo Team)
http://bit.ly/argo-wf-cmty-mtng
B
Excellent. Can you all hear me all right? Yes? On this end I'm assuming a thumbs up is a good sign. Excellent. So hello everyone, and thanks for having us at this Argo community meeting event. I'll essentially be talking about how we at Ladybug Tools have been using Argo within our new product, called Pollination, and specifically how we're using Argo to run reusable cloud workflows for environmental building simulations.
B
I'll first talk you through what Pollination is and how it came about. I'll try to make that brief, because we're really here to talk about Argo. Then I'll talk about Queen Bee, which is our home-cooked workflow schema. After that I'll go over how we've integrated Queen Bee with Argo, and finally, once the talk's done, you can ask us questions or give us comments or feedback on what you've seen. So Pollination, essentially, is a new cloud product built on top of Ladybug Tools for the AEC industry.
B
That's the architecture, engineering and construction industry. Ladybug Tools was born about eight years ago as an open-source collection of Python libraries to facilitate modeling complex buildings. It's a suite of about five different tools which help with climate visualization, energy analysis, daylight analysis, airflow modeling and, increasingly, urban-scale modeling. A lot of users really like Ladybug Tools, and it grew to be quite a popular toolkit within that niche industry because it made specialized environmental simulation expertise accessible.
B
They were developed as plug-ins which can be integrated within existing tools, and they support a lot of simulation engines which you can then mix and match to create relatively complex workflows. So they've been quite successful, and their users really do like them. There's a bunch of comments you can find on the web about people being quite happy about how much money they saved, and all that kind of stuff.
B
Now, over time, the two co-founders, Chris and Mostapha, did some consulting and went around the world to help people use their tools. While they were doing that consulting, they gained a better understanding of how users were using, or misusing, these tools and what the recurring pain points were with trying to run these complex simulations. Out of this they identified four pressing needs. The first was a need for an out-of-the-box cloud computing solution.
B
The second was that these workflows had to be shareable and reusable. Currently, the way of sharing them is to share a specific file, a Grasshopper file, which is tied to a specific bit of software, and the dependencies weren't packaged in. So all that was really difficult, and you had the classic "works on my machine but not on yours" problem. Third, they wanted more tools to be accessible. And finally, all this stuff really had to be accessible from the web.
B
It was very difficult to run all these simulations and then show the results to your boss or your manager or whatever third parties were involved. And so this is how Pollination was born. It's designed as a cloud platform for collaboration which helps all these different parties make well-informed decisions together.
B
So there we go: that's the aspiration of what we're trying to get to. To give you an idea of what Pollination actually looks like: the idea is we want users to be on their desktop, using their CAD (computer-aided design) tools or BIM (building information modeling) tools.
B
They would translate these to an analytical model, which is an open-source JSON format. We then have our recipes, which are the custom workflows we were talking about; we run them locally, visualize the results, and we're all happy. The idea was then to be able to do the same thing in the cloud, using the same system. You'd have your analytical model, you'd have your recipes, and you'd be able to view your model on the web to really debug it and understand what's going on.
B
And then you'd also be able to run your simulations on the cloud using the exact same process and the exact same recipe. These results could then be fed back into your 3D model, as well as other analytics back-ends or software, or whatever you wanted to do afterwards. As you can see, the keystone of this entire process really lies in this recipe object.
B
We took some strong inspiration from the Argo Kubernetes object schema. We've changed things over time to really fit our needs, but it's been quite an interesting one. The Queen Bee schema itself is open source, so feel free to have a look at it on GitHub at the link I've pinged below; there are quite a few docs on it, so hopefully it's relatively usable. Now, what we wanted to achieve with Queen Bee was really making these reusable recipes.
B
As I said: first, these recipes had to be robust, and we did this by having a strong input/output system. We have a type system for inputs and outputs, and we abstracted the concept of artifacts that exists in Argo. Where Argo only has parameters and artifacts, we have things like integers, booleans, JSON objects and arrays; and, as opposed to artifacts, we have files, folders and paths, rather than having things go directly to S3 or whatever source you're using.
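To make that typed input/output idea concrete, here is a minimal sketch in plain Python. This is a hypothetical stand-in, not the actual Queen Bee schema: a parameter carries a declared type and validates values, and a folder input resolves a user-facing relative path so the platform can decide later whether it maps to local disk, S3 or GCS.

```python
from dataclasses import dataclass
from pathlib import PurePosixPath
from typing import Any

@dataclass
class ParameterInput:
    """A typed parameter (string/integer/boolean/array/object style)."""
    name: str
    type: type

    def validate(self, value: Any) -> Any:
        if not isinstance(value, self.type):
            raise TypeError(
                f"{self.name}: expected {self.type.__name__}, "
                f"got {type(value).__name__}")
        return value

@dataclass
class FolderInput:
    """A folder input: the user sees a path relative to their project
    folder; the backend decides what storage that actually maps to."""
    name: str

    def resolve(self, project_root: str, value: str) -> str:
        return str(PurePosixPath(project_root) / value)

# Hypothetical inputs for an annual-daylight style recipe:
sensor_count = ParameterInput("sensor-count", int)
model_folder = FolderInput("model")

print(sensor_count.validate(200))
print(model_folder.resolve("projects/demo", "model"))
```

The point of the sketch is the separation: type checks happen before scheduling, and storage resolution happens after, so the same recipe runs unchanged locally or in the cloud.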
B
Then they had to be pluggable. By this I mean that a recipe is composed of a series of functions, and these functions are packaged and can be imported relatively easily and dragged and dropped to create the functionality of the workflow that you're building. Recipes themselves can also be used inside of a recipe.
B
The way a recipe runs locally is that we translate it into a Luigi Python binary, essentially, which accepts the inputs that we give it and just runs the whole thing through as we want it. And of course, the way it's going to run on the cloud for us is using Argo, and this is what brings us to Queen Bee with Argo, which is, as we call it, a match made in the cloud.
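The local-execution path can be pictured with a small stand-in. The sketch below is purely illustrative (it is not the queenbee-luigi code): it walks a DAG of named step functions in dependency order and threads outputs forward through a shared context, which is roughly what a generated Luigi pipeline does for a recipe.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_recipe(functions, dependencies, inputs):
    """Run each named step after its dependencies have run.

    functions:    mapping name -> callable(context) -> dict of outputs
    dependencies: mapping name -> set of upstream step names
    """
    context = dict(inputs)
    for step in TopologicalSorter(dependencies).static_order():
        context.update(functions[step](context))
    return context

# A toy two-step "daylight" recipe: create a folder name, then "simulate".
funcs = {
    "create-folder": lambda ctx: {"folder": f"{ctx['project']}/radiance"},
    "simulate": lambda ctx: {"results": f"{ctx['folder']}/results.json"},
}
deps = {"create-folder": set(), "simulate": {"create-folder"}}

print(run_recipe(funcs, deps, {"project": "demo"})["results"])
```

The same DAG-of-functions description can then be handed to a different executor (Luigi locally, Argo in the cluster) without changing the recipe itself.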
B
It's worth noting, for those interested, that we're currently stuck, well, running on Argo 2.12. That's mostly because the entire back-end team is me, and I'm doing a lot of stuff at the same time, so it's been difficult to upgrade day by day.
B
First of all, Queen Bee requires DAG- and loop-based workflow resolution, so that we can really enable the expressive workflows our users needed. It also needs to be able to schedule what I'd call obscure software. A lot of simulation software, stuff for CFD (computational fluid dynamics), might be quite old or quite specific, needing to run on odd machinery; it might require GPUs and things like that.
B
Some of the daylighting software hasn't been updated properly for the last 10 years, and some of the energy software, such as EnergyPlus, is written in Fortran. So we're not really in AWS Lambda land; we can't schedule these things that easily. The last requirement, of course, was that it really had to be cloud agnostic, because we've got ambitions to enable our users to schedule simulations on their own clouds, potentially, in the future. And so Argo was perfect for us because it was container based.
B
It could handle this odd scheduling behavior, as mentioned; it's got these very strong workflow primitives, which we could make use of; it's Kubernetes native, which allows it to be cloud agnostic; and it's open source, which is always awesome. Now, the way we essentially handle Argo is that we translate our Queen Bee recipes into an Argo workflow, and the way this is done is: we've got a recipe.
B
We've got our inputs. We combine these into what we call a job object, and then we've got a library called queenbee-argo, which translates it into an Argo workflow. What happens during this process is that we translate our abstractions, such as the project folder, into an S3 or Google Cloud Storage sync, but keep the local-file-system abstraction working for the users.
B
Some of the pain points here which are worth noting: artifact resolution is super hard, and I've made a couple of PRs to do with getting S3 to behave a bit more like a file system, which it's not supposed to, but we try to work around these as best we can. A couple of other things which are a bit tricky are handling the injection of secrets, such as specific database requirements like access points, or if users are using a specific container; that's stuff we still have to resolve and decide how we want to make work properly. Then, translating from Argo back to Queen Bee is essentially pretty much the same process, but backwards.
B
We take the recipe, because we need to know a lot of information about the context that generated a specific node in the Argo workflow status object. We've got the job, to know where to point it, and finally we've got the Argo workflow object. We combine these together, resolve all the stuff we need, and translate them back into a job status object, which we then dump into a database.
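Translating back is the same mapping in reverse. The sketch below uses hypothetical field names, but it shows why all three objects are needed: the Argo status alone only has node phases, while the recipe supplies the context needed to re-attach the typed outputs each step declared.

```python
def to_job_status(recipe, job, argo_status):
    """Combine recipe context, job pointers, and raw Argo node statuses
    into an enriched job-status record (illustrative shapes only)."""
    steps = {}
    for node_id, node in argo_status["nodes"].items():
        template = node["templateName"]
        steps[node_id] = {
            "name": template,
            "phase": node["phase"],
            # Re-attach the typed outputs declared by the recipe, which
            # the bare Argo status knows nothing about.
            "outputs": recipe["functions"].get(template, {}).get("outputs", []),
        }
    return {"id": job["id"], "phase": argo_status["phase"], "steps": steps}

recipe = {"functions": {"simulate": {"outputs": ["results-folder"]}}}
job = {"id": "run-42"}
argo_status = {"phase": "Succeeded",
               "nodes": {"n1": {"templateName": "simulate",
                                "phase": "Succeeded"}}}
print(to_job_status(recipe, job, argo_status)["steps"]["n1"]["outputs"])
```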
B
One of the specific reasons we don't just use the Argo workflow object directly is that we want to enrich a lot of the information, kind of recontextualize it, with these strong input and output primitives we were mentioning. The idea is that this then helps our downstream processes and better explains to our users what happened, or really lets them see through the spread of this workflow.
B
We've then got, essentially, a workflow diff generator, which listens in on the Argo controller logs to pick up update events, and dumps these update events into a Pub/Sub topic. That then gets picked up by our simulation service, which translates them back and dumps them into a database for us. Once these are done, they can go on to produce further events that are translated into our own Pollination schema.
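A minimal stand-in for that pipeline looks like this, with an in-process queue standing in for the Pub/Sub topic and a dict for the database. The real service tails the controller logs, but the flow of "pick up update event, publish, consume, translate, persist" is the same.

```python
import json
import queue

topic = queue.Queue()   # stands in for the Pub/Sub topic
database = {}           # stands in for the status database

def diff_generator(log_lines):
    """Scrape workflow-update events out of controller log lines
    and publish them to the topic."""
    for line in log_lines:
        event = json.loads(line)
        if event.get("msg") == "Workflow update":
            topic.put(event)

def simulation_service():
    """Consume events, translate them to our own schema, persist them."""
    while not topic.empty():
        event = topic.get()
        database[event["workflow"]] = {"phase": event["phase"]}

logs = [
    json.dumps({"msg": "Workflow update", "workflow": "run-42",
                "phase": "Running"}),
    json.dumps({"msg": "Workflow update", "workflow": "run-42",
                "phase": "Succeeded"}),
]
diff_generator(logs)
simulation_service()
print(database["run-42"]["phase"])
```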
B
Some of the advantages of doing this, as opposed to just querying the Argo server directly for the updated status: first of all, it means that we don't overload the Argo server, which is already quite busy, especially if you have quite a lot of users using it.
B
It also means that we're free to write our own query API and make some queries that are a bit more interesting to us and our users, without having to make changes to the Argo server.
B
Or requesting changes from the Argo team. So yeah, some pain points of this event-based state translation: scraping update events from Kubernetes is a bit unreliable, or pretty difficult. The reason for this, I think, is that the Argo controller emits these update events in very, very quick succession, and my understanding is that there's a bit of a cache-control system in Kubernetes, at least, that prevents overload of the server itself, such that these update events might be combined and we don't get the granular information that we're asking for initially. There's an issue for this.
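That coalescing behaviour is why watch-based scrapers are generally paired with a periodic re-list: if intermediate updates get merged away, a full re-read of the objects still converges on the true state. The toy below (no real Kubernetes client involved; names are illustrative) shows the level-triggered pattern.

```python
def consume(events, store):
    """Apply watch events; intermediate updates may have been
    coalesced away by the watch cache."""
    for name, phase in events:
        store[name] = phase

def reconcile(list_snapshot, store):
    """Periodic re-list: overwrite the store with authoritative state."""
    store.update(list_snapshot)

store = {}
# Three rapid updates were coalesced into one by the watch cache,
# so "Pending" and "Running" were never observed:
consume([("run-42", "Succeeded")], store)
# The re-list repairs anything missed and picks up new objects:
reconcile({"run-42": "Succeeded", "run-43": "Running"}, store)
print(sorted(store.items()))
```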
B
The translation itself is also relatively complex, in the sense that certain inputs or outputs for our DAGs require information from the nodes inside of it, as well as the source template, the source Queen Bee recipe schema, to be available to do the translation. This can again be a bit complex, because it requires a lot of retrieval of information, and combining information at the right time.
B
So first I'll take you to our application. This is our Pollination application, as I mentioned. We've got this concept of recipes; I'll show you a recipe first to give you an idea of what they're like. Let's take the annual daylight recipe. As I mentioned, this recipe has got some information about it.
B
It's got a series of inputs and a series of outputs, which are rather specific, and then we can explore the recipe itself through this node-based interface. At the moment, nothing's actually happened yet.
B
This is just a template for how we want things to be executed. So, for example, we can see there's a step here to create a folder, and we can actually check what this step does. That is resolved within the plugin that's used, which is the honeybee-radiance plugin in this case; this function specifically creates a Radiance folder, and this is the command for it, and then the inputs and outputs below it, and all that kind of stuff.
B
So that's that. Then, within that, we can also have recipes inside of recipes. In this case this is a nested DAG: we've got an annual daylight simulation that's actually running inside of our annual daylight object itself. This one just does some more granular stuff, as we can see, and finally it exposes its own results at the end of it.
B
So yeah, that's essentially our quick recipe viewer. Then our main place to run things is inside of our, excuse me, we go to my profile instead, our projects, where we really combine everything that we bring together. Essentially, users have access to a file system which we give them, so they've got a folder with information in it and whatever they need, and in our case we've got two things we're interested in.
B
The first is our 3D model, a simple 3D model that we've got here, and the second is the weather file that we're going to use to schedule and simulate it. So I can then schedule a new run for my account here: I can select my demo project, and I can select the annual daylight recipe we were talking about before.
B
There are a couple of other inputs here which we could change (I'm going to leave them because I'm lazy, but essentially we could change them) and a quick description. So then this gets scheduled to run, and now we know that we've got our run, it's been scheduled, and it's happening. As a side note, we've got our Argo server up and going, and we can see that this run has now been scheduled to run on Argo and it's doing its own thing now.
B
The update speed at the moment is a bit laggy, because this is our staging setting, so this run might be a bit slow to update. And it's updated pretty quickly, actually.
B
So we can see this UI; I mean, it's essentially the same for now, but the information we've got is a bit different from what we've got in Argo. We've got a quick overview of what's happening, our inputs and outputs, and then there's this idea of adding extra information to things, really so that we can resolve things. This, for example, is a model file, and so we can say, cool, we can now resolve where that model is inside of our folder itself. So yeah, that's essentially it.
B
So this is how our simulation is running; once it's done, it'll tell us, and then we'll have our outputs available here. If I go back to my project here, we can see that I've got a series of runs that I've been running. This one is doing its thing, and I've got a series of runs with their own inputs and outputs here, as I run them, as well as the link to the recipe that originated them. I can then go back to it, which was the annual daylight one I showed you previously. So yeah.
B
I think that's pretty much it from us at this point, for our live demo and how things work. If you have any questions, of course, you're welcome to ask them. And, as is mandatory, we're hiring, so please feel free to get in touch if you're interested in what we're doing and want to hear a bit more about it. Thank you very much.
B
Oh, do you know what, I'm actually not 100% sure; our front-end engineer built it originally. Honestly, I'm not sure, I'm sorry. We're actually currently looking into using something different, because we found something called LiteGraph, which I'll show you really quickly. LiteGraph would be something we use to actually help people build them themselves.
B
So LiteGraph provides this kind of recipe-builder interface, so you can drag and drop things, connect them together, and stuff like that. So yeah, if you've got updates and are interested, we'll happily collaborate with you, or send you some feedback and have a chat about it once you've got progress on that aspect of it.
A
I'll have a look at this LiteGraph, certainly. Does anybody else have any more questions they would like to ask before we move on to Sumit's presentation?
C
I actually have a question: there's one page on the slides about Argo events, where there's a Kubernetes update event which is not reliable, and there's a link. If possible, could you please share the details of that part with me? It could be offline.
A
I assumed that was for you; I wasn't paying attention, I was looking at the LiteGraph picture.
C
Oh yeah, I just wanted to know about the unreliable Kubernetes update event on one of the slides.
A
Yeah, okay. Okay, Sumit, are you ready? Yes, Alex? Okay, let's pass over to you in that case.
D
Thanks, Alex. Good morning and good evening, everyone. I'm Sumit, a principal engineer at Intuit, working on something called distributed load infrastructure, and at Intuit we are using this for many of our clusters. This is what our scale looks like: we have close to 1000-plus developers, more than 200 clusters, and a bunch of namespaces with many, many services.
D
Now, if someone wanted to test, or wanted to create load on this infrastructure for their specific service, they were facing a lot of challenges, and that is what we shared at last year's Argo community meeting. So we built an in-house solution, and before I jump into that, let me talk about the biggest problems we were facing. The first problem, which I think is very common across the board, is the scale problem: when you talk about load infrastructure, how do you scale the load infrastructure itself?
D
Usually you have a dedicated instance, or perhaps a certain tool, but again, scaling that is one of the biggest challenges. Cost is also very important, because you have to own the running cost and the licensing; there are a lot of commercial tools.
D
If you go for security and compliance, a lot of integration is required, a lot of ACLs you need to open, and all those solutions are not based on a container-native or test-container-based approach. And last but not least, self-service and automation: you usually get stuck, because either you go outside your existing pipeline or you build some custom solution, and then you come back.
D
So what is Distro? At Intuit we are doing load testing using this tool, which is not any one specific solution but a bunch of tools we call Distro, and it is a scalable, container-native, self-service solution which is technology and cloud agnostic. Right now we've built the solution for Gatling (Scala-based) scenarios and AWS, but it could pretty easily be ported to any other scenario. And how do we do that?
D
I'll just quickly talk about that. The way Pollination mentioned that they are using reusable recipe workflows: the same concept applies to the distributed load infrastructure.
D
You have to build a specific program and container. For load testing we usually use Gatling as the programming framework, and then we build a Docker container using Jenkins. So you have a container available. Now, you provide this container information in the workflow, and then you write the entry and exit points. This Argo workflow you set up on the Kubernetes infrastructure, on your specific workload or specific namespace. Now, again from the same Jenkins reusable declarative pipeline...
D
...you will invoke Argo, and then Argo will go and execute this load infrastructure. Many of the executions will push their load results to the AWS S3 bucket, and then it will gather all the results and consolidate them. This is the way: if you are moving away from a specific programming framework, you can bring your own container, you can use any of the existing Kubernetes providers, and in place of S3 you can use any other data source, and this will work just as straightforwardly. Before I jump into that: this is the blog I have written on this, which talks briefly about distributed load testing; all the links are available, and this is the way you can get started with this specific setup for your execution.
D
As we are building this for ourselves, there is a little bit more responsibility, because we need to maintain this infrastructure in such a manner that it is self-serve by itself and can be maintained as part of a pipeline. So we need to think about how we can create a declarative pipeline, get all the specific container images, and maintain the Argo workflow; and we need to have the security and compliance of the right images. So, in a nutshell, how does this whole thing work?
D
You have your source code, which is your code in any programming language. We suggest that, along with that, you have your test code and this infra code. The infra code is nothing but the YAMLs and all the processes for how you want to go and execute your load distribution.
D
Your test code will create a container image, which is Docker based, and then in the infra code you will have the information about which Docker entry point or command line you have executed and how you can parameterize that in the Argo workflow. Through this Jenkins pipeline, we just execute those processes, and it will grow based on your demand and supply, that is, how much load you want to generate, making use of the Kubernetes cluster.
D
You have your Argo Workflows and Argo UI set up. Once you have this whole setup available, through the Argo workflow account it will go and target or hit any specific endpoint, and once the execution is done through the workflow, it will consolidate all the results, aggregate them, and then upload to S3. This usually gives you the power to bring any kind of test technology and create a container image.
D
You can bring any kind of cloud provider and store the data, and the rest all becomes reusable components again.
D
So let me just quickly jump to a demo first, and then I can quickly talk about the workflow. To do the demo, I have one very simple hello-nginx application, which just exposes a simple endpoint I can go and hit.
D
Then, to run the test, I have to create a container image. I already have these projects defined, and the way I can run the test is to run and hit any specific endpoint, and this endpoint will tell me whether the test is passing or failing. So if I go and run this, I get an OK. Now, the same stuff I have put into one of the test codes, and I have a command line which actually goes and exposes all this information.
D
This information I provided as part of one of my Argo workflows. In the Argo workflow I have given a bunch of steps first; these are all best practices which we have learned over a period. But here, this is the important line where you say to run the test. When you say run the test, at that time we actually invoke this container image and pass certain parameters. And what are those parameters?
D
Those parameters are how you want to distribute your load, where and how this load will be going, what TPS you want, and the number of nodes and pods you really want. With that, you can just execute this as a command line, as simple as that, or you can have this thing tracked as part of a Jenkinsfile.
D
We have created the Jenkinsfile, again as a declarative pipeline, so you don't need to go and write everything: you just import that library and it will go and invoke it. So now I am just going and invoking one of the executions. Behind the scenes, it will try to grab the container image which we have built as part of this.
D
It will have the Argo workflow, which we have encoded; it has certain parameters which the Argo workflow wants, which we are passing here; and then it has been set up, via a kube context, for execution on a specific Kubernetes cluster. So we will actually be executing this on one of the Kubernetes clusters where this Argo workflow has been set up as a namespace or workload.
D
So, as I am running this execution, behind the scenes it will go and start a workflow on our Kubernetes cluster. This is one of the examples where I showed that. Our Argo team has a lot of best practices on how you can make sure that this specific workflow is not impacted, so we first create a PDB (pod disruption budget), and now here we are actually going and executing the test.
D
So here the test has actually started, through Argo; you can see that we are actually hitting the endpoint and the test has started. We can look at a lot more information about what all those parameters are and how we can configure and parameterize those things. So, coming back: what I showed here is that I pick a test code, create a container image, configure the workflow, and start executing. Now, as we are distributing the load...
D
...one of the biggest challenges is how you consolidate the results, because every pod will spit out a different report. So, for that same execution, we have added an aggregator. The aggregator is nothing but something that merges your existing results. Most of the test technologies running in a container have a mechanism whereby, if you create a report, you can consolidate it. So we use that, and these are the two images which are good to start with from scratch.
D
So let's see how things are going. Once the test is executed: we have integration with Splunk and we do have monitoring, so you can go and look in Splunk, which is what we do at Intuit, but there are a lot of other monitoring aspects available as part of Argo. You can look into the Argo job; everything else is specific to your monitoring suite.
D
So how does the workflow happen? The overall workflow, as I mentioned: you have a code base, from which you create a container image; you have a workflow YAML defined; and everything is triggered by Jenkins. Jenkins will go and submit with a kube context. Again, all these things are very much about compliance and security; you cannot allow arbitrary execution, so what we are doing is, with this context...
D
...we are only doing the argo submit, and Argo has a specific role and permission we have defined, and those roles and bindings can execute on a specific namespace. Initially it will go and create a pod disruption budget, then it will execute the test, and then it will aggregate the results and put them in S3. This is the setup we need on Argo for a specific cluster or workload, and all those recipes are available on the community side as well as in the Argo onboarding guide.
D
Now, this is the way it comes up. Here you see that the list comes up, and then we are getting to S3 and we should be able to see the report. So this is what the pipeline looks like: you get the execution going, you get your results, and then we have two integrations. One is with Keptn: in case you want to define your SLIs and SLOs and you want to make this execution pass or fail...
D
...you can do that. Also, very recently at KubeCon we shared how this same workflow can be used as part of chaos execution. So these were the other two solutions which we have built on top of that, where you can build a specific gating mechanism, or, along with the execution of the test, you can create many chaos experiments as well.
D
Now, what's the reward for us, and what's the benefit if anyone wants to use this? In our scenario, we are building a platform, and that platform will be used by many teams. It's practically impossible for one team or one person to go and support all of this, so to support the platform it has to be something which is self-serve. That's one very important part: we are building the test container as part of the code.
D
So it becomes a very container-native approach, and as everything is in YAML or in a container image, everything is code, so you can just add it as part of a pipeline. Cost is really, really important: we compared the commercial tool we were using and maintaining versus...
D
...using Argo Workflows and doing all the setup ourselves, and we have seen, in a year's time, savings of more than 90% while running these specific executions. And again, everything is open source, so to get started you just need to go and start with Argo Workflows. I also wanted to mention one very interesting thing: because we are doing this, we are eating our own dog food.
D
We reached up to 1000 nodes on a given cluster. If we had wanted to execute that any other way, it would have required a lot of cost and a lot of resources. With the Argo workflow, we were able to do it very quickly. In our clusters, we have close to, I think, 25-plus setups like that. Again, all these things are set up as part of cluster creation.
D
We have one add-on-based setup on a Kubernetes cluster which actually goes and sets this whole infrastructure up for us. So this is how we are actually doing it, and this is how I see it going forward. Right now we are supporting only AWS-based execution, but we can support Azure and GKE. The only technology we are using right now is Gatling, but for any of the load-distribution technologies, like JMeter, Go, or Python, you can create a container image and get that integration.
D
We already have two integrations with open-source CNCF projects: Litmus and Keptn.
D
I've just kept some pointers as part of this, so that if anyone wants to get started, there is a lot of information available.
D
Now, with that, I will just pause and see if there are any questions.
E
Actually, if you have one sec, I do have a quick question, not related to the load testing: I happened to get a peek at what looks like a UI for managing clusters at Intuit, which looks really interesting, and I was just curious, is that a proprietary thing, or is it based off of an open-source project?
A
He is here, but he's not answering, so I will answer on his behalf.
A
We do have our own in-house system for managing Kubernetes clusters that allows teams to self-serve: create their own test clusters, install a set of enterprise add-ons related to things like security and logging, so they don't have to worry about a lot of that kind of stuff. It's got a user interface, audit logging, and all those juicy features you want, and so on.
D
Yep, for managing the clusters we use Intuit's Kubernetes service management, but yeah, Hong is the right person on that. Getting started is straightforward: just go and start here, and all the documentation is available for the reference nginx-based image as well as the test container.
A
What was that test project system?
D
Yeah, so this is another Git project which I built. When we initially built this for Intuit, we figured out that Intuit has a lot of proprietary stuff which we cannot share, so we built something very generic, something which anyone can take and use, because at Intuit we use a GraphQL-based execution for most of our services.
A
So, if you've been following the discussions in our chat recently, you'll know that we're on the verge of releasing Argo Workflows v3. I'm going to talk a little bit about some of the reasons behind doing that, and then a little bit about some of the major features. We've always wanted to rename the repository from argo to argo-workflows, because people get confused between the two repositories.
A
We're planning on doing that. We're also planning on sorting out a problem with our Go modules that currently makes it very hard for people to import that code base as a library into their own code base, and we're going to make some small, potentially breaking changes for some other users, including a slightly different way of doing artifacts called key-only artifacts, which maybe I'll circle back around to.
A
If people are particularly interested. So that's why we're going to do v3. One of our main goals is that v3 is actually not intended to be a breaking change, so we're hoping it's going to be a relatively straightforward migration for most users. We haven't actually got that many more new features in version 3 than we do in version 2.12, and we do plan to give 2.12 long-term support.
A
But what I really want to talk about today is some of the new capabilities that we're going to put into the Argo Server user interface, and I've got a little wiki page here to talk a little bit about those. I'm going to talk about two main features, then a little bit about some of the smaller features, and I'm going to do a little demo of them.
A
So, let's crack on with the meat of it. People will notice this user interface looks slightly different to the existing one. You can see that the left-hand navigation is a slightly different shade and also has a lot more icons in it — about twice as many icons as v2.11 has — and in the bottom right-hand corner.
A
We've got a new chat button down here, and this chat button is configurable in the same way that you can configure links in the rest of the user interface. You can choose to have it point to your internal documentation, or it can go to some kind of external destination. We found this very useful in Argo CD for directing our internal users to self-help and FAQ pages that we could keep up to date using a Google Doc kind of thing, and then they can jump into another Slack room for chats. So it's been really helpful for our Argo CD users, and we wanted to bring that into Argo Workflows.
A
Okay, so the two major features in the user interface that I think are going to be most impactful for people: one is around the introduction of Argo Events into the user interface. We've actually got some new API endpoints for people who are Argo Events users.
A
A new service called the sensor service and a new service called the event source service — which I can't seem to see here, but I'm sure it's here — to provide additional, fairly standard CRUD endpoints for these two types of resources, as well as the ability to stream data about changes to event sources and also to stream their logs, and I'll show you some of that shortly.
A
That's particularly interesting. An event source in Argo Events is basically something that listens for an event and then drops a message onto a message bus. So, for example, a Kafka event source will wait for a Kafka message.
A
An S3 event source will wait for a drop into an S3 bucket, and that drops a message onto a message bus. By default in the demonstration environment we've configured a calendar event source which emits a message every 10 seconds.
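For reference, a calendar event source like the one in that demo environment can be expressed as a short manifest. This is a minimal sketch — the `example` event name is illustrative — rather than the exact manifest used in the demo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: calendar
spec:
  calendar:
    # emits a message onto the event bus every 10 seconds
    example:
      interval: 10s
```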
A
A sensor in Argo Events land is something that listens for event source messages that have been dropped onto the event bus and performs some actions. Common actions are things like triggering a workflow, creating a Kubernetes resource, dropping a message onto some kind of message bus, that kind of thing. So it basically allows you to use Kubernetes to plumb events into actions, and you can do that in the user interface.
A
We've got a very familiar-looking system here showing the ability to add event sources, and we've actually got two event sources set up in this system — sorry, two sensors — one that logs the events it receives and another that triggers workflows. That's put together on this page called the event flow page, which shows you a diagram, within a particular namespace, of all the events going on there.
A
You can click this event flow button and — as long as there are no bugs, because there are a few teething issues with this — it'll demonstrate the flow of messages through the system using animation. So you can see that at that point the calendar triggered, because it triggers every 10 seconds, and you can see that it was connected to the log sensor and the workflow sensor, and that those have acted. I think that might be a bug with the animation, but that's fine.
A
You
can
also
click
on
these
and
it'll
bring
up
additional
information
about
that.
That
particular
particular
trigger
that's
quite
interesting
because
in
the
argo
events
worlds
these
things
tend
to
be
multiplexed
into
a
single
crd.
A
A sensor tends to contain multiple triggers, but often you only want to see the logs of a particular trigger, and you've got additional diagnostic tools here to see just that trigger — it won't show any information logged by other triggers; it filters those out. And obviously you've got the events shown in the background here as well. I mentioned a bit about improved reliability in the user interface, and one of the things we've done is we've refactored
A
quite a lot of code. The user interface is written in React, and we've changed a lot of the code to use React functional components. We've found it's much harder to write buggy code using a functional component and much easier to fix bugs in those components — bugs can't really hide in too many places.
A
As a result of that — and you probably noticed in the user interface a second ago we had a connection issue, and it actually automatically resolved it — we've got a lot more code that deals with problems around being disconnected from the network, due to an unreliable network or some kind of proxy sat in front of it. The user interface is now extremely tolerant of that, and it typically reconnects you within a couple of seconds. Okay, so I talked a little bit about the events there.
A
Okay, I won't do that. Let's have a little look at the workflows. In the workflows interface, we've got a new way to submit workflows from the user interface: you can choose from a drop-down of workflow templates and it'll take you straight into the submission method, or you've got a new workflow editor that has a few more options around it as well.
A
So you can edit using YAML, but you can also edit the parameters individually, and you can actually edit the metadata. The same kind of editor is now available for workflow templates — you go straight into the editor when you go in here, and I can go through and edit various bits of information very directly, which I think is pretty nice.
A
If I go into a particular workflow — those of you who are familiar with this will probably notice that the DAG, the graph, is rendered slightly differently, and I've got a few new options here at the top left-hand side that look slightly different. One of the options is the ability to filter the view by template. You'll also notice the number of icons has shrunk in the top view; we actually show fewer icons.
A
We've got two new icons at the top here. One is a new log viewer — this is a whole-workflow log viewer. Previously you'd be able to look at the logs of a specific pod within your workflow, and in v2.12 you can look at the different containers within that pod. In Argo Workflows version 3, you can look at more than just the logs of a specific pod — I can have a little look at the init and wait container logs.
A
If they exist, as well as the main ones. I also get the option of choosing which of the particular pods within the workflow I want to view; if I just want to look at a specific one I can choose that, or I can look at the whole lot. So what you can actually do here is watch the progress of your logs as they scroll down the screen.
A
Additionally, we've got a bit of beta functionality here — just click on this guy here. We've now got — oh, he's probably been deleted — we've now got some very basic embeddable widgets that you can embed into your application, showing the status of the workflow and of the graph, and that also works for workflow templates. If I go into, let's choose GitHub events — if I go into the GitHub event, I can actually have a look at the widgets here, and these widgets will automatically update as the workflow progresses.
A
We will probably look to improve these over time, because they're quite heavyweight in terms of the user interface, and if you're going to embed them in another application you need to deal with the typical cross-frame issues that you get with those. But these particular widgets for a workflow template will actually automatically update as a new workflow is created.
A
So if you've got a cron job, you can have a widget for that cron workflow, or a widget for a particular workflow template, and it'll automatically update without you having to refresh the page. And you don't deep link — well, I'll show you — you don't deep link to the workflow, you actually deep link to the particular template.
A
Okay. Now what else do we have in the user interface? I've covered that. We've got some new links. A lot of people use deep logging links to make it easier to jump into their logging facility, because pod logs obviously disappear quite rapidly, or maybe you don't have archiving of logs set up, so it can be quite useful to be able to get to those deep links. So you'll notice there's a —
A
There was a deep link on this page — there was definitely one of the deep links on this page. Oh, it's probably not configured in this environment. An additional log link will appear on this page, and if you choose to pick your pods, you can actually have another deep link for the pod, and the same for event sources and sensors. If you want to deep link into Splunk — or whatever you guys use — then you can do that.
A
I'm going to show you a couple of items from v2.12 as well, for people who haven't upgraded. We've got a new workflow report — that's actually in v2.12, it's been around for a while — and you can use that to look at historical data for particular workflows. This is not a very good example, because no data's appeared. Let's see if we get some data for this one — no data's appearing.
A
I'm not really sure why. It allows you to look at things like: how long did that workflow take, how much memory did it use, how much CPU did it use — which is all existing data, it's just being surfaced into the user interface. And we made a few small tweaks to the user interface to improve things like, for example, a better layout when the page is resized from a small page to a large one.
A
Just click on the single sign-on login here, log in with your github.com login, and click grant access; it'll take you into the interface and provide you a read-only view, of course, but there's quite a lot of seeded data within the user interface, so you can see what's going on. Okay, I also wanted to talk a little bit about a few other features. You can come and have a look at what else is in the v3 milestone here as well.
A
We've got a scalable controller using leader election, so you're able to run two controllers with one in hot standby: when one fails, the other one will jump in and start processing. We're going to fix the support for Go modules — Simon is working on that at the moment — and we've got a couple of improvements around artifacts. We've enhanced the artifact repository reference, so you can have a default one for a particular namespace that's automatically used for workflows in that namespace.
A
We
have
a
feature
that
will
automatically
create
s3
buckets
for
you.
So
if
your
workflow
uses
an
s3
bucket,
you
can
do
that
and
another
feature
called
keyoni
artifacts,
which
allows
you
to
only
specify
the
key
or
path
of
an
artifact
and
all
the
information,
such
as
secrets
and
endpoints,
is
automatically
filled
in
for
you,
and
those
key
only
artifacts
can
also
be.
They
can
also
be
referenced
within
a
particular
bucket.
So
it
allows
you
to
do
quite
nice.
A
things, such as a workflow where you have many tasks that all write into a single bucket, followed by a subsequent task that gets that whole bucket mounted as a directory and can go through and process those files. So it makes fan-out/fan-in workflows — map-reduce-style workflows using artifacts — much easier to do. Do expect teething issues; it was quite a large code change, and we're hoping we can squash some of those teething issues soon.
A
We will probably be looking, in 3.1 and 3.2, at providing some new features around artifact management that make things like fan-in and fan-out much easier. It's a little bit difficult to do map-reduce with artifacts today — you have to work around it — so we're probably looking to improve that, and also conditional artifacts, where the artifact you use is actually based on a condition: you know, if step A passed, use the artifact from step A, otherwise use the artifact from step B. That kind of stuff.
F
Yeah, this looks awesome. I had a question around logging. We definitely have workflow authors and then workflow users: the authors are very interested in debug output and all this junk, and the workflow users just want things to work.
F
We, I guess, want to use structured logging with our Argo workflows — in GCP, you know, we were able to filter all that stuff out properly. Is there any interest or any ideas around supporting that kind of use case in the actual log viewer itself, or is that something where you would just make that log button go directly to the GCP log sink? What do you think?
A
Yeah, it's an interesting question. One of our developers is currently working on revitalizing
A
the underlying log component, and I'm aware that it would be nice to have things like structured logging, filtering by time frame, filtering by log level, and so forth. But we can never do as good a job as a professional tool, because it's all underpinned by the Kubernetes pod logs API, and if you look at that API, it only has a certain number of options.
A
It's not particularly efficient — you can't say to it "return me only error messages" or "return me only a certain time frame" — so it would never be particularly efficient or performant. My suggestion is that you deep link into your logging facility; that's persistent as well, whereas the pod logs go away with the pod.
F
The other question I had — admittedly, we've been using Argo successfully for quite a while now to automate some workflows that used to run on developer boxes, you know, and it's been great for that. We hand-rolled our own little, I guess, instrumentation around the Kubernetes events that Argo emits, to do things like track how long workflows take to run and the number of failures and stuff like that, and we pipe that over to Wavefront.
F
So I noticed there was a little bit of Argo Events stuff going on — I saw some sinks and some emitters and stuff like that, and I saw some charts showing up in that sidebar. Am I going to be able to delete that code soon?
A
You
know
we
relate,
we
lay
some
of
the
groundwork
for
it
and
then
we,
then
we
see,
if
there's
a
lot
of
interest
from
the
community
before
proceeding
with,
with
the
features
that
those
charts
in
the
left-hand
navigation
that
that
is
in
that
that
I,
my
hope,
is
that
people
will
get
involved.
They'll
come
up
with
new
charts,
they're
kind
of
interested
in
and
we'll
develop
them
it's.
A
You
know
it's
just
a
bit
of
javascript
to
be
honest,
that
that
loops
over
the
things,
so
you
could
easily
come
in
and
add
your
own
chart
if
you
wanted
to
to
provide
that.
That
would
be
fantastic.
Also,.
G
And if you want that stuff to go to Wavefront — we also use Wavefront, and we use Telegraf to pipe Prometheus metrics into Wavefront. So instead of relying on Kubernetes events, your workflows themselves can chart completion times and success rates; you can actually customize your metrics, whatever you'd like to emit as a metric, and then pipe that into Wavefront using Telegraf.
A
The Prometheus metrics we've had for a while, and for operational stuff that's really the solution for you. There are some deep insights into the controller's behavior, for example how long it is taking for the controller to get around to processing a workflow; that stuff is in there as well. There are also custom metrics.
A
I don't know if Simon's here to talk a bit about that. Custom metrics allow you to build metrics that we emit on behalf of your workflow, but they're defined by the workflow itself.
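Those workflow-defined metrics are declared in the workflow spec itself. A minimal sketch — the metric name and help text here are made up:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: metrics-demo-
spec:
  entrypoint: main
  metrics:
    prometheus:
      - name: demo_workflow_duration       # hypothetical metric name
        help: "Duration of this workflow in seconds"
        gauge:
          # emitted by the controller's Prometheus metrics endpoint
          value: "{{workflow.duration}}"
  templates:
    - name: main
      container:
        image: alpine:3
        command: [echo, hello]
```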
A
Okay, are we running a bit over time? We are, a little bit. I just wanted to close up. We'll be sending out a couple of surveys in the next — maybe even today, but certainly in the next couple of days — to gather information from the community about people's usage of Argo Workflows and Argo Events. If there's one thing you could do that could make an impact for your benefit,
A
Doing
this
survey
is,
is
the
really
key
thing
for
everybody
to
be
doing
it
shouldn't
take
more
than
like
10
or
15
minutes,
and
it's
the
usual
kind
of
questions
about
what
are
your
use
cases?
What
do
you
want?
You
know?
What
would
you
like
about
argo
workflows,
but
it's
just
very.
I
just
want
to
really.
A
I
want
to
really
emphasize
very
highly
and,
very
importantly,
how
useful
this
will
be
to
us
if
you
can
complete
it
actually
it'll
be
very
useful
to
you,
because
it'll
help
us
drive
our
roadmap
over
the
next
kind
of
12
months
and
I'll
be
dropping
that.