From YouTube: Argo Workflows and Events Community Meeting, 17th Feb 2021
Description
00:00:00 Introduction
00:02:20 Infrastructure Automation with Argo at InsideBoard - Alexandre Le Mao (Head of infrastructure / Lead DevOps, Insideboard)
00:15:00 MLOps at Tripadvisor: ML Models CI/CD Automation with Argo - Ang Gao (Principal Software Engineer, TripAdvisor)
00:38:00 Data templates
00:57:30 Pod templates/emissary executor
01:19:20 Conditional artifacts template
01:22:00 Argo Events: New release, Metrics, Validating webhooks
A
So good morning, everyone, and welcome to the second Argo Workflows and Events community meeting of 2021, which it turns out is exactly the same as 2020, which is probably disappointing. We've got a pretty good agenda today, and I'm pretty excited about some of the things we're going to be talking about. Today we're going to be talking a bit about infrastructure automation with Alexandre from InsideBoard, and we're also going to have a presentation from Ang, who is going to talk about how Tripadvisor uses Argo for their MLOps exercises. Derek is going to be talking a little bit about some of the upcoming features in Argo Events, related to some new Prometheus metrics for people who want more monitoring on that, and then we can talk a little bit about some of the changes that are coming in Argo Workflows 3.1: Simon is going to talk to us about data templates, I'm going to do, hopefully, a little demonstration of the emissary executor, and Bala will be talking about conditional artifacts templates. I'm pretty excited about this, because we're filling out some of the areas and product features that we haven't got around to before.

We are recording this, and the recording will be shared on YouTube if you want to share it with your colleagues or come back and reference something later on; that typically takes about 24 hours to get sorted out. If you want to ask a question at any point during the session, you can either just ask out loud or ask in the chat; both of those are fine ways to do it. If you ask in the chat, we'll probably just read your question out loud at that point. It would be awesome if you could add yourselves to the attendees list, so we can know who's joined us and where you've come from. Before we start, does anybody have any questions or queries?
B
Okay, is everybody seeing my screen? I have a few slides, and then I will go straight to the demo. We will talk about infrastructure automation and how we use Argo at InsideBoard. We aim to automate everything, from the infrastructure to the product. Our infrastructure is today fully cloud agnostic, and we deliver dedicated cloud resources for all our customers.

One year ago we moved all our Kubernetes job and application workflows from StackStorm to Argo. I don't know if anybody knows StackStorm; it's a good technology, but the pain point is that it's not Kubernetes native. After that first migration, the ops team decided to move all the infrastructure StackStorm jobs to Argo as well. Why did we decide that? The first reason is the GitOps way. To get all the secrets needed to launch the Terraform plan for the cloud providers and so on, we inject a Vault sidecar, and as input of the workflow we have a GitLab repository.
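As a rough sketch of that shape (a minimal illustration, not InsideBoard's actual manifest; the repository URL, the Vault role, and the Terraform step are placeholders), an Argo workflow step can pull the GitLab repo as a git input artifact and get its secrets from an injected Vault agent sidecar:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: terraform-plan-
spec:
  entrypoint: plan
  templates:
    - name: plan
      metadata:
        annotations:
          # Hypothetical Vault sidecar injection; the real values depend on
          # how the Vault agent injector is configured in the cluster.
          vault.hashicorp.com/agent-inject: "true"
          vault.hashicorp.com/role: "terraform"
      inputs:
        artifacts:
          - name: infra-repo                 # the GitLab repository as input artifact
            path: /src
            git:
              repo: https://gitlab.example.com/ops/infra.git   # placeholder URL
              revision: main
      container:
        image: hashicorp/terraform:0.14.7
        workingDir: /src
        command: [terraform]
        args: [plan]
```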
B
So now we are going to apply the Terraform plan. We are here, and you can see on the plan that I have a link to the Consul KV store. We saw earlier that we store the new customer in the Consul KV store, and we can see here that we have a GitLab repository as input artifact. In fact, at InsideBoard we manage all our Terraform scripts with Ruby, via a Consul-driven template (template.rb).
B
While it's building, I want to show something we really like to do with Argo and Argo Events. We have updated the Terraform code; now we don't want to go to the UI I showed you earlier, we just want to launch a new Terraform workflow. Okay, so that is something I really like: managing all our infrastructure with some echo commands.
A
Okay, cool. If you do want to ask any questions, of course, you can just do that in the chat, or we can circle back around to this later on. I think it was clear enough. Okay, Ang, are you ready to take over?

D
Yeah, sure. Can you hear me?

A
Brilliant. Yes, I can hear you very well. Thank you. Okay.
D
We do CI/CD automation with the help of the Argo projects. To start off: as a travel company, there are a lot of places we can apply machine learning techniques, and I'll give you some examples of where we apply machine learning models. For example, in this graph you can see we normally use recommendation-based models or sorting-based models to rank, say, the hotels or restaurants, and we also do some smart bidding for the auction-based models.

So that's where we apply those machine learning models. As for MLOps at Tripadvisor: our team works closely with the data scientists across the company to provide them with a common infrastructure, so the data scientists can deploy or train their models easily. We heavily apply the DevOps principles to our ML systems, and at the core of it is the CI/CD pipeline to automate all the things.
D
I want to give you some background on what exactly the ML lifecycle is. Normally, when data scientists plan to build a model, they start with the design phase, and then they do some offline training in some testing environment. Once they finish the training, they deploy the model to a staging environment, normally in our staging cluster, and then they can run, say, offline experiments with the model they deployed there. Once they verify the model behaves as expected, they deploy the model to our production environment, and then we have some tools for them to analyze the performance of the model. Based on that feedback, they can either improve the model or deprecate old models. So this is the whole ML lifecycle, and our team is actually involved in all these stages, from the training to the deployment, as part of this ML lifecycle.
D
At the core of all these ML lifecycles is the CI/CD pipeline, and we heavily use a lot of open-source projects. We apply GitOps continuous delivery with the help of Argo CD, and for the ML pipeline automation we use Argo Events together with Argo Workflows, actually for both model building and even the offline batch prediction; and for machine learning lifecycle management we use MLflow. Most of these tools are cloud native, so they run easily in a Kubernetes environment. Okay, so our journey to MLOps started nearly two years ago, and it's been a great success.
D
So now a data scientist can easily deploy a model, actually from the notebook, into the production environment with a few clicks. Everything is automated, which frees up time for the engineers to do more interesting work, and the data scientists can also experiment with their models more easily.
D
This is the platform overview, what our architecture looks like; the real architecture is actually more complicated than this, I did some simplification. As you can see, the data scientists start from here: they normally interact with a Jupyter notebook for the model training, and they define the features with our in-house feature definitions.
D
We have a central repo for the feature definitions, and once they define a feature, they are able to use the offline feature store for offline model training; the feature is also automatically provided to them as a feature service for online prediction. This helps us make sure that whatever data they use to train the model is the same data they use for prediction online, for data consistency. Once they are satisfied with their model, they can register their experiment with the MLflow tracking server.
D
The tracking server tracks all the experiments the data scientists have made, and when they are satisfied with an experiment, they can register the model. Model registration actually kicks off a pipeline for, say, image building, and we also do some testing on the model; we will talk in detail about how this pipeline works later in the presentation. The next stage is the model deployment, from the model registry.
D
With a few clicks they can deploy the model either to the staging environment or the production environment, and as of now we provide, say, canary deployments or shadow deployments for the model as well. That makes sure the model doesn't affect the real traffic for the users, and we also provide a quick rollback as well.

This is the runtime we use: like I mentioned, we use Seldon, and all the Seldon model services are part of our service mesh, so they talk to our feature service in Kubernetes through Istio. There is some monitoring we provide to the data scientists using Prometheus: all the model metrics are collected by Prometheus, we have some alerting for all the models, and we also provide auto-scaling for the data scientists' models.
D
Let's take a deeper look at how exactly Argo fits into all these pipelines. For the model registry: once the data scientists register their experiment in the MLflow tracking server (this is a screenshot of what the page looks like), they can select the model and then register it by providing a name for the model, or use some existing model. And this is how the pipeline actually gets triggered: we have an MLflow SQS plugin.
D
It sends an event whenever the data scientists register a model; Argo Events picks up this event and then triggers the Argo workflow that actually runs this pipeline. As part of this pipeline, it involves, say, sending a start notification to the data scientist's Slack.
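To sketch how that wiring can look (a minimal illustration rather than Tripadvisor's actual configuration; the queue name, region, and workflow body are placeholders, and credentials are omitted), an Argo Events SQS event source plus a sensor that submits a workflow could be written roughly like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: mlflow-events
spec:
  sqs:
    model-registered:
      region: us-east-1                  # placeholder region
      queue: mlflow-model-registered     # placeholder queue name
      waitTimeSeconds: 20
      # AWS credentials (accessKey/secretKey or IAM role) omitted for brevity.
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: model-build
spec:
  dependencies:
    - name: registered
      eventSourceName: mlflow-events
      eventName: model-registered
  triggers:
    - template:
        name: build-model
        argoWorkflow:
          operation: submit
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: model-build-
              spec:
                entrypoint: build          # the actual build pipeline goes here
                templates:
                  - name: build
                    container:
                      image: alpine:3.13
                      command: [echo, "build model image"]
```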
D
So they know when the model starts building, and then once the model has finished building, we do some heavy testing on the model to figure out, say, the CPU usage and memory requirements for the data scientist's model, and also the latency and throughput of the model. All this info gets used to generate the deployment YAML file at a later stage, and once the whole pipeline has finished, we write all this meta info back as tags to the MLflow UI, so the data scientists can view some of the meta info, like what the model image is, which Argo workflow built the model, and also some testing reports.
D
They can see how their model performs, and if they are satisfied with the model's performance, they can actually deploy the model. This page shows a list of all the models; from this UI we can easily see which models are in production by using this column, and we can see what version of the model is in production and what version is in staging. On the model description page, the data scientists can also provide some description or tags for their model.

Once the model is registered in MLflow, the next step is to deploy the model to the staging environment. This is how they do that: from the model registry UI, they can change the stage of the model from None to, say, Staging or Production.
D
If they change the stage to Staging, that means that, as part of the pipeline, the model will be deployed to the staging cluster. This is also managed by Argo Events and Argo Workflows: like before, an event is sent to the SQS queue, and Argo Events triggers a pipeline. This time we generate a deployment YAML file for the model, and this deployment YAML is checked in to GitLab.
D
And then we have a homegrown app called the model manager, and this app just calls the Argo CD API to create the Argo CD application; all the rest of the work is then managed by Argo CD for our deployment management, yeah.
D
And this is what the deployment looks like. We have a custom model deployment operator that basically abstracts the Seldon deployment: under the hood we use Seldon, so our operator actually creates a Seldon deployment, and that deployment also abstracts away some of the details, like how we do the rollout and some other complex stuff, to make the deployment YAML file easier to understand and reason about. This is an example of the Argo CD application created by our pipeline.
D
I think that's pretty much it. Okay, and also we utilize another tool called KEDA; KEDA basically provides us the ability to auto-scale the models. For this example, we auto-scale the model based on the number of requests to the model, and this threshold is also actually figured out as part of our pipelines as well.
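As a hedged sketch of what request-based autoscaling with KEDA can look like (the scaler, query, names, and threshold below are illustrative, not the actual Tripadvisor configuration), a ScaledObject driven by a Prometheus query might be:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: model-autoscale
spec:
  scaleTargetRef:
    name: my-model-deployment             # placeholder target deployment
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # placeholder
        metricName: request_rate
        # Placeholder query: requests per second hitting the model service.
        query: sum(rate(istio_requests_total{destination_service="my-model"}[2m]))
        threshold: "100"                   # e.g. tuned by the load-testing pipeline
```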
D
I think that's everything I wanted to cover. Are there any questions?
F
You're triggering the model rebuild based on the event that comes out of MLflow. I've seen a lot of people just, you know, fire up a notebook ad hoc and go and register a model themselves. But how are you getting the information out of MLflow to define, you know, how one ought to build that thing?
D
I don't really get what your question is.

F
So, what is on the SQS topic in that slide?

D
Oh yeah, yes. Right now, I can't exactly remember what's in the message, but I think we have the MLflow run ID, so it allows the Argo workflow to basically read back all the meta info from MLflow; MLflow also has a REST API.
D
Yeah, but I think some parts of it might be open-sourceable, if they're generic enough. And right, we used to have a talk with the Seldon team as well; this is actually a simplified version of the model deployment YAML, compared to what a normal deployment YAML file would look like.
A
How many models do you train and serve a day?
D
Now it's, I think, around 100; I don't have the exact number, and it becomes more and more every day. I think about 80 or 70, or something around 100.
A
If you can answer this: what kind of things are those models trained to do?
D
It's basically prediction, basic scoring, or recommendation; that's what our company mostly does for machine learning: recommending a hotel or restaurant for you, or ranking the hotels and restaurants you see.
G
Oh, I have a question, can you hear me?

D
Yeah, sure.

G
The question is: do you see any issue with running your workflows in a single cluster, since maybe you have some different scalability requirements?
D
Right now we don't hit any scalability issues, and we are actually really satisfied with the features Argo Workflows and Argo Events provide to us. I think the only issue we encountered is with some long-running offline training pipelines.
C
I have a question. You talked about performance testing, and I was very interested in that. I'm pretty sure you're not doing long-running performance tests; you're probably doing tests that just check against a threshold, to see that a certain response is, say, within 500 milliseconds or less, or something. How is that happening, if you don't mind me asking?
D
Yeah, so that actually happens as part of the model image building pipeline. Once we have the image, we have another tool that creates, let's say, a deployment of the model, and then it starts load testing. There is an open-source tool called Vegeta, so we use that for load testing the model, by sending some sample requests heavily to the model; based on that, we figure out the memory usage and the CPU for the model, and also the throughput and latency.
A
Excellent, excellent. Okay, okay! So, up next, yeah.
H
Yeah, okay, so: for the Terraform workflows that you automated through Argo, do they take care of the remote bucket, sorry, the remote state file?
B
I wrote some posts about it; there is more detail on the code on Medium. I can show you afterwards, if you want.
H
Sure, that would be good; if you can share that, whatever demo code you have, yeah.
J
All right, can you see my screen?
A
Yes.

J
Awesome. So hi everyone, I'm Simon, I'm an engineer here with Intuit, and I obviously work on Argo Workflows; some of you might know me already. I wanted to present this new template idea that we're playing around with, called the data template.
J
The motivation for this data template is essentially that we know it's very common for users to run a pod that downloads some data from an external source and then just pre-processes it in some way, and then do what they want with this data.
J
So our goals with this data template are essentially to provide a set of sources and transformations for our users as a convenience and, to me more importantly, to be able to automatically decide if we can do the transformations that the user wants in the controller's memory. So, for example, maybe a user runs a step that generates a long list of CSV lines or whatever, and they just want to do a simple find-and-replace or a simple filter.
J
What they would have to do today is write a container that spins up just for that, and those are very trivial operations. With the data template, what we want to do is give the user the ability to specify that on the workflow itself, and if the controller sees "oh, these guys just want to manipulate a list of strings", we can do that in memory, so we don't even have to start a new pod for them. That's essentially what we're going for.
J
We want to have an experience similar to, well, I'm sure all of you have used bash before, so similar to piping, essentially. So in this example here we source some information, like the contents of this file, and then we can just pipe some greps, some seds, some sorts, some uniqs; I mean, as a developer, this is very convenient.
J
It's very easy and quick to do; you don't have to write Golang code or Python code to do these sorts of simple, quick transformations. So this is what we're aiming for. We obviously don't want to write a programming language, that's the whole point of containers, but we also want to give you some tools for common use cases that we hope you're able to assemble in such a way that provides a lot of utility for you.
J
Maybe we might want to use this engine to do very quick inline modifications, but I don't think we're going to end up doing it this way; like I said before, our goal is to do large amounts of data transformation, not quick inline changes. Alex is actually working on something that would accomplish that.
J
So if you're interested, I suggest you take a look at 5115. Also, a non-goal that we have: we don't want this to be too small, but we also don't want this to be too big. There's obviously a point where a user may have requirements that are too specific or too unique to their data.
J
At that point they're just going to have to write their own container, and that's obviously fine; we don't want to substitute the idea of containers. So let's run through an example. This is sort of what the data template could look like right now. You can see here that this is a template, and instead of "container" or "script" we have "data".
J
So this particular data template includes a source. If you remember, we essentially have two types of contents for a data template: we can have one source, which is the "cat" of the piping analogy, and then an unlimited number of transformations.
J
This one in particular happens to have a source, and this source is something called "artifact paths". The idea behind the source is that Argo will open up this bucket in S3, and it will essentially get you a list of all the file paths that are in that bucket. So that could be useful.
J
For example, if you have a bucket with the storage for all the files for your project, and you want to, for example, only process files that are part of a certain project, or a certain model, or a certain date, or a certain... I don't know, you name it. So what this does is it actually opens up your S3 bucket, looks at the files, and then just returns all of the file paths.
J
Now, down to the transformation step: we have a source step, and now we have a transformation step. The transformation step is essentially, like I mentioned before, all of these little programs that run in sequence and can pipe into each other. So our first transformation is "filter": we only want to keep the files that have, in this example, this project name. After the transformation is done, we essentially transfer the data output by this transformation to the next one.
J
And lastly, we have this transformation called "group". The idea is that maybe this returns 250 files and you want to process them in 25 pods; well, group is a very convenient way to create batches of 10 files at a time. Now, you can use the results of this template in another template that runs with withItems.
J
So, if you remember, withItems will automatically expand and create the pods, one for each item that you have, and each item could be a list; so the dependent of this data template could be a withItems container that expands into 25 containers, and you process each batch of 10 files in parallel.
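Pulling those pieces together, a data template along these lines might look roughly as follows. This is a sketch based on the design discussed above, not final syntax: the grouping transformation in particular is hypothetical here, and the bucket, prefix, project filter, and step names are placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: data-template-example-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: list-files
            template: list-files
        - - name: process
            template: process
            arguments:
              parameters:
                - name: batch
                  value: "{{item}}"
            # Fan out: one pod per batch of file paths.
            withParam: "{{steps.list-files.outputs.result}}"
    - name: list-files
      data:
        source:
          artifactPaths:                 # list the keys in an S3 bucket
            name: input-files
            s3:
              bucket: my-bucket          # placeholder bucket
              key: datasets/             # placeholder prefix
        transformation:
          - expression: "filter(data, {# matches 'my-project'})"  # keep one project's files
          - expression: "group(data, 10)"   # hypothetical batching transformation
    - name: process
      inputs:
        parameters:
          - name: batch
      container:
        image: alpine:3.13
        command: [echo]
        args: ["{{inputs.parameters.batch}}"]
```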
J
Up to this point, do we have any questions? All right, in that case I will continue. In this example we have a source, and since the source has to make a network connection and download some file names, that's essentially going to run in a pod, right?
J
We don't want to run that in memory. But you can also have a data template that doesn't have a source, one that essentially just does these little quick and easy filters and maps and groups, which can be done in memory. So here we have another data template; you can imagine that we first call this main one.
J
This main data template, like I mentioned before, expands to 25 pods, but now this data template can fan in and essentially take in the results of those 25 pods, and it does so through the input parameter pattern you'll be used to in Argo. Since this data template doesn't actually source anything from the outside world, it sources just from material that is already in Argo, we can actually run all of these transformations in the controller's memory.
J
So this will essentially immediately process the information, without my having to wait for a pod. That's one of the large benefits that we see with this data template: being able to do this kind of stuff without even starting a single pod. To continue our example, let's assume that our processing created a bunch of logs that we're not interested in, so we run a filter to exclude those files that contain logs.
J
And let's say, for example, that now that we have all these files, we want to group them by extension. So here we have a regular expression; in case you're not familiar with regular expressions, this particular one will group by the extension of the file. Maybe you want to zip Python files in a different way than you want to compress video files, or whatever; this could be a useful way to do so.
J
So that's essentially the spec that we are working with right now, and I just want to open up a few questions for discussion, if you guys have anything in mind; and if you don't, you're always welcome to visit 4958 and just comment any ideas that you have there, and obviously we'll discuss them.
J
But the main question that we're interested in discussing, and I think it's the only one that I'm going to open to the group, is: how do we provide the list of tools, groups, and filters in a way that is very extensible for the user, without us having to implement or enumerate them all ourselves? What we don't want is to have to create a list of useful tools, so that our users can just see it and start using them, but where we have to maintain that list. What we want is some way that we can very easily, maybe, load one library, or maybe provide access to the common bash tools running in the pod or whatever, such that we don't have to maintain this list.
J
Yeah, so we're targeting this for 3.1, and I'm not sure what the schedule will be, but it'll be the next release after 3.0, so it'll be in two releases. But if you guys want to get your hands dirty with it as soon as possible, I can always make a dev build and just share some images.
K
In general, do people think that this would be useful for them? This is a use case where I imagine people having trouble, not being able to do simple transformations in workflows, but I just wanted to, you know, get some validation that this is a feature people would actually use in their workflows.
A
I wonder if we could have a bit of a scripting language; it's just a bit of a wondering I've had today, about whether, you know, one of these transformations could be kind of scripted. The question, of course, is then: what is that scripting language? Simon, I don't think you talked enough about the kind of potential sources of data for this as well, did you?
J
Yeah, so, like I mentioned before, we have the idea of data sources; in the example that I have, the only one that's actually currently implemented right now is the get-artifact-paths one.
J
But we also imagine things like, for example, an HTTP request, or a curl, or a find to get a list of files in a specific way, or maybe reading a large CSV file, or reading a database, or maybe an RSS stream. We also had the idea, although Alex, I think, correctly suggested that it might not fit this template, of having a data source that's something like Kafka or some sort of stream processing; but I think Alex quickly said, and I agree, that that might be better for another idea that we're playing with.
J
Yeah, so not only do the transformations need to be flexible, but the sources need to be flexible too. Essentially, this template is a balancing act between not providing so little utility that no one uses it, and not throwing in so much utility that we overload and overwork ourselves maintaining it, because that's the whole point of containers. All right, and with that, I guess you guys are always welcome to visit this PR, and I'm going to pass it back to Alex to talk about, I think, emissary.
G
I have one tiny question. Doing the processing definitely requires a lot of resources in the controller, right? Because you said you're not using containers; I mean, in the controller we are doing such things.
J
That's actually a great question. Currently this is a very naive implementation: if we're not doing anything with external networking, we try to do it in memory. But, like you said, you can imagine that the resulting file list is immense, maybe so large that it could actually hold up the controller; so maybe part of the decision of whether to launch a pod or not would be to see how big the inputs are.
J
Maybe different functions will have different expected resource use, so maybe we also take that into account. Something I would want to mention is that anything the controller can do in memory could be done in a pod, and users will always be allowed to force a pod: if you think Argo might want to do it in memory, but you're like "no, I know this won't work", you can always add a pod strategy of "always", and then you'll just launch a pod regardless.
G
I see, oh yeah, that makes sense. The other thing I want to add to that, because I know you're using the in-memory case as an example for the secret loading: in our case we are using kiam, IAM access, for access, so sometimes running from the controller to access data is actually not feasible due to some setups. So yeah, that will work here.
K
Yeah. Artifacts generally, I think, are always going to create a pod, but I think the controller-level ones, like HTTP requests, would probably be done in the controller, unless for some weird reason they needed to be done in the pod.
J
Yep, that's pretty much what I was going to say: we've pretty much decided that any time anything artifact-related has to happen, it will happen in the pod.
J
Yes, and that's actually a great clarification. If you're familiar with how we do memoized nodes: when we create a node that hits a cache and it's memoized, we immediately create a node that immediately completes. So if you're worried that you're going to have to think, for example, "I don't know if this is going to be a pod or not, and I don't know if Argo is going to treat this as a pod or not depending on whether or not it runs a pod behind the scenes": it'll essentially all be the same, it'll essentially just be a run that completed within a millisecond.
C
Yeah, can I just... sorry to interrupt, this is Zubair. I want to second what was just said about the logs; that would be very helpful.
J
So,
to
clarify
the
the
log
request
is
essentially:
if
we
decide
to
not
run
a
pod,
you
guys
will
want
the
logs
to
be
stored
in
the
same
manner
as
they
would
if
they
ran
in
a
pod.
K
What logs are you interested in? I understand the artifact ones would be necessary: if I can't just download the S3 key because of permission issues, you need that surfaced up, or maybe you want that log archived. But if it's something like "I have a parameter and then you're trying to do a transformation", is there something about that that you would want logged?
L
Well, suppose I input a parameter that isn't what it expects; it looks like group might expect a JSON list, and if it's an invalid JSON list, I want to see the log. If my regex is invalid, I'd want to see logging about that. It might be fine that that stays in the controller, but it would just be nice if the logging interface behaved the same as pod steps, I think, irrespective.
K
The reason for the transformation failing would be surfaced in the workflow itself. If this data step filtered out everything, right, because your filter was wrong, because your regex was too restrictive, it would produce an output of nothing; or if, you know, the HTTP request failed because the URL was wrong. I think all of those errors I would expect to be surfaced in the workflow itself, so when you click on the node, it would explain why the node failed.
A
Okay. So, a feature coming up, probably in version 3.1 or 3.2, is going to be a new type of executor for Argo Workflows. For those of you who don't know what an executor is: the job of the executor is typically to run the particular pods in your workflow, and what happens is, when you start a pod in the workflow, you typically get a couple of containers attached to that pod, one called the init container and one called the wait container.
A
The role of the init container is basically to ensure that any artifacts you wanted in your pod are made available to you, and the role of the wait container is to actually save any artifacts you want saved back to S3 or GCP or what have you, and also to report back the status of your main container. So we have several different implementations of this code, using different mechanisms: one implementation uses the Kubernetes API.
A
Another implementation uses the kubelet API, one uses the Docker CLI, and the most recent one uses process namespace sharing to get access to the data.
A
What we wanted to be able to do was to run multiple steps in sequence in a single pod; I'll talk a bit about what those benefits are. We had a look around for some solutions related to that, and we came across how Tekton did this. What Tekton does is, when it starts up your main container, it effectively replaces the command that you specified with a new command that then launches your original command in a subprocess.
A
Okay, so what we've done is we've taken this, adapted the idea to how Argo Workflows does things, and called it the emissary executor. Now, the emissary executor has some slightly different trade-offs compared to other executors: it's typically toward the secure end of the executors, and it's typically about as performant as the other executors. Its main downside is that if you don't specify the command that your image needs to use, the emissary cannot infer that from the image itself.
A
So you actually need to specify it; that's only a small downside. But what the emissary can do, because it sits inside your container, is actually wait for something else to happen before starting your process. So what we can do is use the emissary executor to run multiple containers within a single pod. Let me give you an example of what that looks like.
So here we go: this is a new template. I've got a new section in my template called "pod", and under that it has a graph. Now, none of this is set in stone just yet; it's just intended to be indicative of what it might ultimately look like, and the names might change. Under "graph" there are basically three nodes of this graph, a, b, and c, and each of those runs the argosay demo, like a hello-world image. What this tells me is that container b is going to depend on container a, and container c depends on containers a and b, okay?
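As a rough sketch of that graph-in-a-pod template (the field names below follow the container set style the feature later took, rather than the exact demo on screen; argosay is the standard Argo demo image):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: container-graph-
spec:
  entrypoint: main
  templates:
    - name: main
      # All three containers run inside ONE pod, with dependencies between them.
      containerSet:
        containers:
          - name: a
            image: argoproj/argosay:v2
          - name: b
            image: argoproj/argosay:v2
            dependencies: [a]        # b waits for a to succeed
          - name: c
            image: argoproj/argosay:v2
            dependencies: [a, b]     # c waits for both a and b
```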
A
Let's create that; set the namespace, and then create that. What you can now see is that my graph has expanded, and I can see my workflow here. Now, this would originally have run three pods to achieve this; what it actually does instead is run three containers within a single pod.
A
There are a couple of benefits to that. The first kind of benefit is that it's a lot less expensive to launch a single pod than to launch multiple pods. I can give you another demonstration of a use case where this is going to make a big difference.
A
So this is a workflow that contains a graph, with a count of three, and it launches three pods; each of those templates runs three tasks. Now, with version 2.12 you'd need to launch one pod per task in the sequence.
A
So say you had a sequence of a thousand, and you needed to launch three tasks for each: you'd need to launch three thousand pods. But if you put all those tasks as containers within a single pod, rather than having to launch a thousand pods times three, you actually launch just one thousand pods, so you can get an order-of-magnitude performance improvement.
A
The namespace again; I'm sure I will do that every single time we do this. Okay, there you go, now you can see that I've launched my graph. It's a graph containing subgraphs, and each of those subgraphs contains three tasks; but because each task is a container in a single pod, you can see that they execute, I think, visibly faster than executing that as separate pods. Okay, what are the other benefits of running multiple containers within the pod? Well, this is the other main use case.
A
I think people may find this useful if you need to share some disk space between your steps: they can all have a common mounted volume and pass data between one another using that volume. So in this example, I'm going to put some raw data onto a volume; I've just called it /workspace and mounted the workspace as an emptyDir, and then that volume is mounted onto every one of my containers within my pod. They can all use that for inter-process communication, or just, you know, multiple steps all writing there, to produce a single output artifact created by multiple containers.
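A minimal sketch of that shared-volume variant, under the same caveat that the syntax was not final at the time; the volume name, images, and commands are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workspace-
spec:
  entrypoint: main
  templates:
    - name: main
      volumes:
        - name: workspace
          emptyDir: {}               # scratch space shared by all containers in the pod
      containerSet:
        volumeMounts:
          - name: workspace
            mountPath: /workspace
        containers:
          - name: produce
            image: alpine:3.13
            command: [sh, -c]
            args: ["echo 'raw data' > /workspace/data.txt"]
          - name: consume
            image: alpine:3.13
            command: [sh, -c]
            args: ["cat /workspace/data.txt"]
            dependencies: [produce]
      outputs:
        artifacts:
          - name: result
            path: /workspace/data.txt   # single artifact produced collaboratively
```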
A
And again, as I said, I forgot the namespace; there we go. We can execute that, and you can see this guy will just execute, and when he finishes we should see some output artifacts appearing on here. Oh, maybe not, sorry... there, maybe you can see that I've got an output artifact that's been aggregated from all the containers within the pod. And that's kind of, I mean, to be honest, that's kind of it.
F
Can you show the pod spec that was rendered out for one of the more complicated ones?
F
So, all of these containers obviously start at the same time, but some of these stages should not do anything until the previous one completes, in the DAG case. Does the argoexec process deal with that problem?
A
Correct. I can actually show you what it looks like in the logs for particular containers. So in this graph, container b has waited for container a, and container c has waited for containers a and b... oops, this is buggy, it shouldn't say that. You can see, then, in the outputs from your pod, it just says it's waiting on container a, and waiting on container b; and if you look at the timestamps, you can see the timestamps are slightly different between the two of them. For the second one, it waited three seconds for a to complete and then waited for b to complete.
A
Under that volume there are files, like "template", which contains the template; then there's a subdirectory called "ctr", slash the name of the container, and in that directory there are files such as "exitcode", to indicate what the exit code of that one was, and things like standard out and standard error when they're needed, which they typically aren't.
A
Any outputs are captured into that directory, and so the wait container can just copy those files from that directory. The way that the wait happens is basically: every three seconds it checks to see if the exit code file has been created. If it has been created and the contents of that file are zero, then it launches the subprocess; if that file exists and the contents are not zero, then it actually exits itself. So if a previous task doesn't complete successfully, the subsequent tasks don't run.
E
I have only one concern: since in the one pod there are so many containers, the requested resources, I mean the CPU, memory, those kinds of things, will be double or triple.
A
Yeah, so what you probably don't want to do with this is start two steps, one after another, both of which have very high CPU requests and run for a very long period of time, because it will double-allocate those resources. So there is a cost issue when doing this, and it may not be the right solution in some cases; it's typically more tailored towards situations where you've got a number of pretty short-running tasks.
I
Hey, so hi, I'm Jason, I'm from the Tekton team; Alex invited me because this is inspired by Tekton. One thing that Tekton does that I don't know if this does or can do: first of all, Tekton does this, but it can only ever run steps sequentially; it doesn't do a DAG within the pod, at least not yet. But for the resource question: if someone requests a Tekton task that runs three steps, and each requests two CPUs and eight gigs of RAM, it's actually going to squash that down, and when it translates that to a pod, it's going to take the max CPU across all steps and the max RAM across all steps and request that in the first container only, and each container afterwards in the pod gets a minimal request.
I
It doesn't do anything with limits, because limits are actually enforced. I don't know if that applies when you do things in a DAG and you don't know exactly when things will run; you don't know if two things will run together that both, you know, do need those requests at the same time. But I bet you could be smarter about this.
I
If you know that it is going to be a linear execution of containers, and possibly even smarter if you know certain things about the DAG and the resources, I think there's room for improvement there. If you're interested, I have experience with it and I can help if you need it. Thank you.
I
I actually also had a question about the command. In the docs you said the command has to be specified, and I think you said that the user has to specify the command, but in your workflow requests you just say the image. Do you look up the command from the registry? That's what Tekton does when you don't specify a command: it goes and looks it up. But I'm curious if you've come up with a better way to do it that we can steal back from you.
A
No, we don't do command lookup. What you can do: there are a couple of escape hatches here. One escape hatch is that, in your configuration of the controller, you can just write a list of images and say what the command should be for those images.
A
So that's one escape hatch. The other escape hatch is that, in version 3.1, it'll be possible to choose which executor you use for a particular workflow based on labels, so you'll be able to label your workflow to say "use the kubelet executor" or "use the Docker executor". The goal behind that is to allow people to try out different executors, to see if they work or if they don't; if the default one doesn't work under certain circumstances, they can actually configure the system to use a different one.
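For reference, a hedged sketch of that per-workflow selection; the label key below is the one the configuration examples use, but treat it as an assumption to verify against the docs for your version:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: emissary-test-
  labels:
    # Assumed label key for overriding the container runtime executor per workflow.
    workflows.argoproj.io/container-runtime-executor: emissary
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: argoproj/argosay:v2
```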
A
Sorry; because of the security, capability, and performance trade-offs, you can actually choose which one you want to use under each different circumstance, and particularly with Docker going away, as you know, in Kubernetes, that kind of mitigates that risk on the horizon as well. Maybe a v1.1 or a v2 of this feature would include a registry lookup; I think we would look to see, you know, how complicated it is to develop that and how much demand there is for it.
A
You need to determine how much to request for that, and request less than you think you might want to request for it, because the emissary doesn't consume very much in the way of CPU and resources at all; I think it's something like two meg, something even quite trivial.
I
On the registry lookup: Tekton does a registry lookup, and it's not terribly complex; the real downside to it is that it does the lookup from the controller. And so, if it's a private image, you have to give credentials to the controller to look up the entry point of that image; and there's only one controller, so you end up with this all-powerful controller that can read many, many images it shouldn't necessarily have access to.
I
We have looked at looking up the command on the node, basically doing a docker inspect ("what's your command?") and then running that. I don't know that we've decided to do it or not to do it, but it's good to know that someone else has already done it and doesn't seem to completely hate it as an idea.
A
So my philosophy is, you know, often it's better to develop the core capabilities of a particular feature and then see what people come back and ask for afterwards. So don't try to develop every single feature for your product; almost deliberately under-develop a particular feature. Then, when people come back to you asking for more, you can give it to them. An example of where I've done that in the last year is the SSO integration: that was deliberately written as a very minimal implementation, and then there were two medium-sized things that needed to be added to it afterwards to really fill that feature out properly, which were the support for JWEs and the support for email and email confirmation claims.
K
Alex, there's a question in the chat; it says: would we be able to have daemonized containers as part of the container dependency graph?
A
Interesting question. So obviously you can use sidecars with this: not only can you run your three main containers, but if you also want to run sidecars, you can do that as well. Actually, one of the biggest challenges around writing executors has very little to do with wrapping processes up; it's actually about correctly dispatching SIGTERM and SIGKILL to containers when they need them. You know, it's actually quite difficult to do. I haven't looked at or tested it with daemons, you know.
K
Yeah, I guess I'd ask Dylan to clarify: we have the daemon container feature, which actually launches a separate pod, and that should obviously just work, because it's a separate option in the template. Are you asking about the sidecar use case still?
K
Daemons that depend on each other, so you clarified that. I think that would just be nodes in the graph that have dependencies, I guess, but it would imply that they complete; so to me it's a little bit of an oxymoron to have a daemon dependency, unless you're talking about startup time.
A
Okay, any more questions? I'm a bit aware we're over time today, but I think we're having some good conversations and I'm quite happy to keep going; people haven't dropped off in great numbers, which normally happens.
A
Okay, oh yeah. I just realized I've missed a question about that: people are asking about running Python inside the controller as well, but let's circle back around to that. I haven't talked at all about plug-ins in this session; it's not a fully developed topic for us yet, but there may be ways to plug in things like Python in the future.
A
Okay, so next on our menu for today is another new feature in Argo Workflows 3.1, called conditional artifacts templates, which Bala is going to talk about. Bala, are you ready to take over?

M
Yes, thank you very much.

A
I'll let you grab the screen. Yep.
M
So hi everybody, I'm Bala, from engineering. Are you able to see my screen? Yes? Okay, yes, hi. I'm currently working on the feature called conditional artifacts and parameters. Basically, there is currently no way to specify a step-level or DAG-level output artifact or parameter if one of the steps or DAG tasks has a "when" condition: if you reference the artifact of any step or task which didn't run, the workflow will fail, saying "hey, I couldn't find an artifact".

So the motivation here is that we need condition-based picking of the artifact or parameter, based on the conditions.
M
You can do that here, because this uses the expr language, which supports all the expression conditions; this is mainly for the artifact side. This example is a coin-flip example, which has two steps with "when" conditions, so based on the value, that particular step's artifact is picked, from either "heads" or "tails".

Those are the quick things; you can expect this feature in 3.1. It opens things up so that, instead of hard-coding and only choosing one artifact or parameter, you can have a full-fledged expression to choose the artifact or parameters from the previous steps.
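A hedged sketch of that coin-flip pattern, using the expression-based artifact selection the feature introduces; the step names and artifact layout here are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: conditional-artifact-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: flip-coin
            template: flip-coin
        - - name: heads
            template: heads
            when: "{{steps.flip-coin.outputs.result}} == heads"
          - name: tails
            template: tails
            when: "{{steps.flip-coin.outputs.result}} == tails"
      outputs:
        artifacts:
          - name: result
            # expr expression: pick whichever step actually ran.
            fromExpression: >-
              steps['flip-coin'].outputs.result == 'heads' ?
              steps.heads.outputs.artifacts.result :
              steps.tails.outputs.artifacts.result
    - name: flip-coin
      script:
        image: python:alpine3.6
        command: [python]
        source: |
          import random
          print("heads" if random.randint(0, 1) == 0 else "tails")
    - name: heads
      container:
        image: alpine:3.13
        command: [sh, -c]
        args: ["echo heads > /tmp/result.txt"]
      outputs:
        artifacts:
          - name: result
            path: /tmp/result.txt
    - name: tails
      container:
        image: alpine:3.13
        command: [sh, -c]
        args: ["echo tails > /tmp/result.txt"]
      outputs:
        artifacts:
          - name: result
            path: /tmp/result.txt
```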
M
That's it from my side, given the time; next meeting I will do the demo for you guys. Do you have any questions?
A
Thank you very much, Bala. And to close out with the final topic for today, Derek is going to talk a little bit about Argo Events' upcoming features. Derek, over to you.
E
Yep. Thanks; I know we have been over time a lot, so I'm going to make it quick. I have nothing to share on screen, but I just want to give a heads-up about the new features in the next Argo Events major release, 1.3. The first feature we're going to have is Prometheus metrics: in the new event source and sensor pods, we will have a metrics endpoint exposed, and your Prometheus can scrape those metrics, so you'll know things like how many actions your sensor has triggered.
E
Let's say you use a sensor to trigger a workflow: then you'll know how many workflows this sensor has created. It also includes things like how many events your event source has received, or whether there was any error when triggering an action, things like that. For the configuration: if you use Prometheus directly, you need to configure pod discovery for the Argo Events metrics; if you use the Prometheus Operator, you need to configure a PodMonitor and such.
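As a hedged example of the Prometheus Operator route (the pod labels and the metrics port name are assumptions; check the release docs for the exact values):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: argo-events-metrics
spec:
  selector:
    matchLabels:
      app: my-eventsource    # placeholder: select the event source / sensor pods
  podMetricsEndpoints:
    - port: metrics          # assumed name of the exposed metrics port
      path: /metrics
```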
E
That's it for the Prometheus metrics. Another thing we're going to include in the next release of Argo Events is a validating webhook; we're introducing a validating webhook for Argo Events.
E
Currently, if you have any issue with your YAML file, let's say your event source is invalid or your sensor spec is invalid, you have to apply the YAML file first and then check the CR object's status to see if there's any error, or you need to go to the controller logs to see if there's any issue with the reconciliation. With the validating webhook, you will know if there's an error in your YAML file right after you apply the CR; it's returned to your terminal directly. That's one benefit. Another benefit is that we actually have some immutable fields in the CRD objects, for example the event bus authentication strategy.
E
That field is actually immutable: if you already have an event bus with the authentication strategy set to "none", which means there's no auth strategy, and then you want to change it to "token", it will be a disaster for the existing event source and sensor pods, so we don't want that field to be changed. With the validating webhook, it will prevent you from doing that.
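For context, the field in question sits in the EventBus spec; roughly, for a native NATS event bus (a sketch; the auth field is the point here):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  nats:
    native:
      replicas: 3
      auth: token   # immutable once created; switching from "none" would break clients
```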
E
So that's a benefit, I think. Right now the validating webhook will be an optional extension for the installation; it's not mandatory, you can choose to install it or not, it's up to you. And I think that's it; I'll hand it back to Alex.
A
We're often happy to create an engineering build for you if you want to try out a new feature and experiment with it, and to give us feedback on that feature; it's very valuable for us to get people's thoughts and ideas on the things we develop. Thank you to all our presenters today, especially Alexandre and Ang for their fantastic presentations earlier on.
A
It's very interesting to learn about how different people are using Argo Workflows, and if you are interested yourself in coming along and presenting at the Argo community meeting, or being more involved in things like writing blog posts, coming and doing guest blog posts, or any other kind of material, you know, we more than welcome that kind of stuff. It's a great opportunity for you to talk to the community about your work, and a great opportunity also for you to, you know, get a bit more exposure for what you're doing.
A
Finally, you know, this meeting was recorded and will be uploaded to YouTube, so you can share it with your colleagues tomorrow, and I'll drop a link into our Slack channel, probably sometime tomorrow. Our next community meeting is at the same time next month, though I do plan to do a short working session on Argo Workflows and security, around configuring workflows for security, and I'll let you know more about that soon. Okay, have a lovely day.