Description
2:30 Kubeflow Katib with Andrey Velichkevich and Johnu George
25:00 Plugins + Hooks - Bala / Intuit
41:00 Pod names v2 - J.P. Zivalich / Pipekit (https://pipekit.io)
46:00 Debug pause - Niklas / Northvolt
50:00 SSO/RBAC namespaced - Basanth / Intuit
A
There we go, okay. So thank you all for coming today. This is the first Argo Workflows and Events community meeting of 2022, yay, and the first one we've had for a couple of months, of course, because it's been a bit of a holiday season recently, so we didn't actually run the last couple of meetings.
A
Just a little bit of an overview of today: we're going to be getting a demo of Katib, which is one of a number of different applications that use Argo Workflows under the hood, from Andrey and, I put John, but it's Johnu, isn't it? Yes, Johnu. Okay, there we go, fixed: Johnu. They're going to be doing a demo, and afterwards I'm quite looking forward to seeing some kind of short lightning demos of features that could be coming in version 3.3. So each demo will only be a couple of minutes long.
A
You know, two, three, four minutes long. And for that, do add yourself to the document as an attendee. It's always great to know who's been at this meeting, especially if you're new; kind of shout yourself out.
A
If you don't know what Argo Workflows and Events is: Argo Workflows is kind of the de facto workflow execution engine on Kubernetes, and Argo Events is kind of a sister project that allows you to easily start workflows from different kinds of events, such as a Slack or, you know, a Kafka event. My name's Alex, I'm the lead engineer of Argo Workflows and Events. I work at Intuit and I've been working on the platform for about two years now. If you want to ask a question during the presentation, that's brilliant.
A
I love to hear the questions. You can either ask in the chat if you want to, or, you know, if you think it's going to be a quick one, you can shout the question out, or, typically, it's kind of better to just wait until the end when they ask for questions; that's great as well. We are recording this session and we tend to publish the recording afterwards on YouTube.
A
If you want to share the recording with somebody else, or if you have colleagues who are interested, I try to make sure it's timestamped, so if you do actually want to dive deep into a particular point in the presentation, a particular area you want to share, that's quite easy to do as well, because YouTube makes that really easy.
B
Great, hello everyone, I'm so happy to be in the Argo community.
B
Finally! And today we're going to present some very exciting updates from the Kubeflow side. I'm Andrey, I'm a lead of the working group in Kubeflow for training and AutoML. I've been doing Kubeflow for some time, I think almost three years, and I'm here with Johnu; you can introduce yourself as well.
C
Hi everyone, I'm a staff engineer at Nutanix. We have been working in Kubeflow for quite long, being co-chairs of the Kubeflow Training working group. Happy to be in this community.
C
So, great, yeah. Our goal is to provide automated machine learning features, or in short AutoML, to any Kubernetes user. In general, AutoML targets the different classes of algorithms that you basically see here: hyperparameter optimization, neural architecture search, feature engineering, model compression, etc. Currently Katib provides hyperparameter optimization and neural architecture search, and we are actually in the process of extending that to support multiple algorithms.
C
Andrey, next. Yeah, so if you just go one step deeper into AutoML, you could think of this as one project that targets different aspects of model optimization: it can be features, hyperparameters, or it can be the model architecture.
C
So you could think of this as an external automatic system, like external optimizers sitting there, looking at different configuration spaces, constantly optimizing the configuration depending on the metrics which are emitted from your model. It is continuously evaluated and reconfigured to get a better model at the end.
C
Next, please. Yeah, so talking about Katib: it's a fully Kubernetes-native open source project in Kubeflow, and it is one of the core Kubeflow components, bundled together in the default Kubeflow installation, and you can install it independently in any Kubernetes cluster as well. It currently supports hyperparameter tuning and neural architecture search, with an extra feature of early stopping as well.
C
It is framework agnostic, which means you could have your programs in TensorFlow, PyTorch or any ML framework, written in any programming language, and you automatically get the best use out of it. Since it is actually built on Kubernetes, it can be deployed on local machines or any cloud, and it is natively integrated with all Kubeflow components, say Notebooks, Pipelines (which is again built on Argo), the training operator, and even other Kubeflow components.
C
Next, please. Yeah, so I'll just take a very quick walkthrough of what the Katib architecture looks like, so that you'll get to know what it is. From a user, what you have to provide is a basic experiment spec, which describes an optimization problem: it describes what to optimize and how to optimize.
C
So you could basically provide your parameter space, and there's a trial spec which defines what to optimize. Once a user submits the Experiment CR, or the experiment spec in YAML format, the Katib experiment controller picks it up and it talks to a suggestion controller, which is again a custom controller, which creates a set of hyperparameters depending on the algorithm that is set. Katib supports many of these algorithms: random, grid, Bayesian, Hyperband, etc.
C
So, depending on the algorithm, it creates sets of hyperparameters, which are called suggestions in Katib terminology, and once they are created, against each parameter set or suggestion, trials are created. A trial you could think of as a worker job which is running your model training, and you can parallelize it depending on the configuration that you set. Each worker produces different metrics for your optimization, which are stored in your metrics DB, and the experiment controller sees them and evaluates whether the optimization goal is reached.
C
Otherwise it will actually go through the same loop again: get more hyperparameters, create more trials, and this loop continues till your objective goal is reached. So this is the main optimization loop which happens in Katib, and you can basically configure how much resource you want to set for a given problem. So you could say that, like, after 10 trials you can stop the entire optimization, or it will actually run indefinitely till your objective goal is reached. So, Andrey.
C
You can take it from there. Now we can talk about how it is integrated with Argo Workflows.
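The loop just described is driven by a single Katib Experiment custom resource. As a rough sketch of its shape (the field names follow the public Katib v1beta1 API, but the values and abbreviated trial template here are purely illustrative):

```yaml
# Hedged sketch of a Katib Experiment; values are illustrative only.
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  name: random-example
spec:
  objective:
    type: maximize
    goal: 0.99
    objectiveMetricName: validation-accuracy
  algorithm:
    algorithmName: random        # or grid, bayesianoptimization, hyperband, ...
  maxTrialCount: 10              # stop after 10 trials even if the goal is not reached
  parallelTrialCount: 2          # how many trials run at the same time
  parameters:
    - name: lr
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.05"
  trialTemplate:
    # what each trial runs: a plain Job, a TFJob, or, as in this talk,
    # an Argo Workflow (see the trial template sketch later on)
    trialSpec: {}
```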
B
Yeah, I will just speak more about how Argo currently fits in our ecosystem, so just feel free to stop me if you have any questions during the presentation. Speaking about Argo, I think many of you are just familiar with it, and it's, as Alex said, a state-of-the-art container-native orchestrator built on Kubernetes.
B
The ecosystem of Argo is huge. For example, in Kubeflow we have been using Argo widely for the last two years in Kubeflow Pipelines, and it's production-adopted in many public clouds, as I know, and Kubeflow Pipelines uses Argo under the hood as its main platform. Also a lot of cool projects are using Argo, and we were thinking how we can adopt this in the trial template because, as Johnu mentioned before, in the Katib architecture...
B
We have an instance called a trial, which is basically one training process. And, jumping to the next slide, a trial can be as sophisticated a process as you need, because usually model training is not very straightforward. You can have model pre-processing, you probably need to attach different volumes, you probably need to have a set of different workflows which combine into your whole training process, and for that reason we were thinking how we can use Argo to perform these tasks.
B
That is why, since our worker, which is basically what a trial is running, can be anything built on top of Kubernetes, we are adopting Argo custom resources as our worker, and that is why we can use multi-objective optimization in the future, and we can use, as I mentioned before, data pre-processing and post-processing.
B
We can also execute the specific training in different tasks because, for example, you can optimize a model which trains on completely different problems and completely different data sets, and you can just pull all of this into one optimization loop. I will show this in the example at the end. Next. And also you can have better tracking.
B
You can get all of the richness of Argo inside your optimization process. And so this is just a look inside at how the architecture looks and, as you can see here, in the worker you can specify the Argo Workflow, so each trial will actually spawn an Argo Workflow, and inside that Argo Workflow you will have the training process. Yeah, so let me jump to the demo, because I think it's more interesting to see how we're actually adopting this.
B
So first of all, I will show you one demo. Let me close the slides.
B
Yes, I will show you one demo of one Katib example which is, under the hood, using an Argo Workflow, and before actually showing it, I just want to show how we actually install Argo with the Katib integration. The installation is pretty straightforward: you install Argo from one of the releases, and there are only two steps that you need to follow.
B
First of all, you need to use the emissary executor, because Katib uses sidecars to inject the metrics collector, and that is why emissary is necessary for us. And I imagine the emissary will be the default executor in the future, so you won't really need to make these changes in the next releases. You also need to patch the Katib RBAC to make sure that our controller has access to the Argo resources, and you also need to specify the flags on the Katib controller, and that's it.
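The two setup steps Andrey mentions look roughly like this. The `containerRuntimeExecutor` key matches the documented Argo workflow-controller setting; the RBAC rule and controller flag below are a sketch, so check the Katib docs for the exact rules and flag names before using them:

```yaml
# 1) Switch Argo to the emissary executor, e.g. by patching the
#    workflow-controller-configmap in the argo namespace:
data:
  containerRuntimeExecutor: emissary

# 2) Let the Katib controller manage Argo Workflows: add a rule like this
#    to its ClusterRole, and pass the matching trial-resources flag
#    (e.g. --trial-resources=Workflow.v1alpha1.argoproj.io) to the controller.
- apiGroups: ["argoproj.io"]
  resources: ["workflows"]
  verbs: ["get", "list", "watch", "create", "delete"]
```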
B
You can use Argo in your Katib experiments. So this is the YAML of one of the experiments. It's very straightforward: our users just need to specify the objective, what they want to optimize, which metrics they want to optimize, which metrics they want to collect, which algorithm they're going to use. For our example we're going to use random, just a simple random algorithm. Then there is the trial threshold: for example, how many trials you want to run and how many trials in parallel.
B
You want to run. Then you specify the parameters; in this example we're going to tune only the learning rate. And then this is the trial template. So here, as you can see, as the trial specification we have the whole Argo Workflow specified inside the API, and if you're familiar with Argo it's not very complicated. We have only two simple steps here in this example, which are data pre-processing and model training, and in the data pre-processing step we simply print a random value.
B
We just simply get a random value by dividing 60,000 by some random value, and we push this value to our next step, which is model training, and we're using this value as the number of examples for our model. And since we're optimizing the learning rate, we need to specify the learning rate as an input parameter here. Also, for the API, you need to specify the primary pod labels, because our controller needs to know which pod is actually running
B
the training, and we also need to know which container is the main one in your training process, and this is the success and failure condition for the Argo Workflow. So for this example we're actually going to take this YAML and I'm going to try to apply it. So this is the completely brand new Katib UI which we're going to use. We are continually evolving this in the Kubeflow integration, which will provide a unified experience across the different Kubeflow applications.
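The trial template Andrey walks through embeds a whole Argo Workflow as the trial spec. A trimmed, hedged sketch of that shape (the primary-pod label and the two-step layout follow his description; the template and parameter names here are illustrative):

```yaml
trialTemplate:
  primaryPodLabels:
    katib.kubeflow.org/model-training: "true"   # tells Katib which pod runs training
  primaryContainerName: main                    # the container the metrics collector watches
  trialParameters:
    - name: learningRate
      reference: lr                             # the tuned parameter from the experiment
  trialSpec:
    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    spec:
      entrypoint: main
      templates:
        - name: main
          steps:
            - - name: data-preprocessing        # prints a random number of examples
                template: gen-num-examples
            - - name: model-training            # consumes it plus the suggested lr
                template: train
                arguments:
                  parameters:
                    - name: lr
                      value: "${trialParameters.learningRate}"
```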
B
So, first of all, you need to select the namespace, you need to click New Experiment, and here you just need to specify the similar information from the API by using just the UI forms, if you don't like YAMLs, because many data scientists don't really want to deal with YAMLs; they want to deal with some simple UI forms and the Jupyter notebook environment. This is actually why we are building an SDK for the data scientists, to make it as easy as we can. So you can specify the metadata.
B
You can specify the trial thresholds, the objective, the search algorithm. As Johnu mentioned before, we support a large number of hyperparameter tuning algorithms, and also neural architecture search algorithms. You can specify the early stopping techniques, which is very important if you want to avoid overfitting your model. This is the hyperparameters that you want to tune, this is the metrics collector specification, and the trial template.
B
So for this particular example I'm going to paste just the YAML into this form, and I'm going to change the maximum number of trials to eight for this example, and I'm going to use the argo namespace. Yes, let's click Create. So right now the experiment is being created, and while we're waiting I can show you that, for example, if you select a different namespace, you can see the other experiments that were run before. You can see the optimal trial; here you can see the optimal hyperparameters that the controller produced.
B
This is the suggestion which it spawns, and let me see how many trials have been created. So right now two trials are created, and if I check the Argo Workflows I will also see that two workflows have been created. So each trial corresponds to a whole Argo Workflow. And if I try to describe one of the trials, I will see that the parameter here, learning rate, has been substituted from the suggestion from the random algorithm. And before jumping to the UI...
B
I just want to check the logs. So, for example, if we take this workflow, this one, we're going to see two pods. The first one is running our simple script; we can also check the logs from this pod, yeah. If we check the logs from the main container, we will see some random value which is generated, and if we check the logs from the training process...
B
It should show the number of examples, and we're also going to see the learning rate which our suggestion algorithm produced. Once the training process is finished, we parse all of the results to our DB, and then you can analyze this information in our UI. So, going back to the UI: since we're using Argo, we can use all of the functionality of the Argo components, for example the Argo UI.
B
So, since each trial runs a separate Argo Workflow, you can analyze those workflows in the Argo UI. You can click on the particular, let me see, the running one that represents this one. You can link to the particular workflow, you can see the whole process, what's happening under the hood. You can also analyze the results from each step, for example from the data pre-processing: instead of going to the logs, you can see the input and output here.
B
So, as a result, we can see the output results, and if we click on the model training we'll see these input parameters here. And by using this UI you can also analyze all of your workflows. Imagine you can have, as I mentioned before, a very complex workflow, with different volumes attached, with different processes like post-processing; you can get your data from different sources. And you can also just combine these two UIs.
B
You can jump to our Katib UI, and if you click on the Experiments page you will see some useful information during your hyperparameter optimization process: for example, what is the current experiment status, what are the current best trial parameters, what is the current best trial performance. If you click the Trials tab, you will see how many trials have succeeded already and how many trials are running right now. If you click on a particular trial, you will see the metrics results.
B
So, as I mentioned before, we're collecting two metrics and we save these metrics to the DB, so our users can see what the performance was for the whole training process, and we also highlight the best trial, which got the best result, to provide the best hyperparameter results.
B
Also, you can click on Details to see more concrete information about your experiment, and you can also view the YAML to see some annotations or some useful information from your experiment API. And once all of the workflows are finished, the experiment will be finished as well.
B
So again, we're running two trials in parallel in this example, and each trial is just running a separate workflow. And, as we can see here in this example, since all of the workflows are finished, if we go back to our Katib UI, the experiment has also succeeded and we get our best hyperparameters; I think it's right here. So this is the best learning rate, and this is the best validation accuracy that we generated, and the best training accuracy. So yeah, this is pretty much it from the demo. I just want to jump... yeah.
B
I'm going to answer all of your questions in the chat; I just have one more very important slide that I want to mention before answering the questions. So let me, yeah, so let me just briefly speak about our community because again, Kubeflow is an open community and we always welcome people to contribute. We have a lot of components, not only in AutoML and training; as you know, Kubeflow is an end-to-end MLOps platform.
B
So we are more than happy to invite you to our huge community, and if you want to be a user or contributor, we recently launched a new process where you can be part of the release process, and you can also contribute to particular proposals. If you want to push something to the upstream, you can take a look at these examples. Also, we have regular meetings, you know, on Wednesdays; please feel free to check our meeting notes.
B
We have a Slack channel. If you're using Katib, please update the adopters list, and also, if you want to contribute, we have developer guides, help-wanted issues and proposals, so pretty much similar processes to what other open source communities have. We also have a roadmap; I think we will update it for 2022 pretty soon. We have a lot of exciting updates we want to deliver to the community. Yeah, so this is, I think, pretty much it. So let me actually, first of all, check the questions from the chat. Alex...
B
It's great, yes. Do you have any questions, by the way, for the demo or for the slides? I'm going to answer all of them.
A
No, it's cool. I think it's really great to see, you know, this kind of machine learning usage and to see these kind of different products working together, so I think this is a fantastic presentation. I think Ewan is also saying the same thing: this is a great presentation.
B
Thank you, Alex. Absolutely, yeah, because, imagine, Pipelines under the hood is using Argo, and there are a lot of use cases where this orchestrator helps us. I think in the AutoML field this will also be very valuable for us, pushing this forward.
C
Yeah, just to add, it's a very powerful concept, because you are basically optimizing an entire pipeline which consists of multiple steps. So typically most users would have their ML journey in a particular workflow; here we can basically optimize the whole workflow as a trial spec in Katib, which is very powerful. Cool.
B
Yeah, I think it would be great if you can go and try it, because we actually integrated this feature a couple of months ago, and it would be great if we can get some feedback from the users on whether they want to use this feature in production, because previously we supported only the Kubernetes Job to be optimized, and...
B
Now it's new, and we really want to see the feedback, because I think we discussed it previously with you, Alex, about the sidecars, and we want to see how it works in production, how the emissary executor will work for us in the future releases, and whether we'll have any problems doing this in production.
B
Yeah, it's more about the complex model training workflows, because usually, as I mentioned, you cannot put everything in one container, and the Kubernetes Job does not allow you to do that. For example, that's why we actually have TFJobs: it's for running distributed TensorFlow on Kubernetes.
B
You can do everything by yourself, but the TFJob will do it under the hood, all these TensorFlow distribution technologies, built in as a Kubernetes CRD. Oh...
A
Cool, okay, great stuff. Well, you know, thank you guys very much for coming along today and preparing that presentation for everybody. We really love this kind of stuff, and Terry has just shared the slides in the chat, if anybody wants to go and have a look at that. Okay, so the next agenda item is going to be about Argo Workflows 3.3. We've got a few demos, like short lightning demos, to show you what's going to be coming up. I'm pretty excited about this release.
A
The first demo will be from Bala. Bala's going to demo two new features called plugins and hooks, and he's going to demonstrate them together because they're kind of separate features, but they're the kind of features that, when you combine them together, actually give you kind of order n squared additional power or capability, rather than just order n. Bala, are you ready? Yes, I'm ready. Brilliant, I'll let you take over.
E
You can create your organization-specific logic as a plugin, and then this will be available at the controller level, so that all the workflows running on the controller can reuse the same plugins. Currently, in 3.3, we have introduced only executor plugins, which run in the agent pod, so there is only one per workflow; multiple plugin templates can run on a single pod, so the resource consumption is very low.
E
Next. So what do you need to do to create a plugin? You just need to create a ConfigMap with the plugin code and apply it in the argo namespace. I'll give a small example here.
E
If you go to the repository, there is a folder called plugins/executors with a bunch of sample plugins. We have, like, two plugins: one is a hello plugin and another one is a slack plugin. I'm just showing the simple one, hello. Currently we have a bunch of files here: we have a Python file, a plugin.yaml file, and a hello workflow. In our repository there is a make job which will generate a ConfigMap for you automatically.
E
But if you are outside of that repo, then you just need to create this ConfigMap, which has a structure with a sidecar; there you are going to define your plugin code and you are going to tell it what image you'd like to use. So here it's Python; that's why I'm using a Python image.
E
If you are on Java, you can use a Java image and write a Java service, and you can define which port this plugin is going to serve on, all that information. And there is a naming convention on the ConfigMap which says this is the plugin name, and there is a label which is the convention for telling the controller that this ConfigMap is for an executor plugin.
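Bala's hello plugin ConfigMap looks roughly like this. The `ExecutorPlugin` label and the `sidecar.container` data key follow the Argo 3.3 executor-plugin convention; the image, command and port here are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-executor-plugin
  labels:
    # the convention that tells the controller this is an executor plugin
    workflows.argoproj.io/configmap-type: ExecutorPlugin
data:
  sidecar.container: |
    name: hello-executor-plugin
    image: python:alpine                      # Java plugin? use a Java image instead
    command: [python, -u, /plugin/server.py]  # your plugin's HTTP service
    ports:
      - containerPort: 4355                   # the port the agent will call
```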
E
So that's on the plugin side. Next is hooks. We already introduced hooks at the template level for the exit handler, but now we are introducing expression-based hooks, which can be triggered during the workflow execution any time the execution meets that expression, but it will execute only once: that means the first time the expression is met it will execute, and we don't want to repeat the same trigger every time. The hooks can be configured at the workflow level and the template level.
E
Previously, when we introduced the exit handler hooks, they were only supported at the step or DAG level, but now we have added them at the workflow level as well. Under the hooks you can reference a template or a template reference: that means you can define your action in a WorkflowTemplate or ClusterWorkflowTemplate and refer to it in the hooks. Another advantage: it can take arguments. The previous exit handlers, workflow-level or template-level, would not take arguments, but this can.
E
The last one: this will be a good replacement for workflow notifications. Currently we have Kubernetes events, which emit the Argo Workflow status, like workflow started, running, failed, succeeded, all those things, but, as you know, those events are not a reliable mechanism. You can replace that with a hook, which will reliably execute the actions.
E
So one main thing: suppose you have external services which are managing your Argo Workflows process information, all those things; you can integrate with the hooks to send an HTTP request to that external service, to tell it the workflow progress information.
E
The second one is a very popular question in Argo Workflows: people want to get notifications about the Argo Workflow status. Currently there is a workaround, so you can use an exit handler to send it, but if they want, say, "hey, when the workflow started", or when the workflow is running, or which step is currently running, those kinds of things they couldn't do; the hooks will enable those as well. So here are some spec definitions for the workflow level.
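A workflow-level expression hook of the kind Bala describes looks roughly like this. The `hooks` map with `expression` and `template` fields is the 3.3 spec shape; the condition and template names are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
spec:
  entrypoint: main
  hooks:
    running:               # fires once, the first time the expression is true
      expression: workflow.status == "Running"
      template: notify     # can also be a templateRef to a (Cluster)WorkflowTemplate
    exit:                  # the classic exit handler is expressed as a hook too
      template: notify
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, hello]
    - name: notify
      container:
        image: busybox
        command: [echo, "workflow status changed"]
```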
E
This is the same spec; you can put it under each step in your steps template or DAG template, and it will be executed whenever the condition is met. Let's go to the demo. So in the demo I'm mainly using a combination of a plugin and hooks. I have a workflow where I need to get a Slack message whenever the workflow is running or the workflow has failed, and whenever a step has started running, and whether it succeeded or failed, those kinds of things.
E
D
I don't exactly have a question, but I wanted to confirm one thing. On the chat window, Alex was actually resolving some of, technically, mine as well as Nicholas's thought, which was basically around an example of a plugin. So, looking at your examples, what I understand is that with a plugin you can actually instantiate, basically, a Slack endpoint, and a hook is actually used to trigger a Slack message through that endpoint. Is that how it's working? Exactly, yes.
E
So, all right, okay. If you see here, you can configure your Slack things here, and basically the controller is making an HTTP POST message, taking all the arguments given to this function. You can do whatever you want here, whether you want to do Slack or PagerDuty or anything, you can implement it. So technically, the architecture behind it is that these plugins are running as a sidecar, as an HTTP service.
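Putting the two features together the way Bala's demo does: a template invokes the executor plugin via a `plugin` stanza, and hooks trigger that template on status changes. This is a hedged sketch; the `slack` key and everything under it are defined by the plugin itself, not by Argo:

```yaml
spec:
  hooks:
    running:
      expression: workflow.status == "Running"
      template: notify-slack
  templates:
    - name: notify-slack
      plugin:
        slack:                         # handled by the slack plugin sidecar;
          url: http://slack-webhook    # these keys are the plugin's own API,
          message: "{{workflow.name}}: {{workflow.status}}"
```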
D
Yeah, that's it. And sorry, one more thing: in that kind of scenario, whenever I'm specifying this plugin as a container image, where can I actually specify the trigger path, that okay, this is the path on this pod where the plugin should be triggered?
D
Okay, you got it? Yeah, I got it, but my question was that I was just trying to understand the behind-the-scenes working of this thing: when this plugin is triggered, what exactly tells the plugin, okay, route this request to this specific endpoint of this plugin? Because we are specifying the port in the plugin spec, but I did not see, or I did not notice, any endpoint-related specification, that okay, all the requests should be routed at this API route.
D
Your container is... no, I'm not talking about the host, I'm talking about, basically, the route, actually. So localhost slash, for example, something something. No, that, that...
E
The controller will construct it. So whenever you give the port, the controller will construct localhost colon 4355, yes, and send the request. The controller.
D
Yeah, so it will basically... okay, okay, okay. I thought that we could configure specific URLs, for example if there are some sort of multiple functionalities where I just want to, okay, not expose my functionality at the base route; then I thought we could configure that. Okay.
A
Thanks, Bala. If you want to find out more about that, you can obviously look at the blog post. Our next demo will be from JP; he's going to be showing a little feature called pod names v2, which is an opt-in feature, I think, in version 3.2 and 3.3. JP, you ready? Yep.
F
Thanks, Alex. Yeah, this is very much going to be a lightning demo, emphasis on the speed on this one, but this is a pretty cool feature. What it does is it takes the name of the workflow and the name of the template being invoked for a pod and concatenates those two, and makes that the pod name. So I'm going to show you a quick demo, real quick.
F
So first, we don't have pod names v2 enabled, and I'm going to go ahead and submit a workflow. We're going to see kind of what we're very accustomed to seeing: we've just got these workflow steps for each of these pod names, and if we go into the UI we can take a look here and pull up this sidebar, and here are our pod names. And, you know, this is good, but once you hit a certain number of pods or nodes that are being run, it can become a little bit cumbersome to know which pod is running which template and whatnot.
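As JP says, the feature is opt-in. In 3.2/3.3 it is switched on with an environment variable on the workflow controller; a hedged sketch of the Deployment fragment:

```yaml
# workflow-controller Deployment (fragment): opt in to v2 pod naming,
# so pods are named <workflow-name>-<template-name>-<id>
spec:
  template:
    spec:
      containers:
        - name: workflow-controller
          env:
            - name: POD_NAMES
              value: v2
```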
F
We can see that the template name has been appended after the workflow name when we take a look at the pod name here. And my CLI is getting wonky, all right, just going to have to go to the UI, we'll do it live. So here we see, yep, the workflow name and then the template name within the pods, and we can go to whichever pod and see that concatenated, and lastly, we can go to the logs.
F
So this was a pretty broad-reaching issue, and a couple of people were very helpful with it. So thanks to Severan from Akuity, Alex, Bala and Juan for approving and merging pull requests and doing releases; Andy, Shimorita, Krugner, Guillaume Gonzalez and Tianjin also did a lot of bug bashing as well. My apologies if I butchered anyone's name, but yeah, that's all I got, so check it out in v3.3.
G
F
That's a good question. I'm not actually sure about some sort of enforced max; we've seen, or I've seen, some pretty long pod names and I haven't seen anything get clipped, but to clarify, if you have nested templates, for example, it's not going to concatenate every single template name. Okay, okay. But yeah, thank you for the question.
A
I think that's quite a good question; I'm actually going to raise an issue on that one, because I think we should watch out for that. But I do think this was a really popular feature, it had a lot of upvotes, and I think it's going to be really useful, especially for simpler pipelines.
A
Where all the steps have a different name, and now when you do kubectl get pods you'll be able to see exactly which pod is which step, without having to do the kind of de-referencing you had to do before, which is quite difficult. So it doesn't seem like a massive change, but it will actually make people's lives a lot easier.
A
Okay, thank you JP. Now the next demo we have is from Niklas; it's a new feature called debug pause. Again, I think this is a feature that seems pretty simple on the face of it, but it's just such a massive usability improvement for debugging; I think it's going to make a big difference to people's lives. Niklas, are you ready for your presentation? Yeah? Cool, I'll let you have at it. Great.
H
Yep, perfect. Yeah, so I will just go through the pauses you can do now. This is the possibility to actually debug steps, and you can do it before and after execution. This is only applicable when you use the emissary workflow executor. You can look at the code; the PRs are there, you can see the numbers here, so you can go in and check them. It's using marker files for how it does the actual pausing.
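As a sketch of what this looks like in practice: you opt a template into the pause with an environment variable, then release it by creating the marker file inside the pod. The env var name and marker path below are assumptions taken from the debug-pause PRs, so verify them against the docs Niklas mentions once they land:

```yaml
# Sketch (assumed names; requires the emissary executor).
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: debug-pause-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.15
        command: [echo, hello]
        env:
          - name: ARGO_DEBUG_PAUSE_AFTER    # or ARGO_DEBUG_PAUSE_BEFORE
            value: "true"
# While paused, exec into the pod (or attach an ephemeral container),
# inspect whatever you need, then release the pause by creating the
# marker file, e.g.:
#   touch /var/run/argo/ctr/main/after   # "before" for a pre-execution pause
```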
H
There is also an issue for documentation that I will update, hopefully in the coming week. So with the demo, we will actually pause a container, a step, after execution. We will use one of the templates that we have in the tests, and in this case I will use ephemeral containers when I do it.
H
So if you want to do the same thing by executing into the same pod instead, you can do that, but just make sure that you have it set up so you can. So now I have Argo running, and I'm just using minikube in this case to have it running locally.
H
So we can create a pod. We can see that, in this case, let's bring this one down, we should have one running here. And this is a super simple one, it's in the tests, it doesn't do anything useful really. We can take this one, we take the name here, and then we just update here.
H
There we go, and then we should get a shell. Yeah, so now we should be able to debug. In this case we use ephemeral containers as well, so we don't need to have a shell in the actual container we're interested in debugging. When you're done, to continue the execution you just have to create a marker file; the name is different depending on whether you do the debugging before or after execution. So let's do that. We can see that we kicked out, and then, yeah, the CLI here should mark it as complete. So we can see that this one is now done.
A
Cool, cool. Okay, so we have one more demo, from my colleague Basanth. He's going to demo a cool new feature for making RBAC easier to configure when you're doing SSO. Basanth, you ready? Yep. I'll just let you take over and do the demo.
G
Hello folks, hi, this is Basanth, and today I'll be demoing the feature I like to call SSO RBAC namespace delegation. To understand this, I'll just do a quick recap of how SSO RBAC works today, typical use cases, the issues with the current SSO RBAC setup, and what the new feature entails. So this is how RBAC currently works.
G
So you need to create a service account and annotate it with certain rules, and this is done by the people who have installed Argo in the cluster. So you create a service account like this and give it a name, and these are the annotations that you provide. So what you're saying is:
If
the
person
who
is
trying
to
get
inside
your
argo,
close
tester,
if
he
is
part
of,
say
some
groups
called
as
admin,
then
basically
use
this
service
account
for
to
allow
him
access
to
the
cluster,
and
we
have
something
called
as
precedence
which
decides
that
if
there
are
multiple
such
series
accounts
which
one
to
which
one
to
use
to
give
higher
precedence.
Something
like
that
right,
so
yeah
and
now
let
me
talk
about
like
typical
kubernetes
setup
and
some
use
cases.
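The rule-and-precedence annotations Basanth recaps look roughly like this. The annotation keys are Argo's `workflows.argoproj.io/rbac-rule` family; the group name and precedence value here are made up for illustration:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: argo
  annotations:
    # Evaluated against the SSO claims: if the user's groups claim
    # contains "admin", this service account is used for them.
    workflows.argoproj.io/rbac-rule: "'admin' in groups"
    # When several rules match, the highest precedence wins.
    workflows.argoproj.io/rbac-rule-precedence: "1"
```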
G
So there is one central team which would install Argo Workflows in, say, the argo namespace, typically, and there would be multiple namespaces managed by different teams. And since the whole cluster is shared, essentially we would need to manage some sort of RBAC, to say that if a person is from team one, he or she should have access to only these four namespaces; if from team two, to a separate four namespaces, something like that. And this is actually a typical setup and use case for us at Intuit as well. So the problems with this are essentially these: any change that needs to be made to the SSO configuration, basically this setup of the service account and the associated roles and role bindings, goes through the central team.
G
"Please set up a service account", and so on; it's basically burdensome on that team. And apart from that, managing conflicts is hard, because it might so happen that one person belongs to two different teams, and he has to be given multiple rules, and you will have to manage a lot more permissions. So basically, granting permissions and managing them is a little bit hard.
G
So now we have developed something called namespace delegation; currently it is in beta. To use namespace delegation, you would have to set the environment variable SSO_DELEGATE_RBAC_TO_NAMESPACE to true. Now, with this, the way I look at it is you logically have two types of groups. Your cluster admin will create one service account which you will just use to log in. So the cluster admin would set up something like this, and he'll say: anybody who should have access to the cluster, I'll just allow him in. It can probably have, say, just read permissions, and it'll have a minimal precedence, so that if there are other service accounts, they could be used instead. And now the main thing is: if you are the service owner, the namespace owner, then in your namespace, you can configure this.
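Enabling the delegation is just the one environment variable mentioned above, set on the argo-server (a sketch; the surrounding Deployment fields are elided, and the feature is in beta in 3.3):

```yaml
# In the argo-server container spec:
env:
  - name: SSO_DELEGATE_RBAC_TO_NAMESPACE
    value: "true"
```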
G
So you, as the namespace owner, will say that if the person is from my teams, then I will allow him to log into the cluster, and essentially you can manage that whole thing yourself. So that is, on a high level, what it looks like. Now, coming to the demo: I have Argo Workflows running locally, and currently it is just installed with a service account in the argo namespace. So here I don't see any issues, and if I just want to run a workflow, I'll be able to run it. But now let's say there is a different namespace, called basanth. When I try to access it, it just says that the Argo server does not have permissions in the namespace basanth. So now, using this SSO delegation feature, essentially what you would do is create these service accounts and roles in your namespace.
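The namespace-local setup he applies looks roughly like this. It is a sketch with assumed names: the rule expression, role verbs, and the namespace are placeholders for whatever the team actually needs:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-team-user
  namespace: basanth          # the team's own namespace, not argo's
  annotations:
    workflows.argoproj.io/rbac-rule: "'my-team' in groups"
    workflows.argoproj.io/rbac-rule-precedence: "2"   # outranks the login-only default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-user
  namespace: basanth
rules:
  - apiGroups: [argoproj.io]
    resources: [workflows]
    verbs: [get, list, watch, create]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-user
  namespace: basanth
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-user
subjects:
  - kind: ServiceAccount
    name: my-team-user
    namespace: basanth
```

Because everything lives in the team's namespace, the team can apply it themselves without involving the cluster admins, which is the self-serve point made below.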
G
So you can see that there is a service account in the namespace basanth, and you can set up roles and role bindings, and everything is in the namespace that you want. And since you are the owner of this particular namespace, you don't need to reach out to the cluster admins and so on. So I'll just quickly apply this YAML, which should, yeah, which should configure it. And now, when I go back, the errors are gone, so essentially now I have permissions to view workflows, submit workflows, and all that.
G
This should hopefully succeed; there are no errors as such. So, more or less, that's the demo. Essentially what it gives you is self-serve: if you are the namespace owner, you yourself can configure who has access to your namespace and who doesn't. Yeah, that's pretty much what I had for the demo. Questions? Comments?
G
Okay, I guess not. Cool.
A
Thank you, thank you very much. Everybody says it looks awesome; I think this is a great feature. When we first implemented the SSO feature, we really understood how to do SSO well: a single central configuration pointing to whatever the organization's primary authentication server is. We didn't really know how to do authorization; I think we've kind of felt our way a bit there.
A
We've done a few different iterations on the authorization aspect, and I think one thing we didn't really get was that operators don't really want to configure the authorization, because teams tend to own their own namespace and configure the RBAC for their own namespace through things like cluster role bindings and role bindings, and that basically wasn't possible with SSO. So this now enables teams to set that up themselves using SSO; it just takes that burden off the operator and allows teams to self-serve.
A
So we've got a couple of cool things coming up in March and April. Netflix Metaflow, which is a quite new and actually quite popular Netflix data processing library, and we know that Netflix is really big into data processing; they will be coming to do a demo of their integration with Argo Workflows, so that's going to be really cool. And then Eric Meadows will be coming in April to do a demo of Plumber. We'll also be reviewing the roadmap, which has been updated quite recently; there's a link there if you want to take a look. And if you are interested in presenting at the meeting, or you want to come and get involved with, rather than a code contribution, some kind of community contribution, that's, you know, a blog post or an article or a presentation, or you're doing a conference.