Description
* Demo Argo Events and Workflows with Jupyter notebook - Vaibhav Page from Blackrock
* Demo of Python DSL - Marek Cermak from Red Hat
* Argo Workflow Survey 2020 Results - Alex Collins from Intuit
Slides
http://bit.ly/argo-wf-cmty-mtng
A: We got the time across in a confusing manner: in the invite it said 10 a.m. Pacific Standard Time, but obviously the clocks have changed in most countries and it's actually now 10 a.m. Pacific Daylight Time, so I'll have to try and do better on that going forward. But thank you for joining anyway. Of course, we always record these meetings, so you can always come back and watch later on.
A: I'm quite excited about today, particularly because we've got no less than two community demos, and we know that people really enjoy those kinds of demos, so I'm really looking forward to being able to share that information with you. This is also the first time this year that we've combined both Argo Events and Argo Workflows in a single community meeting.
A
Know
anybody
who
knows
pretty
knows
about
Argo
events
already
algo
events
is
essentially
a
way
to
have
events
come
into
a
posture
and
trigger
things
within
your
cluster
and
vy5
is
going
to
talk
about
that
and
shortly
and
give
us
kind
of
a
really
interesting
demo,
showing
you
how
you
can
stitch
together,
both
Argo
workflows
and
Argo
vents.
We're
also
going
to
have
a
demo
from
Marik
from
Red
Hat's
and
of
Python
DSL.
A: Now, we know that a number of people want to use Python for running their workloads on Argo, and it's obviously a lot easier to use than the YAML syntax. So he's going to do a demo of that later on, and we're hopefully going to close out with the results of the workflow survey we did about a month and a half ago. Hopefully you'll get to see some insights, and we'll share those slides around later on if you find them interesting.
A
You
know
just
ask
them
or
you
can
of
course
ask
your
questions
in
the
chat,
room
and
we'll,
hopefully
weed
them
out
for
the
recording,
and
you
can
get
answers
to
your
questions
if
you've
got
any
detail
ones.
Obviously
we
can
take
those
guys
offline
afterwards,
as
well
now
I
think
Marik
was
mentally
the
first
demo
america.you
online
at
the
moment.
B: Now let me first explain what the DSL is, because I get that question a lot. The DSL is basically an object-oriented way to define Argo Workflows in Python. In general, a DSL is a domain-specific language: it should provide some sort of easy-to-use and user-friendly abstraction over another definition, and the other definition, in our case, would be the YAML specification of Argo Workflows.
B: Here you can see what it can look like. On the left side you have the Argo YAML file, taken from the examples (the hello world workflow), and on the right side you can see the actual definition of that workflow as a class in Python. I'll explain a little more later on; this is just to show you such a use case and get you excited about it. So why is it useful? Why do we want to have something like that in Python?
B: You can basically define all your functions and test them ahead of time, before submitting the workflow, and therefore avoid clumsy in-cluster debugging of typos in your Python code. Now, I said "we"; who are we? We are basically a growing community of Argo Workflows users, mostly Pythonistas. I'm part of Project Thoth, which is a team effort at Red Hat that specializes in the usage of artificial intelligence in the analysis of software stacks.
B: So what can we do with this? Let me show you a quick demo. In the repository (the link will be in the last slide) you'll see these examples, so feel free to take a look at them; I'm just briefly going to show you some of them. So here you can see the hello world example and the definition that I showed you before.
B: The main idea behind this workflow definition is that Workflow is sort of a metaclass that you inherit from. Whatever you specify as an attribute of this Workflow will become part of the workflow's spec, if it is a valid attribute according to the specification. Then we can specify a template, as you're used to, which would match one of the templates specified in the spec, and using this decorator we just return a model that is valid for a template. In this case, that would be...
B: ...that would be a container, and you can see that it projects directly into the specification right here. What's below this is just for testing purposes, because we also use these for tests. So let's skip the hello world example and go to something more interesting, which would be the DAG diamond, again an example from Argo Workflows. There are four tasks, A, B, C, and D; they have dependencies, and I don't think they have inputs, except for this template.
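For readers following along without the screen share, here is a minimal sketch of the hello-world class being described, based on the argo-python-dsl examples of the time; import paths and model names may have shifted in later releases.

```python
from argo.workflows.dsl import Workflow, template
from argo.workflows.dsl.templates import V1Container


class HelloWorld(Workflow):
    # Any class attribute that is valid per the Argo specification
    # becomes part of the workflow's spec.
    entrypoint = "whalesay"

    # @template registers this method as one of spec.templates;
    # returning a V1Container model makes it a container template.
    @template
    def whalesay(self) -> V1Container:
        return V1Container(
            name="whalesay",
            image="docker/whalesay:latest",
            command=["cowsay"],
            args=["hello world"],
        )
```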
B: There is something called compilation, which might be interesting to you. There is a hidden compile parameter, which is true by default; if you set it to false, you can basically see all of these objects uncompiled, so don't be scared by this parameter. It may come in useful if you want to dig deeper into these workflows, but in most cases you'll want to leave it as is. The compilation call is what basically turns the Python objects into the relevant specification.
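A small illustration of that flag, assuming it is spelled `compile` (the talk only calls it a "hidden compile parameter"):

```python
wf = HelloWorld()                # default: attributes are compiled into the Argo spec
raw = HelloWorld(compile=False)  # uncompiled: inspect the raw Python objects instead
```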
B
Now,
what
might
be
interesting
is
the
order,
completion
of
basically
any
object.
That
is
here
so,
for
example,
if
you
have
artifact,
as
this
is
generated
from
the
swagger
specification,
you
can
see
all
of
the
all
of
the
attributes
that
you
can
provide
to
the
artifact,
sometimes
and
I
think
this
is
the
only
exception.
B: What might be useful as well are two things. When it comes to generating these manifests, sometimes you see really clumsy blocks of strings in Python and in YAML. What you can see here is that the DSL also generates block literals, so you get them in the same fashion as they would be defined in the original manifest. And as I mentioned before, there are two specialties, and those would be closures and scopes; let me explain very briefly what those are.
B: So let's say we have this demo specification, where we have a template which contains Python code: these three lines, just an import and a print, for the simplicity of the example. We can either do it, let's say, the old-fashioned way: just use the source as a raw string, provide it to the V1alpha1 script template, and reference that template. That would be the ugly way; I don't like it, because we work with Python and I want to work more natively.
B: So what a closure allows you to do is define the Python function in a very normal way, just like any other function, and then wrap that function in a closure. The only thing you have to provide is the image in which we should actually execute the Python code. In the future we might potentially eliminate even this, because in our team we analyze source code, dependencies, and functions directly, so we might actually do this dynamically and provide a relevant image.
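A sketch of the closure pattern just described, again following the project's published examples of the time: the body is ordinary Python that ships as the script template's source, and the image is the only required argument.

```python
from argo.workflows.dsl import Workflow, closure
from argo.workflows.dsl.templates import V1alpha1ScriptTemplate


class CoinFlip(Workflow):
    entrypoint = "flip"

    # The function is written like any other Python function; @closure
    # wraps it into a script template executed in the given image.
    @closure(image="python:3.7")
    def flip(self) -> V1alpha1ScriptTemplate:
        import random

        print("heads" if random.randint(0, 1) == 0 else "tails")
```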
B: Oh, is it the decorator? I don't know if it's the decorator; maybe not. But scopes allow you to bind these closures together. You might have multiple closures, and if you bind them in a scope, which you would denote with something like scope=..., then these closures share the variables and the objects defined in them. This again might be handy, so that you don't need to write functions that have a gazillion lines; you can simply have multiple functions and bind them in the same scope.
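A hypothetical spelling of that scope binding, extrapolated from the "scope=..." phrasing above rather than taken from the talk's slides:

```python
class SharedScope(Workflow):
    entrypoint = "report"

    # Closures bound to the same scope share the objects defined in
    # them, so logic can be split across several small functions.
    @closure(scope="stats", image="python:3.7")
    def helpers(self) -> V1alpha1ScriptTemplate:
        def mean(xs):
            return sum(xs) / len(xs)

    @closure(scope="stats", image="python:3.7")
    def report(self) -> V1alpha1ScriptTemplate:
        print(mean([1, 2, 3]))  # `mean` comes from the shared "stats" scope
```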
B: It [the Kubeflow Pipelines DSL] doesn't allow you to specify these Argo Workflows natively; it doesn't have all the capabilities. Whereas this DSL is really quite low-level, and it has all the models that you need to create full-fledged Argo Workflows. So you could say that the Kubeflow Pipelines DSL could in the future be based upon the Argo Python DSL. And how can you contribute? Take a look at Argo, take a look at Kubeflow Pipelines, join the community on Slack, and of course there is the GitHub repo for Argo.
D: I had one question for the community, Marek. This DSL, as you said, is focused on every feature of Argo Workflows. The question to anybody is: do you see a need to have another abstraction on top, specific to data processing or specific to machine learning, inside of Argo Workflows directly? Like, are people looking for more abstractions, or is this what people need?
B: As for people coming up with their own plugins: currently it's not that easy to get a grasp of it, but it should be possible in the future to come up with your own sort of modules, let's say, and to just plug them in and use them. Basically, each of these modules would just have to be very similar to a Swagger specification.
B: Although there has been an initiative to have a functional API for it, so that we can work with it not just on an object-oriented level but also on a functional level; that would make things like generating tasks much easier than it is now. But as far as abstractions and higher-level decorators are concerned, we're not planning anything right now. Okay.
D: [question not transcribed]

B: Well, that depends on the definition of stable, but 1.0.0 should be released quite soon, and by soon I mean in a matter of weeks, and hopefully it will attract more contributors once it is migrated under argoproj-labs. Then we'll see what the community really wants. As far as the roadmap is concerned, my biggest concern right now is to keep up with Argo releases and to provide support for each release. Currently the stable one, so to speak, is 2.4.0.
D: [question not transcribed]

A: That's correct! It was 2.5 that was the big release and a big change: we introduced the Argo Server along with all the various APIs that come with it. Those APIs are basically stable now, so you shouldn't expect them to change in any kind of breaking fashion from now on. It's just that you need to support these APIs if you want the full and complete feature set.
C: Oh yeah, sure. Thanks, Alex. Can you guys see my screen? Yes? Perfect. Hello everyone, my name is Vaibhav, I'm a software engineer at Blackrock, and today I'm going to basically discuss what Argo Events is. I assume some of you are already using Argo Events in some shape or form, but for the folks on the call who are not familiar with Argo Events...
C: ...I just want to give a brief overview of what the project is about, how it integrates with workflows, the different types of use cases you can use this framework for, and the use cases we use it for at Blackrock. So, to start with, what is Argo Events? It reads that Argo Events is a Kubernetes-native event-driven workflow automation framework. There are a lot of things going on here: the first is "event-driven", and what that means is...
C: So Argo Events is the framework which basically stitches events together with workflows, and it helps you basically automate a lot of pipelines. A very high-level architecture of Argo Events: it has two main components, gateways and sensors. A gateway listens to events from all these different sorts of external event sources, such as S3, GitHub, SQS, SNS, GCP Pub/Sub, Azure Blob Storage, all of that. And then you have the sensor; the sensor is the component which actually triggers the workflows for you. So by using both gateway and sensor...
C: ...but if you look at a real-world use case or a real-world scenario, most of the time you want to actually automate that machine learning model or that ETL pipeline upon some event. For example, let's say there is a file drop on S3 and you want to trigger your ETL pipeline, or there is a push event on GitHub and you want to run your CI pipeline. That's where Argo Events comes into the picture, and it helps you basically automate your pipelines easily.
C: So let me first show you what those two Python scripts look like. These are simple scripts: the first basically uses a Gaussian filter to smooth the image, then adds some noise, and then stores the output image in the S3 bucket. And then I have a different script over here; what it does is basically read that output file from the S3 bucket and compare it with the original file, looking for a similarity measurement, and depending upon whether the similarity was more than 80 percent or less than 80 percent...
C: ...it dispatches a message on a NATS queue. I'm just using NATS as a mechanism to get a notification of what has happened in the workflow; I could have put a Slack notification here instead, or email or something like that, some sort of messaging mechanism that will let us know what has happened with this particular script that runs in the workflow. So if you look at the first script, it basically applies the noise, and it takes some parameters.
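The scripts themselves aren't shown in the transcript; a condensed sketch of the pair, with bucket names, endpoints, credentials, and parameter names all assumed, might look like this (scipy and scikit-image for filtering and similarity, minio for storage, nats-py for the queue):

```python
import nats  # nats-py; run check_similarity with asyncio.run(...)
import numpy as np
from minio import Minio
from scipy.ndimage import gaussian_filter
from skimage import io
from skimage.metrics import structural_similarity

s3 = Minio("minio:9000", access_key="...", secret_key="...", secure=False)


def add_noise(in_path: str, filter_a: int, filter_b: int, noise: float) -> None:
    """Script 1: smooth the image, add noise, upload the result to S3."""
    img = io.imread(in_path, as_gray=True)
    # Mapping the filter A/B parameters onto per-axis sigmas is an
    # assumption; the demo only says they size the Gaussian filter.
    smoothed = gaussian_filter(img, sigma=(filter_a, filter_b))
    noisy = np.clip(smoothed + noise * np.random.randn(*img.shape), 0.0, 1.0)
    io.imsave("/tmp/out.png", (noisy * 255).astype(np.uint8))
    s3.fput_object("output", "out.png", "/tmp/out.png")


async def check_similarity(original_path: str) -> None:
    """Script 2: compare the output with the original, report on NATS."""
    s3.fget_object("output", "out.png", "/tmp/out.png")
    original = io.imread(original_path, as_gray=True)
    noisy = io.imread("/tmp/out.png", as_gray=True)
    score = structural_similarity(original, noisy, data_range=1.0)
    nc = await nats.connect("nats://nats:4222")
    verdict = "success" if score > 0.8 else "failure"
    await nc.publish("similarity", f"{verdict}: {score:.3f}".encode())
    await nc.drain()
```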
C: Again, you use Papermill, which is for parameterizing notebooks; Argo Events and Argo Workflows, pretty obvious; and MinIO acts as kind of a storage for the output images and output files. So this is the flow. Let's say there is a researcher who is trying to use this noise model to generate images that have some similarity with the original one, and that researcher wants to pass some parameters for the Gaussian filter and for the noise.
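Under the hood, the notebook execution step boils down to a Papermill call; a sketch, with the notebook names and parameter keys assumed:

```python
import papermill as pm

# The sensor-triggered workflow runs the noise notebook; Papermill
# injects the researcher's values into the notebook's "parameters" cell.
pm.execute_notebook(
    "noise.ipynb",
    "out.ipynb",
    parameters={"filter_a": 5, "filter_b": 5, "noise": 0.1},
)
```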
C: ...which does the image similarity check, and it's going to publish the result on NATS. So let's look at this in action. I already have this setup running in my local Minikube cluster: I have a webhook gateway and a webhook sensor running, and a MinIO gateway and a MinIO sensor. A gateway and a sensor are pretty much of a particular type; for example, the webhook gateway listens to HTTP requests, and the MinIO gateway listens to events on MinIO S3 storage.
C: So what I'm going to do is first send a request to the webhook gateway, where I'm going to pass these parameters: filter A and filter B are the size of the Gaussian filter matrix, and then how much noise we want to introduce. Treating me as the researcher, these will be my parameters to fine-tune the model notebook. So once I make a POST request, the sensor is going to trigger the Argo workflow.
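From the researcher's side that POST is just an HTTP call; something like the following, with the port, path, and payload shape being assumptions:

```python
import requests

# The webhook gateway receives the request and hands the payload to the
# webhook sensor, which templates the values into the workflow.
requests.post(
    "http://localhost:12000/noise",
    json={"filter_a": 5, "filter_b": 5, "noise": 0.1},
)
```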
C: Let me show you. So this is what the output looks like, with the image, and as soon as the image was dropped into this bucket, the MinIO gateway triggered another workflow to match this image with the original one, and it did the match. Let's check the similarity measurement on NATS: it says failure, because the similarity was only 0.729 and we are expecting something over 0.8.
C: Now let me actually go to the gateway and sensor definitions, so that you guys get more of an insider view of how it works. So whatever events I pass to the webhook gateway are passed to the webhook sensor, and over here I'm basically using Papermill in order to execute the Jupyter notebook that introduces the noise, but I'm going to pass these parameters on the fly to my container.
C: So this is how Argo Events lets you basically extract different values from an event and then inject them into your workflow definition, so that you can change, on the fly, the parameters to whatever command you're trying to run, or any other field within that workflow definition. I can change whatever I wish using the parameters on the fly, and run that. So this is how the webhook sensor basically takes the input request and changes the values of the filters and the noise amount, and the same goes for the MinIO gateway.
C: So now, as a researcher, I can say that these parameters are perfect for my model, and my model is something that I can deploy into production. So using Argo Events you can set up pipelines like this. This was a very simple, straightforward example; you can essentially take the concepts from this example and apply them to actual machine learning models, where you are trying to basically fine-tune different sorts of models and scripts to get an optimal result.
C: So what this whole entire pipeline is doing is basically taking that Jupyter notebook, a simple Python script, and converting it into an HTTP server, so that another team member, or any other employee in the same firm, can basically just make a curl request to that HTTP server and get the output from that Python model. So let me go to the MinIO browser; this is kind of like the MinIO UI.
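The demo's own server is a Gin-based one wired up by the pipeline; purely to illustrate the "notebook in, HTTP endpoint out" idea, a toy Flask-plus-Papermill equivalent could look like this (every name here is invented for illustration):

```python
import papermill as pm
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/predict", methods=["POST"])
def predict():
    # Run the productionized notebook with the caller's parameters and
    # direct its output to the bucket the caller asked for.
    body = request.get_json() or {}
    pm.execute_notebook(
        "out.ipynb",
        "/tmp/run.ipynb",
        parameters={"output_bucket": body.get("bucket", "user-one"), **body},
    )
    return jsonify({"status": "submitted"})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```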
C: So I have the output .ipynb file, the Python notebook file, over here. I'm just going to download this, and I'm going to actually push it into a bucket called production. What that means is: when I push any notebook file into this production bucket, there is a gateway that is configured to listen to this bucket, and as soon as something gets dropped into this bucket, it's going to make that Jupyter notebook available as an HTTP server.
C: ...and as soon as I uploaded that, yes, you can see there is now a new deployment, out.ipynb, which is the Jupyter notebook; it's deployed as a Deployment, and it's exposed through a simple Gin HTTP server. If I go back to the terminal: in this instance I am a different user now, and I want to use this particular model to get the output result, so I created another user bucket.
C: So what I'm going to do now is basically query the model that's running as an HTTP server. What this particular POST request does is: as a user, I am saying, hey, I am making this request, store the output of the model in the bucket called user-one. So as soon as I... I just need to port-forward. There, it works, right?
C: See, it's the same kind of image, and it matched at more than 80 percent, so we kind of successfully productionized our Python model. You can basically treat this Python model as any other machine learning model; we are taking your Jupyter notebook from a research mode into a production mode.
C: So that's it for the demo. It was super easy to set everything up through Argo Events. These are the links; this demo is already available under my GitHub account. If you are not familiar with Argo Events, I would encourage you to go to the GitHub repo and check the code out, and if you want to join the Slack channel, this is the link. And we at Blackrock are hiring for our data science and compute platform.
E: Yes, can you hear me? Yes? Hi, this is Thomas from [inaudible]. So we use both products, Argo Workflows and Argo Events. I have to say a huge thanks to the entire Argo community; the support we're getting, and the product itself is just so solid on Kubernetes, I can say it's really nice. A question I have for Vaibhav: we currently submit one...
E: ...big Argo workflow file into the Argo Workflows engine, and I would like to break it into smaller pieces, so break it into workflow templates, such that the sensor can submit workflow templates in a certain order. You know, you need to first await a workflow template, or you need to follow a certain order, before you submit the actual workflow. Is this doable with...
A: It depends on your use case. So in 2.7, I think, we're introducing a new command for submitting resources, and that allows you to basically submit a workflow template as a workflow, so you can treat workflow templates as effectively a library of reusable templates. And the same goes for cron workflows; in fact, you can submit a cron workflow in the same manner, if you want it to run now.
C
But
I
think
once
I
think
the
sensor
can
leverage
that
functionality,
but
as
of
now,
I
think
what
we
can
do
is
you
can
split
your
workflow
into
different
triggers
and
you
can
either
wait.
I,
basically
sequentially
go
through
those
triggers
and
run
them,
or
you
can
basically
set
up
a
resource
gateway
that
watches
certain
type
of
kubernetes
resources
being
created
in
the
cluster
and
then
take
some
action
on
that
I'm,
not
sure
about
the
second
approach,
but
if
the
Argos
of
met
with
the
resource
is
introduced
into
21
can
definitely.
A: Yes, brilliant. We'll have to check these things out. So, around six to eight weeks ago, I can't remember the exact date, we sent out a Google Forms survey, and we got back the responses a couple of weeks ago and have been collating them to see if we can draw any kind of insights from them. I'm going to go through this relatively rapidly, so if you particularly want to ask about a particular topic, then do jump in. The reason I'm going to go through it relatively rapidly is that for each question I'll just talk briefly:
A: ...why did we ask this question, what did we learn, and then kind of move on from there. The first question we asked about was what kind of roles people were involved in. We wanted to understand, partly, how strong the ML influence is in there, and it's quite interesting to see some interesting roles, such as data engineer and data engineering consultant, come out of this. And this is a word cloud, which is quite, you know, 2015, I guess, but the larger...
A: ...the larger the item is, the more responses were related to it. The second question we asked was what the use cases were, and we got just a lot around machine learning and data pipelines; I suspect this is probably nothing new to anybody here. We also got one or two responses around doing continuous delivery, which I found interesting.
A: We wanted to ask about features. The reason we wanted to ask about this is that we want to know how important these features are to everybody, and there weren't too many surprises here: some of the newer features, such as offloaded node status, were not heavily used, whereas things like artifacts and parameters are used almost completely across the board.
A: We asked about executors; we were interested to see how that would spread, and obviously Docker dominated the executors people used here, with the kubelet executor actually coming in last place. We actually use the process namespace sharing executor ourselves. And for artifact storage, I feel like there are probably no surprises here either, with S3 really dominating artifact storage.
A: Again, what alternatives have you considered? What we didn't ask is why you switched from these. It would be really interesting, for anybody who has chosen to switch to Argo Workflows from these, to know a bit about why you did that. And again, you can see there's a note here for Drone CI in the list, and Tekton.
A: We asked about versions; we're pretty keen to ask a bit more about this in the future. One of the hot topics within the core team is whether or not we should be releasing quite so frequently as we are, and what the right release cadence is. This is quite interesting; we've also asked the same question recently.
A: It looks like most of the people who were on 2.5 are now on 2.6, but most of the people running 2.4 are still running 2.4; it would be interesting to know from people why they hold back on that. The final question we asked was: what features could we build that would make your experience better? I'm interested to hear from anybody who wants to comment on these, I believe.
A: Okay, well, thank you all very much for joining us today. I hope you've enjoyed the demos. Do drop onto the Slack and say hi to everybody, don't forget to tweet about things you enjoy, and don't forget to keep helping out; you know we love it when people get stuck in and commit code contributions. I hope you all stay safe and well.