A
Good morning everybody, or good afternoon or good evening, wherever you are in the world. We're all doing well at home, I hope; you've all managed to get dressed this morning, and congratulate yourself if you managed to do that. I certainly do that every day. What are we doing today? Well, welcome to the fourth Argo Workflows and Argo Events community meeting of 2020. We've got a couple of new demos and features coming up today.
A
We've got a bit of open discussion today, because there are a couple of topics we want to talk about at the end of the meeting. For those of you who are not familiar with Argo Workflows or Argo Events: Argo Workflows is a container-native workflow solution, and Argo Events is a container-native way to trigger things within your cluster, and they're quite often used together, which is why we talk about them in the same community meeting. Today's community meeting is a bit Argo Workflows focused, but obviously a lot of...
A
...people using Argo Events will also be using Argo Workflows, so I hope it's useful. So who are we? My name is Alex; I'm a principal engineer at Intuit working on Kubernetes, or K8s, to use the new-fangled word for Kubernetes, but that's what we're working on. I work with Bala, Derek, Alex, Jesse, and Simon on the Argo project, which we are very pleased to have recently been accepted into the CNCF.
A
We are recording this, so you can watch it or share it with your friends afterwards. If you want to ask questions, there are a couple of different ways to ask them: one is to ask out loud using your voice, or you can ask in the chat channel we have here on Zoom, or, if you want to have a longer conversation about something, you can obviously come and find us on the Argo Slack channel as well.
A
Okay, so we're going to have a couple of demos to start with. Firstly, Derek is going to do a demo of the new GCP support coming in, I think, v2.7, and then Bala is going to give a demonstration of a new type of workflow template that is cluster-wide; again, I think that's a v2.8 feature, I'm not sure, you can correct me on that. So Derek, do you want to just take over? Take the controls. Yep.
B
All right, today I'm going to do an introduction to the new GCS storage artifact support in Argo. We are introducing a new spec for the artifact configuration. We used to do GCS storage artifact support by using the S3-compatible API, and you needed to configure interoperability in the GCP console, and people complained: why can't I use Workload Identity if I'm running my workload on a GKE cluster? So now you can use a new spec to do that. I'm going to make the demo quick and leave more time for the remaining topics.
B
So the new spec in the artifact configuration looks like this: we used to use s3 here, and now we introduce gcs. What you need to do is configure your bucket here, give the bucket name, and then your key; the key is the path in your GCS storage to your object. Before that, I needed to create a service account key, download it as a JSON file, and store it in a Kubernetes secret, and here is the secret reference for my demo.
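Derek's spec might look roughly like the following; a minimal sketch, where the bucket, object key, and secret names are illustrative placeholders rather than the exact values from the demo:

```yaml
# Output artifact stored in GCS (new spec, ~v2.7); names are hypothetical.
outputs:
  artifacts:
    - name: result
      path: /tmp/result.txt
      gcs:
        bucket: my-bucket                # placeholder bucket name
        key: path/in/bucket/result.txt   # object path ("key") inside the bucket
        serviceAccountKeySecret:         # secret holding the downloaded JSON key
          name: my-gcs-credentials
          key: serviceAccountKey
```

On GKE with Workload Identity, the `serviceAccountKeySecret` block can be omitted, as Derek explains next.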
B
The demo uses that credential, the service account key. If you are running your workload on a GKE cluster and you're using Workload Identity, you can just ignore this part; it's not needed. In that case, all you need is the bucket and the key. That's it. I'm going to run this workflow quickly, but I'm going to use the other one.
B
Let's
say
this
one
without
the
key,
because
I'm
running
my
testing
transferring
GK
and
they
already
configured
work
right
any
for
the
service
can
I'm
using
and
so
I'm
gonna
run
this
workload.
Real
quickly
and
then
it
it
tells
the
directory
from
my
GGC
s,
storage
and
then
you
are
just
your
hours
or
are
to
shoot
all
the
volunteer
application
in
your
under
the
default.
The
folder
Argos
and
man.
A
Okay, great stuff. Thank you, Derek. Now Bala is going to give us a demonstration of cluster workflow templates. I won't steal his thunder; I think this is a pretty cool new feature, and it's going to solve quite a few interesting use cases. I just want to hand over to Bala to take over from here.
D
Compared to the workflow template, the only difference is that this resource is cluster-wide, a cluster-scoped resource. That means you can refer to that resource from all the namespaces in that cluster. The CRD spec is exactly the same as a WorkflowTemplate; the only thing is that the kind is ClusterWorkflowTemplate.
D
And basically, when you are referring to it from a workflow, you need to give one additional flag, clusterScope, set to true. Then the controller will identify that the reference is a ClusterWorkflowTemplate, and it will get it from the cluster-scoped resources.
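A minimal sketch of what Bala describes; the template name and image here are placeholders, not the ones from the demo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate   # cluster-scoped, unlike a namespaced WorkflowTemplate
metadata:
  name: shared-whalesay          # no namespace: referenceable from every namespace
spec:
  templates:
    - name: whalesay
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello from any namespace"]
---
# Referencing it from a Workflow in any namespace:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: use-shared-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: call-shared
            templateRef:
              name: shared-whalesay
              template: whalesay
              clusterScope: true   # the extra flag the controller looks for
```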
D
Then, as you probably know, we have a CLI where you can create, get, list, and delete things; for the ClusterWorkflowTemplate there is a new command, argo cluster-template, and there's API support also, so you can programmatically create, get, list, update, delete, and lint your ClusterWorkflowTemplates, and there's a UI.
A
Thank you very much, Bala. Just in case you've got your microphone unmuted and your family's in the background, it might be good just to double-check your mute. Thank you. So I think that's quite a nice feature. We introduced workflow templates in, I think, 2.4; is that right, Bala? Yes. This really builds upon that, allowing you to use the same templates in multiple different namespaces; the templates become more like a shared library, a shared resource, across the cluster.
A
Okay, great. Now we've got coming features, so this is really an opportunity for you guys; you're not going to see much in the way of new code or a demo now. This is really an opportunity to see what we've got coming up, and we want to get your feedback on these new features. Do you think they work in the way that you'd like? Do they solve the use...
A
...cases that you're looking to solve, or do you think they need to be generalized to better solve some of the use cases you have? We're going to have Simon talking about new dependency logic that allows you to code more sophisticated dependencies between steps in your workflows, and maybe we'll get Bala to talk about the workflow semaphore feature, which will allow you to prevent two workflows running simultaneously.
A
And then Simon will do a demo, or rather an introduction, of container sequences, which is, I'm actually not sure, a better way to improve the performance of your workflows and also to make it easier to work with artifacts shared between workflow steps. So Simon, are you ready to take this away? He doesn't know... yes.
F
All right, can you see my GitHub screen? Yeah? Cool. So I just wanted to quickly show you guys what I've been working on and ask for your feedback. We've had many issues arising from very specific edge cases when using DAGs. For example, here's one where a user has a DAG task...
F
...that is skipped by a when condition, and after that task is skipped, subsequent dependencies, for example this dependency that depends on that skipped task, error out or behave in an undefined or unexpected way. As a way to mitigate this, and also as a way to give you guys more control over how your DAG runs, we've enhanced the syntax you can use with a new field called depends. Currently we have dependencies.
F
Allow
you
to
set
tags
tasks
at
which
the
current
test
depends
on,
and
that
is
pretty
much
it
it.
For
example,
tests
II
can
depend
on
a
and
B,
but
it
doesn't
really
allow
you
to
give
that
complex
logic,
or
maybe
you
want
to
test.
You
only
proceed
if
this
test
failed
or
if
this
text
failed,
and
this
other
test
succeeded.
So
with
this
feature
we
sort
of
want
to
allow
you
guys
to
essentially
take
control
of
your
DAGs.
F
We
currently
support
all
these
different
states
succeeded
and
failed
skipped
completed,
which
means
either
succeed
or
failed
or
any
which
means
either
complete
or
skipped.
So
this
sort
of
this
sort
of
syntax
could
alleviate
issues
such
as
the
one
we
just
saw
here.
I
have
some
examples,
so
I
I
would
invite
all
of
you
to
essentially
drop
into
this
PR
number
2693.
Take
a
look
at
the
spec
comment
with
any
feedback
or
any
use
cases
that
you
think
this
solves
or
doesn't
solve.
So
I
can
take
that
into
account
during
the
development.
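A sketch of the proposed syntax (PR 2693 has the authoritative spec; the task names and logic here are illustrative):

```yaml
dag:
  tasks:
    - name: A
      template: run-tests
    - name: B
      template: deploy
      # plain task name: roughly the old `dependencies` behavior
      depends: "A"
    - name: C
      template: notify-failure
      # only run if A reached a Failed state
      depends: "A.Failed"
    - name: D
      template: cleanup
      # boolean operators combine task results
      depends: "(A.Succeeded || A.Failed) && B.Succeeded"
```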
F
As of right now, no; this will only work for DAGs, because if you think about it, steps is just a very strict definition of a DAG, where each subsequent step group essentially depends on all of the previous step group's steps. So I would say that steps is a subset of a DAG; it's more of a convenience.
F
Thank
you
all
right
now,
I
want
to
move
on
to
this
other
feature.
This
will
be
is
relatively
short
because
I
we've
just
started
thinking
about
this
and
I
haven't
done
any
work
on
this,
so
this
is
just
where's
the
way
to
invite
you
guys
to
give
some
feedback
with
us
as
well.
So
we
are
exploring
we're
exploring
this
new
feature
in
coronaries
called
ephemeral
containers.
F
The pod is the atomic unit in Kubernetes, but this new feature called ephemeral containers has surfaced, which allows you, in some specific circumstances, to bring up and tear down containers within a pod, and we want to explore exploiting this feature to be able to run sequences of steps with different containers inside a single pod. So why would this be useful? Mainly, it allows you guys to run your...
F
This is how I would envision this running, pretty much similar to a steps template. Hopefully, with one possible implementation, you could just take your existing workflows and change steps to sequence, and then the only difference would be that everything runs in a single pod on a node, and then you get an output. This is still very much only an idea; I can already think of some challenges that we might run into.
A
Telling us what your use cases are is really useful to us, and the best way to do that is to go and create an enhancement proposal or an issue inside GitHub. The more thumbs-ups it has, the more popular it is, and so the more attention we'll give it; your thumbs-ups are genuinely really important. I think that's pretty consistent with how other open source projects do it. I will say, as I always say, of course...
A
If
there's
a
feature,
you
really
really
really
want,
and
you
know
that
it's
for
you
the
best
way
to
do
it
is
CC
and
contribute
to
open
source.
You
know
write
develop
the
future
yourself
and
we
were
you
know,
we're
gonna
be
doing
I'll,
be
talking
a
bit
more
about
that
later
on.
Okay,
so
thank
you
Simon,
but
let's
now
gonna
talk
a
bit
about
workflow
semaphores,
which
is
a
way
to
ensure
that
two
workflow
steps
or
workflows
don't
and
run
at
the
same
time.
So,
but
are
you
ready
to
take
over
yeah
brilliant?
D
Hi. There's also another idea about how we can handle concurrency of workflows. The current version of Argo supports two levels of concurrency: one is the controller level, which limits your concurrent workflow executions, and the second one is the workflow level, which controls your step-level concurrency.
D
But
if
you,
if
you
think
about
another
use
case
of
a
I,
want
to
limit
the
concurrency
of
the
state
of
the
workflows,
multiple
concurrency
groups.
In
that
case
Signum,
we
can
have
like
a
sum
of
group
which
will
say
that
concurrency
count.
Then
what
are
the
workflows
are
referred
into?
That
configuration
that
or
flow
will
fall
into
that
concurrency
group
and
the
concurrency
will
limited
based
on
the
configuration.
D
So
the
same
thing
I
can
we
can
use
it
for
the
step
level.
Also,
the
step
level
have
like
a
multiple
group
of
concurrency
so
that
no
in
single
workflow
you
can
control
the
parallel
steps
in
different
concurrency
limit.
That's
the
idea
so
think
about
the
use
cases
like
if
we
want
to
control
your
elasticsearch
connections
like
like
a
two
or
three
connection.
Concurrently.
Concurrently,
you
can
update
the
records
that
I'm
attorney.
D
You
can
control
your
elasticsearch
short
rows
with
that
group,
so
that
not
only
that
what
flows
will
be
controlled
by
the
concurrency,
the
rest
of
the
applause
will
be
or
executed
parallel.
That
was
the
thought
and
still
we
were
evaluating.
We
were
thinking
that
new
ideas
and
the
new
use
cases
so
currently
that
proposal
we
were
kind
of
thinking
is
like
we
can
have
a
conflict
Mac
which
will
have
like
a
key
of
concurrency.
D
Then
you
can
refer
in
your
workflow,
the
semaphore
reference.
What
is
the
config
conflict
map
and
what
is
the
key
then,
whenever
it's
the
controller
is
executing,
it
will
look
that
content
that
concurrency
count
based
on
that
technol.
It
will
rate
limit
that
you
are
the
particular
workflows
execution.
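The proposal Bala sketches could look something like this (issue 2550 has the authoritative design; the ConfigMap name, key, and count here are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: semaphore-config
data:
  workflow: "2"    # at most 2 workflows in this group run concurrently
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: synchronized-
spec:
  entrypoint: main
  synchronization:
    semaphore:
      configMapKeyRef:   # controller reads the concurrency count from here
        name: semaphore-config
        key: workflow
  templates:
    - name: main
      container:
        image: alpine
        command: [sh, -c, "sleep 10"]
```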
D
Yeah, that's the plan. We were thinking of mutexes also, but for 2.9 we are proceeding with semaphores first, and mutexes later; if there are more use cases for the mutex, we will implement the mutex also. So please put a thumbs-up if you like this feature, and comment with your use cases and all those things on issue 2550.
A
Yeah, it's worthwhile just noting that there are actually two semaphores in this specification, aren't there: one that prevents two steps running concurrently, which I guess solves the use case where your two steps are using one expensive resource that you only have one of and want to lock; and one for workflows, for the same kind of reason, where you don't want a whole workflow to run at the same time as another workflow, because they're using the same resource.
A
I'll,
just
drop
in
I'm
gonna
drop
in
the
issue
bar
to
five
five
zero
I
think
you
says.
Yes,
we
have
another
question
Bala
from
Eric
Eric
asks:
can
we
use
variables
that
are
included
in
a
template
to
limit
semaphores
to
when
the
variable
is
called
with
specific
workflow
lower-level
I
have
a
question
for
Eric?
Is
this
the
kind
of
you
know?
You
have
a
work
tool,
it's
quite
generic.
A
What might be really useful is, if you do have a use case where you think you would use it, list that use case on the issue, so we can then check to see whether your use case would be solved by this proposal, and also, retrospectively, we can explain to people who want to solve that problem: this is how to do it.
A
Ok,
any
more
questions,
five
seconds,
I
think
I
think
we're
good
I'm
going
to
share
my
screen.
So
thank
you,
borrow
fur
for
doing
that.
I
think
that's
gonna,
be
cool.
Calling
your
feature
I
think
we're
all
pretty
looking
forward
to
it.
I'm
just
gonna
talk
a
little
bit
about
our
go.
Were
foes
contribute
of
workshop,
we're
playing
to
run
it
says
in
April.
22Nd
action
of
the
planned
date
will
be
this
time.
Next
week,
10
a.m.
on
the
22nd
of
April.
A
I'll just talk a little bit about what we plan to do in that workshop. The workshop is going to revolve around actually making a code change to Argo Workflows, introducing a new feature, and so we're going to learn how to do things like running the latest master code locally on your machine and doing proto generation; we use gRPC to generate our types.
A
We'll talk a bit about how to actually make code changes, write and run unit tests, and what makes a good unit test. We'll also talk a bit about debugging: how to run the workflow controller with a debugger connected, so you can set breakpoints. We'll also talk about how we write e2e tests, and a bit about documenting your changes. So that's the kind of stuff we're going to do.
A
The goal of this is really to put you in a position, if you want to become an OSS contributor to Argo Workflows, where you'll have all the tools that you need to do that. You'll be in a really good position not only to have the right kind of conversations about it, but also to create an issue or raise a pull request, and not get a lot of feedback or requested changes...
A
Pr
requester,
because
you're
already
being
a
really
good,
well
educated
position
about
how
to
go
and
do
that
and
also
you'll
know
that
perhaps
you'll
done
the
things
that
you
need
to
do
to
make
sure
that
your
code
is,
you
know
really
good
and
strong.
You
don't
need
to
make
too
many
requests
too
many
changes
to
afterwards.
There
are
a
few
things,
a
few
prerequisites
if
you
want
to
come
and
join
this
workshop
and
I
think
we
have
about
20
people
coming
so
far
to
it
and
there's
a
sign-up
for
to
conform
to
complete.
A
Do
that
and
you'll
get
a
meeting
invite,
but
also
that
silent
form
is
a
way
for
you
to
say.
You
know
I
want
to
learn
about
these
specific
things
during
the
workshops,
so
it's
a
good
opportunity
that
and
there's
a
there's,
a
flat
chat
room
to
join,
which
is
going
to
be
a
place
for
you
to
go
to
talk
about
the
workshop
and
make
ask
questions
and
obviously
install
a
number
of
software
prerequisites.
You
can
probably
have
a
quick
guess
about
what
these
are.
I'm
just
gonna
go
with
them
quickly.
A
Some
of
them
you
may
already
have
this
is
wrong.
We
don't
use
death
anymore
golang,
which
is
the
language,
and
you
see
I,
love,
writing,
go
and
there's
a
really
great
tutorial
to
learn.
Go
that
I
think
takes
about
three
took
me
about
three
hours
to
do
the
tutorial
and
then
be
kind
of
capable
in
writing
and
running,
go
yarn,
so
we've
called
the
user
interface
using
yarn
and
react.
A
So
if
you
know,
react
or
potentially
get
to
learn
a
bit
about
react
as
a
result,
this
docker
obviously,
and
a
couple
of
tools
like
customized
protein
JQ
and
the
final
one-
that's
pretty
maybe
a
bit
controversial
is
you'll
need
to
be
having
it
faster
running.
When
we
do
feature
development,
we
people
use
different
things
on
our
team,
so
myself
and
Barney's,
and
a
thing
called
k3
d,
which
is
I,
think
stands
for
kubernetes
on
docker.
A
So
basically
it
runs
a
container
on
docker
for
desktop
that
implements
the
kubernetes
api
and
then
I
think
actually
just
container
D
to
start
its
own
containers
inside
docker.
But
you
can
use
MIDI
cue
to
whatever
you
know
ever
floats
your
boat,
it's
good
to
kind
of
run
it
locally,
because
it
makes
it
easy
to
do
and
there's
a
couple
other
useful
tools
listed
in
this
guide
here
as
well.
Okay,
also,
obviously
you'll
need
an
IDE.
A
You
can
use
whatever
you
want
to
I
prefer
IntelliJ
and
that's
what
I've
been
using
for
a
long
time
now
and
that
has
the
the
Ultimate
Edition
has
loads
of
really
great
kind
of
code,
helping
advice
in
it.
Linting
you
know
points
out
bugs
in
your
code
before
you
even
save
the
file
that
kind
of
stuff
I
know
people
use
Golan,
you
could
use
VI
or
MX.
If
you
want
to
ya.
A
Okay, so the last part of our community meeting is really intended for open discussion topics, but as you guys have probably heard in the last week or so, the Argo project has become a CNCF incubator project, and I'd like to ask Kalika, if she wants to, to tell people a little bit about what that might mean for Argo and the Argo community. Sure.
J
Firstly,
thanks
everybody
for
being
a
community
member
for
using
our
goal
for
contributing
to
our
goal.
The
reason
why
our
go
directly
could
go
to
the
incubating
project
level
and
didn't
have
to
go
through
sandbox
and
then
incubating
is
because
of
the
number
of
users
Argo
has
or
like
all
big
small.
All
of
companies
are
using
our
goal.
J
We
have
more
than
hundred
companies
using
Oracle
at
large
scale,
also
based
on
the
number
of
stars,
the
number
of
contributors,
the
number
of
releases
we
have
made
all
of
this
led
are
going
to
be
in
the
incubating
level
directly.
We
started
the
project
in
2017
and
that
was
our
goal
will
close.
Then
we
started
our
Cassini
in
2018.
We
started
our
the
events
in
2018
and
then
our
go
rollouts
in
2018
and
I.
Think
yes,
so
also.
So
what
does
this
mean?
Our
go
brand?
Our
agro
copy
right
now
belongs
to
CN
CF.
J
We went through all the levels of approval, votes, and everything, and now we are in. One thing to point out: if you look at the description of Argo on the CNCF incubating projects website, it's under CI/CD. Argo has a bunch of projects, right? It is the first time, I think, that a project with a bunch of sub-projects under the same org has gone in together as one incubating project, and there was no other category to describe all of these projects together.
J
So
right
now
it's
grouped
under
C
ICD,
because
people
understand
that
and
Argos
EB
is
focused
on
CV
and
some
of
you
supposedly
even
use
our
workflows
for
CI,
but
I
know
a
lot
of
you
use
our
go
workflows
for
machine
learning
and
data
processing
and
and
I
wish.
There
was
a
better
way
to
describe
the
whole
set
of
projects,
but
right
now
it's
grouped
under
the
eyes
I
think.
That's
all
I
had.
A
Thanks, Kalika; I'm clapping on my own here. I'm just going to drop a link into the chat room so you can read a bit more about that. It's hard to describe how much time and effort has gone into this, Kalika; that's something to be proud of. Yeah, yes.
A
A minor release: in a three-digit release number, the minor version is the middle number, and I think people could grant me that 2.3 and 2.4 were the only releases we did in 2019. As of December 2019, the team size went from one person to four people, and when you've got four people you can actually do a lot more, because with a team of one you spend a lot of time firefighting and fixing bugs; with a team of four people...
A
You've
got
a
good
opportunity
to
build
lots
of
new
features,
and
that's
what
we've
been
obviously
doing,
and
one
of
the
other
things
we've
done
during
that
period
is
look
to
do
more
release
a
more
regular,
open
source
releases.
So,
rather
than
doing
it
when
it
I
guess
felt
right,
we've
been
doing
one
every
month
since
we've
done
to
5
to
6
and
we've
just
done
to
seven
and
each
one
of
those
release.
Cadence's
is
about
month
at
the
moment
and
what
you
probably
don't
know.
A
I
hope
you
don't
know
is
that
that's
not
what
we
do
into
it
internally,
actually
into
it.
We
do
something
quite
different
to
release
in
the
open-source
version.
What
we
actually
do
is-
and
we
group
our
clusters
and
we
have
around
a
hundred
and
thirty
clusters
Jesse-
is
that
right
it
goes
up.
It
goes
up
every
day.
I
can't
I,
can't
cope.
I
think
I
think
we're
currently
releasing
it.
A
Important training jobs are done in pre-production, so lots of the capability there would actually impact production, and then the final wave, the fourth wave, is the production system, and that's segregated, so we can keep that one until last. We aim to do this on a cycle of about a week, but it actually tends to be two weeks: we release the tip of master to that first wave, and we leave it there.
A
We
gather
some
data
on
it
and
then
we
release
it's
the
second
cluster
and
third
cluster
and
fourth
cluster,
and
why
am
I?
Why
am
I
telling
you
guys
this
one
I'm
telling
you
this
is
because
I
want
to
highlight
a
bit,
but
actually
there
are
other
ways
to
bring
so
I'll
go
workflows,
and
this
is
one
one
that
we
do
ourselves.
We
try
and
dog
food
that
coat
soon,
but
it
also
means
that
we
don't
actually
consume
ourself
the
open-source
version.
A
The second number is what we call the minor release; that's the one we do on a roughly monthly basis, and it's intended to only contain new features. The third number is the patch number, which only contains bug fixes. With the patch numbers, we tend to release them as soon as we have a few bug fixes we want to get out, and that typically seems to be about weekly; with the minor ones, it's about monthly.
A
So myself and Jesse have engaged with a couple of people recently on the general topic of understanding how much workflows cost. What we did in version 2.6, and I think it was pretty much the last feature that went in, was a thing called resources duration, which appears at a pod level and a workflow level, and basically takes the amount of...
A
...well, it takes the amount of time that the pod ran and multiplies it by the amount of resources that it was requesting, so typically that's, you know, one gigabyte of RAM and so many millicores of CPU, and it presents that as a summary number for the whole workflow. We had a really interesting conversation with a company recently about the differences between requested resources and actually used resources.
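As a rough illustration of the calculation Alex describes (the exact units and field layout are from memory, so treat this as a sketch rather than the definitive schema): a pod that runs for two minutes requesting 1 CPU and some memory would surface in the workflow status as something like:

```yaml
# Hypothetical excerpt of a workflow's status; values and units are illustrative.
status:
  resourcesDuration:
    cpu: 120      # roughly: run duration (s) multiplied by requested CPU cores
    memory: 1200  # roughly: run duration (s) multiplied by requested memory,
                  # expressed in multiples of a base memory amount
```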
A
So
you
can
you
cannot
you
know
with
kubernetes,
you
can
request
more
or
less
resource
that
you
actually
use
and
we
can
kind
of
make
a
kind
of
a
guess
on
that.
But
actually
that's
a
really
difficult
thing
to
go
and
do
there
are
many
kind
of
things
that
can
go
wrong
with
that
and
what
we,
what
we
don't
know
what
we
haven't
asked.
What
questions
given
us
is:
how
are
people
currently
doing
cost
optimizations
for
our
workflows?
Is
there
anywhere
else
out
in
the
in
the
room
today,
who's
doing
anything
like
this?
A
And the question asks: do you mean who is calculating resources per workflow? I think the question I'm really asking is: how do you do cost optimization for workflows? Workflows can be very, very expensive to run, especially when you're using very large datasets, and we want to know how people are doing that, so we can share the information with the community.
A
That's very helpful. Okay, great, I think we'll move on. Does anybody have any topics they want to talk about at the end of the meeting before we close out? Anything you're interested in, any particular issues you'd like an update on?
K
I'd like to understand, and this might be widely known or already discussed, but I'm new to this meeting: what is the process for deciding what you guys work on in terms of feature development? Is there an internal Intuit roadmap, and then you bring in things that the community is asking for, or is it more that you take all the requests that you think make sense from the community and build them? How is that direction set?
J
Sure
so,
obviously,
if
there
is
need
and
there's
a
requirement
coming
from
Intuit
ml
or
they
are
processing
teams,
we
will
build
that
without
breaking
anything
else.
Obviously,
but
we
review
all
the
top
issues
voted
and
liked
by
people
commented
on
and
we
include
those
as
well
whether
Intuit
is
using
that
feature
or
not
so
far.
That's
how
we
have
been
doing
it
if
you
all
have
see
anything
missing
and
that's
why
these
community
meetings
are
places
or
on
the
slack
Channel.
J
If
you
see
there
is
an
important
issue
which
is
being
ignored
but
which
will
impact
everybody
most
of
the
community
which
will
help
most
of
the
community.
Please
highlight
those
but
the
process
we
followers.
We
review
the
issues
once
a
month.
We
look
at
the
issues
with
the
highest
number
of
words
and
likes
and
comments,
and
of
course,
we
may
also
review
what
into
its
requirements
are
and.
A
Something to add to what Kalika just said: I think 2.5 had sixty individual contributors to it, and our team size is four, so fifty-six contributors who don't work for Intuit were contributing features, and that's unusual; that was over about three months. The workflow semaphores issue is a really interesting one to me, because of how popular it was almost immediately after we opened the issue.
A
So
somebody
open
the
issue
I
think
two
weeks
ago
or
even
a
week
ago,
Jesse
and
it
had
fourteen
thumbs
up
from
different
people,
rewrap
it
at
the
time-
and
you
know
we
would
have
you
know
we
opened
that
issue,
but
there's
clearly
a
lot
of
people
who
want
that
and
it
was
a
related
issue
that
would
that
would
solve
as
well
and
I
got
a
thumbs
up.
So
it
makes
me
weak
when
that
happens.
It
makes
very
clear,
what's
a
big
useful
feature
to
people
as
well,
and
that's
why
it's
so
useful?
Obviously,.
J
So, firstly, I don't think I've seen any issue opened by anybody requesting such a specification, so first of all, it would be really helpful if you could open an issue, and then we will let the community clarify it; and if you have more details on what you would like that specification to have, that would be helpful.
J
We
in
past,
as
well
as
we
are
also
thinking
that
we
need
to
have
a
specification
on
easier
stream
processing
by
workflows
and
we
are
thinking
and
again
I
I
think
Paula
opened
an
issue
already
to
get
some
feedback
from
the
community.
We
thought
of
that,
but
we
didn't
think
of
implementing
server
less
work
for
specification.
If
you
have
a
need,
then
can
you
please
open
an
issue?
G
I have a question about a missing feature; this might be the correct venue, and if not, let me know. I came to Argo Workflows having some familiarity with Apache Airflow, and for the most part Argo is better and has a bigger feature set, unless you're trying to hack on an isolated box, which is nice. But the one missing feature that's kind of bugged me, and this might be intentional: in Airflow you can define a schedule for some ETL job, right, so say, run daily.
G
I would say that although Argo Workflows has more features, Airflow is actually more of a comprehensive, higher-level solution, because it does include more comprehensive scheduling of Airflow jobs, or whatever the terminology is. This is also the reason why Argo Events exists: Argo Events was started by BlackRock to address their scheduling needs for triggering workflows based on a calendar.
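For the daily-ETL style schedule the questioner describes, Argo Workflows also has a CronWorkflow resource (separate from Argo Events calendar triggers); a minimal sketch, where the schedule, name, and image are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: daily-etl
spec:
  schedule: "0 2 * * *"   # standard cron syntax: 02:00 every day
  workflowSpec:
    entrypoint: main
    templates:
      - name: main
        container:
          image: my-etl-image:latest   # hypothetical ETL job image
          command: [run-etl]           # hypothetical entry command
```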
A
Okay, brilliant, thank you very much. Are there any other topics people want to raise? I think we have a question from Christina: is there any current work on workflow permissioning utilizing IAM roles for service accounts in AWS? The reason she's asking is that they are running into an issue where it cannot upload the logs to S3; she will open an issue for that.
A
The first thing to try is to actually use templates for your workflow: break your workflow into multiple templates, and that's a good first solution. If that doesn't work, then you can look into using offload node status, which stores some of the data of your large workflows in a SQL database; you need to configure Postgres or MySQL to use that. I'll just share the link for this in the chat for you.
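The offload setting mentioned here lives in the workflow controller's ConfigMap; a hedged sketch, where the connection details and secret names are placeholders for whatever your database uses:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  persistence: |
    nodeStatusOffLoad: true            # store large node status in the database
    postgresql:
      host: postgres.example.local     # placeholder connection details
      port: 5432
      database: argo
      tableName: argo_workflows
      userNameSecret:
        name: argo-postgres-config
        key: username
      passwordSecret:
        name: argo-postgres-config
        key: password
```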
A
I mean workflows with, you know, 500 nodes, where each node is a DAG with ten steps in it, and then that workflow is passing around multi-gigabyte artifacts between different pods. It would be really good to get input from people on the kinds of problems they're having with performance; there are a number of issues labeled "scalability" in GitHub, so it's really good to get some examples of the kinds of things that people are doing at those large scales.
A
The more we can know about the kinds of jobs and workloads you're running, the better able we will be to tune our tests to the scale that we want to test to. I don't know if Jesse can speak to this; I don't know if we originally set out expecting workflows of this size, but we definitely want to be able to support them.
I
If we, for example, could collapse DAGs or steps or something, so that they just appear as a single thing, and then have some mechanism to expand those things; so, yeah, a collapse feature which only shows the high-level thing, and then clicking into DAGs and steps to expand them, and so on and so forth. That would probably go a long way toward helping people with the presentation issue.
A
We did a couple of features like that in the UI in 2.6, and some more minor UI tweaks in 2.7. If you go to the top right-hand corner of the screen, there's now a filter drop-down on the view of the DAG, and you can choose which nodes you want to view; there's an option to orient it horizontally or vertically, and an option to zoom out, so you can improve it.
I
The other thing is we could even have some implicit behavior, where a workflow with a hundred-plus nodes is automatically collapsed by default, versus a smaller workflow which just shows everything. But let's file an issue for that UI improvement and then take it from there.