From YouTube: Viktor and Mikhail on Automatable DevOps and GitLab CI

A: So, Viktor is asking, for anyone who's watching, why my idea of Automatable DevOps is not the same as GitLab CI. Well, GitLab CI is CI: it should be optimized for the domain of CI. Automatable DevOps is a way to express workflows that manipulate the main objects of the whole DevOps cycle. So they are not the same thing. From a technical perspective, I actually started prototyping this, and the workflow for Automatable DevOps is modeled using Petri nets. That turns out to be a long-solved problem. I didn't know that when I was writing the ticket, but yeah, everybody does it this way, with Petri nets. And for CI, well, you could use Petri nets too, I guess, but that's overkill. It's not necessary; a much simpler model, just a directed acyclic graph, is sufficient. Petri nets can have cycles, for example.
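A minimal sketch of the distinction being drawn here, with all names invented for illustration: a CI pipeline is a one-shot walk of a DAG in dependency order, while a Petri net's token-firing rule also admits cycles, such as a retry loop.

```python
# Illustrative only: a DAG run order vs. a Petri net firing rule.
from collections import deque


def topo_order(deps):
    """Run order for a DAG of CI jobs: {job: set(prerequisite jobs)}."""
    deps = {j: set(d) for j, d in deps.items()}
    ready = deque(j for j, d in deps.items() if not d)
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for other, d in deps.items():
            if job in d:
                d.discard(job)
                if not d:
                    ready.append(other)
    return order


def fire(transition, marking, net):
    """Petri net step: consume one token from each input place,
    produce one in each output place. Returns the new marking."""
    inputs, outputs = net[transition]
    if any(marking.get(p, 0) < 1 for p in inputs):
        raise ValueError("transition not enabled")
    marking = dict(marking)
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] = marking.get(p, 0) + 1
    return marking
```

The DAG walk terminates once per pipeline, while the net below can fire `deploy` and `retry` forever, which is exactly what a DAG cannot express.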
B: Yeah, but what is this thing? So, the thing is, I tried to discuss this with Kenny, and I could not give him a good enough motivation.
A: Well, I guess you could say scripting, but it's not really scripting; it's describing workflows in DevOps. A lot of our things are ad hoc solutions to pieces of one whole big problem. I don't know, in 13.12 we released variables in CI, for example. Okay, maybe that's a bad example; that's probably for the CI part.
A: How can I build an integration with GitLab, for example? If I want to get a Slack message when issues are created in my project, you cannot do it.
A: I mean, currently it doesn't cover that, so we should say Auto Deployments or something instead. Why are we saying Auto DevOps then?
A: I mean, of course you agree; that makes sense, I think, from a product perspective. As a user, it would be great to, you know, be able to express more interesting workflows with CI. Okay, I think I have an example with the current, more fixed pipeline.
A
You
release
a
an
image.
You
push
an
image
and
then
you
you
want
to
do
container
scanning.
Okay,
what?
If
you
want
to
do
end
images
and
you
need
to
scan
any
and
images
after
that?
Well,
that's
not
great.
You
know
I
I
had
to
hack
some
yam
and
it's
kind
of
terrible
and
everything
falls
apart.
If
you
take
one
step
out
of
out
of
the
blessed
path.
Basically,
it's
just
not
good.
B: Fair enough, I totally agree with you. So, one thing I had in my mind, I guess because you mentioned Atlassian: there is a Jira Automation project, which is a plugin for Jira. So it's not from Atlassian itself, I think, but it's really great, and that's what they provide. They give you a really nice wizard to do any kind of automation around Jira. Events, basically, are things happening there.
B: It doesn't matter if you agree or not; from the user's point of view, it is a competing solution. And even from GitLab's point of view, which is the inverse, it is a competing solution, because CI is at the center of GitLab. Today you can use it for deployments; you can even use it for Slack messaging, which is terrible actually, but you can do it. For this reason, many users would like to use whatever automation we create here for CI as well.
B
They
want
to
use
it
for
ci
and
they
have
to
make
this
choice.
Should
I
use
the
directed
acyclic
graph
solution,
or
shall
I
use
the
pattern
that
based
automation,
solution
for
ci?
They
have
to
make
make
these
choices
and
we
have
to
make
sure
that
we
can
drive
them
properly.
We
can
drive
our
company
properly,
what
where
to
invest
and
and
how,
how
we?
B
What
what
the
future
of
these
different
approaches
look
like
looks
like,
because
it's
relatively
easy
to
say
that
if
we
create
a
pattern,
app-based
workflow
engine,
we
should
use
that
same
workflow
engine
for
the
much
simpler
ci
jobs
as
well.
Instead
of
the
current
dag
approach,
because
dags
are
just
a
subset
of.
A: ...Petri nets. Well, maybe that's the right thing, but I don't see it as a competitor. Why? Because the runner, and all the infrastructure it represents, I guess, is what is going to be used to run any of the jobs. So it's like 90 percent code reuse, basically, or something like that. It's not like...
B: I'm afraid there are many more things. When I watched Matt's video, I had this thought. He says in the video that in the merge request, when a job fails, I see which job failed.
B
Even
if
it
was
defined
at
the
group
level
or
not
at
the
project
level,
I
can
figure
it
out.
What
was
the
failure
and
later
on?
I
can
rerun
just
that
job
from
the
cli
and
when
I
watched
this
video
I
I
was
thinking
that
if
our
dslr
works
in
a
way
that
we
transform
the
imperative
code
into
a
declarative
code
and
give
the
declarative
code
to
the
runner,
then
the
runner
can
report
what
declarative
job
failed,
but
the
user
actually
interfaces
with
the
imperative
code.
A
One
piece
of
like:
if
you
call
a
function
and
it
generates
you
several
vertices
in
the
graph
one
of
them
fails,
you
can
either
run
the
whole
thing
or
maybe
just
run
one
whatever
makes
sense,
and
you
can
do
both
you
can
implement
both,
but
I
think
maybe
we
don't
need
both.
Maybe
we
just
want
to
run
the
one
that
failed
and
then
everything
that
depended
on
it
as
well
like
can
continue
running.
B: I see what you mean here. I think what you proposed last week is a better approach: that we can also define a pipeline at the group level that has to run, because it's a security linter, for example, and the project users shouldn't be able to switch it off. So that's totally fine.
A
I
I
think
I
propose
something
a
little
bit
different.
This
actually
may
make
sense
as
well
what
you
just
said,
but
I
think
I
proposed
a
little
bit
something
yeah.
B: So there would be multiple pipelines for the same event, and you can say that it should be a single pipeline and everything should be added to it. But simply, if there are different departments who actually own these responsibilities, then that is probably where this idea of multiple pipelines comes from: the ownership is very, very clear. But I remember what you said last week about the artifact signature, just that we...
A: Yeah, I suppose; yes, of course. Actually, one more interesting thing I wanted to give as an example, though it's an example of CI, not workflows: the magical artifacts that we support. If you want coverage, put your file somewhere here. If you want to upload, I don't know what else there is; I saw it the other day, some linter results or something: put it over here.
A: Okay. I don't know if we have time, but I have started prototyping something, just for fun. Can I show you?
A: Basically, it allows you to describe your builds, right? It's not automation, it's not Automatable DevOps, it's not a Petri net; it's the CI idea. So I'm trying to take the same approach, but for CI, and then the nets will be similar; I'm kind of evaluating both at the same time with a single prototype.
A
A
A
Yeah
yep
and
that's
the
library
so
because
the
symbol
needs
to
be
defined
when
it's
referenced,
it's
kind
of
up
like
upside
down.
This
is
used
here
so,
but
we
should
read
it
from
here.
I
think
step
defines
a
a
function
that
when,
when
is
when
it
is
executed,
it
can
define
vertices
in
the
in
the
graph.
Basically,
so
we
say
that
job
when
it's
called
it
can
have
these
attributes.
A: A library can define new functionality, validation and so on, and then, depending on the parameters that are passed when the job is called, you can have logic here, like ifs and so on, right? Yeah. But then this is the simplest example, and it just runs an image with these parameters and environment variables; we should have vars, yeah.
A: Okay, and then we import it and use it. Okay, that's about as simple as it gets, I guess.
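The step-and-job mechanism described here could be sketched roughly as follows; every name (step, job, the attribute set, go_test) is a guess at the prototype's shape rather than its real API.

```python
# Illustrative sketch only: a decorator-based DSL where calling a
# "step" function registers job vertices in a build graph.

GRAPH = []  # vertices (jobs) defined so far


def step(fn):
    """Declare a function whose calls may add vertices to the graph."""
    fn.is_step = True
    return fn


@step
def job(name, image, script, needs=()):
    """Define one job vertex with a few illustrative attributes."""
    GRAPH.append({"name": name, "image": image,
                  "script": list(script), "needs": list(needs)})
    return name


# "Import it and use it": a library job built on top of job(),
# adding its own defaults and logic.
@step
def go_test(name, needs=()):
    return job(name, image="golang:1.21",
               script=["go test ./..."], needs=needs)
```

The point of the design, as described, is that a library can wrap job() with validation and conditionals while the caller still only imports and calls named functions.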
Then there is something more interesting, which I started doing but haven't finished yet. I'm trying to model the case where you want to scan several Docker images that you produced, just because I have run into this limitation of the current design. So we also have two libraries. As you can see, everything should already look more or less familiar: a custom job from the job library; custom should be custom job.
A
Yeah,
so
if
this
is
the
same
thing
I
wanted,
I
haven't
finished
adding
interesting
stuff
here,
but
then
we
call
it
okay,
then
this
thing
depends
on
on
jobs
like
to
scan
and
job
scan.
Scan
itself
also
comes
from
the
library,
and
this
is
where
stuff
is
getting
more
interesting,
so
it's
just
constant,
then
we
define
a
provider
provider
is
something
that
is
a
direct
copy
from
bazel
it's
a
named.
A
This
is
the
name
piece
of
structured
data,
so
think
of
it
as
a
json
object
with
this
field,
but
it's
immutable
and
you
can
look
it
up
by
by
name
in
the
dictionary.
So
basically.
A: Then we want a list of labels, and then there's just a boolean, parallel or sequential scan, just for fun; you know, for example purposes.
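A speculative sketch of the provider idea plus the parallel-or-sequential scan described next; ImageInfo, scan_images, and the job structure are all invented for illustration, not taken from the prototype.

```python
# Illustrative sketch: an immutable, named piece of structured data
# ("provider") that one job publishes and another looks up by name.
from types import MappingProxyType


def provider(name, **fields):
    """An immutable JSON-like object, addressable by its name."""
    return MappingProxyType({"name": name, **fields})


ImageInfo = "ImageInfo"  # the provider name a scan step looks for


def scan_images(producing_jobs, parallel=True):
    """Create one scan action per produced image; chain them with
    dependencies when a sequential scan is requested."""
    actions, prev = [], None
    for job in producing_jobs:
        info = job["providers"][ImageInfo]  # look up by name
        action = {"name": "scan-" + info["image"],
                  "args": ["--image", info["image"]],
                  "needs": [] if parallel or prev is None else [prev]}
        actions.append(action)
        prev = action["name"]
    return actions
```

The parallel flag only changes the dependency edges, which matches the point made here: ordering is just data you compute, not a special feature.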
A: We iterate over all the images we want to scan, and then we create a scan action; and from the job that produced an image, we get this piece of structured data and pass the name of the image as a command-line argument. It probably should be like that. And then we can set the dependencies, for ordering purposes.
A: In the sequential case, the jobs are chained; that changes if we don't want parallel, and if you want parallel, there are no dependencies among them. And you see, this is much nicer; I don't know how to do this with YAML. Yeah, one more interesting thing: we can have matrix builds. Let's say arch is at least x86 and x64, whatever, doesn't matter, and then OS, let's say Linux and Mac. It's just Python, basically.
A
What
you
do
is
you
do
four
comprehensions
and
list
comprehensions
you
can
you
can
do
something
like
that
and
blah
blah
blah
house
for
os
in
where
oils
for
are
in
arch,
I
can
type,
then
you
pass
like
less
or
s,
and
if
you
have
these
parameters
just
the
same
example
and
then
ends
arch
equals
arch
and
that's
your
matrix
builds,
you
don't
need
an
extra
feature
for
matrix
builds
it's
just
magically
there,
because
you
have
a
proper
programming
language.
You
don't
need
to
program
in
or
have
conventions
in
yaml
or
you
don't.
A
You
also
don't
need
a
weird
inline,
syntax
to
syntax
yeah
off.
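The matrix-build point can be shown concretely; job here is a stand-in for the prototype's job function, not its real signature.

```python
# Matrix builds "for free" with list comprehensions: no dedicated
# matrix feature, just the host language's iteration.

def job(name, env):
    """Stand-in for the prototype's job(); records a name and env."""
    return {"name": name, "env": env}


ARCHES = ["x86", "x64"]
OSES = ["linux", "mac"]

# The whole matrix-build "feature" is a single comprehension.
matrix = [job("build-%s-%s" % (os_, arch), env={"OS": os_, "ARCH": arch})
          for os_ in OSES
          for arch in ARCHES]
```

Because the matrix is ordinary data, filtering out one combination (say, mac on x86) is just an if clause in the comprehension rather than a new YAML convention.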
A: Yes, you would say this action failed. So the question, I guess, that you're asking is: can we actually point to the place where it was defined? Is that the thing?
B: That's one thing, that we would reuse runners; but CI is deeply integrated into GitLab, because if a pipeline fails, we show it in merge requests, we create a to-do, and so on and so forth. I don't know how many things we do or might do there, and this is the business risk here. I really love this idea, and I think if GitLab is brave enough, we should go in this direction.
A: So, to make it clear, I'm not suggesting rebuilding CI again, right? We don't have to throw away the YAML; we don't have to abandon the users. No, of course not. It can be a new prototype, a solution that we... What I actually thought about is not touching CI at all and just having a Docker image which, you know, you just give it the file, it evaluates your program, and it produces a set of nested pipelines.
B: It's sort of possible. It has many shortcomings, but it's kind of possible; you can generate dynamic pipelines today.
B: The problem, so we have to define that... This is an interesting idea that you raised, but then it would mean that we have to live with all the shortcomings of parent-child pipelines, because that becomes a child pipeline that you generate that way. And, for example, in GitLab you cannot... it might actually be solved in this special situation; it might be feasible.
B: So there's a problem; I just wrote it in Slack like an hour ago, when we were discussing a topic with Tong: with parent-child pipelines, there is an issue with what you can reach from the parent pipeline.
B
Which
breaks
down
many
integrations,
like
the
traffic
merge
request.
Widget,
for
example,
doesn't
work
if
you
run
the
terraform
pipeline
in
a
child
pipeline,
because
we
use
a
an
artifact
for
that,
but
yeah
this.
This
could
be
an
interesting
first
step,
and
this
is
what
I,
where
we
have
to
think
a
lot
that
like
where
to
show
the
power
of
this
of
this
feature,
because
we
have,
I
don't
know,
40
some
groups
at
gitlab,
I
mean
product
groups
who
develop
black
configure
and
many
of
them
develop
features
related
to
the
ci.
B
As
a
result,
like
think
about
the
pipeline
authoring
with
nadia,
they
create
a
pipeline
editor
and
they
will
not
create
an
editor
for
this
dsl.
They
will
create
an
editor
for
the
ammo,
the
same
for
verify,
who
verifies
about
c
either.
I
don't
know,
I
don't
know
how
many
teams
they
have
that
are
around
ci
and
and
and
then
there
is
a
separate
team
who
is
who
owns
the
merge,
request,
experience
and
all
the
integrations
with
ci.
B
So
that's
what
we
have
to
figure
out
like
what
is
where
can
we
offer
a
value
proposition?
That
is
high
enough
to
start
this
approach,
and
it
might
be
possible
that
we
say
that
okay,
we
have
a
prototype
that
you
can
run
as
docker
image
just
for
testing
to
see
feedback,
and
then
we
can
decide.
But
on
the
long
term,
this
will
either
include
ci
or
will
replace
ci,
because
this
is
a
competing
solution
for
automation.
A: I'm not sure I understand what you said.
A: So I thought that we can actually take either of the two approaches. Either we start by adding support for this way of defining pipelines in CI, and then expand to everything...
A
That
probably,
is
because
that's
that's
more
difficult
to
start
with
everything,
but
we
can
also
start
the
other
way
around
start
with
automation
for
devops
and
not
touch
ci
at
all,
but
we
will
need
a
way,
a
place
to
run
those
actions
anyway,
so
so
ci
can
be
left
alone.
Basically,
I'm
I'm
talking
about
the
syntax.
We
still,
as
I
said,
we
will
still
need
a
place
to
run
those
actions.
A: And you have probably seen how it looks in Argo Workflows; it's just terrible! So another problem that I'm suggesting we need to solve: again, YAML doesn't compose. It's not a programming language; it's a format for data. Well, data and code can be viewed as the same thing, but YAML wasn't designed to compose.
A
I
mean
what
I
just
showed
looks
much
better
right.
You
import
a
certain
list
of
symbols
and
you
use
those
symbols.
Nothing
else
appears
in
your
namespace
of
the
current
file
out
of
the
blue.
Well,
maybe
one
more
thing
or
well
predefined
symbols
which
are
less
than
five,
probably
something
like
that
that
those
step
you
know
and
and
a
few
more
magical
predefined
functions
most
of
every
like
everything
else
is,
there
is
only
imported,
then
the
third,
so
no
composition,
not
a
real
programming
language
and
another,
is
no
way
to
share
code
right.
A
You
can't
import
someone
else's
library.
If
you
import
someone
else's
yammo,
can
you
even
do
that?
I
don't
know,
but
if
you
do,
that's
probably
going
to
break
very
soon
and
one
more
thing
that
I
I
think
we
also
need
to
solve
is
a
way
we
need
to
have
a
way
for
passing
data
between
jobs
in
a
structured
way.
Not
just
do
we
even
have
airway,
maybe
no
or
maybe.
A: And so, I was thinking about that yesterday or two days ago, and I think I came up with a really clean solution. It's basically providers, but also: in programming, if you want a value that wasn't computed yet, there is the concept of a future, right? So in the program you use futures, basically, and then they are replaced with magical identifiers.
A
Is
environment,
variables,
command,
line,
arguments,
standard
input
and
that's
it
right,
environment
variables,
command
line,
arguments
standard
in
that's
it
so
either
of
those
three
can
get
either
the
data
directly
or
the
name
of
the
file
without
data,
and
that's
that's
it
that
solves
all
the
passing
data
things
problems
and
you
just
need
to
structure
structure
this
to
pass
it
in
structured
way.
So
the
providers
concept
the
named
named
structured
objects,
you
process
a
collection
of
them
and
yeah,
and
you
look
for
a
structure
that
is
named
in
a
certain
way.
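One way the futures-plus-placeholders idea could work, sketched with invented names: a Future renders as a magic identifier, and the executor substitutes the produced value (or the name of a file holding it) into the environment variables before the job runs.

```python
# Illustrative sketch: futures as placeholder identifiers that are
# resolved into env vars / args / stdin just before execution.
import re


class Future:
    """A value another job will produce; renders as a magic identifier."""
    def __init__(self, key):
        self.key = key

    def __str__(self):
        return "@@future:%s@@" % self.key


def resolve(env, results):
    """Replace future placeholders in environment variables with the
    data produced upstream (directly here; a file name would also work)."""
    def substitute(value):
        return re.sub(r"@@future:([\w.-]+)@@",
                      lambda m: results[m.group(1)], value)
    return {k: substitute(str(v)) for k, v in env.items()}
```

The same substitution could apply to command-line arguments and standard input, covering all three channels mentioned above.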
B: Yesterday as well, based on the presentation link you provided; I don't know how familiar you are with it, but its vocabulary is that there's a case and a process.
B: What we call a pipeline is actually a process definition, and a case, which in this situation is a commit, is what instantiates the pipeline. And I was thinking that it would be just great if you could say, with this referencing, that from a job I can reference that I want...
B: Okay, I want, I don't know, some job's artifact, and it might not exist yet, and then I just get back that it doesn't exist yet, and I can start a failure path, or time out, or whatever; or I can wait until it becomes available. But basically, there's this highest-level, instance-specific set of attributes attached to the case, and the only thing I need is access to the case and all its attributes, including the whole workflow that's being run; the whole tree, yep.
A: I was thinking about that, and it's like: graph, or whatever, the name doesn't matter, is a function that takes the case, and then, rather than all of that being at the top level, you just put it into the body of the function, yep, and that's it. You have programmatic access to the case attributes, and you can construct this stuff based on the case attributes. Then the thing that evaluates this program looks for the function, and if there is a function like that, it calls the function, yeah. That's it; that's your case attributes in the system.
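A sketch of the entry-point convention being described, with graph and evaluate as illustrative names only: the evaluator looks up a conventionally named function and calls it with the case attributes.

```python
# Illustrative sketch: the pipeline definition is a named entry point
# that the evaluator calls with the case (e.g. the triggering commit).

def evaluate(module, case):
    """Look for the conventionally named function and, if present,
    call it with the case attributes to build the job list."""
    entry = module.get("graph")  # the name is illustrative only
    if entry is None:
        raise LookupError("no pipeline entry point defined")
    return entry(case)


# A user-defined pipeline: jobs constructed from case attributes.
def graph(case):
    jobs = [{"name": "build", "ref": case["commit"]}]
    if case["branch"] == "main":
        jobs.append({"name": "deploy", "ref": case["commit"]})
    return jobs
```

This is the shape of the alternative B raises next: instead of a single wrapper function, the case could also be passed implicitly to every function, the way Python methods receive the instance.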
B: Yeah, or, like in Python, you already have a few conventions of that kind: every method's first parameter is the class instance itself. We could have that: there is always a context, or a case, in all these functions.
B: That's another approach, which I think is a bit nicer, actually, because I don't have to always write a wrapper function just to hold all the other functions anyway. Okay, let's finish this call. I think these are the biggest points; I kind of see the potential in this Docker image that generates a dynamic child pipeline.
A: What is the other way this can be done, right? Yes, this is a prototype, but to make it a little bit less of a prototype, what might be the next small step? We put the evaluator of such programs into the runner. It's in Go, so we can easily put it there. Then GitLab calls it, or the runner fetches the job, basically a special type of job, and evaluates it. It takes like one second to evaluate such things.
A
It
returns
the
result
without
maybe
even
we
don't
need
yeah,
we
shouldn't
even
show
anything.
Maybe
we
can
show
a
pipeline
that
quickly
executes
sprints
and
errors.
If
there
are
any
errors,
then,
based
on
the
return
of
data
gitlab,
the
main
application
actually
constructs
the
jobs.
Well,
it
already
has
a
definition
of
them.
It
just
submits
everything
into
whatever
I
don't
know
how
how
it
actually
works.
It
creates
a
pipeline
with
those
jobs.
Basically,
that's
what
I'm
saying
rather.
B: We have so many integrations around CI. It seems that this is where we don't agree. That's my opinion: GitLab is so deeply integrated with GitLab CI that it's hard to describe.
B: Okay, anyway, I really want to record my video, because I just wanted to wait for this, for our discussion, and I will have a meeting in 15 minutes. So let's finish this now.