A: And welcome back to the Everyone Can Contribute cafe. This time I think we have number 50, so it's time for celebrations — and we are remote, all over the world. I'm here with Mike Gleidner in Austria, and I know that Nicholas is in Tenerife; hopefully everyone else is also having a great time wherever you are joining from today. The thing we talked about, or that came up, was Dagger — or "dagger", I have no idea how to pronounce it — which aims to be a portable CI/CD kit, and this is a first look. I know that Nicholas has prepared something to get us started. The idea is to learn together, ask questions, and figure out how we can use it in the future and what its purpose is. So to get started, I would just hand over to Nicholas to kick us off and give us a little introduction to what you have seen already. Please, go ahead.
B: Yeah, welcome. Since we have live transcription enabled, I could also do the whole talk in Spanish and we would all see it in English — no, I'm joking, my Spanish is not fluent enough for that. Mostly, when I started with Dagger, there were a lot of topics coming up around it and I needed to understand how it works. That was also the reason why I really wanted to do this: to learn something new. So what do I want to do today? I have some slides prepared, to give us a little red thread for getting started and how you can use it. We'll do a simple example, then probably an advanced example as well, and then we'll check how much time is left and which questions from you are still unanswered.
B: With that, I would switch directly over to my screen share. I hope — no, not now, my Mac wants to say something. Okay, it went through.
B: So we want to talk about Dagger — and probably you already know the problem it's coming from.
B: You currently know the problem that we wanted to solve. Around eight years ago there was a new technology coming up called Docker, and it really helped us to easily install packages on different operating systems.
B: Then we had a phase where everyone was building smaller containers — creating small applications containing only what the user actually needs, not a big bloated operating system with packages for everything; not generalized, but really specialized. But another problem then appeared, and it mostly shows up when you're building a lot of different containers.
B: You get the same situation as with Lego bricks: there are a lot of different shapes, a lot of different colors, and you need to stick them all together — and that's a problem some companies already have. The next problem you probably also share is this: we all do CI/CD, we have our environment, and it's mostly running smoothly — you can easily commit, it will be executed, probably in a CI pipeline. But the problem is, as a developer like me, I also want to be able to execute it locally for faster debugging, so that the feedback loop is a lot faster instead of waiting on the pipeline. You probably all know the xkcd comic with "my code's compiling" — these days it's not compiling anymore, it's mostly "waiting on the pipeline, waiting on the pipeline". And for that there is a tool, invented by Solomon Hykes and his team members, and it's called Dagger. So give me a second while we take a look.
B: First we need to look a little bit at the pictures, because I really like them. The main purpose of Dagger is that you have a unified way to run your CI/CD pipelines locally and also in different environments. You can try it on GitLab, you can run it on other platforms, or you can run it locally — but you can use the same commands everywhere.
B: We have this unified description language, and that gives us a really cool advantage — it could even be the future of how we describe our pipelines. Today, when we switch from one system to another, we need to learn new syntax; the configuration files are always a little bit different, and you have to check them every time. This is what Dagger gives us instead.
B: It gives us a short — or rather, a really meaningful — abstraction on top of that. What it really tries to do is reduce the drift between your local environment and the CI.
B: It ensures that you don't have any configuration drift. Another advantage: I don't need to write any more YAML. We are instead, of course, using a new language — it's called CUE. It was invented at Google, because they also had the problem of configuration files: they had a lot of different formats — YAML, JSON and so on — but they wanted a more typed and more dynamic language.
B: With CUE you can easily do string interpolation and all that kind of thing, and that's also the reason why Dagger chose CUE: it gives us a lot of flexibility, and it has similarities to other configuration and package-management systems that we had before. But first, before we really go into what CUE is and all the technical details on that...
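Since CUE keeps coming up, here is a tiny standalone sketch of the features he mentions — types, defaults, and string interpolation; the field names are made up for illustration:

```cue
// Typed configuration with a default: *"3.15" marks the default value.
version: string | *"3.15"

// String interpolation — something plain YAML cannot do.
image: "alpine:\(version)"

// Constraints double as validation; 3 satisfies int & >=1 & <=10.
replicas: int & >=1 & <=10
replicas: 3
```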
B: ...I would really like to get started with a simple example first, and then we'll go step by step deeper into the topic. So the next step is: what do I currently need to run Dagger at all? You can install it locally, and you can run it in CI.
B: Later we also have an example for GitLab, so I will show how that works and explain it. But first we need to install it. I installed it with Homebrew — you can easily do a `brew install dagger` — and the only other thing you need is Docker, and that's it. Then we can get started.
B: Now comes a very interesting point. As I said, we want a unified language instead of writing YAML files, and a Dagger project mostly consists of CUE files, written in the CUE language. I will show you a simple example now — this is a hello world. Everything in Dagger starts with a plan, this type of object.
B: You mostly start with that — and you probably already see some similarities to a programming language you're aware of, because we have the package and we have the imports. This is really close to Golang; if you know a little bit of Go, it looks the same. The reason behind that is that CUE is, in essence, a superset of JSON — you can transform it into a slightly more JSON-like syntax — and YAML is also, roughly, a superset of JSON.
B: That's the reason why you can interchange them. The real execution of each action will later be done by a Go program underneath — and that's part of why Dagger is so powerful: it gives us a big catalog of actions, and each action gives you, say, one small piece of benefit. So first we have here the hello action.
B: On the slide it's called core execute — that means we want to execute something, and we will see later how it gets executed — and we have inputs and outputs.
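As a rough sketch of what such a hello-world plan looked like in the CUE syntax of that era — the package paths match the ones shown later in the session, but the exact field names are from memory of the docs, so treat the details as assumptions:

```cue
package main

import (
	"dagger.io/dagger"
	"universe.dagger.io/alpine"
	"universe.dagger.io/bash"
)

dagger.#Plan & {
	actions: {
		// Build a small Alpine image with bash available...
		_img: alpine.#Build & {
			packages: bash: {}
		}
		// ...and run the actual "hello" action inside that container.
		hello: bash.#Run & {
			input: _img.output
			script: contents: "echo 'hello, world'"
		}
	}
}
```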
B: These are typical concepts that you probably know from other CI/CD systems — how was it called again, I don't remember — but that's it. Let's now look into this example properly; I've prepared it already on my machine, so I can show you how you get started. We have here the same file — the hello world CUE, indented and all — and now we want to execute it.
B: The first thing you need to do, of course, is install Dagger. Let's check it: I'll run `dagger version`, and we're currently running version 0.2.5.
B: It's currently in a really early phase, so a lot of changes are happening, but the documentation is mostly up to date and you can easily get started — there are a lot more examples in there. Now, for example, we can type `dagger` and we see that we really only have two commands, because `help` is not a real command.
B: It's a supporting command, and `version`, of course, only prints the version. So we have `project`, and we have `do`. When you type `dagger do`, you're saying you want to do something — you want to execute an action. We immediately get an error: it says, hey, we need to initialize the Dagger project. Why do we need to do that? We have the hello-world CUE, and it has some imports, so of course we also need to fetch all the dependencies, and currently Dagger doesn't know that.
B: This is a project — so the first command, next to the hello world file, will be `dagger project init`, and with that we initialize the project. What happens in the end: a new folder is created here, called `cue.mod`. This is really similar to the vendor directory from Golang — if you know that, this is literally the same thing in the end. Now we also have the package folder here, and currently it's really empty. So let's go back again — ah, just a question.
B: When do you download the dependencies? Yeah — that's the interesting part. When you now run `dagger do`, you still get an error, because it says: hey, we cannot find the dependencies. `terraform init` does two steps in the end — it initializes the project and automatically downloads the dependencies — but with Dagger you need to do it in two commands.
B: And it already tells us: okay, do a `dagger project update`. So that's what we do now — a `dagger project update`, and that's it. Now let's look into `cue.mod` again: we have the package folder here, and you can see that it was empty before — when we scroll up a little, it was mostly empty, nothing was there — and now...
B: ...there is `dagger.io` and `universe.dagger.io`. These are the packages that we had in the hello-world CUE imports — they come from universe; we'll come back to that point later. But when you now go into the dependency tree and check it — for example, going into universe — you see that a lot more directories were created.
B: For example, let's look at a simple one — I want a simple one... alpine, okay — and you see these packages are themselves also written in CUE, so they are not written in a different language, and you have an import and dependency system right there. But we'll get to that; let's keep the focus. So let's now start with executing our first Dagger action.
B: Let's type `dagger do` again, and now we see that we have available actions — one available action. It's called hello, with the description "hello world". So let's run `dagger do hello`: now the action will be loaded, it will be executed, and that's it.
B: We probably didn't see anything — and that's currently correct, because we did not print the full log output. We can do that by turning the log level up to debug and running the `dagger do hello` again — and now it gets really interesting, because we have different types of actions, and what Dagger unifies, in contrast to other tools, is that every action is executed in a container. That means we have a really isolated environment for each action step.
B: This is probably a little bit different from other build systems: we don't execute directly on the system itself; we use Docker to abstract it in the end. You can also see in the debug output that the BuildKit instance was detected, and you can see that the image was pulled — the alpine image.
B: Then it started to build the Dagger FS, and now we can see here in the info that at step 3 we also have the hello world. No — I'm wondering if I missed it.
B: We can do `dagger do hello` again, and you see it's quite fast — under one second in total; it runs in about 800 milliseconds: starting a container, putting the command in there, executing the command. And this gives us full flexibility: different types of actions can all be described in CUE, instead of writing a lot of docker-compose files or bash scripts or the like to start different containers in different combinations.
B: We have CUE for that, to orchestrate it completely — to unify it — and that's mostly the trick of what's happening there. So I would say we should make a short stop here and ask for questions, because I was probably a little bit too fast, and otherwise I would go on to the next, more complex example.
B: Okay — since there are currently no questions, I'll step back a little and give a short summary of the hello world. First, it's important to know that Dagger uses containers for executing all steps, so everything runs in an isolated environment every time, and CUE is used to describe the action steps. Action steps can come as packages — in a module-like way — so that you can compose them with one another. For example, we can have an alpine package.
B: We can have a docker package, we can have a terraform package, and so on — that's quite cool. But you would probably say: okay, this is not an interesting example, because it's only one step; it's not a real-world system. So for that we should switch to a bigger example.
B: For that I've prepared the official Dagger example, which we have here locally — let me go back.
B: We have here a todo application. It's deployed on Netlify; it's a React application, so you can easily add things to it — you can submit something to the list, and you can delete it again, of course. And now, as developers, we want to deploy it. Typically, how you would do that right now is: oh yeah, we need to deploy it, so we need to write our GitLab CI YAML.
B: We need to do the following things — and then we're probably not able to run it locally. But this is where Dagger comes in and helps us a little bit more. So we have here our todo app; I opened it in another window to make it a little easier to see.
B: It's a little bit more complex here now. Why? Because we do different types of actions. You can really compare Dagger to the old Makefile here, because for our JavaScript — for our Node application — we have a test action, we have a build action...
B: ...and we have a deploy action, so that we can deploy to Netlify. And what's really cool now is that you can also have different inputs and outputs, for when you want to share a result: for example, for the build you want the output from the test stage, so that the test stage always runs before the build — you have a dependency, a relationship, between them. And you probably also want to write content out to a file — you probably know that pattern from CI already.
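A sketch of how such a plan wires the actions together — the test/build dependency he describes is expressed simply by referencing the other action's output; the package names mirror the talk, but the exact fields are assumptions from memory of the docs:

```cue
package todoapp

import (
	"dagger.io/dagger"
	"universe.dagger.io/docker"
	"universe.dagger.io/bash"
)

dagger.#Plan & {
	// Input: read the project source from the local directory.
	client: filesystem: ".": read: contents: dagger.#FS

	actions: {
		_image: docker.#Pull & {source: "node:16-alpine"}

		test: bash.#Run & {
			input: _image.output
			script: contents: "yarn install && yarn test"
		}

		// build references test.output, so Dagger always runs test first —
		// this reference is the dependency/relationship described above.
		build: bash.#Run & {
			input: test.output
			script: contents: "yarn build"
		}
	}
}
```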
B: Why should I use it then, over the normal Makefile approach — why should I not just use my shell scripts? Because you can probably see right here that we have different types of packages: we can, for example, run bash commands here.
B: That's for when you want to run something as a shell script; and here we want to use Netlify. Typically, you know what happens: we would end up with ten different shell scripts that need to be maintained, and we cannot easily compose them with each other — we would need some shared common folder. With Dagger we can instead import them as a dependency and share them with others, and that's a really cool move.
B: So first of all, what we would probably do is run the `dagger do` command to see which actions are currently available, and then we can decide what to run. Oh wait — yeah, okay, that's what we're doing, right, fine. So we first start to build it: we run `dagger do build`, and you see it's currently building the application.
B: You can see right now what's happening here: it's running the pre-build step and the build script.
B: What happens in the end — and this is the reason why you need a Docker engine when you run Dagger locally — is that it sends all the information to BuildKit, and BuildKit does the execution steps for you. So when we repeat it again...
B: ...it's mostly instant, because it's already cached by the BuildKit daemon. If you remember multi-stage images from Docker — that's where the really powerful part comes in here: we can parallelize the tasks. For example, when the deploy output isn't needed for running the build steps, they can run in parallel, just like in GitLab pipelines or other, normal pipeline systems.
B: There's also a cool, funny fact about Dagger here. The logo is a blueprint, and the blueprint shows arrows — think of a box for each action step — and each action step goes in one direction. What's interesting, if you're coming from graph theory, is that you don't see any cycles.
B: Once you are at a point near the bottom, you cannot turn back anymore, and this helps to visualize whether there are dependencies between the jobs and whether they can run in parallel or not. That's also a reason why builds on Dagger are so fast in the end: independent jobs run in parallel.
B: So let's try to deploy it: we run `dagger do deploy`, and you see that the build step was cached. It started a new container for the deploy action; we can also have a closer look at that later.
B: Currently, when you don't run it in debug mode, you don't see that much — only a little bit of the output coming from BuildKit — so for that I would recommend the debug output. Now all of it is written directly to our local system, and we see that the Netlify build action was triggered and our app was deployed.
B: So that's it — it's still the same stream every time; we have it here. So let's make a small change to check it.
B: It runs right now. And now, of course, because of the caching: I changed something in my JavaScript, so the build needs to be re-run as well, and this step is synchronous — you need the build, of course, before the deployment. So now the deployment container is recreated and we get an output out of that.
B: I can take the question from Daniel, because that's a good question: why is it portable — or why is it not? How does it run in other pipelines? We're coming to that point in a few seconds.
B: Okay, but yeah, let's look into the next part. I will add it now here.
B: Hm, some of my typing doesn't work correctly — but let's see. I have Dagger here in the very same project, and I have also set up the Dagger pipeline for the project. We can look into the job here — we deploy to Netlify from it as well — and what's really cool about it is this.
B
We
have
gitlab,
of
course,
but
we
use
diplab
as
as
a
middleware,
so
we
do
like
we
doing
like
the
same
command.
So
we
do
the
data
project
updates
just
like
protection
dependencies,
and
then
we
say
innovation
we
want
to
trigger
currently
because
it's
your
multi-line
collapse,
we're
doing
like
the
data
to
deploy
and
what
we
only
need
to
have
like
available
in
the
pipeline
is
like.
B: The only things we need to have available in the pipeline are the docker image and the docker-in-docker service, as documented, so that we can use docker-in-docker to give Dagger a Docker engine to connect to. Then it executes the steps in exactly the same way, and I don't need any other abstractions — I don't need to write new scripts, I can reuse the same ones, and that's a really cool benefit.
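Putting those pieces together, a minimal `.gitlab-ci.yml` would look along these lines — the image tags and the install step are assumptions based on what was said, not copied from the project:

```yaml
deploy:
  image: docker:latest
  services:
    - docker:dind              # docker-in-docker gives Dagger a Docker engine
  script:
    # install the dagger CLI via the shell script, as mentioned in the talk
    - apk add --no-cache curl
    - curl -sfL https://dl.dagger.io/dagger/install.sh | sh
    - export PATH="$PWD/bin:$PATH"
    # then literally the same commands as on a laptop:
    - dagger project update    # fetch dependencies
    - dagger do deploy         # run the action
```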
B: What I also really like is that they have colored output already, because that makes it easy to see what's happening and what's not. And this can literally be done for every pipeline system — I know people are currently working on examples for Jenkins, for Azure DevOps and other systems. We can also have a look at how complex our Dagger CI pipeline really is.
B: So we have here the docker image, we're providing the docker-in-docker service to it, and as part of that we also use a cache, and we install Dagger — currently it's installed via a shell script, because that's currently the fastest way — and then we have a template for the `dagger do` call... no, I cannot see it right now.
B: I need to move it — and then I can pass, for example, arguments, and then I would have build and so on; test works from there the same way, with the cache path attached — there's the BuildKit connection again — and it's literally the same command that I could run here locally with `dagger do build`.
B: Images can be built there just the same — but this brings us back a little bit to the theory, because Docker images are nothing more than files: a file system in a tar archive, like a zip file in the end, and then you mostly do manipulations on that. For what you really need here, let me go back into my slides, because I have a picture.
B: I mostly stole this picture from the docs. We have here the Dagger engine, and then we always need a Docker engine — but what it really uses under the hood is BuildKit, so you don't need the Docker engine as such; you can also use plain BuildKit for it. That's a more advanced concept on top, though. The main benefit of using Docker: when I have Docker Desktop, for example, BuildKit is already included.
B: That's the reason why you don't need to run BuildKit separately — but you can; you don't actually need the full stack, you can run it on BuildKit only. Either way, Docker itself is mostly using containers for everything.
B: As we saw in the todo app just now — let's look into the Netlify part... actually, let's go to the alpine package again.
B: What you literally do here: when we use the alpine package, it does a docker build — it pulls the current alpine version. And this is also where the strength of CUE comes in, which you don't have in YAML: you can use templates. We have here the version; it can come in from the outside as a parameter — it's a string — or we can have a default value.
B: So when you put it in there, it would be version three-point-something, blah blah blah, and then, typically, it runs the docker build, building the image with all the dependencies that we need. When we look back into the alpine package — let me do it — you see it's the same step here; we just abstract it a little bit. In the deps step we have the alpine build here, and then we said we want to install bash and git.
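The parameterized alpine build he's walking through looked roughly like this — the default version string is a placeholder and the field names are from memory of the universe package, so treat them as assumptions:

```cue
package alpine

import (
	"universe.dagger.io/docker"
)

// Build an Alpine base image with a caller-chosen set of packages.
#Build: {
	// The caller can pass a version; *"3.15" marks the CUE default.
	version: string | *"3.15"
	packages: [string]: _

	docker.#Build & {
		steps: [
			docker.#Pull & {source: "alpine:\(version)"},
			// One apk-add step per requested package (e.g. bash, git).
			for name, _ in packages {
				docker.#Run & {command: {name: "apk", args: ["add", name]}}
			},
		]
	}
}
```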
B: Let me check what's happening — all right, here you see it: the alpine image was pulled; it was mostly cached.
B: You see that it's now really executing, inside that container, the yarn run together with bash and git. Then we take this container and build on top of it, and on top again — it's mostly Docker all the way down. That also gives us the flexibility to say that Dagger runs on any CI system and on any operating system: wherever you can run Docker or an equivalent, it will still work. So I hope that answered your question — if there are still questions on that, feel free.
B: So the question was: apart from removing the need for YAML and the usage of CUE, what else does Dagger bring to the table? It brings us a unified language — a DSL, a domain-specific language — that we can use to describe our pipelines, so that, for example, when you shift from GitHub Actions to GitLab YAML, you don't need to re-implement every action and check how it behaves on that platform.
B: You take your Dagger setup and transport it from one CI system to another CI system. I would see that as a big benefit, because then you don't need to rewrite your scripts every time you move somewhere else.
B: Typically you have the problem that you have different local scripts and different CI environment scripts, because each has slightly different requirements — and Dagger gives us the same behavior in both places, the same conformity, mostly. That's the real game changer, I would say, because I currently see that Dagger can be the next evolution in this area.
B: Yeah — regarding which environments are currently involved: if you check the discussions around CI systems, I know that the CloudBees folks already worked on the Jenkins integration, and I also saw some involvement on the Discord around Azure DevOps, where it's currently being worked on. In the end you don't need many pieces: you need Docker, and you need the CLI installed on your system.
B: Yeah — and that was the current short overview. Now we could go deeper into different topics: we can go into the CUE language, or take other questions, or we generally try a sample piece of work — for example, wrapping another tool. There's already a proposal in the community for wrapping terraform, so that you use Dagger to do a `terraform init` and a `terraform apply`, and you have this unified language there too.
B: Yeah — so currently the daemon, the execution engine, is BuildKit. That's what Dagger uses at its heart to execute all the container commands. It's also what Docker uses under the hood for a `docker build`: when you do a `docker build`, it is sent directly to BuildKit, and BuildKit builds — with containerd, and runc underneath, continuing down the stack.
B: I think by default it currently runs with root privileges, but you can also start BuildKit without root privileges — rootless is supported — though I haven't seen them executing it that way yet. We'll get to it later: you can run it without privileges, which matters on CI systems. BuildKit comes out of the Moby project, by the way.
B: Yeah, okay — you probably know all of this mostly from using Docker: you see that it's checking and resolving, and it's building a dependency tree — which is actually called a DAG — to know which steps will be executed. And, funny enough, the first instruction set for BuildKit that everyone already knows is the Dockerfile, via `docker build`: a Dockerfile is a build instruction for BuildKit, telling it how to execute the steps — and there are a lot of other frontends.
B: There are other implementations, built by various people, as demonstrations that it's portable — because this comes from keeping the state portable.
B: One was called the Mockerfile, and, just like with Dockerfile instructions, BuildKit understands it, because the frontend translates the Mockerfile into LLB — the instruction set for BuildKit — and then BuildKit understands it. This is a really deep topic in terms of what you can do there, and it's also what Dagger uses under the hood.
B: Everything is executed in a container, and file-system operations are simple container operations, because file-system operations on container images are nothing more than putting tar archives together and changing the diffs and their order.
B: Yeah — we need to check the other questions too, but I can explain it shortly; let me see, I have an image for it in the background — give me a second.
B: You have here the hashes, and each hash is a simple tar file — you can see it when you do a `docker export`. It's multiple layers here: we have the base layer, it's only linked once, and they're all connected to each other. That's where the overlay file system comes into the game: it's a union, a merged view of a file system.
B: And this is what BuildKit uses underneath: it goes into the tar file and changes files, or adds a new tar file on top, and then you have the multiple layers — every step, every execution in BuildKit, also generates a new layer. In the end it's the same with your Dockerfiles: if you have a RUN command, it will generate a new layer; another command that changes files generates a new layer as well — it's all similar.
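The "layers are just tar archives" point can be poked at without Docker at all — a tiny sketch with made-up paths:

```shell
# Create a fake "base layer": a directory tree packed into a tar archive.
mkdir -p layer0/etc
echo "hello" > layer0/etc/motd
tar -cf layer0.tar -C layer0 .

# A second "layer" holds only the diff: one changed file.
mkdir -p layer1/etc
echo "patched" > layer1/etc/motd
tar -cf layer1.tar -C layer1 .

# A union view: unpack the layers in order into one rootfs; later layers win,
# which is roughly what overlayfs does for real container images.
mkdir -p rootfs
tar -xf layer0.tar -C rootfs
tar -xf layer1.tar -C rootfs
cat rootfs/etc/motd   # prints "patched"
```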
A: Yeah, okay — coming from BuildKit back to Dagger: the BuildKit daemon is running, and it picks up the action somehow?
B: Yeah, it gets sent the action. When we look at our example, we have here — this is the important part — the client section, and this piece of CUE, the read of the file system, effectively says: send the current directory. That's why we have it there. We can also say that we don't want to read certain files, like the build output, and then this gets sent as a tar archive to BuildKit.
B: BuildKit then does the execution steps and everything underneath — that's where the actions mostly run — and then you can also say: okay, I want to write the content back out to the local file system. This is also how Docker works: when you build a Docker image, everything from your local file system gets packaged up as an archive and sent to the Docker daemon, and then it executes all the steps.
B: Whatever you need — it can be TCP, you can configure it in an advanced setup however you want. By default it simply uses the Docker socket as the default connection, but you can override it.
A: Just to clarify my misunderstanding: do I need a BuildKit daemon running to use Dagger?
B: The reason why you shouldn't worry about that is: if you want to run BuildKit, the simpler approach is to run Docker, because BuildKit is included, and then you don't need to figure out how to install BuildKit and all that. That's why they say: use Docker for it, because it's a little bit simpler.
B: As you can see, this is what it did when I ran the Dagger project earlier — well, let's make an example; I think I'll delete my whole cache with it, but that's fine.
A: There's an interesting question in the chat: how new is Dagger — would you consider it production-pipeline ready? I think it's pretty new: it left its private beta two weeks ago and it's in rapid development, I would say. So I would wait, test it, try it out, give feedback to the developers, open issues and pull requests and so on — but in my personal opinion it doesn't look like it has reached a general-availability release yet. Who knows what the future will bring, though.
B
Yeah, so here's what you can see: before I did the deploy step, I removed the database container, and it started this container again so that it can execute the steps. So you don't need to have it running — it could also work with containerd only... yeah, it worked. I'm not sure about Rancher Desktop — I don't have it, so that's the reason why I cannot show it.
C
So the Dagger CLI needs the Docker socket — it's directly talking to Docker. So if you go with Rancher, you have to enable it.
B
Yeah, thanks. Yeah — wait, wait, wait! I can also revert that, because there's an example script.
B
You can start the BuildKit container on your own — it currently needs to be run privileged, I think — and then you need to connect to it, and this can be done by...
B
Yeah, the main trick — as a side note: when you want BuildKit running, it doesn't need to run on your local machine. You can specify the BuildKit host — this can be a container, it can be a TCP connection, it can be a socket, it can be mostly anything. And this is how it connects: by default it connects to a socket, but you can also overwrite it, and there's already some guidance on that. Let me check that.
B
Customizing — I think here you see it: you can set it up yourself, or you can also point it at a remote host. Here's the instruction for how it should work in a container. So there are different ways to run it, but for simplicity reasons I would recommend using Docker.
A
So if you're trying that out, potentially do it on, for example, a GitLab runner which is just being used for running the Dagger action now.
B
Yeah, it works out of the box. I didn't change anything here — I used the official GitLab runners, and we can also run a new pipeline so that I can prove it. No, I cannot — wait, I need to switch to my environment; that's the reason why you don't see it.
B
Let's run it — so it's reverting our change as well; I hope that's not important.
B
Okay, I need to check why Netlify is working differently than I expected — okay, that's interesting. So you see it's running here on the GitLab shared instance; it's now pulling the image.
A
In the end, I think, if you want to play around — rather than a production system, have a dedicated runner on a virtual machine somewhere and just play around and see how it performs or what it does, just to avoid any potential things which might harm your production system.
B
So, probably a very interesting question: how you can give your feedback and get involved in the community. Of course you can use GitHub for that; check the people out on Twitter as well, and they also have an official Discord, so you can connect with them and ask questions. They are really eager to help — there's currently a lot of work on growing.
B
So a lot of changes are currently happening, and they're really happy about contributions. Sometimes the documentation is not so intuitive, but they are also really open to changes there.
B
And that's mostly it about Dagger. In summary, I currently really like it; it's still a little bit complex to get familiar with CUE lang.
B
So for me it's still a steep learning curve, but I think it can be the next step for running our CI/CD systems and doing day-one operations on them, and for having all the developers — all people — speaking the same language. You're not required to learn bash scripting, and you can easily share things, because you have packages you can reuse from the community, like a big catalog. Because when we started looking at how much is currently available, there's already a lot you can do.
B
We have Netlify, we have AWS, we have bash, we can do Git actions, we can do yarn, we can do PowerShell; Terraform is coming up, and a lot of other tools. I think in the future there will be a big ecosystem of actions that you can share.
B
It's the same idea as CircleCI orbs, and that's a really cool approach, so that you don't need to reinvent the wheel every time you want a small deployment or a small script. It gets you started faster, and you can use it not only in CI — you can run the same things on your local machine, which also makes it easier for us to debug pipelines.
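Reusing those shared packages looks roughly like this — a hedged sketch against the 0.2-era `universe.dagger.io` catalog; the action names and the site name are invented for illustration:

```cue
package main

import (
	"dagger.io/dagger"
	"universe.dagger.io/alpine"
	"universe.dagger.io/bash"
	"universe.dagger.io/netlify"
)

dagger.#Plan & {
	client: {
		filesystem: ".": read: contents: dagger.#FS
		env: NETLIFY_TOKEN: dagger.#Secret // read from the local environment
	}
	actions: {
		_image: alpine.#Build & {packages: bash: {}}

		// A small shell step -- no standalone bash script in the repo.
		greet: bash.#Run & {
			input: _image.output
			script: contents: "echo hello from dagger"
		}

		// Deploy with the shared netlify package instead of hand-rolled CLI calls.
		deploy: netlify.#Deploy & {
			contents: client.filesystem.".".read.contents
			site:     "my-site" // hypothetical site name
			token:    client.env.NETLIFY_TOKEN
		}
	}
}
```

The same file drives both `dagger do` locally and the CI pipeline, which is the point the speaker is making.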
B
Yeah — mostly what I do to keep up with all the changes that are happening, what I can recommend, is to go back to the Discord and read what's currently going on, and people there already have new tools. So I think for getting started this could be quite interesting.
B
What's currently a problem: it's not that comfortable to work with CUE yet, because there's no big IDE integration — there's a small VS Code extension, but that's not a full LSP. I'm not sure about the roadmap, but I know that they're repackaging the packages; that means they have, for example, Terraform and other smaller things, probably also something for GitHub.
B
It could also be that you use the GitHub API through it and configure your pipeline with something like an event stream — but that's a slightly bigger topic. They are mostly open for discussions, and I think the next interesting thing people can get involved in, of course, is which environment should be integrated next, because a lot of people are currently working on that. If you're not sure whether your CI system is covered yet, I think that's always a good hint.
A
Also worthwhile to mention that there was a seed funding round for the company, so they might level up with engineers and resources down the road.
A
So I would recommend following the Dagger Twitter account — it's dagger_io — and also retweeting, liking and asking questions in public, and engaging on the issues and discussion forums, just to get an idea. When you see something, contribute: help improve the documentation with your experience, but also look into improving certain things, or maybe add a feature request.
A
For — I don't know, I haven't seen it — linting the CUE lang syntax, for example. Because I will probably struggle a lot with a new language to learn, after hating YAML for a while and now having adopted it. So anything that makes it more convenient in terms of troubleshooting.
B
That would be okay — for that I can actually quickly give it a shot. We also have formatting tools in CUE lang: like gofmt, there's `cue fmt`, and you can format files directly. But you can also do things like `cue eval`, and then you would see how the whole file would look fully evaluated. I think we should dive into this in a next session dedicated only to CUE lang and all its different kinds of tooling, because this is its own language.
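As a tiny taste of why CUE feels like its own language: types, constraints and concrete data are all just values that unify, and tools like `cue fmt`, `cue vet` and `cue eval` operate on the unified result. A minimal illustrative snippet (not from the talk):

```cue
// A constraint that doubles as a type.
#Port: int & >0 & <65536

service: {
	name: string // schema: still open, any string
	port: #Port
}

// Concrete data unifies with the schema above;
// `cue vet` fails if a constraint is violated (e.g. port: 99999).
service: {
	name: "web"
	port: 8080
}
```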
A
Yeah — or maybe have a pre-commit hook locally in my dev environment, for example, which does the linting or the syntax checking or something like that. So before I even commit things, I can verify them automatically. I don't need to remember the exact command, but instead have it executed automatically.
B
Yeah, yeah — and what's also good: sometimes, when you're up a little late on European time, the developers are on a stage, developing live, so you can watch what they are doing. That's also really cool if you want to. Last time I was up a little late and dropped in, and Solomon Hykes and some other developers were on the stage, and I briefly discussed this topic with them.
B
So yeah, they're trying to grow involvement. And that's it for the start — if you have questions, reach out. The most interesting thing is probably the project: you can find the whole example project that we did, with the Netlify app, on GitLab. It has everything I described, also a little bit about what you need to do to deploy it. Feel free to open merge requests and help us build.
B
Also more examples — and we could probably, in the next six months or so, do a revisit on Dagger: what evolved, what changed, and then go a lot deeper, because it's really in an early phase right now. But it's quite cool to be an early adopter here again, yeah.
A
And that's really great — thanks for that. I will collect the resources you shared today in a blog post this week and also share the recording. Is there anything else you want to mention or highlight?
B
No — the only thing I wanted to highlight, just something we need to check: I would like to improve the CI template a little bit, so that we have an official one that you can easily use to start with Dagger directly from GitLab.
B
They're also already working on Gitpod — people are trying out Dagger using Gitpod, using Dagger inside Gitpod and then using it in GitLab, for example.
A
Then, if you don't have it locally, you can spin up a Gitpod workspace and run Dagger inside the Gitpod workspace, basically. Yeah, okay — that's kind of mind-bending now. It's inception within the inception, yeah.
B
Yeah, yeah — I think there will also be composed actions coming up, so a lot more wrappers will come in the future. What I found really interesting: there was already a project that does AWS Lambda with Dagger. People are working on that, so that you don't need to use the Serverless framework — you use Dagger directly for deploying Lambda functions and all that stuff.
A
A while ago, yeah — I think it was more like orchestrating the deployment and doing all the cloud steps, but it didn't involve any container builds. It had a similar idea, though: you finish your CI steps, and then you do continuous delivery with Waypoint.
B
Yeah, I think the main benefits, in short, are what you can see here. You don't need to write a deploy.sh script that looks different everywhere — you have a single CUE file for it. Then you have the catalog that will gradually grow over time, probably also with different community packages; they already have different sources: universe is the official one, and there's also a community space.
B
It's called Europa; there's also an alpha version, and you can have your own modules if you want to. But the main benefit is really that you can run it locally on your machine. You don't need to run a whole CI system to test all the changes you make to your CI pipeline, and you get faster feedback instead of waiting on all the machines. Where it probably breaks currently is with really large workloads.
B
So, for example, if you need to do number crunching or really heavy compiling that only takes CPU, Dagger won't make it faster — you still need the hardware resources for that. But if you have multiple steps that you need to orchestrate, it can be a good win through parallelization, because it sees: okay, this step is independent from that one. You see it here directly — these steps can be run in parallel, okay, and then the next step, and so on.
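That scheduling falls out of the data dependencies in the plan: actions that don't consume each other's outputs form independent branches of the DAG, so Dagger can run them concurrently. A rough sketch (action names invented, again against the 0.2-era API):

```cue
package main

import (
	"dagger.io/dagger"
	"universe.dagger.io/alpine"
	"universe.dagger.io/bash"
)

dagger.#Plan & {
	actions: {
		_image: alpine.#Build & {packages: bash: {}}

		// lint and test reference only _image, not each other,
		// so they can execute in parallel...
		lint: bash.#Run & {
			input: _image.output
			script: contents: "echo linting"
		}
		test: bash.#Run & {
			input: _image.output
			script: contents: "echo testing"
		}

		// ...while release consumes test's output, so it has to wait.
		release: bash.#Run & {
			input: test.output
			script: contents: "echo releasing"
		}
	}
}
```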
A
Yeah, on the module use and reusability: I'm a little worried about what happens when something is incompatible and you're using the same actions or modules or whatever they are called. So quality assurance for extensions and modules — this will be a thing in the future, and I'm sure the developers have already thought about it.
A
So this could become a little tricky, and the learning curve might not be as gentle as intended. But we will see about this — you need to start somewhere, and then you can iterate and improve the process.
A
Okay — if there are no further questions, I would like to say thanks a lot for preparing for today. You know, I didn't ask for it, but I really appreciate it. I hope everyone stays safe, and we'll be meeting again next month — the topic is still open. It will be in the week before KubeCon.
A
I don't know what we will do, but we will decide shortly. The event is already scheduled in the meetup group, so you can plan your calendar if you want to.
A
I'm not sure — maybe KubeCon, cloud native, something like that. Yeah, okay. Okay then, thanks for joining today, and see you next month. Bye!