From YouTube: GitLab 13.3 Technical Showcase
Description
The GitLab 13.3 Technical Showcase includes:
Cesar Saavedra, Technical Marketing Manager: Edit Feature Flag User Lists from the UI and ECS Task Definition from local JSON
Fernando Diaz, Technical Marketing Manager: Coverage-guided fuzz testing for Go and C/C++ applications, On-demand DAST scans, SAST security analyzers available for all, Guided SAST configuration experience
Itzik Gan-Baruch, Technical Marketing Manager: Create a matrix of jobs using a simple syntax and DAG Visualization
William Galindez Arias, Technical Marketing Manager: GitLab Workflow extension for Visual Studio Code
Tye Davis, Senior Technical Marketing Manager: Multiple Value Streams
A
...an ECS task definition from the local JSON, and Fern will be talking about coverage-guided fuzz testing, on-demand DAST scans, SAST security analyzers, and guided SAST configuration. Itzik will be talking about creating a matrix of jobs using a simple syntax and DAG visualization; William will be talking about the GitLab Workflow extension for Visual Studio Code; and Tye will take us home with multiple value streams. So I will turn it over to Dan for any other things.
C
Thank you, Dan, and good morning, good afternoon, everyone. Just before the team starts: we decided to take advantage of this enablement, and after this recording we will split it into individual standalone videos for each demo, and then we will share them with end users, with everyone, with the world, actually. So you will notice that before each demo we will introduce ourselves again and give some background.
C
Don't think it's weird, because, although it is a long session, these are also going to serve as standalone demos. And that's it. Regarding the slide deck: we will just share the deck with all of the content in the customer success channel. The format will be very similar to the group conversation: the team will very quickly walk through the main points of each slide, but not too much time will be spent on the slides.
C
We want to make it as efficient as possible, so if you have any questions, focus on the questions; that will allow us more time for questions and answers. After we complete each slide, we will move on to the demo. I believe we will have more time for questions this time, with less content than last time, so you will have time to digest things, ask questions, and see the demos. So with that, I'll turn it over to Cesar.
D
Screen... very good. Hello, my name is Cesar Saavedra, I'm a Technical Marketing Manager here at GitLab, and in this short video I'm going to be covering a new feature introduced in GitLab 13.3 called Edit Feature Flag User Lists from the UI.
D
So before 13.3, you actually could not create a user list. This is brand new in 13.3.
D
So why does it matter for customers and prospects? They can now segment users. You can think of segments of users like, you know, platinum, gold, and silver; you can have a user list for each of those, and you can target features to those specific segments, so it can help you with A/B testing. Another thing it can help you with: it can lower the risk and increase customer satisfaction.
D
Targeting a feature to a specific segment allows you to test out that specific feature in production before you roll it out to everybody else. And for us, for GitLab, it enriches our advanced deployment techniques arsenal. It also enriches our continuous deployment story, and this feature differentiates us from the rest: not very many of our competitors actually have this capability.
D
So let's jump into the demo. Just in case, I also have a link to a sample project I'm about to show you, so you can go and look at the code. There's also a recording that goes more in depth into this feature, and there's an interesting issue for you to follow regarding feature flags and how they are going to evolve. All right, so this is the feature flag demo; that's the other demo.
D
There we go. All right, so to access feature flags, you go to Operations > Feature Flags, and you would see all your feature flags here. But now you have this other menu called Lists. In here I've created two lists; for the sake of time I already pre-created one, but the way to create a list and segment your users is: you say New list and just give it a name. It could be fflist3, for example, and then, once that's created, you would add users.
D
In that case, say, cesar@gitlab.com, for example, and you can separate multiple users with a comma, other @gitlab.com addresses, and then you just say Add, and then you have your new list. Okay, so once you create the list, here it is, the new one, fflist3. How do you use this list? When you come to Feature Flags, as you create a feature flag, you have an opportunity to use it.
D
Here in the strategies: let's say, for production, these are the options you can select. For example, roll out this feature to a specific percentage of your users in production.
D
So that's basically how you use it. I'm going to cancel out of that. One thing you also need to do: if you want the feature flag to be active, you have to turn it on. Right now these two feature flags here are off, and just to show you that, I'm going to run a script with a few emails, or usernames, actually. I'm going to run it with four usernames, and this sample program, here is what it does.
D
It registers a username and then attempts to execute the feature that has a flag. If the user is defined in the list, it will allow the user to use the new feature; if not, it will just use the old way of doing things, without this new feature.
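As a tiny sketch, the sample program's branching logic looks roughly like this (everything here is illustrative: the names and the list-membership check are assumptions, since the real demo asks GitLab's feature-flag service rather than a local map; "pluto" and "magic" are the usernames used later in the demo's user list):

```go
package main

import "fmt"

// userList stands in for the GitLab feature-flag user list; in the demo,
// pluto and magic are the two usernames on the list.
var userList = map[string]bool{"pluto": true, "magic": true}

// featureEnabled pretends to ask the feature-flag service whether the
// flag is on for this user (here simplified to list membership).
func featureEnabled(username string) bool {
	return userList[username]
}

// run registers a username and picks a code path, as the narrated
// sample program does: new feature for listed users, old behavior otherwise.
func run(username string) string {
	if featureEnabled(username) {
		return username + ": new feature"
	}
	return username + ": old behavior"
}

func main() {
	for _, u := range []string{"pluto", "magic", "rex", "fido"} {
		fmt.Println(run(u))
	}
}
```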
D
Again, the flag is off, right? So everyone is getting the same behavior; nobody is getting the new feature. And the strategies are here.
D
So let's look at that one. I have three strategies: one for QA, one for staging, one for production. Here for QA I'm going to use my user list, which has pluto and magic, meaning that pluto and magic will be the only ones getting that feature. In staging, all users will get the new feature, and in production I only want 80% of the users to get the feature, and 80% of four is three, so only three users will get the feature.
D
In staging, everyone is getting the new feature, and in production, eighty percent are getting the new feature, and eighty percent of four is three. So three usernames got the new feature and were able to see it, and one of them was not.
D
Here are some resources, which I already mentioned, and I gave you on a slide here the high-level steps on how to create a user list and how to maintain it. And that was the conclusion of this demo. Thank you very much.
D
Very good. So, my name is Cesar Saavedra, I'm a Technical Marketing Manager with GitLab, and in this short demo I'm going to be covering the ECS Task Definition from local JSON feature, which was introduced in GitLab 13.3.
So previously, before this new capability was introduced, we were able to deploy to ECS using a push-to-ECS deploy template from GitLab, and that was it. With this release, you are actually able to define and have the actual task definition file in your GitLab repository and use it to push your project to an ECS cluster.
D
So why does it matter as a customer? It allows you to manage and maintain your task definition via our version control and collaboration capabilities within GitLab. It also permits stakeholders to collaborate on the configuration and development of the task definition file, and obviously it streamlines your deployment to any ECS cluster.
D
It
allows
you
to
roll
back
to
a
previous
version
of
the
task
definition
file,
and
it
also
helps
you
with
your
organizational
organizational
audit
and
compliance
regulations
for
your
ecs
container
definitions,
so
the
task
definition
file.
It
actually
defines
the
way
your
container
is
going
to
look
like,
and
if
your
organization
has
specific
requirements,
then
they
need
to
be
captured
within
your
test
definition.
I
mean
requirements
for
containers
and
regulations
for
that.
D
What does it matter for us, for GitLab? Well, it enriches our Amazon deployment story. It also helps us engage with shops that are using AWS ECS for containers, not necessarily Kubernetes shops, and it gives you an option and an opportunity to establish a beachhead at an ECS shop and then expand from there. Again, I've prepared a recorded demo; here's the link, and also a sample project for you to look at later, and some things to follow.
D
All right, so let's go to the demo now. This demo is a bit more involved, and I only have 10 minutes, so instead of going through the entire process, I'm just going to go over the things you need to do in order to use this new capability. To connect to an ECS cluster, well, first of all, you need to have an ECS cluster, right? So I have an ECS cluster here on Amazon, and it's running right now.
D
It has three services, and each service instance is defined by a task definition file. Okay, so before, you had to define your task definition files on the AWS side, and that was it; you could just deploy to them from GitLab. With this new capability, you can actually define, maintain, and keep the task definition file on the GitLab side, in the GitLab project.
D
Now, in order to use it, obviously you have to have one. So let's see if I have it here. Yes: if you go to the project itself, I've created a directory called ci and I've stored the task definition there. This is a JSON file that I put together that basically defines what my container is going to look like when it runs on ECS.
D
Okay, now you may ask yourselves: how do I create this task definition file? You can find some documentation on AWS with a template that you can start with, but the way I usually do it, because I'm lazy, is to just go through the wizard. There's a wizard here on the AWS console: you say Task Definitions and then Create new.
D
You have to make some slight changes; I actually cover those in detail in the video link that I gave you earlier. So, in order to use the task definition file, you have to have it defined and created in your GitLab project, like you see here on the screen.
D
Now, another option you have here: you can actually paste the content of the JSON here, in this value field, but then you have to change this to say "file". Okay, so it's either-or: you can have the location of the file here and set this to "variable", or you can paste the content of the JSON here and change this to "file". And then, obviously, you need to have all these other variables to be able to connect to AWS and to the ECS cluster.
D
Again, I cover this in detail in the video that I shared with you in the presentation. And then, when you run the pipeline, this runs with Auto DevOps. This is a very simple pipeline; this is what it looks like. It just includes this template, the ECS GitLab one, which I believe is right here; it has five stages, and it includes another template here.
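The pipeline he describes can be sketched as a minimal `.gitlab-ci.yml`. The template and variable names below follow GitLab's documented Deploy-ECS setup for 13.3; the cluster name, service name, and file path are placeholders:

```yaml
# Sketch only: deploy to ECS with a repository-hosted task definition.
include:
  - template: AWS/Deploy-ECS.gitlab-ci.yml

variables:
  # Points at the JSON task definition kept under version control
  # (the demo stores it under a ci/ directory in the project).
  CI_AWS_ECS_TASK_DEFINITION_FILE: ci/task-definition.json
  CI_AWS_ECS_CLUSTER: my-cluster
  CI_AWS_ECS_SERVICE: my-service

# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION are
# expected as masked CI/CD variables in the project settings.
```

Setting the variable to a file path (rather than pasting the JSON into a variable's value) corresponds to the "variable vs. file" toggle shown in the demo.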
D
The last thing I wanted to show you was the actual pipeline. This pipeline takes many minutes (well, I'm not sure so many, but a few minutes), and for the sake of time I'm going to skip it.
D
But
you
know
it
builds
the
java
pro
sample
program
that
I
have
here
and
then
it
pushes
it
to
to
the
ecs
cluster
on
aws
by
doing
that
command
and
then
the
way
I
knew
that
it
was
running
is
because
I
had
changed
the
background
color
to
to
brown
and
then,
when
I
check
the
live
environment
well,
this
is
supposed
to
look
like
brown,
but
it
looks
kind
of
like
red
to
me,
but
but
I
knew
that
it
had
been
deployed
successfully.
D
So here, the last thing I wanted to show you was just the high-level steps to follow to use this feature, and that concludes this demo. Thank you very much.
B
All right, thank you, Cesar. So we're running a little bit short on time here. I think, Fern, could you cover one of your favorites of the four that you've got, and then, if we have time, we'll have you do one of yours?
E
Next time, okay. I can actually quickly run through all of them, but I'll be pretty quick and give the main details and the main value of each one, if that's okay.
B
E
If you haven't seen fuzz testing before: it's where we send a bunch of unexpected or random data to an application to try to cause it to fail, to panic, or to find some type of underlying issue.
E
The way this delivers value to our customers is that it can find bugs and vulnerabilities that other QA processes can't find, and from there it can uncover different security issues. Just to show you the different languages we have: it's available right now in five different languages, and we're working on more. The way this works is that the developer is actually able to add tests.
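As a sketch, the CI side of this setup looks roughly like the following. The template name and the hidden `.fuzz_base` job follow GitLab's documented coverage-fuzzing template; the job name is a placeholder, and the target build steps are elided:

```yaml
# Sketch only: enable the coverage-fuzzing template and register one
# fuzz target job.
include:
  - template: Coverage-Fuzzing.gitlab-ci.yml

my_fuzz_target:
  extends: .fuzz_base        # hidden base job provided by the template
  script:
    # Build the language-specific fuzz target here (go-fuzz for Go),
    # then hand the binary to the runner shipped with the template.
    - ./gitlab-cov-fuzz run --regression=$REGRESSION -- my_fuzz_target
```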
E
The way this is done is that the coverage-fuzzing job is included, along with the fuzzing target. What the fuzzing target does is set up the fuzzer: each language will use a different fuzzer, or actually each application will use a different fuzzer, as defined by the user, and there will be a different package used for each language. In this case the package is going to be go-fuzz, and go-fuzz will run the fuzz testing. Let me show you an example of what the fuzzer looks like.
E
So
the
filter
calls
this
parse
complex
function
and
it
passes
it'll
pass
in
random
data
to
it,
which
will
be
random,
random,
byte
array,
and
then
it
knows
that
it's
the
fuzzer
because
of
this
value,
that's
added
to
it,
and
then
now
you
can
see
that
when
we
look
at
the
actual
function,
that's
being
fuzzed
right
here.
If
len
of
data
equals
six,
it
actually
populates
it
with
seven
an
array
of
length.
Seven
so
it'll
cause
it'll,
go
ahead
and
cause
a
panic.
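A minimal Go sketch of the fuzz target just described (the function and package names are illustrative, not the demo's actual code): the target hands the fuzzer's random bytes to a parser that mishandles six-byte inputs, so coverage-guided fuzzing can drive it to a panic.

```go
package main

import "fmt"

// ParseComplex is a hypothetical parser with a planted bug: for an input
// of exactly six bytes it fills a seven-element array from the input, so
// reading data[6] panics with an index-out-of-range error.
func ParseComplex(data []byte) bool {
	if len(data) < 6 {
		return false // too short to be a valid record
	}
	if len(data) == 6 {
		out := make([]byte, 7)
		for i := 0; i < 7; i++ {
			out[i] = data[i] // bug: data[6] does not exist -> panic
		}
	}
	return true
}

// Fuzz is a go-fuzz style entry point: the fuzzer calls it with random
// byte slices and watches for panics.
func Fuzz(data []byte) int {
	if ParseComplex(data) {
		return 1 // interesting input: the parser accepted it
	}
	return 0
}

func main() {
	fmt.Println(ParseComplex([]byte("hello, world"))) // prints "true"
	// ParseComplex([]byte("sixbyt")) would panic: that is the crash the
	// coverage-guided fuzzer is expected to find.
}
```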
E
So if you need to test how secure a system is, or you're working on a specific path of an application that you want to verify, you can go ahead and run DAST on that without having to push code to a feature branch and have a pipeline run. It makes DAST easier to configure and get started with, and it just enables developers to do more tests on the running application.
E
I'll just show you where it's done: you create a new site profile, you save the site profile, and then you can go ahead and run DAST on it. Under On-demand Scans you create a new DAST scan, pick the profile, and then you just run the scan, and it will run DAST on the URL that you targeted. This could be an IP; this could be pretty much anything that you set it to, usually related to your application or application path.
E
It has a site map, so it will go to the site map and also check the other paths. So that's the way of setting up DAST and running it on demand. And then the last security features in this release are SAST for all and guided SAST configuration. SAST for all means that all the open-source scanners that perform SAST for the different languages are being moved to the base GitLab offering, and from there that will showcase this to our customers.
E
Our customers will see the value of our security tools, and the driver towards Ultimate is that our own in-house SAST analyzers will be in Ultimate; there's been lots of research showing us finding more true positives than other security tools.
E
And then there's the guided SAST configuration, which allows us, if we don't have SAST running on our application, to go ahead and just create and start SAST. The way this works is: you just go into Security & Compliance, go to Configuration, and you can pick the image that you want to run for the SAST analyzer. You can pick the paths to exclude and the image tag, and there are different images that you can find through the documentation. It will ask you what stage you want SAST to run in and how many directories you want it to go down. So this is just for someone who's never used SAST or doesn't really know how to configure it: they can find these configuration options, and then it will add them to the .gitlab-ci.yml and configure SAST for you. So, with that...
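The configuration the guided UI generates boils down to a template include plus a few variables. This is a sketch, with variable names taken from GitLab's SAST documentation of that era and placeholder values:

```yaml
# Sketch only: enable SAST via the bundled template; the variables mirror
# the options surfaced by the guided configuration UI.
include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"  # paths to skip
  SAST_ANALYZER_IMAGE_TAG: "2"                   # analyzer image tag
  SEARCH_MAX_DEPTH: "4"                          # directories to descend
```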
C
In GitLab 13.3 we released a new matrix keyword that makes the work to implement a matrix build very simple. If you remember, in the last session I showed you how I needed to use a JSON script with logic to create a long YAML with many jobs. Now I don't need that: one keyword, matrix, does the job for me. And why it matters: it makes for more efficient work, and the YAML is easy to maintain.
C
If you want to add or remove jobs in the matrix, it's very easy: just add more dimensions, environments, or targets to the matrix, and it will create jobs for you at runtime for whatever you add. I will not cover everything because of the time, so feel free to read it later; there is a link to the documentation and a link to an example project that I will show you now. So let's jump to the project.
C
Okay, so this is the YAML file.
C
One job: this is the job name. And I added these two keywords: parallel, which is not new, and matrix, which is new. I added here three dimensions: different architectures, targets, and versions, and when I run the pipeline, it will create the jobs for me at runtime, 24 jobs from these dimensions. So let's see how it works.
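A sketch of what that YAML looks like (job name, variable names, and values are illustrative; three dimensions of 4, 2, and 3 values expand to the 24 jobs mentioned):

```yaml
# Sketch only: one job definition, expanded at runtime into one job per
# combination of the matrix variables.
buildall:
  stage: build
  script:
    - echo "building $ARCH / $TARGET / $VERSION"
  parallel:
    matrix:
      - ARCH: [x86, arm, powerpc, mips]   # 4 architectures
        TARGET: [debug, release]          # x 2 targets
        VERSION: ["1.0", "1.1", "2.0"]    # x 3 versions = 24 jobs
```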
C
And this is the demo. I didn't need to put in too much effort, as you see, to make all of that: just a line of code, or two lines of code, in my YAML file. With that, I see that I have five more minutes, so unless you have any questions, I will go quickly to the next demo. Okay, so I will go back to the slides.
G
Yeah, I was just curious: when you define the matrix, like you had all those CPU types and all that, how do you define that in the settings? For someone like, say, a graphics card designer, they might have to test their software across multiple graphics cards, so they'll have different actual bare-metal machines, but they want to do a matrix build. Is the architecture like a tag for a runner? I'm not sure I understand how that works.
G
C
Here in GitLab, what I do is just set up the YAML file. Then, of course, in the script, if I have an arm target, or a debug target, or a release target, I will provide different parameters in the script that, for example, will provision the right infrastructure for me or will build with the right code. But the only configuration needed to generate those jobs is done here. I hope this answers your question.
G
C
Oh, I believe... no, it did, yeah. So I would think about it: usually you put a tag to pick up a specific machine.
C
And here, I don't know if we can specify a tag; maybe it's a good suggestion for improvement, but let's take it offline. It's a really interesting question, how we can specify a specific machine, but I don't have a quick answer for that. So let's take it offline.
C
We have three minutes to allow William to also give his demo, so I will quickly go to the other thing I was planning to do.
C
Hi, my name is Itzik Gan-Baruch, Technical Marketing Manager, and today I will demo DAG visualization. DAG stands for directed acyclic graph, and it breaks the strict ordering between jobs, making the pipeline more efficient, and it puts dependencies between jobs. So a specific job will not need to wait for all jobs in the previous stage to complete; once only the related jobs complete, it will be able to start.
C
That's a DAG, in very short, and what has been released as an official product feature in 13.3 is the DAG visualization.
F
C
This is the graph that shows you the relationships between the dependent jobs. It will not show you unrelated jobs: if you have jobs that are not identified as dependent jobs, you will not see them in that graph. So I will quickly move on, and I will skip this here; you can read it later. Just let me quickly move to the demo.
C
I have one minute, okay. So this is the YAML, and I have here, for example, an iOS job; we build iOS, and it is in the build stage. Here you can see the dependent jobs: in the test stage we have safari_ios and home_ios, and all of those dependent jobs, as the "needs" keyword says, depend only on the iOS build; they don't depend, for example, on the Android build.
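The dependency he describes is expressed with the `needs` keyword. A sketch (job names mirror the demo; scripts are placeholders):

```yaml
# Sketch only: test jobs start as soon as the build they need finishes,
# without waiting for the rest of the build stage.
build_ios:
  stage: build
  script:
    - echo "build ios"

build_android:
  stage: build
  script:
    - echo "build android"

safari_ios:
  stage: test
  needs: [build_ios]   # starts when build_ios finishes,
  script:              # even if build_android is still running
    - echo "test safari on ios"

home_ios:
  stage: test
  needs: [build_ios]
  script:
    - echo "test home screen on ios"
```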
C
In the pipeline, for the DAG visualization we have a new tab here, and this shows me the dependencies that I just defined in the YAML. The deploy iOS job depends on three jobs; the home iOS job depends only on the iOS build; the iOS test depends on the iOS build; the safari iOS also depends on it; and the monitor-in-production job depends on three jobs.
C
So let's go back to the pipeline, and we will see that we have four jobs in the build stage; once the iOS one is completed, you will see the test jobs start. I will give it another 30 seconds... okay, as you see, the iOS job completed, which allowed the test jobs that depend on it to start, but the Mac and Windows jobs are still running. This makes the pipeline more efficient: the overall pipeline execution time will be shorter, because we are running in parallel. These jobs don't need to wait for the long jobs to finish; they can start in parallel. So this is it; as always, we added features that make the work more efficient and the pipeline more efficient. So with that, I'll turn it over to William.
H
Hello everyone, my name is William, and today I will test a theory I use regularly, called the 3-30-3: three minutes, 30 minutes, three hours. I'm going to explain in three minutes what I wanted to explain; let's imagine we are holding an elevator. So I will talk about the GitLab Workflow extension for Visual Studio Code. Why is this important? Let's jump to this part.
H
You'll know that GitLab is a single application for DevOps, but most of the work happens in GitLab, while the developers live in their local editors. As a single DevOps application it offers a lot of capabilities, but the developers, as a main audience, do most of their work in local editors. So this Visual Studio Code extension is one of the first steps to bring many of the capabilities from GitLab to the local editor, where the developers work.
H
So if you are ever with customers and you find an objection like, "Yes, all the capabilities you offer are very cool, but I use Visual Studio Code, or I use a local editor; how can I take advantage of this?", this is the answer. This is one of the first steps, aligned with GitLab's strategy, in which we want to bring capabilities from GitLab to the local editor.
H
You just have to open the command palette, and when you type "GitLab" you will see all of the possible commands that you can execute from here, from this local editor. So that is the three-minute explanation; in the video that I have shared with you, I have different examples of what you can do using Visual Studio Code and GitLab, leveraging this extension.
I
Hey, I'm Tye Davis, and I am going to be talking about multiple value streams today, which came out in 13.3. The value streams that we have inside GitLab right now are currently customizable: you can customize different stages that pertain specifically to your team and that you believe really create the value, that is, your value stream for your specific team.
I
Now, previously only one value stream was allowed to be created per project or group, and what this new feature does is allow you to create multiple value streams: maybe one specific to a project management team, one specific to an engineering team, or one specific to, not a manager, but a director or VP who wants oversight of the complete end-to-end, idea-to-production flow. It's pretty straightforward, what you can do with this value stream creation.
I
You just create different value streams, with new or existing team names, and you can toggle back and forth between them and customize them according to what you see fit for your team.
F
Okay, all right, thank you, everyone. Can you hear me? Yeah.
D
B
I believe that it was... I think I saw some traffic in a discussion channel implying that it was, but yeah, we don't know for sure.
C
And regarding the question about the matrix build: it's a nice question, and I will send the answer in the customer success channel, just for you.
B
Yeah, this might be a lead into that, or help with that a little bit. When you define the parameters of the matrix, each of those values gets passed into the job that gets run, right? So if you have, like, a 2x2 matrix, you have four jobs running and the parameters get cycled through. What you send in as parameters is available as variables, and you can use those to set your architecture and then specify the tag that you're going to look for, architecturally, on your runner.
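A sketch of that point: each matrix combination arrives in its generated job as ordinary CI variables, so the job can act on them (job and variable names are illustrative; whether `tags:` itself can expand such variables depends on the GitLab version):

```yaml
# Sketch only: matrix values are injected as CI variables into each
# generated job, so the script can branch on them.
test_hardware:
  stage: test
  parallel:
    matrix:
      - ARCH: [x86, arm]
        GPU: [vendor_a, vendor_b]   # 2 x 2 = 4 jobs
  script:
    - echo "testing on $ARCH with $GPU"
```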