From YouTube: CMS Research for Continuous Integration
Hey everyone, I'm Veethika, and I'm the Senior Product Designer for Verify:CI at GitLab. In this video I want to share the outcome of a recent effort we conducted, a Category Maturity Scorecard (CMS) study, to evaluate the maturity level of CI.
All right, so let's get started. We started off by selecting two of all the listed jobs to be done (JTBD) that we have in the handbook for the Continuous Integration group. The first was "Trigger a pipeline," whose statement goes: "When integrating my code into a repository, I want to initiate the CI pipeline, so I can maintain code quality and security, with the option to also deploy automatically." The second was "Examine the health of a running pipeline," whose statement goes: "When joining a pipeline for a repository, I want to monitor the performance of jobs or tasks in the pipeline, so I can be aware of delays and respond to failures."
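As a concrete illustration of that first JTBD (my sketch, not part of the study materials): a GitLab pipeline normally starts automatically on push, based on the project's .gitlab-ci.yml, but one can also be initiated explicitly through the REST API. A minimal Python sketch, assuming a hypothetical project ID and a personal access token in the GITLAB_TOKEN environment variable:

```python
import os

import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"  # hypothetical project ID, not from the study
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

# Explicitly initiate a CI pipeline for a ref, mirroring the
# "trigger a pipeline" JTBD; a push to the repository would
# normally do this automatically via .gitlab-ci.yml.
resp = requests.post(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/pipeline",
    headers=HEADERS,
    params={"ref": "main"},
)
resp.raise_for_status()
pipeline = resp.json()
print(f"Started pipeline {pipeline['id']} ({pipeline['status']})")
```

Either way the pipeline is started, the run is what delivers the quality and security checks the JTBD statement describes.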
So, after selecting the jobs to be done, we translated them into scenarios, and then we put each of those scenarios into a GitLab project. These were the two scenarios, or projects, that we had ready for the research activity.
Okay, now talking about the participants: in total we had seven participants, comprising two software engineers and five developers who identified themselves as DevOps engineers. That means they have at least taken up the responsibilities of a DevOps engineer at their current job, if not the title.
All right, now getting to the outcome. For the first JTBD, "Trigger a pipeline," participants were provided with the following scenario to complete. The scenario asked them to play a software developer on a large team that uses a monorepo for its project, and to assume that a CI/CD pipeline had already been created for the project to streamline the workflow for all the developers contributing to it. They were then provided with an additional feature branch of the project, and they had to make sure the changes from that feature branch were included in the main branch, all while leveraging the pipeline that had already been created for them.
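For context (again an illustration of mine, not something participants were asked to script): the usual way to get a feature branch into main while leveraging the existing pipeline is a merge request, which runs the pipeline against the change before it merges. A hedged sketch using the GitLab REST API, with a hypothetical project ID and branch name:

```python
import os

import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"           # hypothetical project ID
FEATURE_BRANCH = "feature-x"   # hypothetical branch name
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

# Open a merge request from the feature branch into main; the
# project's existing CI pipeline runs against the change before merge.
resp = requests.post(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/merge_requests",
    headers=HEADERS,
    json={
        "source_branch": FEATURE_BRANCH,
        "target_branch": "main",
        "title": "Include feature branch changes in main",
        "remove_source_branch": True,
    },
)
resp.raise_for_status()
print("Merge request created:", resp.json()["web_url"])
```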
Okay, now talking about the score that we achieved for this: the overall CMS score for this particular scenario was pretty high, at 4.5. That means participants found the task that was provided to them pretty convenient and easy to perform. Looking at how the different participants responded to the three sets of questions we had put forward: the first question was around the ease of performing the task.
So, how easy or not easy was it for them to handle the task that was given to them? As we can see, the responses stayed in the zone of four to five. The next question was around how they would rate the overall user experience of the scenario. And the last one was UX Lite: through this question we asked for a more holistic view of the scenario we had provided, i.e., whether or not they felt the scenario had all the functionality in place that they required while performing their tasks. Overall, the responses stayed in the zone of four to five. Okay, so that was the first JTBD; now moving on to the next one.
In this one, participants had to play a DevOps engineer who, from time to time, has to assist developers on the team with troubleshooting broken pipelines. For this particular task they were provided with a project ID, so they could go straight in, look at what was wrong with the pipeline, and get on with fixing it.
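To make that task concrete (again my sketch rather than the study materials): given only a project ID, the health of the latest pipeline can be inspected through the REST API, with job statuses, durations, and logs as the starting point for a diagnosis. The project ID and token handling below are assumptions:

```python
import os

import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"  # hypothetical project ID
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

# Fetch the most recent pipeline for the project.
latest = requests.get(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/pipelines",
    headers=HEADERS,
    params={"per_page": 1},
).json()[0]
print(f"Pipeline {latest['id']} is {latest['status']}")

# List its jobs, surfacing status and duration for each one.
jobs = requests.get(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/pipelines/{latest['id']}/jobs",
    headers=HEADERS,
).json()
for job in jobs:
    print(f"- {job['name']}: {job['status']} ({job['duration']}s)")
    if job["status"] == "failed":
        # Pull the tail of the job log to start diagnosing the failure.
        trace = requests.get(
            f"{GITLAB_URL}/projects/{PROJECT_ID}/jobs/{job['id']}/trace",
            headers=HEADERS,
        )
        print(trace.text[-500:])
```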
And again, as we can see, the score for this particular JTBD was also high, at 4.4. Here is a chart showing how the different participants responded to the three questions we asked them. When it comes to the ease of performing the task, almost everyone found it pretty convenient and easy to perform.
Then, talking about the overall user experience, five out of five participants rated it a four, which means maybe there's a little room for some work to be done there; that's good to know. And finally, for UX Lite, at least two of the five participants felt very strongly about the functionality provided in the scenario for them to perform the task. Okay, so these are the overall scores that we have.
The first JTBD scored 4.5 and the other one 4.4, and when we look at how we define maturity at GitLab, we can see that when the CMS score is at least 3.95 for a JTBD, we can consider it lovable. But is that what we are going to do for CI?
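For reference, here is a small sketch of that threshold rule; the 3.95 cutoff and the two scores come from the study as described above, while the helper name is illustrative:

```python
LOVABLE_THRESHOLD = 3.95  # CMS score at which a JTBD can be considered lovable

def is_lovable(cms_score: float) -> bool:
    """Apply GitLab's maturity cutoff to an overall CMS score."""
    return cms_score >= LOVABLE_THRESHOLD

# Check the two scores reported in this video against the cutoff.
for jtbd, score in [("Trigger a pipeline", 4.5),
                    ("Examine the health of a running pipeline", 4.4)]:
    verdict = "lovable range" if is_lovable(score) else "below cutoff"
    print(f"{jtbd}: {score} -> {verdict}")
```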
So, even though the overall CMS scores for the chosen jobs to be done exceed that threshold, averaging about 4.4 (I say "chosen" because we have definitely not looked at all of the JTBD for CI; we've just picked two out of everything that is there), the features and functionality that come under these JTBD represent only a very small fraction of the overall value that we intend to provide our users through the GitLab CI feature set.
Now, quoting from the blog that you'll find on Unfiltered, written by Jackie, I have included some portions from it just to highlight that we already know GitLab is loved by developers, and this research activity kind of proved that right.
But there are other areas in Verify that we also need to consider, and those are the places where we need to improve to really move GitLab CI toward lovable, considering the listed factors. I've also included two other insights from this article.
The first says that GitLab has some challenges with the performance of minimal viable changes, which are expected to work at a higher level of finish; if you want to read further, you can go to the blog. The other insight is that GitLab's visibility into jobs at scale is painful, which means we are maybe not catering well enough to this persona, Dakota, as documented on the GitLab roles and personas handbook page.
So, considering these things, we have decided that while we really appreciate the results from the activity, and we are very excited about having scored so high here, it should be safe to say that GitLab CI is still at Complete, and we have some work to do before we can move it to the Lovable state.
This was just my thought, and I would love for others to chime in here as well. In case you want to know more about what we are trying to improve next, to take the stage group closer to the next stage in maturity,
you can follow this link, "What's next and why," which will take you to a handbook page that gives a glimpse of what's upcoming: what's the next thing we want to work on, and why we have chosen that area. All right, so these were the findings from our CMS research for CI. Feel free to leave your thoughts, and ping me on the issue or on Slack if you have any comments about the whole process. Thank you.