From YouTube: GitLab 15.6 Kickoff - Verify:Runner Group
A: Hi everyone, I'm Gina Doyle, the product designer for Runner, and I'm here today with Darren, the product manager for GitLab Runner. We'll be talking about our plan for milestone 15.6, and we're going to start off with Runner Fleet. If you didn't already know, we have a few categories in the Runner group. Darren will be talking about the other two categories, Runner SaaS and Runner Core, and I'll be talking about Runner Fleet today. As a reminder, Runner Fleet is about managing a fleet of runners.
However many runners you have, whether at the group level, the admin level, or even the project level, our vision is to make it super easy to manage all of those runners at once, and to enable your team to work as expected and work faster. So I'll start sharing my screen and go over some of the issues that we'll be looking at.
On the development side, we're going to be adding bulk delete into the group view, for when you have to delete multiple runners at once. This is something that many users have asked us to add, so you no longer have to delete runners individually, which is super exciting. We just added this to the admin view, so our next iteration is the group view. We're also going to be improving the assigned projects section when you're editing a project runner.
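As a rough illustration of the one-at-a-time pattern that bulk delete replaces, the Python sketch below loops over runner records and builds one REST delete path per runner. The `DELETE /api/v4/runners/:id` endpoint is part of GitLab's REST API, but the helper name and the offline filter are purely illustrative, not GitLab's actual implementation.

```python
# Sketch: deleting runners one at a time via the REST API.
# Runner records as returned by e.g. GET /api/v4/runners (only the
# fields this sketch uses).
runners = [
    {"id": 101, "description": "old-docker-runner", "online": False},
    {"id": 102, "description": "shared-runner", "online": True},
    {"id": 103, "description": "stale-k8s-runner", "online": False},
]

def delete_paths_for_offline(runners):
    """Build one DELETE path per offline runner (illustrative filter)."""
    return [f"/api/v4/runners/{r['id']}" for r in runners if not r["online"]]

paths = delete_paths_for_offline(runners)
# Each path would then be issued as an authenticated DELETE request,
# e.g. requests.delete(base_url + path, headers={"PRIVATE-TOKEN": token}).
print(paths)  # ['/api/v4/runners/101', '/api/v4/runners/103']
```

With bulk delete in the UI, this per-runner loop becomes a single select-and-delete action instead.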
Some exciting things that we have for new features: our MVC aims to give more insight into the runner-queue type of problem that we were seeing, so we're adding a queued time and a duration time for the jobs that a runner has run. This list is only available in the admin area right now, although we will be working on making it available for groups. For each of the jobs that the runner has run, we'll provide a duration time and a queued time, so that you can see at a glance how the queue time has been changing across the past few jobs you're looking at. Then, on the UX side of things for Runner Fleet, we're looking at making it more clear, when you're enabling stale group runner cleanup, which runners that setting will impact.
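For context, a job's queued time is just the gap between when the job was created and when a runner picked it up, and its duration is the gap between start and finish. A minimal Python sketch; the timestamp field names mirror the ones GitLab's jobs API returns, but treat the functions as an illustration, not the actual implementation:

```python
from datetime import datetime

def queued_seconds(job):
    """Seconds the job waited between creation and pickup by a runner."""
    created = datetime.fromisoformat(job["created_at"])
    started = datetime.fromisoformat(job["started_at"])
    return (started - created).total_seconds()

def duration_seconds(job):
    """Seconds the job spent actually running."""
    started = datetime.fromisoformat(job["started_at"])
    finished = datetime.fromisoformat(job["finished_at"])
    return (finished - started).total_seconds()

job = {
    "created_at": "2022-10-20T12:00:00",
    "started_at": "2022-10-20T12:05:30",
    "finished_at": "2022-10-20T12:15:30",
}
print(queued_seconds(job))    # 330.0 (5.5 minutes waiting in the queue)
print(duration_seconds(job))  # 600.0 (10 minutes running)
```

Surfacing both numbers side by side is what lets you spot, at a glance, whether slowness is coming from the queue or from the job itself.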
We'll just make that more clear. Then, the next two issues came from some recent feedback that we've gotten. One of them is that when you run a job as a developer in a project and it's pending, you sometimes don't understand why it's pending for so long. It could be queuing for up to 15 minutes, or whatever it happens to be, and at that point we'd like to provide some more information on this pending page: an average wait time for how long you might have to wait, the runners available to take the job on, and then potentially other metrics, like the number of jobs ahead of this one, but we're still trying to work that metric out.
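The "average wait time" mentioned here could be as simple as averaging the queued times of recent jobs. The exact definition of the metric hasn't been settled, so the Python sketch below is just one plausible reading:

```python
def average_wait_seconds(recent_queued_times):
    """Average queue time, in seconds, over a window of recent jobs.

    recent_queued_times: queued times (seconds) of recently picked-up
    jobs; returns None when there is no recent data to estimate from.
    """
    if not recent_queued_times:
        return None
    return sum(recent_queued_times) / len(recent_queued_times)

# Queue times (seconds) of, say, the last five jobs a runner picked up:
print(average_wait_seconds([45, 60, 30, 120, 45]))  # 60.0
```

A pending-job page could then show something like "jobs have recently waited about 60 seconds before starting" alongside the count of available runners.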
Lastly, another thing that we've heard from a bunch of users is that you're not sure whether a runner is running a job right now or not. One of the things we're talking about doing is making that more clear by saying that it's either running or idle, and then linking to the list of jobs that I was showing earlier, so that you know exactly what job, or jobs, it's running at that moment. And that is it on the Fleet side.
B: Thanks a bunch. You know, it's going to be such a hard act to follow; that's super cool stuff coming on the Fleet side, that's awesome. So, for Runner Core: as I mentioned in the last couple of releases, a key focus and a major prioritization for us on the Runner Core side in each iteration is, first and foremost, any security issues or security vulnerabilities. When we look at our prioritization, when we look at allocating resources, the first things we want to work on are those security issues and security vulnerabilities.
You can see here in 15.6 we've got a couple of things on deck, so those are the things that we'll actually be focused on first, making sure that those hopefully get over the finish line and are resolved in 15.6. And what I mean by "hopefully" is that, depending on the complexity of some of this work, it may impact our plans for delivering other features that we hope to deliver in 15.6; that's just the reality of it. We plan aggressively.
We have very aggressive goals, but as we get into the iteration, if the work effort on some of these security issues takes longer than we expect, that will very clearly impact our ability to deliver some of these new features. So the first thing is that we've got a couple of security issues on deck. Then there's the rest of the listing in this iteration planning issue, which is available to everyone.
You can also see we have a number of severity-2 and priority-2 bugs that we hope to start getting to in 15.6. We do in fact have a backlog of severity-2 bugs, and one of our goals, as we head into the back third of this year and into next year, is to whittle that list down significantly. We want to get on top of that backlog.
Next up is a configurable setting. This is actually a new feature request that came in from some customers; we weren't aware of this problem, and we're still doing a bit of investigation to find out when it was introduced and the different parameters for when it may cause a problem for customers. But what we've heard from some large customers that are managing their own GitLab instances, and by extension, of course, managing their own runners and runner fleets, is this: during maintenance windows or downtime for the GitLab instance, whether they're doing an upgrade or a database migration, if the instance is offline for some time, the runners are taking a long time to reconnect and start processing jobs once the instance is back online. So we actually have a merge request that's been submitted, and we hope to add some configuration to the runners to allow for solving this particular problem.
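The slow-reconnect behavior described here is typical of capped exponential backoff: after repeated failed requests during an outage, a client's retry interval grows until it hits a ceiling, so a runner can end up sleeping for a long stretch after the instance comes back. GitLab Runner itself is written in Go; this Python sketch only illustrates the general shape of such a schedule and a hypothetical configurable cap — none of the names here are real runner settings:

```python
def backoff_schedule(attempts, base=1.0, factor=2.0, max_interval=64.0):
    """Capped exponential backoff: 1, 2, 4, ... seconds, capped at max_interval.

    max_interval is a hypothetical knob, not a real GitLab Runner option;
    lowering it bounds how long a client waits before retrying after an
    outage, at the cost of more frequent requests while the server is down.
    """
    return [min(base * factor**i, max_interval) for i in range(attempts)]

# With the default cap, later retries sit at the 64 s ceiling:
print(backoff_schedule(8))                     # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 64.0]
# With a smaller cap, the client reconnects sooner once the server returns:
print(backoff_schedule(8, max_interval=10.0))  # [1.0, 2.0, 4.0, 8.0, 10.0, 10.0, 10.0, 10.0]
```

Making the cap (or the whole schedule) configurable is one plausible shape for the fix the transcript describes; the actual merge request may take a different approach.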
So these are two incremental new features that we're adding in 15.6, but those two incremental additions are part of a broader, long-term plan. The end state is this: once we get this new architecture implemented, we will have a different, much more secure mechanism for how you register runners in GitLab, and these two pieces of work, these two new features, are small incremental steps towards that end state. We're targeting, I think, approximately 16.0 at this point for having everything all wrapped up.
We have a lot more details coming in terms of blog posts, documentation updates, and videos around what that means, but this is a very critical piece for us; this is the next evolution of the runner registration system, so look for some more information on that. So that's what's happening in Runner Core and Runner SaaS in 15.6.
We are not shipping any new features or capabilities in 15.6 itself — we shipped some new runner types for the GitLab SaaS Linux runners — but we are working on a number of very interesting foundational pieces in terms of our new runner autoscaling technology, which will enable us to, hopefully very soon, launch our macOS runners in limited availability, and specifically on Apple silicon, the M1 chips.
That foundational work for the next-gen autoscaling architecture not only enables the macOS runners, but it will also provide customers that self-manage GitLab Runner with new autoscaling capabilities for managing runners on Amazon Web Services virtual machine instance groups, or autoscaling groups, and on Google Cloud Platform. So lots of interesting, cool things are coming with Runner SaaS and with that next-gen autoscaling technology.
A: Sure, yeah. I did want to just thank everyone, because we've been getting a ton of feedback on an issue that we opened for GitLab, mainly brought about through the admin area. We've seen a ton of traction from users giving us feedback on the new design that we added there, but also around other pains that you're facing, and I did want to say thank you in case you're listening.