From YouTube: GitLab 16.4 Kickoff - Verify:Runner
Description
Kickoff video for GitLab Runner Core & Fleet for GitLab 16.4.
https://about.gitlab.com/direction/verify/runner_core/
https://about.gitlab.com/direction/verify/runner_fleet/
A: Hey everyone, this is Gina Doyle and Darren Eastman, and today we're going to talk about our 16.4 milestone kickoff for the Runner group. I'm going to take it first and talk about Runner Fleet, so let me start sharing my screen.
A: So just as a reminder, Runner Fleet focuses on users who are managing large numbers of runners, which we refer to as a fleet, and on making it easier to manage them and take action based on whatever is happening with the fleet. We have some features that connect to what we were working on last milestone around the fleet dashboard MVC, a feature that provides higher visibility into your fleet of runners.
A: Right now we're doing a lot of work to gather all the data needed to showcase that in the dashboard, so most of the issues we're working on relate to that; I'm not going to go into too much depth there. Then, switching over to what we're working on for UX in that area: one of the big problems we're focusing on right now is providing more visibility into runner cost.
A: That includes things like, when you're dealing with SaaS runners, the compute minutes and how much those cost you. So we're doing problem validation for that, focusing on self-managed users who bring their own fleets of runners, as well as those who use SaaS and bring their own fleet, specifically using cloud platforms to create those runners. While we're doing that, we're also going to take a step forward and start presenting some information on how much usage you have in your fleet.
A: We're going to start displaying job duration, or what we're referring to as runner minutes — really just the minutes from when the runner starts a job to when it ends — so that you have a better idea of runner usage across your fleet, and even of the projects that are using the runners most often. That's something we're going to move forward with as we do the problem validation on the side. And then finally, something we're going to concentrate on is exposing runner configurations in the runners page.
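The "runner minutes" metric described here is simply wall-clock job duration. Purely as an illustration (this is not GitLab's implementation, and the record field names are assumptions), aggregating it per project from job records with start/finish timestamps could look like:

```python
from datetime import datetime


def runner_minutes_by_project(jobs):
    """Sum per-project job duration ("runner minutes").

    `jobs` is a list of dicts with ISO-8601 `started_at` / `finished_at`
    timestamps and a `project` name -- hypothetical field names.
    """
    totals = {}
    for job in jobs:
        started = datetime.fromisoformat(job["started_at"])
        finished = datetime.fromisoformat(job["finished_at"])
        minutes = (finished - started).total_seconds() / 60
        totals[job["project"]] = totals.get(job["project"], 0.0) + minutes
    return totals


jobs = [
    {"project": "web", "started_at": "2023-09-01T10:00:00", "finished_at": "2023-09-01T10:12:00"},
    {"project": "web", "started_at": "2023-09-01T11:00:00", "finished_at": "2023-09-01T11:03:00"},
    {"project": "api", "started_at": "2023-09-01T10:30:00", "finished_at": "2023-09-01T10:35:00"},
]
print(runner_minutes_by_project(jobs))  # {'web': 15.0, 'api': 5.0}
```

The same roll-up also answers the "which projects use the runners most often" question Gina mentions: sort the resulting totals in descending order.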
A: If you've used our new runner creation workflow, you'll know that runner configurations definitely play a much larger role than they did before, because we now group runners in the UI based on them. So we want to make the configuration more of a first-class citizen, make it easier to understand why runners are grouped, and give you more access to the configuration, since it plays such a large role when you create runners. That's it for UX, so I'm going to pass it to Darren to talk about Runner Core.
B: Thanks a bunch, Gina. Hey everyone, I'm going to give you a quick overview of what's happening in Runner Core in terms of new features in 16.4, and a quick touch on capabilities we're building in 16.4 that will enable us to release features you can consume in 16.5. Hopefully I'm sharing the right screen, because sometimes you share the wrong one. This is just to level-set everyone.
B: This is our single source of truth, the direction page, if you want to call it that. I always recommend starting here to understand the high-level view of the big things we're working on this fiscal year, as well as what we're thinking about for the next fiscal year.
B: You start here to get the high-level view, then you can go a little deeper into our project repos and issues and look at more detailed stuff, but that's not the focus of the conversation today. In terms of our one-year plan, we've got some big, meaty projects defined at the product leadership level, and under the "world-class DevSecOps experience" theme are the big things the Runner Core team has been working on for the entire year thus far.
B: The first thing is the next runner token architecture. Most of the features and capabilities in that space are done; we're in feedback-gathering mode, collecting feedback continuously from customers. Gina touched on one item that's on the Runner Fleet side of the house, where we have to improve the configuration. Since we're in feedback-gathering mode, no new significant features or capabilities are planned, because we already did the work for this in the early part of the year.
B
But
the
next
thing
I
want
to
call
your
attention
to
is
next
Runner
Auto
scaling.
So
for
folks,
who
kind
of
need
a
quick
refresher,
an
extra
Auto
scaling
is
the
feature
set
that
we
have
developed
here
at
gitlab.
That
is
the
replacement
for
our
Docker
machine
based
autoscaler.
B: That's the autoscaler a lot of our customers use to autoscale GitLab Runners on public cloud compute and, more specifically, on public cloud virtual machines or instances. That autoscaling solution is separate from our Kubernetes autoscaling solution, so we have two: if you're a GitLab customer looking for an autoscaling solution, you've got multiple features and capabilities to choose from — one is autoscaling on virtual machines, and one is Kubernetes. So, to come back to the roadmap.
B: Next Runner autoscaling is really great new technology developed by our Runner Core team: a whole new framework for autoscaling runners on public cloud instances. We're releasing and supporting plugins for the major public cloud platforms, and the plugin framework will also enable you to develop your own plugins for other cloud platforms that are not supported in our default repository. So, for 16.4, let me switch over now to the integration plan.
B
We're
working
on
some
prerequisite
work
to
get
the
ec2,
the
AWS
ec2
plugin,
ready
for
business,
currently
an
experimental
phase,
and
so
is
diving
into
the
run.
A
call
plan
here
for
a
moment
when
you
come
to
the
we're
going
to
qualify
on
the
features
you'll
see,
the
section
called
run,
auto
scaling
and
then
the
AWS
plugin
we're
working
on
on
the
prerequisites
task
for
transitioning
to
Beta.
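For context, the new autoscaling framework is configured through the runner's config.toml rather than Docker Machine options. The snippet below is a minimal sketch of the plugin-based setup with the experimental AWS plugin; the Auto Scaling group name is made up, and the exact plugin and option names should be checked against the current GitLab Runner docs before use:

```toml
concurrent = 10

[[runners]]
  name = "aws-autoscaler-sketch"
  url = "https://gitlab.com"
  token = "<runner authentication token>"
  executor = "docker-autoscaler"

  [runners.docker]
    image = "alpine:latest"

  # Next Runner autoscaling: a "fleeting" plugin provisions the cloud VMs.
  [runners.autoscaler]
    plugin = "fleeting-plugin-aws"   # experimental AWS EC2 plugin
    capacity_per_instance = 1        # one job per VM
    max_instances = 10

    [runners.autoscaler.plugin_config]
      name = "my-runner-asg"         # hypothetical AWS Auto Scaling group name

    # Keep a couple of warm instances ready for incoming jobs.
    [[runners.autoscaler.policy]]
      idle_count = 2
      idle_time = "20m0s"
```

The key design difference from Docker Machine is that VM provisioning lives in a separate plugin process, which is what makes the per-cloud plugins (and custom ones) possible.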
B
So
we
our
goal,
is
to
get
these
tests
John
16
4,
so
that
in
16.5
we
will
announce
the
plugin
for
AWS
as
being
ready
for
beta
and
everyone,
maybe
one
or
two
releases
and
then
we'll
go
to
GA,
the
other.
The
only
other
net
new
feature
I
want
to
call
out
in
one
Accord.
That's
not
specific
to
this
Auto
scaling
feature
set
of
capabilities
bucket,
as
here
it's
this
kind
of
in
the
weeds
kind
of
very
practically
named
thing
called
support
for
passing
a
custom
Community
sports
bag.
B
This
is
in
our
kubernetes
executive,
so
I
guess
I
mentioned
before
we've
got
kubernetes.
One
has
one
option
for
you
and
we
have
Auto
scaling
and
virtual
machines
as
the
other
options,
and
so
what
this
feature
is
which
it
sounds
like
it's
super
uninteresting.
It's
like.
Why
do
I
need
this
for
customers
who
have
invested
a
lot
of
time.
We
have
a
number
of
customers
that
you
rely
on
kubernetes
for
scaling,
millions
and
millions
of
CI
jobs
across
their
self-managed
Runner
environments.
B
What
this
does
is
allows
to
pass
a
custom
part
stack
that
simplifies
configuring,
the
cube
in
Israel
and
manager
and
the
workers
we
released
an
alpha
for
this
picture,
a
few
releases
back
and
now.
The
next
goal
is
to
enable
the
feature
flag
and
and
actually
formally
call
the
spot
set
feature
beta,
so
that
folks,
that
are
interested
in
testing
this
out
and
you
know
and
giving
it
a
shot,
can
go
ahead
and
do
so,
knowing
that
we
are
full
on
planning
to
go
ga
in
a
couple
of
releases
after
16
or
four.
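As a concrete sketch of what the pod spec overwrite looks like: in recent runner versions the Kubernetes executor accepts a `pod_spec` section in config.toml (behind a feature flag while the capability is alpha/beta). The patch below, which raises the build container's memory limit, is illustrative only; verify the exact keys against the GitLab Runner Kubernetes executor docs:

```toml
[[runners]]
  executor = "kubernetes"

  [runners.kubernetes]
    image = "alpine:latest"

    # Merge a custom fragment into the build pod the runner generates.
    [[runners.kubernetes.pod_spec]]
      name = "raise-build-memory"   # arbitrary label for this patch
      patch_type = "strategic"      # strategic merge patch
      patch = '''
        containers:
        - name: build
          resources:
            limits:
              memory: "2Gi"
      '''
```

This is the "simplifies configuring the runner manager and the workers" point: instead of forking the executor's pod template, administrators patch only the fields they need.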
B: Of course, the plans for GA may change based on the feedback we get from more customers testing the beta, but we're super excited about this Kubernetes pod spec feature.
B: We have other capabilities planned, or being thought about, for FY25, where we hope to enable the developer persona to specify pod spec configuration changes in their GitLab CI pipeline file. That makes for a really interesting value proposition in terms of folks being able to run workflows on the target platforms they need on Kubernetes — running in multi-architecture environments, GPU-enabled versus not, and so on. So it's a really powerful feature.
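Since per-job pod spec in the pipeline file is only an FY25 aspiration at this point, no real syntax exists yet. Purely as a thought experiment, a GPU-targeting job might one day look something like the sketch below; the `pod_spec` job keyword and its placement are entirely hypothetical:

```yaml
# Hypothetical .gitlab-ci.yml -- the `pod_spec` job keyword does not exist today.
train-model:
  image: pytorch/pytorch
  script:
    - python train.py
  pod_spec:                 # imagined per-job pod spec override
    nodeSelector:
      accelerator: nvidia-gpu
```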
B: We will probably be creating a blog post and an additional video around the custom Kubernetes pod spec. So, for Runner Core, that's what we've got: the custom Kubernetes pod spec moving to beta in 16.4, and then the pre-work for the AWS autoscaling plugin, with that coming in 16.5 as beta. Back to you, Gina.
A: Thanks, Darren. If you have any feedback for us, as per usual, please reach out, either through issues or through either Darren's or my email. We appreciate all the feedback we've gotten so far about the numerous features we've been releasing. Thanks for watching.