From YouTube: GitLab 16.1 Kickoff - Verify:Runner
Description
Kickoff video for Runner Fleet, SaaS, Core 16.1.
https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29287
https://about.gitlab.com/direction/ve...
https://about.gitlab.com/direction/ve...
https://about.gitlab.com/direction/ve...
A: Hi everyone, this is the Runner group. We're going to talk about our 16.1 milestone plan today. I'm here with Darren and Gabriel. We're going to start off the conversation by talking about Runner Fleet, and I will share my screen and show you what's on our plan for this milestone.
A
So
we
have
a
lot
of
features
that
have
to
still
connecting,
with
the
new
token
architecture
that
we've
rolled
out
in
160.
So
we
have
the
the
new
registration
process
now,
so
you
can
create
a
new
Runner
using
the
new
token
architecture,
but
we
also
have
some
follow-ups
coming
out
of
that.
A: Some are just small updates. One of those is making the tags and "run untagged jobs" fields required when you're creating a runner, because those are really the only two pieces of the runner creation flow that are required, so we'll be improving that so that you get fewer errors while you try to create a runner. Another big thing we're working on, also related to the token architecture, is the new runner details page supporting groups of runners that share the same token. Right now, in the UI, you only see the most recent runner that was created with that configuration, but with this design you'll be able to see all of the runners that are using that same configuration when you click into the runner details. So now you have more visibility into that; we've already heard some feedback about this being confusing.
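The required-fields change boils down to one rule: a runner needs either at least one tag or permission to run untagged jobs, otherwise it can never match any job. A minimal sketch of that validation (the function name is hypothetical, not GitLab's actual code):

```python
def validate_runner_form(tag_list: list[str], run_untagged: bool) -> list[str]:
    """Return validation errors for the runner creation form (sketch)."""
    errors = []
    if not tag_list and not run_untagged:
        # A runner with no tags that also refuses untagged jobs can
        # never pick up any job, so one of the two must be provided.
        errors.append("add at least one tag or allow untagged jobs")
    return errors

print(validate_runner_form([], False))          # ['add at least one tag or allow untagged jobs']
print(validate_runner_form(["docker"], False))  # []
```

Surfacing this check up front in the creation form is what turns the after-the-fact submission errors described here into inline field validation.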
A: So hopefully this helps with that problem. Then, on the UX side of things, we finished up our solution validation for the Runner Fleet dashboard MVC, and it turned out to be very positive overall. We were able to make some updates to the dashboard design, and we are now taking in feedback to make sure that this iteration is for sure what we need for the first MVC. But if you have more feedback, we'd still be open to hearing it.
A: You can leave your feedback in the new Runner Fleet dashboard issue, which is linked in our iteration plan. Just so you have a summary of where we're at right now, we did find that the three most important features are, first, your fleet health: whether you have any system failures, and which runners are online and offline. Second, the busiest runners you have: we would start by showing you just the top five, giving you a sense of job load measured as jobs in progress over however many jobs the runner can run in parallel. And finally, the wait time to pick up a job. We know that this needs to be more configurable in the future, but right now we'll just start by giving you that wait time across all of your instance runners, all of your group runners, or all of your project runners. So that is it for Runner Fleet, and I'll pass it to Gabriel.
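The job-load metric described here (jobs in progress over how many jobs a runner can run in parallel) and the top-five busiest list can be sketched as follows; the field names are illustrative, not the dashboard's actual schema:

```python
from dataclasses import dataclass

@dataclass
class RunnerStatus:
    name: str
    online: bool
    jobs_in_progress: int
    parallel_limit: int  # how many jobs the runner can run concurrently

    @property
    def job_load(self) -> float:
        # "jobs in progress over however many jobs they can run in parallel"
        return self.jobs_in_progress / self.parallel_limit

def busiest(runners: list[RunnerStatus], top_n: int = 5) -> list[RunnerStatus]:
    """Top-N runners by job load, as proposed for the dashboard MVC."""
    return sorted(runners, key=lambda r: r.job_load, reverse=True)[:top_n]

fleet = [
    RunnerStatus("linux-large-1", True, 7, 8),
    RunnerStatus("linux-large-2", True, 4, 8),
    RunnerStatus("macos-m1-1", False, 0, 2),
]
print([r.name for r in busiest(fleet)])
# ['linux-large-1', 'linux-large-2', 'macos-m1-1']
```

A load near 1.0 means a runner is saturated and jobs will start queuing, which is also why the wait-time metric pairs naturally with this one.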
B: Thank you so much. The future is so exciting for Runner Fleet, and I'm really stoked to be part of this too. Let me share my screen as well and give some insight into what's happening on the Runner SaaS side. As you might have seen, 16.0 was a super exciting release: we finally released our Apple silicon M1 Mac runners for you to use in beta, we also released GPU runners, and we upsized our current SaaS offering on Linux.
B
So
a
lot
happened
and
now
to
follow
up
our
Priority
One
Is,
to
support
everybody
in
the
migration
of
Mac,
OS
and
I
run
out
any
issues
that
we
may
find
if
you're
already
part
of
the
Mac
OS
program,
you
should
have
received
a
message
looking
like
this,
that
contains
all
of
the
information
you
need
to
to
transition.
One
thing
I
want
to
note,
because
it's
also
important
for
the
next
release.
There's
two
things
in
specific
on
this
Mac
OS
topic
that
we
want
to
tackle
in
the
opportunities.
B
First,
one
is
support
your
migrating.
So
if
you
find
any
problems
using
the
mic
Runners,
there
is
an
issue
that
is
linked
here.
In
this
message,
where
you
can
just
report
any
issues
that
you
have
so
that's
super
valuable
for
us
to
get
inside
as
fast
as
possible
and
iron
these
out
and
then
another
thing
I
wanted
to
mention
is
we
are
planning
to
work
on
a
image
updating
strategy,
so
you
can
use
a
latest
image,
for
example,
but
also
we're
updating
our
stable
image
more
regularly.
B: So please be sure to note any needs you have in this issue: write down what you want, and we will evaluate whether that's possible to do or not. That's super exciting. On another note, as a stretch goal, we are continuing down the path of upsizing the Linux offering, again to achieve best-in-class CI build speed. What we want to do there is offer larger machine sizes. We recently upgraded our large size from four to eight vCPUs, and adding on to that, we want to implement an x-large with 16 vCPUs and a 2x-large with 32 vCPUs. So that's super exciting for everybody who runs super heavy loads. And yeah, that's about it from the Runner SaaS team; handing over to you, Darren.
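For context on how these sizes get used: on GitLab.com, hosted runner sizes are selected with job tags in `.gitlab-ci.yml`. The tag names below follow the published SaaS runner naming, but treat the x-large and 2x-large tags as assumptions, since those sizes were only planned at the time of this talk:

```yaml
build-large:
  tags: [saas-linux-large-amd64]    # recently upsized from 4 to 8 vCPUs
  script:
    - make -j8

build-xlarge:
  tags: [saas-linux-xlarge-amd64]   # planned 16-vCPU size (assumed tag name)
  script:
    - make -j16

test-macos:
  tags: [saas-macos-medium-m1]      # Apple silicon M1 runners, in beta
  script:
    - xcodebuild test -scheme MyApp # placeholder scheme name
```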
C: Hey, thanks Gina and Gabriel. I'm not sure how I will follow that from both of you, but I will try, and maybe talk about something in Runner Core that might be as super exciting as what we just saw. On the screen right now, folks, just to reiterate: Runner Core is the engine, sort of the thing under the hood of your super fancy sports car that is GitLab CI/CD. What you're seeing on the page right now is the direction page for Runner Core. If you ever want that top-level, 20,000-foot view of what we're thinking for Runner Core, this is the page to start with. And this is a key thing to call out, under the strategy and themes section: we're always thinking about how we can make it as efficient and as easy as possible for you to run your CI/CD workloads anywhere. At the end of the day, if you're not on GitLab SaaS then, as Gabe was pointing out, you have other sorts of needs: you're self-managing GitLab, you have more complex computing needs, and so you have to think about running your CI/CD workloads on your own compute. So we're always thinking about how we can make that experience as easy as possible, how we can simplify getting your GitLab CI/CD workload onto the substrate that will actually run it. When you look here at our strategy page, at the one-year plan, and tie our strategy back to the GitLab product strategy under the world-class DevSecOps experience, we have these three main buckets.
C
The
next
one
I
took
an
architecture
which
Gina
touched
on
in
the
beginning
of
her
presentation
were
mostly
done
with
that
piece
of
work
from
a
core
perspective,
and
the
next
phase
of
work
is
as
Gina
was
mentioning
sort
of
iterates
the
features
to
clean
up
things
in
the
UI
pieces,
so
the
runner
core
pieces
for
this
is
mostly
known.
We
have
a
few
tweaks
to
do
for
our
gitlab
on
our
health
insurance
and
we're
not
spending
a
whole
lot
of
time
there.
C
The
next
big
theme
under
the
world-class
Dev
setup
experience
sort
of
umbrella
bucket.
It's
next
one
Auto
scaling,
and
this
is
again
for
customers
that
have
to
manage
your
own
run
of
fleets
on
various
public
cloud
infrastructures
and
even
on-premise
public
Cloud
infrastrops,
maybe
like
VMware
different
open
stack
files,
or
what
have
you?
Can
you
easily
Auto
scare
runners
that
infrastructure?
What
we've
been
doing
this
here
is
coming
working
very,
very
hard
to
release
the
next
one
of
Auto
scaling
architecture
and
so
going
over
to
our
16
Network
iteration
plan.
C
You
can
zoom
in
when
you're
here
in
terms
of
new
features,
I'm
just
going
to
highlight
too
the
first
one
is
the
feeding
plugin
for
AWS
ec2.
So
we
release
the
fleeting
plugin
for
AWS
ec2
experimental
in
a
previous
release,
and
my
hope
is
to
transition
this
plugin
to
Beta,
And
16.1.
So
but
customers
that
are
looking
for
an
auto
scaling
solution.
The
new
auto
scaling
solution
for
Amazon
ec2
instances,
whether
it's
Mac
instances,
whether
it's
Linux
instances.
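As a sketch of what the fleeting plugin looks like on the runner side, assuming the docker-autoscaler executor and the key names from the runner documentation of that era; the token, Auto Scaling group name, and region are placeholders:

```toml
concurrent = 10

[[runners]]
  name = "aws-autoscaler"
  url = "https://gitlab.com"
  token = "glrt-EXAMPLE_TOKEN"      # placeholder authentication token
  executor = "docker-autoscaler"

  [runners.docker]
    image = "busybox:latest"

  [runners.autoscaler]
    plugin = "fleeting-plugin-aws"  # the EC2 fleeting plugin
    capacity_per_instance = 1       # one job per EC2 instance
    max_use_count = 1               # recycle instances after a single job
    max_instances = 10

    [runners.autoscaler.plugin_config]
      name   = "my-runner-asg"      # placeholder Auto Scaling group name
      region = "us-west-2"

    [runners.autoscaler.connector_config]
      username = "ec2-user"
```

The plugin only handles provisioning and deleting instances; the runner itself decides when to scale based on job demand and the capacity settings above.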
C
So
as
soon
as
we
can
get
to
Beta
the
quicker,
we
can
get
feedback
from
you
from
the
customer
base
at
scale
and
determine
what
key
things
we
need
to
do
to
add
to
the
feature
set
to
transition
to
GA
and
then
right
below
that
you'll
see
that
we've
got
the
fleeting
plugin
and
the
plugin
is
the
thing
that
sort
of
makes
a
new,
auto
scaling
abstraction
work
on
public
clouds
right,
and
so
we
want
to
get
to
an
experimental
plugin
for
Azure.
So
the
AWS
plugin
is
already
available
as
an
experiment.
C
The
gcp
plugin
experiment
is
shipping
in
gitlab
1600.
So
the
release
that's
coming
out
right
now
on
the
22nd
of
May
and
then
the
goal
for
16
month
is
to
release
the
plugin
for
Azure.
So
our
initial
plans
will
be
got
into
this
journey
was
to
say
hey.
We
will
develop
the
plugins
for
the
top
three
clouds,
but
then
the
plug-in
architectural
framework
is
hopefully
simple
enough
and
we've
got
some
videos
that
was
created
by
one
of
our
Engineers
Joe
Burnett.
C: We want to continue working through our backlog of bugs that are critical to making sure you can run GitLab CI efficiently in your environments, and then, obviously, we spend a bit of our capacity each iteration resolving any new security issues. And finally, one quick callout for this iteration as well: we've got a lot of work in our maintenance bucket.
C
So,
in
addition
to
the
features
we
want
to
get
out
and
in
addition
to
the
to
the
bugs
the
functional
bugs
and,
of
course,
the
security
issues
which
which
gets
worked
on
the
first
there's
a
number
of
maintenance
issues
that
we
need
to
get
to
to
that
are
essential
not
only
to
sort
of
our
workflows
but
critical
pieces
of
work.
C
That's
essential
to
ensuring
our
customers
could
run
gitlab
runner,
for
example,
we're
going
to
do
some
testing
of
the
runner
with
the
latest
versions
of
podman
from
from
Red
Hat,
which
is
basically
a
different
container.
Runtime
in
that
replaces
Docker,
and
we
have
support
for
pubman
today.
But
we
need
to
ensure
that,
for
example,
apartment
4.5,
which
is
latest
version
that
the
runner
continues
to
work
with
podman.
So
there's
somehow
there's
a
lot
of
these
other
things
as
well,
so
again,
not
as
exciting
as
as
quality
I'm.
Sorry
as
Fleet
and
SAS.
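For the Podman support mentioned, the runner's Docker executor can point at Podman's Docker-compatible API socket. A sketch, where the socket path depends on your host (rootless user 1000 shown; rootful hosts typically use `unix:///run/podman/podman.sock`) and the token and image are placeholders:

```toml
[[runners]]
  name = "podman-runner"
  url = "https://gitlab.com"
  token = "glrt-EXAMPLE_TOKEN"   # placeholder authentication token
  executor = "docker"

  [runners.docker]
    # Podman's Docker-compatible socket instead of the Docker daemon
    host = "unix:///run/user/1000/podman/podman.sock"
    image = "registry.fedoraproject.org/fedora:latest"
```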