From YouTube: GitLab 16.3 Kickoff - Verify:Runner
Description
Kickoff video for GitLab Runner Core and Fleet
A: Hi everyone, this is the 16.3 milestone planning call. I'm Gina, and I'm here with Darren. We are going to start with Runner Fleet for this call, so I'm going to start sharing my screen. Just as a reminder, Runner Fleet is about managing your fleet of runners, whether you're bringing them yourself or even using some of our shared runners, and being able to quickly make decisions based on whatever is happening within your fleet.
A: So the end result, which you may have seen before, is going to be something like this, and we're iterating to get there. Last milestone we were able to add the active runners panel, and this milestone we're going to continue working to add things like the action buttons.
A: Potentially also the fleet health panel that you see up here. And then for the wait time here, we're having some work done on a POC to, effectively and performantly I guess, create a database structure that can get all of that data historically, so that we can show the full visualization I was just showing. Those are the two biggest things we're working on. For UX, we're going to continue analyzing the mental model research that we started last milestone.
A: This is kind of a big thing for us: sorting out the terminology that we're using across runners, and learning how people, how our users, are conceptualizing those, so that we can better design the experience to match how you think about runners. Another big thing we're going to work on this milestone for UX is an MVC for Runner cost visibility.
A
So
we've
heard
from
multiple
multiple
users
that
it's
difficult
to
see
what
the
infrastructure
costs
are
when
you're,
even
bringing
your
own
Runners,
and
it's
really
important
to
even
be
able
to
sort
that
out.
According
to
groups
and
projects
in
your
team,
so
we're
going
to
start
with
a
very
simple,
hopefully,
MVC
and
also
a
vision
will
come
out
of
this,
so
that
we
have
an
idea
of
where
we're
going
to
go
next
and
that's
about
it
from
Fleet.
So
I'll
give
it
to
Darren
for
Runner
core.
B: Hey Gina, thanks a bunch. So hi folks, this is Darren again. Just to level set for everyone on the call: I've gotten a couple of questions from some customers over the past few weeks, like, "Hey Darren, what's the Runner Core strategy? What does the Runner Core roadmap look like? Are you doing new things with the architecture?" And as I'm happy to mention, whenever you're thinking about a question like that, first come to our Runner Core direction page. As you can see here, I revised the content of this page back on July 6, so it's pretty fresh, and this is kind of your starting point. And yes, I understand, and we understand, that there are a lot of moving parts.
B: Generally speaking, it covers some of the big-ticket themes that we're working on in the current fiscal year, and right now we're in GitLab fiscal year 24. So you can say, "Okay, here's what they're thinking about, here's what they're working on in terms of big themes this fiscal year," then dive into some of these features, and then also ask the question, "Hey, I'm interested in something else, Darren, or team, that's not on here!"
B
So
with
that
said,
for
fy24,
our
product
teams
haven't
changed
right
since
we
created
the
initial
iteration
this
direction
page
at
the
start
of
the
year
right
and
our
fy24
pro
teams
that
are
mapped
to
gitlabs
FY
24,
product
investment,
themes,
themes
and
the
first
one,
the
first
major
head
against
the
world-class
Dev
setups
experience.
B
So
under
here
using
neurons
like
the
next
one,
is
open
architecture-
I'm
not
going
to
talk
about
that
in
today's
kickoff
call,
because
we've
done
a
heavy
lifting
with
that
theme
and
that
book
effort,
the
next
bit
of
molecules,
CEO
Communications
around
that
will
start
coming
out
in
16.5,
or
so
when
we
start
talking
about
migration
and
deactivating
certain
things
right,
but
there's
no
new
features
planned
on
that.
We've
done
a
lot
of
that
work
already.
B
The
next
one
here
is
next
one
Auto
scaling,
so
the
value
prop
for
next
router
Auto
scaling
and
why
it's
super
important
to
large
swaths
of
our
customers.
Right.
We
have
customers
on
gitlab
SAS.
That's
as
Genie
was
pointing
out
that
manage
their
own
Runners.
We
have
customers
that
have
self-match
instances
forget
lab,
obviously,
and
they
themselves
of
course,
have
to
manage
Runners
one
of
the
big
questions.
One
of
the
the
main
question
we
get
probably
is
hey
either
I
am
a
customer.
I
am
studying.
B
Oh,
this
is
my
first
time
setting
up
a
git
lab
or
B.
What
we're
hearing
a
lot
recently
is:
hey
I'm,
going
to
get
that
customer
for
a
number
of
years.
Now
our
organization
is
adopting
gitlab
more
fully.
This
thing
is
scaling.
I've
got
more
teams
more
projects,
more
groups.
How
do
I
configure
this
run
a
thing
to
most
effective
way
possible
right?
What's
the
best
method
do
I
have
dedicated
Runners
at
a
project
level?
B: "Do I have dedicated runners at the group level? Do I have dedicated ones at the instance level? If I have runners at the group level, do I do auto-scaling? How do I handle all these workloads?" Next Runner Auto-scaling is one solution to try to help our customers manage what could become a very complex environment.
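One piece of context for the project/group/instance question above: jobs are routed to specific runners with tags in `.gitlab-ci.yml`. A minimal sketch, where the job name and tag are hypothetical:

```yaml
# Hypothetical job that will only be picked up by runners
# registered with the "linux-group-runner" tag, for example
# a dedicated group-level runner.
build-job:
  script:
    - make build
  tags:
    - linux-group-runner
```

Runners with "run untagged jobs" enabled will also pick up jobs that have no tags, which is how a single instance-level runner can serve jobs from across an organization.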
B
If
you
are
a
self-managed
customer
and
you're
using
one
of
these
top
three
public
compute
platforms
here
in
the
United
States
with
AWS
Google
Azure
right,
you
can
very
easily
Auto
scale,
get
lab
runner
on
instances
that
are
on
those
public
Cloud
platforms.
So
that
means
that
if
you're
starting,
you
know
you
say,
okay,
how
do
I
start?
How
do
I
offer
up
GitHub
CI
to
my
entire
organization?
B
Well,
the
easy
answer:
is
you
create
one
one
gitlab
runner
at
the
instance
level
you
can
figure
Auto
scaling
and
that
won't
get
that
one
with
auto
skating
can
handle
hundreds
and
hundreds
of
thousands
of
CI
jobs
daily
monthly,
you
name
it
and
then
you
can
kind
of
Branch
out
from
there.
So
that's
the
value
proposition
here
on
auto
scaling.
It's
giving
you
a
very
easy
way
to
set
up
a
runner
environment
that
can
automatically
scale
up
and
automatically
scaled
down
as
your
workloads
demand.
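As a rough illustration of the instance-level setup described above, here is a sketch of a Runner `config.toml` using the docker-autoscaler executor with the AWS fleeting plugin. Every value (URL, token, autoscaling group name, limits) is a placeholder, and exact option names can vary by Runner version, so treat this as a sketch rather than a reference and check the GitLab Runner docs for your release:

```toml
concurrent = 50

[[runners]]
  name = "instance-level-autoscaler"
  url = "https://gitlab.example.com"   # placeholder instance URL
  token = "RUNNER_TOKEN"               # placeholder runner token
  executor = "docker-autoscaler"

  [runners.docker]
    image = "alpine:latest"            # default job image

  [runners.autoscaler]
    plugin = "fleeting-plugin-aws"     # EC2 plugin; GCP/Azure variants exist
    capacity_per_instance = 1          # one concurrent job per VM
    max_use_count = 1                  # recycle each VM after one job
    max_instances = 20                 # upper bound on the fleet size

    # Keep a small warm pool so jobs don't wait on VM boot.
    [[runners.autoscaler.policy]]
      idle_count = 2
      idle_time = "20m"

    [runners.autoscaler.plugin_config] # passed through to the plugin
      name = "my-runner-asg"           # placeholder autoscaling group
      region = "us-east-1"
```

The scale-up and scale-down behavior the talk describes falls out of `max_instances` (the ceiling) and the idle policy (the floor): the autoscaler provisions VMs as jobs queue and tears them back down toward `idle_count` when demand drops.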
B
And
so
what
we've
been
working
on
is
a
new
order,
scaling
architecture
right,
we're
calling
the
next
One
auto
scaling.
It
has
a
whole
new
set
of
components
and
we're
doing
it
for
ourselves
we're
creating
the
plugins
for
ec2,
Google
and
Azure.
So
with
that
said
in
16.3
now
finally
getting
over
the
16x3
iteration
plan,
because
I
completely
did
a
complete
sidebender,
then
it
gave
you
a
bit
of
a
mouthful
and
16.3
on
that
particular
theme
and
feature
set.
B
What
we
hope
to
get
done
is
transitioning
the
plugin
for
Azure
version
of
virtual
machines
from
experimental
to
Beta
and
16.2
right
now,
we're
hoping
to
do
the
same
thing
for
the
AWS
ec2
plugin,
which
is
currently
experimental
if
the
transition
of
the
AWS
ec2
plugin
from
experimental
to
Beta
does
not
happen
in
1602
that
will
over
the
16
and
three.
But
our
goal
is
to
move
all
of
these
plugins
AWS
ec2
Azure
virtual
machines,
Google
Cloud
for
Google
Cloud
compute
instances.
B
Our
goal
is
to
move
the
sub
data
next
two
months
and
then
after
moving
from
beta
by
Ray
Q3
early
Q4
of
this
fiscal
year.
My
goal
is
to
move
those
plugins
to
GA
and
move
the
entire
next
one
Auto
scaling
solution
to
GA,
so
customers
in
order
scale
solution
can
start
thinking
about.
Yes,
I
can
adopt
and
migrate
to
this
new
order
state.
So
that's
kind
of
the
value
proposition
of
that
big
theme
and
that's
what
we're
working
on
in
16-3
specific
to
Auto
scaling.
B
We
have
some
other
small
small
features
and
small
features.
Other
features
here
that
are
planned,
but
they're
sort
of
within
the
core.
Runner
code
base
and
then
another
area
of
investment
again
distributed
with
the
steam
theme,
every
every
iteration.
B
First
order
of
business
security
issues
vulnerabilities
in
the
core
owner
product
next
order
of
business
is
our
continued
efforts
to
address
long-standing
bugs
in
the
core
runability
ensure
that
run
a
core
itself,
a
stable,
reliable,
etc,
etc
so
that
you
can
run
your
cicb
jobs
at
scale.
So
high
level,
big
picture
theme
for
this
year
again:
Auto
scaling
from
an
architectural
perspective
and
16.3
the
the
focus
is
to
get
the
Azure
virtual
machine
plugin
from
experimental
to
Beta
and
just
to
wrap
up
the
sort
of
put
a
bow
on
the
runner,
Auto
scaling
solution.
B: If anyone watching the video is interested, perhaps, my recommendation is just to give us a call and start a POC as soon as possible, because the sooner you start a POC and start playing with it, the sooner we will get your feedback and can determine if there are any edge cases or any critical issues that we're missing.
B
My
recommendation
is:
do
not
wait
if
you
want
to
try
it
out,
let's
try
start
trying
it
right,
but
right
now,
in
a
self-contained
environment
start
small,
even
though
it's
experimental,
we
are
actually
dog
fooding
it
right
now
and
get
lab
SAS
for
our
own
macro
and
stronger
Fleet.
So
we're
working
out
the
Cake
The
Kinks
at
scale
as
well.
So
that's
it
for
three
back
to
Regina.
Sorry,
I
and
I
went
low.
A: No problem, no, I think that overview is very helpful for everyone watching. So yeah, that sums up our 16.3 plan, and as always, if you have any questions or feedback, please feel free to reach out to either Darren or me. Thank you.