From YouTube: Next possible iteration of CI/CD GitLab Runners autoscaling and GitLab CI/CD Service on gitlab.com
A: The CI/CD daemon, and Firecracker, the Amazon product they use on AWS for Lambda and Fargate. For them it makes virtual machines much easier and more efficient to run, reducing cost and time, and it could be something we could use to replace the Docker Machine executor we have right now, or to augment it somehow. This is merely an idea; Firecracker was mentioned by one of the Verify:CI team members (I think it was Mario) who brought up this project. I had not heard about it before.
A: Yeah, so the idea is this: we are currently using the Docker executor, and we recently discussed the problem of Docker Machine not being maintained anymore and reaching end of life, probably a couple of months from now. That might be a good opportunity to think about what we could replace it with, and my idea is that we could create a service and an API.
B: ...that was more than a year ago, a few months after Docker Machine was announced as no longer maintained. Firecracker, the way it works (I checked it recently), could fit one of the ideas we had, because somewhere in January or February last year three of us, Kamil, Steve, and me, made an experiment: we each chose an approach to explore.
B: Because supporting autoscaling is hard; it's a complex task, and there are probably communities that do this better than us. We should try to use what already exists and just fit into it, so that the runner only asks for a place to execute the job, for the environment, then jumps into this environment and does what the runner does best. The idea Kamil proposed was to use Kubernetes, but not with the Kubernetes executor that we have now; instead, Kubernetes running aside the runner and acting as the engine for autoscaling.
B: Kubernetes would manage autoscaling the nodes; for example, in GKE you can define that the cluster should autoscale. We then don't need to think about scaling the VMs ourselves; we only need to talk to some existing Kubernetes cluster and schedule a pod creation. But then, why don't we just use the Kubernetes executor, right?
B: First, there is the need for privileged mode, because the Docker-in-Docker approach is still too popular to drop. Second, even if we dropped Docker-in-Docker, it's still risky to execute two jobs from two random projects on the same VM, and with Kubernetes we don't have full control over which VM a given pod lands on.
B: So what Kamil proposed was that we don't use the Kubernetes executor per se, but use Kubernetes aside the runner to create a pod in which we always start the same container, which creates a fully virtualized VM, probably with nested virtualization in, let's say, GCP. What the runner then gets is access to this VM inside the pod. This is, for example, how MacStadium handles their macOS cloud.
B: They have an internal Kubernetes cluster that the user has no access to at all, because it is their internal implementation of autoscaling, and they have their own API that communicates with this cluster. It schedules the pod, which creates a VM inside, and gives the user back SSH access to this VM. We wanted to do the same, but we got stuck on how to manage the creation of this internal VM in a good way, and at that point we finished the discussion about the POC with "this is an interesting idea."
A: I see. It appears that Firecracker has also been integrated with containerd, and I wonder how easy it would actually be to configure a Kubernetes cluster using containerd backed by the Firecracker runtime. Perhaps it could be a matter of a couple of lines of configuration to use Firecracker microVMs instead of containers, and this could be transparent to the current architecture of the runner. It could be an interesting solution, but we won't know without giving it a try and building a cluster like that.
B: The way the Kubernetes executor works is totally different. We don't work with Docker in the Kubernetes executor; we don't work with Docker itself at all. We just work with pods: we schedule a pod configured from the user-defined images from the job and from the services.
B: A few months ago, at the beginning of last year, we introduced the custom executor in GitLab Runner, which gives you an abstracted interface for hooking into how the executor works. At this moment we don't accept any new executors implemented in the Runner code base itself; if anyone wants to implement a new type of executor, it must go through the custom executor interface. The follow-up from this was to do the same with how Docker Machine works right now, because the Docker Machine executor is a wrapper around the Docker executor.
B: We wanted to abstract the behavior of Docker Machine into something named "Docker executor provider", or "Docker provider", or "custom Docker provider", something like that. Docker Machine would then be one type of this provider, and we could implement a new one. This is the place where we could experiment with the Kubernetes abstraction: we would prepare a driver for such a custom Docker provider, one that would use a Kubernetes cluster, configured to use Firecracker, to create a VM with a Docker Engine inside.
B: That would give us back the credentials to the Docker Engine, and then we would inject them into the internal Docker executor, which we already have working as we expect. So we just extract this whole Kubernetes-plus-Firecracker machinery outside of the runner itself and outside of executing the job; it is only there to prepare the environment.
A: That's very interesting. It appears this could be a multi-step process. One step is to replace Docker Machine with something that makes use of Firecracker and becomes a Docker provider for the executor. Then we could perhaps mix that with the Kubernetes executor, or reuse some parts of its code to extend this. Why not?
B: And this is why we don't want to hook into the Kubernetes executor: it is a different concept of working. The Kubernetes executor directly executes the job on the Kubernetes cluster, and we don't want to do that here. We want to use Kubernetes externally, to provide us a virtual machine with a Docker Engine. Okay.
B: I don't know all the differences; from what I read, I only see that there is already a proven configuration for how to start such a VM from Kubernetes, and this is why the approach becomes usable. The biggest problem with the Kubernetes approach Kamil proposed last year was not cluster setup; configuring an autoscaled Kubernetes cluster in GKE is simple. We already have Terraform scripts in our infrastructure that create such clusters, right now for the Sidekiq fleet on gitlab.com, so we already know how to do that part.
B: The problematic part is that we need to start pods with a configuration that will start an internal VM, and this needs to work with nested virtualization, because the GKE cluster itself runs on virtual machines. This was something unknown to us, something we would have needed to work out. And from what I read in the Firecracker documentation, this is a problem they solved: they give you a working configuration, a pod specification, for creating a pod that will give you this virtual machine inside of Kubernetes.
B: Escaping to the host kernel with full virtualization is way, way harder. Virtualization is now 20 years old; containers in their current form are still, probably, the hype of the last five or six years. Google was using them for more years than that, but they are still not as proven a technology as virtualization is.
A: And I wonder what approach we can take to make some predictions about whether we would actually save something or not, or whether this is going to cost us more. I think it's very difficult to predict, because I know that we are trying to make efficient use of the virtual machines: we reuse them, we keep only a small pool of idle machines, and so on. But yeah, how to quantify it...
B: So the first thing we will need to do, no matter which direction we choose, is to create this custom provider interface, because it opens a way for us to start supporting autoscaling for the Docker executor in different ways, and it would also make the Windows Docker executor applicable for autoscaling. Okay.
B: A fresh start: each job still gets a fresh start with a clean base image, but we don't waste time on removing machines and creating them again. We already said in the past that this would be a big improvement, but not every cloud provider that Docker Machine supports allows that kind of provisioning, so we couldn't implement it in the executor. Even with only this we should already have a big, big cost saving, and then being able to execute multiple jobs on one node, which Kubernetes does by default, is the next level of efficiency.
A: ...a gut feeling about what the users are doing. My gut feeling is that with Firecracker and with running jobs on Kubernetes, instead of managing the autoscaling inside the runner, the cost saving might be higher than 50%. That's my gut feeling, and I might be wrong; of course we need to run experiments anyway. Let me summarize, for people that might watch this video: it appears that currently we are running around 345,000 jobs in each 24 hours.
A: It appears that 50 percent of the time is wasted on creating a machine and removing it, and we are being billed for that time. By using Firecracker and Kubernetes we can reduce the time we spend on creating virtual machines, and the time we spend on removing them, to a basically negligible amount, probably, given the scale. Also, the autoscaling we currently have in GitLab Runner is very inefficient, and Kubernetes has been built to increase utilization and solve exactly this problem, so by using Kubernetes autoscaling we can shave...
A: ...shave off more time, shave off some costs, and become more efficient at utilizing virtual machines and running jobs. Our assumption for now is that this way we can reduce around 50% of the costs of shared runners on gitlab.com. This is an assumption we would need to validate, but it looks promising enough to iterate on.
B: Yes, this is the big, big win of using Kubernetes for such autoscaling, because then it is Kubernetes' job to see and estimate how many resources are used and how much more can be scheduled on a certain node. We would only need to observe these metrics and try to fit the VM size so that utilization sits at a level of around 90 percent, so that we use the maximum of the nodes.
A: Switching 1% of traffic to the new architecture... otherwise it will be very difficult to actually get valuable insights. OK, so I think this really looks very promising. I will share this video and our calculations with the Runner team, and we will see how to actually make progress on that. I think that what we are saying about extracting and refactoring the Docker executor we have right now behind an interface might be a really good start and would move us towards experimentation. So that's interesting. OK, so, yeah.
A: I can spend... I have about 20 minutes left, and I think it might be enough. So another interesting initiative, in my opinion, is the CI/CD daemon. I managed to take a look at our metrics yesterday to see how many requests our application has to handle, how many builds we need to serve and how many traces we actually post. It appears that... let me first take a look at the issue.
A: The number of requests for patching the trace is currently 250 per second, around 22,000 transactions per minute, which is a significant number. The CI/CD daemon could actually help with that: it could aggregate some of these requests and do some of the work concurrently that we otherwise need to do on the write side.
A: Getting builds from the API is currently 6,000 transactions per minute, with around 100 database queries per request at the 95th percentile. So I wonder... The CI/CD daemon initiative is not so obvious, unlike the Firecracker and runner-efficiency improvements; that one is a no-brainer, we should do it, or at least work towards getting more metrics and insights to make more informed decisions. In the case of the CI/CD daemon, I think that in order to actually do something we need to have more data and more...
A: Of course, in terms of processors we have the web and API nodes, but I think that the promise of what we could build having the CI/CD daemon is probably not enough, on its own, to justify building the CI/CD daemon itself; we need to iterate in a way where we work on something for a release and see some tangible benefits from that work. What is your opinion about that? What tangible benefit could we see after working on the CI/CD daemon for a month or two?
B: The first time we discussed the CI/CD daemon in the past, we wanted to do it because of the job queuing, and I think this is still the biggest, the most important reason to start it. Like you just said, we have 5,000 transactions per minute for requesting a new job, and each of these transactions is a set of huge queries. Huge, because the way our job queuing works relies fully on the database.
B: We use the database and SQL queries to find out which job would be the best choice for the runner that asked for a new job. Then we add some more internal queries to handle things like CI minutes quotas, supporting group-level runners, or supporting project-level runners, and in the end this produces a very, very complex query that is executed every time a runner asks for a new job, which now happens five thousand times per minute. And this...
A: As for the timings: peaks aside, the numbers don't lie, so we can probably calculate the amount of time we spend on all that. We have, let's see, around 150 milliseconds, or 90 milliseconds, multiplied by 200 requests per second... OK, so that's in milliseconds.
A: I'd agree that building the CI/CD daemon might result in making the CI platform much more reliable, and this is a very important aspect of a CI platform: people want to depend on platforms like that, and they want to use a product that really is reliable. But can that really be captured in metrics? How can we back this work up with data? That's something I feel is crucial to getting it done and making progress on it in iterations.
B: Right, and those are the normal-time numbers, the metrics you already showed; this is what we have now, and I think it's not good. These timings are not good even if they look small, because this happens constantly. It's not like the API for getting a list of members that I want to put into a changelog, which you call from time to time; the runner asks for a new job constantly. We currently have something around 50,000 runners registered and active on gitlab.com.
B: Each of them is asking for a new job, probably every second, sometimes even more often, and if we could move this off the database I really see no reason to even hesitate. This opens us up to many, many things in the future, and not only the features that you mentioned, like a gRPC protocol for talking with the runner. That would be nice, but the protocol we have now is quite efficient for asking for jobs and for updating them.
B: ...on gitlab.com I have my own runners enabled, and if I have capacity on the runner side, on my private runners, I want to use them as much as I can, because I already paid for them. I don't want to wait on a runner that sits unused because the jobs got scheduled on the shared runners. Doing this now would mean...
B: Every idea that we discussed in the past started from the fact that we need to change how we handle the queuing. Whether it will be a change in Rails or a totally new thing such as the CI/CD daemon is not the point of the discussion. We can't add any new features related to scheduling jobs with the current implementation, because it's not efficient already; I think anything else would make it much slower, much more harmful for the database, and probably way, way more complex.
B: Even if we want to work only on preparing some fancy SQL queries to handle it... like, I think, Kamil recently proposed somewhere to start handling the scheduling in the context of each runner. We could prepare some initial list of jobs that could be sent to a runner, and this could be computed however we want. But this means that we need to change the current mechanism anyway.
A: I think that it might be a very interesting solution, but I also want to be a devil's advocate here and try to put up some counterarguments to make this discussion more valuable. Another point against the CI/CD daemon is that we could probably still optimize queuing on the Rails side; there is still some room for improvement there. It would require a significant refactoring, but Kamil expressed the opinion that we can do that and that we can still be much more efficient with queuing on the Rails side.
B: It's like what we did with Gitaly in the past. The main, official version of GitLab was still relying on local access to the file system, and customers that had HA needed to use NFS or things like that, but we started experimenting with Gitaly on the side to check how it works. It may be similarly hard to start with the CI/CD daemon, because we may need three, four, five months to create this internal architecture before being able to inject it into gitlab.com and start refactoring the queries.
A: I agree with you, but let me check whether we are on the same page, and let's be a little more explicit so that people watching this video understand it better. Our assumption is that the benefits coming from the CI/CD daemon might be interesting: we might be able to build a lot of interesting features on top of it, and we might be able to improve performance. But it doesn't make sense right now to invest, say, six months in building it, because there is still some room for improvement on the Rails side. Being able to foster experimentation, though, by building something in a week or two, or a month at most, is worth it, because this way we can get more insights, we can iterate better, and we can validate our assumptions and discard the ones that turn out to be invalid, having invested only a release or so in something.
B: If we can do it quickly, I would say let's do it, because we need to refactor the queuing anyway, and there is no difference whether we do it in the service or in Rails; it's only a matter of how much time preparing the service itself will take. If we can do this quickly, then let's just jump straight to the final solution that we want to have.
A: We need to work a little bit more on figuring out what we can actually do in one release that will still get us more insights, to see something in the metrics, to understand this problem and this project better, and to judge how viable it might be to invest in the next iteration. So let's leave it at that; I scheduled another meeting to discuss exactly this problem: what can be done in the first iteration to gain more insights and to decide whether we want to proceed with the CI/CD daemon to the next iteration or not.