From YouTube: 2020-07-08 AMA about GitLab Releases
Description
AMA with the Delivery Team
B: I'll ask a question that I found in the issue linked to Maurin, but I'd love to ask the team. What I'm trying to get an understanding of is kind of the value and the "why" behind our Kubernetes migration. I know it's an ongoing thing and something that the team has been focused on for, I think, years, if not months, but I wanted to try to get a sense of the implications once we move.
A: So that's kind of the big motivator. There are others: hopefully, and what we're already seeing is that the Kubernetes pods start up considerably faster than the VMs. That gives us a lot more opportunity in terms of scaling; we'll have a lot more ability to handle sharp spikes in traffic, because we can add pods way more quickly than we can currently add VMs. It should give us better cost management as well, just because we'll be able to tune things to better fit our needs.
A: So I think, from my point of view, it's very much around giving us... it should give us a little bit more confidence in the stability, but it also gives us a little bit more flexibility in how we actually set up and operate GitLab. Jeff, do you have other stuff you'd like to call out? That was super high level, yeah.
C: Yeah, I think everything Amy just said. There's also, I mean, one of the main reasons we want to move to Kubernetes on GitLab.com: to make it easier for our self-managed customers to run GitLab in Kubernetes. By dogfooding the Helm charts and using the same tooling that we provide to our self-managed customers, we're kind of working out the problems and making sure that things are, you know, working well. So I think it's twofold: one, it's very good for running the SaaS.
B: That's great, I like the dogfooding angle. One thing you had said, Amy, was this scaling speed. Does that manifest itself as fewer performance issues for our users, who might experience interim performance problems while we're scaling GitLab.com via VMs today? Or is it more like less downtime in general, because we might have been down or having a problem, and we can scale to alleviate that problem quickly?
A: In theory, it should be both. One of the good things about Kubernetes is that it should be able to sort of know when it's running out of capacity, or, as we configure it, when we need to add more. It should be able to add more capacity quickly and handle spikes. It should also be able to manage its own health, right? So we can set it up so that it knows when a certain amount, say 25%, is unhealthy, and it can add its own pods and bring them back.
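The self-healing and autoscaling behavior described above maps onto standard Kubernetes primitives: a liveness probe tells the kubelet to restart pods that fail a health check, and a HorizontalPodAutoscaler adds pods when load rises. A minimal sketch follows; the resource names, image, probe path, and thresholds here are hypothetical illustrations, not GitLab.com's actual configuration.

```yaml
# Hypothetical example: restart pods that fail a health check,
# and scale out automatically under CPU pressure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webservice            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webservice
  template:
    metadata:
      labels:
        app: webservice
    spec:
      containers:
        - name: webservice
          image: example/webservice:latest   # placeholder image
          livenessProbe:                     # kubelet restarts the pod if this fails
            httpGet:
              path: /-/liveness
              port: 8080
            failureThreshold: 3
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webservice
  minReplicas: 3
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add pods when average CPU exceeds 75%
```

With a setup along these lines, Kubernetes replaces pods it deems unhealthy and scales the replica count within the configured bounds, which is far faster than provisioning new VMs.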
D: There is also something that I would like to add here, because it may help a lot from a product perspective; yeah, lots of product folks are on this call. There is an issue that I opened some time ago (I'm going to paste the link here) which is about a nice feature that we can get from a complete Kubernetes deployment: matching routes with product categories.
D: So if we go down this route, we could end up having the deployment segregated by product categories, and then we can extract metrics per product category, like number of crashes or number of users, things like that. It's a moonshot, right? First we need to be able to run everything on Kubernetes, but once we have that, we can also get useful business information, in terms of which features are most used and which features interact with other features. So I'll just leave you the link.
B: Can I say "yes, and" on that? As I understand our infrastructure teams' setup today, there's not really close alignment between product categories and features and the infrastructure teams' responsibilities. But that would maybe allow us to get a little bit closer to DevOps, where the engineering or development teams would have much higher awareness of things like crashes, or other issues particularly affiliated with their features and categories.