Description
In this video, Skarbek enables the Sidekiq Queue 'project_export' running in Kubernetes for GitLab.com. See https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1837 for details.
Today, I would like to enable the project_export Sidekiq queue running in Kubernetes. Change request 1837 contains this information. This is no different than the experiment that we performed a few weekends ago, the difference being that we will operate using the full capacity of sixteen pods. We are not limiting it to four pods like we did a few weekends ago, and the second item to note is that we don't plan on rolling this back, so this is a permanent thing.
As this goes into production, we will not be making any changes to the existing virtual machines. The virtual machines that currently run project_export are not being turned off, shut down, removed, or anything like that; they will continue chugging through project exports as needed. So by the time this becomes enabled, we will technically have 32 total project_export queues running inside of Sidekiq in production: 16 on VMs and 16 as pods.
The other important bit to this is this very important line: deployments in Kubernetes are currently going to be handled in a manual fashion. This work will be shared between jarv and me, and we'll simply roll out a new change to Kubernetes after a deployment to production has been completed successfully.
A
Him
and
I
will
coordinate
during
our
day
throughout
the
week.
We
hope
to
complete
the
necessary
work
to
finish
auto
deploy
this
week,
so
this
should
not
be
lengthy
thing
to
work
worried
about
so
for
this
particular
change.
All
we're
gonna
do
is
merge
a
merge
request.
Once
that's
merged,
we
go
find
the
pipeline
and
mainly
hit
the
play
button
to
push
it
into
production
and
I've
got
links
to
logs
and
metrics
that
we
watching
throughout
this
process.
So
I'll
go
ahead.
Open
up
this.
This MR is very straightforward: we're just changing the queue from null to project_export, and we're changing the replicas. Currently it's set to 1 for both min and max; we're going to set a minimum of 4, because the pods do take a while to start up, and we'll set a maximum of 16. That is what we plan to run in production for the foreseeable future.
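For illustration, the MR amounts to a values edit along these lines. This is a minimal sketch: the key names are modeled on the GitLab Helm chart's Sidekiq pod definitions and are assumptions, not the exact layout of the real values file.

```yaml
# Hedged sketch of the MR's values change; key names are assumptions.
gitlab:
  sidekiq:
    pods:
      - name: export
        queues: project_export   # previously: null
        minReplicas: 4           # previously: 1; pods take a while to start
        maxReplicas: 16          # previously: 1; full capacity in production
```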
So I will proceed to hit the... I guess I've got to get rid of the WIP status first.
Let me go through that right now, before I deploy to production. The rollback steps are relatively straightforward; there are two ways we could go about doing this. If there's a situation where we discover an issue but we can take our time to perform the rollback procedure, we could simply revert the merge request. That'll be the easiest way to get through this: not the quickest, but the easiest.
The only thing that you need to be aware of is that if you do perform a revert, you need to make sure that you go back in and manually apply it to the production environment, because that's still a manual action at this moment in time.
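For illustration, the revert path could look something like this. It's a hedged sketch: the branch name is made up, the merge commit SHA is a placeholder, and the production apply remains the manual pipeline step just described.

```sh
# Hedged sketch of the revert path; branch name and SHA are placeholders.
git checkout -b revert-project-export-queue
git revert -m 1 <merge-commit-sha>   # -m 1 because we revert a merge commit
git push origin revert-project-export-queue
# Open and merge the revert MR, then manually run the production job
# in the resulting pipeline; that apply is still a manual action.
```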
If, for whatever reason, the process of creating the revert merge request and getting it reverted is too lengthy and we need to stop immediately, you can log in to our console server and just scale it down to zero pods. There are two commands to copy and paste that will perform the precise actions that need to occur, and that will happen pretty much immediately. It takes roughly between 30 and 60 seconds for the pods to shut down, but that'll force them to delete themselves.
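The two commands would be along these lines. This is a hedged sketch: the namespace and resource names are assumptions, not the exact ones on the console server.

```sh
# Break-glass scale-down; namespace and names are assumptions.
# Delete the HPA first so it cannot scale the deployment back up,
# then scale the Sidekiq export deployment down to zero pods.
kubectl -n gitlab delete hpa gitlab-sidekiq-export-v1
kubectl -n gitlab scale deployment gitlab-sidekiq-export-v1 --replicas=0
```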
So that's our break-glass scenario. Reverting the merge request is obviously the desired path; I've got that as the first step here, but I even have a comment noting that it takes roughly 10 minutes to go all the way through the pipeline.
Okay, so we've deployed to all of our environments, so I'm going to hit the play button on production and we'll watch that here. And I guess, if I wanted to, let me pull that in here somewhere so we can monitor how well things are going in the meantime. Let me stop the share real quick; I'm going to try to share my entire desktop. The font size will be a little goofy, but I'll try it. So I'm just logging into the production console server so we can watch kubectl.
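Something like the following would do it; the namespace and label selector here are assumptions.

```sh
# Watch the Sidekiq pods roll over as the new configuration lands.
watch kubectl -n gitlab get pods -l app=sidekiq
```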
We care specifically about Sidekiq, so we should see one pod. Perfect, it's already started the deploy for the export, so we see the init containers coming in for the rest of them. We're switching the HPA from one to four pods, and we're also changing the queue from null to project_export.
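To confirm the HPA picked up the new bounds, a quick check could look like this; the resource name is an assumption.

```sh
# MINPODS/MAXPODS should now read 4 and 16.
kubectl -n gitlab get hpa gitlab-sidekiq-export-v1
```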
So once these new pods come online, they're going to start processing work, which is pretty cool. For some reason, auto-scrolling is not working inside of this CI screen. It's great.
We're not changing the behavior of how Sidekiq export, or rather project export, operates in any way, shape, or form. This is strictly a change to where project_export is being picked up from. We have done a little bit of research, and we have determined that project export runs ever so slightly faster inside of our Kubernetes infrastructure. This is due to the nature of Kubernetes: the pods don't have NFS mounts, and because of that, all the data is stored locally on the nodes.
So the process of getting and storing data during the job that Rails runs just shaves a few milliseconds off each network call that would need to be made, because we're not making network calls; instead, they're just storage calls to the underlying VM, instead of an NFS call to a different server to get the data. Aside from that, I don't see any change in the nature of how Sidekiq export is going to operate, unless there's a catastrophic problem, and then we would roll back.
All images that we rely on for the GitLab product will be pulled from our dev instance. Okay, so we have succeeded in the deployment; Helm is just doing its cleanup.
That's interesting! That's a lot more than I'm seeing. I've noticed that Sidekiq, or excuse me, Kibana, is a little awkward when it comes to stuff like this. I'm not really sure how to explain why you would sometimes see certain things and sometimes not; I've had to re-search, like search again multiple times sometimes, to find the message that should be here. So I'll just add this to a panel. I just want to look at the logs real quick.
And then we can sort that... okay, so yeah, that was our old pod that shut down. We have our new pods, and they are watching project_export, so they're definitely processing work. And I bet, if we scroll down far enough, let me just hit refresh, we should be able to see a "hey, I'm pulling work" message or something. I don't see that.
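When Kibana is being awkward like this, a rough fallback is to read the pod logs directly from the console server. This is a hedged sketch; the namespace and label are assumptions.

```sh
# Grep recent Sidekiq pod logs for mentions of the project_export queue.
kubectl -n gitlab logs -l app=sidekiq --tail=100 | grep project_export
```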