From YouTube: 2020-04-17 Sidekiq migration
Okay, so let's make a hot patch on staging. If you've never done this before, the first thing you need to do when you create a hot patch is figure out what's running on staging.
The way we do this is that we go to Slack and run the auto-deploy status command. Let me just see if there are any deployments in progress. I'll just do it again, just in case; I think it hasn't changed since the last time this was done. I know you can't see my screen, but I'm...
And we'll work on my branch here. In the patcher there's a patches directory, which has a bunch of directories that map to the package that's installed on the environment; this corresponds to the omnibus package. The way that we have patching working for Kubernetes is that it will work for both: we basically create a patch as if we're going to be patching VMs, and it's also going to go to Kubernetes to get the package. I also looked at chatops, so I have it here.
So here's our patch. The way that post-deployment patching works is that you can stick multiple patches in, and it'll just apply them one after the other, sorted alphabetically. So if you're patching different files, this works fine; if you're patching the same file, you need patches that can be applied on top of patches. This whole process is, I would say, super hacky, but it's what we do. So this is what we have: I've created the patch.
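To make that ordering behavior concrete, here is a minimal sketch of the apply loop; the function name, the `*.patch` naming, and the `patch` invocation are assumptions for illustration, not the patcher's actual code.

```python
import subprocess
from pathlib import Path

def apply_patches(patch_dir: Path, src_root: Path) -> None:
    """Apply every *.patch file in alphabetical order, mirroring how
    the post-deployment patcher stacks patches one after the other."""
    for patch_file in sorted(patch_dir.glob("*.patch")):
        # Later patches must apply cleanly on top of earlier ones
        # whenever they touch the same file.
        subprocess.run(
            ["patch", "-p1", "-d", str(src_root), "-i", str(patch_file)],
            check=True,
        )
```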
So now we have the pipeline. You can see that if you create a branch on the patcher, we basically dry-run the patches on the fleet. In the case of Kubernetes, what this is going to do, hopefully, is trigger a dry run, the pipeline that runs in check mode. This already completed, so that's good. Aha, okay, so it looks like the first thing it did is go to the Kubernetes cluster and look for what's running. It found this tag, but we don't have a directory for this tag.
Maybe this is just a formatting difference between the Kubernetes tag and the VM package, so I think I'm going to have to do a little bit more fuzzy matching here to get it right. It was looking for a directory with this specific name, and that doesn't exist, because the directory we created...
But you can see what's running in the Kubernetes cluster is this tag, and this is the VM package. The difference here is that the VM package has the EE sha here and then the omnibus sha here, but we don't care about the omnibus sha since it's Kubernetes. So I think I'm going to have to improve this fuzzy matching. In the meantime, since it's looking for a directory with this name, what we can do is just create it, and then we can see the patch work.
What we could potentially do here is make it so that in the patches directory you just create a directory named after the sha; that way you'll match both the VMs and Kubernetes by sha. So now it's working a bit better. The first thing it does is go out to Kubernetes to get the version, and this is the version it got; then it says it found a directory here, which is the directory that we just created. Okay, so it logs into the dev registry.
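A minimal sketch of that sha-based matching, assuming hypothetical version formats (the real tag and package formats aren't shown here): both the Kubernetes image tag and the VM omnibus package version are reduced to the embedded commit sha, which names the patches directory.

```python
import re

def patch_dir_for(version: str) -> str:
    """Reduce a Kubernetes image tag or an omnibus VM package version
    to the embedded sha, so one patches/<sha> directory matches both.

    Example inputs (formats assumed for illustration):
      VM package:  '13.0.0+rnightly.169516.d300a0e9-ee.0'
      k8s tag:     'master-d300a0e9'
    """
    match = re.search(r"\b[0-9a-f]{8,40}\b", version)
    if not match:
        raise ValueError(f"no sha found in {version!r}")
    return match.group(0)[:8]  # normalize to the short sha
```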
This is a dry run, and what it does is actually create this Dockerfile here from the image that's running in the Kubernetes cluster. It copies the patch file to a temp directory, a working directory, and then it applies the patch to the image and builds it again. So we're basically taking the same patch file that we slam onto the VMs and, you know, creating a patched image. So the first thing it does is pull the image from dev.
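Roughly, the dry run's image build could be sketched like this; the '-patched' tag suffix, the '/srv/gitlab' install path, and the generated Dockerfile contents are assumptions for illustration, not the tooling's actual code.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def build_patched_image(running_image: str, patch_file: Path) -> str:
    """Rebuild the image currently running in the cluster with the same
    patch file we would apply to the VMs layered on top."""
    patched_tag = f"{running_image}-patched"  # suffix assumed
    with tempfile.TemporaryDirectory() as workdir:
        work = Path(workdir)
        shutil.copy(patch_file, work / patch_file.name)
        # Generate a Dockerfile FROM the deployed image, copy the patch
        # into it, and apply the patch (install path assumed).
        (work / "Dockerfile").write_text(
            f"FROM {running_image}\n"
            f"COPY {patch_file.name} /tmp/{patch_file.name}\n"
            f"RUN patch -p1 -d /srv/gitlab -i /tmp/{patch_file.name}\n"
        )
        subprocess.run(
            ["docker", "build", "-t", patched_tag, str(work)],
            check=True,
        )
    return patched_tag
```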
So actually, what I do is chop off this patched suffix when it goes and fetches the image, so that if we run this again from the patcher, it's still going to look for a directory that has this tag name and it will just reapply the same patches, I think. The next thing we can do is actually apply the patch; this was just a dry run.
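That tag handling might look something like the following sketch; the exact suffix string is an assumption.

```python
def base_tag(tag: str) -> str:
    """Strip the patched marker (suffix name assumed) so a re-run of
    the patcher resolves the same patches directory and simply
    reapplies the same patches on top of the original image."""
    return tag.removesuffix("-patched")
```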
So if we go back to the pipeline: the way the patcher works is that you stage the patch on the branch, and it shows you a dry run so you can make sure that the patch applies successfully on staging; then usually you just merge it, and it gets applied to staging and production. You also have these manual jobs that allow you to apply it on staging
if you want to, from the branch. We don't normally do this, but sometimes it's necessary just to see how it will apply before you merge. So we have a manual job for Kubernetes here, just like the other ones. If I play this, it's going to do the same thing again, but this time it's going to trigger a regular Kubernetes deploy, so it'll actually deploy the patch.
As it stands, this process makes the most sense because it's the least amount of change required for people to learn how to do patches if needed in the future. It's the same process; we're just adding a different service to it, which is perfectly fine. I don't know how else we would do this: with your ideal process we would have to wait for things to build, and that's excruciating, and we can't wait for that in times of an incident.
I think it's interesting. I think what changed is that we now have the continuous deploy pipeline, and it's well understood that if we patch production, that pipeline is halted. So there's a lot of incentive not to patch but to, you know, fix it and deploy normally, or, instead of patching, maybe do some other infrastructure workaround, like HAProxy blocks for security issues, things like that. I think before auto-deploy, before we had this continuous deploy pipeline,
we were sitting on patch releases in production for weeks, and we would just patch as the quick thing, because there was so much work involved in deploying. I mean, it still takes a while now, but it's a lot easier just to get changes to production, so we do that more often now, I think.
So, on what was noted about actually creating an image: the logical thing you would want to see, now that we actually deploy from a Docker image, is that someone creates a Docker image real quick, pushes it, and then we use that to go and do things. The problem is that the pipeline that creates these images has stages that are very much tied together. It has six stages.
If you go to the component images or the charts, it has six stages, and each of them takes some time. By the time you actually get to the point where it builds the image, which is what takes the longest, you are already able to patch. So, we are talking here, jarv, correct me if I'm wrong, but I think the last time I checked it was something like three times slower to actually create an image
than to patch manually like this. So do you want to, in the middle of an incident, wait for a pipeline to build images for 45 minutes? I don't know; when I was on call, I didn't want to wait that long, so I'm guessing that you all don't want that either. So this is the biggest shortcut that we can take.
So, related to what I just said, two things came to my mind. One is: is the idea that, long term, we will bring down the time it takes to build the image, or are we happy with this process? And the second one is: if we're patching, is there any sort of testing conducted? I'm not talking about, you know, a full QA pass or an integration test or anything, but is there any form of validation that the deployed patch actually works? Because, you know, you can actually make things worse, I think.
I guess the thing that we do is run GitLab QA on staging before we move to production, so we have that. Other than that, we have the liveness and readiness checks, right, and those basically just tell you that things are functioning at a minimum level. Those are usually the only two checks that we have for post-deployment patches; we don't typically want to wait for a full battery of RSpec tests to pass, so normally we don't.
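For context on those checks: GitLab exposes health-check endpoints that the Kubernetes probes hit. A minimal sketch of what such probe settings could look like is below; the paths match GitLab's documented health endpoints, but the port and the timing values are assumptions.

```python
# Sketch of the two probe types mentioned above (values illustrative).
# Readiness gates traffic to a pod; liveness restarts a wedged container.
probes = {
    "readinessProbe": {
        "httpGet": {"path": "/-/readiness", "port": 8080},  # port assumed
        "initialDelaySeconds": 10,
        "periodSeconds": 5,
    },
    "livenessProbe": {
        "httpGet": {"path": "/-/liveness", "port": 8080},
        "initialDelaySeconds": 20,
        "periodSeconds": 10,
    },
}
```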
We have like a 20 percent surge that we allow for. We actually have an issue, Skarbek and I were talking about this earlier, to make this bigger for Sidekiq; I'd like to surge more so that we can speed up the deploys. But yeah, we have like a 25% surge on deployments, and I mean we wouldn't lose any capacity, and we also have the PodDisruptionBudget as well. That's on the Kubernetes side; the equivalent on the VMs is like the HAProxy checks.
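A minimal sketch of the two mechanisms just mentioned, the rolling-update surge and the PodDisruptionBudget, expressed as the corresponding Kubernetes spec fields; the values and the label selector are assumptions, not the production settings.

```python
# Rolling-update strategy: allow extra pods above the desired count
# during a deploy (the "surge" discussed above) without dropping below it.
deployment_strategy = {
    "type": "RollingUpdate",
    "rollingUpdate": {
        "maxSurge": "25%",    # extra capacity during the rollout
        "maxUnavailable": 0,  # never fall below desired capacity
    },
}

# PodDisruptionBudget: bounds voluntary disruptions (e.g. node drains).
pod_disruption_budget_spec = {
    "minAvailable": "80%",  # threshold assumed
    "selector": {"matchLabels": {"app": "sidekiq"}},  # label assumed
}
```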
The first question was whether we are planning to make a change on this, to make image building faster. It's a tricky one, mostly because whenever we apply something temporary like this, it becomes permanent, and the next set of super urgent priorities that needed to land yesterday take over, you know. So ultimately, I think it's going to happen the same way it always does:
if it's fine, you don't take any additional action and you go on optimizing tooling in other places; but when it starts sliding, like it has a couple of times with the package build times, we go to the Distribution team and say something is wrong, like we are now taking an hour to build a package, right? So it's trade-offs, like everything in life; it's trade-offs here as well. So, while the image building is very complicated right now, and janky, and I don't think anyone in Distribution likes it,
no one has actually had the time to look into it, because that's considered technical debt that doesn't bring any additional value, right? So until we get to the point where the deployments are going to be slow regardless of whether they're patched or not, I don't see this becoming a priority. We have some basic things missing from the charts and from the images right now that we are discovering as we migrate, so I think those take higher priority.
It's a trade-off, right? Even when we built the original pipeline back in, what was it, 2017 I think, the CNG pipeline, I don't think anyone on the team liked how it was done either, but it was the quickest way to get us to the point where we now have images that we can leverage. So it's always that trade-off: you keep pushing forward and forward until it starts being more expensive, like when it takes too long to build something and that affects your availability or the productivity of larger parts of the company.