From YouTube: 2020-07-24 Creating a change issue
Description
Craig M. is running through the creation of a change issue.
Hi, I'm going to be recording creating a new change control. This is just part of the closing deck between dibs and zary's, moving the line with the scalability team.
It's gonna be stream of consciousness, not well edited. I will see where this goes. I may delete this and try again, but we'll see what happens, and hopefully something of this becomes useful and shows you just a little bit about what I'm thinking when I'm creating these sorts of issues, what I need to think about, etc.
Sharing my screen, so we start from the production tracker. This is where we create the change issues, not necessarily change controls in the strict sense. The way we're using them, sometimes it's just for recording the fact that something's happened; other times it's to ensure that we think through all the issues that we need to think through: how we're going to do this, how we're going to monitor it, have we automated it, and so on and so forth.
So I'm going to create a new issue. This is for staging, so it's pretty simple. A good title is obviously pretty important.
That's the word. So we've got a fairly well described set of criticalities here. C1s are rare, very, very rare: high impact, high risk. You know, downtime will always be a C1; otherwise there's a few other ones, failover, a few things that are really, really scary, that we're just not sure about.
C2s: yeah, no downtime, but it's a bit risky; if something unexpected happens during it, it could be bad. That's the sort of thing, I think. C3s: yeah, not really expecting anything, but we're going to do a little bit by hand. And C4s: stuff we've done a hundred times before.
C4s are very much the recording, just so that someone knows it happened and you have something to do it against. I think for this one here: it's staging, which definitely reduces the risk, or the criticality. We can do this without any downtime, so it's down to a C2 or C3; obviously very, very low criticality for this one here, and the negative impact is pretty minimal.
We have done this before, a similar basic process, which I will grab from another one. It seems like it's cheating to go and copy change controls from somewhere else, but I think it's valid, because it's meant to be not a collection of tools but a collection of well defined processes that we've done, and that's exactly what we should be doing. We should be reusing these things, cutting and pasting, yeah.
If we do something 10 times, sure, we'll write it up and automate it to the nth degree, but reusing the change control and adjusting it slightly is perfect, exactly what we should be doing. So, again, we need to do that. Often I just copy the title straight into the change objective, because the title really should describe it reasonably simply for a lot of these configuration changes.
And I think that, if I just have a quick look, what's worth noting is... let's bring this across. Is this the note? That's not the one for these; this one here.
This is the Chef change, and one of the handy things is that the Danger bot will tell you which nodes will be affected. Because we've changed that on the base role, it applies to the whole of staging; this one is basically every node in staging, but we're really only caring about the ones that have Rails on them.
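If Danger weren't around, you could get a similar list straight from Chef. This is only a sketch: the environment and role names (gstg, gstg-base) are placeholders for whatever the merge request actually touches, not taken from the video.

    # List the staging nodes that carry the modified role, so the affected
    # machine classes (web / api / git / sidekiq) stand out.
    # 'gstg' and 'gstg-base' are illustrative names, not the real ones.
    knife search node 'chef_environment:gstg AND roles:gstg-base' -a fqdn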
So I'm looking for: APIs, yes. Bastions, done. Blackbox, no. No, no, no. Front ends, those are the HAProxy nodes; don't care about those, they just get serviced. The git nodes do the git-over-SSH and git-over-HTTP, and they do run Rails as well, so those would be relevant. Registry just doesn't, PgBouncer doesn't, and neither do the queue or praefect prefixes.
Sidekiq, git and api, so there's four classes of machines that we need to think about. Pop that back over there. That Danger bot is really, really handy for finding that, so I've listed those: I've got sidekiq, web, api and git, and it tangentially affects Postgres. I don't think we have a PgBouncer... we do.
The 27th, and just for the record, 1:30 is about my favorite time for doing these things: I've come back from lunch. We have a change lock that goes over the weekend; we want to avoid making changes during the weekend unless they're absolutely necessary. Not that it's inherently more dangerous, there's just less people around. So that actually goes up to midnight UTC on Sunday night / Monday morning, which is my noon.
Usually one o'clock; during my daylight savings it can be a little bit annoying, but the logic is that Monday morning, when I'm here by myself and everyone else is still on their Sunday (you know, maybe Graham in Australia is waking up in the mid morning), is not the best time for it.
For someone to be making production changes, maybe they're safe, maybe they're not; it's just nicer to hold back from that a little bit. So we have the change lock, and that'll actually prevent certain MRs from being merged and checked, and all the pipelines from running: you know, the Chef and Terraform related ones.
So if it's 1:30 there's some time to have lunch, come back from lunch, woken up and ready, caffeinated as necessary. So originally we put staging and production in the same step, but we'll do staging (this one is just staging) one day and then, I think, production the next. So we'll do that, yeah. So, because we are changing this, we want to stop Chef first, because we want to do this as a controlled rollout.
Now, Chef runs every 30 minutes. It takes a couple of minutes to run; when it completes, it schedules itself for 30 minutes later, so it's 30 more minutes, and it skews over time, splays over time, across the fleet. That means we don't have precise control, if the Chef daemon is running, as to when it will actually run on any given node. So for a lot of these changes, when Chef runs it will deploy the change to gitlab.rb, which will then trigger an omnibus reconfigure and then restart Rails.
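A rough sketch of what "stop Chef first" can look like from the command line, assuming chef-client runs as a systemd service and using placeholder role names; the real targets are whatever the Danger output listed above.

    # Pause the Chef daemon on the affected staging classes before merging,
    # so the gitlab.rb change doesn't land on the normal 30-minute cycle.
    # Role and environment names are illustrative, not the real ones.
    knife ssh 'chef_environment:gstg AND (roles:gstg-base-fe-web OR roles:gstg-base-fe-api OR roles:gstg-base-fe-git OR roles:gstg-base-be-sidekiq)' \
      'sudo systemctl stop chef-client'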
And that correlates with what we're expecting? No, we still haven't found those. Why not? Stage base... oops, quotes.
I won't show you here exactly, but in the chef repo, when you make changes to roles, the staging ones (also the non-production ones) apply immediately when you merge; the production ones have an additional manual step. So you can merge the thing now. We've left this MR in work-in-progress state so that someone doesn't come along and merge it without actually thinking, you know, without doing it properly, and I'm going to add a note saying it's work-in-progress because it requires a managed rollout. Not that anyone would come along and merge someone else's merge request like that randomly.
But yeah, accidents happen. For someone to do that, they'd have to click through several buttons to bypass it. So, the next thing.
We want to run Chef on the sidekiq nodes; there's only one of them. Then there are the three APIs, three gits and three webs. So what we'll say is... actually, we might just do that one first, because that's nice. Yes, it could go down, but staging's not the end of the world, and we can just verify.
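Doing the single sidekiq node first might look something like this; again a sketch with assumed role and environment names, not the exact commands from the video.

    # Apply the change to the lone staging sidekiq node and eyeball the result
    # before touching the web / api / git fleets.
    knife ssh 'chef_environment:gstg AND roles:gstg-base-be-sidekiq' 'sudo chef-client'
    knife ssh 'chef_environment:gstg AND roles:gstg-base-be-sidekiq' 'sudo gitlab-ctl status'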
So knife ssh will go around this thing, but with -C 1 it will only run it concurrently on one node at a time. So with three nodes, one at a time is reasonable: yeah, one, two, three. For production, yeah, with 12 nodes, 18 nodes, on the web fleet... I think it's 12 at the moment.
Yeah, we'll probably do two or three at a time, especially given that it's 1:30 in the morning UTC, which is the real quiet time; yeah, we could run on half the capacity if we had to, and that just keeps it ticking along without taking too long. I mean, each of these Chef runs takes, you know, two or three minutes, plus restart time for Rails to restart. You don't want to be sitting there for an hour watching the stuff, watching paint dry. So, the same for...
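The concurrency being described is knife ssh's -C flag. A minimal sketch with placeholder names: one node at a time on the small staging fleets, two or three at a time on the larger production web fleet so most of the capacity stays in service while each node reconfigures and restarts Rails.

    # Staging: three web nodes, strictly one at a time.
    knife ssh 'chef_environment:gstg AND roles:gstg-base-fe-web' 'sudo chef-client' -C 1

    # Production: roughly 12 web nodes; three at a time keeps it ticking along
    # without sitting there for an hour. Names are illustrative.
    knife ssh 'chef_environment:gprd AND roles:gprd-base-fe-web' 'sudo chef-client' -C 3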
Yeah, and now one of the important things when creating a change control is to think about how you're going to roll this back. Now, a lot of them are trivial to roll back: you revert them and do the same thing again. Some of them are not. But even the act of stating "we're just going to revert the change in the MR and roll back" is good, because then you've thought about it.
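For a change like this one, "revert the MR" amounts to something like the following, sketched with a placeholder branch name and commit SHA (GitLab's Revert button on the merged MR does the same thing):

    # Revert the merged role change and push a branch for a rollback MR,
    # then repeat the same controlled rollout (stop chef, merge, knife ssh).
    git checkout -b revert-staging-role-change        # branch name is illustrative
    git revert -m 1 <merge-commit-sha>                # SHA of the original merge commit
    git push -u origin revert-staging-role-change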
Oh, there we go. Thank you... oh, indeed. And we can leave seafood and cpu around there; it doesn't matter terribly much, it's not in VMs anymore. We've already got a catch-all. Basically, what I'll be doing here is looking to make sure that something is still occurring and that they're, yeah, starting, done, they're not failed.
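The video is using dashboards for this check; as a hedged CLI cross-check, knife can also report when each node last completed a run (the environment name is again an assumption).

    # Show the last Chef check-in per staging node; a node whose check-in time
    # stops advancing after chef-client is re-enabled deserves a closer look.
    knife status 'chef_environment:gstg'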
Let's see if this still works: rails database connections... there we go, and that is like a catch-all. Cool, so we can see where we've got this. So what I think we will be expecting to see is that we are bumping that number a little bit, I think, so we're expecting to see this effective percentage drop a little bit and reduce saturation.
So that's what we're saying, and then what we will say is...
Grab some graphs, perhaps; the easy one, I think. There we go, just pop that in there.
Also, we heavily use these checkboxes, so we can check them off as we go. We don't always get it at exactly the right time, but it can be quite handy, especially because you actually do get a log in the issue as to when it was clicked. So let's get that web and git... let's just check that.
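In a GitLab issue description those checkboxes are just Markdown task-list items, and ticking one is recorded in the issue activity with who clicked it and when. A condensed example of the kind of list being built here (steps paraphrased from this walkthrough, not copied from the actual issue):

    - [ ] Announce the change and confirm there are no active incidents
    - [ ] Stop chef-client on the affected staging nodes
    - [ ] Merge the role MR
    - [ ] Run chef-client: sidekiq first, then web / api / git (knife ssh -C 1)
    - [ ] Check dashboards: Chef runs clean, Rails restarted, error rates normal
    - [ ] Re-enable chef-client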
All right, let's go with that. Actually, so I guess it's actually status that we care about. There we go, so we add status.
Okay, this might actually be hard to... I guess we're looking for normal-looking traffic. What I might do is just drop out a few obvious ones.
That's part of the background noise, so we'll just accept that; there's that. What we're saying is, and what I'm trying to do here, is get it to a point where I could theoretically hand this over to someone else and they'd be able to reasonably successfully navigate this, run it through and do it themselves, yeah. You can't always expect every person on the team to have exactly the same capabilities or knowledge of the system that you're working on; in this case, you know, we're just making a fairly straightforward change to the whole thing, yeah.
Everyone in the SRE team should vaguely know how Rails works, so they really should be able to run this through, look at that and say: hey, yeah, we're seeing about 80 errors, or whatever, just errors, every 30 seconds; that's normal, that's fine, we'll run with that. We'll look at this one, we'll look at that one, whatever. Cool, so I've actually described the key metrics.
Staging: I'd still probably let the on-call know. And again, staging isn't super critical, but people do rely on it; you know, QA relies on it, releases rely on it. We should let people know that we're doing something a little bit unusual here, and we check that there's no active incidents, otherwise we can confuse matters immensely. Thankfully, when there are warnings they need to be pretty...
I think that's about it, though, so I'm going to assign that to myself. No other details needed here at the moment. There's a bunch of labels in there that will be created.